
WO2025235677A1 - Stereo tomography of the anterior eye - Google Patents

Stereo tomography of the anterior eye

Info

Publication number
WO2025235677A1
Authority
WO
WIPO (PCT)
Prior art keywords
eye
images
imaging
slit
illumination
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/028233
Other languages
French (fr)
Inventor
David Huang
Siyu CHEN
Alfonso JIMENEZ-VILLAR
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Oregon Health and Science University
Original Assignee
Oregon Health and Science University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oregon Health and Science University filed Critical Oregon Health and Science University
Publication of WO2025235677A1 publication Critical patent/WO2025235677A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/13: Ophthalmic microscopes
    • A61B 3/135: Slit-lamp microscopes
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/102: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for optical coherence tomography [OCT]
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/103: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining refraction, e.g. refractometers, skiascopes
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/113: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining or recording eye movement
    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 3/00: Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B 3/10: Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B 3/14: Arrangements specially adapted for eye photography
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00: Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B 27/32: Fiducial marks and measuring scales within the optical system
    • G02B 27/34: Fiducial marks and measuring scales within the optical system, illuminated
    • G: PHYSICS
    • G02: OPTICS
    • G02B: OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 30/00: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G02B 30/20: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
    • G02B 30/22: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes of the stereoscopic type

Definitions

  • the present disclosure describes an apparatus and method that combines stereo vision with optical sectioning (i.e., patterned illumination) for visualizing, recording and reviewing images of the entire anterior eye in 3D spatial contexts.
  • the solutions herein combine telecentric, "on-axis", fast-updating patterned illumination and "off-axis" stereo vision cameras. It is also useful to have one "on-axis" camera for some visualization formats such as dark field.
  • the setup enables a large field of view, as well as recording of cross-sectional (i.e., tomographic) and volumetric 3D representations of the entire anterior eye with minimal obstruction.
  • a mathematical model calculates the spatial coordinates (x, y, z) of each voxel. The high resolution and precise spatial coordinate calculation improve visualization quality and enable accurate cornea, iris, lens, and scleral biometry.
  • a multi-focal configuration can be implemented where various cameras target different structures, and each camera has an optimized depth imaging range and magnification (i.e., optical resolution). Therefore, the multi-focal imaging system captures sharp images of the cornea layers, crystalline lens, and anything in between, while also recording cellular details such as endothelial cells and corneal nerves.
  • a spatial light modulator, together with highly efficient semiconductor light sources, enables precise control of the illumination (e.g., the width and location of the slit illumination), exposure time, and color (i.e., light wavelength).
  • the apparatus facilitates a variety of imaging modalities, such as flood illumination, optical sectioning, confocal illumination/detection, specular microscopy, and hyperspectral imaging/spectroscopic analysis.
  • the solutions herein allow a comprehensive examination of the anterior eye using one compact and easy-to-operate instrument.
  • the whole anterior eye volume can be imaged in an automated fashion with no or minimal operator input.
  • the recorded 3D volumetric dataset can be easily viewed as a fly-through movie to appreciate all imaged components or zoomed-in for specific features of interest.
  • the measured surfaces of the eye and their visualization in the space coordinates can provide accurate information about the location and seriousness of a disease or injury.
  • Precise spatial calibration improves ocular biometry accuracy, including cornea surface curvature, corneal thickness, anterior chamber depth and volume, and anterior chamber angle.
  • the image storage is inherently digital, allowing documentation, computer-assisted study, and deep-learning analysis.
  • Other features, aspects, and aims of the disclosed invention will be described in detail in the description section. This summary only provides an overview of the technology and does not limit its scope, which the attached claims explicitly define.
  • FIG. 1A is a schematic view of an example imaging system with four cameras, including a stereo photography device in combination with a patterned illumination system, in accordance with various embodiments.
  • FIG. 1B depicts the cameras 106-109 of FIG. 1A in the transverse (x-z) plane, consistent with FIG. 1A, in accordance with various embodiments.
  • FIG. 1C depicts cameras 156, 157, 158 and 159 in a sagittal (y-z) plane, in accordance with various embodiments.
  • FIG. 2 depicts an example calibration technique to determine a triangulation function for 3D imaging, consistent with the system of FIG. 1A, in accordance with various embodiments.
  • FIG. 3 depicts an example tomographic reconstruction algorithm, consistent with the system of FIG. 1A, in accordance with various embodiments.
  • FIG. 4A is an example cross-sectional tomographic image of an anterior segment 401 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments.
  • FIG. 4B depicts an example image of a frontal chamber 402 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments.
  • FIG. 5A depicts a top view of an example optical setup of another stereo slit-scanning tomographer, in accordance with various embodiments.
  • FIG. 5B depicts a side view of the example optical setup of FIG. 5A, in accordance with various embodiments.
  • FIG. 6A depicts an ex-vivo porcine cornea imaged by a microscope of magnification 10x, in accordance with various embodiments.
  • FIG. 6B depicts an image of a partial thickness corneal laceration, in accordance with various embodiments.
  • FIG. 6C depicts an image of a metallic particle on the epithelium of the cornea, on both sides of the corneal laceration of FIG. 6B, in accordance with various embodiments.
  • FIG. 6D depicts an image of a punctate corneal abrasion due to cornea dehydration, in accordance with various embodiments.
  • FIG. 6E depicts an image of Descemet’s membrane detachment, in accordance with various embodiments.
  • FIG. 6F depicts an image of damage of the lens capsule, in accordance with various embodiments.
  • FIG. 7 depicts an example of the triangulation principle to map camera image pixels into spatial coordinates, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments.
  • FIG. 8 depicts an example calibration technique to determine a stereo triangulation function for 3D imaging, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments.
  • FIGs. 9A1, 9B1 and 9C1 depict example elevation maps of reference spheres with radii of 9.65 mm, 8.00 mm and 6.15 mm, respectively, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments.
  • FIGs. 9A2, 9B2 and 9C2 depict example plots of radius of curvature vs. horizontal displacement, consistent with the reference spheres of FIGs. 9A1, 9B1 and 9C1, respectively, for different axial and horizontal positions, in accordance with various embodiments.
  • FIG. 10A depicts an exemplary large-scale volumetric reconstruction of the anterior segment of the human eye in-vivo, in accordance with various embodiments.
  • FIG. 10B depicts an example 3D rendering of the human eye of FIG. 10A, in the sagittal plane, in accordance with various embodiments.
  • FIG. 11A depicts an example cross-sectional image of a fluid-filled sac or iris cyst located on the iris surface, in accordance with various embodiments.
  • FIG. 11B depicts an example cross-sectional image of main layers of the corneal structure, in accordance with various embodiments.
  • FIG. 11C depicts a close-up view of the apex region 1150 of the cornea of FIG. 11B, in accordance with various embodiments.
  • FIGs. 12A1, 12B1 and 12C1 depict Bland-Altman plots for corneal thickness, anterior chamber depth, and radius of curvature, respectively, showing repeatability of the stereo slit-scanning device using two consecutive measurements, in accordance with various embodiments.
  • FIGs. 12A2, 12B2 and 12C2 depict Bland-Altman plots for corneal thickness, anterior chamber depth, and radius of curvature, respectively, showing an agreement analysis between the Pentacam®HR and the proof-of-concept prototype, in accordance with various embodiments.
  • the optics of the human eye can be approximated as two converging lenses (the cornea and the crystalline lens), which project images of the environment onto the retina.
  • the cornea is responsible for about 2/3 of the total refractive power.
  • the crystalline lens is an active optical element, able to change its shape and thickness to adjust its refractive power for near-or-distant vision.
  • Good visual acuity is only achieved if the images formed on the retina are sufficiently focused and without significant light blockage or aberrations in the transparent structures and ocular media. Additionally, supportive structures (e.g., the iris and the sclera) and ocular adnexa (e.g., the eyelids) are also important for the health of the eye.
  • the anterior segment can be affected by various eye diseases such as opacities (e.g., corneal scarring, cataracts), shape distortion and thickness variation (e.g., corneal ectasia and edema), infections (e.g., bacterial, viral, or parasitic), or traumatic injuries (mechanical, chemical or thermal). Achieving accurate diagnosis is important for providing patients with the appropriate care.
  • High-resolution imaging of the anterior segment is essential for the correct diagnosis of injuries and diseases. Examples include determining wound depth in corneal abrasions and lacerations, monitoring endothelial cell loss in Fuchs dystrophy, measuring iris elevations caused by a ciliary body cyst, or differentiating nuclear cataracts from capsular subtypes. Likewise, quantitative measurements of the corneal and crystalline lens surfaces are important for numerous ophthalmic procedures, such as refractive corneal surgery, contact lens fitting, or choosing the intraocular lens (IOL) for cataract surgery.
  • Slit-lamp examination is currently the standard diagnostic procedure for the anterior eye in clinical practice.
  • This optical instrument involves two principal components: an illuminator that projects an adjustable slit-shaped beam and a biomicroscope to visualize anatomical features. Both parts are able to rotate independently about one common vertical axis to perform different illumination techniques, such as flood illumination (bright field), sclerotic scatter or specular reflection.
  • the most distinctive imaging mode of the slit-lamp is based on the principle of “optical sectioning”. This configuration makes a virtual sectioning of the anterior eye segment by projecting a narrow "sheet of light" onto the anterior eye. When this virtual sectioning is viewed at an angle through the biomicroscope, the different layers of the cornea and lens are appreciated, generating a cross-sectional view.
  • While slit lamps are fundamental tools for assessing anterior eye diseases and injuries, the examination is inherently subjective and qualitative. This limitation can lead to poor agreement among different physicians, e.g., high inter-grader variation. Likewise, the performance of the slit lamp depends directly on the skill of the operator and the selected magnification of the biomicroscope (e.g., large field of view versus high magnification/resolution). While the images can be recorded using a camera attachment, the lack of standardization leads to significant variations. It is also challenging, if not impossible, to record multiple imaging locations or views, which can potentially lead to missed pathology or incorrect diagnosis.
  • slit-scanning imaging uses optical sectioning to generate depth-resolved cross-sectional views.
  • the first commercial device that implemented this technology was the Orbscan™ (Bausch & Lomb Incorporated, Rochester, NY, USA). This optical system projects two narrow vertical slits (one nasal, one temporal) onto the cornea at an angle of 45 degrees from the visual axis.
  • a motorized translation stage displaces the slits horizontally to record forty images (20 nasal, 20 temporal).
  • the slow mechanical scanning extends imaging time and limits the number of total projected slits. Therefore, the low sampling density of the Orbscan™ precludes acquiring data covering the entire anterior eye, which limits the measurement accuracy and can miss areas of pathology. Because the illumination is off-axis, obstruction of the illumination can occur on the contralateral side, further reducing imaging efficiency.
  • a competing slit-scanning configuration uses Scheimpflug cameras to match the focal plane of the imaging lens with the illuminated slit.
  • Scheimpflug cameras extend the depth field of view and allow reliable imaging of the cornea and lens simultaneously.
  • This imaging capability requires a rotational slit-scanning arm to maintain the strict geometric relationship between the illumination plane and the camera position.
  • the most common Scheimpflug device is the Pentacam®HR (Oculus Inc., Wetzlar, Germany), which takes 50 frames in 2 seconds at equally spaced meridians.
  • the mechanical translation of the slit around the corneal apex limits its imaging speed and sampling density. This poor scanning density results in a lack of coverage at the periphery, e.g., peripheral cornea and sclera.
  • the requirement for bulk mechanical rotation also introduces instabilities that may interfere with measurement accuracy.
  • anterior segment optical coherence tomography (AS-OCT) uses low-coherence interferometry to resolve depth information and to visualize anatomical features of the anterior segment in 3D.
  • the solutions herein include a 3D imaging platform that integrates computer stereo vision methods and slit-scanning imaging to achieve a 3D visualization of the anterior segment.
  • Computer stereo vision, a method for depth estimation based on triangulation algorithms, has been widely applied to ophthalmic imaging. These applications include corneal topography, instrument tracking in surgical maneuvers, volumetric reconstruction of the optic nerve head, and stereoscopic 3D slit-lamp photography.
  • Our developed optical system can image the entire anterior segment, with high sampling density, a feature not available with competing technologies. Additionally, we present a calibration method to perform triangulation and display 3D-rendered images of the eye.
  • This volumetric visualization of the anterior structures provides valuable diagnostic information including the location, extent, and morphological features of the lesion or disorder in a manner similar to AS-OCTs, allowing the physician to assess their severity and urgency.
  • the system can extract quantitative measures including the central corneal thickness, anterior chamber depth, and radius of curvature of the anterior cornea.
  • the solutions provide a new stereo-imaging device that can record 2D cross-sectional and 3D volumetric images of the entire anterior eye.
  • the solutions can include a non-invasive and non-contact optical imaging modality that records photographs of the anterior eye and allows visualization and measurement of features, including geometric shapes, surface and internal defects, and tissue light-reflecting/light-scattering properties.
  • the solutions can involve optical sectioning using slit-scanning patterns or another structured-light approach and stereoscopic measurements of the cornea, the crystalline lens, and other anterior eye elements.
  • the solutions can involve performing tomography of the anterior segment of the eye and ocular adnexa.
  • the solutions can replace or supplement the standard slit-lamp examination with automated, fully digitized imaging of the anterior eye.
  • the recorded images can capture changes in tissue appearance (e.g., reflectivity, color, and texture) and shape (e.g., curvature and thickness).
  • the images can be digitally stored and input to computer algorithms for quantitative analysis.
  • FIG. 1A is a schematic view of an example imaging system with four cameras, including a stereo photography device in combination with a patterned illumination system, in accordance with various embodiments.
  • a feature of the solutions herein lies in a fast, programmable projection of illumination patterns, wherein at least one dimension has high spatial confinement (e.g., slit beam or light sheet or light blade, ring, and dot).
  • a condenser lens 101 or lens group concentrates the light from a high-intensity light source 100, such as a laser or light-emitting diode (LED), to illuminate a spatial light modulator 102.
  • a spatial light modulator is an active optical element that selectively reflects the light into the eye 110.
  • the spatial light modulator can be a digital micromirror device (DMD) 102.
  • a DMD is a Micro-Electro-Mechanical Systems (MEMS) device that utilizes an array of tiny, individually controlled mirrors to modulate light. These mirrors can be tilted rapidly between two positions, effectively switching light on or off, or controlling its direction.
  • Each mirror can be controlled by a corresponding memory cell, allowing for individual control of each mirror's position.
  • This capability enables the imaging software to create specific light patterns, such as slits or various designs like grids, dots, or rings, by controlling the activation of the micromirrors. Therefore, it allows high flexibility in illumination pattern design, spatial scanning density, and speed.
  • the second advantage is related to the programmable activation period of the micromirrors. Longer "on” periods imply more exposure time and light collected by the cameras. By adjusting the activation period of each mirror independently, multiple light sheets or other patterns with different exposure times can be projected simultaneously.
  • the DMD 102 switches rapidly between the "on" and "off" states, allowing the generation of hundreds of different patterns within a second. This fast repositioning of the patterns allows a shorter total imaging time, reducing motion artifacts during the measurement.
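  • As an illustrative sketch only (not part of the claimed apparatus), the slit patterns described above can be represented as binary micromirror masks. The array size, slit width in mirrors, and scan step below are assumed values chosen for demonstration; the sketch uses Python and NumPy.

```python
import numpy as np

def slit_masks(dmd_rows=1080, dmd_cols=1920, slit_width_px=4, step_px=2):
    """Generate a sequence of binary DMD patterns, each a vertical slit.

    A value of 1 marks an "on" micromirror. The physical slit width at the
    eye (e.g., 10-200 um) depends on the micromirror pitch and the relay
    magnification; the pixel values here are purely illustrative.
    """
    masks = []
    for left in range(0, dmd_cols - slit_width_px + 1, step_px):
        mask = np.zeros((dmd_rows, dmd_cols), dtype=np.uint8)
        mask[:, left:left + slit_width_px] = 1  # switch this mirror column band on
        masks.append(mask)
    return masks

patterns = slit_masks()
print(len(patterns), "slit patterns of shape", patterns[0].shape)
```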
  • a slit beam is directly projected onto the eye 110.
  • This embodiment replaces the DMD 102 with a cylindrical lens or a pair of anamorphic prisms; thus, the illuminating beam is shaped as a narrow line. See FIGs. 5A and 5B.
  • this embodiment requires mechanical scanning mirrors, such as polygonal mirrors or galvanometer-resonant scanners, to displace the slit beam along the ocular surface 111.
  • the illumination pattern is also limited to a single moving slit.
  • the slit illumination (or other programmed patterns) generated by the DMD 102 is projected onto the ocular surface 111 by an optical relay system (here shown with two positive lenses 103 and 105).
  • the optical magnification (approximately 1x to 3x, dependent on the size of the DMD 102 chip) is chosen so the image of the DMD 102 (i.e., the extent of all scanning patterns) covers the entire anterior eye (>25 mm x 25 mm field of view).
  • a beam splitter or a dichroic mirror 104 is located between the lenses 103 and 105 to merge the relay system with the optics of the frontal view camera 119 and the fixation target subsystem 120 or module.
  • In FIGs. 1B and 1C, we show an embodiment consisting of four cameras in a "toe-in" configuration.
  • Two paired cameras, 107 and 108, are located with their optical axes at a moderate offset angle (an acute angle such as about +/-30 degrees) relative to the illumination axis 125, aiming within the anterior chamber of the eye to record surfaces of the cornea and crystalline lens 113.
  • another set of paired cameras, 106 and 109, are placed with their optical axes at a larger skew angle (an acute angle such as about +/-60 degrees relative to the illumination axis 125, on opposing sides of the axis) to provide better optical sectioning performance, resolving and differentiating the corneal layers 111.
  • the two paired cameras together form a stereo camera, and each camera has a lens and image sensor. Accordingly, a first pair of cameras may be placed at a first angle relative to the illumination axis, on opposing sides of the illumination axis, and a second pair of cameras may be placed at a second angle relative to the illumination axis, on opposing sides of the illumination axis, where the first angle is different than the second angle.
  • dynamic focusing, i.e., acquiring multiple images at various focal depths and then digitally fusing the acquired volumes, can improve the depth imaging range while maintaining high optical throughput and resolution.
  • the fixation target subsystem 120 of the present embodiment is based on a Badal/Nagel optometer variation.
  • This optical accessory uses an electronic display (e.g., an LCD screen) to show a fixation target 121 (e.g., a dot, cross, bull’s eye, etc.).
  • the combination of two fixed positive lenses 105, 122, and one movable positive lens 123 can compensate for the subject’s refractive error for clear viewing.
  • the fixation target is used to help the patient focus their gaze on a specific location, stabilizing the eye so that high-quality images of the eye can be captured.
  • a large displacement of the lens 123 can stimulate the accommodative response of the crystalline lens 113, which enables the study of the lens shape changes based on the desired vergence stimuli.
  • the described optical module merges a front-view camera 119 with the disclosed device by an extra beam splitter 118.
  • the front-view camera 119 eases the alignment of the proposed system with the cornea 111, iris 112, and crystalline lens 113.
  • FIG. 1B depicts the cameras 106-109 of FIG. 1A in the transverse (x-z) plane, consistent with FIG. 1A, in accordance with various embodiments.
  • the cameras 107 and 108 have their optical axes at angles a1 and -a1, respectively, relative to the illumination axis (the z axis), and are focused at a depth represented by a focal point z1.
  • the cameras 106 and 109 have their optical axes at angles a2 and -a2, respectively, relative to the illumination axis, and are focused at a depth represented by a focal point z2.
  • the two pairs of cameras can therefore be focused at different depths of the eye, and positioned at different angles relative to the illumination axis.
  • a1 and -a1 are first acute angles and a2 and -a2 are second acute angles.
  • FIG. 1C depicts cameras 156, 157, 158 and 159 in a sagittal (y-z) plane, in accordance with various embodiments.
  • the cameras 157 and 158 have their optical axes at angles a3 and -a3, respectively, relative to the illumination axis, on opposing sides of the axis, and focused at a depth represented by a focal point z3, and the cameras 156 and 159 have their optical axes at angles a4 and -a4, respectively, relative to the illumination axis, on opposing sides of the axis, and focused at a depth represented by a focal point z4.
  • the points z1-z4 can all represent different depths, in one approach.
  • a3 and -a3 are third acute angles and a4 and -a4 are fourth acute angles.
  • more than two pairs of cameras are used in the transverse and/or sagittal planes.
  • The following are imaging methods enabled by the presented stereo tomography device. It is noted that these imaging methods are not mutually exclusive; two or more of the methods may be performed at the same time to visualize target features.
  • the instrument performs an automated slit-lamp examination.
  • the illuminating slit (with a height of >25 mm and a width of 10-200 µm) scans the entire anterior surface of the eye 111 by alternating the activated DMD 102 mirrors.
  • the spatial distribution of all slits satisfies the Nyquist sampling requirement. Therefore, to adequately cover the entire field of view of the eye 110, a total of 250 to 5,000 slit illumination patterns are required.
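  • A back-of-the-envelope check of the quoted pattern count is shown below; it assumes a 25 mm scan range and a scan step equal to half the slit width (Nyquist criterion), and is a sketch for illustration only.

```python
def n_slit_patterns(scan_range_mm=25.0, slit_width_um=100.0):
    """Slit positions needed when the scan step equals half the slit width."""
    step_mm = (slit_width_um / 1000.0) / 2.0
    return int(round(scan_range_mm / step_mm))

# 200 um slits -> 250 patterns; 10 um slits -> 5,000 patterns,
# consistent with the 250 to 5,000 range stated above.
for width_um in (200.0, 100.0, 10.0):
    print(f"{width_um:5.0f} um slit -> {n_slit_patterns(slit_width_um=width_um)} patterns")
```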
  • Multiple slits can be illuminated simultaneously, as long as the images do not overlap in the camera's view. This approach reduces the number of total required camera exposures, thus decreasing the overall imaging time.
  • the cameras 106, 107, 108, and 109 are fitted with low-magnification lenses (e.g., 0.2x-1.0x) to cover the necessary field of view (e.g., 25 mm x 25 mm).
  • the cameras with the same offset angle (e.g., cameras 107 and 108) produce stereo pairs of images, which are acquired simultaneously according to the scanning period of the DMD 102. For example, a stereo pair of images can be obtained by each pair of correspondingly positioned cameras, which can be in the sagittal plane or the transverse plane. Recording starts once the device is aligned in front of the eye 110.
  • one pair of stereo cameras can have a slight offset angle (an acute angle such as about +/-15 degrees) for imaging deeper structures such as the crystalline lens and anterior vitreous.
  • zoom lenses are desirable because they allow easy switching between the low-magnification and high-magnification configurations.
  • the field of view is centered on the central cornea 111, iris 112, and anterior chamber (up to 10 mm x 10 mm wide and 3 mm deep).
  • the optical sectioning, scanning, and recording sequence is similar to that described above.
  • Modes can also be provided for imaging of deeper structures such as the crystalline lens and anterior vitreous using cameras at a small angular offset (infrared or red light is useful to reduce pupil constriction), wide-field views of the sclera and eyelids, and evaluation of the anterior chamber angle with a narrow slit (infrared is useful to better penetrate the limbus).
  • the combination of spatially confined illumination patterns and discrete camera pixels enables digital confocal or dark-field imaging modes.
  • the activated DMD 102 micromirrors act as illumination pinholes. Depending on the number of activated mirrors, the effective pinhole diameter ranges between 10 µm and 200 µm.
  • the front-view camera 119, fitted with a high-magnification (between 4x and 20x), high-resolution (between 1 µm and 5 µm) lens, captures the image.
  • the camera sensor 119 is conjugated to the DMD 102 through relay optics, thereby establishing a unique pixel-micromirror correspondence.
  • a digital confocal image can be reconstructed.
  • a dark field image can be reconstructed.
  • this region can be offset by an amount corresponding to an effective numerical aperture from 0.01 to 0.1.
  • a full-frontal view confocal/dark-field image can be generated by scanning the illuminating and corresponding collecting camera pixels across the desired field of view (e.g., the center 10 mm x 10 mm).
  • a pinhole array, i.e., activating multiple isolated DMD 102 pinholes, can reduce total imaging time, given that the spacing is large enough to avoid crosstalk (10x-100x the corresponding pinhole diameter).
  • a slit pinhole can also be used, although the confocality is limited to a single dimension.
  • dark-field images are captured using offset camera(s) co-oriented with the slit, e.g., camera(s) in the sagittal plane when illuminating using vertical slits.
  • flood-illuminated images can be captured by activating all DMD 102 micromirrors. By subtracting the flood-illuminated bright-field image from the digitally reconstructed confocal or dark-field images, the system can further eliminate specular and stray reflections. Consequently, the use of confocal and dark-field imaging modes enhances the contrast of unstained cells, including corneal epithelium and cells in the anterior chamber of diseased or injured eyes.
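  • A minimal sketch of the digital confocal and dark-field reconstruction idea follows. It assumes one camera frame per activated pinhole and a precomputed list of conjugate pixel locations; the window size, lateral offset, and flood-subtraction weight are hypothetical values, not prescribed by the apparatus.

```python
import numpy as np

def digital_confocal_darkfield(frames, conj_px, flood=None, radius=2, offset=12):
    """Reconstruct confocal and dark-field images from pinhole-scanned frames.

    frames  : (N, H, W) array, one camera frame per activated DMD pinhole.
    conj_px : (N, 2) array of (row, col) camera pixels conjugate to each pinhole.
    flood   : optional flood-illuminated frame (all micromirrors on).
    """
    height, width = frames.shape[1:]
    confocal = np.zeros((height, width), dtype=np.float32)
    darkfield = np.zeros((height, width), dtype=np.float32)
    for frame, (r, c) in zip(frames, np.asarray(conj_px, dtype=int)):
        # Confocal: keep only light collected at the conjugate pixel region.
        confocal[r, c] = frame[r - radius:r + radius + 1,
                               c - radius:c + radius + 1].mean()
        # Dark field: collect light from a laterally offset region instead.
        darkfield[r, c] = frame[r - radius:r + radius + 1,
                                c + offset - radius:c + offset + radius + 1].mean()
    if flood is not None:
        # Subtracting a scaled flood image suppresses specular/stray reflections;
        # the 0.01 weight is an assumed value.
        confocal -= 0.01 * np.float32(flood)
        darkfield -= 0.01 * np.float32(flood)
    return confocal, darkfield
```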
  • a spectral light source 100 enables the analysis of the wavelength-dependent absorption and scattering properties of the ocular tissue, cells, or foreign bodies.
  • light sources 100 of different wavelengths are combined using either dichroic mirrors or optical fibers.
  • the selection of wavelengths includes visible red (e.g., 640 nm), green (e.g., 532 nm), and blue (e.g., 488 nm) light, as well as near-infrared (between 700 and 900 nm).
  • combining images captured under red, green, and blue illumination results in a true-color representation of the anatomical structures. Additionally, green illumination highlights blood vessels and red blood cells in the anterior chamber.
  • shorter-wavelength light, such as blue light, is more strongly scattered than longer wavelengths, making it more sensitive for detecting cataracts in the crystalline lens 113 and allowing the degree of density and opacity to be quantified.
  • infrared light experiences less scattering in ocular tissues, better penetrating the limbus and the sclera, which allows for more precise visualization and measurement of the anterior chamber angle.
  • High DMD 102 and camera 106, 107, 108, 109, and 119 speeds reduce the total imaging time, mitigating the impact of involuntary eye movements and improving yields. Additionally, the high acquisition speed of the technique enables the visualization of the ocular dynamics. High-volume imaging rates of up to 10 volumes per second are achieved by optimizing the imaging field of view and sampling density. This characteristic enables the acquisition of the dynamics of different ocular structures, including the iris 112 and the crystalline lens 113. For example, the pupil constriction of the iris 112 can be induced by displaying a bright image on the fixation target.
  • the accommodative response of the lens is stimulated by moving the lens 123 of the fixation subsystem.
  • the instrument can switch to flood illumination mode (e.g., illuminating the entire anterior eye by switching on all DMD mirrors) to reach up to 200 recordings per second.
  • Before initializing the measurement, the operator has the option to select between manual or automatic instrument alignment.
  • the apparatus switches between flood illumination, which helps to locate the eye, and patterned illumination (e.g., a single slit beam) that brings to focus the desired anatomical structure.
  • the user displaces the optical setup similarly to the operation of conventional slit-lamp biomicroscopes, using as reference the preview images from the front-view camera 119 and any of the offset cameras (106, 107, 108, and 109).
  • simple image processing algorithms, including iris detection and image correlation, allow automated positioning of the imaging interface.
  • based on the detected eye position, the controlling software sends electrical or digital commands from the computer 114 to a motorized translation stage for instrument positioning.
  • the system can continuously track the eye’s position using a feedback algorithm in a closed loop.
  • Imaging commences with one or more of the modes described previously.
  • the high imaging speed of the apparatus minimizes the effects of involuntary eye movements.
  • at a predetermined interval of time (e.g., between 0.1 and 0.5 seconds), the entire anterior eye segment is illuminated, either using flood illumination by turning on all the DMD 102 micromirrors or using a separate illuminator.
  • the captured full-field image is then used as a reference to track eye position and gaze during the measurement. If eye movements are detected because the recorded image does not correlate with the reference one, the imaging software can pause imaging and re-obtain data.
  • the movement information is recorded and used in volumetric calibration, as described later in this patent.
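  • A sketch of the image-correlation check described above, using OpenCV phase correlation to estimate the translational shift between the full-field reference frame and the current frame; the tolerance threshold is an assumed value, and other correlation measures could equally be used.

```python
import cv2
import numpy as np

def detect_eye_movement(reference, current, max_shift_px=15.0):
    """Estimate eye movement between a reference frame and the current frame.

    Returns the (dx, dy) shift in pixels and a flag indicating whether the
    shift exceeds the tolerance, in which case acquisition could be paused
    and the affected slit positions re-acquired.
    """
    ref = np.float32(reference)
    cur = np.float32(current)
    (dx, dy), _response = cv2.phaseCorrelate(ref, cur)
    moved = float(np.hypot(dx, dy)) > max_shift_px
    return (dx, dy), moved
```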
  • the imaging interface of the presented embodiment provides a fast visualization of the acquired dataset with minimal processing.
  • Each stereo camera captures a series of eye images that can either be viewed independently or combined into a mosaic of images.
  • the illuminated areas are divided and reorganized into a sequence based on their spatial location (e.g., arranged from left to right).
  • the captured sequence can be quickly displayed as a fly-through video, allowing the examiner to review and identify ocular abnormalities or injuries efficiently.
  • the high sampling density ensures full coverage of the entire ocular surface in the imaging field. Users can pause the video, zoom in on specific regions of interest, and scroll through all the captured image frames.
  • the recorded volumetric datasets can be combined to achieve a true 3D representation of the entire anterior eye.
  • for volumetric tomography reconstruction, we describe a method using mathematical triangulation to fuse images from multiple cameras and retrieve the true spatial locations of the imaged features.
  • FIG. 2 depicts an example calibration technique to determine a triangulation function for 3D imaging, consistent with the system of FIG. 1A, in accordance with various embodiments.
  • the technique includes a calibration procedure that enables the optical system to perform triangulation among the images of the different cameras.
  • this method uses a reference target (e.g., dot grid 201).
  • the target is initially located at one end (e.g., the nearest distance) of the depth of field.
  • the target can be provided on a planar surface which is used as part of the triangulation, prior to using the system for imaging of the eye.
  • the target can include a grid of dark circles on a white background, for example.
  • the illumination system projects a slit beam 202 on one dot column 201a of target 201 as a reference for finding pixel correspondences.
  • the dots are separated by distances x and y horizontally and vertically, respectively.
  • As the target 201 is displaced along the illumination axis (z), the cameras 106-109 acquire images 207, 208, 209, and 210, respectively, at different depth positions.
  • triangulation is described as a function (T) that relates the spatial coordinates (x, y, z) of the object to the corresponding pixel coordinates (un, vn) of the "n" cameras, such that (x, y, z) = T(u1, v1, ..., un, vn).
  • the calibration process involves estimating the triangulation function by solving for the spatial projection using regression analysis.
  • the centers of each circle on the target are defined with specific spatial coordinates based on the geometrical dot spacing (x, y) and the measured depth (z).
  • Image processing algorithms, such as the circle Hough transform, are used to detect the centers of the dots in images 207, 208, 209, and 210.
  • the center of the dot 203 is at a pixel coordinate (u,v) in each of the images 207-210.
  • the obtained pixel coordinates are then associated with their corresponding spatial coordinates to calculate the triangulation function.
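  • A minimal sketch of this regression step is given below. It assumes two cameras, dot centers already detected (e.g., via the circle Hough transform), and a second-order polynomial model for T; the model order and function names are illustrative choices rather than a prescribed implementation.

```python
import numpy as np

def poly_features(u1, v1, u2, v2):
    """Second-order polynomial basis in the pixel coordinates of two cameras."""
    p = np.stack([u1, v1, u2, v2], axis=1).astype(float)
    cols = [np.ones(len(p))]
    for i in range(4):
        cols.append(p[:, i])
        for j in range(i, 4):
            cols.append(p[:, i] * p[:, j])  # quadratic and cross terms
    return np.stack(cols, axis=1)

def fit_triangulation(u1, v1, u2, v2, xyz):
    """Least-squares fit of T so that xyz ~= poly_features(u1, v1, u2, v2) @ T.

    u1..v2 : detected dot-center pixel coordinates in each camera.
    xyz    : (N, 3) known spatial coordinates from the dot spacing (x, y)
             and the measured target depth (z).
    """
    A = poly_features(u1, v1, u2, v2)
    T, *_ = np.linalg.lstsq(A, np.asarray(xyz, dtype=float), rcond=None)
    return T

def apply_triangulation(T, u1, v1, u2, v2):
    """Map matched pixel coordinates to estimated (x, y, z) positions."""
    return poly_features(np.atleast_1d(u1), np.atleast_1d(v1),
                         np.atleast_1d(u2), np.atleast_1d(v2)) @ T
```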
  • FIG. 3 depicts an example tomographic reconstruction algorithm, consistent with the system of FIG. 1A, in accordance with various embodiments.
  • the algorithm is for two cameras of the optical system. For each slit-scanning period, the corresponding left-side and right-side cameras 301 and 302, respectively, capture tomographic images of the anterior segment at different viewpoints 303 and 304, respectively.
  • PL(u1,v1) and PR(u2,v2) represent pixels of the images captured by the left-side and right-side cameras, respectively.
  • an image restoration and enhancement step can be performed to remove noise and improve the contrast and sharpness of the imaged ocular structures.
  • Steps 305 and 306 accomplish the projective transformation of the cross-sectional images of each camera into corrected 3D global coordinates (images 307 and 308; IL(y,z) and IR(y,z)).
  • prior to adding both projected images 307 and 308, contrast adjustment can be performed to emphasize significant clinical details (e.g., foreign bodies, cataracts).
  • the merged frame 309 is then stacked into a 3D tomographic dataset 310 of other merged frames based on the scanning density of the measurement.
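  • A simplified sketch of the projection-and-merge step is shown below. It assumes that each camera's mapping into the global (y, z) grid can be approximated by a single 3x3 projective transform obtained from calibration (a simplification of the full triangulation function) and uses OpenCV for the warping.

```python
import cv2
import numpy as np

def reconstruct_volume(left_frames, right_frames, H_left, H_right, grid_size):
    """Project left/right slit images into a common grid, merge, and stack.

    left_frames, right_frames : sequences of cross-sectional images, one per slit.
    H_left, H_right           : 3x3 transforms mapping camera pixels to the
                                global (y, z) grid (from calibration).
    grid_size                 : (width, height) of the global grid in pixels.
    """
    merged_frames = []
    for img_l, img_r in zip(left_frames, right_frames):
        warp_l = cv2.warpPerspective(np.float32(img_l), H_left, grid_size)
        warp_r = cv2.warpPerspective(np.float32(img_r), H_right, grid_size)
        merged_frames.append(0.5 * (warp_l + warp_r))  # simple average of both views
    # Stack the merged cross-sections along the scan direction to form the volume.
    return np.stack(merged_frames, axis=0)
```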
  • FIG. 4A depicts an example cross-sectional tomographic image of an anterior segment 401 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments.
  • FIG. 4B depicts an example image of a frontal chamber 402 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments.
  • the image is rendered by a 3D tomographic reconstruction.
  • Each dataset can be digitally stored for easy documentation, computer-assisted analysis, and machine learning and artificial intelligence algorithms.
  • Anterior eye tomography is an imaging modality that generates a depth-resolved, cross-sectional view of various structures of the anterior eye, including the cornea, iris, and crystalline lens.
  • Cross-sectional views allow evaluation of parameters regarding tissue health, including surface condition (e.g., smoothness and geometry) and internal features (e.g., opacity and reflectivity).
  • Numerous approaches, including slit-lamp and low-coherence interferometry, are currently available on the market. However, they have limitations in rapidly covering the entire anterior eye, a pre-requisite for comprehensive assessment of anterior eye diseases or injuries.
  • the aims of this research are to: (1) present a stereo corneal tomography prototype for qualitative assessment of simulated corneal injuries, (2) demonstrate 3D imaging and visualization of the anterior segment, and (3) compare the ocular biometry of the disclosed optical setup with the Pentacam®HR, a high-resolution, rotating Scheimpflug camera used to analyze the anterior segment of the eye.
  • FIG. 5A depicts a top view of an example optical setup of a stereo slit-scanning tomographer, in accordance with various embodiments. This approach differs from FIG. 1A, e.g., in that it uses a slit 530 to provide a light sheet rather than a DMD.
  • the optical setup is based on two primary components: an illuminator 590 and the stereoscopic cameras (CMOS1 510 and CMOS2 520) in the transverse plane.
  • One pair of cameras is depicted, but more than one pair may be used. Also, as discussed, one or more pairs of cameras can be used, additionally or alternatively, in the sagittal plane.
  • the galvanometer scanner 540, positioned between the relay lenses (e.g., lenses 535 and 550), telecentrically scans the light sheet horizontally based on control signals from a driver 541.
  • the driver in turn may be responsive to a computer 542 (e.g., control circuit or controller).
  • the computer can include a memory 542a to store instructions and a processor 542b to execute the instructions to provide the functions described herein.
  • the scanner works by rapidly moving a mirror (or other optical element) using a galvanometer motor, allowing for fast and accurate beam steering.
  • the galvanometer motor is a type of rotary motor that can rapidly and precisely rotate a mirror.
  • the mirror is mounted on the shaft of a galvanometer motor and is typically made of a low-inertia material to allow for fast movement.
  • the feedback system can include a sensor that detects the mirror's position and provides feedback to a control system, ensuring accurate positioning.
  • the control system manages the motor's movement, enabling the laser light to be directed to specific locations or scanned across an area.
  • the dichroic mirror 545 (DMLP490R, Thorlabs Inc., USA) is added to merge a fixation target 560 with the optics of the illuminator.
  • the output power of the system onto the cornea was approximately 4 µW, well below the applicable safety limit.
  • the light paths 516, 536, 546 and 551 are illumination paths, and the light paths 511 and 521 are imaging paths.
  • a path depicted by the dashed lines 561 is a fixation light path.
  • the galvanometer mirror (scanner 540) scanned the slit across a range of 18mm in the horizontal direction.
  • the two cameras were synchronized with the galvanometer mirror at a frame rate of 50 frames per second (fps). Each camera captured one frame per slit beam position with an exposure time of 25ms, totaling 350 frames.
  • the cameras can communicate with the computer 542 to receive control signals from the computer and transmit image data to the computer for processing as described herein.
  • FIG. 5B depicts a side view of the example optical setup of FIG. 5A, in accordance with various embodiments.
  • the side view includes the lenses 515, 525, 535 and 550 and the slit 530, omitting components that do not affect the light in this orientation.
  • This view depicts conjugate focal planes between the mechanical slit and the corneal plane.
  • the cylinder lens 525 focuses the beam on the slit plane to optimize the light efficiency of the projected light sheet.
  • FIGs. 6A-6F illustrate the device's potential for the diagnosis of mechanical corneal traumas. These figures provide an example panel of images showing ocular traumas imaged by the stereo slit-lamp tomographer using a near-infrared light source, in accordance with various embodiments.
  • FIG. 6A depicts an ex-vivo porcine cornea imaged by a microscope of magnification 10x. This view, analogous to conventional frontal photography of the eye, lacks necessary diagnostic information, such as cut depth.
  • FIG. 6B depicts an image of a partial thickness corneal laceration 610, in accordance with various embodiments.
  • FIG. 6C depicts an image of a metallic particle 620 on the epithelium of the cornea, on both sides of the corneal laceration 610 of FIG. 6B, in accordance with various embodiments.
  • FIG. 6D depicts an image of a punctate corneal abrasion (denoted by the dots) due to cornea dehydration, in accordance with various embodiments.
  • FIG. 6E depicts an image of Descemet’s membrane detachment, in accordance with various embodiments.
  • FIG. 6F depicts the same Descemet’s membrane detachment as FIG. 6E, but from the opposite camera view.
  • FIG. 7 shows the working principle of this process, where common local features were detected and matched using feature matching algorithms.
  • FIG. 7 depicts an example of the triangulation principle to map camera image pixels into spatial coordinates, consistent with the system of FIG. 5, in accordance with various embodiments.
  • the corresponding "m, n" pixel coordinates from the left-side camera 710 and "u, v" pixel coordinates from the paired right-side camera 720 have a common point "x, y, z" in real space.
  • the common point is defined by the function T, which depends on the pixel coordinates according to (x,y,z) ⁇ T(m,n,u,v).
  • an image 711 captured by the camera 710 includes pixels 711a, 711b and 711c which correspond to points 730a, 730b and 730c, respectively, of the eye 730.
  • an image 721 captured by the camera 720 includes pixels 721a, 721b and 721c which correspond to the points 730a, 730b and 730c, respectively, of the eye 730.
  • FIG. 8 depicts an example calibration technique to determine a stereo triangulation function for 3D imaging, consistent with the system of FIG. 5A and 5B, in accordance with various embodiments.
  • triangulation is the process where parallax differences between two or more images are used to infer the position of an object in 3D space.
  • FIG. 8 illustrates the experimental setup, where a dot grid distortion target (0.5mm/dot spacing, Edmund Optics Inc., USA) was placed perpendicular to the illumination axis (z-axis) 815.
  • the illuminator projected a light sheet 831 on one dot column 831a of the target 830 as a reference for finding pixel correspondence during the image processing.
  • the cameras recorded one frame.
  • CMOS1 810 recorded a left-side frame or image 811, where a column 811a of pixels corresponds to the dot column 831a, and (u1,v1) is the coordinate of an example pixel.
  • CMOS2 820 records a right-side frame or image 821, where a column 821a of pixels corresponds to the dot column 831a, and (u2,v2) is the coordinate of an example pixel.
  • rays 812a and 812b depict correspondences between the dot 832 and pixels 832a and 832b, respectively, and rays 813a and 813b depict correspondences between the dot 833 and pixels 833a and 833b, respectively.
  • for each slit illumination (i.e., light sheet) position, the calibration determines a homography matrix (H) that relates the camera pixel coordinates to spatial coordinates within the illuminated plane.
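  • A sketch of estimating one such plane-to-image mapping with OpenCV is given below, assuming dot centers detected in the camera image across several target depths within the light-sheet plane (so the points are not collinear) and their metric (y, z) positions known from the 0.5 mm dot spacing; the RANSAC threshold is an assumed value.

```python
import cv2
import numpy as np

def plane_homography(pixel_uv, plane_yz):
    """Homography H mapping camera pixels to metric (y, z) coordinates in the
    illuminated light-sheet plane; one H per slit position and per camera."""
    H, _inliers = cv2.findHomography(np.float32(pixel_uv),
                                     np.float32(plane_yz),
                                     cv2.RANSAC, 0.05)
    return H

def pixels_to_plane(H, pixel_uv):
    """Apply H to map detected image pixels into the light-sheet plane."""
    pts = np.float32(pixel_uv).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```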
  • the triangular relation between the stereo images and spatial coordinates is crucial for accurate 3D reconstruction of the eye.
  • standard local features of the cornea, lens, iris, and sclera must be correctly detected for each pair of stereo images.
  • Specifically, feature-matching algorithms were used: an Oriented FAST and Rotated BRIEF (ORB) descriptor and a Fast Library for Approximate Nearest Neighbors (FLANN) matcher to find common characteristics in a loop of 2,500 iterations. Suitable matches were filtered using a ratio test, assessing the symmetry of the features by cross-check matching and verifying the distance calculation of the matcher by RANSAC.
  • Oriented FAST and rotated BRIEF (ORB) descriptor refers to a fast and efficient feature descriptor in computer vision that combines the strengths of FAST key point detection and BRIEF descriptor.
  • ORB uses FAST (Features from Accelerated Segment Test) to identify potential key points in an image.
  • FAST is a fast corner detector that identifies points where the intensity changes significantly around a pixel.
  • ORB calculates the orientation of each key point. It does this by analyzing the intensity gradient around the key point, determining the dominant direction of change.
  • ORB uses a variant of BRIEF (Binary Robust Independent Elementary Features) called Rotated BRIEF to generate a binary descriptor for each key point. Rotated BRIEF is based on comparing the intensity of pairs of pixels in a rotated patch around the key point. These comparisons result in a 256-bit binary vector for each key point.
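  • A sketch of this matching pipeline (ORB detection, FLANN matching with LSH indexing for binary descriptors, ratio test, and RANSAC-based geometric verification), written with OpenCV; the feature count, ratio, and thresholds are illustrative values rather than the exact settings used for the reported results.

```python
import cv2
import numpy as np

def match_stereo_features(img_left, img_right, ratio=0.75):
    """Detect and match ORB features between a stereo image pair."""
    orb = cv2.ORB_create(nfeatures=2500)
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)

    # FLANN with LSH indexing, suitable for binary (ORB) descriptors.
    flann = cv2.FlannBasedMatcher(dict(algorithm=6, table_number=6,
                                       key_size=12, multi_probe_level=1), {})
    matches = flann.knnMatch(des1, des2, k=2)

    # Lowe ratio test to keep only distinctive matches.
    good = [m for m, n in (pair for pair in matches if len(pair) == 2)
            if m.distance < ratio * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Geometric verification with RANSAC (fundamental-matrix model).
    if len(good) >= 8:
        _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        if mask is not None:
            keep = mask.ravel().astype(bool)
            pts1, pts2 = pts1[keep], pts2[keep]
    return pts1, pts2
```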
  • Pachymetric mapping is feasible for curvature measurement in the slow scan axis, which can be susceptible to motion error with topography mapping. Pachymetric mapping allows for precise registration to compensate for eye motion.
  • In-vivo imaging protocol and comparison with Pentacam®HR measurements:
  • the protocol included only healthy eyes with no known ocular diseases.
  • the study subjects have spherical refractive errors ranging from -5.00 D to +5.00 D.
  • the right eye of each volunteer was scanned twice using our prototype instrument and once using the Pentacam®HR device.
  • An experienced operator performed the data acquisition using the stereo slit-scanning tomographer.
  • the subject placed his/her head on the chinrest and stared at the fixation target.
  • the operator precisely focused on the anterior surface of the cornea. All measurements were taken in a dimmed room to avoid stray light reflections from the environment.
  • FIGs. 9A1, 9B1 and 9C1 depict example elevation maps of reference spheres with radii of 9.65 mm, 8.00 mm and 6.15 mm, respectively, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments.
  • the key 930 shows a correspondence between the shade and the displacement in mm.
  • FIGs. 9A2, 9B2 and 9C2 depict example plots of radius of curvature vs. horizontal displacement, consistent with the reference spheres of FIGs. 9A1, 9B1 and 9C1, respectively, for different axial and horizontal positions, in accordance with various embodiments.
  • the squares, circles and triangles depict Δz = -1, 0 and +1, respectively.
  • FIGs. 9A2, 9B2 and 9C2 provide radii of curvature for the three reference spheres at different axial and horizontal positions.
  • the horizontal continuous line represents the values given by the manufacturer.
  • the areas 901, 912 and 922 correspond to the best fit between the calibrated samples and the estimated spheres.
  • the areas 900, 911 and 921 represent a displacement greater than 100 µm but less than 200 µm, while the areas 910 and 920 show a displacement greater than 200 µm.
  • the center region (~2.5 mm) shows good agreement between the measurements and the ground truth. However, as the curvature of the spheres increased, the peripheral region of the sample appeared depressed below the best-fit sphere. This deviation was mainly caused by defocused slit illumination as the surface moved further away from the focal plane.
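  • For reference, a best-fit sphere and its radius of curvature can be estimated from the reconstructed surface points with a standard algebraic least-squares sphere fit; the sketch below shows one possible approach and is not necessarily the fitting method used for the reported results.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to a 3D point cloud.

    points : (N, 3) array of reconstructed surface coordinates.
    Returns the sphere center and the radius of curvature, which can be
    compared against the nominal radii of the reference spheres.
    """
    p = np.asarray(points, dtype=float)
    # Model: x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d, with d = r^2 - a^2 - b^2 - c^2.
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = (p ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = float(np.sqrt(sol[3] + center @ center))
    return center, radius
```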
  • Table 1 summarizes the mean and standard deviation for the experimental results of the three reference spheres as well as their refractive power values.
  • FIGs. 10A and 10B show a 3D rendering of the frontal chamber of a myopic human eye.
  • FIG. 10A depicts an exemplary large-scale volumetric reconstruction of the anterior segment of the in-vivo human eye, showing anterior eye structures including the cornea, iris, and lens capsule, in accordance with various embodiments.
  • FIG. 10B depicts an example 3D rendering of the human eye of FIG. 10A in the sagittal plane, in accordance with various embodiments. Images are projected into geometrical coordinates (with an origin 0, and x, y and z dimensions) for a voxel dataset of 400x400x500 samples, and the size of the imaged volume is 15 x 15 x 6 mm.
  • FIG. 11A depicts an example cross-sectional image of a fluid-filled sac or iris cyst located on the iris surface, in accordance with various embodiments.
  • FIG. 11B depicts an example cross-sectional image of the main layers of the corneal structure, in accordance with various embodiments.
  • EP denotes the epithelium
  • S denotes the stroma
  • EN denotes the endothelium.
  • the rectangular region 1150 shows a corneal punctate pattern on the tear film.
  • FIG. 11C depicts a close-up view of the apex region 1150 of the cornea of FIG. 11B, in accordance with various embodiments.
  • the white arrow indicates the cellular debris found within the tear film.
  • FIGs. 11A and 11B show some cellular debris or mucus within the tear film after a few seconds without blinking. This characteristic is clinically relevant in the diagnosis of dry eye and in contact lens fitting.
  • FIGs. 12A1, 12B1 and 12C1 depict Bland-Altman plots for central corneal thickness (CCT), anterior chamber depth (ACD), and radius of curvature (Rc), respectively, showing repeatability of the stereo slit-scanning device using two consecutive measurements, in accordance with various embodiments.
  • the standard deviation (SD) and mean are depicted.
  • the Bland-Altman analysis showed that the maximum absolute error for repeated CCT, ACD, and Rc measurements was 25.30 µm, 0.45 mm, and 0.27 mm, respectively.
  • the mean absolute error was 12.58 µm for the CCT, 81.46 µm for the ACD, and 84.91 µm for the Rc.
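  • For reference, the repeatability statistics above follow the standard Bland-Altman construction (mean bias of the paired differences and 95% limits of agreement); a minimal sketch with hypothetical input arrays is shown below.

```python
import numpy as np

def bland_altman(measurement_1, measurement_2):
    """Mean bias and 95% limits of agreement for paired repeated measurements
    (e.g., two consecutive CCT readings per eye)."""
    m1 = np.asarray(measurement_1, dtype=float)
    m2 = np.asarray(measurement_2, dtype=float)
    diff = m1 - m2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```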
  • FIGs. 12A2, 12B2 and 12C2 depict Bland-Altman plots for corneal thickness, anterior chamber depth, and radius of curvature, respectively, showing an agreement analysis between the Pentacam®HR and the proof-of-concept prototype, in accordance with various embodiments.
  • This angular deviation, referred to as the kappa angle, can complicate elevation map computations. For instance, larger kappa angles generate more elevated topographic maps, requiring the use of best-fit toric ellipse algorithms instead of spherical ones. Additionally, sampling density and imaging performance in peripheral regions (e.g., the limbus) and on the iris are poor, often requiring interpolation (e.g., linear geometrical assumptions), which can result in missed pathological features in these structures. For example, Scheimpflug tomographers cannot accurately measure the iridocorneal angle, which is crucial for evaluating the risk of angle-closure glaucoma.
  • the stereo slit-scanning tomography device can provide depth-resolved images of anterior ocular structures. This ability could facilitate the diagnosis of ocular disorders of the iris (e.g., iridoschisis and focal iris atrophy) and the anterior chamber (e.g., Fuchs heterochromic iridocyclitis).
  • the use of a telecentric raster-scanning configuration enables better coverage of the corneal surface. Because the light sheet scans parallel to the visual axis of the eye, the field of view can be maximized up to the corneoscleral junction.
  • Our scanning density is primarily limited by the low power of the low-cost LED light source, which leads to longer exposure times. Upgrading to a 5-10x brighter light source can easily lead to a frame rate of 200-300 Hz while still meeting the light safety limit. Higher speeds can mitigate eye movements, allowing more consistent measurements. Faster speeds can also enable higher sampling density, which is essential in cases of focal pathological changes.
  • imaging speed is currently limited by the long camera exposure time required due to the low illumination power.
  • Using a more powerful light source and optimizing the optical design may enable faster imaging and thus mitigate motion artifacts.
  • computer registration algorithms can be developed to register each frame to the correct spatial location.
  • Stereo slit-scanning tomography is a low-cost alternative to AS-OCT and overcomes the limitations of current corneal topography and tomography instruments. It merges optical sectioning and stereoscopic photography for performing 2D cross-sectional imaging and 3D reconstruction of the anterior segment.
  • the proposed instrument has the potential to obtain tomographic images comparable with slit-lamp photography, with higher sampling density and better corneal coverage. This advantage enables visualization of particular anatomical structures, such as the iridocorneal angle and the eyelid. Corneal topography can also be obtained. Potential widespread clinical applications include teleophthalmology for rural areas and emergency departments where ophthalmology availability is highly limited.
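The repeatability and agreement figures quoted above follow the standard Bland-Altman procedure (bias, standard deviation of the paired differences, and the ±1.96 SD limits of agreement). Below is a minimal sketch of that computation in Python with NumPy; the sample values and variable names are illustrative assumptions, not data from the study.

    import numpy as np

    # Illustrative paired measurements (e.g., two consecutive CCT readings in micrometers).
    # These values are placeholders, not measurements from the study.
    first_scan  = np.array([540.1, 552.3, 561.0, 548.7, 533.9])
    second_scan = np.array([542.0, 550.1, 563.5, 547.2, 536.0])

    diff = first_scan - second_scan          # per-eye difference
    mean = (first_scan + second_scan) / 2.0  # per-eye mean (x-axis of a Bland-Altman plot)

    bias = diff.mean()                       # mean difference (bias)
    sd = diff.std(ddof=1)                    # standard deviation of the differences
    loa_lower = bias - 1.96 * sd             # lower limit of agreement
    loa_upper = bias + 1.96 * sd             # upper limit of agreement
    max_abs_error = np.abs(diff).max()       # maximum absolute difference
    mean_abs_error = np.abs(diff).mean()     # mean absolute difference

    print(f"bias={bias:.2f}, SD={sd:.2f}, LoA=[{loa_lower:.2f}, {loa_upper:.2f}]")
    print(f"max |error|={max_abs_error:.2f}, mean |error|={mean_abs_error:.2f}")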

Abstract

A tomographic imaging system, methods to visualize tomographic images, and a stereoscopic calibration procedure for obtaining two-dimensional cross-sectional and three-dimensional volumetric images over the entire anterior eye. The optical system combines stereoscopic imaging with patterned illumination (e.g., slit-scanning or an array of slits). The captured cross-sectional images can be viewed individually or as a fly-through video. The calibration and reconstruction methods determine the true spatial location of imaged features and record a 3D volumetric representation of the anterior eye. The system can image the entire anterior chamber and contiguous structures, including the corneal limbus, sclera and eyelids. Furthermore, the system can obtain corneal topography.

Description

STEREO TOMOGRAPHY OF THE ANTERIOR EYE
PRIORITY CLAIM
[0001] This application claims the benefit of U.S. provisional patent application no. 63/644,464, filed May 8, 2024, titled “Stereo Tomography Of The Anterior Eye,” and incorporated herein by reference.
STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
[0002] This invention was made with government support under HT9425-24-1-0747 awarded by the Department of Defense. The government has certain rights in the invention.
BACKGROUND
[0003] Various techniques have been used for imaging of the eye to evaluate its health, including slit-lamp examination, corneal topography, Scheimpflug technology, stereo photography, and anterior segment optical coherence tomography (AS-OCT). However, these approaches have various limitations and there is a need for improved techniques for imaging of the eye.
SUMMARY
[0004] The present disclosure describes an apparatus and method that combines stereo vision with optical sectioning (i.e., patterned illumination) for visualizing, recording and reviewing images of the entire anterior eye in 3D spatial contexts. The solutions herein combine telecentric, "on-axis", fast-updating patterned illumination and "off-axis" stereo vision cameras. It is also useful to have one "on-axis" camera for some visualization formats such as dark field. The setup enables a large field of view, as well as recording of cross-sectional (i.e., tomography) and volumetric 3D representations of the entire anterior eye with minimum obstruction. Additionally, we describe a mathematical model that calculates the spatial coordinates (x, y, z) of each voxel. The high resolution and precise spatial coordinate calculation improve visualization quality and the accuracy of cornea, iris, lens, and scleral biometry.
[0005] Apart from subjective viewing and quantitative ocular surface biometry, automated scanning and fly-through visualization of 3D datasets facilitate the diagnosis of various anterior segment diseases and injuries. Additionally, a multi-focal configuration can be implemented where various cameras target different structures, and each camera has an optimized depth imaging range and magnification (i.e., optical resolution). Therefore, the multi-focal imaging system captures sharp images of the cornea layers, crystalline lens, and anything in between, while also recording cellular details such as endothelial cells and corneal nerves. A spatial light modulator, together with highly efficient semiconductor light sources, enables precise control of the illumination (e.g., the width and location of the slit illumination), exposure time, and color (i.e., light wavelength). Collectively, the apparatus facilitates a variety of imaging modalities, such as flood illumination, optical sectioning, confocal illumination-detection, specular microscopy, and hyperspectral imaging/spectroscopic analysis.
[0006] The solutions herein allow a comprehensive examination of the anterior eye using one compact and easy-to-operate instrument. Notably, the whole anterior eye volume can be imaged in an automated fashion with no or minimal operator input. The recorded 3D volumetric dataset can be easily viewed as a fly-through movie to appreciate all imaged components or zoomed in on specific features of interest. The measured surfaces of the eye and their visualization in the space coordinates can provide accurate information about the location and seriousness of a disease or injury. Precise spatial calibration improves ocular biometry accuracy, including cornea surface curvature, corneal thickness, anterior chamber depth and volume, and anterior chamber angle. Additionally, the image store is inherently digital, allowing documentation, computer-assisted study, and deep-learning analysis. Other features, aspects, and aims of the disclosed invention are described in detail in the description section. This summary only provides an overview of the technology and does not limit its scope, which the appended claims explicitly define.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1A is a schematic view of an example imaging system with four cameras, including a stereo photography device in combination with a patterned illumination system, in accordance with various embodiments.
[0008] FIG. 1B depicts the cameras 106-109 of FIG. 1A in the transverse (x-z) plane, consistent with FIG. 1A, in accordance with various embodiments.
[0009] FIG. 1C depicts cameras 156, 157, 158 and 159 in a sagittal (y-z) plane, in accordance with various embodiments.
[0010] FIG. 2 depicts an example calibration technique to determine a triangulation function for 3D imaging, consistent with the system of FIG. 1A, in accordance with various embodiments.
[0011] FIG. 3 depicts an example tomographic reconstruction algorithm, consistent with the system of FIG. 1A, in accordance with various embodiments.
[0012] FIG. 4A is an example cross-sectional tomographic image of an anterior segment 401 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments.
[0013] FIG. 4B depicts an example image of a frontal chamber 402 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments.
[0014] FIG. 5A depicts a top view of an example optical setup of another stereo slit-scanning tomographer, in accordance with various embodiments.
[0015] FIG. 5B depicts a side view of the example optical setup of FIG. 5A, in accordance with various embodiments.
[0016] FIG. 6A depicts an ex-vivo porcine cornea imaged by a microscope of magnification 10x, in accordance with various embodiments.
[0017] FIG. 6B depicts an image of a partial thickness corneal laceration, in accordance with various embodiments.
[0018] FIG. 6C depicts an image of a metallic particle on the epithelium of the cornea, on both sides of the corneal laceration of FIG. 6B, in accordance with various embodiments.
[0019] FIG. 6D depicts an image of a punctate corneal abrasion due to cornea dehydration, in accordance with various embodiments.
[0020] FIG. 6E depicts an image of Descemet’s membrane detachment, in accordance with various embodiments.
[0021] FIG. 6F depicts an image of damage of the lens capsule, in accordance with various embodiments.
[0022] FIG. 7 depicts an example of the triangulation principle to map camera image pixels into spatial coordinates, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments.
[0023] FIG. 8 depicts an example calibration technique to determine a stereo triangulation function for 3D imaging, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments.
[0024] FIGs. 9A1, 9B1 and 9C1 depict example elevation maps of reference spheres with radii of 9.65 mm, 8.00 mm and 6.15 mm, respectively, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments.
[0025] FIGs. 9A2, 9B2 and 9C2 depict example plots of radius of curvature vs. horizontal displacement, consistent with the reference spheres of FIGs. 9A1, 9B1 and 9C1, respectively, for different axial and horizontal positions, in accordance with various embodiments.
[0026] FIG. 10A depicts an exemplary large-scale volumetric reconstruction of the anterior segment of the human eye in-vivo, in accordance with various embodiments.
[0027] FIG. 10B depicts an example 3D rendering of the human eye of FIG. 10A, in the sagittal plane, in accordance with various embodiments.
[0028] FIG. 11A depicts an example cross-sectional image of a fluid-filled sac or iris cyst located on the iris surface, in accordance with various embodiments.
[0029] FIG. 11B depicts an example cross-sectional image of main layers of the corneal structure, in accordance with various embodiments.
[0030] FIG. 11C depicts a close-up view of the apex region 1150 of the cornea of FIG. 11B, in accordance with various embodiments.
[0031] FIGs. 12A1, 12B1 and 12C1 depict Bland-Altman plots for corneal thickness, anterior chamber depth, and radius of curvature, respectively, showing repeatability of the stereo slit-scanning device using two consecutive measurements, in accordance with various embodiments.
[0032] FIGs. 12A2, 12B2 and 12C2 depict Bland-Altman plots for corneal thickness, anterior chamber depth, and radius of curvature, respectively, showing an agreement analysis between the Pentacam®HR and the proof-of-concept prototype, in accordance with various embodiments.
DETAILED DESCRIPTION
[0033] As mentioned at the outset, there is a need for improved techniques for imaging of the eye.
[0034] The optics of the human eye can be approximated as two converging lenses (the cornea and the crystalline lens), which project images of the environment onto the retina. The cornea is responsible for about 2/3 of the total refractive power. Meanwhile, the crystalline lens is an active optical element, able to change its shape and thickness to adjust its refractive power for near or distant vision.
[0035] Good visual acuity is only achieved if the images formed on the retina are sufficiently focused and without significant light blockage or aberrations in the transparent structures and ocular media. Additionally, supportive structures (e.g., the iris and the sclera) and ocular adnexa (e.g., the eyelids) are also important for the health of the eye. The anterior segment can be affected by various eye diseases such as opacities (e.g., corneal scarring, cataracts), shape distortion and thickness variation (e.g., corneal ectasia and edema), infections (e.g., bacterial, viral, or parasitic), or traumatic injuries (mechanical, chemical or thermal). Achieving accurate diagnosis is important for providing patients with the appropriate care.
[0036] High-resolution imaging of the anterior segment is essential for the correct diagnosis of injuries and diseases. Examples include determining wound depth in corneal abrasion and lacerations, monitoring endothelial cell loss in Fuchs dystrophy, measuring iris elevations caused by a ciliary body cyst, or differentiating nuclear cataracts from capsular subtypes. Likewise, quantitative measurements of the corneal and crystalline lens surfaces are important for numerous ophthalmic procedures, such as refractive corneal surgery, contact lens fitting, or choosing the intraocular lens (IOL) for cataract surgery.
[0037] Slit-lamp examination is currently the standard diagnostic procedure for the anterior eye in clinical practice. This optical instrument involves two principal components: an illuminator that projects an adjustable slit-shaped beam and a biomicroscope to visualize anatomical features. Both parts are able to rotate independently about one common vertical axis to perform different illumination techniques, such as flood illumination (bright field), sclerotic scatter or specular reflection.
[0038] The most distinctive imaging mode of the slit-lamp is based on the principle of “optical sectioning”. This configuration makes a virtual sectioning of the anterior eye segment by projecting a narrow "sheet of light" onto the anterior eye. When this virtual sectioning is viewed at an angle through the biomicroscope, the different layers of the cornea and lens are appreciated, generating a cross-sectional view.
[0039] Although slit lamps are fundamental tools for assessing anterior eye diseases and injuries, the examination is inherently subjective and qualitative. This limitation can lead to poor agreement among different physicians, e.g., high inter-grader variation. Likewise, the performance of the slit lamp depends directly on the skill of the operator and the selected magnification of the biomicroscope (e.g., large field of view versus high magnification/resolution). While the images can be recorded using a camera attachment, the lack of standardization leads to significant variations. It is also challenging, if not impossible, to record multiple imaging locations or views, which can potentially lead to missed pathology or incorrect diagnosis.
[0040] Alternatively, there are two distinctive commercially available approaches for semi-automated examination of the anterior eye and recording digital images: slit-scanning imaging and optical coherence tomography. Analogous to slit-lamp examination, slit-scanning imaging uses optical sectioning to generate depth-resolved cross-sectional views. The first commercial device that implemented this technology was Orbscan™ (Bausch & Lomb Incorporated, Rochester, NY, USA). This optical system projects two narrow vertical slits (one nasal, one temporal) onto the cornea at an angle of 45 degrees from the visual axis. A motorized translation stage displaces the slits horizontally to record forty images (20 nasal, 20 temporal). However, the slow mechanical scanning extends imaging time and limits the number of total projected slits. Therefore, the low sampling density of the Orbscan™ precludes acquiring data covering the entire anterior eye, which limits the measurement accuracy and can miss areas of pathology. Because the illumination is off-axis, obstruction of the illumination can occur on the contralateral side, further reducing imaging efficiency.
[0041] A competing slit-scanning configuration uses Scheimpflug cameras to match the focal plane of the imaging lens with the illuminated slit. Scheimpflug cameras extend the depth field of view and allow reliable imaging of the cornea and lens simultaneously. However, this imaging capability requires a rotational slit-scanning arm to maintain the strict geometric relationship between the illumination plane and the camera position. The most common Scheimpflug device is the Pentacam®HR (Oculus Inc., Wetzlar, Germany), which takes 50 frames in 2 seconds at equally spaced meridians. Like the Orbscan™, the mechanical translation of the slit around the corneal apex limits its imaging speed and sampling density. This poor scanning density results in a lack of coverage at the periphery, e.g., peripheral cornea and sclera. The requirement for bulk mechanical rotation also introduces instabilities that may interfere with measurement accuracy.
[0042] Finally, anterior segment optical coherence tomography (AS-OCT) uses low-coherence interferometry to resolve the depth information and to visualize anatomical features of the anterior segment in 3D. Considerable improvements in imaging speed, long-range, and deep penetration have further increased its utility in clinical practice. Over the last 15 years, AS-OCT has proven effective as a corneal topographer and tomographer. Nonetheless, these systems are relatively complex and expensive, compared to other slit-illumination based modalities.
[0043] In one aspect, the solutions herein include a 3D imaging platform that integrates computer stereo vision methods and slit-scanning imaging to achieve a 3D visualization of the anterior segment. Computer stereo vision, a method for depth estimation based on triangulation algorithms, has been widely applied to ophthalmic imaging. These applications include corneal topography, instrument tracking in surgical maneuvers, volumetric reconstruction of the optic nerve head, and stereoscopic 3D slit-lamp photography. Our developed optical system can image the entire anterior segment, with high sampling density, a feature not available with competing technologies. Additionally, we present a calibration method to perform triangulation and display 3D-rendered images of the eye. This volumetric visualization of the anterior structures provides valuable diagnostic information including the location, extent, and morphological features of the lesion or disorder in a manner similar to AS-OCTs, allowing the physician to assess their severity and urgency. Finally, the system can extract quantitative measures including the central corneal thickness, anterior chamber depth, and radius of curvature of the anterior cornea.
[0044] In one aspect, the solutions provide a new stereo-imaging device that can record 2D cross-sectional and 3D volumetric images of the entire anterior eye. The solutions can include a non-invasive and non-contact optical imaging modality that records photographs of the anterior eye and allows visualization and measurement of features, including geometric shapes, surface and internal defects, and tissue light-reflecting/light-scattering properties. The solutions can involve optical sectioning using slit-scanning patterns or another structured light approach and stereoscopic measurements of the cornea, the crystalline lens, and other anterior eye elements. The solutions can involve performing tomography of the anterior segment of the eye and ocular adnexa.
[0045] The solutions can replace or supplement the standard slit-lamp examination with automated, fully digitized imaging of the anterior eye. The recorded images can capture changes in tissue appearance (e.g., reflectivity, color, and texture) and shape (e.g., curvature and thickness). The images can be digitally stored and input to computer algorithms for quantitative analysis.
[0046] The above and other advantages will be further apparent in view of the following discussion.
[0047] As used in this specification and the appended claims, the singular articles "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. Expressions such as "including", "integrating", "having", "characterized by" or similar ones, as well as their grammatical equivalents, are open-ended terms that do not exclude additional elements or method steps.
[0048] FIG. 1 A is a schematic view of an example imaging system with four cameras, including a stereo photography device in combination with a patterned illumination system, in accordance with various embodiments. A feature of the solutions herein lies in a fast, programmable projection of illumination patterns, wherein at least one dimension has high spatial confinement (e.g., slit beam or light sheet or light blade, ring, and dot). A condenser lens 101 or lens group concentrates the light from a high intensity light source 100 such as a laser or light emitting diode (LED) to illuminate a spatial light modulator 102.
[0049] A spatial light modulator is an active optical element that selectively reflects the light into the eye 110. For example, the spatial light modulator can be a digital micromirror device (DMD) 102. A DMD is a Micro-Electro-Mechanical Systems (MEMS) device that utilizes an array of tiny, individually controlled mirrors to modulate light. These mirrors can be tilted rapidly between two positions, effectively switching light on or off, or controlling its direction. Each mirror can be controlled by a corresponding memory cell, allowing for individual control of each mirror's position. This capability enables the imaging software to create specific light patterns, such as slits or various designs like grids, dots, or rings, by controlling the activation of the micromirrors. Therefore, it allows high flexibility in illumination pattern design, spatial scanning density, and speed.
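As an illustration of how such illumination patterns can be programmed, the sketch below builds binary micromirror masks for a single scanning slit and for multiple simultaneous slits. Python/NumPy, the assumed mirror-array size, and the slit width in mirror columns are illustrative assumptions, not parameters of the disclosed device.

    import numpy as np

    DMD_ROWS, DMD_COLS = 1080, 1920   # assumed micromirror array size
    SLIT_WIDTH = 4                    # assumed slit width in micromirror columns

    def single_slit(col_start: int) -> np.ndarray:
        """Binary mask with one vertical slit of 'on' mirrors starting at col_start."""
        mask = np.zeros((DMD_ROWS, DMD_COLS), dtype=np.uint8)
        mask[:, col_start:col_start + SLIT_WIDTH] = 1
        return mask

    def multi_slit(col_starts) -> np.ndarray:
        """Binary mask with several simultaneous slits (spacing must avoid overlap in the cameras)."""
        mask = np.zeros((DMD_ROWS, DMD_COLS), dtype=np.uint8)
        for c in col_starts:
            mask[:, c:c + SLIT_WIDTH] = 1
        return mask

    # A scan sequence: one slit stepped across the full mirror array.
    scan_sequence = [single_slit(c) for c in range(0, DMD_COLS - SLIT_WIDTH, SLIT_WIDTH)]
    print(len(scan_sequence), "patterns in the scan sequence")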
[0050] The second advantage is related to the programmable activation period of the micromirrors. Longer "on" periods imply more exposure time and light collected by the cameras. By adjusting the activation period of each mirror independently, multiple light sheets or other patterns with different exposure times can be projected simultaneously. The DMD 102 switches rapidly between the "on" and "off" states, allowing the generation of hundreds of different patterns within a second. This fast positioning of the patterns allows shorter total imaging time, reducing motion artifacts during the measurement.
[0051] In a second embodiment, a slit beam is directly projected onto the eye 110. This embodiment replaces the DMD 102 with a cylindrical lens or a pair of anamorphic prisms; thus, the illuminating beam is shaped as a narrow line. See FIGs. 5A and 5B. However, this embodiment requires mechanical scanning mirrors, such as polygonal mirrors or galvanometer-resonant scanners, to displace the slit beam along the ocular surface 111. The illumination pattern is also limited to a single moving slit.
[0052] The slit illumination (or other programmed patterns) generated by the DMD 102 is projected onto the ocular surface 111 by an optical relay system (here shown with two positive lenses 103 and 105). The optical magnification (approximately 1x to 3x, dependent on the size of the DMD 102 chip) is chosen so the image of the DMD 102 (i.e., the extent of all scanning patterns) covers the entire anterior eye (> 25 mm x 25 mm field of view). A beam splitter, or a dichroic mirror 104, is located between the lenses 103 and 105 to merge the relay system with the optics of the frontal view camera 119 and the fixation target subsystem 120 or module.
[0053] Although at least two cameras must be placed off the illumination axis 125 and at opposing sides of it, additional cameras with differential targeting, offset angles, and/or magnifications are desirable. The cameras shown are in the transverse plane. Cameras can also be placed in the sagittal plane, above and below the illumination axis. See FIGs. 1B and 1C. Here, we show an embodiment consisting of four cameras in "toe-in" configurations. Two paired cameras, 107 and 108, are located with their optical axes at a moderate offset angle (an acute angle such as about +/-30 degrees) relative to the illumination axis 125, aiming within the anterior chamber of the eye to record surfaces of the cornea and crystalline lens 113. On the other hand, another set of paired cameras 106 and 109 are placed with their optical axes at a larger skew angle (an acute angle such as about +/-60 degrees relative to the illumination axis 125, on opposing sides of the axis) to have better optical sectioning performance for resolving and differentiating the cornea layers 111. The two paired cameras together form a stereo camera, and each camera has a lens and image sensor. Accordingly, a first pair of cameras may be placed at a first angle relative to the illumination axis, on opposing sides of the illumination axis, and a second pair of cameras may be placed at a second angle relative to the illumination axis, on opposing sides of the illumination axis, where the first angle is different than the second angle.
[0054] The optimization of camera lens f-numbers involves a balance between image resolution and depth of field. For example, lenses with f-number=f/6 provide a depth of focus over 2 mm, with a lateral resolution of 12 µm. Alternatively, dynamic focusing, i.e., acquiring multiple images at various focal depths and then digitally fusing the acquired volumes, can improve depth imaging range while maintaining high optical throughput and resolution.
[0055] The fixation target subsystem 120 of the present embodiment is based on a Badal/Nagel optometer variation. This optical accessory uses an electronic display (e.g., an LCD screen) to show a fixation target 121 (e.g., a dot, cross, bull's eye, etc.). The combination of two fixed positive lenses 105, 122, and one movable positive lens 123 can compensate for the subject's refractive error for clear viewing. The fixation target is used to help the patient focus their gaze on a specific location to stabilize the eye and ensure that high-quality images of the eye are captured. Consequently, it helps to maintain gaze and stabilize the eye during imaging.
[0056] A large displacement of the lens 123 can stimulate the accommodative response of the crystalline lens 113, which enables the study of the lens shape changes based on the desired vergence stimuli. Moreover, the described optical module merges a front-view camera 119 with the disclosed device by an extra beam splitter 118. The front-view camera 119 eases the alignment of the proposed system with the cornea 111, iris 112, and crystalline lens 113.
[0057] FIG. 1B depicts the cameras 106-109 of FIG. 1A in the transverse (x-z) plane, consistent with FIG. 1A, in accordance with various embodiments. The cameras 107 and 108 have their optical axes at angles a1 and -a1, respectively, relative to the illumination axis (the z axis), and focused at a depth represented by a focal point z1, and the cameras 106 and 109 have their optical axes at angles a2 and -a2, respectively, relative to the illumination axis, and focused at a depth represented by a focal point z2. The two pairs of cameras can therefore be focused at different depths of the eye, and positioned at different angles relative to the illumination axis. In this example, a1 and -a1 are first acute angles and a2 and -a2 are second acute angles.
[0058] FIG. 1C depicts cameras 156, 157, 158 and 159 in a sagittal (y-z) plane, in accordance with various embodiments. The cameras 157 and 158 have their optical axes at angles a3 and -a3, respectively, relative to the illumination axis, on opposing sides of the axis, and focused at a depth represented by a focal point z3, and the cameras 156 and 159 have their optical axes at angles a4 and -a4, respectively, relative to the illumination axis, on opposing sides of the axis, and focused at a depth represented by a focal point z4. The points z1-z4 can all represent different depths, in one approach. In this example, a3 and -a3 are third acute angles and a4 and -a4 are fourth acute angles.
[0059] In another option, more than two pairs of cameras are used in the transverse and/or sagittal planes.
[0060] Imaging Modes
[0061] In this section, we describe imaging methods enabled by the presented stereo tomography device. It is noted that the following imaging methods are not mutually exclusive; two or more of the methods may be performed at the same time to visualize target features.
[0062] Cross-sectional imaging of the entire anterior eye
[0063] In this mode, the instrument performs an automated slit-lamp examination. The illuminating slit (with a height of > 25 mm and width of 10-200 µm) scans the entire anterior surface of the eye 111 by, e.g., alternating the activated DMD 102 mirrors. The spatial distribution of all slits satisfies the Nyquist sampling requirement. Therefore, to adequately cover the entire field of view of the eye 110, a total of 250 to 5,000 slit illumination patterns are required. Multiple slits can be illuminated simultaneously, as long as the images do not overlap in the camera's view. This approach reduces the number of total required camera exposures, thus decreasing the overall imaging time.
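The 250 to 5,000 figure follows from the field of view, the slit width, and Nyquist sampling (a scan step of half the slit width). A small back-of-the-envelope sketch in Python, using only the numbers stated in this paragraph, is shown below.

    FIELD_OF_VIEW_MM = 25.0            # lateral extent to cover (mm)

    def slits_needed(slit_width_um: float) -> int:
        """Number of slit positions to cover the field at Nyquist (step = slit width / 2)."""
        step_mm = (slit_width_um / 1000.0) / 2.0
        return int(round(FIELD_OF_VIEW_MM / step_mm))

    for width_um in (200.0, 10.0):     # the 10-200 um range given for the slit width
        print(f"{width_um:.0f} um slit -> ~{slits_needed(width_um)} patterns")
    # ~250 patterns for a 200 um slit and ~5,000 for a 10 um slit,
    # matching the range stated above.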
[0064] The cameras 106, 107, 108, and 109 are fitted with low magnification lenses (e.g., 0.2x-1.0x) to cover the necessary field of view (e.g., 25 mm x 25 mm). The cameras with the same offset angle (e.g., cameras 107 and 108) produce stereo pairs of images, which are acquired simultaneously according to the scanning period of the DMD 102. For example, a stereo pair of images can be obtained by each pair of correspondingly positioned cameras which can be in the sagittal plane or the transverse plane. Recording starts once the device is aligned in front of the eye 110. A computer 114 (e.g., control circuit or controller) sends a sequence of illumination patterns to the DMD drivers 115, responsible for activating the elements of the array of micromirrors of the DMD 102. The computer can include a memory 114a to store instructions and a processor 114b to execute the instructions to provide the functions described herein. Simultaneously, the synchronization signal 116 (e.g., from the computer 114) is sent to the cameras 106, 107, 108, and 109 to coordinate the image acquisition. Likewise, the front-view camera 119 can receive the same synchronization signal to acquire en-face views of the eye 110. Finally, a high-speed interface 117 sends the captured images to the computer 114.
[0065] In an alternative embodiment, one pair of stereo cameras has a slight offset angle (an acute angle such as about +/-15 degrees) for imaging deeper structures such as the crystalline lens and anterior vitreous.
[0066] Biomicroscope imaging of the cornea and anterior chamber cells
[0067] In this mode, a subset of the cameras is fitted with high-magnification (between 4x and 20x), high-resolution (between 1 µm and 5 µm) lenses. Zoom lenses are desirable because they allow easy switching between the low-magnification and high-magnification configurations. The field of view is centered on the central cornea 111, iris 112, and anterior chamber (up to 10 mm x 10 mm wide and 3 mm deep). The optical sectioning, scanning, and recording sequence is similar to that described above. These images are combined during the post-processing to provide an en-face morphological analysis of the selected corneal layer (e.g., assessing the corneal epithelium to differentiate surface abrasion from deeper laceration, and visualizing the corneal endothelium). A more common application is assessment of the corneal epithelium, as corneal abrasion is very common and is distinguished from deeper laceration. This analysis also includes cells within the anterior chamber and gathers cellular information such as morphology (shape), size, and density.
[0068] Modes can also be provided for imaging of deeper structures such as the crystalline lens and anterior vitreous using cameras at a small angular offset (infrared or red light is useful to reduce pupil constriction), wide-field viewing of the sclera and eyelids, and evaluation of the anterior chamber angle with a narrow slit (infrared is useful to better penetrate the limbus).
[0069] Digital confocal and dark field biomicroscopy
[0070] The combination of spatially confined illumination patterns and discrete camera pixels enables digital confocal or dark-field imaging modes. The activated DMD 102 micromirrors act as illumination pinholes. Depending on the number of activated mirrors, the effective pinhole diameter ranges between 10 µm and 200 µm. The front-view camera 119, fitted with a high-magnification (between 4x and 20x), high-resolution (between 1 µm and 5 µm) lens, captures the image. The camera sensor 119 is conjugated to the DMD 102 through relay optics; thereby, there will be a unique pixel-micromirror correspondence. By digitally selecting only the camera 119 pixels corresponding to the activated DMD 102 micromirror pinhole, and rejecting all other ones, a digital confocal image can be reconstructed. Similarly, by averaging an annular region surrounding the conjugated pinhole, a dark field image can be reconstructed. For example, this region can be an offset of an effective numerical aperture from 0.01 to 0.1.
[0071] A full-frontal view confocal/dark-field image can be generated by scanning the illuminating and corresponding collecting camera pixels across the desired field of view (e.g., the center 10 mm x 10 mm). Using a pinhole array, i.e., activating multiple isolated DMD 102 pinholes, can reduce total imaging time, given that the spacing is large enough to avoid crosstalk (10x-100x the corresponding pinhole diameter). Alternatively, a slit pinhole can also be used, although the confocality is limited to a single dimension. In another embodiment, dark-field images are captured using offset camera(s) co-oriented with the slit, e.g., camera(s) in the sagittal plane when illuminating using vertical slits. Additionally, flood-illuminated images can be captured by activating all DMD 102 micromirrors. By subtracting the flood-illuminated bright field image from the digitally reconstructed confocal or dark-field ones, the system can further eliminate specular and stray reflections. Consequently, the use of confocal and dark-field imaging modes enhances the contrast of unstained cells, including corneal epithelium and cells in the anterior chamber of diseased or injured eyes.
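A minimal sketch of the digital confocal and dark-field reconstruction described above is given below, assuming the pixel-micromirror correspondence has already been established by the relay-optics conjugation. The function name, mask radii, and toy frames are illustrative assumptions.

    import numpy as np

    def reconstruct_confocal_and_darkfield(frames, pinhole_centers, r_confocal=2, r_annulus=(6, 12)):
        """
        frames: list of camera images (H x W arrays), one per activated pinhole pattern.
        pinhole_centers: list of (row, col) camera pixels conjugate to each activated pinhole.
        Returns (confocal, darkfield) images accumulated over the scan.
        """
        h, w = frames[0].shape
        confocal = np.zeros((h, w), dtype=np.float64)
        darkfield = np.zeros((h, w), dtype=np.float64)
        yy, xx = np.mgrid[0:h, 0:w]

        for frame, (r0, c0) in zip(frames, pinhole_centers):
            dist = np.hypot(yy - r0, xx - c0)
            conf_mask = dist <= r_confocal                              # pixels conjugate to the pinhole
            ann_mask = (dist >= r_annulus[0]) & (dist <= r_annulus[1])  # surrounding annulus
            confocal[conf_mask] += frame[conf_mask]                     # keep only conjugate pixels
            darkfield[r0, c0] += frame[ann_mask].mean()                 # average scattered light around the pinhole
        return confocal, darkfield

    # Toy usage with synthetic frames, one pinhole per frame.
    frames = [np.random.rand(64, 64) for _ in range(4)]
    centers = [(16, 16), (16, 48), (48, 16), (48, 48)]
    conf_img, df_img = reconstruct_confocal_and_darkfield(frames, centers)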
[0072] High dynamic range imaging
[0073] In this mode, multiple images of the same optically sectioned area are captured under various illumination and exposure conditions. This approach allows both dim and bright features to be effectively visualized, similar to high dynamic range (HDR) photography. Images are acquired with different light intensities by adjusting the activation period of the DMD 102 or by modifying the exposure time and gain of the cameras 106-109 and 119. These differently exposed images are then merged, resulting in a significant enhancement in contrast, sensitivity, and dynamic range.
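One simple way to realize the exposure merge described above is a weighted average in a common radiometric scale that favors well-exposed pixels; the sketch below is an assumed approach, not necessarily the device's actual HDR pipeline.

    import numpy as np

    def merge_exposures(images, exposure_times_ms):
        """
        Merge differently exposed frames of the same optical section.
        Each frame is normalized by its exposure time, and mid-range (well-exposed)
        pixel values receive higher weight than clipped or dark ones.
        """
        acc, weight_sum = None, None
        for img, t in zip(images, exposure_times_ms):
            img = img.astype(np.float64)
            radiance = img / t                                       # normalize to a common scale
            w = 1.0 - np.abs(img / (img.max() + 1e-9) - 0.5) * 2.0   # triangular weight
            acc = radiance * w if acc is None else acc + radiance * w
            weight_sum = w if weight_sum is None else weight_sum + w
        return acc / np.maximum(weight_sum, 1e-9)

    # Toy usage: three synthetic exposures of the same slit image.
    base = np.random.rand(32, 32)
    hdr = merge_exposures([base * t for t in (5, 25, 100)], [5, 25, 100])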
[0074] Hyperspectral imaging
[0075] The use of a spectral light source 100 enables the analysis of the wavelength-dependent absorption and scattering properties of the ocular tissue, cells, or foreign body. In a specific embodiment, light sources 100 of different wavelengths (represented by light sources 100a, 100b and 100c) are combined using either dichroic mirrors or optical fibers. The selection of wavelengths includes visible red (e.g., 640nm), green (e.g., 532nm), and blue (e.g., 488nm) light, as well as near-infrared (between 700 and 900nm). Using red, green, and blue illumination to combine images results in a true color representation of the anatomical structures. Additionally, green illumination highlights blood vessels and red blood cells in the anterior chamber. Shorter wavelength light, such as blue light, is more scattered than longer wavelengths, making it more sensitive for detecting cataracts in the crystalline lens 113 and allowing quantification of the degree of density and opacity. In contrast, infrared light experiences less scattering in ocular tissues, better penetrating the limbus and the sclera, which allows for more precise visualization and measurement of the anterior chamber angle.
[0076] Monitoring the ocular dynamics
[0077] High DMD 102 and camera (106, 107, 108, 109, and 119) speeds (e.g., 500 frames per second) reduce the total imaging time, mitigating the impact of involuntary eye movements and improving yields. Additionally, the high acquisition speed of the technique enables the visualization of the ocular dynamics. High-volume imaging rates of up to 10 volumes per second are achieved by optimizing the imaging field of view and sampling density. This characteristic enables the acquisition of the dynamics of different ocular structures, including the iris 112 and the crystalline lens 113. For example, the pupil constriction of the iris 112 can be induced by displaying a bright image on the fixation target. On the other hand, the accommodative response of the lens is stimulated by moving the lens 123 of the fixation subsystem. To capture even faster movements and dynamics, such as blink, nystagmus, and saccades, the instrument can switch to flood illumination mode (e.g., illuminating the entire anterior eye by switching on all DMD mirrors) to reach up to 200 recordings per second.
[0078] Alignment, Examination, and Visualization of the Anterior Eye Segment
[0079] Before initializing the measurement, the operator has the option to select between manual or automatic instrument alignment. In the manual configuration, the apparatus switches between flood illumination, which helps to locate the eye, and patterned illumination (e.g., a single slit beam) that brings to focus the desired anatomical structure. The user displaces the optical setup similarly to the operation of the conventional slit-lamp biomicroscopes, using as reference the preview images from the front-view camera 119 and any of the offset ones (cameras 106, 107, 108, and 109). In the automatic alignment, simple image processing algorithms, including iris detection and image correlation, allow automated positioning of the imaging interface. Based on the detected eye position, the controlling software sends electrical or digital commands from computer 114 to a motorized translation stage for instrument positioning. The system can continuously track the eye’s position using a feedback algorithm in a closed loop.
[0080] Imaging commences with one or more of the modes described previously. The high imaging speed of the apparatus minimizes the effects of involuntary eye movements. However, it may still be essential to use a method that ensures correct frame acquisition and mitigates motion artifacts. At a predetermined interval of time (e.g., between 0.1 and 0.5 seconds), the entire anterior eye segment is illuminated, either using flood illumination by turning on all the DMD 102 micromirrors or using a separate illuminator. The captured full-field image is then used as a reference to track eye position and gaze during the measurement. If eye movements are detected because the recorded image does not correlate with the reference one, the imaging software can pause imaging and re-obtain data. In alternative methodologies, the movement information is recorded and used in volumetric calibration, as described later in this patent.
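The correlation check described above can be implemented, for example, as a zero-mean normalized cross-correlation between the periodic full-field reference frame and the most recent full-field frame, pausing acquisition when the score falls below a threshold. The function and threshold below are illustrative assumptions.

    import numpy as np

    def normalized_cross_correlation(reference, current):
        """Zero-mean normalized cross-correlation between two full-field frames."""
        a = reference.astype(np.float64) - reference.mean()
        b = current.astype(np.float64) - current.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        return float((a * b).sum() / denom) if denom > 0 else 0.0

    def eye_moved(reference, current, threshold=0.9):
        """Flag a frame as motion-corrupted when its correlation with the reference drops below threshold."""
        return normalized_cross_correlation(reference, current) < threshold

    # If eye_moved(...) returns True, the acquisition software can pause and re-acquire the
    # affected slit positions, or record the offset for use in volumetric calibration.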
[0081] The imaging interface of the presented embodiment provides a fast visualization of the acquired dataset with minimal processing. Each stereo camera captures a series of eye images that can either be viewed independently or combined into a mosaic of images. For specific illumination patterns, such as multiple slit beams, the illuminated areas are divided and reorganized into a sequence based on their spatial location (e.g., arranged from left to right). Furthermore, the captured sequence can be quickly displayed as a fly-through video, allowing the examiner to review and identify ocular abnormalities or injuries efficiently. The high sampling density ensures full coverage of the entire ocular surface in the imaging field. Users can pause the video, zoom in on specific regions of interest, and scroll through all the captured image frames.
[0082] Triangulation method for tomographic reconstruction of the eye
[0083] In addition to reviewing images captured by each camera individually, the recorded volumetric datasets can be combined to achieve a true 3D representation of the entire anterior eye. We term the method volumetric tomography reconstruction. Here, we describe a method using mathematical triangulation to fuse images from multiple cameras and retrieve the true spatial locations of the imaged features.
[0084] FIG. 2 depicts an example calibration technique to determine a triangulation function for 3D imaging, consistent with the system of FIG. 1A, in accordance with various embodiments. The technique includes a procedure of the optical system to perform triangulation among the different images of the cameras. Specifically, this method uses a reference target (e.g., dot grid 201). The target is initially located at one end (e.g., the nearest distance) of the depth of field. The target can be provided on a planar surface which is used as part of the triangulation, prior to using the system for imaging of the eye. The target can include a grid of dark circles on a white background, for example.
[0085] The illumination system projects a slit beam 202 on one dot column 201a of target 201 as a reference for finding pixel correspondences. The dots are separated by distances x and y horizontally and vertically, respectively. Displacing the target 201 along the illumination axis (z), the cameras 106-109 acquire images 207, 208, 209, and 210, respectively, at different depth positions.
[0086] Generally, triangulation is described as a function (T) that relates the spatial coordinates (x,y, z) of the object with their corresponding pixel coordinates (un,vn) of "n" cameras, such that:
(1) (x, y, z) ≈ T(u1, v1, u2, v2, ..., un, vn).
[0087] The calibration process involves estimating the triangulation function by solving for the spatial projection using regression analysis. Using an exemplar dot grid calibration target 201, the centers of each circle on the target are defined with specific spatial coordinates based on the geometrical dot spacing (x, y) and the measured depth (z). Image processing algorithms, such as the circle Hough transform, are used to detect the centers of the dots in images 207, 208, 209, and 210. For example, the center of the dot 203 is at a pixel coordinate (u,v) in each of the images 207-210. The obtained pixel coordinates are then associated with their corresponding spatial coordinates to calculate the triangulation function.
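A compact sketch of this calibration step for one stereo pair is shown below: dot centers are detected with the circle Hough transform (here via OpenCV's HoughCircles), paired with their known (x, y, z) grid positions, and a polynomial regression is fit as the triangulation function. The parameter values and the simplified feature set (no cross terms) are assumptions for illustration.

    import cv2
    import numpy as np

    def detect_dot_centers(gray_u8):
        """Detect calibration-dot centers (pixel coordinates) with the circle Hough transform.
        Expects an 8-bit grayscale image."""
        blurred = cv2.GaussianBlur(gray_u8, (5, 5), 1.5)
        circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=15,
                                   param1=100, param2=20, minRadius=3, maxRadius=15)
        if circles is None:
            return np.empty((0, 2))
        return circles[0, :, :2]               # (u, v) centers; radii are dropped

    def fit_triangulation(pixel_coords, spatial_coords, degree=3):
        """
        pixel_coords: N x 4 array of stacked (u1, v1, u2, v2) from the two cameras.
        spatial_coords: N x 3 array of the known (x, y, z) dot positions.
        Returns least-squares coefficients mapping polynomial pixel features to (x, y, z).
        """
        feats = [np.ones(len(pixel_coords))]
        for d in range(1, degree + 1):
            feats.extend([pixel_coords[:, i] ** d for i in range(pixel_coords.shape[1])])
        A = np.stack(feats, axis=1)
        coeffs, *_ = np.linalg.lstsq(A, spatial_coords, rcond=None)
        return coeffs                           # apply with: A_new @ coeffs -> (x, y, z)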
[0088] While this calibration method gives a well-defined triangulation function that calculates the spatial coordinates based on the corresponding local features of eye cross-sectional images, displaying x, y, and z components in a high-resolution point cloud model requires significant data size and processing resources. Additionally, displaying only corresponding local features has the issue of omitting regions of interest that would have to be interpolated. The optical sectioning capability allows the use of a geometric transformation that links camera space to the true spatial coordinates, which allows 2D cross-sectional and 3D volumetric visualization in anatomically correct scales. This approach is preferred over a point cloud model, which inherently has limited sampling density and resolution and, hence, cannot reliably capture pathologies. The calibration only needs to be performed once and can be used for all subsequent imaging sessions, given that the optical setup remains unchanged.
[0089] FIG. 3 depicts an example tomographic reconstruction algorithm, consistent with the system of FIG. 1A, in accordance with various embodiments. The algorithm is for two cameras of the optical system. For each slit-scanning period, the corresponding left-side and right-side cameras 301 and 302, respectively, capture tomographic images of the anterior segment at different viewpoints 303 and 304, respectively. PL(u1,v1) and PR(u2,v2) represent pixels of the images captured by the left and right side cameras, respectively.
[0090] A restoration and enhancement image processing process can be performed to remove noise degradation and improve the contrast and sharpness of the imaged ocular structures.
[0091] Steps 305 and 306 accomplish the projective transformation of the cross-sectional images of each camera into corrected 3D global coordinates (images 307 and 308; IL(y,z) and IR(y,z)). Prior to adding both projected images 307 and 308, adjustment of the contrast can be done to emphasize significant clinical details (e.g., foreign bodies, cataracts). The merged frame 309 is then stacked into a 3D tomographic dataset 310 of other merged frames based on the scanning density of the measurement. Finally, ray-tracing algorithms correct the refraction of the cornea and crystalline lens, which enables the visualization of ocular features in actual anatomical dimensions, as well as the quantification of biometric and keratometry parameters, including corneal and crystalline lens thickness and the depth and volume of the anterior chamber. Corneal and lens thickness is the depth displacement between the anterior and posterior surfaces of the cornea and lens, respectively. The anterior chamber depth (ACD) is the distance from the central anterior corneal surface to the anterior crystalline lens surface. The iridocorneal angle is the sharp angle formed between the cornea, sclera and iris.
[0092] FIG. 4A depicts an example cross-sectional tomographic image of an anterior segment 401 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments.
[0093] FIG. 4B depicts an example image of a frontal chamber 402 of the eye, consistent with FIGs. 1A-3, in accordance with various embodiments. The image is rendered by a 3D tomographic reconstruction. Each dataset can be digitally stored for easy documentation, computer-assisted analysis, and machine learning and artificial intelligence algorithms.
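Before turning to the examples, the per-slit reconstruction loop of FIG. 3 (projective transformation of each camera's frame, intensity merge, and stacking into a volume) can be sketched as follows, assuming precalibrated homographies and omitting the ray-tracing refraction correction; OpenCV's warpPerspective is used here purely for illustration.

    import cv2
    import numpy as np

    def reconstruct_volume(left_frames, right_frames, H_left, H_right, out_size=(400, 500)):
        """
        left_frames / right_frames: lists of grayscale frames, one pair per slit position.
        H_left / H_right: 3x3 homographies mapping each camera's image plane to the
        slit (y-z) plane, obtained from the calibration step.
        Returns a 3D array indexed as (slit position, y, z).
        """
        merged_slices = []
        for img_l, img_r in zip(left_frames, right_frames):
            proj_l = cv2.warpPerspective(img_l, H_left, out_size)    # I_L(y, z)
            proj_r = cv2.warpPerspective(img_r, H_right, out_size)   # I_R(y, z)
            merged = cv2.addWeighted(proj_l, 0.5, proj_r, 0.5, 0.0)  # simple average merge
            merged_slices.append(merged)
        return np.stack(merged_slices, axis=0)                       # stack along the scan direction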
[0094] Example 1
[0095] Purpose: Anterior eye tomography is an imaging modality that generates a depth-resolved, cross-sectional view of various structures of the anterior eye, including the cornea, iris, and crystalline lens. A cross-sectional view allows evaluating all parameters regarding tissue health, including surface condition (e.g., smoothness and geometry) and internal features (e.g., opacity and reflectivity). Numerous approaches, including slit-lamp and low-coherence interferometry, are currently available in the market. However, they have limitations in covering the entire anterior eye in a short time period, a prerequisite for comprehensive assessment of anterior eye diseases or injuries. Therefore, the aims of this research are to: (1) present a stereo corneal tomography prototype for qualitative assessment of simulated corneal injuries, (2) demonstrate 3D imaging and visualization of the anterior segment, and (3) compare the ocular biometry of the disclosed optical setup with the Pentacam®HR, a high-resolution, rotating Scheimpflug camera used to analyze the anterior segment of the eye.
[0096] Methodology: Analogous to slit-lamp biomicroscopy, the stereo slit-scanning system uses optical sectioning to visualize the ocular structures.
[0097] FIG. 5A depicts a top view of an example optical setup of a stereo slit-scanning tomographer, in accordance with various embodiments. This approach differs from FIG. 1A, e.g., in that it uses a slit 530 to provide a light sheet rather than a DMD.
The optical setup is based on two primary components: an illuminator 590 and the stereoscopic cameras (CMOS1 510 and CMOS2 520) in the transverse plane. One pair of cameras is depicted but more than one pair may be used. Also, as discussed, one or more pairs of cameras can be used, additionally or alternatively, in the sagittal plane.
[0099] The illuminator includes, e.g., a 470nm light emitting diode 505 (EP470S04, Thorlabs Inc., USA) and an anamorphic lens pair (aspheric condenser lens Lc 515 with f=30 mm and cylinder lens CL 525 with f=50 mm) to illuminate an adjustable mechanical slit 530. An optical relay system (including an achromatic lens L1 535 with f=75 mm, a scanner 540 (an active optical element), a dichroic mirror or DM 545, and an achromatic lens L2 550 with f=200 mm) projects the vertical light sheet onto the ocular surface 555 with a size of 15x0.15mm. The galvanometer scanner 540, positioned between the relay lenses (e.g., lenses 535 and 550), telecentrically scans the light sheet horizontally based on control signals from a driver 541. The driver in turn may be responsive to a computer 542 (e.g., control circuit or controller). The computer can include a memory 542a to store instructions and a processor 542b to execute the instructions to provide the functions described herein.
[00100] The scanner works by rapidly moving a mirror (or other optical element) using a galvanometer motor, allowing for fast and accurate beam steering. The galvanometer motor is a type of rotary motor that can rapidly and precisely rotate a mirror. The mirror is mounted on the shaft of a galvanometer motor and is typically made of a low-inertia material to allow for fast movement. The feedback system can include a sensor that detects the mirror's position and provides feedback to a control system, ensuring accurate positioning. The control system manages the motor's movement, enabling the laser light to be directed to specific locations or scanned across an area.
[00101] Moreover, the dichroic mirror 545 (DMLP490R, Thorlabs Inc., USA) is added to merge a fixation target 560 with the optics of the illuminator. The output power of the system onto the cornea was ~4 µW, well below the applicable safety limit.
[00102] The light paths 516, 536, 546 and 551 are illumination paths, and the light paths 511 and 521 are imaging paths. A path depicted by the dashed lines 561 is a fixation light path.
[00103] The stereo-imaging setup has two CMOS cameras (acA2040-120um, Basler AG, Germany) placed in a "toe-in" configuration at an angle of 40 degrees from the illumination axis. We fitted them with parfocal lenses of focal length f = 16mm and numerical aperture NA = 0.08 to achieve a depth of field of 2.25mm and a lateral resolution of 12 µm. The galvanometer mirror (scanner 540) scanned the slit across a range of 18mm in the horizontal direction. The two cameras were synchronized with the galvanometer mirror at a frame rate of 50 frames per second (fps). Each camera captured one frame per slit beam position with an exposure time of 25ms, totaling 350 frames. The cameras can communicate with the computer 542 to receive control signals from the computer and transmit image data to the computer for processing as described herein.
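The scan parameters quoted above determine the slit spacing and the total acquisition time; the short Python sketch below works them out and generates the per-frame slit positions that a galvanometer driver could convert to command voltages. Only numbers stated in this paragraph are used; the linear position schedule is an assumption.

    import numpy as np

    SCAN_RANGE_MM = 18.0      # horizontal range covered by the light sheet
    NUM_FRAMES = 350          # one camera frame per slit position
    FRAME_RATE_HZ = 50.0      # synchronized camera / galvanometer rate

    step_mm = SCAN_RANGE_MM / (NUM_FRAMES - 1)   # spacing between slit positions
    total_time_s = NUM_FRAMES / FRAME_RATE_HZ    # total acquisition time

    # Per-frame slit positions (mm), centered on the optical axis; a real driver would
    # convert these to galvanometer command voltages through its own calibration.
    positions_mm = np.linspace(-SCAN_RANGE_MM / 2, SCAN_RANGE_MM / 2, NUM_FRAMES)

    print(f"step ~{step_mm * 1000:.0f} um, acquisition ~{total_time_s:.0f} s")
    # ~52 um slit spacing and ~7 s per volume at these settings.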
[00104] FIG. 5B depicts a side view of the example optical setup of FIG. 5A, in accordance with various embodiments. The side view includes the lenses 515, 525, 535 and 550 and the slit 530, omitting components that do not affect the light in this orientation. This view depicts conjugate focal planes between the mechanical slit and the corneal plane. The cylinder lens 525 focuses the beam on the slit plane to optimize the light efficiency of the projected light sheet.
[00105] Results: FIGs. 6A-6F illustrate the device’s potential for diagnosis of mechanical corneal traumas. These figures provide an example panel of images showing ocular traumas imaged by the stereo slit-lamp tomographer using a near-infrared light source, in accordance with various embodiments.
[00106] We simulated partial and full-thickness corneal lacerations and cataract surgical incisions using a scalpel. We also added some metallic particles to imitate the presence of foreign bodies on the cornea. FIG. 6A depicts an ex-vivo porcine cornea imaged by a microscope of magnification 10x. This view, analogous to conventional frontal photography of the eye, lacks necessary diagnostic information, such as cut depth.
[00107] FIG. 6B depicts an image of a partial thickness corneal laceration 610, in accordance with various embodiments.
[00108] FIG. 6C depicts an image of a metallic particle 620 on the epithelium of the cornea, on both sides of the corneal laceration 610 of FIG. 6B, in accordance with various embodiments.
[00109] FIG. 6D depicts an image of a punctate corneal abrasion (denoted by the dots) due to cornea dehydration, in accordance with various embodiments.
[00110] FIG. 6E depicts an image of Descemet’s membrane detachment, in accordance with various embodiments.
[00111] FIG. 6F depicts the same Descemet’s membrane detachment as FIG. 6E, but from the opposite camera view.
[00112] Conclusion: (1) Stereo slit-lamp tomography is a novel imaging modality that enables depth-resolved visualization and 3D tomographic reconstruction. (2) This device is shown as an alternative to the slit-lamp examination for mechanical traumas. Deep-learning algorithms might help automate future clinical cases.
[00113] Example 2
[00114] Methodology: For each stereo pair of frames, we resolved the depth of the anterior segment using triangulation. FIG. 7 shows the working principle of this process, where common local features were detected and matched using feature matching algorithms.
[00115] FIG. 7 depicts an example of the triangulation principle to map camera image pixels into spatial coordinates, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments. The corresponding “m, n” pixel coordinates from the left-side camera 710 and “u, v” pixel coordinates from the paired right-side camera 720 have a common point “x, y, z” in the real space. The common point is defined by the function T, which depends on the pixel coordinates according to (x,y,z) ≈ T(m,n,u,v). For example, an image 711 captured by the camera 710 includes pixels 711a, 711b and 711c which correspond to points 730a, 730b and 730c, respectively, of the eye 730. Similarly, an image 721 captured by the camera 720 includes pixels 721a, 721b and 721c which correspond to the points 730a, 730b and 730c, respectively, of the eye 730.
[00116] Prior to the 3D reconstruction, we determined the triangulation function by a third-order regression model during a calibration process. Finally, we used the detected pixels and the calculated spatial coordinates to perform a projective transformation of the cross-sectional images into geometrical coordinates. Both images were merged and stacked into a 3D tomographic dataset.
[00117] FIG. 8 depicts an example calibration technique to determine a stereo triangulation function for 3D imaging, consistent with the system of FIG. 5A and 5B, in accordance with various embodiments. As mentioned in connection with the example of FIG. 2, which had four cameras instead of two cameras, in computer stereo vision, triangulation is the process where parallax differences between two or more images are used to infer the position of an object in 3D space.
[00118] Using the projected light sheet 831 as a reference, the x, y, z coordinates of the dot grid target 830 are associated with their corresponding pixels u, v of the left and right cameras 810 and 820, respectively.
[00119] In particular, while the mathematical equation (1), discussed above, holds exactly only for ideal stereo systems, it cannot easily accommodate the geometrical distortions introduced by imperfect optics. Therefore, we use a regression model to approximate the triangulation function numerically using a calibration procedure. FIG. 8 illustrates the experimental setup, where a dot grid distortion target (0.5mm/dot spacing, Edmund Optics Inc., USA) was placed perpendicular to the illumination axis (z-axis) 815. The illuminator projected a light sheet 831 on one dot column 831a of the target 830 as a reference for finding pixel correspondence during the image processing. We shifted the grid distortion target 6mm along the z-axis in steps of 0.25mm. For each position, the cameras recorded one frame. For example, CMOS1 810 recorded a left-side frame or image 811, where a column 811a of pixels corresponds to the dot column 831a, and (u1,v1) is the coordinate of an example pixel. Similarly, CMOS2 820 records a right-side frame or image 821, where a column 821a of pixels corresponds to the dot column 831a, and (u2,v2) is the coordinate of an example pixel.
[00120] Additionally, rays 812a and 812b depict correspondences between the dot 832 and pixels 832a and 832b, respectively, and rays 813a and 813b depict correspondences between the dot 833 and pixels 833a and 833b, respectively.
[00121] Because each slit illumination (i.e., light sheet) defines a planar geometry, we performed a projective transformation from the camera image plane to the illumination plane using the homography matrix (H). This linear transformation is described in homogeneous coordinates as follows:
(2) [y, z, 1]^T = H · [u_n, v_n, 1]^T.
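For illustration only, applying equation (2) to an array of detected pixel coordinates can be sketched in Python as follows; the 3x3 matrix H is assumed to have been estimated during calibration (for example with cv2.findHomography on matched calibration points), and pixels_to_plane is a hypothetical helper name, not part of the original implementation.

import numpy as np

def pixels_to_plane(H, uv):
    # Apply [y, z, 1]^T = H . [u, v, 1]^T to an (N, 2) array of pixel coordinates.
    uv1 = np.column_stack([uv, np.ones(len(uv))])   # homogeneous pixel coordinates
    yz1 = (H @ uv1.T).T                             # projective transformation
    return yz1[:, :2] / yz1[:, 2:3]                 # divide by the homogeneous term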
[00122] The triangulation relationship between the stereo images and spatial coordinates is crucial for accurate 3D reconstruction of the eye. For precise reconstruction, standard local features of the cornea, lens, iris, and sclera must be correctly detected for each pair of stereo images. To achieve this purpose, we used feature-matching algorithms. Specifically, we used an Oriented FAST and Rotated BRIEF (ORB) descriptor and a Fast Library for Approximate Nearest Neighbors (FLANN) matcher to find common characteristics in a loop of 2500 iterations. Suitable matches were filtered using a ratio test, assessing the symmetry of the features by cross-check matching and verifying the distance calculation of the matcher by RANSAC.
[00123] The Oriented FAST and Rotated BRIEF (ORB) descriptor is a fast and efficient feature descriptor in computer vision that combines the strengths of FAST key point detection and the BRIEF descriptor. ORB uses FAST (Features from Accelerated Segment Test) to identify potential key points in an image. FAST is a fast corner detector that identifies points where the intensity changes significantly around a pixel. Once FAST identifies potential key points, ORB calculates the orientation of each key point by analyzing the intensity gradient around the key point and determining the dominant direction of change. ORB then uses a variant of BRIEF (Binary Robust Independent Elementary Features) called Rotated BRIEF to generate a binary descriptor for each key point. Rotated BRIEF is based on comparing the intensity of pairs of pixels in a rotated patch around the key point. These comparisons result in a 256-bit binary vector for each key point.
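A hypothetical OpenCV sketch of this matching step is shown below; it is not the authors' actual pipeline, the cross-check step described above is omitted for brevity, and the numeric parameters (keypoint budget, LSH settings, ratio threshold, RANSAC reprojection threshold) are illustrative assumptions. The inputs img_left and img_right are the paired stereo camera frames.

import cv2
import numpy as np

def match_stereo_features(img_left, img_right, ratio=0.75):
    orb = cv2.ORB_create(nfeatures=2500)              # illustrative keypoint budget
    kp1, des1 = orb.detectAndCompute(img_left, None)
    kp2, des2 = orb.detectAndCompute(img_right, None)
    # FLANN matcher with LSH parameters, as appropriate for binary ORB descriptors
    flann = cv2.FlannBasedMatcher(
        dict(algorithm=6, table_number=6, key_size=12, multi_probe_level=1),
        dict(checks=50))
    pairs = flann.knnMatch(des1, des2, k=2)
    # Lowe ratio test keeps only distinctive matches
    good = [m for p in pairs if len(p) == 2
            for m, n in [p] if m.distance < ratio * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects geometrically inconsistent matches
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return src, dst, H, inlier_mask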
[00124] We calculated a predefined homography matrix for each camera in case of an insufficient number of detected features, either due to a low signal-to-noise ratio (SNR) or a lack of suitable structures in the captured images. Each element (H_ij) of the predefined matrices was estimated by a third-order polynomial fitting, such that:
(3) H_ij = a_ij · k^3 + b_ij · k^2 + c_ij · k + d_ij, where a_ij, b_ij, c_ij, and d_ij are the polynomial coefficients for row i and column j, and k is the camera frame index. Finally, each stereo pair of images projected into real-space coordinates was merged and stacked into a volumetric data set to complete the tomographic reconstruction.
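For example, the polynomial fitting of equation (3) can be sketched as follows (a hypothetical illustration, not the actual implementation); H_stack is assumed to be a K x 3 x 3 array of homographies estimated on frames with enough detected features, and k_values the corresponding frame indices.

import numpy as np

def fit_homography_polynomials(k_values, H_stack):
    # Fit H_ij(k) = a_ij*k^3 + b_ij*k^2 + c_ij*k + d_ij for every matrix element.
    return np.array([[np.polyfit(k_values, H_stack[:, i, j], 3)
                      for j in range(3)] for i in range(3)])

def predefined_homography(coeffs, k):
    # Evaluate the predefined homography for camera frame k.
    return np.array([[np.polyval(coeffs[i, j], k)
                      for j in range(3)] for i in range(3)])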
[00125] Verification of topographic measurements using standard reference spheres
[00126] The calibration accuracy was verified by measuring three reference spheres (CoorsTek Inc., USA) of radii R1 = 9.65 mm, R2 = 8.00 mm, and R3 = 6.15 mm. We performed a total of fifteen measurements for each sphere at different locations. Maintaining a central static light sheet during the preview mode, we placed the sphere apex in the focal plane of the illuminator. We selected two more axial locations by moving the instrument 1 mm closer to and 1 mm farther from the best focal plane. Finally, for each z-axis position, we shifted the spheres transversally along both directions of the x-axis in steps of 1 mm to obtain five measurements.
[00127] For each acquired data set, we segmented the surfaces of all the cross-sectional images using a binary thresholding algorithm. We found the best-fitting sphere over a 5 mm region in order to calculate the radii of curvature and display the elevation maps. The elevation map is defined as the difference between the measured surface location and the theoretical location of the sphere surface. Finally, we analyzed the accuracy of the keratometry readings for each sphere using the mean and standard deviation of all the measurements.
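As an illustrative sketch (not the actual processing code), the best-fit sphere and the elevation map can be computed from an (N, 3) array of segmented surface points pts as follows; the linearized sphere fit below is a standard least-squares formulation assumed here for clarity.

import numpy as np

def fit_sphere(pts):
    # Linearize |p - c|^2 = R^2 as 2*p.c + (R^2 - |c|^2) = |p|^2 and solve by least squares.
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + (center ** 2).sum())
    return center, radius

def elevation_map(pts, center, radius):
    # Signed radial deviation of each measured point from the best-fit sphere.
    return np.linalg.norm(pts - center, axis=1) - radius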
[00128] Pachymetric mapping
[00129] Pachymetric mapping is feasible for curvature measurement along the slow scan axis, which can be susceptible to motion error in topography mapping. Pachymetric mapping allows for precise registration to compensate for eye motion.
[00130] In-vivo imaging protocol and comparison with Pentacam®HR measurements
[00131] We studied the repeatability of the stereo slit-scanning tomographer and its agreement with a commercial Scheimpflug corneal tomographer in a pilot clinical study.
[00132] The protocol included only healthy eyes with no known ocular diseases. The study subjects had spherical refractive errors ranging from -5.00 D to +5.00 D.
[00133] The right eye of each volunteer was scanned twice using our prototype instrument and once using the Pentacam®HR device. An experienced operator performed the data acquisition using the stereo slit-scanning tomographer. The subject placed his/her head on the chinrest and stared at the fixation target. The operator precisely focused on the anterior surface of the cornea. All measurements were taken in a dimmed room to avoid stray light reflections from the environment.
[00134] We calculated corneal curvature and ocular biometry, and we selected the mid-sagittal cross-sectional image using the pupil center as the visual axis reference. After segmenting the ocular surfaces (anterior and posterior cornea, iris, and anterior surface of the crystalline lens), we utilized ray tracing to correct for refractive distortion. The chief ray is defined as the line between the center of the camera entrance pupil and each pixel corresponding to the posterior corneal surface. The refraction angle is calculated based on Snell’s law, using the 3D-reconstructed local curvature of the anterior corneal surface and the following refractive indices for the ocular components: cornea - 1.3771, aqueous - 1.33741. We computationally refracted the chief ray at the cornea-air interface and repositioned the posterior corneal surface to its corrected position. Similarly, we refracted the ray at the cornea-aqueous interface. Because we used the mid-sagittal cross-sectional image that intersected the corneal apex, the refraction correction is almost symmetric for the two cameras. We measured the central corneal thickness (CCT) and the anterior chamber depth (ACD) based on the segmented surfaces. The radius of curvature (Rc) of the cornea was calculated by fitting a circular model to the 5 mm central area of the vertical meridian of the eye.
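A minimal vector-form Snell's law sketch of this refraction step is given below; it is an assumed illustration rather than the exact implementation. Here d is the unit chief-ray direction, nrm is the unit surface normal pointing toward the incident medium, and the refractive indices are those quoted above.

import numpy as np

def refract(d, nrm, n1, n2):
    # Vector form of Snell's law: t = eta*d + (eta*cos_i - cos_t)*n.
    d = d / np.linalg.norm(d)
    nrm = nrm / np.linalg.norm(nrm)
    cos_i = -np.dot(nrm, d)
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0.0:
        return None                              # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * nrm

# Example: refract a chief ray from air (n = 1.0) into the cornea (n = 1.3771).
t_cornea = refract(np.array([0.0, 0.1, 1.0]), np.array([0.0, 0.0, -1.0]), 1.0, 1.3771)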
[00135] The repeatability of our designed prototype was evaluated by comparing the internal consistency of the two measurements for the described parameters. Similarly, we compared the mean values of CCT, ACD, and Rc with the pupil center pachymetry, anterior chamber depth, and the mean radius of curvature of the topometric maps from the Pentacam®HR, respectively. In both cases, we used a Bland-Altman analysis. Statistical significance was assessed by a paired t-test at a level α = 0.05. We confirmed the normality of the sample distribution using the Shapiro-Wilk test.
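The statistical analysis described above can be sketched as follows (an assumed illustration using SciPy, not the authors' code); scan1 and scan2 are arrays holding the first and second measurements of a parameter (for example CCT) for each eye.

import numpy as np
from scipy import stats

def bland_altman(scan1, scan2):
    # Mean bias and 95% limits of agreement between repeated measurements.
    diff = np.asarray(scan1) - np.asarray(scan2)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def repeatability_tests(scan1, scan2, alpha=0.05):
    diff = np.asarray(scan1) - np.asarray(scan2)
    _, p_normal = stats.shapiro(diff)            # Shapiro-Wilk normality of the differences
    _, p_paired = stats.ttest_rel(scan1, scan2)  # paired t-test for systematic bias
    return {"normality_ok": p_normal > alpha, "bias_significant": p_paired < alpha}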
[00136] Results
[00137] Topographic Validation Using Reference Spheres
[00138] FIGs. 9A1, 9B1 and 9C1 depict example elevation maps of reference spheres with radii of 9.65 mm, 8.00 mm and 6.15 mm, respectively, consistent with the system of FIGs. 5A and 5B, in accordance with various embodiments. These figures show the elevation maps of the reference spheres. The key 930 shows a correspondence between the shade and the displacement in mm.
[00139] The figures provide an assessment of the keratometric reading from the stereo slit-scanning tomographer. FIGs. 9A1, 9B1 and 9C1 provide elevation maps of each reference sphere for a circular diameter of 5 mm at the initial axial position (Δz = 0 mm). A depression is observed around the periphery when the surface becomes more curved.
[00140] FIGs. 9A2, 9B2 and 9C2 depict example plots of radius of curvature vs. horizontal displacement, consistent with the reference spheres of FIGs. 9A1, 9B1 and 9C1, respectively, for different axial and horizontal positions, in accordance with various embodiments. The squares, circles and triangles depict Δz = -1, 0 and +1 mm, respectively. FIGs. 9A2, 9B2 and 9C2 provide the radii of curvature of the three reference spheres for different axial and horizontal positions. The horizontal continuous line represents the values given by the manufacturer.
[00141] The areas 901, 912 and 922 correspond to the best fit between the calibrated samples and the estimated spheres. The areas 900, 911 and 921 represent a displacement greater than 100 µm but less than 200 µm, while the areas 910 and 920 show a displacement greater than 200 µm. The center region (~2.5 mm) shows good agreement between the measurements and the ground truth. However, as the curvature of the spheres increased, the peripheral region of the sample became depressed below the best-fit sphere. This deviation was mainly caused by defocused slit illumination as the surface moved further away from the focal plane.
[00142] However, this systematic irregularity at the periphery did not strongly affect the calculations of the radii of curvature. Vertical shifts along the y-axis have negligible impact because they only introduce an "in-plane" translation within the captured camera frames. The limited focal depth of the illuminating slit and cameras contributed to slight variations in the measurements due to horizontal and axial displacements, as depicted in FIGs. 9A2, 9B2 and 9C2.
[00143] Nevertheless, the high consistency suggests that the instrument can perform reliably for clinical applications. Table 1 summarizes the mean and standard deviation for the experimental results of the three reference spheres as well as their refractive power values.
Table 1 - Accuracy and Repeatability of the Keratometric Measurements of the Stereo Slit-Scanning Prototype. Refractive Power Values were Calculated using a Refractive Index of n = 1.3375.

                 Reference Values                       Experimental Values
  Radius (mm)    Refractive Power (D)     Radius (mm)       Refractive Power (D)
  9.65           35.0                     9.63 ± 0.06       35.05 ± 0.22
  8.00           42.2                     8.02 ± 0.05       42.08 ± 0.26
  6.15           54.9                     6.13 ± 0.04       55.06 ± 0.36
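As a quick sanity check of the refractive power column (an illustrative calculation, not part of the original analysis), the standard keratometric formula P = (n - 1) / R with n = 1.3375 and R in meters reproduces the reference values:

for r_mm in (9.65, 8.00, 6.15):
    power = (1.3375 - 1.0) / (r_mm * 1e-3)    # keratometric power in diopters
    print(f"R = {r_mm} mm -> {power:.1f} D")  # 35.0, 42.2 and 54.9 D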
[00144] The experimental measurements of the radii of curvature were consistent with the values given by the manufacturer. However, we noticed an increase in the keratometric error when the spheres became more curved, which was associated with the observed peripheral irregularities of the elevation maps.
[00145] Structural Imaging of the Anterior Eye Segment and Pachymetry
[00146] One application of our stereo slit-scanning prototype is the ability to provide a complete 3D visualization of the anterior segment.
[00147] FIGs. 10A and 10B show a 3D rendering of the anterior segment of a myopic human eye. In particular, FIG. 10A depicts an exemplary large-scale volumetric reconstruction of the anterior segment of the in-vivo human eye, showing anterior eye structures including the cornea, iris, and lens capsule, in accordance with various embodiments, and FIG. 10B depicts an example 3D rendering of the human eye of FIG. 10A in the sagittal plane, in accordance with various embodiments. Images are projected into geometrical coordinates (with an origin 0, and x, y and z dimensions) for a voxel dataset of 400 × 400 × 500 samples, and the size of the imaged volume is 15 × 15 × 6 mm.
[00148] FIGs. 11A-11C present examples of anatomical structures of the front section of the eye, examined using a numerical aperture NA = 0.125 to improve the lateral resolution and the SNR. The presented features were confirmed by an ophthalmologist with slit-lamp examination. These figures depict anatomical features of the eye using cross-sectional images from the stereo slit-scanning prototype.
[00149] FIG. 11A depicts an example cross-sectional image of a fluid-filled sac or iris cyst located on the iris surface, in accordance with various embodiments.
[00150] FIG. 11B depicts an example cross-sectional image of main layers of the corneal structure, in accordance with various embodiments. EP denotes the epithelium, S denotes the stroma, and EN denotes the endothelium. The rectangular region 1150 shows a corneal punctate pattern on the tear film.
[00151] FIG. 11C depicts a close-up view of the apex region 1150 of the cornea of FIG. 11B, in accordance with various embodiments. The white arrow indicates the cellular debris found within the tear film.
[00152] The sagittal tomography images illustrate specific structures, such as a congenital iris cyst (FIG. 11A, pointed to by the white arrow) and the corneal layers (FIG. 11B) from two right eyes of 36- and 38-year-old subjects, respectively. An interesting clinical observation is the evaluation of the tear film quality. FIGs. 11B and 11C show some cellular debris or mucus within the tear film after a few seconds without blinking. This characteristic is clinically relevant in the diagnosis of dry eye and contact lens fitting.
[00153] In-vivo Performance of the Stereo Slit-Scanning Tomography-Based Biometry
[00154] Ten eyes from 10 subjects (5 males and 5 females) were enrolled in this pilot study. The mean age (± standard deviation, SD) was 35.11 ± 6.94 years old. The mean ± SD spherical refractive error was -2.55 ± 1.72 D.
[00155] After the stereo reconstruction and segmenting the corneal and lens surfaces, we compared the corneal thickness and the anterior chamber depth with the Pentacam®HR. Table 2 shows the results.
[00156] Table 2 - Mean and standard deviation of the central corneal thickness (CCT) and anterior chamber depth (ACD) using two optical tomographers.

               Stereo Scanning-Slit Tomographer     Pentacam®HR
  CCT (µm)     545.45 ± 21.20                       574.00 ± 11.52
  ACD (mm)     3.12 ± 0.31                          3.70 ± 0.1
[00157] FIGs. 12A1, 12B1 and 12C1 depict Bland-Altman plots for central corneal thickness (CCT), anterior chamber depth (ACD), and radius of curvature (Rc), respectively, showing repeatability of the stereo slit-scanning device using two consecutive measurements, in accordance with various embodiments. The standard deviation (SD) and mean are depicted. The Bland-Altman analysis showed that the maximum absolute error for repeated CCT, ACD, and Rc measurements was 25.30 µm, 0.45 mm, and 0.27 mm, respectively. The mean absolute error was 12.58 µm for the CCT, 81.46 µm for the ACD, and 84.91 µm for the Rc.
[00158] Additionally, we did not find any systematic bias with statistical significance between the first and second scans (p-value CCT = 0.668, p-value ACD = 0.701, p-value Rc = 0.868). The precision of the instrument was high, with standard deviations of 26.29 µm, 0.25 mm, and 0.31 mm for the CCT, ACD, and Rc, respectively.
[00159] Similarly, Bland-Altman plots were used to study the differences in CCT, ACD, and Rc measurements performed by our optical setup and the Pentacam®HR. See FIGs. 12A2, 12B2 and 12C2. FIGs. 12A2, 12B2 and 12C2 depict Bland-Altman plots for corneal thickness, anterior chamber depth, and radius of curvature, respectively, showing an agreement analysis between the Pentacam®HR and the proof-of-concept prototype, in accordance with various embodiments.
[00160] Discussion
[00161] An optical imaging platform has been discussed for visualizing and characterizing anterior eye structures, including the cornea, iris, lens, sclera, and eyelids. Our approach uniquely combines the well-established slit-scanning modality with stereoscopic photography for obtaining 2D cross-sectional and 3D volumetric images.
[00162] Our prototype instrument represents a significant advancement over existing devices due to its ability to rapidly capture a 3D volume of the entire anterior eye without implicitly relying on assumptions of axial symmetry. For example, Scheimpflug tomographers (e.g., Pentacam®HR or Galilei G6) use radial scanning that relies on precise alignment of the scanning pivot with the corneal apex. However, the visual axis of most human eyes does not correspond with the corneal apex. This angular deviation, referred to as the kappa angle, can complicate elevation map computations. For instance, larger kappa angles generate more elevated topographic maps, requiring the use of best-fit toric ellipse algorithms instead of spherical ones. Additionally, sampling density and imaging performance on peripheral regions (e.g., the limbus) and the iris are poor, often requiring interpolation (e.g., linear geometrical assumptions), which can result in missed pathological features from these structures. For example, Scheimpflug tomographers cannot accurately measure the iridocorneal angle, which is crucial for evaluating the risk of angle-closure glaucoma. The stereo slit-scanning tomography device can provide depth-resolved images of anterior ocular structures. This ability could facilitate the diagnosis of ocular disorders at the iris (e.g., iridoschisis and focal iris atrophy) and the anterior chamber (e.g., Fuchs heterochromic iridocyclitis). The use of a telecentric raster scanning configuration enables better coverage of the corneal surface. Because the light sheet scans parallel to the visual axis of the eye, the field of view can be maximized up to the corneoscleral junction.
[00163] The use of two cameras to perform stereo vision gives an extra advantage. As in dual Scheimpflug tomography, the two cameras capture optical sectioning images from opposite sides of the cornea. This ability improves the detection of the posterior corneal surface and can compensate for unintentional misalignments or cyclotorsion movements. Another significant advantage compared to slit-scanning tomographers is spatial resolution. The mechanical displacement (either horizontal or rotational) of the illuminator constrains the scanning density in commercial tomographers. For example, the Pentacam®HR acquires 50 frames in 2 seconds, and the Galilei G6 requires 1 second to acquire 60 images per camera. Currently, our prototype captures images at 50 frames per second. Our scanning density is primarily limited by the low power of the low-cost LED light source, which leads to longer exposure times. Upgrading to a 5-10x brighter light source can easily lead to a frame rate of 200-300 Hz while still meeting the light safety limit. Higher speeds can mitigate eye movements, allowing more consistent measurements. Faster speeds can also enable high sampling density, which is essential in cases of focal pathological changes.
[00164] The topographic performance of the stereo slit-scanning system was verified using three calibrated reference spheres. The radii of the spheres were selected to approximate the expected values of the cornea. The measured surface profiles and curvatures were in agreement with the values provided by the manufacturer. Furthermore, the prototype showed high repeatability of the keratometric readings. Nonetheless, as the curvature of the sphere increased, we observed that the elevation maps showed a regular peripheral depression with respect to their best-fitting spheres. We hypothesized that the leading cause of this error was defocus artifacts arising when the measured surface lies farther from the illumination and camera focal planes. The depth of field of our system is 2.25 mm, which is shorter than the average total depth of the anterior segment (3.64 mm).
[00165] Different alternatives could mitigate the impact of this aberration. One such solution is the use of multiple CMOS arrays focused on different planes. This solution has recently been implemented in odontology to improve the performance of dental implants. The use of three cameras instead of two enabled the estimation of the spatial coordinates even when one camera was blocked entirely. Another alternative is the use of electrically tunable lenses (ETL) to adjust the illuminator focus dynamically during the scan. This active optical element has been used in ophthalmic imaging, especially for OCT, where control of the optical beam enhances images and extends the depth of field. Similarly, by adjusting the focal plane position of the cameras during the acquisition process, we could extend the effective depth of field of the cameras.
[00166] The in-vivo measurements performed in this pilot study demonstrated good repeatability of the stereo slit-scanning device for all the parameters. Corneal thickness, anterior chamber depth, and radius of curvature showed agreement with the Pentacam®HR. However, we noticed a systematic difference of 0.26 mm for the anterior chamber depth. A possible explanation for the shorter measurements obtained could be the greater depth of field provided by Scheimpflug cameras, which allows the Pentacam®HR to perform more precise measurements of the anterior chamber. Additional studies with a larger sample size and diverse spherical refractive errors could confirm this finding and establish it as a correction factor.
[00167] Finally, one limitation of the presented methodology is its vulnerability to eye movements. As mentioned above, imaging speed is currently limited by the long camera exposure time required due to the low illumination power. Using a more powerful light source and optimizing the optical design may enable faster imaging and thus mitigate motion artifacts. Additionally, computer registration algorithms can be developed to register each frame to the correct spatial location.
[00168] Conclusion
[00169] Stereo slit-scanning tomography is a low-cost alternative to AS-OCT and overcomes limitations of current corneal topography and tomography instruments. It merges optical sectioning and stereoscopic photography to perform 2D cross-sectional imaging and 3D reconstruction of the anterior segment. The proposed instrument has the potential to obtain tomographic images comparable with slit-lamp photography, with higher sampling density and better corneal coverage. This advantage enables visualization of particular anatomical structures, such as the iridocorneal angle and the eyelid. Corneal topography can also be obtained. Potential widespread clinical applications include teleophthalmology for rural areas and emergency departments where ophthalmology availability is highly limited.
[00170] Although the disclosed invention has been described according to the exemplary embodiments, those skilled in the art should understand that a significant number of modifications and variations might be developed in this invention. The present embodiments are to be considered illustrative and not restrictive.

Claims

CLAIMS
What is claimed is:
1. An ophthalmic optical imaging apparatus for acquiring cross-sectional and volumetric images of an entire anterior segment of an eye, the apparatus comprising: an illumination subsystem configured to project light patterns telecentrically with respect to a visual axis of the eye; and a multi-camera imaging subsystem synchronized with the illumination subsystem and comprising: a frontal view camera along an illumination axis; one or more stereo cameras including a first stereo camera comprising first and second cameras having optical axes at a first acute angle with respect to the illumination axis and placed on opposing sides of the illumination axis; and a fixation target subsystem to control gaze during imaging, wherein the fixation target subsystem has an adjustable focus to compensate for refractive error.
2. The imaging apparatus of claim 1, wherein the illumination subsystem is configured to control an effective exposure time by altering a duration of the light patterns projected onto the eye.
3. The imaging apparatus of claim 2, wherein the illumination subsystem is configured to project multiple light patterns simultaneously, each with a distinctive exposure time.
4. The imaging apparatus of claim 2, wherein the illumination subsystem is configured to adjust the effective exposure time of the projected light patterns to increase a dynamic range and accommodate poorly reflective or strongly reflective structures and features.
5. The imaging apparatus of claim 4, further comprising a computer configured to take and merge multiple images of a same projected light pattern at different levels of effective exposure time and/or camera gain, leading to high dynamic range (HDR) imaging.
6. The imaging apparatus of claim 2, wherein the illumination subsystem is configured to project the light patterns simultaneously or sequentially at different wavelengths.
7. The imaging apparatus of claim 6, wherein the imaging subsystem is configured to acquire multiple images of a same projected light pattern at the different wavelengths.
8. The imaging apparatus of claim 7, further comprising acquiring multiple wavelength-dependent datasets to enable hyperspectral imaging and spectroscopy analysis of imaged features.
9. The imaging apparatus of claim 1, wherein the illumination subsystem is configured to project the light patterns as slit beams using at least one of a planoconvex cylindrical lens or anamorphic prisms.
10. The imaging apparatus of claim 9, wherein the projected slit beams map ocular surfaces using scanning mirrors, and the scanning mirrors comprise at least one of a galvanometer-resonant scanner or polygonal mirrors.
11. A method for examining an entire anterior segment and ocular adnexa of an eye, comprising: projecting patterned light comprising a slit beam or another light pattern to the eye to enable optical sectioning and generating of cross-sectional images of a cornea, iris, sclera, and crystalline lens by reflected and scattered light from ocular tissues; and using a computer and cameras, acquiring and synchronizing images of the eye based on the projected light pattern, organizing and displaying cross-sectional images of the eye, and storing a captured image dataset for at least one of manual, semi-automated or automated image analysis.
12. The method of claim 11, further comprising displacing the patterned light to sufficiently sample the entire anterior segment.
13. The method of claim 12, further comprising examining acquired cross-sectional views to identify structural abnormalities and changes in optical properties, including intensity changes.
14. The method of claim 13, further comprising re-organizing a captured image stack according to the corresponding spatial locations of the images to provide a re-organized sequence of images from each camera.
15. The method of claim 14, further comprising playing a movie comprising the re-organized sequence of images to allow for qualitative assessment of the anterior segment, analogous to slit lamp examination.
16. The method of claim 11, further comprising merging and stacking tomographic images to achieve a 3D volumetric reconstruction of the entire anterior segment.
17. The method of claim 16, wherein the 3D volumetric reconstruction is calibrated based on stereo vision and triangulation, so that the volumetric reconstruction of the anterior segment is geometrically accurate.
18. The method of claim 11, further comprising visualizing and locating ocular abnormalities of the anterior segment and eyelids including, but not limited to, pterygium, chalazion, or blepharitis.
19. The method of claim 11, further comprising visualizing and locating corneal layers including epithelium, stroma, and endothelium, as well as diseases and/or injuries affecting the cornea including, but not limited to, scarring, lacerations or perforations.
20. The method of claim 11, further comprising visualizing and locating cellular suspension in a tear film and anterior chamber.
21. The method of claim 11, further comprising visualizing and locating lens opacity including cataract, and assessing its severity.
22. The method of claim 11, further comprising performing non-contact ocular biometry of the anterior segment, including cornea and lens keratometry, corneal thickness, anterior chamber depth and volume, and anterior chamber angle.
23. The method of claim 11, further comprising visualizing and tracking dynamics of the eye including pupillary response to light and lens accommodation.
24. The method of claim 11, further comprising assessing a tear film stability by visualizing and quantifying a tear film break-up time without requirement of any dye including fluorescein.
PCT/US2025/028233 2024-05-08 2025-05-07 Stereo tomography of the anterior eye Pending WO2025235677A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463644464P 2024-05-08 2024-05-08
US63/644,464 2024-05-08

Publications (1)

Publication Number Publication Date
WO2025235677A1 true WO2025235677A1 (en) 2025-11-13

Family

ID=97675603

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/028233 Pending WO2025235677A1 (en) 2024-05-08 2025-05-07 Stereo tomography of the anterior eye

Country Status (1)

Country Link
WO (1) WO2025235677A1 (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5013146A (en) * 1988-09-14 1991-05-07 Kowa Company Limited Opthalmological measurement apparatus
US20020154269A1 (en) * 2001-04-16 2002-10-24 David Liu Stereoscopic measurement of cornea and illumination patterns
US20160363435A1 (en) * 2006-06-20 2016-12-15 Carl Zeiss Meditec, Inc. Spectral domain optical coherence tomography system
US20160242734A1 (en) * 2012-02-02 2016-08-25 Wei Su Eye imaging apparatus and systems
US20160206199A1 (en) * 2013-03-15 2016-07-21 Neurovision Imaging Llc System and method for rejecting afocal light collected from an in vivo human retina
US20150131055A1 (en) * 2013-11-08 2015-05-14 Precision Ocular Metrology, L.L.C. Systems and methods for mapping the ocular surface
US20150272436A1 (en) * 2014-03-25 2015-10-01 Kabushiki Kaisha Topcon Ophthalmologic apparatus
US20230389789A1 (en) * 2017-08-29 2023-12-07 Verily Life Sciences Llc Focus stacking for retinal imaging
US20210068655A1 (en) * 2019-09-08 2021-03-11 Aizhong Zhang Multispectral and hyperspectral ocular surface evaluator
US20220273170A1 (en) * 2021-03-01 2022-09-01 University Of Tsukuba Method of processing ophthalmic data, ophthalmic data processing apparatus, and ophthalmic examination apparatus
