
WO2025158396A1 - Systems and methods for generating spectral and spatial gradient images of biological tissue - Google Patents

Systems and methods for generating spectral and spatial gradient images of biological tissue

Info

Publication number
WO2025158396A1
Authority
WO
WIPO (PCT)
Prior art keywords
spectral
gradient image
images
cube
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/IB2025/050840
Other languages
English (en)
Inventor
Antoine DUROCHER-JEAN
Claudia Chevrefils
Jean-Philippe Sylvestre
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Optina Diagnostics Inc
Original Assignee
Optina Diagnostics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Optina Diagnostics Inc filed Critical Optina Diagnostics Inc
Publication of WO2025158396A1

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00 Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12 Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0075 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by spectroscopy, i.e. measuring spectra, e.g. Raman spectroscopy, infrared absorption spectroscopy
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
    • A61B5/14546 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue for measuring analytes not otherwise provided for, e.g. ions, cytochromes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/145 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue
    • A61B5/1455 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters
    • A61B5/14551 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases
    • A61B5/14555 Measuring characteristics of blood in vivo, e.g. gas concentration or pH-value; Measuring characteristics of body fluids or tissues, e.g. interstitial fluid or cerebral tissue using optical sensors, e.g. spectral photometrical oximeters for measuring blood gases specially adapted for the eye fundus
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/68 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
    • A61B5/6801 Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
    • A61B5/6813 Specially adapted to be attached to a specific body part
    • A61B5/6814 Head
    • A61B5/6821 Eye
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/02 Details
    • G01J3/027 Control of working procedures of a spectrometer; Failure detection; Bandwidth calculation
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/10 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to drugs or medications, e.g. for ensuring correct administration to patients
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/40 ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/63 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H40/00 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
    • G16H40/60 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
    • G16H40/67 ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/70 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B2562/00 Details of sensors; Constructional details of sensor housings or probes; Accessories for sensors
    • A61B2562/02 Details of sensors specially adapted for in-vivo measurements
    • A61B2562/0233 Special features of optical sensors or probes classified in A61B5/00
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0033 Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
    • A61B5/0036 Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room including treatment, e.g., using an implantable medical device, ablating, ventilating
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0071 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence by measuring fluorescence emission
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J3/28 Investigating the spectrum
    • G01J3/2823 Imaging spectrometer
    • G01J2003/2826 Multispectral imaging, e.g. filter imaging
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30041 Eye; Retina; Ophthalmic
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing
    • G06T2207/30096 Tumor; Lesion

Definitions

  • Multispectral and hyperspectral imaging techniques have increasingly been used for diagnostic and other purposes. These techniques involve capturing images of biological tissue at different wavelengths of light, where the different wavelengths provide different spectral responses based on the features of the blood vessels and other structures in the biological tissue. These wavelength-specific images allow for more detailed analyses of the biological tissue.
  • Certain characteristics of the biological tissue can hide features of the tissue. For example, in the eye, evaluating the foveal area can pose challenges due to the luteal pigment's ability to block blue light. Because of these limitations, indicators of a disease or indicators of the progression of a disease might not be visible. It is an object of the present technology to ameliorate at least some of the limitations present in the prior art.
  • a method for identifying one or more features in biological tissue of an individual using a hyperspectral camera comprising: receiving imaging data comprising a plurality of images of the biological tissue, wherein the images of the biological tissue are captured using the hyperspectral camera configured to capture the plurality of images corresponding to different spectral bands; generating, based on the imaging data, a hyperspectral cube comprising a plurality of frames, each frame of the plurality of frames corresponding to a different spectral band; generating at least one spectral gradient image from the hyperspectral cube, wherein each pixel of the at least one spectral gradient image indicates a rate of change of an intensity of pixels at a same location in each frame of the plurality of frames; and outputting for display the at least one spectral gradient image.
  • the method further comprises: identifying, using the at least one spectral gradient image, the one or more features in the biological tissue of the individual.
  • the imaging data comprises a plurality of images of a fundus of the individual’s eye.
  • the imaging data comprises a plurality of images of the individual’s skin.
  • the one or more features are areas of atrophy.
  • the one or more features are areas of geographic atrophy.
  • the one or more features are areas of macular pigment.
  • the one or more features comprise an indication of disruption of integrity of melanin in a retinal pigment epithelium (RPE) of the individual.
  • the one or more features correspond to areas of lesion.
  • the method further comprises: generating a training data point comprising the at least one spectral gradient image and a label indicating the one or more features; and training, using the training data point, a machine learning algorithm to identify areas of atrophy in spectral gradient images.
  • the method further comprises: generating a training data point comprising the at least one spectral gradient image and a label indicating the one or more features; and training, using the training data point, a machine learning algorithm to identify areas of geographic atrophy in spectral gradient images.
  • the method further comprises: generating a training data point comprising the at least one spectral gradient image and a label indicating the one or more features; and training, using the training data point, a machine learning algorithm to identify areas of macular pigment in spectral gradient images.
  • the method further comprises: generating a training data point comprising the at least one spectral gradient image and a label indicating the one or more features; and training, using the training data point, a machine learning algorithm to identify disruption of integrity of melanin in a retinal pigment epithelium (RPE) based on spectral gradient images.
  • the method further comprises: generating a training data point comprising the at least one spectral gradient image and a label indicating the one or more features; and training, using the training data point, a machine learning algorithm to identify areas of lesion in spectral gradient images.
  • the method further comprises determining a treatment schedule for the individual based on the one or more features.
  • the method further comprises determining a dosage of a treatment for the individual based on the one or more features.
  • generating at least one spectral gradient image comprises: selecting a pixel location of the hyperspectral cube; selecting a plurality of pixels from different frames of the hyperspectral cube that correspond to the pixel location; retrieving intensity values for the plurality of pixels; determining a rate of change of the intensity values for the plurality of pixels; and determining, based on the rate of change, an intensity value for a pixel of the at least one spectral gradient image that corresponds to the pixel location.
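  • As an illustrative sketch only (not the claimed implementation), the per-pixel computation described above can be expressed with NumPy; the cube layout (spectral dimension as the first axis) and the use of a central-difference derivative are assumptions:

    import numpy as np

    def spectral_gradient_image(cube: np.ndarray) -> np.ndarray:
        """Return d(intensity)/d(band) for every pixel location.

        cube: hyperspectral cube of shape (bands, height, width), where
        frame k holds the image captured in spectral band k.
        """
        # np.gradient applies a central difference along the chosen axis
        # (one-sided differences at the first and last frames), i.e. the
        # rate of change of each pixel's intensity across spectral bands.
        return np.gradient(cube.astype(float), axis=0)

    # Placeholder data: 92 bands of a 512 x 512 image.
    cube = np.random.rand(92, 512, 512)
    gradient_cube = spectral_gradient_image(cube)
    # Any band's slice can be selected as a spectral gradient image.
    mid_band_gradient = gradient_cube[46]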
  • the method further comprises applying a denoising algorithm to the hyperspectral cube before generating the at least one spectral gradient image.
  • the method further comprises applying a pseudonormalization algorithm, thresholding algorithm, or histogram equalization algorithm to the at least one spectral gradient image.
  • the at least one spectral gradient image corresponds to a first-order derivative of the hyperspectral cube, and the method further comprises generating at least one second spectral gradient image representing a higher-order derivative of the hyperspectral cube.
  • the method further comprises generating, based on the at least one spectral gradient image, at least one spatial gradient image, wherein each pixel of the spatial gradient image indicates a rate of change of an intensity across surrounding pixels, in a horizontal or vertical direction, in the at least one spatial gradient image.
  • the method further comprises generating a relative spectral gradient image by dividing the at least one spectral gradient image by a frame of the hyperspectral cube.
  • the method further comprises displaying the at least one spectral gradient image and overlaying a grid on the at least one spectral gradient image.
  • the method further comprises: inputting the one or more features to a machine learning algorithm (MLA), wherein the MLA was trained to predict a response to a treatment; and outputting, by the MLA and based on the one or more features, a predicted response of the individual to the treatment.
  • the method further comprises: inputting the one or more features to a machine learning algorithm (MLA), wherein the MLA was trained to predict whether an adverse event will occur; and outputting, by the MLA and based on the one or more features, a predicted likelihood that an adverse event will occur for the individual.
  • the method further comprises detecting and mapping anomalies in tissue based on the one or more features.
  • the anomalies comprise a pathological lesion of the individual’s eye.
  • the method further comprises detecting and mapping physiological characteristics of the individual based on the one or more features.
  • a method for identifying one or more features in biological tissue of an individual using a hyperspectral camera comprising: receiving imaging data comprising a plurality of images of the biological tissue, wherein the images of the biological tissue are captured using the hyperspectral camera configured to capture the plurality of images corresponding to different spectral bands; generating, based on the imaging data, a hyperspectral cube comprising a plurality of frames, each frame of the plurality of frames corresponding to a different spectral band; generating at least one spatial gradient image from the hyperspectral cube, wherein each pixel of a spatial gradient image indicates a rate of change of an intensity across surrounding pixels in a horizontal or vertical direction; and outputting for display the at least one spatial gradient image.
  • the method further comprises identifying, using the at least one spatial gradient image, the one or more features in the biological tissue of the individual.
  • a method for identifying one or more features in biological tissue of an individual using a hyperspectral camera comprising: receiving imaging data comprising a plurality of images of the biological tissue, wherein the images of the biological tissue are captured using the hyperspectral camera configured to capture the plurality of images corresponding to different spectral bands; generating, based on the imaging data, a hyperspectral cube comprising a plurality of frames, each frame of the plurality of frames corresponding to a different spectral band; generating at least one spectral gradient image from the hyperspectral cube, wherein each pixel of the at least one spectral gradient image indicates a rate of change of an intensity of pixels at a same location in each frame of the plurality of frames; generating at least one spatial gradient image from the hyperspectral cube, wherein each pixel of a spatial gradient image indicates a rate of change of an intensity across surrounding pixels in a horizontal or vertical direction; and outputting for display the at least one spectral gradient image and the at least one spatial gradient image.
  • the method further comprises identifying, using the at least one spectral gradient image and the at least one spatial gradient image, the one or more features in the biological tissue of the individual.
  • a method for identifying one or more features in biological tissue of an individual using a hyperspectral camera comprising: receiving imaging data comprising a plurality of images of the biological tissue, wherein the images of the biological tissue are captured using the hyperspectral camera configured to capture the plurality of images corresponding to different spectral bands; generating, based on the imaging data, a hyperspectral cube comprising a plurality of frames, each frame of the plurality of frames corresponding to a different spectral band; generating at least one spectral gradient image from the hyperspectral cube, wherein each pixel of the at least one spectral gradient image indicates a rate of change of an intensity of pixels at a same location in each frame of the plurality of frames; generating at least one spatial gradient image from the hyperspectral cube, wherein each pixel of a spatial gradient image indicates a rate of change of an intensity across surrounding pixels in a horizontal or vertical direction; and generating a spatial-spectral gradient cube based on the at least one spectral gradient image and the at least one spatial gradient image.
  • the method further comprises identifying, using the spatial-spectral gradient cube, the one or more features in the biological tissue of the individual.
  • generating the spatial-spectral gradient cube comprises determining an angle between a pixel in the spatial gradient cube and a corresponding pixel in the spectral gradient cube.
  • a method comprising: receiving imaging data comprising a plurality of images of a retina, wherein the images of the retina are captured using the hyperspectral camera configured to capture the plurality of images corresponding to different spectral bands; generating, based on the imaging data, a hyperspectral cube comprising a plurality of frames, each frame of the plurality of frames of the hyperspectral cube corresponding to a different spectral band; generating, based on the hyperspectral cube, an absorbance cube comprising a plurality of frames, each frame of the plurality of frames of the absorbance cube corresponding to a different spectral band; and generating at least one spectral absorbance gradient image from the absorbance cube, wherein each pixel of the at least one spectral absorbance gradient image indicates a rate of change of an intensity of pixels at a same location in each frame of the plurality of frames of the absorbance cube.
  • the method further comprises determining, based on the at least one spectral absorbance gradient image, an abundance of melanin in the retina.
  • the at least one spectral absorbance gradient image comprises a 560 nm image.
  • the method further comprises: determining, based on the at least one spectral absorbance gradient image, at least one second-order spectral absorbance gradient image; and determining, based on the at least one second-order spectral absorbance gradient image, an abundance of oxyhemoglobin in the retina.
  • the at least one second-order spectral absorbance gradient image comprises a 575 nm image.
  • the method further comprises: determining, based on the at least one spectral absorbance gradient image, an abundance of melanin in the retina; determining, based on the at least one spectral absorbance gradient image, at least one second-order spectral absorbance gradient image; determining, based on the at least one second-order spectral absorbance gradient image, an abundance of oxyhemoglobin in the retina; and determining an abundance of deoxyhemoglobin in the retina based on: the at least one second-order spectral absorbance gradient image, the abundance of melanin, and the abundance of oxyhemoglobin.
  • the at least one second-order spectral absorbance gradient image comprises a 550 nm image.
  • the method further comprises determining an abundance of macular pigment in the retina based on: the at least one spectral absorbance gradient image, the abundance of melanin, the abundance of oxyhemoglobin, and the abundance of deoxyhemoglobin.
  • the at least one spectral absorbance gradient image comprises a 500 nm image.
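  • A rough sketch of the absorbance-based processing described above is given below in NumPy; the conversion of reflectance to absorbance as -log10(R), the 450-905 nm band grid, and the frame-selection helper are assumptions for illustration, not details taken from the claims, and the derived images would only serve as inputs to the abundance determinations, not as the abundances themselves:

    import numpy as np

    wavelengths = np.arange(450, 905 + 1, 5)  # 450-905 nm in 5 nm steps (92 bands)
    # Placeholder reflectance cube, kept strictly positive for the log.
    reflectance = np.random.rand(len(wavelengths), 512, 512) * 0.9 + 0.05

    # Absorbance cube: each frame is -log10 of the matching reflectance frame.
    absorbance = -np.log10(reflectance)

    # First- and second-order spectral absorbance gradients along the band axis.
    first_order = np.gradient(absorbance, wavelengths, axis=0)
    second_order = np.gradient(first_order, wavelengths, axis=0)

    def frame_at(cube, nm):
        """Hypothetical helper: pick the frame nearest a target wavelength."""
        return cube[np.argmin(np.abs(wavelengths - nm))]

    melanin_input = frame_at(first_order, 560)   # cf. the 560 nm image above
    oxy_input = frame_at(second_order, 575)      # cf. the 575 nm image
    deoxy_input = frame_at(second_order, 550)    # cf. the 550 nm image
    macular_input = frame_at(first_order, 500)   # cf. the 500 nm image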
  • Embodiments of the present technology each have at least one of the above-mentioned object and/or aspects, but do not necessarily have all of them. It should be understood that some aspects of the present technology that have resulted from attempting to attain the above-mentioned object may not satisfy this object and/or may satisfy other objects not specifically recited herein. Additional and/or alternative features, aspects and advantages of embodiments of the present technology will become apparent from the following description, the accompanying drawings and the appended claims.
  • Figure 1 is a block diagram of an example computing environment in accordance with various embodiments of the present technology
  • Figure 2 is a block diagram of a system for generating a hyperspectral cube in accordance with various embodiments of the present technology
  • Figure 3 illustrates a hyperspectral cube in accordance with various embodiments of the present technology
  • Figure 4 is a flow diagram of a method for generating spectral gradient images and spatial gradient images in accordance with various embodiments of the present technology
  • Figure 5 is a flow diagram of a method for generating a spectral-spatial gradient cube in accordance with various embodiments of the present technology
  • Figure 6 is a flow diagram of a method for determining abundances of chromophores in a retinal image in accordance with various embodiments of the present technology
  • Figure 7 is an exemplary spectral gradient image calculated at 560 nm in accordance with various embodiments of the present technology
  • Figure 8 is a second exemplary spectral gradient image calculated at 560 nm in accordance with various embodiments of the present technology
  • Figure 9 is a third exemplary spectral gradient image calculated at 630 nm in accordance with various embodiments of the present technology
  • Figure 10 is an exemplary spectral gradient image with an overlaid grid in accordance with various embodiments of the present technology
  • Figure 11 is an exemplary hyperspectral image, spectral gradient image, and spatial gradient image in accordance with various embodiments of the present technology
  • Figure 12 is an exemplary spectral absorbance gradient image at 560 nm in accordance with various embodiments of the present technology
  • Figure 13 is an exemplary second-order spectral absorbance gradient image at 575 nm in accordance with various embodiments of the present technology
  • Figure 14 is an exemplary second-order spectral absorbance gradient image at 550 nm in accordance with various embodiments of the present technology.
  • Figure 15 is an exemplary spectral absorbance gradient image at 500 nm in accordance with various embodiments of the present technology.
  • Multispectral or hyperspectral cameras can capture images of an object across a range of wavelengths by taking a series of images at different wavelengths (e.g., using bandpass filters or other techniques). These images may then be combined into a hyperspectral cube that contains both spatial and spectral information in three dimensions (two spatial and one spectral).
  • the captured reflectance spectrum is influenced by the molecular content (e.g., hemoglobin, melanin), cellular arrangement (e.g., capillaries, nerve fiber layer), and density/thickness (e.g., neurodegeneration) of the tissue at the various wavelengths of the spectrum.
  • Hyperspectral imaging is also possible using fluorescence imaging.
  • the fluorescence signal from a tissue excited at an excitation wavelength can be spectrally filtered before the detection with a sensor, or by collecting light emitted by the tissue at a specific emission band following iterative monochromatic excitation of the tissue at different wavelengths.
  • Spectral and/or spatial gradient images may be generated using a hyperspectral cube.
  • the rate of change of the intensity between adjacent pixels in any of the dimensions of the hyperspectral cube, and the direction of the rate of change, may be quantified using derivatives and/or gradients.
  • Gradient images may be generated using the rate of change for each pixel across multiple dimensions.
  • Absorbance images may be generated based on the hyperspectral images.
  • the spectral and/or spatial gradients may be generated using the hyperspectral images and/or the absorbance images.
  • the spectral and/or spatial gradient images may be used to determine the abundance of various chromophores present in the biological tissue, such as melanin, oxyhemoglobin, deoxyhemoglobin, and/or macular pigment.
  • Figure 1 illustrates a computing environment 100, which may be used to implement and/or execute any of the methods described herein.
  • the computing environment 100 may be implemented by any of a conventional personal computer, a network device, and/or an electronic device (such as, but not limited to, a mobile device, a tablet device, a server, a controller unit, a control device, etc.), and/or any combination thereof appropriate to the relevant task at hand.
  • the computing environment 100 comprises various hardware components including one or more single or multi-core processors collectively represented by processor 110, a solid-state drive 120, a random access memory 130, and an input/output interface 150.
  • the computing environment 100 may be a computer specifically designed to operate a machine learning algorithm (MLA).
  • the computing environment 100 may be a generic computer system.
  • the computing environment 100 may also be a subsystem of one of the above-listed systems.
  • the computing environment 100 may be an “off-the-shelf” generic computer system.
  • the computing environment 100 may also be distributed amongst multiple systems.
  • the computing environment 100 may also be specifically dedicated to the implementation of the present technology. As a person in the art of the present technology may appreciate, multiple variations as to how the computing environment 100 is implemented may be envisioned without departing from the scope of the present technology.
  • processor 110 is generally representative of a processing capability.
  • one or more specialized processing cores may be provided.
  • for example, Graphics Processing Units (GPUs) 111, Quantum Processing Units (QPUs), Tensor Processing Units (TPUs), and/or other accelerated processors or processing accelerators may be provided.
  • System memory will typically include random access memory 130, but is more generally intended to encompass any type of non-transitory system memory such as static random access memory (SRAM), dynamic random access memory (DRAM), synchronous DRAM (SDRAM), read-only memory (ROM), or a combination thereof.
  • Solid-state drive 120 is shown as an example of a mass storage device, but more generally such mass storage may comprise any type of non-transitory storage device configured to store data, programs, and other information, and to make the data, programs, and other information accessible via a system bus 160.
  • mass storage may comprise one or more of a solid state drive, hard disk drive, a magnetic disk drive, and/or an optical disk drive.
  • Communication between the various components of the computing environment 100 may be enabled by a system bus 160 comprising one or more internal and/or external buses (e.g., a PCI bus, universal serial bus, IEEE 1394 “Firewire” bus, SCSI bus, Serial-ATA bus, ARINC bus, etc.), to which the various hardware components are electronically coupled.
  • the input/output interface 150 may enable networking capabilities such as wired or wireless network communications.
  • the input/output interface 150 may comprise a networking interface such as, but not limited to, a network port, a network socket, a network interface controller and the like.
  • the networking interface may implement specific physical layer and data link layer standards such as Ethernet, Fibre Channel, WiFi, Token Ring or Serial communication protocols.
  • the specific physical layer and the data link layer may provide a base for a full network protocol stack, allowing communication among small groups of computers on the same local area network (LAN) and large-scale network communications through routable protocols, such as Internet Protocol (IP).
  • the input/output interface 150 may be coupled to a touchscreen 190 and/or to the one or more internal and/or external buses 160.
  • the touchscreen 190 may be part of the display. In some embodiments, the touchscreen 190 is the display.
  • the touchscreen 190 may equally be referred to as a screen 190.
  • the touchscreen 190 comprises touch hardware 194 (e.g., pressure-sensitive cells embedded in a layer of a display allowing detection of a physical interaction between a user and the display) and a touch input/output controller 192 allowing communication with the display interface 140 and/or the one or more internal and/or external buses 160.
  • the input/output interface 150 may be connected to a keyboard (not shown), a mouse (not shown) or a trackpad (not shown) allowing the user to interact with the computing environment 100 in addition to or instead of the touchscreen 190.
  • the solid-state drive 120 stores program instructions suitable for being loaded into the random access memory 130 and executed by the processor 110 for executing acts of one or more methods described herein.
  • the program instructions may be part of a library or an application.
  • the computing environment 100 may include any number of the illustrated components, which may be integrated in any number of physical devices.
  • the computing environment 100 may be implemented as a cloud environment and/or a distributed architecture.
  • the computing environment 100 may include multiple servers, which may be in different physical locations and/or on different networks.
  • the computing environment 100 may include virtualized systems.
  • the methods described herein, or any parts of the methods described herein may be executed on multiple systems as distributed applications.
  • Figure 2 is a block diagram of a system for generating a hyperspectral cube in accordance with various embodiments of the present technology.
  • a camera 250 may capture images of a biological tissue.
  • the images may contain any number of biological tissues, in any arrangement.
  • the images may be images of multiple biological tissues, a stack of biological tissue, biological tissue having multiple layers, and/or any other configuration of biological tissue. Any number of cameras 250 may be used to capture the images.
  • the camera 250 may be a hyperspectral and/or multispectral camera.
  • the camera 250 may capture multiple images of the biological tissue at different wavelengths. The wavelengths at which the images are captured may be selected by an operator and/or pre-defined. The wavelengths at which the images are captured may be determined based on the type of biological tissue being imaged.
  • An image processing system 220 may receive the images captured by the camera 250.
  • the camera 250 may transmit the images to the image processing system 220 via a network 210, and/or via any other communication means.
  • the image processing system 220 may perform various processing steps on the images.
  • the image processing system 220 may use the images from the camera 250 to generate a hyperspectral cube.
  • the hyperspectral cube may contain images of the biological tissue or tissues at different wavelengths.
  • the image processing system 220 may use the hyperspectral cube to generate spatial gradient images and/or spectral gradient images.
  • the image processing system 220 may store the images, hyperspectral cube, spatial gradient images, spectral gradient images, and/or any other data in the image storage system 225.
  • An analysis system 230 may retrieve and/or analyze the images and/or hyperspectral cubes in the image storage system 225 via the network 210.
  • the analysis system 230 may include automated systems that may use the retrieved data to perform automated analyses. For example, the analysis system 230 may use machine learning, machine vision, or other automated approaches to perform automated analysis, generate diagnostic information, generate treatment guidance, and/or the like.
  • Information output by the analysis system 230 may be provided, via the network 210, to a user device 240.
  • the user device 240 may be accessed by a clinician to examine the images of the biological tissue in the image storage system 225 and/or access any analysis regarding the biological tissue output by the analysis system 230.
  • a user interface may be output by the user device 240.
  • the user interface may contain images of the biological tissue, abundances of chromophores detected based on the images, indications of features identified in the images of the biological tissue, and/or any other information about the biological tissue.
  • FIG. 3 illustrates a hyperspectral cube 300 in accordance with various embodiments of the present technology.
  • the hyperspectral cube 300 comprises a series of images 301, 302, 303, 304, 305, and 306, a plurality of illumination power measurements 311, 312, 313, 314, 315, and 316 for each image 301-06, and a plurality of other metadata 321, 322, 323, 324, 325, and 326 for each image 301-06.
  • the cube may also comprise cube metadata 330.
  • the hyperspectral cube 300 is a data structure that comprises a series of images 301-06. Each of the images 301-06 may have been captured at different wavelengths. Although the illustrated cube 300 shows only a few images for simplicity, the cube 300 may include a larger number of images (e.g., tens, hundreds, or thousands) that may be captured across a range of wavelengths, which may be overlapping.
  • the hyperspectral cube 300 is an example of a data cube, but other data cubes may have different data structures.
  • the hyperspectral cube 300 may store a single three-dimensional data structure where two dimensions are spatial dimensions and a third dimension is a spectral dimension.
  • the hyperspectral cube 300 may separately store other “slices” of the data cube, such as a series of images where each image has one spatial dimension and the spectral dimension, and each image corresponds to a different row or column along the other spatial dimension.
  • the hyperspectral cube 300 comprises a series of images, where a first image 301 corresponds to a first wavelength λ1, a second image 302 corresponds to a second wavelength λ2, and so on.
  • the hyperspectral cube 300 may be generated by the image processing system 220, which receives images from the camera 250 and combines the images into a single hyperspectral cube 300 that includes both spatial and spectral information.
  • the intensity of each pixel of each image in the hyperspectral cube 300 may be indicative of the molecular makeup, cellular arrangement, and/or density of the imaged biological tissue at the corresponding wavelength.
  • each image in the hyperspectral cube 300 may include information captured within a specific band of wavelengths that includes the particular wavelength (e.g., the term “wavelength” may refer to a representative wavelength within a band).
  • the bands may be relatively broad and/or nonoverlapping or relatively narrow and/or contiguous or overlapping.
  • the hyperspectral cube 300 may comprise an illumination power measurement 311-16 for each wavelength, which may be a measurement of the illumination power that was generated by the illumination light of the camera in the corresponding wavelength or band of wavelengths.
  • the power measurements 311-16 may be different due to temporal light fluctuations during the capture as well as the illumination light’s output spectrum.
  • Each image of a hyperspectral cube 300 may also include various other metadata 321-26.
  • the metadata 321 may include data indicating a type of the image 301, the wavelength or wavelength band for the image 301, camera settings used to capture the image 301 (e.g., focus amount), and/or any other information related to the image 301.
  • metadata 322 may contain information about the image 302
  • metadata 323 may contain information about the image 303, etc.
  • the hyperspectral cube 300 may include cube metadata 330 that may include data about the type of cube, such as an eye measurement cube, a reference cube, a baseline cube, a cube comprising spectral gradient images, a cube comprising spatial gradient images, a spatial-spectral gradient cube, etc.
  • the cube metadata 330 may include an identifier of a camera, or identifiers of multiple cameras, that captured the images 301-06 of the cube.
  • the cube metadata 330 may include patient information associated with the hyperspectral cube 300.
  • the cube metadata 330 may include any other data related to the hyperspectral cube 300.
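  • One way to mirror this data structure in code is a simple container, sketched below; the field names are illustrative only and do not come from the present technology:

    import numpy as np
    from dataclasses import dataclass, field

    @dataclass
    class HyperspectralCube:
        """Container mirroring cube 300: frames plus per-frame and cube metadata."""
        frames: np.ndarray                   # shape (bands, height, width)
        wavelengths_nm: list[float]          # one wavelength (or band center) per frame
        illumination_power: list[float]      # per-frame illumination power measurement
        frame_metadata: list[dict] = field(default_factory=list)  # image type, camera settings, etc.
        cube_metadata: dict = field(default_factory=dict)          # cube type, camera id, patient info

    # Example: an empty 92-band cube spanning 450-905 nm in 5 nm steps.
    cube = HyperspectralCube(
        frames=np.zeros((92, 512, 512)),
        wavelengths_nm=list(range(450, 910, 5)),
        illumination_power=[1.0] * 92,
    )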
  • FIG. 4 is a flow diagram of a method 400 for generating spectral gradient images and spatial gradient images in accordance with various embodiments of the present technology.
  • the method 400 or one or more steps thereof may be performed by a computing system, such as the computing environment 100 or the image processing system 220.
  • the method 400 or one or more steps thereof may be embodied in computer-executable instructions that are stored in a computer-readable medium, such as a non-transitory mass storage device, loaded into memory and executed by one or more CPUs.
  • the method 400 is exemplary, and it should be understood that some steps or portions of steps in the flow diagram may be omitted and/or changed in order.
  • images of biological tissue may be captured.
  • the images may be captured and/or retrieved from memory.
  • the images may be captured by any number of cameras, such as the camera 250.
  • the images may be captured by a hyperspectral camera. Each image may correspond to a wavelength or a range of wavelengths.
  • a light source may be directed at the biological tissue while the images are captured.
  • the light source may be a tunable light source, where various parameters of the light source may be configured.
  • the number of images captured and/or the wavelength of the captured images may be selected by a user and/or pre-determined.
  • hyperspectral images may be captured from a wavelength of 450 to 905 nm in steps of 5 nm, resulting in 92 images.
  • the number of images and/or wavelength of each image may be determined based on the type of biological tissue being imaged.
  • the images may be of any biological tissue, such as a portion of an individual’s skin or the fundus of an individual’s eye.
  • Each image may be captured at a same resolution and contain a same number of pixels.
  • the camera and the biological tissue may remain at a same position when capturing each image.
  • a hyperspectral cube may be generated from the images captured at step 405.
  • the hyperspectral cube may include some or all of the images captured at step 405.
  • the hyperspectral cube may include an indication of the wavelength or wavelengths associated with each image in the hyperspectral cube.
  • the hyperspectral cube may include an illumination power measurement and/or other metadata for each image.
  • the hyperspectral cube may include cube metadata.
  • the hyperspectral cube may be generated by the image processing system 220.
  • the hyperspectral cube may be stored in memory, such as being stored by the image storage system 225.
  • the images in the hyperspectral cube may be processed, such as by normalizing the images.
  • Other processing may include spatial and/or spectral denoising (for example wavelet or non-local-means denoising; median, Laplace or Gaussian filtering; adjacent averaging; Savitzky-Golay, low-pass or steerable filters; 2D or 3D total variation denoising; spectral unmixing), applying a thresholding algorithm or histogram equalization algorithm, and/or pseudo-normalization.
  • Processing specific to a type of biological tissue may be performed, such as spectral corrections for melanin content, intra-ocular lenses (IOL), and/or cataract may be performed on images of an eye.
  • the hyperspectral images may be preprocessed with appropriate normalization and/or registration to spectrally calibrate and/or spatially realign images to correct for any movement that may have occurred during the image acquisition.
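  • As an illustration of one of the denoising options listed above, a Savitzky-Golay filter may be run along the spectral axis of the cube, followed by a median filter within each frame; the window sizes and the particular filter pairing here are arbitrary examples rather than values from the present technology:

    import numpy as np
    from scipy.signal import savgol_filter
    from scipy.ndimage import median_filter

    cube = np.random.rand(92, 512, 512)  # placeholder cube (bands, height, width)

    # Spectral denoising: smooth each pixel's spectrum along the band axis.
    spectrally_denoised = savgol_filter(cube, window_length=7, polyorder=2, axis=0)

    # Spatial denoising: 3x3 median filtering within each frame only
    # (size 1 along the spectral axis leaves the spectra untouched).
    denoised_cube = median_filter(spectrally_denoised, size=(1, 3, 3))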
  • the method 400 may proceed to step 415 where spectral gradient images may be generated and/or step 420 where spatial gradient images may be generated.
  • a user may select whether spectral gradient images are generated, spatial gradient images are generated, or both.
  • spectral gradient images may be generated.
  • the spectral gradient images may relate the intensity of a given pixel as measured at one or several successive wavelengths.
  • a single spectral gradient image may be generated or multiple spectral gradient images may be generated.
  • the images in the hyperspectral cube may be used to generate the spectral gradient images.
  • Each pixel of the images may have various attributes, such as a location (x, y) and a value associated with the pixel, which is referred to herein as an intensity.
  • the images in the hyperspectral cube may be aligned so that a pixel located in a same location of each image corresponds to a same physical location on the biological tissue.
  • a rate of change value, such as a derivative, of the pixel intensity values for that pixel location may be determined over some or all of the images in the hyperspectral cube.
  • a spectral gradient image may be generated by plotting these rate of change values at their corresponding pixel locations.
  • a pixel location and an image in the hyperspectral cube may be selected. Pixels at the pixel location may be selected in different frames of the hyperspectral cube. The frames may surround the selected image when the images are placed in order based on wavelength. For example, the five frames immediately below the selected image and the five frames immediately above the selected image may be selected. An intensity value may be determined for each of the pixels at the pixel location in the selected frames. A rate of change of the intensity values across the selected frames may then be determined. To generate the spectral gradient image, an intensity value for the pixel at the pixel location may be determined based on that rate of change. This process may be repeated for each pixel location of the hyperspectral cube to generate a complete spectral gradient image having a pixel plotted at each location.
  • the hyperspectral cube may contain information from 2 spatial dimensions (x and y) and 1 spectral dimension (wavelength, λ).
  • the rate of change of the intensity between adjacent pixels in any of these dimensions, and its direction, may be quantified using derivatives and/or gradients.
  • the resulting images may have the same size (i.e. same number of pixels) as the images used to generate the spectral gradient image.
  • Spectral gradient images provide information on the rate of change of the intensity in the spectral dimension (ΔI/Δλ). Depending on the chosen direction of wavelength progression (increasing or decreasing), the derivative values can be positive or negative, but they do not depend on the image orientation.
  • the derivatives for the spectral gradient images can be obtained using various techniques, such as the forward, backward and central differences. The size (kernel) of any of these techniques can be adapted based on properties of the original spectra, such as the noise level, the spectral resolution, and/or the level of detail contained.
  • the spectral gradient image can be additionally divided by an image from the original cube in order to provide a rate of change on a relative scale (for example, a percentage) rather than on an absolute one.
  • a relative spectral gradient image may provide a better visualization of features of the biological tissue, such as areas of geographic atrophy, than an original image or a non-normalized spectral gradient image.
  • the image or images in the hyperspectral cube to be used for generating the spectral gradient images may be determined based on an intended use of the spectral gradient images. If the spectral gradient image is to be used for identifying areas of compromised retinal pigmented epithelium (RPE), a 560 nm image from the hyperspectral cube may be selected as the central image, and then frames surrounding that wavelength may be selected from the hyperspectral cube.
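  • Putting the pieces of this step together, a relative spectral gradient image centered on 560 nm might be computed as sketched below; the ±5-frame window and the normalization by the 560 nm frame follow the description above, while the band grid and the small epsilon guard against division by zero are assumptions:

    import numpy as np

    wavelengths = np.arange(450, 910, 5)               # assumed band grid (92 bands)
    cube = np.random.rand(len(wavelengths), 512, 512)  # placeholder hyperspectral cube

    center = int(np.argmin(np.abs(wavelengths - 560)))  # frame nearest 560 nm
    window = cube[center - 5:center + 6]                # 5 frames below and 5 above

    # Central-difference derivative across the window's spectral axis,
    # evaluated at the central (560 nm) frame.
    spectral_gradient = np.gradient(window, wavelengths[center - 5:center + 6], axis=0)[5]

    # Relative spectral gradient: rate of change expressed as a fraction
    # of the original 560 nm intensity.
    relative_gradient = spectral_gradient / np.maximum(cube[center], 1e-6)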
  • RPE retinal pigmented epithelium
  • spatial gradient images may be generated.
  • a single spatial gradient image may be generated or multiple spatial gradient images may be generated.
  • the images in the hyperspectral cube may be used to generate the spatial gradient images.
  • the spatial gradient image may be generated in a similar manner as the spectral gradient images described at step 415, except instead of showing the rate of change of pixel intensities in a spectral direction, the pixels of the spatial gradient image may show the rate of change of pixel intensities in a horizontal and/or vertical direction.
  • Spatial gradient images provide information on the rate of change of the intensity ΔI in both the horizontal (x) and vertical (y) directions, from which the spatial gradient magnitude can also be obtained, as well as the direction (atan((ΔI/Δy)/(ΔI/Δx))).
  • the derivative values can be positive or negative, while the spatial gradient magnitude will necessarily be positive.
  • the x and y derivatives depend on the image orientation (rotation), but the magnitude does not.
  • the derivatives for the spatial gradient images can be obtained from techniques such as the forward, backward and central differences, Sobel's, Scharr's and Prewitt's filters, and the Roberts cross.
  • the size (kernel) of any of these techniques can be adapted based on the original image properties such as the noise level, the spatial resolution, or the level of detail contained.
  • a spatial gradient image can be divided by an original image from the hyperspectral cube in order to provide a rate of change on a relative scale (for example, a percentage) rather than on an absolute one.
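  • A minimal sketch of this spatial-gradient computation, using one of the filters named above (Sobel), is shown below; the axis convention and SciPy's output scaling are implementation details not fixed by the description:

    import numpy as np
    from scipy.ndimage import sobel

    frame = np.random.rand(512, 512)  # one frame of the hyperspectral cube

    # Horizontal (x) and vertical (y) rates of change of intensity.
    grad_y = sobel(frame, axis=0)  # ΔI/Δy
    grad_x = sobel(frame, axis=1)  # ΔI/Δx

    # Gradient magnitude (always non-negative) and direction
    # atan((ΔI/Δy)/(ΔI/Δx)), here via the quadrant-aware arctan2.
    magnitude = np.hypot(grad_x, grad_y)
    direction = np.arctan2(grad_y, grad_x)

    # Optional relative spatial gradient, normalized by the original frame.
    relative_magnitude = magnitude / np.maximum(frame, 1e-6)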
  • features of the spectral and/or spatial gradient images may be identified.
  • the features may be identified by the analysis system 230.
  • the features may be areas of atrophy, geographic atrophy, macular pigment, an indication of disruption of integrity of melanin in RPE, areas of lesion, and/or any other features.
  • a user interface displaying the spectral and/or spatial gradient images may be output to a clinician or other user.
  • a grid, such as an Early Treatment of Diabetic Retinopathy Study (ETDRS) grid, can be overlaid on the spectral and/or spatial gradient images to help the clinician evaluate the location of the lesion(s) of interest relative to the center of the macula and, if desired, quantitatively evaluate the proportion of each zone affected.
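  • A sketch of such a grid overlay is shown below, using the standard ETDRS ring diameters of 1, 3 and 6 mm; the pixel scale and fovea location are placeholders that would in practice come from the imaging geometry:

    import numpy as np
    import matplotlib.pyplot as plt
    from matplotlib.patches import Circle

    image = np.random.rand(512, 512)   # spectral gradient image (placeholder)
    cx, cy = 256, 256                  # fovea center in pixels (assumed)
    px_per_mm = 60.0                   # pixel scale (assumed)

    fig, ax = plt.subplots()
    ax.imshow(image, cmap="gray")
    # ETDRS central, inner and outer rings (1, 3 and 6 mm diameters).
    for diameter_mm in (1.0, 3.0, 6.0):
        radius_px = diameter_mm / 2 * px_per_mm
        ax.add_patch(Circle((cx, cy), radius_px, fill=False, edgecolor="yellow"))
    plt.show()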
  • the features may be identified by the clinician and marked using a user interface.
  • the user interface may allow the clinician to modify various aspects of the spectral and/or spatial gradient images, such as allowing the clinician to switch between different images, modify parameters used for generating the gradient images, and/or perform other modifications to the images.
  • a machine learning algorithm may be used to identify the features of the spectral and/or spatial gradient images.
  • the MLA may be any type of MLA, such as a neural network.
  • the MLA may have been trained to identify the features using a training dataset containing training data points. Each training data point may contain an image and a label. The label may indicate the location of features in the image.
  • a training data point used to train the MLA may include a spatial gradient image and/or spectral gradient image of the retina, and a label indicating locations of lesions in the images of the retina.
  • the MLA may be trained to identify areas of atrophy in spectral gradient images, areas of geographic atrophy in spectral gradient images, areas of macular pigment in spectral gradient images, areas of disruption of integrity of melanin in RPE based on spectral gradient images, and/or areas of lesion in spectral gradient images.
  • Anomalies in the biological tissue may be detected and/or mapped based on the one or more features.
  • the anomalies may include a pathological lesion of an individual’s eye.
  • Physiological characteristics of the individual may be detected and/or mapped based on the one or more features
  • a treatment plan for the individual may be generated based on the identified features.
  • the treatment plan may be determined by the clinician based on the identified features.
  • the treatment plan may include a treatment schedule and/or a dosage of treatment for an individual.
  • the features identified at step 425 may be input to an MLA trained to predict a response to a treatment.
  • the MLA may output a predicted response of the individual to the treatment.
  • the features identified at step 425 may be input to an MLA trained to predict whether an adverse event will occur.
  • the MLA may output a predicted likelihood that the adverse event will occur to the individual.
  • FIG. 5 is a flow diagram of a method 500 for generating a spectral-spatial gradient cube in accordance with various embodiments of the present technology.
  • the method 500 or one or more steps thereof may be performed by a computing system, such as the computing environment 100 or the image processing system 220.
  • the method 500 or one or more steps thereof may be embodied in computer-executable instructions that are stored in a computer-readable medium, such as a non-transitory mass storage device, loaded into memory and executed by one or more CPUs.
  • the method 500 is exemplary, and it should be understood that some steps or portions of steps in the flow diagram may be omitted and/or changed in order.
  • hyperspectral images of biological tissue may be captured.
  • a hyperspectral cube may be generated from the hyperspectral images.
  • spatial gradient images may be generated using the hyperspectral cube, and at step 520 spectral gradient images may be generated using the hyperspectral cube.
  • Actions performed at steps 505, 510, 515, and/or 520 may be similar to those performed at steps 405, 410, 415, and/or 420 of the method 400, as described above.
  • a spectral-spatial gradient cube may be generated.
  • the spectral-spatial gradient cube may include any number of images.
  • the spatial gradient images and spectral gradient images may be used to generate the spectral-spatial gradient cube.
  • the images in the spectral-spatial gradient cube may indicate a relationship between pixel intensity values of the spatial gradient images and the spectral gradient images.
  • the spatial and spectral gradient images can be put in relationship to each other, for example, by computing the angle between the two: atan((ΔI/Δλ) / √((ΔI/Δx)² + (ΔI/Δy)²)).
  • a total rate of change of the intensity can be obtained from the magnitude considering all 3 dimensions: √((ΔI/Δx)² + (ΔI/Δy)² + (ΔI/Δλ)²), from which a relative rate of change (for example, expressed as a percentage) can also be obtained by dividing by an image from the hyperspectral cube.
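A sketch of both combinations, with the same assumed `cube` and `wavelengths` arrays as in the earlier examples:

```python
import numpy as np

def spectral_spatial_combination(cube, wavelengths):
    """Angle between the spectral and spatial gradients, plus the total
    3-D rate of change and its relative counterpart, per pixel per frame."""
    dI_dl = np.gradient(cube, wavelengths, axis=0)     # spectral derivative
    dI_dy = np.gradient(cube, axis=1)                  # vertical derivative
    dI_dx = np.gradient(cube, axis=2)                  # horizontal derivative
    spatial_mag = np.hypot(dI_dx, dI_dy)
    angle = np.arctan2(dI_dl, spatial_mag)             # spectral vs. spatial
    total = np.sqrt(dI_dx**2 + dI_dy**2 + dI_dl**2)    # 3-D magnitude
    relative = total / (np.abs(cube) + 1e-12)          # percentage-like scale
    return angle, total, relative
```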
  • Second or higher order derivatives may be determined at step 525. Successive derivatives in either the spatial or spectral dimensions can lead to higher order, sometimes mixed, derivatives.
  • the Laplacian can be changed into a relative Laplacian by dividing the Laplacian image by an image from the hyperspectral cube.
  • Curl images (one for each of the three dimensions) could also be computed by taking the cross product of the gradient operator with the individual components of the spectral gradient images and/or spatial gradient images.
  • because the spatial dimensions x and y are physical quantities very different from the spectral dimension λ, a Laplacian or curl computation involving only the spatial dimensions could be performed.
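A short sketch of such a spatial-only computation, here a relative Laplacian of a single frame (the SciPy call and the epsilon guard are implementation choices, not part of the disclosure):

```python
from scipy.ndimage import laplace

def relative_spatial_laplacian(frame):
    """Second-order derivative restricted to the x and y dimensions,
    divided by the original frame to give a relative Laplacian."""
    return laplace(frame.astype(float)) / (frame + 1e-12)
```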
  • features may be identified in the spectral-spatial gradient cube.
  • the features may be identified by the analysis system 230.
  • a user interface displaying the spectral-spatial gradient cube may be output to a clinician or other user.
  • a grid, such as an Early Treatment of Diabetic Retinopathy Study (ETDRS) grid, can be overlaid on the spectral-spatial gradient cube to help the clinician evaluate the location of the lesion(s) of interest relative to the center of the macula and eventually evaluate quantitatively the proportion of each zone affected.
  • the features may be identified by the clinician and marked using a user interface.
  • the user interface may allow the clinician to modify various aspects of the spectral-spatial gradient cube, such as allowing the clinician to switch between different images, modify parameters used for generating the spectral-spatial gradient cube, and/or perform other modifications to the spectral-spatial gradient cube.
  • a machine learning algorithm (MLA) may be used to identify the features of the spectral-spatial gradient cube.
  • the MLA may be any type of MLA, such as a neural network.
  • the MLA may have been trained to identify the features using a training dataset containing training data points. Each training data point may contain a spectral-spatial gradient cube and a label. The label may indicate the location of features in the spectral-spatial gradient cube.
  • a treatment plan for the individual may be generated based on the identified features.
  • the treatment plan may be determined by the clinician based on the identified features.
  • Figure 6 is a flow diagram of a method 600 for determining abundances of chromophores in a retinal image in accordance with various embodiments of the present technology.
  • the method 600 or one or more steps thereof may be performed by a computing system, such as the computing environment 100 or the image processing system 220.
  • the method 600 or one or more steps thereof may be embodied in computer-executable instructions that are stored in a computer-readable medium, such as a non-transitory mass storage device, loaded into memory and executed by one or more CPUs.
  • the method 600 is exemplary, and it should be understood that some steps or portions of steps in the flow diagram may be omitted and/or changed in order.
  • hyperspectral images of a retina may be captured.
  • the hyperspectral retinal images may be captured and/or retrieved from memory.
  • the hyperspectral retinal images may be captured by any number of cameras, such as the camera 250.
  • the hyperspectral retinal images may be captured by a hyperspectral camera. Each image may correspond to a wavelength or a range of wavelengths.
  • the hyperspectral retinal images may be captured at any wavelengths and/or ranges of wavelengths.
  • a light source may be directed at the retina while the images are captured.
  • the light source may be a tunable light source, where various parameters of the light source may be configured.
  • the number of hyperspectral retinal images captured and/or the wavelength of the captured hyperspectral retinal images may be selected by a user and/or pre-determined.
  • hyperspectral retinal images may be captured from a wavelength of 450 to 905 nm in steps of 5 nm, resulting in 92 images.
  • the number of hyperspectral retinal images and/or wavelength of each hyperspectral retinal image may be determined based on the type of analysis being performed.
  • the hyperspectral retinal images may be of the fundus of an individual’s eye.
  • Each hyperspectral retinal image may be captured at a same resolution and contain a same number of pixels.
  • the camera may remain at a relatively fixed position with respect to the eye.
  • the hyperspectral retinal images may be preprocessed with appropriate normalization and/or registration to spectrally calibrate and/or spatially realign images to correct for any eye movement that may have occurred during the image acquisition.
  • a hyperspectral cube may be generated from the hyperspectral images.
  • spectral gradient images may be generated from the hyperspectral cube. Actions performed at steps 610 and/or 615 may be similar to those performed at steps 410 and/or 415 of the method 400, as described above. Normalization algorithms and/or de-noising algorithms may be applied to the hyperspectral cube at step 610. An example of normalization that may be applied to the hyperspectral cube is described in commonly-assigned Application No. PCT/IB2024/053209, which is incorporated by reference herein in its entirety.
  • the spectral gradient images may be generated for first-order derivatives, second-order derivatives, and/or any other higher-order derivatives.
  • First-order derivative spectral gradient images may be generated at wavelengths of 500 nm and 560 nm.
  • Second-order derivative spectral gradient images may be generated at wavelengths of 550 nm and 575 nm.
  • an absorbance cube may be determined.
  • the images in the hyperspectral cube may be converted to absorbance images.
  • the absorbance cube may include an absorbance image for some or all of the images of the hyperspectral cube.
  • the absorbance image may be calculated using the following formula, where Abs is the absorbance and R is the hyperspectral (reflectance) image: Abs = log₁₀(1/R).
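A sketch of the conversion, assuming `cube` holds normalized reflectance values in (0, 1]:

```python
import numpy as np

def to_absorbance(cube, eps=1e-12):
    """Convert a reflectance hyperspectral cube to an absorbance cube
    using Abs = log10(1/R); eps guards against zero reflectance."""
    return np.log10(1.0 / np.clip(cube, eps, None))
```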
  • Spectral absorbance gradient images may be determined using the absorbance cube.
  • the spectral absorbance gradient images may show the rate of change in absorbance across multiple spectrums.
  • the spectral absorbance gradient images may be generated for first- order derivatives, second-order derivatives, and/or any other higher-order derivatives.
  • Figures 12-15, described below, are exemplary first-order or second-order spectral absorbance gradient images.
  • the absorbance as a function of wavelength is a sum of the apparent abundances of the absorbing chromophores melanin (Mel), oxyhemoglobin (HbO), deoxyhemoglobin (HbR), and macular pigment (MP), each multiplied by its absorption coefficient μᵢ.
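Written out as an equation, with aᵢ the apparent abundances and μᵢ(λ) the absorption coefficients (a restatement of the sentence above, not an additional model assumption):

```latex
\mathrm{Abs}(\lambda) \approx
    a_{\mathrm{Mel}}\,\mu_{\mathrm{Mel}}(\lambda)
  + a_{\mathrm{HbO}}\,\mu_{\mathrm{HbO}}(\lambda)
  + a_{\mathrm{HbR}}\,\mu_{\mathrm{HbR}}(\lambda)
  + a_{\mathrm{MP}}\,\mu_{\mathrm{MP}}(\lambda)
```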
  • the apparent abundances of melanin, oxyhemoglobin, deoxyhemoglobin, and/or macular pigment may be determined from the spectral gradient images and/or absorbance images according to the following steps.
  • a melanin abundance may be determined.
  • the melanin abundance may be determined by analyzing first-order spectral absorbance gradient images.
  • the spectral absorbance gradient images may be centered around a hyperspectral image at 560 nm.
  • the melanin abundance may be determined without prior knowledge of the abundances of other chromophores.
  • the melanin abundance may be determined by dividing the spectral derivative of the absorbance at 560 nm by the spectral derivative of the melanin absorption coefficient at 560 nm.
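A sketch of this division per pixel; `dmu_mel_560`, the first derivative of the melanin absorption coefficient at 560 nm, is assumed to come from a published melanin extinction spectrum and is not provided here.

```python
import numpy as np

def melanin_abundance(abs_cube, wavelengths, dmu_mel_560, center_nm=560.0):
    """First spectral derivative of the absorbance at 560 nm divided by
    the first derivative of the melanin absorption coefficient there."""
    dAbs = np.gradient(abs_cube, wavelengths, axis=0)   # per-pixel dAbs/dλ
    c = int(np.argmin(np.abs(wavelengths - center_nm)))
    return dAbs[c] / dmu_mel_560
```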
  • an oxyhemoglobin abundance may be determined.
  • the oxyhemoglobin abundance may be determined by analyzing second-order spectral absorbance gradient images.
  • the spectral absorbance gradient images may be centered around a hyperspectral image at 575 nm. Similar results may be achieved if the image is centered around a hyperspectral image in a range between 520 nm and 600 nm, such as at 520 nm, 530 nm, 540 nm, or 600 nm, where the second derivative of the absorbance may be dominated by oxyhemoglobin.
  • the oxyhemoglobin abundance may be determined without prior knowledge of the abundances of other chromophores.
  • a deoxyhemoglobin abundance may be determined.
  • the deoxyhemoglobin abundance may be determined by analyzing second-order spectral absorbance gradient images.
  • the second-order spectral absorbance gradient images may be centered around a hyperspectral image at 550 nm.
  • the deoxyhemoglobin abundance may be determined based on the melanin abundance determined at step 625 and the oxyhemoglobin abundance determined at step 630.
  • the melanin contribution to the absorbance and the oxyhemoglobin contribution to the absorbance may be subtracted from the total absorbance at 550 nm to determine the deoxyhemoglobin abundance.
  • a macular pigment abundance may be determined.
  • the macular pigment abundance may be determined by analyzing first-order spectral absorbance gradient images.
  • the spectral absorbance gradient images may be centered around a hyperspectral image at 500 nm.
  • the macular pigment abundance may be determined based on the melanin abundance determined at step 625, the oxyhemoglobin abundance determined at step 630, and the deoxyhemoglobin abundance determined at step 635.
  • the melanin contribution to the absorbance, oxyhemoglobin contribution to the absorbance, and deoxyhemoglobin contribution to the absorbance may be subtracted from the total absorbance at 500 nm to determine the macular pigment abundance.
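A sketch of the two subtraction steps described above (deoxyhemoglobin at 550 nm, then macular pigment at 500 nm), with the per-pixel derivative images and the scalar coefficient derivatives as assumed, illustratively named inputs (d2 denotes second-order and d1 first-order spectral derivatives):

```python
def deoxyhemoglobin_abundance(d2Abs_550, a_mel, a_hbo,
                              d2mu_mel_550, d2mu_hbo_550, d2mu_hbr_550):
    """Remove the melanin and oxyhemoglobin contributions from the second
    derivative of the absorbance at 550 nm, then divide by the
    deoxyhemoglobin coefficient derivative."""
    residual = d2Abs_550 - a_mel * d2mu_mel_550 - a_hbo * d2mu_hbo_550
    return residual / d2mu_hbr_550

def macular_pigment_abundance(d1Abs_500, a_mel, a_hbo, a_hbr,
                              d1mu_mel_500, d1mu_hbo_500,
                              d1mu_hbr_500, d1mu_mp_500):
    """Same idea at 500 nm with first derivatives, removing all three
    previously determined chromophore contributions."""
    residual = (d1Abs_500 - a_mel * d1mu_mel_500
                - a_hbo * d1mu_hbo_500 - a_hbr * d1mu_hbr_500)
    return residual / d1mu_mp_500
```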
  • the determined abundances may be output.
  • the melanin abundance, oxyhemoglobin abundance, deoxyhemoglobin abundance, and/or the macular pigment abundance may be output.
  • the melanin abundance, oxyhemoglobin abundance, deoxyhemoglobin abundance, and/or the macular pigment abundance may be indicative of a concentration of melanin, oxyhemoglobin, deoxyhemoglobin, and/or macular pigment in the retina that was imaged.
  • the abundances may be output on a user interface.
  • the abundances may be stored in memory, such as in a database.
  • a total hemoglobin abundance may be determined.
  • the total hemoglobin abundance may be determined based on the determined oxyhemoglobin abundance and the determined deoxyhemoglobin abundance.
  • the total hemoglobin abundance may be the sum of the oxyhemoglobin abundance and the deoxyhemoglobin abundance.
  • a retinal oximetry value may be determined for each pixel of the image.
  • the retinal oximetry value may be determined based on the oxyhemoglobin abundance and the total hemoglobin abundance.
  • the retinal oximetry value may be the ratio of oxyhemoglobin abundance over the total hemoglobin abundance.
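A per-pixel sketch of the total-hemoglobin and oximetry computations, with the abundance images as assumed inputs:

```python
import numpy as np

def retinal_oximetry(a_hbo, a_hbr, eps=1e-12):
    """Oxygen saturation per pixel: oxyhemoglobin abundance over total
    hemoglobin (the sum of oxy- and deoxyhemoglobin abundances)."""
    total_hb = a_hbo + a_hbr
    return a_hbo / np.maximum(total_hb, eps)
```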
  • Figure 7 is an exemplary spectral gradient image calculated at 560 nm in accordance with various embodiments of the present technology.
  • Figure 7 is an image of a retina. Darker areas in the image may be indicative of compromised RPE. Not all of the darker areas correspond to compromised RPE. For example, the optic nerve head 710 and blood vessels 720 may also appear as dark areas, but those dark areas might not correspond to compromised RPE. The dark areas corresponding to the optic nerve head 710, blood vessels 720, and/or other dark areas corresponding to known features may be excluded when determining a measure of compromised RPE.
  • the compromised RPE may be an indication of age-related macular degeneration (AMD). Brighter areas in the image may indicate calcified drusen, which may also be an indication of AMD. These darker and brighter areas may be clearly viewable or detectable in the spectral gradient image, but might not have been detectable or viewable in the images of the hyperspectral cube used to generate this spectral gradient image.
  • AMD age-related macular degeneration
  • Figure 8 is a second exemplary spectral gradient image calculated at 560 nm in accordance with various embodiments of the present technology.
  • Figure 8 is an image of a retina. Darker areas in the image may be indicative of compromised RPE. Dark areas corresponding to blood vessels and/or the optic nerve head may be excluded when identifying the compromised RPE in figure 8.
  • the compromised RPE may be an indication of age-related macular degeneration (AMD).
  • AMD age-related macular degeneration
  • Figure 9 is a third exemplary spectral gradient image calculated at 630 nm in accordance with various embodiments of the present technology.
  • Figure 9 is an image of a retina. Darker areas in the image may indicate regressed calcified drusen, which may also be an indication of AMD. Dark areas corresponding to blood vessels and/or the optic nerve head may be excluded when identifying the calcified drusen in figure 9.
  • Figure 10 is an exemplary spectral gradient image with an overlaid grid in accordance with various embodiments of the present technology.
  • Figure 10 is an image of a retina.
  • An ETDRS grid is overlaid on the image, which may help a clinician to evaluate the location of the lesion(s) of interest relative to the center of the macula. The clinician may use this information to evaluate quantitatively the proportion of each zone affected.
  • the zone 1010 corresponds to identified geographic atrophy. Automatic segmentation may be performed on the image of figure 10 to identify the zone 1010. The automatic segmentation may be performed using a machine learning algorithm trained to identify areas of geographic atrophy.
  • Figure 11 includes an exemplary hyperspectral image 1100, spectral-spatial gradient image 1120, and second-order spatial gradient image 1110 in accordance with various embodiments of the present technology.
  • Each of the images 1100, 1110, and 1120 is of a retina. As can be seen in the images, different features of the retina are visible in each of the hyperspectral image 1100, spectral-spatial gradient image 1120, and spatial gradient image 1110.
  • Figure 12 is an exemplary spectral absorbance gradient image at 560 nm in accordance with various embodiments of the present technology.
  • the spectral absorbance gradient image illustrated in figure 12 is an image of a retina.
  • the spectral absorbance gradient image illustrated in figure 12 is a first-order spectral absorbance gradient image.
  • the spectral absorbance gradient image illustrated in figure 12 may be used to identify a melanin abundance within the retina.
  • the first derivative of the absorbance may be dominated by melanin.
  • the melanin abundance may be determined by dividing the spectral derivative of the absorbance at 560 nm by the spectral derivative of the melanin absorption coefficient at 560 nm.
  • Figure 13 is an exemplary second-order spectral absorbance gradient image at 575 nm in accordance with various embodiments of the present technology.
  • the second-order spectral absorbance gradient image illustrated in figure 13 is an image of a retina.
  • the second-order spectral absorbance gradient image illustrated in figure 13 may be used to identify an oxyhemoglobin abundance within the retina.
  • the second derivative of the absorbance may be dominated by oxyhemoglobin.
  • the oxyhemoglobin abundance may be determined by dividing the second-order spectral derivative of the absorbance at 575 nm by the second-order spectral derivative of the oxyhemoglobin absorption coefficient at 575 nm.
  • Figure 14 is an exemplary second-order spectral absorbance gradient image at 550 nm in accordance with various embodiments of the present technology.
  • the second-order spectral absorbance gradient image illustrated in figure 14 is an image of a retina.
  • the second-order spectral absorbance gradient image illustrated in figure 14 may be used to identify a deoxyhemoglobin abundance within the retina.
  • the second derivative of the absorbance may be maximal for deoxyhemoglobin.
  • the deoxyhemoglobin abundance may be determined after determining the melanin abundance and the oxyhemoglobin abundance.
  • the deoxyhemoglobin abundance may be determined by subtracting the second-order spectral derivative of the melanin absorption coefficient times the melanin abundance and the second-order spectral derivative of the oxyhemoglobin absorption coefficient times the oxyhemoglobin abundance from the second-order spectral derivative of the absorbance at 550 nm, and then dividing that amount by the second-order spectral derivative of the deoxyhemoglobin absorption coefficient at 550 nm.
  • Figure 15 is an exemplary spectral absorbance gradient image at 500 nm in accordance with various embodiments of the present technology.
  • the spectral absorbance gradient image illustrated in figure 15 is an image of a retina.
  • the spectral absorbance gradient image illustrated in figure 15 may be used to identify a macular pigment abundance within the retina.
  • the absorbance may be maximal for macular pigment.
  • the macular pigment abundance may be determined after determining the melanin abundance, the oxyhemoglobin abundance, and the deoxyhemoglobin abundance.
  • the macular pigment abundance may be determined by subtracting the spectral derivative of the melanin absorption coefficient times the melanin abundance, the spectral derivative of the oxyhemoglobin absorption coefficient times the oxyhemoglobin abundance, and the spectral derivative of the deoxyhemoglobin absorption coefficient times the deoxyhemoglobin abundance from the spectral derivative of the absorbance at 500 nm, and then dividing that amount by the spectral derivative of the macular pigment absorption coefficient at 500 nm.
  • the wording “and/or” is intended to represent an inclusive-or; for example, “X and/or Y” is intended to mean X or Y or both. As a further example, “X, Y, and/or Z” is intended to mean X or Y or Z or any combination thereof.


Abstract

A method and system for identifying features in biological tissue of an individual using a hyperspectral camera are disclosed. Images of the biological tissue captured by a hyperspectral camera are received. A hyperspectral cube is generated using the images. A spectral gradient image is generated using the hyperspectral cube, each pixel of the spectral gradient image indicating a rate of change of an intensity of pixels at a same location in different frames of the hyperspectral cube. Features of the biological tissue are identified in the spectral gradient image.
PCT/IB2025/050840 2024-01-26 2025-01-24 Systèmes et procédés de génération d'images à gradient spectral et spatial de tissu biologique Pending WO2025158396A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202463625598P 2024-01-26 2024-01-26
US63/625,598 2024-01-26
US202463672938P 2024-07-18 2024-07-18
US63/672,938 2024-07-18

Publications (1)

Publication Number Publication Date
WO2025158396A1 (fr) 2025-07-31

Family

ID=96544538

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2025/050840 Pending WO2025158396A1 (fr) 2024-01-26 2025-01-24 Systèmes et procédés de génération d'images à gradient spectral et spatial de tissu biologique

Country Status (1)

Country Link
WO (1) WO2025158396A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160300A (zh) * 2019-12-31 2020-05-15 北京理工大学重庆创新中心 一种结合全局先验的深度学习高光谱图像显著性检测算法
US20230125377A1 (en) * 2020-05-08 2023-04-27 The Regents Of The University Of California Label-free real-time hyperspectral endoscopy for molecular-guided cancer surgery
WO2023228146A1 (fr) * 2022-05-27 2023-11-30 Optina Diagnostics, Inc. Procédés d'identification de biomarqueurs présents dans des tissus biologiques, systèmes d'imagerie médicale et procédés d'entraînement des systèmes d'imagerie médicale



Legal Events

Code 121: the EPO has been informed by WIPO that EP was designated in this application
Ref document number: 25744826
Country of ref document: EP
Kind code of ref document: A1