WO2025207357A1 - A sensor fusion system and methods of use - Google Patents
A sensor fusion system and methods of use
- Publication number
- WO2025207357A1 (PCT/US2025/020222)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interest
- scanning
- camera
- oct
- data
- Prior art date
- 2024-03-27
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0062—Arrangements for scanning
- A61B5/0066—Optical coherence imaging
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00004—Operational features of endoscopes characterised by electronic signal processing
- A61B1/00009—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
- A61B1/000096—Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope using artificial intelligence
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/00002—Operational features of endoscopes
- A61B1/00043—Operational features of endoscopes provided with output arrangements
- A61B1/00045—Display arrangement
- A61B1/0005—Display arrangement combining images e.g. side-by-side, superimposed or tiled
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B1/00—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
- A61B1/227—Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for ears, i.e. otoscopes
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0033—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room
- A61B5/0035—Features or image-related aspects of imaging apparatus, e.g. for MRI, optical tomography or impedance tomography apparatus; Arrangements of imaging apparatus in a room adapted for acquisition of images from more than one imaging mode, e.g. combining MRI and optical tomography
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/48—Other medical applications
- A61B5/4887—Locating particular structures in or on the body
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6814—Head
- A61B5/6815—Ear
- A61B5/6817—Ear canal
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B9/00—Measuring instruments characterised by the use of optical techniques
- G01B9/02—Interferometers
- G01B9/0209—Low-coherence interferometers
- G01B9/02091—Tomographic interferometers, e.g. based on optical coherence
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B2576/00—Medical imaging apparatus involving image processing or analysis
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6846—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive
- A61B5/6847—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be brought in contact with an internal body part, i.e. invasive mounted on an invasive device
Abstract
Provided herein are systems and methods for sensor fusion of images. A sensor fusion method comprises using a plurality of camera images combined with a plurality of OCT A-scans, wherein each A-scan comprises a depth profile that represents the reflected light intensity as a function of depth along a single beam axis; processing the camera output in real time to determine regions of medical interest; and directing optical hardware to concentrate the OCT scanning function on areas of interest.
Description
A SENSOR FUSION SYSTEM AND METHODS OF USE
BACKGROUND
[0001] The invention generally relates to imaging systems and more specifically to Optical Coherence Tomography (OCT) and Low Coherence Interferometry (LCI) systems.
[0002] A volumetric scanning OCT system may seem to be a solution to the narrow window of data capture of a scan; however, its use in this application is not feasible due to a number of limitations, including speed and cost. 1) Speed: A typical high-speed volumetric scan requires 1-5 s to complete data capture across the entire window. Since diagnostically relevant tissue is a small proportion of the overall image capture window, it is inefficient to capture from the entire window at a rate of less than 1 Hz. Moreover, in the case of non-compliant patients, subject movement can compromise the integrity and coherence of the entire volume acquired at the data collection rate, resulting in unusable data. 2) Cost: Even if the speed requirement were met, a volumetric scanning solution would require significantly enhanced components. This includes high-performance camera electronics, high-bandwidth data transmission and processing units, and an optimized optical system. These upgrades would enable scan rates of 100 to 1000 kHz within acceptable timeframes, along with increased data capture and processing capacity. However, this would result in a 2-5x increase in manufacturing costs, rendering the device prohibitively expensive for front-line care settings.
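As a concrete illustration of the speed limitation, the following back-of-envelope sketch (in Python) compares full-volume acquisition time with acquiring only the diagnostically relevant subset of A-scans; the scan rate, grid size, and relevance fraction are assumptions chosen for the example, not figures from this disclosure.

```python
# Back-of-envelope comparison of volumetric vs. targeted OCT acquisition.
# All numbers below are illustrative assumptions, not device specifications.

A_SCAN_RATE_HZ = 20_000      # assumed A-scan rate of a cost-constrained engine
VOLUME_A_SCANS = 200 * 200   # assumed 200 x 200 lateral grid for a full volume
RELEVANT_FRACTION = 0.05     # assume ~5% of the window is diagnostically relevant

volumetric_time = VOLUME_A_SCANS / A_SCAN_RATE_HZ                    # 2.0 s
targeted_time = VOLUME_A_SCANS * RELEVANT_FRACTION / A_SCAN_RATE_HZ  # 0.1 s

print(f"full volume:   {volumetric_time:.2f} s per capture")
print(f"targeted scan: {targeted_time:.2f} s per capture")
```

Under these assumptions, the targeted approach completes in roughly a tenth of a second, well inside a motion-tolerant capture window, without requiring the faster 100 to 1000 kHz hardware described above.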
[0003] The present invention attempts to solve these problems, as well as others.
SUMMARY OF THE INVENTION
[0004] Provided herein are systems and methods for sensor fusion of images. A sensor fusion method comprises using a plurality of camera images combined with a plurality of OCT A-scans, wherein each A-scan comprises a depth profile that represents the reflected light intensity as a function of depth along a single beam axis; processing the camera output in real time to determine regions of medical interest; and directing optical hardware to concentrate the OCT scanning function on areas of interest.
[0005] The method further comprises identifying diagnostically relevant tissue by a deep learning model segmenting the camera image into areas of interest within about 50 ms of image capture. The method further comprises directing focused scanning in real time to the area of interest within about 500 ms of image capture. The method further comprises scanning until sufficient data is determined to have been collected through the use of a quality control metric, and informing the user either that scanning of sufficient data is complete or that the data is invalid and requires remeasurement. The plurality of OCT A-scans is obtained with a micro-electromechanical system (MEMS) mirror component or a galvanometer mirror component. The camera is selected from the group consisting of a color camera, a greyscale camera, any focusable wide-field-of-view input, or a faster low-resolution scanning technology.
[0006] The systems and methods are set forth in part in the description which follows, and in part will be obvious from the description, or can be learned by practice of the systems and methods. The advantages of the systems and methods will be realized and attained by means of the elements and combinations particularly pointed out in the appended claims. It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the systems and methods, as claimed.
[0007] Accordingly, it is an object of the invention not to encompass within the invention any previously known product, process of making the product, or method of using the product such that Applicants reserve the right and hereby disclose a disclaimer of any previously known product, process, or method. It is further noted that the invention does not intend to encompass within the scope of the invention any product, process, or making of the product or method of using the product, which does not meet the written description and enablement requirements of the USPTO (35 U.S.C. § 112, first paragraph) or the EPO (Article 83 of the EPC), such that Applicants reserve the right and hereby disclose a disclaimer of any previously described product, process of making the product, or method of using the product. It may be advantageous in the practice of the invention to be in compliance with Art. 53(c) EPC and Rule 28(b) and (c) EPC. All rights to explicitly disclaim any embodiments that are the subject of any granted patent(s) of applicant in the lineage of this application or in any other lineage or in any prior filed application of any third party is explicitly reserved. Nothing herein is to be construed as a promise.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] In the accompanying figures, like elements are identified by like reference numerals among the several preferred embodiments of the present invention.
[0009] FIGS. 1A-1B are images captured on the OtoSight Middle Ear Scope showing the surface image of the tympanic membrane (TM) with crosshairs showing the position of the OCT beam in the center of the image. In each image, the OCT beam is positioned over non-scannable tissue or an obstruction. FIG. 1A shows the OCT beam centered over the malleus bone in the middle ear. Data collected at this point will not indicate fluid presence or turbidity. FIG. 1B shows the OCT beam centered over cerumen blocking the view of the middle ear space, also preventing diagnostic data collection. Each image is annotated to demonstrate that moving the OCT beam to a diagnostically relevant portion of the image will enable data collection.
[0010] FIG. 2A is a schematic of the MEMS scanning design based on an existing PhotoniCare imaging system. FIG. 2B is an OCT imaging system. A MEMS mirror component will be introduced into the OCT laser beam pathway to redirect scanning to the anatomy identified by the CNN (convolutional neural network) model with a camera input.
[0011] DETAILED DESCRIPTION OF THE INVENTION
[0012] The foregoing and other features and advantages of the invention are apparent from the following detailed description of exemplary embodiments, read in conjunction with the accompanying drawings. The detailed description and drawings are merely illustrative of the invention rather than limiting, the scope of the invention being defined by the appended claims and equivalents thereof.
[0013] Embodiments of the invention will now be described with reference to the Figures, wherein like numerals reflect like elements throughout. The terminology used in the description presented herein is not intended to be interpreted in any limited or restrictive way, simply because it is being utilized in conjunction with detailed description of certain specific embodiments of the invention. Furthermore, embodiments of the invention may include several novel features, no single one of which is solely responsible for its desirable attributes or which is essential to practicing the invention described herein.
[0014] The words proximal and distal are applied herein to denote specific ends of components of the instrument described herein. A proximal end refers to the end of an instrument nearer to an operator of the instrument when the instrument is being used. A distal end refers to the end of a component further from the operator.
[0015] The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not
preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0016] Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. The word “about,” when accompanying a numerical value, is to be construed as indicating a deviation of up to and inclusive of 10% from the stated numerical value. The use of any and all examples, or exemplary language (“e.g.” or “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any nonclaimed element as essential to the practice of the invention.
[0017] References to “one embodiment,” “an embodiment,” “example embodiment,” “various embodiments,” etc., may indicate that the embodiment(s) of the invention so described may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes the particular feature, structure, or characteristic. Further, repeated use of the phrase “in one embodiment” or “in an exemplary embodiment” does not necessarily refer to the same embodiment, although it may.
[0018] As used herein the term “method” refers to manners, means, techniques, and procedures for accomplishing a given task including, but not limited to, those manners, means, techniques, and procedures either known to, or readily developed from known manners, means, techniques and procedures by practitioners of the chemical, pharmacological, biological, biochemical, and medical arts. Unless otherwise expressly stated, it is in no way intended that any method or aspect set forth herein be construed as requiring that its steps be performed in a specific order. Accordingly, where a method claim does not specifically state in the claims or descriptions that the steps are to be limited to a specific order, it is no way intended that an order be inferred, in any respect. This holds for any possible non-express basis for interpretation, including matters of logic with respect to arrangement of steps or operational flow, plain meaning derived from grammatical organization or punctuation, or the number or type of aspects described in the specification.
[0019] Description
[0020] Generally speaking, the invention comprises deep learning models to guide a micro-electromechanical system (MEMS) mirror component for the identification, acquisition, and capture of relevant diagnostic data within about 50 ms to about 5 s using a sensor fusion system. The sensor fusion system integrates color camera images with OCT A-scans, processing the color camera output in real time to identify regions of medical interest and directing the optical hardware to acquire OCT A-scans on those areas. An A-scan comprises a depth profile that represents the reflected light intensity as a function of depth along a single beam axis. The method comprises identifying diagnostically relevant tissue by a deep learning model segmenting the camera image into areas of interest within about 20 ms to about 200 ms of image capture. The system directs real-time focused scanning to the area of interest within approximately 500 ms of image capture. The system continues scanning until sufficient data is determined to have been collected through the use of a quality control metric and informs the user that scanning is complete. In one embodiment, the quality control metric may include deep learning model segmentation or classification, signal strength or signal-to-noise ratio (SNR), or data accumulation thresholding. In other embodiments, other techniques may be used to assess quality using signal strength. In either embodiment, a motion tracking program may be incorporated into the quality control metrics to assess whether the handheld probe has moved beyond the area of interest before the next optical hardware redirection. If the motion causes data acquisition outside the area of interest, the collected data will be considered invalid.
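The control flow described in this paragraph can be summarized in the following minimal sketch. The camera, segmentation-model, and OCT-engine interfaces (read, segment, motion_since, steer_to, acquire, snr_db) are hypothetical placeholders standing in for whatever hardware API a given embodiment exposes; the 20 dB SNR floor and 0.9 sufficiency threshold are likewise assumptions for illustration only.

```python
# Minimal sketch of the camera-guided OCT capture loop. The camera, model,
# and oct_engine objects and all of their methods are hypothetical placeholders.
import time

QUALITY_THRESHOLD = 0.9   # assumed sufficiency score in [0, 1]

def quality_metric(a_scans):
    """Placeholder QC metric: fraction of A-scans above an assumed 20 dB SNR floor."""
    if not a_scans:
        return 0.0
    return sum(1 for s in a_scans if s.snr_db > 20.0) / len(a_scans)

def fused_capture(camera, model, oct_engine, max_duration_s=5.0):
    """Segment each frame, steer the beam to the area of interest, and
    accumulate A-scans until the quality control metric is satisfied."""
    collected = []
    deadline = time.monotonic() + max_duration_s
    while time.monotonic() < deadline:
        frame = camera.read()
        roi = model.segment(frame)           # deep learning segmentation (~50 ms target)
        if roi is None:
            continue                         # no diagnosable tissue currently in view
        if camera.motion_since(frame) > roi.radius_px:
            collected.clear()                # probe drifted off target: invalidate data
            continue
        oct_engine.steer_to(roi.center)      # MEMS/galvo redirection (~500 ms target)
        collected.extend(oct_engine.acquire(roi))
        if quality_metric(collected) >= QUALITY_THRESHOLD:
            return collected                 # inform the user: sufficient data collected
    raise RuntimeError("data invalid: remeasurement required")
```

The single-threshold sufficiency test stands in for whichever quality control metric a given embodiment uses, such as segmentation or classification confidence, signal strength or SNR, or data accumulation thresholding.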
[0021] The method and system offer significant advantages over traditional scanning methods that either capture a large amount of data that may not be relevant for diagnostic use or fail to capture sufficient data for diagnostic use. This method and system are particularly important for applications where execution of data capture is technically challenging for users, like handheld implementations of OCT.
[0022] Miniaturization of scanning devices into a handheld form factor is increasingly common in an attempt to improve accessibility of diagnostic technology; however, handheld imaging has proven to have limited success where operator skill level has a significant impact on successful imaging outcome. By automating the process of aligning the scanning element to the patient's anatomy, a significant portion of skill-based imaging can be eliminated.
[0023] Machine learning-driven scanning supports alignment of the scanning element far faster than is possible with a human-controlled system, through the use of hardware capable of responding to automated control within milliseconds of data availability. By using intelligence during the selection of the scanning location, this approach introduces the capability to radically improve access to diagnostic technology for young patients who would otherwise go without an accurate diagnostic exam.
[0024] This method and system forgo volumetric scanning and direct the beam only to areas of interest within the field of view of the OCT system. Instead of using a predefined pattern of scanning, scanning is continuously updated using information provided by the color camera input.
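One simple way to realize this continuous redirection is sketched below: camera-space coordinates of the segmented area of interest are mapped to mirror drive angles through a pre-computed calibration. The affine model and its numeric values are assumptions for illustration, not parameters taken from this disclosure.

```python
# Sketch of mapping camera-space points of interest to MEMS mirror angles via
# an assumed affine calibration; the matrix values are illustrative only.
import numpy as np

# [theta_x, theta_y]^T = A @ [u, v, 1]^T, with A found once by driving the
# beam to known mirror angles and locating its spot in the camera image.
CALIBRATION = np.array([
    [0.010, 0.000, -2.0],   # degrees per pixel in u and v, plus a degree offset
    [0.000, 0.010, -2.0],
])

def pixels_to_mirror_angles(points_uv: np.ndarray) -> np.ndarray:
    """Map an (N, 2) array of camera pixel coordinates to (N, 2) mirror angles in degrees."""
    homogeneous = np.hstack([points_uv, np.ones((points_uv.shape[0], 1))])
    return homogeneous @ CALIBRATION.T

# Example: three sample points inside a segmented area of interest.
roi_pixels = np.array([[180.0, 200.0], [200.0, 200.0], [220.0, 200.0]])
print(pixels_to_mirror_angles(roi_pixels))   # -> three (theta_x, theta_y) pairs
```

An affine fit is close to the least machinery that captures the scale, rotation, and offset between the camera frame and the mirror frame; a higher-order model could be substituted where the optics introduce measurable distortion.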
[0025] FIGS. 1A-1B are images captured on the OtoSight Middle Ear Scope showing the surface image of the TM with crosshairs showing the position of the OCT beam in the center of the image. In each image, the OCT beam is positioned over non-scannable tissue or an obstruction. FIG. 1A shows the OCT beam centered over the malleus bone in the middle ear. Data collected at this point will not indicate fluid presence or turbidity. FIG. 1B shows the OCT beam centered over cerumen blocking the view of the middle ear space, also preventing diagnostic data collection. Each image is annotated to demonstrate that moving the OCT beam to a diagnostically relevant portion of the image will enable data collection.
[0026] FIG. 2A is a schematic of the MEMS scanning system 100 based on an existing OCT imaging system. A MEMS mirror component 110 will be introduced into the OCT laser beam pathway 120 to redirect scanning to the anatomy identified by the CNN (convolutional neural network) model with a camera input 130. The MEMS scanning system 100 further comprises focusing optics 140, a liquid lens 142, a hot mirror 144, a fold mirror 146, and an aspheric collimator 148, according to one embodiment. In other embodiments, alternative configurations could deliver and collect the OCT light, with the MEMS mirror and a focusing optic still required.
[0027] FIG. 2B is an OCT imaging system 200 with the OCT laser beam 120 being redirected.
[0028] In alternative embodiments, the color camera could be substituted with another sensor type: a greyscale camera, any focusable wide-field-of-view input, or a faster low-resolution scanning technology.
[0029] In alternative embodiments, the same method and system could be applicable to Light Detection and Ranging (LIDAR) or any other scanning technology where scanning speed is limited.

[0030] In other embodiments, various machine learning algorithms could be used to process image data. Traditional algorithms could also be used in place of a deep learning model.
[0031] In other embodiments, moving elements such as galvanometers could replace the MEMS mirror for rapid movement of the optical scanning beam.
[0032] System
[0033] As used above, the terms “component” and “system” are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers.

[0034] Generally, program modules include routines, programs, components, data structures, etc., that perform particular tasks or implement particular abstract data types. Moreover, those skilled in the art will appreciate that the inventive methods can be practiced with other computer system configurations, including single-processor or multiprocessor computer systems, minicomputers, mainframe computers, as well as personal computers, hand-held computing devices, microprocessor-based or programmable consumer electronics, and the like, each of which can be operatively coupled to one or more associated devices.
[0035] The illustrated aspects of the innovation may also be practiced in distributed computing environments where certain tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules can be located in both local and remote memory storage devices.
[0036] A computer typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media can comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disk (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer.
[0037] Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
[0038] Software includes applications and algorithms. Software may be implemented in a smart phone, tablet, or personal computer, in the cloud, on a wearable device, or other computing or processing device. Software may include logs, journals, tables, games, recordings, communications, SMS messages, Web sites, charts, interactive tools, social networks, VOIP (Voice Over Internet Protocol), e-mails, and videos.
[0039] In some embodiments, some or all of the functions or processes described herein are performed by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase “computer readable program code” includes any type of computer code, including source code, object code, executable code, firmware, software, etc. The phrase “computer readable medium” includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
[0040] All publications and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.
[0041] While the invention has been described in connection with various embodiments, it will be understood that the invention is capable of further modifications. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention, and including such departures from the present disclosure as come within known and customary practice within the art to which the invention pertains.
Claims
1. A sensor fusion method, comprising: using a plurality of camera images combined with a plurality of OCT A-scans, wherein each A-scan comprises a depth profile that represents the reflected light intensity as a function of depth along a single beam axis; processing the camera output in real time to determine regions of medical interest; and directing optical hardware to concentrate the OCT scanning function on areas of interest.
2. The method of claim 1, further comprising identifying diagnostically relevant tissue by a deep learning model segmenting the camera image into areas of interest within about 50 ms of image capture.
3. The method of claim 2, further comprising directing focused scanning in real time to the area of interest within about 500 ms of image capture.
4. The method of claim 3, further comprising scanning until sufficient data is determined to have been collected through the use of a quality control metric, and informing the user that scanning of sufficient data is complete or that the data is invalid and requires remeasurement.
5. The method of claim 4, wherein the plurality of OCT A-scans are obtained with a micro-electromechanical system (MEMS) mirror component or a galvanometer mirror component.
6. The method of claim 5, wherein the camera is selected from the group consisting of a color camera, a greyscale camera, any focusable wide field of view input, or a faster low-resolution scanning technology.
7. The method of claim 6, wherein the quality control metric assesses whether the handheld probe has moved beyond the area of interest before the next optical hardware redirection, and if the motion causes data acquisition outside the area of interest, the collected data is considered invalid.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463570258P | 2024-03-27 | 2024-03-27 | |
| US63/570,258 | 2024-03-27 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025207357A1 (en) | 2025-10-02 |
Family
ID=97218865
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/020222 (WO2025207357A1, pending) | A sensor fusion system and methods of use | 2024-03-27 | 2025-03-17 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025207357A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20180242847A1 (en) * | 2017-02-17 | 2018-08-30 | The Board Of Trustees Of The University Of Illinois | Method and Apparatus for OCT-Based Viscometry |
| US20190320887A1 (en) * | 2017-01-06 | 2019-10-24 | Photonicare, Inc. | Self-orienting imaging device and methods of use |
| US20190343390A1 (en) * | 2016-12-01 | 2019-11-14 | The Board Of Trustees Of The University Of Illinois | Compact Briefcase OCT System for Point-of-Care Imaging |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25775091; Country of ref document: EP; Kind code of ref document: A1 |