
WO2019103909A1 - Portable microscopy device with enhanced image performance using deep learning and methods of using the same - Google Patents

Portable microscopy device with enhanced image performance using deep learning and methods of using the same

Info

Publication number
WO2019103909A1
WO2019103909A1 PCT/US2018/061311 US2018061311W WO2019103909A1 WO 2019103909 A1 WO2019103909 A1 WO 2019103909A1 US 2018061311 W US2018061311 W US 2018061311W WO 2019103909 A1 WO2019103909 A1 WO 2019103909A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample
images
neural network
deep neural
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2018/061311
Other languages
English (en)
Inventor
Aydogan Ozcan
Yair RIVENSON
Hongda WANG
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of California Berkeley
University of California San Diego UCSD
Original Assignee
University of California Berkeley
University of California San Diego UCSD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of California Berkeley, University of California San Diego UCSD filed Critical University of California Berkeley
Publication of WO2019103909A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B21/00 Microscopes
    • G02B21/36 Microscopes arranged for photographic purposes or projection purposes or digital imaging or video purposes including associated control and data processing arrangements
    • G02B21/0004 Microscopes specially adapted for specific applications
    • G02B21/0008 Microscopes having a simple construction, e.g. portable microscopes
    • G02B21/24 Base structure
    • G02B21/26 Stages; Adjusting means therefor

Definitions

  • the technical field generally relates to portable microscopy devices and systems that have improved imaging performance that rely on deep learning.
  • the technical field relates to a mobile phone-based microscope device that has improved imaging performance.
  • the system uses a trained deep neural network to output improved images obtained from the mobile phone-based microscope device that resemble images obtained using a conventional high-end benchtop microscope.
  • Optical imaging is a ubiquitous tool for medical diagnosis of numerous conditions and diseases.
  • most of the imaging data considered the gold standard for diagnostic and screening purposes are acquired using high-end benchtop microscopes. Such benchtop microscopes are often equipped with expensive objective lenses and sensitive sensors.
  • mobile phone-based imaging devices include U.S. Patent Application No. 2012-0157160 (Compact wide-field fluorescent imaging on a mobile device), U.S. Patent Application No. 2012-0218379 (Incoherent lensfree cell holography and microscopy on a chip), and U.S. Patent Application No. 2012-0248292 (Lens-free wide-field super-resolution imaging device).
  • an imaging platform or system that bridges the gap between cost-effective mobile microscopes and existing gold standard benchtop microscopes in terms of their imaging quality.
  • An important challenge in creating high-quality benchtop microscope equivalent images on mobile devices stems from the desire to keep mobile or portable microscopes cost-effective, compact and light-weight.
  • a trained deep neural network is provided that is executed on a computing device separate and apart from the portable or mobile phone-based microscope device.
  • the trained deep neural network receives input image(s) and generates an improved output image(s) that resemble images obtained using a high-end, benchtop microscope.
  • the system or platform uses a mobile phone as the microscope device.
  • the mobile microscope may be implemented using a Smartphone with an optomechanical attachment unit that is secured to the mobile phone to align an optical axis of the sample with the camera of the mobile phone.
  • image enhancement and aberration correction are performed computationally using a deep convolutional neural network.
  • Deep learning is a powerful machine learning technique that can perform complex operations using a multi-layered artificial neural network.
  • the imaging method uses a supervised learning approach that is first applied by feeding the designed deep neural network with input (e.g., images obtained from a Smartphone microscope) and labels (gold standard benchtop microscope images obtained for the same samples) and optimizing a cost function that guides the deep neural network to learn the statistical transformation between the input and label.
  • each enhanced image of the mobile phone-based microscope is inferred by the deep network in a non-iterative, feed-forward manner.
  • the deep neural network generates an enhanced output image with a FOV of ~0.57 mm² (the same as that of a 20x objective lens) from a Smartphone microscope image within ~0.42 s, using a standard personal computer equipped with a dual graphics-processing unit.
  • This deep learning-enabled enhancement is maintained even for highly compressed raw images of the mobile phone-based microscope, which is especially desirable for storage, transmission and sharing of the acquired microscopic images.
  • This is particularly helpful for telemedicine applications where the deep neural network can rapidly operate at a remote location separate from the geographical location of where the sample is obtained or where the sample is examined or analyzed (e.g., viewing and classification of a sample by a professional or skilled technician who is tasked with the microscopic inspection of the specimens).
  • the deep neural network learns how to predict the benchtop microscope image that is most statistically likely to correspond to the input smartphone microscope image by learning from experimentally acquired training images of different samples.
  • the systems and methods described herein are broadly applicable to other low-cost and aberrated microscopy systems and could facilitate the replacement of high-end benchtop microscopes with mobile and cost-effective alternative devices.
  • the systems and methods described herein have numerous applications in global health, telemedicine and diagnostics related applications.
  • the deep learning enabled image transformation and enhancement platform will also help with the standardization of optical images across various biomedical imaging platforms, including mobile microscopes that are being used for clinical and research applications, and can reduce potential discrepancies in microscopic investigation and diagnostic analysis of specimens performed by medical professionals.
  • a method of imaging a sample using a portable electronic device having a camera includes securing an optomechanical attachment unit to the portable electronic device.
  • the optomechanical attachment unit includes a sample holder, one or more light sources, a lens or set of lenses, and a movable stage configured to move the sample relative to the camera in one or more of the x, y, and z directions.
  • the sample is illuminated with the one or more light sources.
  • the camera of the portable electronic device obtains one or more images of the sample.
  • the obtained images (or image files that contain the images) are then input to a trained deep neural network that is executed on a computing device using one or more processors.
  • the trained deep neural network then outputs one or more output images of the sample, the one or more output images having improved one or more of spatial resolution, field-of-view, depth-of-field, signal-to-noise ratio, contrast, and color accuracy.
  • the one or more output images from the trained deep neural network have an image quality substantially equivalent to images of the sample obtained with a benchtop higher-resolution optical microscope. That is to say, the image quality in terms of one or more of spatial resolution, field-of-view, depth-of-field, signal-to-noise ratio, contrast, and color accuracy are substantially equivalent to (or even better than) the same parameters for images obtained using a benchtop higher-resolution optical microscope.
  • a microscopy system that generates improved images obtained from the camera of a portable electronic device.
  • the system includes a portable electronic device that has a camera.
  • the system includes an optomechanical attachment unit that is configured to detachably mount to the portable electronic device.
  • the optomechanical attachment unit contains a sample holder (for holding the sample), one or more light sources, a lens or set of lenses, and a movable stage configured to move the sample relative to the camera in one or more of the x, y, and z directions.
  • the sample is interposed between the one or more light sources and the camera such that light transmitted through the sample passes through the lens or set of lenses and into the camera.
  • the system further includes a computing device having software configured to execute a trained deep neural network, the trained deep neural network receiving as an input one or more images of the sample obtained with the camera and outputting one or more output images of the sample having improved one or more of spatial resolution, field-of-view, depth-of-field, signal-to-noise ratio, contrast, and color accuracy.
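  • To make this data flow concrete, the following minimal Python sketch (an illustration, not the patent's implementation) shows how a captured Smartphone image could be passed through a previously trained enhancement network in a single feed-forward pass; the file names and the saved TorchScript model are hypothetical.

```python
import numpy as np
import torch
from PIL import Image

def enhance(image_path: str, model_path: str = "enhancer_net.pt") -> None:
    """Run one feed-forward inference pass of a trained enhancement network
    on a Smartphone microscope image (paths and model name are illustrative)."""
    net = torch.jit.load(model_path).eval()                       # trained deep neural network
    rgb = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32) / 255.0
    x = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0)       # HWC -> NCHW
    with torch.no_grad():
        y = net(x).squeeze(0).permute(1, 2, 0).clamp(0.0, 1.0)    # enhanced output image
    Image.fromarray((y.numpy() * 255).astype(np.uint8)).save("enhanced_output.png")

# enhance("smartphone_fov.jpg")  # the input may be a JPEG- or TIFF-format capture
```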
  • FIG. 1 A illustrates a first perspective view of a portable electronic device (e.g., mobile phone) that is secured to an optomechanical attachment unit according to one embodiment.
  • FIG. 1B illustrates a partial cut-away perspective view of the mobile phone secured to the optomechanical attachment unit. Portions of the optomechanical attachment unit are cut-away to illustrate inner components.
  • FIG. 1C illustrates another partial cut-away perspective view of the mobile phone secured to the optomechanical attachment unit. Portions of the optomechanical attachment unit are cut-away to illustrate inner components.
  • FIG. 1D illustrates an end view of the mobile phone secured to the optomechanical attachment unit.
  • FIG. 1E illustrates a front face of a mobile phone usable with the optomechanical attachment unit.
  • FIG. 1F illustrates a back face of a mobile phone usable with the optomechanical attachment unit.
  • FIG. 1G illustrates a sample loaded onto an optically transparent substrate (e.g., microscope slide).
  • FIG. 1H illustrates a side view of a mobile phone along with the various optical components of the optomechanical attachment unit aligned along an optical axis.
  • FIG. 2A illustrates a schematic representation of the microscopy system according to one embodiment.
  • FIG. 2B illustrates a schematic representation of the microscopy system according to another embodiment.
  • FIGS. 3A-3E illustrate an imager device (FIG. 3A) and microscopy system that generates an input image (FIGS. 3B and 3C) that is used by the trained deep neural network to output an enhanced image of a sample (FIG. 3D) that, in this illustrative embodiment, is Masson's-trichrome-stained lung tissue.
  • FIG. 3E illustrates the corresponding view of the image obtained by the benchtop microscope (i.e., gold standard).
  • FIG. 4 schematically illustrates the training phase of the deep neural network.
  • FIG. 5 schematically illustrates an algorithm or method that is used to register images obtained from the camera of the portable electronic device with the gold standard images obtained from an optical microscope.
  • a pyramid elastic registration method is used.
  • FIG. 6A illustrates a Smartphone microscope (input) image of a Masson's-trichrome-stained lung tissue section.
  • FIG. 6B illustrates the corresponding deep neural network output from the image of FIG. 6A.
  • FIG. 6C illustrates a zoomed-in version of ROI#1 of the Smartphone microscope (input) image of FIG. 6A.
  • FIG. 6D illustrates the neural network output image for the same ROI#1 of FIG. 6C.
  • FIG. 6E illustrates the image of the same ROI#1 of FIG. 6C obtained using a 20x/0.75NA objective lens (with a 0.55NA condenser).
  • the arrows in (FIGS. 6C, 6D, and 6E) point to some examples of the fine structural details that were recovered using the deep network.
  • FIG. 6F illustrates a zoomed-in version of ROI#2 of the Smartphone microscope (input) image of FIG. 6A.
  • FIG. 6G illustrates the neural network output image for the same ROI (ROI#2) of FIG. 6F.
  • FIG. 6H illustrates the image of the same ROI (ROI#2) of FIG. 6F obtained using a 20x/0.75NA objective lens (with a 0.55NA condenser).
  • the cross-section line profiles in (FIGS. 6F, 6G, and 6H) are used for FIGS. 6I1, 6I2, and 6I3.
  • FIG. 6I1 illustrates the cross-section line profile from FIG. 6F, demonstrating the noise removal performed by the deep network while retaining the high-resolution spatial features.
  • FIG. 6I2 illustrates the cross-section line profile from FIG. 6G, demonstrating the noise removal performed by the deep network while retaining the high-resolution spatial features.
  • FIG. 6I3 illustrates the cross-section line profile from FIG. 6H.
  • FIG. 7A illustrates an input image (JPEG-compressed) that is input to the trained deep neural network.
  • FIG. 7B illustrates an output image (JPEG) that is output from the trained deep neural network.
  • FIG. 7C illustrates a network input image (JPEG) of ROI #1 illustrated in FIG. 7B.
  • FIG. 7D illustrates a network output image (JPEG) of FIG. 7C.
  • FIG. 7E illustrates a network input image (TIFF) of ROI #1 illustrated in FIG. 7B.
  • FIG. 7F illustrates a network output image (TIFF) of FIG. 7E.
  • FIG. 7G illustrates a benchtop microscope image (20x/0.75NA) of ROI #1 for comparison purposes.
  • FIG. 7H illustrates a network input image (JPEG) of ROI #2 illustrated in FIG. 7C.
  • FIG. 7I illustrates a network output image (JPEG) of FIG. 7H.
  • FIG. 7J illustrates a network input image (TIFF) of ROI #2 illustrated in FIG. 7C.
  • FIG. 7K illustrates a network output image (TIFF) of FIG. 7J.
  • FIG. 7L illustrates a benchtop microscope image (20x/0.75NA) of ROI #2 for comparison purposes.
  • FIG. 8A illustrates a Smartphone microscope image of a stained Pap smear sample.
  • FIG. 8B illustrates the corresponding output image (of FIG. 8A) generated by the trained deep neural network.
  • the arrows reveal the extended DOF of the imaging results obtained by the Smartphone-based microscope.
  • FIG. 8C illustrates the corresponding benchtop microscope image (20x/0.75NA).
  • FIG. 9A illustrates a Smartphone microscope (input) image.
  • FIG. 9B illustrates the corresponding (FIG. 9A) enhanced output image from the deep neural network.
  • FIG. 9C illustrates a 20x/0.75NA benchtop microscope image of the same sample.
  • FIG. 9D illustrates a zoomed-in version of a ROI of the smartphone microscope image (seen in FIG. 9A).
  • FIG. 9E illustrates the corresponding deep neural network output of the ROI of FIG. 9D.
  • FIG. 9F illustrates the 20x/0.75NA benchtop microscope image of the same ROI, revealing the image enhancement achieved by the deep neural network.
  • FIG. 10A illustrates an exemplary region-of-interest (ROI) of a Masson's-trichrome-stained lung tissue sample for the different RGB color channels.
  • FIG. 10B illustrates a vector field map (mean estimated shift distortion map - red channel) signifying the local shifts, which correspond to the mean deviations of the acquired Smartphone microscope images from the gold standard images acquired with a 20x objective lens (0.75NA) of a high-end benchtop microscope.
  • FIG. 10C illustrates a vector field map (mean estimated shift distortion map - green channel) signifying the local shifts, which correspond to the mean deviations of the acquired Smartphone microscope images from the gold standard images acquired with a 20x objective lens (0.75NA) of a high-end benchtop microscope.
  • FIG. 10D illustrates a vector field map (mean estimated shift distortion map - blue channel) signifying the local shifts, which correspond to the mean deviations of the acquired Smartphone microscope images from the gold standard images acquired with a 20x objective lens (0.75NA) of a high-end benchtop microscope.
  • FIGS. 1A-1D and FIGS. 2A-2C illustrate a microscopy system 10 that uses deep learning to improve the quality of images obtained using a relatively inexpensive and portable optomechanical attachment unit 12 that is used with a portable electronic device 14 that includes a camera 16.
  • the optomechanical attachment unit 12 and the portable electronic device 14 collectively form an imager device 50 that, as explained herein, captures images of a sample 32 (FIG. 1G) that are then further processed using a trained deep neural network 56 to generate enhanced images of the sample 32.
  • the portable electronic device 14 is in the form of a mobile phone 18 such as that illustrated in FIGS. 1A-1F and 1H.
  • the mobile phone 18 may include a Smartphone such as those widely used by consumers (e.g., iPhone, Samsung, Pixel, Nokia, Sony, and the like).
  • the mobile phone 18 includes a camera 16 that is typically located on the back side of the mobile phone 18 as illustrated in FIG. 1E (note that the platform described herein is not limited to the camera 16 being located on the back side of the mobile phone 18).
  • the camera 16 of the mobile phone 18 typically includes one or more lenses 17 (FIG. 1H) that are contained inside the mobile phone 18 along with an image sensor 19 that is used to capture images.
  • the image sensor is typically a color image sensor as most portable electronic devices 14 include color imaging.
  • the mobile phone 18 includes one or more processors 20 (FIGS. 2A and 2B).
  • the mobile phone 18 includes the ability to transfer data to and from the mobile phone 18 using a cellular network, Bluetooth, Wi-Fi, and the like as is commonly known. Data transfer may also take place using a wired connection (e.g., USB cable or the like).
  • the front of the mobile phone 18 includes a display 24 (best seen in FIG. 1E) that is used to visually present information to the user of the mobile phone 18.
  • the display 24 typically includes a touch-screen user interface that the user can use to select items, navigate, and control various functions of the mobile phone 18.
  • the mobile phone typically includes software programs or applications 26 that can be run or launched from the display 24.
  • the imaging software that converts raw images obtained with the camera 16 of the mobile phone 18 to improved images may be run in part or exclusively using software 22 of the mobile phone 18.
  • a mobile phone 18 is described in detail as being the portable electronic device 14 it should be appreciated that other portable electronic devices 14 such as a tablet computer, a camera, or a webcam may be substituted for the mobile phone 18 as each of these has an internal camera.
  • the optomechanical attachment unit 12 would be sized and configured for modular attachment to these devices.
  • FIGS. 1A-1D, and 1H illustrate details of the optomechanical attachment unit 12 that is used in conjunction with the portable electronic device 14 such as the mobile phone 18.
  • the optomechanical attachment unit 12 is a light-weight housing made from, for example, a polymer or plastic material (although not required) that contains the components required to illuminate a sample such that a transmission image of the sample can be captured by the camera 16.
  • the optomechanical attachment unit 12, in one embodiment, may include one or more tabs, clips, slots, receptacles, holders, or retaining elements 28 that are used to releasably secure the optomechanical attachment unit 12 to the portable electronic device 14.
  • the optomechanical attachment unit 12 may be secured to a mobile phone 18 as needed to perform imaging of a sample.
  • the optomechanical attachment unit 12 may be designed to accommodate a number of different makes and models of mobile phones 18. Alternatively, specific optomechanical attachment units 12 may be manufactured and used with particular makes and models of mobile phones 18.
  • the optomechanical attachment unit 12 includes a sample holder 30 that holds a sample 32 that is to be imaged (FIG. 1G).
  • the sample holder 30 may include a moveable sample tray that opens to receive a sample 32.
  • the sample 32 to be imaged may be placed on an optically transparent substrate 33 (FIG. 1G) such as glass or plastic substrate or slide which is placed in the sample holder 30.
  • the sample holder 30 holds the optically transparent substrate 33 around the border or periphery thereof so that light can pass through the sample 32 and into the camera 16.
  • the sample 32 may be sandwiched between two optically transparent substrates 33 in some embodiments.
  • the sample 32 may include a biological sample such as a tissue section that has been stained with one or more stains.
  • the sample 32 may include a stained histological slide or pathology slide.
  • the sample 32 may include an environmental sample or the like.
  • the sample holder 30 may be closed whereby the sample 32 moves inside the interior of the housing of the optomechanical attachment unit 12.
  • the optomechanical attachment unit 12 includes one or more light sources 34 (FIGS. 1B, 1C, 1H) that are used to illuminate the sample with light.
  • the one or more light sources 34 include a plurality of light emitting diodes (LEDs) that provide bright-field illumination.
  • the light sources 34 include multiple color LEDs mounted in a ring that are driven to illuminate the sample with white light (NeoPixel Ring with 12 RGB LEDs, Product Number 1643, Adafruit).
  • the one or more light sources 34 are located on an opposing side of the sample from where the camera 16 is located (see, e.g., FIG. 1H).
  • transmission images are obtained of the sample 32.
  • a diffuser 36 is interposed between the one or more light sources 34 and the sample holder 30.
  • the diffuser 36, as its name implies, imparts diffused light onto the sample.
  • An exemplary diffuser 36 is a PTFE-based polymer diffuser that has a thickness of 100 µm and a transmission value of around 50% (e.g., Zenith® Part No. SG-3201).
  • When loaded into the sample holder 30 of the optomechanical attachment unit 12, the sample 32 is positioned along an optical axis (FIG. 1H) that is formed between the one or more light sources 34 and the camera 16 of the portable electronic device 14.
  • the optomechanical attachment unit 12 includes a moveable stage 38 that, in one preferred embodiment, moves the sample holder 30 (and sample 32 held therein) in multiple axes.
  • the moveable stage 38 moves the sample 32 in the x and y directions (a plane that is generally perpendicular to the optical axis) so that different regions of the sample 32 may be imaged (e.g., the sample 32 may be laterally scanned).
  • the moveable stage 38 may be moved manually by twisting or rotation of a knob 40 that moves the sample holder 30 in one direction or another (e.g., one knob 40a for movement in the x direction and another 40b for movement in the y direction).
  • the knob 40a may be coupled to a gear 43 (FIG. 1B) that engages with teeth 41 on the side of the sample holder 30 that moves the sample holder 30 in the x direction. Rather than rotation, the knob(s) 40 may be used to slide the sample holder 30 in a particular direction.
  • the moveable stage 38 also moves the sample 32 in the z direction (generally along the optical axis direction). Moving the sample 32 in the z direction allows the user to control the depth of focus of the image.
  • a rotatable ring 42 that interfaces with a threaded support for the sample holder 30 may be used to adjust the sample 32 in the z direction. Movement in the z-direction may be accomplished by rotation of the ring 42 which causes extension/retraction of the threaded support.
  • a knob or slide may be used to move the sample holder 30 in the z direction.
  • the optomechanical attachment unit 12 further includes a power source 44 (FIGS. 1B, 1C) that is used to power the one or more light sources 34.
  • the power source 44 may include one or more batteries that are contained in the optomechanical attachment unit 12.
  • the power source 44 is coupled to driver circuitry (not shown) to power the one or more light sources 34.
  • a switch 46 located on the optomechanical attachment unit 12 is used to turn the one or more light sources 34 on or off.
  • the one or more light sources 34 may be powered by the internal battery of the mobile phone 18.
  • a cable or the like may connect the mobile phone 18 to the control circuitry of the optomechanical attachment unit 12 that drives the one or more light sources 34.
  • a software-based switch that is triggered through the application or program 26 may be used to turn the one or more light sources 34 on or off.
  • the optomechanical attachment unit 12 further includes a lens or set of lenses 48 that are fixed in place and aligned in the optical path to focus the sample image onto the camera 16.
  • the lens or set of lenses 48 may include an external lens with a focal length of 2.6 mm, providing a magnification of ~2.77, a FOV of ~1 mm², and a half-pitch lateral resolution of ~0.87 µm. Multiple lenses or multiple optical components that emulate or focus light may also be used instead of a single lens.
  • FIG. 2A illustrates one embodiment of a microscopy system 10 in which the portable electronic device 14 is secured in the optomechanical attachment unit 12, which collectively form an imager 50, along with a computing device 52 that has software 54 that is configured to execute a trained deep neural network 56.
  • the computing device 52 includes one or more processors 58 that are used to execute the trained deep neural network 56.
  • the one or more processors 58 may include, in some embodiments, one or more graphics processing units (GPUs).
  • the trained deep neural network 56 receives as an input one or more images 60 (or image files representing the images) of the sample 32 obtained with the camera 16 and outputs one or more output images 70 (or image files representing the enhanced images) of the sample 32 having improved one or more of spatial resolution, field-of-view, depth-of-field, signal-to-noise ratio, contrast, and color accuracy.
  • the computing device 52 that executes the trained deep neural network 56 may include, by way of example, a tablet computer, a personal computer, a laptop computer, a server, or cloud computing device.
  • the data (e.g., image files) may be transmitted to and received by the computing device 52 via a wired or wireless connection.
  • the computing device 52 may include a remotely located computing device and data is transferred to and from the computing device 52 via a wide area network such as the Internet or a mobile phone wireless network.
  • the imager 50 is used to capture images 60 of the sample 32.
  • the images 60 may be captured in any number of formats.
  • the images 60 that are captured by the mobile phone 18 may be optionally compressed (e.g., JPEG) and transmitted to the computing device 52.
  • These input images 60 are then input into the trained deep neural network 56 to generate the enhanced output images 70.
  • the enhanced output images 70 may, in one embodiment, be returned to the user of the mobile phone 18 for presentation on the display 24 of the mobile phone 18.
  • a graphical user interface (GUI) may be presented to the user of the mobile phone 18 that enables the user to view, manipulate, and analyze the enhanced output images 70.
  • the GUI may also be used to transfer raw images 60 to the computing device 52 and/or receive output images 70 from the computing device 52.
  • the GUI may also include other tools that let the user, pan, zoom, cut, highlight, and annotate the images.
  • the enhanced output images 70 may, in another embodiment, be made available to other users different from the actual person that imaged the sample 32.
  • the enhanced output images 70 may be accessed using a conventional web browser or dedicated image sharing software by trained image analysis professionals (e.g., pathologists) that can then evaluate and analyze the samples. These samples 32 may be optionally scored or characterized.
  • FIG. 2B illustrates another embodiment of a microscopy system 10 in which the computing device 52 is also the mobile phone 18 (or other portable electronic device 14).
  • the trained deep neural network 56 is executed on the mobile phone 18.
  • the internal processor(s) 20 of the mobile phone 18 are used to run the trained deep neural network 56 whereby the input images 60 are input to the trained deep neural network 56 to generate the enhanced output images 70.
  • FIGS. 3A-3E illustrate the imager device 50 and microscopy system 10 used to output an enhanced image of a sample 32 that, in this illustrative embodiment, is Masson's-trichrome-stained lung tissue.
  • FIG. 3A illustrates the imager device 50 that is used to obtain an input image 60 (FIG. 3B) of the sample 32.
  • FIG. 3C illustrates a magnified view of the region-of-interest (ROI) of FIG. 3B.
  • the input image 60 is then input to the trained deep neural network 56, which blindly generates an output image 70 (FIG. 3D) that is enhanced or improved and that is comparable to an image obtained with a high-end benchtop microscope equipped with a 20x/0.75NA objective lens and a 0.55NA condenser (FIG. 3E).
  • A schematic illustration of the deep network training process is shown in FIG. 4.
  • the images 80, 82 were partitioned into input and corresponding label pairs.
  • a localized registration between input and label was performed using pyramid elastic registration to correct distortions caused by various aberrations and warping in the input smartphone microscope images (see the Data Pre-Processing).
  • FIG. 5 illustrates the pyramid elastic registration algorithm.
  • a block-wise cross-correlation is calculated using the corresponding blocks from the two images.
  • the peak location inside each block represents the shift of its center.
  • the peak value i.e., the Pearson correlation coefficient, represents the similarity of the two blocks.
  • a cross-correlation map (CCM) 86 and an NxN similarity map are extracted by locating the peak locations and fitting their values.
  • An mxn translation map 88 is then generated based on the weighted average of the CCM at each pixel.
  • This translation map 88 defines a linear transform from the distorted image to the target enhanced image 90. This translation operation, although it corrects distortions to a certain degree, is synthesized from the block-averaged CCM and therefore should be refined with smaller-block-size CCMs.
  • N is increased from 5 to 7, and the block size is reduced as seen in operation 92 of FIG. 5.
  • the elastic registration in each loop followed the open-source NanoJ plugin in ImageJ.
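  • As a rough illustration of one level of this pyramid, a block-wise shift estimation followed by an elastic warp can be sketched as follows (an assumption-laden Python sketch, not the NanoJ implementation; block handling and interpolation choices are illustrative):

```python
import numpy as np
from scipy.signal import correlate
from scipy.ndimage import map_coordinates, zoom

def block_shifts(ref, img, n_blocks):
    """Estimate a per-block (dy, dx) shift between two grayscale images by
    cross-correlating corresponding blocks and locating the correlation peak."""
    h, w = ref.shape
    bh, bw = h // n_blocks, w // n_blocks
    shifts = np.zeros((n_blocks, n_blocks, 2))
    for i in range(n_blocks):
        for j in range(n_blocks):
            a = ref[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            b = img[i*bh:(i+1)*bh, j*bw:(j+1)*bw]
            cc = correlate(a - a.mean(), b - b.mean(), mode="same", method="fft")
            py, px = np.unravel_index(np.argmax(cc), cc.shape)   # peak = block shift
            shifts[i, j] = (py - bh // 2, px - bw // 2)
    return shifts

def elastic_warp(img, shifts):
    """Interpolate the coarse shift map to a dense translation map and
    resample the distorted image accordingly (one registration level)."""
    h, w = img.shape
    dy = zoom(shifts[..., 0], (h / shifts.shape[0], w / shifts.shape[1]), order=1)
    dx = zoom(shifts[..., 1], (h / shifts.shape[0], w / shifts.shape[1]), order=1)
    yy, xx = np.mgrid[0:h, 0:w]
    return map_coordinates(img, [yy + dy, xx + dx], order=1, mode="nearest")
```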
  • the distortion-corrected images were divided into training and validation sets.
  • the validation set prevented the network from overfitting to the training set, and the model achieving the minimal target cost function (detailed in the Materials and Methods section, Deep Neural Network Architecture and Implementation subsection) was used for the validation set to fix the network parameters.
  • An independent testing set (which was not aberration-corrected) enabled the blind testing of the network on samples that were not used for the network training or validation.
  • the training dataset was generated by partitioning the registered images into 60x60 pixel and 150x150 pixel patch images (with 40% overlap) from the distorted (input) and gold standard (label) images, respectively.
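  • A possible patch-extraction step consistent with these numbers (a sketch; the array and function names are illustrative) is shown below. With 40% overlap, the step is 36 px for the 60x60 inputs and 90 px for the 150x150 labels, preserving the 2.5x scale ratio between matched patches.

```python
import numpy as np

def extract_patches(img, patch_size, overlap=0.4):
    """Cut a registered image into overlapping square patches, e.g. 60x60 for
    the Smartphone inputs and 150x150 for the benchtop (label) images."""
    step = int(round(patch_size * (1.0 - overlap)))  # 36 px for inputs, 90 px for labels
    patches = []
    for y in range(0, img.shape[0] - patch_size + 1, step):
        for x in range(0, img.shape[1] - patch_size + 1, step):
            patches.append(img[y:y + patch_size, x:x + patch_size])
    return np.stack(patches)

# inputs = extract_patches(smartphone_img, 60)   # network input patches
# labels = extract_patches(benchtop_img, 150)    # corresponding gold standard patches
```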
  • FIGS. 6A, 6B, 6C, 6D, 6E, 6F, 6G, 6H, 6I1, 6I2, and 6I3 demonstrate the ability of the trained deep neural network 56 to restore spatial features in the output image 70 that cannot be detected in the raw Smartphone microscope image 60 due to various factors including spatial blurring, poor signal-to-noise ratio, non-ideal illumination, and the spectral response of the sensor.
  • Following the inference of the deep neural network 56 acting on the input Smartphone microscope image 60, several spatial details were restored, as illustrated in FIGS. 6D and 6G.
  • the deep neural network 56 corrected the severe color distortion of the Smartphone image 60, restoring the original colors of the dyes that were used to stain the lung tissue sample, which is highly important for telepathology and related applications.
  • the CIE-94 color distance was used as a metric to quantify the reconstruction quality of the deep network, with respect to the gold standard benchtop microscope images of the same samples.
  • the deep network significantly improved the average CIE-94 color distance of the mobile microscope images by a factor of 4 to 11, where the improvement was sample dependent, as shown in Table 2.
  • Table 2 Average and standard deviation (Std) of the CIE-94 color distances compared to the gold standard benchtop microscope images for the different pathology samples.
  • Table 3 Average SSIM for the different pathology samples, comparing bicubic x2.5 upsampling of the Smartphone microscope images and the deep neural network output images.
  • the average CIE-94 color distance was reduced by approximately 0.067 for the aberration corrected images, while the average SSIM was reduced by approximately 0.02, which form a negligible compromise when scenarios with strict transmission bandwidth and storage limits are considered.
  • the deep neural network approach was applied to images of Pap smear samples acquired with the mobile-phone microscope (see Table 1 for implementation details).
  • a Pap smear test is an efficient means of cervical cancer screening, and the sample slide preparation, including its staining, can be performed in a field setting, where a mobile microscope can be of great importance.
  • Very similar inference results were obtained for a human blood smear sample, as shown in FIGS. 9A-9F, where the deep neural network 56, in response to an input image 60 of the Smartphone microscope (with an average SSIM of ~0.2 and an average color distance of ~20.6), outputs a significantly enhanced image 70, achieving an average SSIM and color distance of ~0.9 and ~1.8, respectively (see Tables 2 and 3).
  • Although the deep neural networks 56 were trained with sample-specific datasets in the experiments described herein, it is possible to train a universal network, at the expense of increasing the complexity of the deep neural network 56 (for example, increasing the number of channels), which will accordingly increase the inference time and memory resources used. This, however, is not expected to create a bottleneck since image upsampling occurs only in the last two layers in the architecture of the deep neural network 56. Stated differently, the upsampling process is optimized through supervised learning in this approach. Quite importantly, this design choice enables the network operations to be performed in the low-resolution image space, which reduces the time and memory requirements compared with those designs in which interpolated images are used as inputs (to match the size of the outputs).
  • This design significantly decreases both the training and testing times and relaxes the computational resource requirements, which is important for implementation in resource-limited settings and, in some embodiments, allows the trained deep neural network 56 to be implemented on Smartphones (e.g., the embodiment of FIG. 2B).
  • the training of multiple mobile-phone microscope imagers 50 based on the same optical design can be significantly simplified by using transfer learning. Once a few systems have been trained with the proposed approach, the trained model can be used to initialize the deep neural network for a new mobile microscope with the already learnt model; this transfer learning-based approach will rapidly converge, even with a relatively small number of example images.
  • the Smartphone microscope images 60 were captured using the automatic image-capture settings of the mobile phone, which inevitably led the color response of the sensor to be non-uniform among the acquired images.
  • Training the deep neural network 56 with such a diverse set of images creates a more robust network that will not over-fit when specific kinds of illumination and color responses are present.
  • the deep neural networks 56 that were trained produced generalized, color- corrected responses, regardless of the specific color response acquired by using the automatic settings of the Smartphone and the state of the battery-powered illumination component of the mobile microscope. This property should be very useful in actual field settings, as it will make the imaging process more user-friendly and mitigate illumination and image acquisition related variations that could become prominent when reduced energy is stored in the batteries of the illumination module.
  • Smartphone microscopes possess certain advantages, such as integration with off-the-shelf consumer products benefiting from economies of scale, portability, and inherent data communication capabilities.
  • the framework described herein can also be applied to a plethora of other devices and platforms (e.g., Raspberry Pi-based imagers).
  • some of the mechanical (e.g., related to the object holder and its alignment) and illumination instabilities should produce less degradation in image quality than that resulting from using a Smartphone-based mobile microscope. Such an imaging apparatus, with its better repeatability in imaging samples, will facilitate the use of the pyramid elastic registration as part of the image enhancement workflow, since the image distortions will be more stationary and less affected by mechanical and illumination instabilities resulting from, e.g., user variability and the status of the battery.
  • a deep learning-based framework has been described to enhance mobile-phone microscopy by creating high-resolution, denoised and color-corrected images through a convolutional neural network (CNN) used as the trained deep neural network 56.
  • the platform provides significant enhancement of low-resolution, noisy, distorted images of various specimens acquired by a cost-effective, Smartphone-based microscope by using a deep learning approach. This enhancement was achieved by training a deep convolutional neural network using the Smartphone microscope images and corresponding benchtop microscope images of various specimens, used as gold standard.
  • a Nokia Lumia 1020 was used in the design of the Smartphone-based transmission microscope. It has a CMOS image sensor chip with an active area of 8.64 mm x 6 mm and a pixel size of 1.12 µm.
  • the built-in camera of the Smartphone is formed with six lenses: a combination of one glass lens (facing the prototype) and five additional plastic lenses.
  • the smartphone sensor aperture is f/2.2.
  • the regular camera application of the smartphone facilitates the capture of images in raw format (i.e., DNG) as well as JPG images using the rear camera of the Smartphone, which has 41 megapixels.
  • the same application also provides adjustable parameters such as the sensor’s sensitivity (International Organization for Standardization, ISO) and exposure time.
  • the ISO was set to 100, exposure time and focus to auto, and white balance to cloud mode, which is a predefined mode of the Smartphone camera that was visually evaluated as one of the best modes for imaging of pathology slides.
  • the automatically adjusted exposure times for the Smartphone microscope images ranged from 1/49 s to 1/13 s.
  • the illumination unit illuminated each sample from the back side through a polymer diffuser (Zenith Polymer® diffuser, 50% transmission, 100 µm thickness, Product No. SG 3201, American Optic Supply, Golden, CO, USA), as seen in FIGS. 1C and 1H.
  • An external lens with a focal length of 2.6 mm provided a magnification of ~2.77, a FOV of ~1 mm², and a half-pitch lateral resolution of ~0.87 µm.
  • the xy stage on the sample tray was used to move each sample slide for lateral scanning and the z stage to adjust the depth of focus of the image.
  • Gold standard image data acquisition was performed using an Olympus IX83 microscope equipped with a motorized stage.
  • the images were acquired using a set of Super Apochromat objectives (Olympus UPLSAPO 20X/0.75NA, WD 0.65).
  • the color images were obtained using a Qimaging Retiga 4000R camera with a pixel size of 7.4 µm.
  • the microscope was controlled by MetaMorph® microscope automation software (Molecular Devices, LLC), which includes automatic slide scanning with autofocusing.
  • the samples were illuminated using a 0.55NA condenser (Olympus IX2-LWUCD).
  • Lung tissue: De-identified formalin-fixed, paraffin-embedded, Masson's-trichrome-stained lung tissue sections from two patients were obtained from the Translational Pathology Core Laboratory at UCLA. The samples were stained at the Histology Lab at UCLA.
  • Pap smear: A de-identified Pap smear slide was provided by the UCLA Department of Pathology.
  • Blood smear: A de-identified human blood smear slide was provided by the UCLA Microbiology Lab.
  • the deep neural network learns how to enhance the images by following an accurate Smartphone and benchtop microscope FOV matching process; the network itself is based on a series of spatial operators (convolution kernels).
  • This image registration task is divided into two parts.
  • the first part matches the FOV of an image acquired using the Smartphone microscope with that of an image captured using the benchtop microscope.
  • This FOV matching procedure can be described as follows: (i) Each cell phone image is converted from DNG format into TIFF (or JPEG) format, with the central 0.685 mm² FOV being cropped into four parts, each with 1024x1024 pixels. (ii) Large-FOV, high-resolution benchtop microscope images (~25Kx25K pixels) are formed by stitching 2048x2048 pixel benchtop microscope images. (iii) These large-FOV images and the Smartphone image are used as inputs for scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) algorithms.
  • both color images are converted into grey-scale images.
  • the SIFT frames (F) and SIFT descriptors (D) of the two images are computed.
  • F is a feature frame and contains the fractional center of the frame, scale, and orientation.
  • D is the descriptor of the corresponding frame in F.
  • the two sets of SIFT descriptors are then matched to determine the index of the best match. (iv) A homography matrix, computed using RANSAC, is used to project the low-resolution Smartphone image to match the FOV of the high-resolution benchtop microscope image, which is used as the gold standard.
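  • A compact OpenCV sketch of steps (iii) and (iv) is given below (an illustrative implementation under stated assumptions; the library, ratio-test threshold, and function names are not the exact code used in this work):

```python
import cv2
import numpy as np

def match_fov(smartphone_rgb, benchtop_rgb):
    """Globally register a low-resolution Smartphone microscope image to the
    large-FOV benchtop image using SIFT features, descriptor matching, and a
    RANSAC-estimated homography."""
    gray_lo = cv2.cvtColor(smartphone_rgb, cv2.COLOR_RGB2GRAY)
    gray_hi = cv2.cvtColor(benchtop_rgb, cv2.COLOR_RGB2GRAY)
    sift = cv2.SIFT_create()
    kp_lo, des_lo = sift.detectAndCompute(gray_lo, None)
    kp_hi, des_hi = sift.detectAndCompute(gray_hi, None)
    # Match descriptors and keep the best correspondences (Lowe ratio test).
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_lo, des_hi, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
    src = np.float32([kp_lo[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_hi[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # homography via RANSAC
    h, w = gray_hi.shape
    return cv2.warpPerspective(smartphone_rgb, H, (w, h))  # project onto benchtop FOV
```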
  • the Smartphone and benchtop microscope images are globally matched. However, they are not accurately registered, mainly due to distortions caused by the imperfections of the optical components used in the smartphone microscope design and inaccuracies originating during the mechanical scanning of the sample slide using the xyz translation stage.
  • This second part of the registration process locally corrects for all these distortions between the input and gold standard images by applying a pyramid elastic registration algorithm, which is depicted in FIG. 5 and described herein.
  • the last step is to upsample the target image in a way that will enable the network to learn the statistical transformation from the low-resolution Smartphone images 60 into high-resolution, benchtop-microscope equivalent images 70.
  • each sample was illuminated using a 0.55 NA condenser, which creates a theoretical resolution limit of approximately 0.4 µm using a 0.75 NA objective lens (20x).
  • the Smartphone microscope is based on a CMOS imager and has a half-pitch resolution of 0.87 µm, corresponding to a resolvable period of 1.74 µm.
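  • A quick sanity check of these sampling numbers, using the sensor pixel size and magnification quoted elsewhere in this description (the calculation itself is only illustrative):

```python
# Object-plane sampling of the Smartphone microscope.
sensor_pixel_um = 1.12            # Lumia 1020 CMOS pixel pitch (from the text)
magnification = 2.77              # external lens, ~2.77x (from the text)
object_pixel_um = sensor_pixel_um / magnification        # ~0.40 um per pixel at the sample
resolvable_period_um = 2 * 0.87                          # half-pitch resolution of 0.87 um
nyquist_satisfied = resolvable_period_um >= 2 * object_pixel_um
print(object_pixel_um, resolvable_period_um, nyquist_satisfied)  # ~0.404, 1.74, True
# The sampling is finer than Nyquist requires, so the optics (not the sensor) limit resolution.
```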
  • the deep neural network architecture receives three input feature maps (RGB channels) (FIG. 4), and following the first convolutional layer, the number of feature maps is expanded to 32.
  • the convolution operator of the i-th convolutional layer, for the x,y-th pixel in the j-th feature map, is given by:

    g_j^{i}(x, y) = b_j^{i} + \sum_{r} \sum_{u=0}^{U-1} \sum_{v=0}^{V-1} w_{j,r}^{i}(u, v) \, g_r^{i-1}(x + u, y + v)        Eq. (1)

  • here g defines the feature maps (input and output), b_j^{i} is a learned bias term, r is the index of the feature maps in the convolutional layer, and w_{j,r}^{i}(u, v) is the learned convolution kernel value at its u,v-th entry.
  • the size of the convolutional kernel is UxV, which was set to be 3x3 throughout the network.
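  • Written out naively (a didactic sketch of Eq. (1), not an efficient implementation), a single convolutional layer computes:

```python
import numpy as np

def conv_layer(g_prev, w, b):
    """Direct evaluation of Eq. (1): for each output map j and pixel (x, y),
    sum w[j, r, u, v] * g_prev[r, x+u, y+v] over r, u, v and add the bias b[j].
    g_prev: (R, H, W) input feature maps; w: (J, R, U, V) kernels; b: (J,) biases."""
    R, H, W = g_prev.shape
    J, _, U, V = w.shape
    out = np.zeros((J, H - U + 1, W - V + 1))
    for j in range(J):
        for x in range(out.shape[1]):
            for y in range(out.shape[2]):
                out[j, x, y] = b[j] + np.sum(w[j] * g_prev[:, x:x + U, y:y + V])
    return out
```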
  • the network consists of five residual blocks, which contribute to the improved training and convergence speed of the deep networks.
  • the residual blocks implement the following structure:

    X_k = X_{k-1} + \mathrm{ReLU}\left[\mathrm{Conv}\left(\mathrm{ReLU}\left[\mathrm{Conv}(X_{k-1})\right]\right)\right]        Eq. (2)

  • where ReLU is the non-linear activation function that was applied throughout the deep network. The number of feature maps A_k in the k-th residual block grows according to:

    A_k = A_{k-1} + \left\lfloor \frac{\alpha \, k}{K} + 0.5 \right\rfloor        Eq. (3)

  • with k \in [1:5], \alpha = 10, K = 5 (the number of residual blocks), and A_0 = 32.
  • the network is kept more compact and less demanding on computational resources (for both training and inference).
  • increasing the number of channels through residual connections creates a dimensional mismatch between the features represented by X_k and X_{k-1} in equation (2).
  • X_{k-1} was therefore augmented with zero-valued feature maps, to match the total number of feature maps in X_k.
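  • The residual blocks and their channel schedule can be sketched as follows (a PyTorch illustration built from Eqs. (2) and (3); the framework choice and layer names are assumptions, not the patent's code):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

def channel_schedule(A0=32, K=5, alpha=10):
    """Feature-map counts per residual block, Eq. (3):
    A_k = A_{k-1} + floor(alpha * k / K + 0.5)."""
    A = [A0]
    for k in range(1, K + 1):
        A.append(A[-1] + math.floor(alpha * k / K + 0.5))
    return A  # [32, 34, 38, 44, 52, 62]

class ResidualBlock(nn.Module):
    """Eq. (2): X_k = X_{k-1} + ReLU(Conv(ReLU(Conv(X_{k-1})))), with the
    identity path zero-padded in the channel dimension when the block
    expands the number of feature maps."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)
        self.extra = out_ch - in_ch
    def forward(self, x):
        y = F.relu(self.conv2(F.relu(self.conv1(x))))
        if self.extra > 0:  # zero-valued feature maps appended to the skip path
            x = F.pad(x, (0, 0, 0, 0, 0, self.extra))
        return x + y

channels = channel_schedule()
blocks = nn.Sequential(*[ResidualBlock(a, b) for a, b in zip(channels[:-1], channels[1:])])
```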
  • another convolutional layer increases the number of feature maps from 62 to 75. The following two layers transform these 75 feature maps, each with S x T pixels, into three output channels, each with 2.5S x 2.5T pixels, matching the size of the gold standard label images.
  • X_input is the network input (Smartphone microscope raw image), with the deep network operator denoted as F and the trainable network parameter space as Θ.
  • the indices c, s, and t denote the s,t-th pixel of the c-th color channel.
  • the cost function (equation (4)) balances the mean-squared error and image sharpness with a regularization parameter λ, which was set to be 0.001.
  • the sharpness term is defined in terms of the network output image; in its definition, (·)ᵀ is the matrix transpose operator.
  • the calculated cost function is then back-propagated to update the network parameters (Θ) by applying the adaptive moment estimation optimizer (Adam) (described in Kingma, D. P.; Ba, J. Adam: A Method for Stochastic Optimization, arXiv:1412.6980, 2015, which is incorporated herein by reference) with a constant learning rate of 2x10⁻⁴.
  • the network was trained with a mini-batch size of 32 patches (Table 1).
  • the convolution kernels were initialized by using a truncated normal distribution with a standard deviation of 0.05 and a mean of 0. All the network biases were initialized as 0.
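  • A hedged sketch of the training objective, initialization, and optimizer setup follows; the sharpness penalty below is a simple gradient-matching surrogate, an assumption standing in for the exact expression of the cost function, and the training loop itself is omitted.

```python
import torch
import torch.nn.functional as F

def enhancement_loss(output, target, lam=0.001):
    """Eq. (4)-style cost: mean-squared error balanced against an image-sharpness
    term weighted by lambda = 0.001. The sharpness term used here compares the
    spatial gradients of output and target (an illustrative surrogate)."""
    mse = F.mse_loss(output, target)
    dyo, dyt = output[:, :, 1:, :] - output[:, :, :-1, :], target[:, :, 1:, :] - target[:, :, :-1, :]
    dxo, dxt = output[:, :, :, 1:] - output[:, :, :, :-1], target[:, :, :, 1:] - target[:, :, :, :-1]
    sharpness = F.mse_loss(dyo, dyt) + F.mse_loss(dxo, dxt)
    return mse + lam * sharpness

def init_weights(module):
    """Truncated-normal kernel initialization (std 0.05, mean 0) and zero biases."""
    if isinstance(module, torch.nn.Conv2d):
        torch.nn.init.trunc_normal_(module.weight, mean=0.0, std=0.05)
        torch.nn.init.zeros_(module.bias)

# net.apply(init_weights)
# optimizer = torch.optim.Adam(net.parameters(), lr=2e-4)  # constant learning rate, mini-batches of 32
```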
  • the CIE-94 color distance was developed by the Commission internationale de l'éclairage (CIE).
  • CIE-94 was used as a metric to quantify the reconstruction quality of the deep neural network, with respect to the gold standard benchtop microscope images of the same samples.
  • the average and the standard deviation of the CIE-94 were calculated between the 2.5x bicubic-upsampled Smartphone microscope raw input images and the benchtop microscope images (used as gold standard), as well as between the deep network output images and the corresponding benchtop microscope images, on a pixel-by-pixel basis and averaged across the images of different samples (see Table 2).
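  • These two evaluation metrics can be computed with scikit-image as in the sketch below (assuming float RGB images in [0, 1]; the specific library calls are standard but were not specified in the source):

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede94
from skimage.metrics import structural_similarity

def evaluate(output_rgb, gold_rgb):
    """Average/std CIE-94 color distance and SSIM of a network output versus
    the benchtop (gold standard) image, computed pixel by pixel."""
    de94 = deltaE_ciede94(rgb2lab(output_rgb), rgb2lab(gold_rgb))
    ssim = structural_similarity(output_rgb, gold_rgb, channel_axis=-1, data_range=1.0)
    return float(de94.mean()), float(de94.std()), float(ssim)
```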

Landscapes

  • Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Microscopes, Condenser (AREA)

Abstract

A method of imaging a sample using a mobile phone having a camera includes securing an optomechanical attachment unit to the portable electronic device, the optomechanical attachment unit having a sample holder, one or more light sources, a lens or set of lenses, and a movable stage configured to move the sample relative to the camera. The sample is illuminated with the one or more light sources and an image of the sample is obtained with the camera. The image is input to a trained deep neural network that runs on a computing device using one or more processors. The trained deep neural network outputs an improved output image having one or more of improved spatial resolution, improved field-of-view, improved depth-of-field, improved signal-to-noise ratio, improved contrast, and improved color accuracy.
PCT/US2018/061311 2017-11-21 2018-11-15 Portable microscopy device with enhanced image performance using deep learning and methods of using the same Ceased WO2019103909A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201762589343P 2017-11-21 2017-11-21
US62/589,343 2017-11-21

Publications (1)

Publication Number Publication Date
WO2019103909A1 true WO2019103909A1 (fr) 2019-05-31

Family

ID=66631753

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2018/061311 Ceased WO2019103909A1 (fr) 2017-11-21 2018-11-15 Portable microscopy device with enhanced image performance using deep learning and methods of using the same

Country Status (1)

Country Link
WO (1) WO2019103909A1 (fr)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111007661A (zh) * 2019-12-02 2020-04-14 湖南国科智瞳科技有限公司 Deep learning-based automatic focusing method and device for microscopic images
CN111239999A (zh) * 2020-01-08 2020-06-05 腾讯科技(深圳)有限公司 Microscope-based optical data processing method and device, and storage medium
CN113589515A (zh) * 2021-08-19 2021-11-02 绍兴格物光学有限公司 Portable microscope observation kit
US11175488B2 (en) * 2016-02-18 2021-11-16 Oculyze Gmbh Mobile microscope assembly
US11249293B2 (en) 2018-01-12 2022-02-15 Iballistix, Inc. Systems, apparatus, and methods for dynamic forensic analysis
US11262286B2 (en) 2019-04-24 2022-03-01 The Regents Of The University Of California Label-free bio-aerosol sensing using mobile microscopy and deep learning
CN114563869A (zh) * 2022-01-17 2022-05-31 中国地质大学(武汉) Patch-type mobile phone microscope detection system and method for obtaining microscopy results
US20220284232A1 (en) * 2021-03-01 2022-09-08 Nvidia Corporation Techniques to identify data used to train one or more neural networks
US11460395B2 (en) 2019-06-13 2022-10-04 The Regents Of The University Of California System and method for measuring serum phosphate levels using portable reader device
US20220364995A1 (en) * 2021-05-14 2022-11-17 National Tsing Hua University Portable ring-type fluorescence optical system for observing microfluidic channel and operating method thereof
GB2607953A (en) * 2021-06-18 2022-12-21 Oxford Immune Algorithmics Ltd Device with focus drive system
GB2607956A (en) * 2021-06-18 2022-12-21 Oxford Immune Algorithmics Ltd Wafer for holding biological sample
GB2607955A (en) * 2021-06-18 2022-12-21 Oxford Immune Algorithmics Ltd Device with cartridge drive system
GB2608108A (en) * 2021-06-18 2022-12-28 Oxford Immune Algorithmics Ltd Device with cartridge
GB2608143A (en) * 2021-06-23 2022-12-28 Oxford Immune Algorithmics Ltd Device with stigmatic lens
CN116894841A (zh) * 2023-09-08 2023-10-17 山东天鼎舟工业科技有限公司 Visual inspection method for the quality of gearbox alloy housings
US20230351585A9 (en) * 2021-10-12 2023-11-02 Board Of Regents, The University Of Texas System Artificial intelligence enabled, portable, pathology microscope
US11915360B2 (en) 2020-10-20 2024-02-27 The Regents Of The University Of California Volumetric microscopy methods and systems using recurrent neural networks
EP4332876A1 (fr) * 2022-08-30 2024-03-06 CellaVision AB Method and system for constructing a digital color image representing a sample
US11946854B2 (en) 2018-12-26 2024-04-02 The Regents Of The University Of California Systems and methods for two-dimensional fluorescence wave propagation onto surfaces using deep learning
WO2024105344A1 (fr) * 2022-11-16 2024-05-23 Oxford Immune Algorithmics Ltd Device with cartridge drive system
WO2024105343A1 (fr) * 2022-11-16 2024-05-23 Oxford Immune Algorithmics Ltd Device with cartridge
US12020165B2 (en) 2018-11-15 2024-06-25 The Regents Of The University Of California System and method for transforming holographic microscopy images to microscopy images of various modalities
US12038370B2 (en) 2019-07-02 2024-07-16 The Regents Of The University Of California Magnetically modulated computational cytometer and methods of use
CN118644395A (zh) * 2024-08-14 2024-09-13 山东黄海智能装备有限公司 Electron microscope imaging image enhancement method
US12270068B2 (en) 2020-01-28 2025-04-08 The Regents Of The University Of California Systems and methods for the early detection and classification of live microorganisms using time-lapse coherent imaging and deep learning
US12300006B2 (en) 2019-12-23 2025-05-13 The Regents Of The University Of California Method and system for digital staining of microscopy images using deep learning
US12430554B2 (en) 2020-10-20 2025-09-30 The Regents Of The University Of California Device and method for neural-network based on-chip spectroscopy using a plasmonic encoder
US12488431B2 (en) 2023-04-20 2025-12-02 The Regents Of The University Of California Deep neural network for hologram reconstruction with superior external generalization

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5741648A (en) * 1992-11-20 1998-04-21 The Board Of Regents Of The University Of Oklahoma Cell analysis method using quantitative fluorescence image analysis
US20170090177A1 (en) * 2014-06-24 2017-03-30 Olympus Corporation Imaging device, image processing device, image processing method, and microscope
WO2017196885A1 (fr) * 2016-05-10 2017-11-16 The Regents Of The University Of California Method and device for high-resolution color imaging using merged images from holographic and lens-based devices

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5741648A (en) * 1992-11-20 1998-04-21 The Board Of Regents Of The University Of Oklahoma Cell analysis method using quantitative fluorescence image analysis
US20170090177A1 (en) * 2014-06-24 2017-03-30 Olympus Corporation Imaging device, image processing device, image processing method, and microscope
WO2017196885A1 (fr) * 2016-05-10 2017-11-16 The Regents Of The University Of California Method and device for high-resolution color imaging using merged images from holographic and lens-based devices

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RIVENSON Y. ET AL.: "Deep Learning Enhanced Mobile-Phone Microscopy", ACS PHOTONICS, vol. 5, no. 6, 15 March 2018 (2018-03-15), pages 2354 - 2364, XP081303226, Retrieved from the Internet <URL:https://www.researchgate.net/publication/321761248_Deep_learning_enhanced_mobile-phone_microscopy> *
SLADOJEVIC, SRDJAN ET AL.: "Deep neural networks based recognition of plant diseases by leaf image classification", COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2016, 2016, XP055385550, Retrieved from the Internet <URL:https://www.hindawi.com/journals/cin/2016/3289801/abs> *

Cited By (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11175488B2 (en) * 2016-02-18 2021-11-16 Oculyze Gmbh Mobile microscope assembly
US20220075166A1 (en) * 2016-02-18 2022-03-10 Oculyze Gmbh Mobile Microscope Assembly
US11249293B2 (en) 2018-01-12 2022-02-15 Iballistix, Inc. Systems, apparatus, and methods for dynamic forensic analysis
US12020165B2 (en) 2018-11-15 2024-06-25 The Regents Of The University Of California System and method for transforming holographic microscopy images to microscopy images of various modalities
US11946854B2 (en) 2018-12-26 2024-04-02 The Regents Of The University Of California Systems and methods for two-dimensional fluorescence wave propagation onto surfaces using deep learning
US11262286B2 (en) 2019-04-24 2022-03-01 The Regents Of The University Of California Label-free bio-aerosol sensing using mobile microscopy and deep learning
US11460395B2 (en) 2019-06-13 2022-10-04 The Regents Of The University Of California System and method for measuring serum phosphate levels using portable reader device
US12038370B2 (en) 2019-07-02 2024-07-16 The Regents Of The University Of California Magnetically modulated computational cytometer and methods of use
CN111007661A (zh) * 2019-12-02 2020-04-14 湖南国科智瞳科技有限公司 Deep learning-based automatic focusing method and device for microscopic images
CN111007661B (zh) * 2019-12-02 2022-02-22 湖南国科智瞳科技有限公司 Deep learning-based automatic focusing method and device for microscopic images
US12300006B2 (en) 2019-12-23 2025-05-13 The Regents Of The University Of California Method and system for digital staining of microscopy images using deep learning
CN111239999B (zh) * 2020-01-08 2022-02-11 腾讯科技(深圳)有限公司 Microscope-based optical data processing method, apparatus, and storage medium
CN111239999A (zh) * 2020-01-08 2020-06-05 腾讯科技(深圳)有限公司 Microscope-based optical data processing method, apparatus, and storage medium
US12270068B2 (en) 2020-01-28 2025-04-08 The Regents Of The University Of California Systems and methods for the early detection and classification of live microorganisms using time-lapse coherent imaging and deep learning
US11915360B2 (en) 2020-10-20 2024-02-27 The Regents Of The University Of California Volumetric microscopy methods and systems using recurrent neural networks
US12430554B2 (en) 2020-10-20 2025-09-30 The Regents Of The University Of California Device and method for neural-network based on-chip spectroscopy using a plasmonic encoder
US20220284232A1 (en) * 2021-03-01 2022-09-08 Nvidia Corporation Techniques to identify data used to train one or more neural networks
US20220364995A1 (en) * 2021-05-14 2022-11-17 National Tsing Hua University Portable ring-type fluorescence optical system for observing microfluidic channel and operating method thereof
US11609185B2 (en) * 2021-05-14 2023-03-21 National Tsing Hua University Portable ring-type fluorescence optical system for observing microfluidic channel and operating method thereof
GB2608108A (en) * 2021-06-18 2022-12-28 Oxford Immune Algorithmics Ltd Device with cartridge
GB2607955A (en) * 2021-06-18 2022-12-21 Oxford Immune Algorithmics Ltd Device with cartridge drive system
GB2607953A (en) * 2021-06-18 2022-12-21 Oxford Immune Algorithmics Ltd Device with focus drive system
GB2607956A (en) * 2021-06-18 2022-12-21 Oxford Immune Algorithmics Ltd Wafer for holding biological sample
GB2608143A (en) * 2021-06-23 2022-12-28 Oxford Immune Algorithmics Ltd Device with stigmatic lens
CN113589515A (zh) * 2021-08-19 2021-11-02 绍兴格物光学有限公司 Portable microscope observation kit
US20230351585A9 (en) * 2021-10-12 2023-11-02 Board Of Regents, The University Of Texas System Artificial intelligence enabled, portable, pathology microscope
CN114563869A (zh) * 2022-01-17 2022-05-31 中国地质大学(武汉) Patch-type mobile phone microscope detection system and method for acquiring microscopy results
EP4332876A1 (fr) * 2022-08-30 2024-03-06 CellaVision AB Method and system for constructing a digital color image representing a sample
WO2024047042A1 (fr) * 2022-08-30 2024-03-07 Cellavision Ab Method and system for constructing a digital color image representing a sample
WO2024105343A1 (fr) * 2022-11-16 2024-05-23 Oxford Immune Algorithmics Ltd Device with cartridge
WO2024105344A1 (fr) * 2022-11-16 2024-05-23 Oxford Immune Algorithmics Ltd Device with cartridge drive system
US12488431B2 (en) 2023-04-20 2025-12-02 The Regents Of The University Of California Deep neural network for hologram reconstruction with superior external generalization
CN116894841A (zh) * 2023-09-08 2023-10-17 山东天鼎舟工业科技有限公司 Visual inspection method for the quality of gearbox alloy housings
CN116894841B (zh) * 2023-09-08 2023-11-28 山东天鼎舟工业科技有限公司 Visual inspection method for the quality of gearbox alloy housings
CN118644395A (zh) * 2024-08-14 2024-09-13 山东黄海智能装备有限公司 Method for enhancing electron microscope imaging images

Similar Documents

Publication Publication Date Title
WO2019103909A1 (fr) Portable microscopy device with enhanced image performance using deep learning and methods of using the same
US11397405B2 (en) Method and system for pixel super-resolution of multiplexed holographic color images
Rivenson et al. Deep learning enhanced mobile-phone microscopy
US10838192B2 (en) Method and device for high-resolution color imaging using merged images from holographic and lens-based devices
EP3374817B1 (fr) Autofocus system for a computational microscope
Phillips et al. Multi-contrast imaging and digital refocusing on a mobile microscope with a domed LED array
US10871745B2 (en) Device and method for iterative phase recovery based on pixel super-resolved on-chip holography
CN105308949B (zh) Image acquisition device, image acquisition method, and recording medium
US12489872B2 (en) Compressed acquisition of microscopic images
US11828927B2 (en) Accelerating digital microscopy scans using empty/dirty area detection
US20150378143A1 (en) Whole slide imaging
Wu et al. Real-time, deep-learning aided lensless microscope
Kohli et al. Ring deconvolution microscopy: exploiting symmetry for efficient spatially varying aberration correction
CN113296259B (zh) Super-resolution imaging method and device based on an aperture modulation subsystem and deep learning
CN112992336A (zh) Intelligent pathology diagnosis system
Zhang et al. High-quality panchromatic image acquisition method for snapshot hyperspectral imaging Fourier transform spectrometer
JP2013141192A (ja) Depth-of-field extension system and depth-of-field extension method
KR20230073005A (ko) Apparatus and method for generating whole slide images including Z-axis scanning
Taplin et al. Practical spectral capture systems for museum imaging
Wang Deep Learning-Enabled Cross-Modality Image Transformation and Early Bacterial Colony Detection
Sasivimolkul et al. Whole slide imaging based on a low-cost camera
Guzmán et al. Low-Cost Fourier Ptychography Microscope Kit
Toledano González et al. MicroHikari3D: an automated DIY digital microscopy platform with deep learning capabilities
Zhang et al. Fusion of lens-free microscopy and mobile-phone microscopy images for high-color-accuracy and high-resolution pathology imaging
CN112037154A (zh) High-precision in-focus restoration method for whole-slide digital imaging using two captures

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18882208; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18882208; Country of ref document: EP; Kind code of ref document: A1)