WO2025106429A1 - Systems and methods for image-based cell sorting - Google Patents
- Publication number: WO2025106429A1
- Application number: PCT/US2024/055512
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- channel
- image data
- particle
- sorting
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N15/00—Investigating characteristics of particles; Investigating permeability, pore-volume or surface-area of porous materials
- G01N15/10—Investigating individual particles
- G01N15/14—Optical investigation techniques, e.g. flow cytometry
- G01N15/1429—Signal processing
- G01N15/1433—Signal processing using image recognition
- G01N15/1434—Optical arrangements
- G01N15/1468—Optical investigation techniques, e.g. flow cytometry with spatial resolution of the texture or inner structure of the particle
- G01N15/147—Optical investigation techniques, e.g. flow cytometry with spatial resolution of the texture or inner structure of the particle the analysis being performed on a sample stream
- G01N15/149—Optical investigation techniques, e.g. flow cytometry specially adapted for sorting particles, e.g. by their size or optical properties
- G01N2015/1006—Investigating individual particles for cytology
- G01N2015/1028—Sorting particles
- G01N2015/1497—Particle shape
Definitions
- cell proliferation can be analyzed, e.g., by counting a number of cells in a sample.
- cells can be stained or fluorescently marked, and sub-populations of cells can be counted based on an optical signal obtained from the cells.
- Cells can also be quantified based on cell cycle stage or characterized by morphological, biochemical and molecular changes indicating events such as apoptosis or necrosis.
- in immunophenotyping, e.g., to diagnose certain types of cancer, heterogeneous samples of blood or other tissues may be analyzed to quantify certain sub-populations and cell types.
- Tools for cell quantification include flow cytometry, where single cells are analyzed by reading a light signal emitted from a cell as it passes through a tube, and fluorescence-activated cell sorting, where single cells are analyzed by reading a fluorescent light signal emitted from a cell as it passes through a tube. Cells may be separated based on the read signal by applying electric fields to droplets containing single cells.
- the technologies allow for particle sorting (e.g., cell sorting) based on one or more images of a particle passing through a microfluidic channel.
- the sorting technologies include machine-learning methods for user-agnostic and/or feature-agnostic extraction of sorting characteristics from the particle images.
- the technologies may obviate the need to pre-process the images to select and extract particle features (e.g., area, perimeter, or eccentricity) and use these extracted features for image classification and sorting.
- the present technologies may allow cell populations of greater variety and with greater degrees of similarity to be successfully classified and sorted, compared to technologies relying on (manual) cell feature extraction for classification and sorting.
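For contrast, the kind of manual feature-extraction step that such technologies may obviate can be sketched as follows. The function below is illustrative only (not part of the disclosure): it computes the three example features named above (area, perimeter, eccentricity) from a binary particle mask using pixel counts and second-order image moments.

```python
import numpy as np

def mask_features(mask: np.ndarray) -> dict:
    """Area, perimeter, and eccentricity of a binary particle mask.

    Illustrative sketch of classical, hand-picked feature extraction;
    a feature-agnostic ML pipeline would skip this step entirely.
    """
    ys, xs = np.nonzero(mask)
    area = float(len(xs))
    # Perimeter: mask pixels with at least one 4-neighbour outside the mask.
    p = np.pad(mask, 1)
    interior = (p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:])
    perimeter = float(np.sum(mask & ~interior))
    # Eccentricity from second central moments of the pixel coordinates.
    cx, cy = xs.mean(), ys.mean()
    mxx = np.mean((xs - cx) ** 2)
    myy = np.mean((ys - cy) ** 2)
    mxy = np.mean((xs - cx) * (ys - cy))
    common = np.sqrt(((mxx - myy) / 2) ** 2 + mxy ** 2)
    lam1 = (mxx + myy) / 2 + common   # major-axis variance
    lam2 = (mxx + myy) / 2 - common   # minor-axis variance
    ecc = float(np.sqrt(1 - lam2 / lam1)) if lam1 > 0 else 0.0
    return {"area": area, "perimeter": perimeter, "eccentricity": ecc}
```

A round cell yields eccentricity near 0, an elongated one near 1; a downstream gate would then threshold on these scalars.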
- the present specification describes a system for handling a particle suspension in a volume of fluid.
- the system includes a sample source and one or more destination reservoirs, such as a first destination reservoir, and a second destination reservoir.
- the system may include a microfluidic channel in fluid communication with the sample source and one or more destination reservoirs, such as the first destination reservoir, and the second destination reservoir.
- the microfluidic channel may include a main channel having an inlet disposed downstream of the sample source, and a downstream end comprising a bifurcation having a distal end in fluid communication with a first destination channel having a first destination outlet and a second destination channel having a second destination outlet, the first destination reservoir disposed downstream of the first destination outlet, and the second destination reservoir disposed downstream of the second destination outlet. Additional destination reservoirs with respective destination outlets may also be applied to provide additional parallel processing capability.
- the system may include one or more imaging devices configured to image a first section of the main channel.
- the system may include a fluidic sorting device disposed upstream of the bifurcation and downstream of the first section, the sorting device may be configured to selectively direct the volume of fluid exiting the main channel away from the first destination channel to the second destination channel.
- the system may include a control system in electronic communication with the one or more imaging devices and the sorting device.
- the control system may be configured to transmit a control signal to the sorting device to actuate the selective direction of the fluid volume.
- the control system may include a processor and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, may cause the processor to: receive, from the one or more imaging devices, a digital image of a particle in the suspension; process the digital image by executing an algorithm to extract at least one characteristic associated with the particle, the at least one characteristic derived from a machine-learning model trained on image data; classify, based on the at least one characteristic, the particle; assign, based on the classification, the particle to at least a first group or a second group; and generate, if the particle belongs to the first group, the control signal.
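The decision path described above (receive image, extract a characteristic, classify, assign a group, and generate the control signal) can be sketched as a single control-loop pass. All names below (`process_particle`, `SortDecision`, the threshold classifier, the toy mean-intensity "model") are hypothetical placeholders, not the disclosed implementation; the real characteristic would come from the trained machine-learning model.

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class SortDecision:
    group: int          # 1 = first (sorted) group, 2 = second group
    fire_pulse: bool    # whether to send the control signal to the sorter

def process_particle(image: Sequence[float],
                     extract: Callable[[Sequence[float]], float],
                     threshold: float) -> SortDecision:
    """One pass of the classify-and-sort loop (illustrative only).

    `extract` stands in for the ML model mapping a digital image to a
    sorting characteristic; `threshold` stands in for the learned
    classification boundary.
    """
    characteristic = extract(image)              # ML-derived characteristic
    group = 1 if characteristic >= threshold else 2
    # The control signal is generated only for particles in the first group.
    return SortDecision(group=group, fire_pulse=(group == 1))

# Toy "model": mean pixel intensity as the sorting characteristic.
mean_intensity = lambda img: sum(img) / len(img)
decision = process_particle([0.9, 0.8, 0.7], extract=mean_intensity, threshold=0.5)
```

In a deployed system this loop would run once per imaged particle, with the `fire_pulse` flag driving the fluidic sorting device upstream of the bifurcation.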
- the present specification describes a method for handling a particle suspension in a volume of fluid.
- Applications can allow identification of one or more components in the particle suspension.
- the methods can achieve a processing rate of at least 10 cells per second, between 10 and 1000 cells per second, or more than 1000 cells per second.
- the method may include providing a sample source and one or more destination reservoirs, such as a first destination reservoir and a second destination reservoir.
- the method may include providing a microfluidic channel in fluid communication with the sample source, the first destination reservoir, and the second destination reservoir.
- the microfluidic channel may include a main channel having an inlet disposed downstream of the sample source, and a downstream end comprising a bifurcation having a distal end in fluid communication with a first destination channel having a first destination outlet and/or a second destination channel having a second destination outlet, the first destination reservoir disposed downstream of the first destination outlet, and the second destination reservoir disposed downstream of the second destination outlet.
- the method may include flowing the volume of fluid through the main channel.
- the method may include imaging a first section of the main channel using one or more imaging devices, and transmitting data of a digital image depicting the particle from the one or more imaging devices to a processor of a control system in electronic communication with the one or more imaging devices and a fluidic sorting device disposed upstream of the bifurcation.
- the method may include processing the digital image to extract at least one characteristic associated with the particle, the at least one characteristic derived from a machine-learning model trained on image data.
- the method may include classifying, by the processor, based on the at least one characteristic, the particle and assigning, by the processor, based on the classification, the particle to at least a first group or a second group.
- the method may include generating, by the processor, a control signal if the particle belongs to the first group; and transmitting, by the processor, the control signal to the sorting device to selectively direct the volume of fluid exiting the main channel away from the first destination channel to the second destination channel.
- FIG. 1 is a schematic diagram of an example system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure
- FIG. 2 is a schematic diagram of an example cell sorting process, according to embodiments of the present disclosure.
- FIG. 3 is a schematic diagram of an example system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure, where the system includes a flow focusing system;
- FIG. 4 is a schematic diagram of an example configuration and workflow employing a set of syringe pumps as sample sources in an example system
- FIG. 5 is a series of photographs illustrating an outlet region of a flow focusing system shown by dual (BF & FL) imaging with a fluorescent dye in the sample inlet;
- FIG. 6 is a schematic diagram of an example system for handling a particle suspension in a fluid volume, where the system includes a flow focusing system in which each lateral sheath channel has an independent sheath fluid source, according to embodiments of the present disclosure;
- FIG. 7 is a schematic diagram illustrating a shift of the sample stream when the flow rate through a first lateral sheath channel is greater than the flow rate through a second lateral sheath channel;
- FIG. 8 is a schematic diagram of an example source set-up for a steerable flow focusing system
- FIG. 9 is a schematic diagram of a system for handling a particle suspension in a fluid volume, where the system includes a flow focusing system and an event tracking system, according to embodiments of the present disclosure
- FIG. 10 is a schematic diagram of an Image Based Cell Sorter configuration, according to embodiments of the present disclosure.
- FIG. 11 is a series of images illustrating motion-blur that may result when a cell is imaged in-motion
- FIG. 12 is a schematic diagram of a system for handling a particle suspension in a fluid volume, where the system includes an event tracking system, according to embodiments of the present disclosure
- FIG. 13 is a diagram illustrating an example configuration for detecting a change in electric impedance caused by a particle in a volume of fluid
- FIG. 14 is a diagram illustrating generating cell catalogues including paired image and impedance data for machine learning modeling, cell sorting, and/or real-time decision making;
- FIG. 15 is a schematic diagram of an example impedance detection system that can be used for an impedance-based event tracking system (ETS) and/or impedance cytometry system (ICS), according to embodiments of the present disclosure;
- FIG. 16 is a schematic diagram of an example impedance detection system that can be used for an impedance-based ETS and/or ICS, according to embodiments of the present disclosure
- FIG. 17 is a set of photographs of an example impedance detection system that can be used for an impedance-based ETS and/or ICS, according to embodiments of the present disclosure
- FIG. 18 is a magnified photograph of the example channels shown in FIG. 17;
- FIG. 19 is a schematic diagram of a system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure.
- FIG. 20 is a diagram illustrating operation of an example ETS, imaging device, and sorting device, according to embodiments of the present disclosure
- FIG. 21A is a diagram illustrating a laser illumination and detection device (laser trigger);
- FIG. 21B is a diagram illustrating a particle measurement component;
- FIG. 21C is a graph showing Analog-to-Digital Conversion (ADC) of analog signals from the side scatter detector, according to embodiments of the present disclosure;
- FIG. 22A is a diagram illustrating an ETS connected to an imaging device
- FIG. 22B is a set of graphs illustrating example laser trigger input signal and camera trigger output timing illustrating a timeout period to prevent double trigger
- FIG. 22C is a set of graphs illustrating example laser trigger input signal and camera trigger output timing illustrating a timeout period to prevent double sorts, according to embodiments of the present disclosure
- FIG. 23A is a diagram illustrating an ETS connected to the control system and a sorting device
- FIG. 23B is a set of graphs illustrating example laser trigger input signal, camera trigger output timing, and piezo trigger output timing
- FIG. 24 is a set of time graphs illustrating the principle of operation of an example ETS, according to embodiments of the present disclosure
- FIG. 25 is a set of time graphs illustrating the principle of operation of an example ETS, according to embodiments of the present disclosure
- FIG. 26 is a schematic diagram of a part of a system for handling a particle suspension in a fluid volume illustrating an example fluidic sorting operation, according to embodiments of the present disclosure
- FIG. 27 is a schematic diagram of an example optical system that can be used with a system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure
- FIG. 28 is a diagram illustrating an angle of a laser side-scatter beam through a channel and the resulting correlation between cell position and focus, according to embodiments of the present disclosure
- FIG. 29 is a schematic diagram of a fluidic system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure
- FIG. 30 is a schematic diagram of a system and process for handling a particle suspension in a fluid volume using an image-based cell sorting system and method, according to embodiments of the present disclosure
- FIG. 31 is a screenshot of a graphical user interface of a control system for operating an example system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure
- FIG. 32 is a schematic workflow diagram illustrating basic operation of an example system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure
- FIG. 33 is a schematic workflow diagram of an example system for handling a particle suspension in a fluid volume including an image-based machine-learning guided particle sorting technology, according to embodiments of the present disclosure
- FIG. 34 is a schematic diagram of an example image processing workflow for an example system for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure
- FIG. 35 is a set of particle images processed with image processing technologies and used as input for a machine learning algorithm for image-based cell sorting (IBCS), according to embodiments of the present disclosure
- FIG. 36A is a set of particle images used as input for a machine learning algorithm for IBCS
- FIG. 36B is a set of graphs illustrating the distribution of image-based parameters in a set of particle images used as input for a machine learning algorithm for IBCS, according to embodiments of the present disclosure
- FIG. 37 is a schematic diagram of an example particle classification workflow for an example IBCS system, according to embodiments of the present disclosure.
- FIG. 38 is a schematic diagram of an example particle classification workflow utilizing Convolutional Neural Networks (CNN) for an example IBCS system, according to embodiments of the present disclosure
- FIG. 39A is a set of monocyte images used as input for a machine learning algorithm for IBCS
- FIG. 39B is a set of THP1 cell images used as input for a machine learning algorithm for IBCS
- FIG. 39C is a set of graphs illustrating predictive accuracy of a machine learning algorithm for IBCS trained on the image sets of FIG. 39A and FIG. 39B, according to embodiments of the present disclosure
- FIG. 40 is a schematic diagram illustrating monocyte cell differentiation
- FIGS. 41A-41D are sets of photographs of macrophage subtypes used as input for a machine learning algorithm for IBCS;
- FIG. 42 is a schematic diagram of an example particle classification workflow utilizing CNN for an example IBCS system, according to embodiments of the present disclosure
- FIG. 43A is a scatter plot (t-distributed stochastic neighbor embedding) illustrating the distribution of the cell sets illustrated in FIGS. 41A-D utilizing CNN for an example IBCS system;
- FIG. 43B shows separate scatter plots (t-distributed stochastic neighbor embedding) illustrating the distribution of the cell sets illustrated in FIGS. 41A-D, according to embodiments of the present disclosure;
- FIG. 44 is a schematic diagram of an example particle classification workflow utilizing Variational Autoencoders (VAEs) for an example IBCS system, according to embodiments of the present disclosure
- FIG. 45A is a scatter plot (t-distributed stochastic neighbor embedding) illustrating the distribution of the cell sets illustrated in FIGS. 41A-D utilizing VAEs for an example IBCS system;
- FIG. 45B shows separate scatter plots (t-distributed stochastic neighbor embedding) illustrating the distribution of the cell sets illustrated in FIGS. 41A-D, according to embodiments of the present disclosure;
- FIG. 46 is a schematic diagram illustrating a YOLO (you-only-look-once) model for an example IBCS system, according to embodiments of the present disclosure
- FIG. 47 is a schematic diagram of an example particle classification workflow using a YOLO model for an example IBCS system, according to embodiments of the present disclosure
- FIG. 48 is a schematic diagram of a system for handling a particle suspension in a fluid volume including an event tracking system utilizing a YOLO model, according to embodiments of the present disclosure
- FIG. 49A is a photograph of a mammalian cell with yeast attached thereon;
- FIG. 49B is a gradient image of the same cell;
- FIG. 49C illustrates the detection of yeast in the same photograph using Hough transform, according to embodiments of the present disclosure;
- FIG. 50 illustrates the detection of yeast in a photograph of mammalian cells with attached yeast cells using Hough transform (red) and YOLO algorithm (green), according to embodiments of the present disclosure
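The Hough-transform detection referenced in FIGS. 49C and 50 exploits the roughly circular outline of yeast cells. The following is a minimal voting-accumulator sketch of the circular Hough transform at a single, known radius; it is illustrative only, since a production detector would scan a radius range and weight votes by edge gradient.

```python
import math
import numpy as np

def hough_circle_center(edges: np.ndarray, radius: int) -> tuple:
    """Most likely circle centre at a fixed radius (minimal sketch).

    Each edge pixel votes for every candidate centre lying `radius`
    away from it; the accumulator peak is the detected centre.
    """
    acc = np.zeros_like(edges, dtype=int)
    h, w = edges.shape
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        for t in range(360):
            cy = int(round(y - radius * math.sin(math.radians(t))))
            cx = int(round(x - radius * math.cos(math.radians(t))))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy, cx] += 1
    peak_cy, peak_cx = np.unravel_index(np.argmax(acc), acc.shape)
    return int(peak_cy), int(peak_cx)

# Synthetic edge image: a circle of radius 6 centred at (20, 15).
edges = np.zeros((40, 40), dtype=bool)
for t in range(360):
    edges[20 + int(round(6 * math.sin(math.radians(t)))),
          15 + int(round(6 * math.cos(math.radians(t))))] = True
```

Running the detector on the synthetic image recovers the planted centre to within a pixel, which is the behaviour the red detections in FIG. 50 rely on.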
- FIG. 51 is a diagram illustrating an example cell line development process utilizing the technologies described in this specification, according to embodiments of the present disclosure
- FIGS. 52A-52C illustrate an example image-based cell sorting process
- FIG. 52A shows an image data set for training of a machine learning algorithm
- FIG. 52B is a flow diagram illustrating the development of a machine-learning algorithm that can be used with the technologies described herein
- FIG. 52C is a flow diagram illustrating an image-based cell sorting process, according to embodiments of the present disclosure
- FIGS. 53A-53C illustrate example image data set that can be used with the technologies described in this specification for stable cell line development;
- FIG. 53A shows images of Naive/healthy ChoZn cells;
- FIG. 53B shows images of metabolically (glutamine) starved ChoZn cells after 3 days;
- FIG. 53C shows images of recently electroporated cells (30 min post electroporation);
- FIGS. 54A-54D show the results of an initial cell sorting study using the technologies described in this specification;
- FIG. 54A shows a scatter plot and images of cells in a healthy state (blue);
- FIG. 54B) shows a scatter plot and images of cells of metabolically stressed subpopulation (orange);
- FIG. 54C shows a scatter plot and images of cells in a physically damaged state (green);
- FIG. 54D shows a scatter plot summarizing the results, according to embodiments of the present disclosure;
- FIGS. 55A-55B show the results of an initial cell sorting study using the technologies described in this specification;
- FIG. 55A shows a scatter plot illustrating development of example cell phenotypes over a period of 21 days;
- FIG. 55B shows a scatter plot and images of cells illustrating development of example cell phenotypes over a period of 21 days, according to embodiments of the present disclosure;
- FIGS. 56A-56B show additional results of an initial cell sorting study using the technologies described in this specification;
- FIG. 56A shows a scatter plot of healthy control cells;
- FIG. 56B shows scatter plots of transfected cells over a period of 21 days, according to embodiments of the present disclosure;
- FIG. 57 shows images of a “cluster” phenotype of example cells
- FIG. 58 shows an image of an example cell spheroid
- FIG. 59 is a diagram illustrating an example system and process adapted for imaging and sorting of spheroids, according to embodiments of the present disclosure
- FIGS. 60A-60C show components of an example IBCS adapted for organoid/spheroid imaging and sorting
- FIG. 60A is a photograph of a pump set-up with large capacity syringes to accommodate higher flow rates compared to systems for cell sorting
- FIG. 60B is a photograph of a part of an IBCS system configured as a microfluidic chip
- FIG. 60C is a photograph of beads obtained using optics for larger particle imaging while cataloguing 20 µm diameter beads with a 10x objective, according to embodiments of the present disclosure;
- FIG. 61 is a set of images of example spheroids obtained in a device with a 1x1 mm (width x height) channel, according to embodiments of the present disclosure
- FIG. 62 is an image of an example multi-cellular organoid/spheroid obtained using a system, according to embodiments of the present disclosure
- FIG. 63 is a flowchart illustrating an example particle sorting method, according to embodiments of the present disclosure.
- FIG. 64 is a schematic diagram of an Image Based Cell Sorter application, according to embodiments of the present disclosure.
- FIG. 65 is a schematic diagram of a process for performing multi-gated fluorescence exposure imaging, according to embodiments of the present disclosure
- FIG. 66 is a diagram illustrating particle flow as captured using multi-gated imaging, according to embodiments of the present disclosure.
- FIG. 67 shows a brightfield image and a fluorescence image representing a sequence of particle sub-images captured using multi-gated fluorescence exposure imaging, according to embodiments of the present disclosure
- FIG. 68 is a schematic diagram illustrating a processor for performing multi-gated fluorescence exposure imaging with variable gated exposure time, according to embodiments of the present disclosure
- FIG. 69 shows a brightfield image and a fluorescence image representing a sequence of particle sub-images captured using multi-gated fluorescence exposure imaging with variable gated exposure time, according to embodiments of the present disclosure
- FIG. 70 is a schematic diagram illustrating a process for performing multi-color channel fluorescence imaging, according to embodiments of the present disclosure
- FIG. 71 is a schematic diagram illustrating an example control scheme for combinations of multi-color imaging, variable exposure time, and multiple images per channel, according to embodiments of the present disclosure
- FIG. 72 is a schematic diagram of a first example microfluidic device for sorting particles suspended in a fluid, according to embodiments of the present disclosure
- FIG. 73 is a schematic diagram of a second example microfluidic device, according to embodiments of the present disclosure.
- FIG. 74 is a schematic diagram illustrating the operation of an example fluidic sorting mechanism, according to embodiments of the present disclosure.
- FIG. 75 is a timing diagram illustrating the timing of an example fluidic sorting device, according to embodiments of the present disclosure.
- FIG. 76 is a schematic diagram illustrating a transformation matrix for mapping between two views of the same object, according to embodiments of the present disclosure
- FIG. 77 shows a brightfield image and a fluorescence image of a calibration device having a constellation of microbeads
- FIG. 78 is a flow diagram of an example homography calibration protocol, according to embodiments of the present disclosure.
- FIG. 79 is a diagram illustrating an example process for defining a region of interest around a particle represented in a brightfield image, according to embodiments of the present disclosure
- FIG. 80 shows a first region of interest defined in a brightfield image translated to a second region of interest in a fluorescence image using a transformation matrix, according to embodiments of the present disclosure
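The mapping sketched in FIGS. 76-80 can be illustrated with a Direct Linear Transform (DLT): a 3x3 homography is estimated from four or more matched points (e.g. the bead constellation of FIG. 77) and then used to translate a region of interest from the brightfield view into the fluorescence view. The helper names below are illustrative assumptions, not the patented calibration protocol.

```python
import numpy as np

def fit_homography(src, dst):
    """Fit a 3x3 homography H with dst ~ H @ src (DLT, >= 4 point pairs).

    Each correspondence contributes two linear constraints; the SVD
    null vector of the stacked system gives H up to scale.
    """
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]      # normalize so H[2, 2] == 1

def map_point(H, pt):
    """Map a 2-D point through H using homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w

# Four matched bead positions related by a known scale-and-shift
# (brightfield -> fluorescence), used here as calibration data.
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, 7), (25, 7), (5, 27), (25, 27)]
H = fit_homography(src, dst)
```

Once H is known, the corners of a brightfield ROI can be pushed through `map_point` to obtain the corresponding fluorescence ROI, as in FIG. 80.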
- FIG. 81A is a schematic diagram illustrating a first dual-channel machine-learning model architecture and FIG. 81B is a schematic diagram illustrating a second dual-path machine-learning model architecture, according to embodiments of the present disclosure;
- FIG. 82 shows example images of out-of-focus cells and example images of in-focus cells that may be used to train a classifier, according to embodiments of the present disclosure
- FIG. 83 shows example images of cells having different signal-to-noise ratios
- FIG. 84 is a schematic diagram of a process for deriving sorting parameters, according to embodiments of the present disclosure.
- FIG. 85 is a conceptual diagram illustrating an example computing device / system, according to embodiments of the present disclosure.
- FIGS. 86A and 86B show images of cells obtained using confocal imaging and the image-based cell sorter, according to embodiments of the present disclosure
- FIG. 87 illustrates stained organelles appearing as distinct phenotypes that may be distinguished using a principal component analysis, according to embodiments of the present disclosure
- FIG. 88 illustrates a classical approach to assess fractional localization on a plasma membrane versus an interior of a cell, according to embodiments of the present disclosure
- FIG. 89 illustrates a classical dilation-based masking algorithm, according to embodiments of the present disclosure
- FIG. 90 illustrates convex hull masking, according to embodiments of the present disclosure
- FIG. 91 illustrates how brightfield masking enables localization of signals from specific regions in fluorescence images, according to embodiments of the present disclosure
- FIG. 92 illustrates fluorescence signal from a plasma membrane and from a nucleic acid stain, according to embodiments of the present disclosure
- FIG. 93 illustrates an example of how the image-based cell sorter may classify and/or sort particles based on a ratio of perimeter-to-interior fluorescence signal, according to embodiments of the present disclosure
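The perimeter-to-interior ratio referenced in FIG. 93 can be sketched with a simple mask-erosion scheme: erode the brightfield-derived cell mask a few times, treat the eroded region as the interior and the remaining shell as the membrane band, and compare mean fluorescence in each. This is an illustrative assumption about one way to compute such a ratio, not the disclosed algorithm; it deliberately avoids SciPy by hand-rolling a 4-neighbour erosion.

```python
import numpy as np

def erode(mask: np.ndarray) -> np.ndarray:
    """One step of 4-neighbour binary erosion."""
    p = np.pad(mask, 1)
    return (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
            & p[1:-1, :-2] & p[1:-1, 2:])

def membrane_interior_ratio(fluor: np.ndarray, mask: np.ndarray,
                            band: int = 2) -> float:
    """Mean fluorescence in the perimeter band vs the interior.

    A ratio well above 1 flags membrane-localized staining; near 1
    indicates a uniformly distributed signal.
    """
    interior = mask.copy()
    for _ in range(band):
        interior = erode(interior)
    shell = mask & ~interior          # membrane band of width ~`band` px
    return float(fluor[shell].mean() / fluor[interior].mean())
```

For a cell with a stained plasma membrane (bright ring, dim interior) the ratio is large, so a gate such as `ratio > 1.5` could separate membrane-localized from internalized signal.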
- FIG. 94 illustrates an application of image-based cell sorting to a pooled mixture of cells with different protein expression phenotypes, according to embodiments of the present disclosure
- FIG. 95 illustrates localization of surface-bound or internalized antibodies based on pH- dependent fluorophores versus image-based signal localization, according to embodiments of the present disclosure
- FIG. 96 illustrates an application of the image-based cell sorter to emulsion droplets, according to embodiments of the present disclosure
- FIG. 97 shows example images of emulsion cell droplets, according to embodiments of the present disclosure.
- FIG. 98 illustrates an application of the image-based cell sorter to T-Cell activation, according to embodiments of the present disclosure
- FIG. 99 is a graph illustrating an example principal component analysis of activated T-Cells imaged using the system, according to embodiments of the present disclosure.
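The analysis behind FIG. 99 can be sketched in a few lines of numpy: rows of a matrix are per-cell image feature vectors, and projecting onto the leading principal components exposes phenotype clusters (e.g. naive vs activated T-Cells, as in FIG. 100). The feature table below is synthetic; only the SVD-based projection is the point of the sketch.

```python
import numpy as np

def pca_project(X: np.ndarray, n_components: int = 2):
    """Project feature vectors onto their leading principal components.

    Centres the data, takes an SVD, and returns per-sample scores plus
    the variance captured along each component (descending order).
    """
    Xc = X - X.mean(axis=0)
    _, s, vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ vt[:n_components].T    # per-cell 2-D coordinates
    var = s ** 2 / (len(X) - 1)          # variance along each component
    return scores, var

rng = np.random.default_rng(0)
# Synthetic feature table: 100 cells x 4 features, with variance
# concentrated along the first feature axis.
X = rng.normal(size=(100, 4)) * np.array([5.0, 1.0, 0.5, 0.1])
scores, var = pca_project(X)
```

Plotting `scores[:, 0]` against `scores[:, 1]` yields the kind of scatter shown in FIG. 99, with each axis labelled by its share of the total variance.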
- FIG. 100 illustrates different phenotypes represented by the principal components, according to embodiments of the present disclosure
- FIG. 101 illustrates a comparison of T-Cell activation as measured using image-based phenotyping versus flow cytometry, according to embodiments of the present disclosure
- FIG. 102 is a flowchart illustrating example image-based and traditional T-Cell activation assay workflows, according to embodiments of the present disclosure
- FIG. 103 illustrates results of image-based phenotyping of naive T-Cells, activated T-Cells, and tumor cells, according to embodiments of the present disclosure
- FIG. 104 illustrates results of image-based functional screening of a complex cell mixture, according to embodiments of the present disclosure
- FIG. 105 shows comparison images of naive T-Cells, activated T-Cells, and tumor cells in a complex cell mixture with a functional T-Cell engager and a non-functional T-Cell engager, according to embodiments of the present disclosure
- FIG. 106 illustrates an example of image-based phenotyping of human monocyte-derived dendritic cells, according to embodiments of the present disclosure
- FIGS. 107A through 107D illustrate the results of image-based phenotyping corresponding to morphological changes associated with B-Cell activation, according to embodiments of the present disclosure
- FIG. 108 illustrates an application of the image-based cell sorter in multi-cellular spheroid sorting, according to embodiments of the present disclosure.
- FIG. 109A through 109C show example images of multi-cellular spheroids, according to embodiments of the present disclosure.
- Cell analysis tools such as flow cytometry, Fluorescence-Activated Cell Sorting (FACS), and cell microscopy are foundational techniques in assay and cell line development and are used for research and development of therapeutics and diagnostics. These tools provide valuable insights by sorting cells based on complex biological criteria or act as an endpoint readout for biological assays.
- Flow cytometers analyze cell populations by measuring scattered light and fluorescence profiles of cells as they serially flow through a microfluidic channel, whereby they may achieve high-throughput analysis at the expense of spatial resolution.
- cell microscopy may map detailed cellular structure with high spatial resolution and with a wide range of optical modalities, but often at the expense of throughput and/or without the ability to effectively sort cells.
- the processing rate can be at least 100 cells per second, or between 100 to 1000 cells per second, or more than 1000 cells per second.
- Cells may be sorted based on image information that can be used to extract characteristics of the cell. Characteristics can include features of the cell and/or information of the surrounding area of the cell within a region of interest, e.g., morphology, granularity, texture, a label, and/or co-located particles of each cell.
- Cell analysis and sorting may be performed using machine-learning-based and/or classical image processing techniques deployed on an Image-Based Cell Sorting (IBCS) control system to provide ‘on-the-fly’ analysis of image data generated in real time.
- a cell may be sorted into one of two (or more) microfluidic separation channels using a hydraulic pulse actuated using an event tracking system (ETS).
- the IBCS system can leverage label-based or label-free assessment of cell-level and sub-cellular structure, probe cell morphology and granularity with a high degree of resolution, and explore signal localization in an unprecedented manner.
- the addition of high-throughput sorting based on complex image data can bridge population-level information with individual cell characteristics and cell genomic profiling, and can expand cell sorting capabilities at the frontier of therapeutic development.
- the technologies described in this specification can be used for cell polarization studies (e.g., CarT, Myeloid, B or T cell drug response), tumor profiling (e.g., label free lymphocyte invasion), cell-cell binding (e.g., using Yeast Surface Display (YSD) label-free or other display technologies), stable cell line development, endocytosis, cell sorting (e.g., to sort cells with poor or unknown biomarkers), sample QC (e.g., singles/doublets, cell health, debris characterization, etc.), assessment of multicellular spheroids, Drug/Therapeutic screening (e.g., antibody-drug conjugates, T-Cell Engagers), function-upon- binding readouts in multi-cellular assays, cell phenotyping, cell-cell binding assays, or cell activation screening (T-Cells, D-Cells, B-Cells).
- the technologies described in this specification can be used on live cells, dead cells, or fixed cells
- FIG. 1 illustrates a system 100 for handling a particle suspension in a fluid volume, according to embodiments of the present disclosure.
- the system includes a sample source 140, and at least a first destination reservoir 112 and a second destination reservoir 122.
- the system includes a microfluidic channel in fluid communication with the sample source 140, the first destination reservoir 112, and the second destination reservoir 122.
- the microfluidic channel includes a main channel 130.
- the main channel 130 has an inlet 132 disposed downstream of the sample source 140 and has a downstream end including a bifurcation 134.
- the bifurcation has a distal end in fluid communication with a first destination channel 110 having a first destination outlet and a second destination channel 120 having a second destination outlet.
- the first destination reservoir 112 is disposed downstream of the first destination outlet, and the second destination reservoir 122 is disposed downstream of the second destination outlet.
- the system 100 includes an imaging device 160, e.g., an optical imaging device, configured to image a (first) section 136 of the main channel 130.
- the system 100 may include a sorting device 170 disposed upstream of the bifurcation 134 and downstream of the first section 136.
- the sorting device 170 may be configured to selectively direct a flow of fluid exiting the main channel 130 away from the first destination channel 110 to the second destination channel 120.
- the system 100 may include a control system 150 in electronic communication with the imaging device 160 and the sorting device 170.
- the control system 150 may be configured to transmit a control signal to the sorting device 170 to actuate the selective direction of the flow of fluid.
- the control system 150 may include one or more computing devices and/or system, such as the computing device / system 8500 described below with reference to FIG. 85.
- the control system 150 may be an electronic system that includes a processor and a memory. In operation, the control system 150 may receive a digital image of a particle in the suspension from the imaging device 160. The digital image may be processed by executing an algorithm including a machine-learning model trained on image data, e.g., raw or unprocessed image data, to extract at least one characteristic associated with the particle. The system 100 may then classify the particle based on the at least one characteristic. Based on the classification, the particle may be assigned to at least a first group or a second group. The control system 150 may generate a control signal, e.g., if the particle belongs to the first group. The control signal may be transmitted to the sorting device 170 to actuate the selective direction of the flow of fluid.
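The classify-and-sort loop described above can be sketched as follows. This is a minimal illustration only: the model interface, the 0.5 threshold, and the signal callback are hypothetical stand-ins for the IBCS control system, not the actual implementation.

```python
# Illustrative sketch of the control system 150 workflow: receive an image,
# run a trained model, classify, and signal the sorting device for one group.

def classify_particle(image, model, threshold=0.5):
    """Run the trained model on a digital image and assign a group."""
    score = model(image)  # e.g., probability that the particle is a target
    return "first_group" if score >= threshold else "second_group"

def handle_image(image, model, send_control_signal):
    """Classify an imaged particle; actuate sorting for first-group particles."""
    group = classify_particle(image, model)
    if group == "first_group":
        send_control_signal()  # e.g., transmitted to the sorting device 170
    return group
```

In use, `model` would be a trained machine-learning model and `send_control_signal` the electronic interface to the sorting device; both names are assumed for illustration.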
- FIG. 2 An example process for sorting cells into either targets or waste using the system 100 is illustrated in FIG. 2.
- the process may include flowing a volume of fluid from a sample source storing a plurality of particles (e.g., cells) through the main channel 130 toward the bifurcation 134 and into the first destination channel 110 and/or the second destination channel 120.
- the system 100 and/or methods described herein may be implemented in various ways depending on the configuration of the channels.
- the first destination channel 110 (the waste channel) may be configured as a default channel such that when no sorting action is taken by the system, the particle travels from the main channel 130 into the first destination channel 110.
- the first destination channel 110 may have a larger cross-sectional area (perpendicular to the direction of flow) than the second destination channel 120 (the target channel).
- if a cell is determined by the control system 150 not to be a target cell, no control signal is sent by the control system 150 to the sorting device 170, and the fluid volume including the cell is allowed to flow into the first destination channel 110 and toward the waste reservoir 112.
- a signal is sent to the sorting device 170 not to actuate the selective direction of the fluid volume.
- a control signal may be sent by the control system 150 to the sorting device 170 to actuate the selective direction of the fluid volume into the second destination channel 120 and toward the target reservoir 122 (e g., for collection).
- the second destination channel 120 (the target channel) may be configured as a default channel, e.g., by having a larger cross-sectional area than the first destination channel 110 (the waste channel).
- the control system 150 may send no control signal to the sorting device 170, and a fluid volume including the cell may be allowed to flow into the second destination channel 120 and toward the target reservoir 122.
- the control system 150 may send a signal to the sorting device 170 not to actuate the selective direction of the fluid volume. If a cell is determined by the control system 150 not to be a target cell, the control system 150 may send a control signal to the sorting device to actuate the selective direction of the fluid volume into the first destination channel 110 and toward the waste reservoir 112.
- the first destination channel 110 (the waste channel) and the second destination channel 120 (the target channel) may have the same configuration.
- the control system 150 may send a first control signal to the sorting device 170 to actuate the selective direction of the fluid volume into the second destination channel 120 and toward the target reservoir 122; and, if the control system 150 determines the cell is likely to be a cell that is not to be a target cell, the control system 150 may send a second control signal to the sorting device 170 to actuate the selective direction of the fluid volume into the first destination channel 110 and toward the waste reservoir 112.
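The three channel configurations described above (waste as default, target as default, and symmetric) can be summarized in a small decision function. The channel labels and signal names below are illustrative only; returning `None` represents "no control signal sent," in which case the particle follows the default channel.

```python
# Sketch of signal routing under the three channel configurations described
# above; labels are hypothetical stand-ins, not the disclosed signal names.

def sort_signal(is_target, default_channel):
    """Return the control signal (if any) for a classified particle."""
    if default_channel == "waste":
        # waste is default: only target cells require an actuation signal
        return "divert_to_target" if is_target else None
    if default_channel == "target":
        # target is default: only non-target cells require an actuation signal
        return None if is_target else "divert_to_waste"
    # symmetric configuration: every particle receives an explicit signal
    return "divert_to_target" if is_target else "divert_to_waste"
```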
- the IBCS may be used to classify and/or sort particles other than cells including, but not limited to, protein crystals, minerals, beads, clusters, etc.
- the IBCS may classify cells, but rather than sort them, it may alter the cells using a pulsed electric field (PEF).
- the PEF may be calibrated to kill the cell by causing it to disintegrate from the force of the electric field.
- the IBCS may be configured to kill cells other than those it determines correspond to the target phenotype.
- the system may be configured to sort into more than two possible destinations.
- the microfluidic devices shown in FIGS. 1, 3, and elsewhere may be daisy chained such that a first sorting mechanism can sort particles between a target destination for particles of interest (e.g., corresponding to one of a number of target groups) and a waste destination.
- the target destination may lead to a second analysis region (e.g., for imaging and/or impedance spectroscopy, etc.) followed by a second sorting mechanism for sorting particles to a first group destination or a second group destination.
- One or both destinations may lead to an additional analysis region and/or sorting mechanism, and so on.
- the system may be configured with various collection reservoirs for the different target groups.
- the microfluidic channel may include a flow focusing system disposed upstream of the inlet of the main channel as illustrated in FIG. 3.
- the flow focusing system may align particles (e.g., cells) in a volume of fluid into a narrow, single-file sample stream at or near a center line of the main channel 130, allowing the cells to be imaged, one at a time, and to remain in focus while flowing through the imaging region (e.g., the first section 136).
- This alignment can be accomplished by providing multiple sheath flows in a laminar flow regime.
- the system 100 may include a lateral sheath flow focusing system.
- the flow focusing system may include a sample channel 230, first lateral sheath channel 210, a second lateral sheath channel 220, and a sheath fluid source 240.
- the sample source 140 and sheath fluid source 240 can be or can include a pump.
- the sample source 140 and sheath fluid source 240 can be or can include a reservoir.
- the sample source 140 and sheath fluid source 240 can each include a reservoir fluidically connected to one or more pumps (not shown), which can be controlled by control system 150.
- a sheath fluid source 240 can be integrated into a pumping system, e.g., a syringe pump system.
- sheath fluid reservoirs and/or pumps may be configured as syringes and/or syringe pumps, and/or reservoirs in fluid communication with air pressure driven pumps, piezoelectric pumps, rotary pumps, and the like.
- An example configuration and workflow employing a set of syringe pumps as sample sources is shown in FIG. 4.
- the system 100 may additionally or alternatively include a central sheath flow focusing system.
- the sample channel 230 may have a sample inlet 242 downstream of the sample source 140, a first central sheath inlet 244 upstream of the sample inlet 242, and a second central sheath inlet 246 downstream of the sample inlet 242.
- the first central sheath inlet 244 and the second central sheath inlet 246 may be disposed downstream of the sheath fluid source 240.
- the sample channel 230 may have an outlet region 232 upstream of the inlet 132 of the main channel 130.
- the first lateral sheath channel 210 and/or the second lateral sheath channel 220 may have a sheath inlet downstream of a sheath fluid source 240.
- the first lateral sheath channel 210 and second lateral sheath channel 220 may have a downstream outlet at the outlet region 232 of the sample channel 230.
- the sample channel 230, the first lateral sheath channel 210 and the second lateral sheath channel 220 may be coplanar, with the sample channel 230 disposed between the first lateral sheath channel 210 and the second lateral sheath channel 220.
- the sample channel 230 may not be coplanar with the first lateral sheath channel 210 and the second lateral sheath channel 220.
- an angle between the direction of flow in the sample channel 230 and the direction of flow in the first lateral sheath channel 210 is less than 90 degrees.
- the angle is less than 80 degrees, less than 60 degrees, less than 40 degrees, or less than 20 degrees.
- FIG. 5 shows dual images (brightfield and fluorescent) of fluid flow of a fluorescent dye in an example fluid focusing system. A narrower sample stream (bright region) is achieved by increasing sheath flow rates.
- the flow focusing system may be configured to steer the sample through main channel 130.
- the lateral sheath channels 210 and 220 may be connected to separate sheath fluid sources 240a and 240b, as illustrated in FIG. 6.
- the sample channel 230 may be connected to another sheath fluid source 240c.
- the sheath fluid source 240c may be a single source or may include multiple separate sources (e.g., separate reservoirs and/or pumps).
- the lateral sheath channels 210 and/or 220 may have an independent pump and pump control; for example, controlled by the control system 150. This configuration may allow the independent adjustment of the flow rates of each pump separately, which can have the effect of shifting the sample (particle) flow relative to the centerline of the main channel 130.
- FIG. 7 illustrates a shift of the sample stream when the flow rate through lateral sheath channel 210 is greater than the flow rate through lateral sheath channel 220.
- the steering configuration of the flow focusing system may allow shifting of the particle stream away from one destination channel toward another, e.g., from the target channel toward the waste channel. This configuration can be used to prevent particles from entering the target channel inadvertently, which may reduce purity of the sorted product.
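A rough sense of how unequal sheath flow rates shift the sample stream can be given by a first-order laminar estimate. This estimate is not from the disclosure: it assumes each stream occupies a channel width proportional to its volumetric flow rate, which is only an approximation for hydrodynamic focusing.

```python
# First-order estimate (assumption, not the disclosed method) of where the
# focused sample stream sits between the two lateral sheath flows.

def stream_center_fraction(q_sheath1, q_sample, q_sheath2):
    """Estimate the stream centerline as a fraction of channel width.

    0 = wall nearest the first lateral sheath, 1 = wall nearest the second,
    assuming laminar flow in which each stream's width is proportional to
    its volumetric flow rate.
    """
    total = q_sheath1 + q_sample + q_sheath2
    return (q_sheath1 + q_sample / 2.0) / total
```

Consistent with the FIG. 7 description, increasing the first lateral sheath flow rate above the second pushes the estimated centerline past 0.5, i.e., shifts the sample stream away from the first lateral sheath channel.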
- FIG. 8 An example source set-up for a steerable flow focusing system is illustrated in FIG. 8.
- the example system may include two syringe pumps (e.g., corresponding to a first and second sample), and four syringe pumps for the sheath fluids.
- “Sheath 1” may be the source 240a for first lateral sheath channel 210 (first lateral sheath fluid source)
- “Sheath 2” may be the source 240b for second lateral sheath channel 220 (second lateral sheath fluid source)
- “Sheath 3” may be a first source 240c connected to first central sheath inlet 244 upstream of the sample inlet 242
- “Sheath 4” may be a second source 240c connected to second central sheath inlet 246 downstream of the sample inlet 242.
- the flow focusing system can be configured such that actuating the pump of the first lateral sheath fluid source or the pump of the second lateral sheath fluid source, or both, by the control system 150, alters the trajectory of one or more particles conveyed in the volume of fluid.
- the system 100 can include additional components, e.g., one or more bubble traps, check valves, reagent select valve, tubing, and additional reservoirs.
- the system 100 may include an (electronic) event tracking system (ETS) 180 in electronic communication with the control system 150 and a laser 190 illuminating a (second) section 138 of the main channel 130 upstream of the first section 136, as illustrated in FIG. 9 (the system is illustrated having a flow focusing system as shown in FIG. 6, but, alternatively, the system can have a flow focusing system as shown in FIG. 3).
- the ETS 180 can function as an intermediary between the opto-mechanical systems (e.g., which may include the imaging device(s) 160 and the fluidic systems), and one or more compute nodes executing the control system 150, as illustrated in FIG. 10.
- the compute node(s) may include the computing device / system 8500 described below with reference to FIG. 85.
- the ETS 180 may monitor the occurrence and timing of “events” (e.g., detection of presence of cells, beads, particles, etc. moving through the main channel).
- the ETS 180 may perform one or more functions, including monitoring and detecting laser 190 side-scatter events, e.g., caused by cells, beads, particles, etc.
- the ETS 180 may also track one or more performance metrics for a sorting run.
- the performance metric(s) may be used to compute sort efficiency and/or characterize system performance (e.g., total cells imaged, number of (+) sort decisions, number of sort signals sent, etc.).
- the ETS 180 may be configured to trigger image acquisition by the imaging device(s) 160.
- the imaging device(s) 160 may be configured for short exposure times (e.g., from several microseconds potentially down to tens of nanoseconds). Shorter exposure times may improve the quality of acquired images by reducing or eliminating motion-induced blur (e.g., as shown in FIG. 11).
- the ETS 180 may include and/or be connected to a laser 190 configured to illuminate a (second) section 138 of the main channel 130 disposed upstream of the first section 136.
- Software and/or circuitry of the ETS 180 may be in electronic communication with the laser 190 and/or the control system 150.
- the ETS 180 may be configured to detect presence of a particle in the second section 138 by detecting at least one of absorption, attenuation, or scatter of the laser beam by a particle in a volume of fluid moving through the main channel 130.
- the ETS 180 may include and/or connect to one or more photoelectric sensors arranged to detect the absorption, attenuation, and/or scattering of the laser light.
- the ETS 180 may be configured to provide, upon detection of a particle, a first trigger signal to the imaging device 160, causing the imaging device 160 to acquire an image of the first section of the main channel 130.
- the imaging device 160 may then trigger the control system 150 to collect the image.
- the control system 150, upon receiving the image, can initiate the classification and sorting process.
- the ETS 180 may be configured to receive, from the control system 150, an actuation input and provide an actuation signal to the sorting device 170, e.g., as illustrated in FIG. 12. In some implementations, the control system 150 may provide the actuation signal to the sorting device 170 directly.
- the ETS 180 may receive a sorting decision from the control system 150 indicating whether or not an imaged particle is to be sorted. The ETS 180 may then make a secondary determination based on an event timing algorithm as to whether or not it will send a sort signal to cause the particle to be collected. The ETS 180 may generate the actuation signal for sorting if one or more sorting criteria are fulfilled.
- the ETS 180 may make one or both of the following assessments prior to sending the sort signal.
- the ETS 180 may make an assessment based on ML inference (e.g., cell classification). Additionally or alternatively, the ETS 180 may make an assessment regarding event timing, which can include considering if another sort event is occurring and/or if two particles are too close in proximity to each other to allow, for example, the first particle to be collected and the second particle to be discarded, or vice-versa.
- the ETS 180 may make additional timing assessments.
- the ETS 180 may utilize two ring buffers to read particle positional data into the system and to create two delay values associated with that particle.
- the first delay value may trigger image acquisition by the imaging device 160 as the particle passes through the field of view of the imaging device 160.
- the second delay value may trigger the sorting device 170 if the sorting device 170 has received a sort command from the control system 150.
- the ETS 180 may control timing of one or more functions downstream of particle detection including triggering image acquisition and/or particle sorting.
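The two-delay scheme above can be sketched with a pair of ring buffers. The distances, buffer length, and units below are assumptions made for illustration; the actual ETS would run in dedicated hardware/firmware rather than Python.

```python
from collections import deque

# Toy sketch of the ETS two-delay scheme: on detection, queue one delay for
# triggering image acquisition and one for triggering the sorting device.

DETECT_TO_IMAGE_MM = 1.0  # detection point -> imaging region (assumed)
DETECT_TO_SORT_MM = 3.0   # detection point -> sorting device (assumed)

image_triggers = deque(maxlen=64)  # ring buffer of pending image-trigger times
sort_triggers = deque(maxlen=64)   # ring buffer of pending sort-actuation times

def register_event(t_detect_s, velocity_mm_s):
    """On particle detection, compute and queue the two delay values."""
    image_triggers.append(t_detect_s + DETECT_TO_IMAGE_MM / velocity_mm_s)
    sort_triggers.append(t_detect_s + DETECT_TO_SORT_MM / velocity_mm_s)
    return image_triggers[-1], sort_triggers[-1]
```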
- the imaging device 160 may have a limited image acquisition rate. In certain operations or configurations, e.g., at high throughput, it may not be possible to obtain an image of every particle because the particles are conveyed at a rate that exceeds the maximum image acquisition rate of a single imaging device 160.
- the ETS 180 may be configured such that if a time interval between detection of a first particle and a second particle is less than a camera rate limiter value (e.g., derived from the maximum image acquisition rate of the camera), the ETS 180 may cause the imaging device 160 to refrain from acquiring an image of the second particle.
- the ETS 180 may not send an actuation signal to the sorting device. That is, although the ETS 180 detected the event / particle, the imaging device 160 may not be able to acquire an image of it, and thus the system 100 may not be able to determine whether the particle is of a target type. In some implementations, when no image-based sorting occurs, the particle may be conveyed automatically to the default channel (e.g., waste channel 110).
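The camera rate limiter check reduces to a simple interval comparison, sketched below. The frame-rate parameter is an assumed camera specification, not a value from the disclosure.

```python
# Sketch of the camera rate limiter: skip imaging a particle that arrives
# within the rate-limiter interval of the previous acquisition.

def should_image(t_event_s, t_last_image_s, max_fps):
    """Return False when events arrive faster than the camera frame rate."""
    rate_limiter_s = 1.0 / max_fps  # camera rate limiter value
    return (t_event_s - t_last_image_s) >= rate_limiter_s
```

When `should_image` returns False, no image (and hence no sort decision) exists for the particle, so it would be conveyed to the default channel as described above.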
- the system 100 may include multiple imaging devices 160 (e.g., cameras), which may increase (and potentially double) the image acquisition rate of the system 100.
- the ETS 180 may make an independent timing decision for an imaging device 160. For example, even if a first imaging device 160a has acquired an image recently and is not ready to receive a subsequent image, a second imaging device 160b may be ready to acquire an image. Thus, including a second imaging device 160b (and additional imaging devices 160) may increase the throughput of the system 100.
- the ETS 180 may be a laser-based ETS.
- the ETS 180 may include and/or connect to a laser signal detector, e.g., a side scatter detector.
- the laser signal detector may detect reflection, transmission, scattering, etc. of the beam emitted by the laser 190.
- the laser signal detector and the imaging device(s) 160 can be positioned along the main channel 130 at an arbitrary position upstream away from the sorting area in order to, for example, increase the transit time of particles between the imaging device(s) 160 and the sorting device 170 to accommodate longer inferencing times (and therefore allow for more complex machine learning models as described below). This may also allow for variable flow rates/speeds.
- the ETS 180 may include at least one pair of electrodes in a second section 138 of the main channel 130 that is upstream of the first section 136.
- the ETS 180 may be in electronic communication with the electrodes and the control system 150.
- the ETS 180 may be configured to detect presence of a particle in the second section by detecting a change in electric impedance caused by a particle in a volume of fluid translocating through the main channel 130 as illustrated in FIG. 13.
- the ETS 180 may include three, four, five, six, or more electrodes.
- the impedance measurement may be carried out by measuring a disturbance of the electric field across two electrodes (“detection pads”) fabricated on, for example, a glass surface of a microfluidic chip and located within a microfluidic channel (e.g., the main channel 130).
- the electrodes may be spaced apart in the direction of fluid flow such that particles flow near the pair of electrodes.
- An alternating carrier wave may be applied to the electrodes to generate the electric field.
- Detection circuitry can measure the impedance as particles move past the electrodes.
- Electric impedance-based detection of cells in microfluidic channels of a system may be scaled to many detection points. With multiple detection points, an automated sorting system may be configured to automatically characterize a particle flow speed, sort delay time, or sort time window in the microfluidic chip, e.g., in one or more of the main channel 130, the first destination channel 110, or the second destination channel 120.
- an impedance-based ETS 180 may use a single pair of electrodes disposed in the main channel 130 upstream of the imaging device(s) 160.
- an impedance-based ETS 180 can be used in combination with a laser-based ETS 180.
- multiple particle sorting operations can be carried out in sequence, e.g., where a first destination channel of a first sorting operation serves as a main channel (or input channel) for a subsequent sorting operation between additional downstream destination channels.
- An advantage of the impedance-based ETS 180 is that the system may not require alignment of a particle in the center of a channel.
- An electrode can span the entire width of a channel, which may obviate the need to center the particle path in a field of vision of an imaging device, laser detector, and/or laser beam.
- an impedance-based ETS 180 may be scalable: an impedance-based system can detect particles at multiple detection points with a plurality of electrodes in the main channel 130 and one or more of the destination channels 110 and/or 120. This type of configuration may provide cell tracking over multiple electrode pairs to determine, e.g., particle velocity and sort-delay parameters.
- an impedance-based system can also simplify a microfluidic chip characterization process, eliminating or reducing the amount of manual processing to determine the particle velocity and/or sort-delay parameters (e.g., using high-speed video and manual image analysis prior to the sorting experiment, followed by sensitive optical alignment of the laser side scatter (SSC) event detection triggering system and relative stage position at the start of each experiment).
- SSC laser side scatter
- An impedance-based system can provide automated velocity measurements for different particles, e.g., particle sizes and/or particle types etc., and can provide automated velocity measurements for different (sorting) events.
- An impedance-based system can provide automated sort delay and/or sort window measurement.
- An impedance-based system can provide automated sort confirmation via pads disposed in one or more destination channels, e.g., to provide purity assessment and/or efficiency metrics.
- the ETS 180 may allow users to configure these and other timing parameters, increasing the versatility of the system 100. Moreover, the ETS 180 may control the imaging devices 160 to acquire images only when there is a cell/particle present in the imaging area; for example, rather than acquiring images continuously or continually to detect particle events. The ETS 180 may also allow for tuning of the event timing to, for example, adjust the sorting rate and purity of the sort.
- the system 100 may include an impedance cytometry system (ICS).
- An ICS can be configured to be used alone or in combination with one or more or all components of the system 100.
- An ICS may operate in a manner similar to the impedance-based ETS 180, e.g., as illustrated in FIG. 13.
- Label-free image data, such as brightfield images obtained by the system 100, may have a resolution determined by the imaging devices 160 (e.g., diffraction-limited optics) and/or the optical opacity of the particle/cell. While the image resolution may be sufficient to detect a particle’s shape/size and micro-scale cellular structure and texture, subcellular and/or molecular properties of a particle may be difficult to measure in a label-free manner. Thus, in some implementations, impedance spectroscopy may provide a non-invasive, label-free technology that can probe the dielectric properties of the cell across an opacity hierarchy in a frequency-dependent manner.
- An ICS may use electrical analysis to phenotype particles (e.g., cells) based on dielectric properties.
- the ICS may use a carrier wave frequency of alternating current between two electrodes to probe the dielectric properties of a cell across an opacity hierarchy ranging from the cell membrane, to the cell cytoplasm, and to subcellular structures in a frequency dependent manner.
- the ICS may measure, for example, membrane capacitance (1 MHz), cytoplasmic conductivity (10 MHz), or vacuole size or property (> 10 MHz).
- specific parts of the system may dominate the following frequency regimes: C(DL): ~0.1 MHz; C(medium): 0.1–0.5 MHz;
- Impedance data and/or image data can be used for cell phenotyping each alone or in combination.
- cell catalogues can be generated that contain paired image and impedance data for each cell as illustrated in FIG. 14.
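A paired catalogue of the kind illustrated in FIG. 14 can be sketched as a simple join of the two modalities keyed by cell identifier. The record structure and IDs below are invented for illustration; in practice they would be whatever the acquisition pipeline produces.

```python
# Sketch of pairing image and impedance records per cell (cf. FIG. 14).

def build_cell_catalogue(image_records, impedance_records):
    """Pair image and impedance data per cell, keyed by cell ID.

    Cells present in only one modality are skipped.
    """
    catalogue = {}
    for cell_id, image in image_records.items():
        if cell_id in impedance_records:
            catalogue[cell_id] = {
                "image": image,
                "impedance": impedance_records[cell_id],
            }
    return catalogue
```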
- Impedance data can be used to reinforce label-free machine-learning capabilities and can enhance cell phenotyping with characterization of cell electrical properties.
- impedance data can be applicable to cell sorting for stable cell line development post transfection where cell membrane integrity is potentially disrupted.
- the system 100 may use machine-learning techniques to analyze the impedance and/or image data, so that the system may generate paired image and impedance data at the single-cell level and at large scale.
- the system may apply machine-learning techniques for multimodal data integration from computer vision to train one or more machine-learning models using this dataset. Taken together, the combined image and impedance dataset can reveal subtle cellular signals with higher resolution than either modality alone.
- a model like OpenAI’s Contrastive Language-Image Pretraining (CLIP) can zero-shot predict the biologically-relevant semantic states of a cell (e.g., “this is an activated, electroporated cell”).
- the example impedance detection system may include a flow channel with (e.g., eight) pairs of electrodes, a dedicated PCB for first-stage amplification (optional), dedicated microcontroller, and an impedance measurement front-end.
- An on-board microcontroller may use peak detections on eight dedicated channels to determine particle position in a flow channel (e.g., the main channel 130) and send relevant timing information to the ETS 180.
- Positional information (e.g., using an impedance detection system disposed in a destination channel) can confirm successful sorting events.
- FIG. 16 Another example impedance detection system that can be used for impedance ETS and/or ICS is illustrated in FIG. 16.
- the system 100 may include a single channel and an electronic configuration to manually switch equipment between different electrode pad pairs and individual impedance front-ends.
- FIG. 17 An example implementation of the impedance detection system described above and in FIG. 16 is shown in FIG. 17.
- the photograph on the left shows an IBCS with a microfluidic system including an impedance detection system mounted on an imaging stage of the IBCS.
- External wiring connects an excitation controller / impedance front end to the electrical contact pads on the microfluidic chip using spring-loaded pogo-pins held in place by a 3D printed chip mount.
- Electrodes, traces, and detection pads are fabricated with PVD deposited gold.
- the photograph on the right shows electrode pairs that can be used for the ETS (“cell detection pads”). Electrode pairs are spaced every 1 mm in the main channel (e.g., a first section for imaging) as well as in the two outlet (destination) channels (e.g., destination channels 110 and 120).
- FIG. 18 shows a magnified photograph of the example channels shown in FIG. 17.
- Impedance detection is used to trigger events in an example ETS.
- By detecting events at multiple electrode pads it is possible to track a single cell through an imaging region of the IBCS microfluidic chip. After a number of pad measurements for calibration, it is possible to measure the average velocity of each cell and use the velocity information to predict sort-delay times on-the-fly.
- the table below shows example detection times at three electrode pairs (pads), the calculated velocities and calculated delay times for cell sorting.
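The velocity and delay-time computation described above can be sketched as follows. The 1 mm pad spacing comes from the example chip; the pad-to-sorter distance and detection times are hypothetical placeholders for illustration.

```python
PAD_SPACING_MM = 1.0      # electrode pairs spaced every 1 mm (per the example chip)
PAD_TO_SORTER_MM = 3.0    # hypothetical distance from the last pad to the sorter

def estimate_sort_delay(detection_times_s,
                        pad_spacing_mm=PAD_SPACING_MM,
                        pad_to_sorter_mm=PAD_TO_SORTER_MM):
    """Average the cell velocity over consecutive pad crossings, then
    predict the remaining transit time to the sorting region."""
    if len(detection_times_s) < 2:
        raise ValueError("need detections at two or more pads")
    # velocity over each pad-to-pad segment, in mm/s
    velocities = [pad_spacing_mm / (t1 - t0)
                  for t0, t1 in zip(detection_times_s, detection_times_s[1:])]
    v_avg = sum(velocities) / len(velocities)
    return pad_to_sorter_mm / v_avg, v_avg

# a cell crossing three pads 10 ms apart travels at roughly 100 mm/s
delay_s, v_mm_s = estimate_sort_delay([0.0, 0.010, 0.020])
```

The same per-cell velocity estimate can be reused for both the camera trigger delay and the sort-trigger delay, since both are transit times over known channel distances.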
- Using information from the pads in the destination channels it is possible to confirm which cells enter which destination channel, e.g., the target channel vs waste channel, thus to confirm successful sorting in the IBCS.
- the system 100 may include a sorting device 170 in electronic communication with the ETS 180, e.g., as illustrated in FIG. 19 (the system is illustrated having a flow focusing system as shown in FIG. 6, but, alternatively, the system can have a flow focusing system as shown in FIG. 3).
- the sorting device 170 may be in electronic communication with the control system 150.
- the sorting device 170 may include a sorting channel 176 in fluid communication with the main channel 130 and a sorting reservoir 172 for holding a sorting fluid.
- the sorting channel 176 has an inlet downstream of the sorting reservoir 172 and an outlet upstream of the bifurcation 134.
- the sorting device may include a pump 174.
- the pump 174 can include a piezoelectric actuator.
- the piezoelectric actuator may be configured to receive an actuation signal from the control system 150 and/or the ETS 180 and cause, upon receipt of the actuation signal, a flow of sorting fluid from the sorting reservoir 172 through the sorting channel 176 into the main channel 130.
- the sorting device can include one or more of a push-pull mechanism, an air pressure actuation system, a fluid pressure actuation system, a solenoid valve, an electrostatic cell sorting system, a dielectrophoretic cell sorting system, an acoustic cell sorting system, a surface wave generator, a laser trap, an optical trap, a cavitation-induced sorting system, a laser ablation system for positive or negative cell selection, or a system for light-induced gelation encapsulation of cells for positive/negative selection.
- the ETS 180 may include and/or be connected to a laser 190 illumination and detection device; for example, in the form of a current analog input device of the ETS.
- the analog input device may include: (1) a particle (cell) measurement component (e.g., a laser trigger) and/or (2) a converter component to convert a measurement (e.g., laser light scatter) into a digital form that can be processed, manipulated, computed, transmitted, or stored (see FIG. 21A).
- the particle (cell) measurement component can include a sensor or other implement to detect, e.g., laser side scatter (SSC), from a particle illuminated by a laser beam in a flow channel (or flow cell) as illustrated in FIG. 21B.
- the side scatter detector turns light reflected from a cell into a voltage representing the magnitude of the light received.
- attenuation and/or absorption of laser light is measured by the particle (cell) measurement component.
- the converter component converts (analog) light measurement into a digital form.
- Analog-to-digital converters (ADC) may translate analog signals, like the measurement from the side scatter detector, into a digital representation of that signal (see FIG. 21C). The precision of this conversion may be represented by the number of bits an ADC has.
- an ETS uses a 16-bit ADC which may yield 65,536 distinct values.
- the example ADC can sample at 250 K samples/sec.
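As a quick sanity check of the figures above, a 16-bit converter yields 2^16 = 65,536 distinct codes, and 250 K samples/sec corresponds to a 4 µs sample period. The sketch below also shows a generic code-to-voltage mapping; the reference voltage is a placeholder, since the front-end range is not specified here.

```python
BITS = 16                 # resolution of the example ADC
SAMPLE_RATE = 250_000     # samples per second
V_REF = 3.3               # hypothetical full-scale reference voltage

def code_to_voltage(code, v_ref=V_REF, bits=BITS):
    """Map a raw ADC code (0 .. 2**bits - 1) back to an analog voltage."""
    return code * v_ref / (2 ** bits - 1)

print(2 ** BITS)            # 65536 distinct values
print(1e6 / SAMPLE_RATE)    # 4.0 (microseconds per sample)
```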
- the ETS 180 may include an event generation system (camera trigger) connected to an imaging device 160, e.g., a camera, as illustrated in FIG. 22A.
- the ETS 180 logs it as an event.
- one or more delay values can be created that are associated with that event, e.g., DT1 and/or DT2 etc.
- at least one camera trigger delay value may be created for each imager (e.g., a Camera 1 trigger delay and a Camera 2 trigger delay).
- a sort-trigger delay value may also be generated for each event.
- multiple camera trigger delay values can be generated.
- the system 100 may include multiple imaging devices 160 (e.g., for brightfield and fluorescence imaging).
- the multiple-camera configuration can include delay values that can be independently set for each imaging device 160.
- each imaging device 160 may have a separate trigger time at which it images the particle. This may allow for triggering of both cameras at either the same or at different times (which can be important for fluorescence implementation).
- the ETS 180 may include a camera rate limiting timer so that the imaging devices 160 are not triggered faster than their maximum trigger rate.
- the ETS 180 may implement three timer values (new event detection timeout (NEDT), sort window (SW), and sort sequence time (SST)), during which conditional detection and sorting behavior can be used.
- each timer value is measured in microseconds.
- Time value NEDT may be used to prevent multiple (e.g., double) triggers on the same SSC pulse.
- a timeout period may be allowed to expire before another event can be registered. This timeout period is NEDT (see FIG. 22B).
- Delay value 1 (DT1) may be used as a camera trigger. DT1 may represent the time for a particle (cell) to move from the laser side scatter detection position to the imaging position. At the end of the DT1 time delay, a signal may be sent to the camera to take an image.
- Time value SW may be used to prevent double sorts.
- SW may represent the width of a time window in which any particle in this time window will be sorted during a sort operation.
- the ETS 180 may set a timeout period in which sorting is prevented. This timeout period may be represented by SW (see FIG. 22C).
- the ETS 180 may be configured to adhere to a sort decision window. For example, if the ETS 180 does not receive a sort decision from the control system 150 (e.g., as shown in FIG. 23A) for an event during the sort decision window, the ETS 180 may not sort that event regardless of the decision. This feature can increase robustness and flexibility of the system, e.g., allowing for interruptions or delays of the inference / classification process.
- Delay value 2 may be used as a sort trigger.
- DT2 may represent a transit time for the cell to move from the detection region to the sorting region.
- the ETS 180 determines whether it has received a positive sort decision from the control system for this event (see FIG. 23B). If “yes”, a signal may be sent to the sorting device to sort the cell, conditional upon another event being detected within the sort window timing (e.g., SW expiration) and/or conditional upon there being an active sort occurring (SST). If the sort decision is “no”, no further signal/action may be taken.
- Time value SST designates a period to allow piezo recovery; for example, a total time for a sort process to initiate, sort, and recover for the next sort.
- SST condition may prevent another sort from being made during an active sort event. This may allow the fluidic sorting system to recover fully before sorting another event.
- the piezoelectric actuator may be incapable of sorting a subsequent cell until the SST time is up (see FIG. 24).
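The interplay of the NEDT, SW, and SST timers with the DT1/DT2 delays described above can be sketched as a small state machine. This is an illustrative model only, not the ETS firmware; all timer values are hypothetical placeholders in microseconds.

```python
from dataclasses import dataclass

@dataclass
class ETSTimers:
    nedt_us: float = 50.0        # new event detection timeout
    sw_us: float = 200.0         # sort window
    sst_us: float = 1000.0       # sort sequence time (piezo recovery)
    dt1_us: float = 300.0        # laser-to-imaging transit (camera trigger delay)
    dt2_us: float = 800.0        # laser-to-sorter transit (sort trigger delay)
    last_event_us: float = float("-inf")
    last_sort_us: float = float("-inf")

    def on_ssc_pulse(self, now_us):
        """Register an event unless still inside NEDT (same SSC pulse)."""
        if now_us - self.last_event_us < self.nedt_us:
            return None
        self.last_event_us = now_us
        return {"camera_trigger_at": now_us + self.dt1_us,
                "sort_decision_due": now_us + self.dt2_us}

    def on_sort_decision(self, event_us, now_us, sort):
        """Actuate only on a positive decision, outside the SST recovery
        period, and with no second event inside the sort window SW."""
        if not sort:
            return False
        if now_us - self.last_sort_us < self.sst_us:
            return False                 # a sort sequence is still active
        if 0 < self.last_event_us - event_us < self.sw_us:
            return False                 # another cell is too close behind
        self.last_sort_us = now_us
        return True

ets = ETSTimers()
event = ets.on_ssc_pulse(0.0)            # cell crosses the laser
ets.on_ssc_pulse(10.0)                   # ignored: still inside NEDT
fired = ets.on_sort_decision(0.0, event["sort_decision_due"], sort=True)
print(fired)  # True
```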
- the ETS 180 may receive a laser trigger pulse caused by the detection of a particle in a volume of fluid in the main channel.
- the ETS 180 may send a trigger signal to the imaging device(s) 160 to acquire one or more images of the particle.
- the acquired one or more images may be transferred to the control system 150 for processing.
- the control system 150 may send a sort decision to the ETS 180.
- the ETS 180 may send a trigger signal (e.g., a piezo signal) to the sorting device 170.
- the sorting device 170 may convert the signal into a pump action, introducing the sorting fluid at a predetermined duration and rate. The duration and rate can be fixed or variable.
- the system 100 may be configured such that the sorting fluid entering the main channel 130 causes the direction of flow out of the main channel 130 to change from the first destination channel 110 (waste) to the second destination channel 120 (collection).
- An example sorting operation is illustrated in FIG. 26.
- the microfluidic channel of the IBCS may be configured such that sorting fluid entering the main channel 130 causes the direction of flow out of the main channel 130 to change from the second destination channel 120 (target) to the first destination channel 110 (waste).
- the sorting device 170 can include one or more of other or additional components, e.g., a valve, a channel, a foil, an electric field, or a combination thereof.
- the sorting device 170 may include a valve, and a piezoelectric actuator configured to actuate the valve.
- Other microfluidic implements and configurations can be used with the IBCS to effect microfluidic sorting of particles.
- the system 100 may include one or more imaging devices 160 in electronic communication with the ETS 180 as illustrated in FIG. 27.
- the imaging device can include one or more of an optical device (e.g., a camera), a fluorescence detector, a spectroscopy system, or an impedance detection system.
- the imaging device(s) 160 can include a one or more of a brightfield camera, a machine-vision camera, a charge-coupled device (CCD) camera, complementary metal-oxide-semiconductor (CMOS) camera, a photomultiplier tube (PMT), a photodiode, or an Avalanche Photodiode (APD).
- the imaging device is configured for Differential Interference Contrast (DIC), Phase Contrast, Light-field, or Darkfield imaging.
- the imaging device is configured for Coherent Anti-Stokes Raman Scattering (CARS) or Fluorescence Lifetime Imaging Microscopy (FLIM).
- the fluorescence detector is a CCD camera, an electron-multiplying CCD camera, or a CMOS camera.
- the fluorescence detector is an image intensified fluorescence camera.
- the system 100 may include multiple imaging device(s) 160, such as a camera configured to capture brightfield images and/or a camera configured to capture fluorescence images.
- An imaging device 160 can include a number of additional components, including, lenses, prisms, mirrors, filters, collimators, collectors, condensers, or any combination thereof.
- one or more of the imaging devices 160 may be configured to perform fluorescence resonance energy transfer (FRET) imaging.
- one or more of the imaging devices 160 may be configured to image other signals such as luminescence (e.g., bioluminescence, Cherenkov luminescence, etc.).
- the system 100 may include a brightfield camera, but not a fluorescence camera.
- the system 100 may include a fluorescence camera, but not a brightfield camera.
- a fluorescent camera may be a gated intensified high-speed camera. This type of camera may be used for high-speed fluorescence imaging, but differs from a standard CMOS camera in that it may include a fiber-coupled image intensifier to boost the light signal received by the image sensor over a short, 1 microsecond gated exposure time.
- the fluorescence camera may be configured to detect one or more fluorescent labels attached to a particle.
- a label such as a fluorescent label can be or can include a protein expression label, a spatial distribution and/or localization label, or a translocation label.
- a particle may be labeled with two or more fluorescent labels.
- the system 100 may extract one or more characteristics from an image of an unlabeled cell.
- the characteristic(s) can include information that may not be humanly recognizable as a feature or property of a particle.
- the characteristic may be or include a cell feature, e.g., cell morphology (e.g., shape, perimeter, or eccentricity), cell size, cell area, texture (e.g., granularity or number, size, position, or distribution of cell components, e.g., nuclei).
- the characteristic includes cell-cell binding or cell-cell spatial association information.
- a particle may display an antibody or antibody fragment, an antigen, or other proteins of interest.
- a particle may include an internalized bispecific or monospecific antibody-drug conjugate.
- the antibody-drug conjugate can be labeled with a fluorophore or dye.
- a particle is connected to one or more other particles via a T-Cell engager.
- a particle to be sorted can be labeled with a labeling cell, which can be dead or alive.
- the labeling cell can be attached to the particle to be sorted, or can be co-located in the same image without being attached.
- the labeling cell can itself be fluorescently labeled; the fluorescent label of the labeling cell can be detected and used for sorting.
- a particle to be sorted can be labeled with a bead that is either attached to the particle or is co-located in the same image without being attached.
- a particle to be sorted can be labeled with cell debris or a cell fragment co-located in the same image. Any characteristic and/or label can be used alone or in combination with one or more other characteristics and/or labels.
- the first section 136 of the main channel 130, which is imaged by the imaging device 160, can be illuminated at least in part by a laser 190.
- the laser 190 may be positioned such that the beam emitted by the laser strikes the channel at an angle of less than 90 degrees, e.g., less than 45 degrees. Presence of a particle, e.g., a cell, in an imaged volume of fluid can cause scattering of laser light.
- the sharp angle of the laser side-scatter beam through the channel can provide correlation between cell position and focus of the imaging optics, e.g., as illustrated in FIG. 28.
- A schematic diagram of a fluidic system for handling a particle suspension in a fluid volume is shown in FIG. 29.
- A schematic diagram of a system and process for handling a particle suspension in a fluid volume using an image-based cell sorting system with said fluidic system is shown in FIG. 30.
- the control system 150 may process images acquired from the imaging device(s) 160 to generate a sort / no-sort decision. In some implementations, the control system 150 may use one or more machine-learning models to make the sort / no-sort decision.
- the control system 150 may be electronically connected to the ETS 180 and can be used to set one or more ETS parameters or otherwise control the ETS.
- the ETS is electronically connected to the laser and one or more photodetectors (e.g., an Avalanche Photodiode (APD)), and can monitor the laser beam to obtain side scatter information and/or to obtain an image acquisition trigger signal.
- the ETS can trigger image acquisition by the imaging device(s) 160.
- the ETS can also function as a timekeeper for time-of-flight cell classification / sorting decisions.
- the ETS is also electronically connected to a waveform generator and a piezoelectric pump (e.g., a syringe pump) in fluidic communication with a sorting reservoir and a sorting channel. Upon receipt of a sort decision from the control system, the ETS generates a sorting pulse signal to actuate the pump.
- the IBCS control system is electronically connected to the fluidics system, e.g., to one or more pumps and/or valves to control sheath flows from the sheath fluid sources and sample flows from the sample reservoir(s).
- the IBCS control system may be electronically connected to the imaging device(s) 160, e.g., to control camera settings or sample illumination, e.g., using LED lights.
- FIG. 31 is a screenshot of a graphical user interface of the control system for operating an example system for handling a particle suspension in a fluid volume.
- FIG. 32 is a schematic workflow diagram illustrating basic operation of an example system for handling a particle suspension in a fluid volume.
- a main app, e.g., the IBCS app illustrated in FIG. 64, may be included in and/or connected to the control system 150, and may be used to control the imaging device(s) 160, e.g., a camera for image and/or video capture.
- the main app can control the video display and/or a graphical user interface (GUI), e.g., as shown in FIG. 31.
- the main app can control the image processing algorithm (“cell gazer”) and can control hardware of the system, e.g., the sheath pumps, sample pumps, and/or the pump of the sorting device.
- the technologies for image-based cell sorting described in this specification include Machine Learning / Artificial Intelligence models to sort particles, e.g., cells, into two or more groups.
- cells can be sorted into two groups (e.g., cells of interest vs. other/waste).
- cells are grouped into three or more groups. Each group can include sub-groups.
- three or more groups can be sorted into three or more corresponding channels.
- the first step may include the acquisition of one or more image data sets to train the machine-learning algorithms.
- at least 4 sets of images can be used to train the machine-learning algorithms.
- at least 8 sets of images can be used to train the machine-learning algorithms.
- at least 12 sets of images can be used to train the machine-learning algorithms.
- the imaging device 160 may acquire images of known particles (e.g., cells) to generate libraries of images called ‘catalogues’.
- the system 100 may capture images at a rate up to 1,000 particles (e.g., cells) per second.
- the system 100 may capture images at a rate of over 1,000 particles (e.g., cells) per second.
- the sets can include separate series of images of different cell types (one series for each type), e.g., to train the algorithm to distinguish cell types based on images.
- the sets can include a series of images of a cell type at different stages of development, at different stages of a cell cycle, or at different disease states.
- the images may be used to train the machine-learning model to classify images and/or make a sorting decision (e.g., in real-time).
- These training methods can include guided or unguided methodologies as described below in this specification.
- the models can be used in a third step in a microfluidic system to sort cells suspended in a fluid.
- the system 100 may be used for training and/or sorting, e.g., as illustrated in FIG. 33.
- the process may start with injection of a sample containing a plurality of particles (e.g., cells) into the fluidic system.
- a sheath flow may be established, e.g., as described above.
- Pressure and flow parameters may be adjusted to achieve stable flow conditions, e.g., to ensure a trajectory of cells at or near the centerline of the main channel (e.g., within half a cell width from the centerline).
- the imaging device is then configured for optimal optical alignment and focus. Illumination parameters (e.g., LED parameters) and/or exposure times are set.
- a laser trigger as described above may be aligned and delay parameters of the ETS 180 may be configured (e.g., delays between trigger and image capture).
- In training mode (top branch of the diagram in step 3 in FIG. 33), the cells may be imaged (e.g., at a rate of 100 to 1000 images per second), and the images saved. The image data can then be transferred, e.g., to an image processing and machine learning module.
- the module can be a virtual module or a separate device.
- the machine-learning methodologies that can be used with the technologies described in this specification include image preprocessing, CNN models, YOLO models, or other models described below in this specification.
- the control system 150 may load a machine-learning model, and the ETS delays and sorting triggers may be configured. Cells may then be sorted, e.g., using the fluidic sorting methods described above, e.g., in FIG. 26. The isolated target cells can then be collected from the target reservoir for further processing, for example, using cell culture or by extracting genetic information.
- Described in this specification are technologies for image-based cell sorting that include the use of one or more machine-learning technologies. These technologies include model training methods where images, e.g., raw images, are collected and preprocessed to be used as initial input for the model.
- the ML models that can be used with the technologies described in this specification include models based on Convolutional Neural Networks (CNNs) to provide characterization of cell morphologies at scale with speed and precision necessary for high- throughput screening and/or sorting.
- a YOLO-type model can be used to provide efficient one-shot localization and classification.
- a YOLO (You-Only-Look-Once) model is a CNN-based algorithm that can be used in demanding computer vision applications such as self-driving cars.
- CNN-based Variational Autoencoders (VAEs) can also be used.
- the training methods that can be used with the ML-based IBCS technologies described in this specification include obtaining images, e.g., raw images, of particles (e.g., cells), preprocessing the images, and cataloguing the image (FIG. 34). Images of particles are obtained, e.g., by imaging a plurality of cells in a microfluidic system using the imaging technologies described in this specification. Each cell can be imaged two or more times (a series of images of the same cell), e.g., to form an average or add additional images that capture the same cell from a slightly different angle to enhance the quality of the catalogue and thus improve the predictive capability of the algorithm. Cell images are then pre-processed, e.g., as shown in FIG. 35.
- thresholding methods can be used, e.g., to segment one or more regions from an image, e.g., by creating a binary image.
- Morphological operations can be performed on the image, e.g., dilation (adding pixels to the boundaries of objects in an image) and/or erosion (removing pixels on object boundaries).
- One or more image filters can be applied, e.g., to filter images based on, e.g., size, shape, or texture of the image.
- a region of interest (ROI) can be extracted from each image, e.g., a region containing the image of the cell (e.g., the center of mass (CM) of a labeled region).
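The preprocessing steps above (thresholding, morphological operations, size filtering, and ROI extraction around the center of mass) can be sketched in NumPy. The kernel size, area threshold, and ROI size below are illustrative placeholders, not values from this description.

```python
import numpy as np

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    out = np.zeros_like(mask)
    for dy in range(-pad, pad + 1):
        for dx in range(-pad, pad + 1):
            out |= np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
    return out

def erode(mask, k=3):
    """Binary erosion (dual of dilation)."""
    return ~dilate(~mask, k)

def preprocess(img, thresh, min_area=5, roi=16):
    """Threshold -> morphological closing -> size filter -> crop an ROI
    around the center of mass of the segmented region."""
    mask = img > thresh                      # binary segmentation
    mask = erode(dilate(mask))               # closing: fill small gaps
    if mask.sum() < min_area:
        return None                          # size filter: nothing cell-like
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())  # center of mass of the mask
    half = roi // 2
    return img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]

img = np.zeros((64, 64))
img[30:38, 20:28] = 1.0                      # synthetic bright "cell"
crop = preprocess(img, 0.5)
print(crop.shape)  # (16, 16)
```

A production pipeline would typically use an image-processing library for the morphology and labeling steps; the loop-based dilation here just keeps the sketch self-contained.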
- FIG. 36A shows an example cell catalogue filtered by size, shape, and texture parameters.
- FIG. 36B shows scatter plots illustrating the distribution of a cell population in terms of area, eccentricity (shape factor), and standard deviation of pixel values in the image (e.g., if no cell is present, std dev is 0).
- the cell catalogue can then be used as input for the predictive model (FIG. 37).
- the predictive models described in this specification provide a feature-agnostic approach, and so do not need to explicitly measure cell features (e.g., cell size or shape) for any real-time inference for cell sorting. Such features may be measured in the catalogues for secondary data analysis steps or for refining the catalogue data that is then used for supervised training, but these features are not assessed to provide an output of the inference/sorting algorithm.
- An example model that can be used with the technologies described in this specification is or includes Convolutional Neural Networks (CNN). CNNs are deep learning models for image segmentation and classification tasks. The models can be trained in a supervised manner with labeled data.
- each image of a cell is labeled with a label identifying the cell type.
- the trained model can then be used to predict a cell type in images of cells previously unseen.
- a CNN based on the LeNet-5 architecture can be used. This model is comparatively small (~61,000 learnable parameters) and fast (e.g., <1 ms or <0.5 ms inference time).
- FIG. 38 illustrates an example CNN that can be used with the technologies described in this specification.
- a pixel map of a cell is used as input.
- a first convolutional layer is generated consisting of six convolutional kernels of size 5x5, which “sweep” the input image. This process outputs six images of size 28x28.
- This first layer of the convolutional neural network can identify basic characteristics or features of the cell.
- a subsampling layer is then generated.
- the subsampling layer is an average pooling layer: Each square of four pixels in the previous output is averaged to a single pixel.
- the subsampling layer is a maximum pooling layer: The maximum of four pixels in the previous output is selected as single pixel (value).
- the subsampling scales down the six 28x28 images by a factor of 2, outputting six images of size 14x14.
- the second convolutional layer consists of 16 convolutional kernels, each of size 5x5, which take the six 14x14 images and sweep them, producing 16 images of size 10x10.
- the second average pooling layer scales down the sixteen 10x10 images to sixteen 5x5 images.
- a fully connected convolutional layer with 120 outputs is then created. Each of the 120 output nodes is connected to all of the 400 nodes (5x5x16) that came from the second pooling layer.
- the output is a 1D array of length 120.
- a fully connected layer of length 84 maps the 120-array, and a final layer maps it to a new array of length 4. Each element of the array now corresponds to, e.g., a cell type.
- a (softmax) function transforms the output into a probability distribution of e.g., four values, which sum to 1.
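The layer sizes quoted above can be verified with simple shape arithmetic (valid convolution: out = in − kernel + 1; non-overlapping 2×2 pooling halves each side), assuming a 32×32 input pixel map as in the LeNet-5 architecture.

```python
def conv_out(size, kernel):
    """Spatial size after a valid (no-padding) convolution."""
    return size - kernel + 1

def pool_out(size, factor=2):
    """Spatial size after non-overlapping average/max pooling."""
    return size // factor

s = 32                # input pixel map (assumed, per LeNet-5)
s = conv_out(s, 5)    # C1: six 5x5 kernels -> six 28x28 maps
s = pool_out(s)       # S2: subsample -> six 14x14 maps
s = conv_out(s, 5)    # C3: sixteen 5x5 kernels -> sixteen 10x10 maps
s = pool_out(s)       # S4: subsample -> sixteen 5x5 maps
flat = s * s * 16     # nodes feeding the fully connected layers
print(s, flat)        # 5 400
```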
- the input for the CNN models described in this specification may be (single channel) brightfield image data, e.g., as described above.
- the CNN models can be extended to utilize multi-channel image input and are capable of crossleveraging the information content in each channel to boost their predictive capacity.
- Other image modalities, for example fluorescence imaging (e.g., multi-channel fluorescence imaging), can be used with the machine-learning methods described herein, each alone or in combination, e.g., in combination with brightfield images.
- FIGS. 39A and 39B show two sets of images that can be used as input for a machine learning algorithm for IBCS: FIG. 39A is a set of monocyte images, and FIG. 39B is a set of THP1 cell images.
- FIG. 39C is a set of graphs illustrating predictive accuracy of a machine learning algorithm for an IBCS trained on the image sets of FIG. 39A and FIG. 39B.
- the CNN algorithm used in this example uses example images from the training set(s) to learn weights that minimize the loss function shown in the upper graph, which decreased over time as expected.
- the loss function used in ML techniques is defined as the difference between an actual output and a predicted output from a model for a (single) training example. This computed difference from the loss function (e.g., regression loss or multiclass classification loss function) is known as the error value.
- the accuracy of the trained model was evaluated on a randomly split held-out test set and is shown in the lower graph.
- the CNN algorithm was able to distinguish monocytes from THP1 cells with >90% accuracy, well over the 50% expected for random guessing.
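The multiclass classification loss (cross-entropy over softmax probabilities) and the accuracy metric referenced above can be sketched in NumPy. The logits and labels below are made up for illustration, not values from the experiment.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with a max-shift for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, labels):
    """Mean negative log-probability assigned to the true class."""
    p = softmax(logits)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

def accuracy(logits, labels):
    """Fraction of examples whose highest-scoring class is correct."""
    return np.mean(logits.argmax(axis=1) == labels)

logits = np.array([[4.0, 0.1], [0.2, 3.0], [2.5, 2.6]])  # made-up 2-class scores
labels = np.array([0, 1, 0])          # third example is misclassified
print(accuracy(logits, labels))       # ~0.667 (2 of 3 correct)
```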
- monocytes were stimulated with macrophage colony-stimulating factor (M-CSF) to differentiate into M0 macrophages.
- M0 macrophages were further stimulated to differentiate the cells into M1 using Lipopolysaccharide (LPS) and Interferon-γ (IFN-γ), into M2a using Interleukin-4 (IL4), and into M2b using Interleukin-10 (IL10) phenotypes, as illustrated in FIG. 40.
- Embedding layers from the fully connected layers generated by the CNN model were extracted and visualized in t-Distributed Stochastic Neighbor Embedding (t-SNE) plots shown in FIG. 43A and FIG. 43B.
- In t-SNE plots, similar cells are placed close together and different cells are placed further apart on a 2D graph. The distances between cells in the 2D or 3D plot aim to capture the differences between cells in high-dimensional space.
- the CNN model was capable of discerning subpopulations of cells by structure. For example, the results indicate that the M1 subpopulation is most distinct from subpopulations M0, M2a, and M2b.
- VAEs can be used to characterize subpopulation structure in heterogeneous cell populations, such as stimulated macrophages.
- VAEs include an encoder, a decoder, and a loss function as illustrated in FIG. 44.
- VAE is an unsupervised method useful for learning a compact low-dimensional representation of the image in a lower dimensional latent space z.
- the task of the encoder (which is CNN-based) and decoder is to determine the encoder/decoder combination that can best reconstruct the image. This mapping is learned by minimizing a loss function that measures the error between the original image and the reconstructed image.
- VAE can be viewed as an unsupervised method that can uncover the underlying generative distribution of the image dataset.
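The VAE objective sketched above is conventionally the sum of a reconstruction term and a KL-divergence term that pulls the encoder's latent distribution toward a unit-Gaussian prior. The NumPy sketch below shows the textbook closed form for a diagonal-Gaussian posterior, not the specific implementation in this description.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, exp(log_var)) || N(0, 1) ), summed over
    the latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def vae_loss(x, x_recon, mu, log_var):
    """Reconstruction error plus the KL regularizer on the latent code."""
    recon = np.sum((x - x_recon) ** 2)    # squared-error reconstruction term
    return recon + kl_to_standard_normal(mu, log_var)

# the KL term vanishes exactly when the posterior matches the prior
print(kl_to_standard_normal(np.zeros(8), np.zeros(8)))   # 0.0
```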
- the latent vectors visualized by t-SNE show a similar subpopulation structure to the CNN embeddings illustrated in FIGS. 43A and 43B.
- both methodologies show that the M1 subpopulation is most distinct from the M0 and M2a/b populations.
- the IBCS technologies include YOLO algorithm-based sorting technology.
- the YOLO (you-only-look-once) model is a state-of-the-art algorithm for detecting and classifying objects in images, e.g., as illustrated in FIG. 46.
- the YOLO algorithm can be used in demanding real-time applications, such as self-driving cars. It achieves its performance by utilizing one forward pass of a CNN in which all possible objects located in a suitably fine grid are computed in a brute force manner.
- Candidate boxes are scored using Intersection Over Union (IOU) relative to the ground truth boxes in the training dataset. For each grid, only the top IOU score for each object is retained (a process termed non-max suppression).
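The IOU scoring and non-max suppression steps described above can be sketched in plain Python. Boxes are (x1, y1, x2, y2) tuples; the coordinates and scores below are illustrative.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def non_max_suppression(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop any remaining box that overlaps
    it above the threshold, and repeat on the remainder."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(non_max_suppression(boxes, scores))  # [0, 2]: box 1 overlaps box 0
```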
- An example sorting workflow is illustrated in FIG. 47.
- the THP1 cell can be sorted out from the monocyte using a single pass of the YOLO algorithm without requiring a trigger to acquire an image and/or segment an ROI that will be sent to a CNN model as described above in order to perform a sorting decision.
- Other advantages of YOLO are that it is more sensitive to smaller cells or unwanted small objects like debris and that it has a larger detection dynamic range, which can be important for detecting small cells or subcellular debris.
- Another application of the imaging and IBCS technologies described in this specification is cell panning with Yeast Surface Display (YSD).
- This technique includes co-localizing yeast expressing a binding ligand of interest with mammalian cells expressing the binding target.
- the cell panning YSD can be used, e.g., to identify and enrich yeast cells expressing a ligand of interest from a diverse population of yeast cells.
- An example image of a mammalian cell with smaller, co-located yeast cells is shown in FIG. 49A.
- a gradient image of the same objects is shown in FIG. 49B.
- the gradient image can be used as input for a Hough transform operation to identify the yeast cells.
- Hough transform is a classical image processing algorithm for detecting circular objects and can be used to detect yeast. Applying the Hough transform to the image in FIG. 49B showed that this algorithm was unable to detect all the yeast cells visible to the naked eye.
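A simplified, fixed-radius circular Hough transform of the kind described above can be sketched in NumPy: each edge pixel votes for candidate circle centers one radius away, and the accumulator peak marks the detected center. This is an illustrative sketch with a synthetic image, not the algorithm applied in the experiments:

```python
import numpy as np

def hough_circle(edges, radius, n_angles=64):
    """Accumulate center votes for circles of a known radius.
    edges: 2D boolean array of edge pixels."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    ys, xs = np.nonzero(edges)
    for t in np.linspace(0, 2 * np.pi, n_angles, endpoint=False):
        # each edge point votes for a candidate center one radius away
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    return acc

# Synthetic edge image: a circle of radius 10 centered at (row 32, col 40)
h, w, r = 64, 80, 10
edges = np.zeros((h, w), bool)
ang = np.linspace(0, 2 * np.pi, 200)
edges[np.round(32 + r * np.sin(ang)).astype(int),
      np.round(40 + r * np.cos(ang)).astype(int)] = True
acc = hough_circle(edges, r)
peak = np.unravel_index(acc.argmax(), acc.shape)  # near (32, 40)
```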
- a YOLO algorithm is a subtype of convolutional neural network that may both classify and localize an object; for example, both recognizing and locating an object.
- the YOLO may be used to classify and obtain the bounding boxes of yeast cells and mammalian cells in an image that contains both yeast cells and mammalian cells.
- FIG. 50 illustrates the detection of yeast in a photograph of mammalian cells with attached yeast cells using Hough transform (red) and YOLO algorithm (green).
- the technologies described in this specification can be used for cell line development.
- Stable cell line development is used for creating, e.g., well-characterized protein-producing cells that can be utilized to scale up drug discovery research, generate proteins for animal experiments, support clinical trials, and eventually scale up drug production.
- the current processes can be expensive and time-consuming and are often outsourced.
- Stable cell line development is a lengthy process (>1 month) that results in a few stable unique clones of a protein expressing cell line.
- The process includes a transfection step that introduces DNA encoding a protein-of-interest as well as a selectable marker that confers the ability for transfected cells to survive a metabolic selection process.
- the majority of the cells are non-transfected and are killed-off during a multi-week metabolic selection.
- the stably transfected cells which are few in number, survive the metabolic selection and proliferate.
- the transfected cells with the highest growth rate tend to overtake the population leading to an uneven distribution in the fully recovered transfected pool. This disparity leads to an inefficient process whereby many clones must be sampled for downstream characterization to ensure adequate representation of the original transfected cells.
- This step can be significantly optimized with the technologies described in this specification providing significant reduction in process time and cost.
- Image-based Cell Sorting can be used as a label-free method to identify stably transfected cells at early timepoints in the metabolic selection process, prior to or during the major outgrowth period. IBCS can be used to gently sort these stably transfected cells, cutting, e.g., two weeks off the typical development protocol time and increasing the number and quality of unique clones that move into the characterization stage of cell line development.
- An example Cell Line Development process is illustrated in FIG. 51.
- host cells, e.g., Chinese Hamster Ovary (CHO) cells, are electroporated.
- Electroporated cells are placed into selective pressure to enrich stable integrant clones. Only a small set of the electroporated cells are stably integrated; the non-transfected cells are killed.
- Typical modes of Selection include Glutamine Synthetase (GS)-based and Antibiotic selection methods.
- the surviving transfected cells are recovered, sorted, and imaged. Clones are then expanded and pooled.
- a problem associated with this process is that there is no assurance that clones recovered from pools are not siblings. Outgrowth is often skewed by clones with a high growth rate, which is not necessarily predictive of high productivity and desirable product quality. To ensure 'true' heterogeneity, a high number of clones with varied attributes must be evaluated.
- Image Based Cell Sorting combines the high throughput of a FACS system with the imaging resolution of a research grade microscope and can be used to address the problems of standard cell line development.
- IBCS is a technology that images cells rapidly (e.g., at 1000 cells per second or more) in a flow cytometry-style device and can employ a real-time Machine Learning inference engine to classify and sort cells at, e.g., 100 cells per second.
- An example process is illustrated in FIGS. 52A-52C.
- In step 1 (FIG. 52A), large image data sets are generated for training of a machine learning algorithm (e.g., sets of T-Cells at various stages of activation). Cells can be classified by phenotype and/or label-free features of the cells can be used for classification.
- In step 2 (FIG. 52B), a machine learning model is developed.
- hypothesis-driven or discovery-driven approaches can be used.
- embeddings (e.g., from a CNN algorithm) can be used.
- In step 3 (FIG. 52C), the machine learning algorithm is implemented on the IBCS to image and sort cells, e.g., based on one or more of the machine learning embeddings.
- the sorting process can be combined with or followed by a number of biological techniques including sequencing, cell culture/expansion, omics (e.g., proteomics), mass spectroscopy, or FACS.
- the trained model can be used to predict cell type given unseen cell images.
- a CNN based on the LeNet-5 architecture with ~61,000 learnable parameters and a short (<1 ms) inference time is used.
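The ~61,000-parameter figure is consistent with the classic LeNet-5 layer sizes. A quick parameter count (assuming the standard 6@5x5 and 16@5x5 convolutions followed by 120/84/10 fully connected layers, which is an assumption about the exact variant used) can be worked as:

```python
def conv_params(k, c_in, c_out):
    # each output channel has a k x k x c_in kernel plus one bias
    return (k * k * c_in + 1) * c_out

def fc_params(n_in, n_out):
    # fully connected layer: weight matrix plus one bias per output
    return (n_in + 1) * n_out

# Classic LeNet-5: 1x32x32 input -> conv 6@5x5 -> pool -> conv 16@5x5
# -> pool -> FC 120 -> FC 84 -> FC 10
total = (conv_params(5, 1, 6)          # 156
         + conv_params(5, 6, 16)       # 2,416
         + fc_params(16 * 5 * 5, 120)  # 48,120
         + fc_params(120, 84)          # 10,164
         + fc_params(84, 10))          # 850
# total = 61,706, consistent with the ~61,000 figure cited above
```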
- Resulting data is shown on PCA/t-SNE plots, whereby spatial separation of populations is interpreted as unique cell populations that can be identified by embeddings and can thus be actionably sorted on the IBCS.
- results of the initial study are shown in FIGS. 54A-54D. All three populations of cells could be uniquely identified in a label-free manner using the IBCS. As such, transfected cells could be sorted out at an early timepoint to increase the rate of unique clones being screened during stable cell line development.
- the following hypothesis could be formulated: 1. Immediately post transfection, cells go from healthy state (blue / FIG. 54A) to physically damaged state (green / FIG. 54C). 2. After transfection, all cells will temporarily recover and trend back to a more healthy state. 3. Upon selection, most cells will trend toward the metabolically stressed sub-population and die off (orange / FIG. 54B). 4. The small population of stably integrated cells will remain healthy and eventually overtake the total cell population (blue / FIG. 54A). A graph summarizing the results is shown in FIG. 54D.
- the healthy phenotype appears distinct from the unhealthy metabolically starved phenotype during the stable cell line development process by day 14, and potentially earlier.
- This signal is actionable for cell sorting. Sorting stably transfected cells at an early timepoint can reduce the cell line development timeline by ⁇ 2 weeks and increase the number of characterized unique clones, resulting in significant cost savings.
- IBCS provides additional benefits in that cells are gently sorted (cells experience a lower pressure drop of ~1-2 psi compared to >100 psi in a FACS system), resulting in less cell damage. Cells can be sorted in a label-free manner, avoiding the use of fluorophores. As such, the IBCS is well suited for generating cell lines for chemistry, manufacturing, and controls (CMC) manufacturing in regulated industries. In some implementations, the technologies described in this specification can be used for optimized metabolic selection.
- Green fluorescent protein was used as a reporter in the experiment. Selection may progress differently for a molecule of interest that is secreted from a population of cells. Different molecules may impart different phenotypes that could be leveraged as sort criteria for IBCS / ML analysis.
- Glutamine Synthetase was used as a selection system in the experiment.
- Metabolic (auxotrophic) selection systems e.g., GS, or dihydrofolate reductase (DHFR)
- antibiotic selection systems, negative selection (e.g., thymidine kinase) systems, or hybrid selection (e.g., metabolic and antibiotic) systems can be used.
- a random integration system to integrate a gene of interest into the DNA of a cell population was used in the experiment.
- targeted integration (e.g., using transposase technology) or site-specific integration (e.g., using recombinase technology) can be used.
- a fluorophore at the integration site allows for sort of “darks” and/or provides a fluorescence-based sort criterion.
- a “cluster” phenotype as shown in FIG. 57 associated with early recovery was used in the experiment.
- a cluster phenotype provides binary sorting for “cluster” attribute.
- cells can be dissociated mechanically or using cell dispersing agent to provide single-cell sort.
- the technologies described in this specification can be used for micro physiological systems (MPS), e.g., for high-throughput screening of multi-cellular organoids or spheroids.
- the example system described is gentle enough to handle spheroids without breakage and is adapted to avoid clogged fluidics.
- Organoids, e.g., spheroids, are large (~50 um to 1 mm in diameter) and can contain hundreds to 100,000 cells.
- An example spheroid with a diameter of about 500 um is shown in FIG. 58.
- An example system and process adapted for imaging and sorting of spheroids is illustrated in FIG. 59. This process mirrors the processes described above with respect to particles, e.g., as described in FIG. 2.
- the spheroid triggers the ETS based on a signal from laser 190.
- the spheroid is imaged using the imaging device(s) 160 and classified.
- sorting device 170 is actuated to guide the spheroid into destination channel 120.
- Imaging speed can be a bottleneck for scaling spheroid screening to a high-throughput platform.
- High-throughput platforms like the IBCS described in this specification can significantly reduce imaging time to allow for screening of more variants while providing large ensemble-level statistical information within a given spheroid population.
- Estimated screening times for various imaging platforms are shown in the table below. Estimated times for Conventional Confocal Microscopy and (standard) high-throughput imagers may depend on image acquisition settings, resolution, color channels, etc. Estimated times for IBCS imaging are calculated based on typical event rates (10-100 per second).
- An IBCS as described above in this specification can include modifications or adaptations for the imaging of organoids/spheroids, mainly pertaining to the fluidic system and optics. All modifications or adaptations are adaptable for a range of particle sizes from single cells (5 um) to multi-cellular spheroids containing tens of thousands of cells (up to ~1 mm).
- the adaptation of the design can include modification to the microfluidic system.
- one or more components of the microfluidic system are scaled up.
- a channel width and/or depth can be scaled up from a channel having a width of 100 um and a depth of 30 um to a channel having a width of 1000 um and a depth of 1000 um. Larger channels reduce fluidic clogging and reduce shear forces, which can break spheroids.
- 3D-printing can be used to quickly manufacture lithographic molds for PDMS chip fabrication for fast adaptation of the device.
- a range of channel dimensions, e.g., widths of 500, 750, 1000, and 1500 um, has been implemented to accommodate small-to-large spheroids.
- Larger tubing, e.g., having an internal diameter of 1 mm or more, can be used.
- An example connection of a channel to tubing can be implemented using 19-gauge pins.
- Pumps can also be adapted for high volumetric flow rates associated with the larger channels, e.g., using 25ml syringes for sheath pumps.
- Optics for imaging the larger channels can be adapted, e.g., using a 10x objective instead of a 20x objective to image a channel.
- Example images can be acquired using a full sensor on a brightfield camera with a field of view of approximately 500 x 350 um.
- Example system parameters for single cell and organoid/spheroid imaging are shown in the table below.
- FIGS. 60A-60C illustrate components of a system as described (e.g., an IBCS) and adapted for organoid/spheroid imaging and sorting.
- FIG. 60A shows a pump set-up with large capacity syringes to accommodate higher flow rates compared to systems for cell sorting.
- FIG. 60B shows part of an IBCS system, configured as a microfluidic chip with Ixlmm channels and larger ID tubing (compared to cell sorting systems) mounted for imaging and/or sorting organoids/spheroids.
- FIG. 60C shows bead images obtained using optics for larger particle imaging while cataloguing 20 um diameter beads with a 10x objective.
- example spheroids with sizes ranging from about 50um to about 300um were opportunistically imaged using a device with a 1x1 mm (width x height) channel (see FIG. 61).
- the flow rate was about 4 ml/min (~0.07 m/s).
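The quoted velocity follows directly from the volumetric flow rate and the channel cross-section. A quick check, assuming a simple mean (plug-flow) velocity in the 1 x 1 mm rectangular channel:

```python
def mean_channel_velocity(flow_ml_per_min, width_mm, height_mm):
    """Mean flow velocity (m/s) from volumetric flow and a rectangular
    channel cross-section."""
    q = flow_ml_per_min * 1e-6 / 60.0               # ml/min -> m^3/s
    area = (width_mm * 1e-3) * (height_mm * 1e-3)   # mm x mm -> m^2
    return q / area

# 4 ml/min through a 1 x 1 mm channel -> ~0.067 m/s, matching the
# ~0.07 m/s figure quoted above
v = mean_channel_velocity(4.0, 1.0, 1.0)
```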
- Each image window is approximately 490 x 350 um.
- a system with brightfield and/or fluorescence imaging is sufficient for organoid/ spheroid applications without any additional modality.
- supervised learning can be used for organoid/spheroid characterization.
- spheroids of known key attributes are catalogued using a system. Catalogues of images can be used to generate machine learning models that can analyze experimental organoid/spheroid data. Organoids/spheroids are screened and classified relative to the known spheroid catalogues.
- a so-called “ghost cytometry” approach can be used.
- training data from high-resolution confocal microscopy and from the imaging devices of the system (e.g., a modified IBCS) can be used to learn a mapping scheme that can generate high-resolution image data from low-resolution brightfield or fluorescence image data.
- machine-learning tools can be utilized to map low-resolution (high acquisition speed) fluorescence image data from a modified IBCS to high-resolution (low-speed) 3D image data collected by confocal microscopy.
- mapping information can be used for high-speed categorization of images, e.g., of organoids/spheroids from low-resolution brightfield or fluorescence image data.
- a system can include a confocal-type imaging device.
- the confocal-type imaging device can include a pinhole and laser optics along with a PMT detection system as described herein.
- FIG. 63 is a flowchart illustrating an example particle sorting method 6300.
- the method 6300 may include flowing (6310) a particle through a main channel of a microfluidic device (e.g., the system 100).
- the method 6300 may include acquiring (6320) one or more images of a section of the main channel, including the particle.
- the system 100 may acquire the image(s) (e.g., using one or more imaging devices 160) in response to a signal from the event tracking system (ETS) indicating the presence of a particle flowing in the main channel.
- the ETS may detect the particle by various means including laser scattering and/or absorption, image-based detection, and/or electrical impedance-based detection as described herein.
- the system may detect a particle in two or more locations. Based on a time elapsed between a first point of detection and a second point of detection, the system may estimate a velocity of the particle through the main channel. Based on the signal from the ETS, the system may, following a delay, trigger the imaging device(s) to capture the image(s) of the main channel. The delay may be configured based on an expected transit time of the particle through the main channel such that the image(s) is/are acquired at an estimated time of arrival of when the particle is expected to be within the field of view of the imaging device(s).
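The delay computation described above reduces to a simple velocity/transit-time calculation. A minimal sketch; the sensor spacing and distance-to-field-of-view values below are illustrative, not the device's actual geometry:

```python
def imaging_trigger_delay(t1_s, t2_s, sensor_spacing_m, distance_to_fov_m):
    """Estimate particle velocity from two detection timestamps, then return
    the delay before triggering the camera so the particle is expected to be
    within the imaging field of view."""
    velocity = sensor_spacing_m / (t2_s - t1_s)   # m/s between detection points
    return distance_to_fov_m / velocity           # expected transit time

# Particle crosses two detection points 100 um apart, 1 ms apart -> 0.1 m/s;
# the field of view 500 um downstream is reached after a 5 ms delay.
delay = imaging_trigger_delay(0.000, 0.001, 100e-6, 500e-6)
```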
- the system may include multiple imaging devices (e.g., cameras), each of which is configured to capture a different type of image.
- one of the cameras may be configured to capture a brightfield image (e.g., transmission of illumination through the particle), and another one of the cameras may be configured to capture a fluorescence image (e.g., fluorescent light emitted from the particle).
- one or both cameras may capture multiple images of the particle (e.g., in quick succession). The sequence of images may track transit of the particle across the field of view of one or more of the cameras (e.g., allowing the system to determine a particle trajectory and/or a flow velocity, etc.).
- the images may be captured with different exposure lengths, which may, for example, improve the dynamic range of the imaging device(s).
- a multi-gated exposure sequence as described below may include at least one first image capture with a short exposure to image a brightly fluorescing particle without saturating the image sensor, and at least one second image capture with a long exposure to image a dimly fluorescing particle.
- Use of an image capture sequence with different exposure durations may enable the capture of images with signal-to-noise ratios adequate for extracting features to, for example classify a particle according to phenotype, as described herein.
- the method 6300 may include processing (6330) the image(s) acquired at the Stage 6320 to extract one or more characteristics of the particle.
- the characteristic(s) may correspond to a particular phenotype of interest.
- the processing may isolate, amplify, and/or otherwise emphasize one or more characteristics of interest to facilitate classifying the particle.
- the processing may include, for example, locating a particle in a first image (e.g., a brightfield image) acquired by the first imaging device, determining a first region of interest in the first image data that includes the particle, and using the first region of interest to locate a corresponding second region of interest in a second image (e.g., a fluorescence image) acquired by the second imaging device.
- additional image processing such as dilation, edge detection, masking, and/or infilling, etc. may be performed.
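The ROI-mapping step described above (locate the particle in the brightfield image, then reuse the region of interest on the fluorescence image) might be sketched as follows. The thresholding approach and the assumption of perfectly co-registered frames are illustrative simplifications, not the system's actual processing pipeline:

```python
import numpy as np

def particle_roi(brightfield, thresh, pad=2):
    """Locate a dark particle in a brightfield image and return a padded
    bounding-box ROI as (y0, y1, x0, x1)."""
    mask = brightfield < thresh          # particle absorbs light -> dark pixels
    ys, xs = np.nonzero(mask)
    h, w = brightfield.shape
    return (max(ys.min() - pad, 0), min(ys.max() + pad + 1, h),
            max(xs.min() - pad, 0), min(xs.max() + pad + 1, w))

def crop(image, roi):
    y0, y1, x0, x1 = roi
    return image[y0:y1, x0:x1]

# Synthetic example: a dark 4x4 particle on a bright background,
# with a co-registered (hypothetical) fluorescence frame
bf = np.full((32, 32), 200.0)
bf[10:14, 20:24] = 50.0
fl = np.zeros((32, 32))
roi = particle_roi(bf, thresh=100.0)
fl_patch = crop(fl, roi)  # the same ROI reused on the fluorescence image
```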
- the method 6300 may include classifying (6340) the particle based on the characteristic(s) of the particle extracted at the Stage 6330.
- the particle may be classified using machine-learning techniques.
- machine-learning techniques may include processing the acquired image(s) and/or extracted feature(s) using a machine-learning model such as a Convolutional Neural Network (CNN) to determine an embedding representing the extracted characteristic(s).
- CNN Convolutional Neural Network
- the embedding may be used to determine a classification and, in some cases, a confidence score corresponding to the classification.
- Non-machine-learning techniques may include determining whether an image is in focus or out of focus, and/or measuring a signal-to-noise ratio (SNR) of the image to determine whether the particle may be accurately classified.
- Non-machine-learning techniques may additionally include applying masking to determine a ratio of perimeter to interior fluorescence in an image.
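The perimeter-to-interior fluorescence ratio might be computed along these lines, using a one-pixel binary erosion to separate the particle's rim from its interior. This is a hedged sketch, not the system's actual masking implementation:

```python
import numpy as np

def erode(mask):
    """One step of 4-neighbor binary erosion implemented with array shifts."""
    m = mask.copy()
    m[1:, :] &= mask[:-1, :]
    m[:-1, :] &= mask[1:, :]
    m[:, 1:] &= mask[:, :-1]
    m[:, :-1] &= mask[:, 1:]
    return m

def perimeter_interior_ratio(fluor, mask):
    """Ratio of mean fluorescence on the particle's rim to its interior."""
    interior = erode(mask)
    perimeter = mask & ~interior
    return fluor[perimeter].mean() / fluor[interior].mean()

# Synthetic check: uniform fluorescence over the particle -> ratio of 1.0
mask = np.zeros((16, 16), bool)
mask[4:12, 4:12] = True
fluor = np.full((16, 16), 10.0)
r = perimeter_interior_ratio(fluor, mask)
```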
- the classification may include assigning the particle to a particular group (e.g., a first group or a second group).
- the first group may correspond to, for example, particles that do not match a desired phenotype and/or other selection criteria, while the second group may correspond to a particular phenotype to be collected (e.g., for further processing and/or examination, etc.).
- the assignment may be based on, for example, a result of one or more classification and/or selection processes.
- a machine-learning process may output a classification and a confidence score corresponding to a likelihood that the particle corresponds to the indicated classification.
- the system may assign the particle to the first group if the category corresponds to the first group and the confidence score meets or exceeds a threshold.
- the threshold may be configured based on the desired results of the sorting/collection.
- the system may be configured to use a higher threshold if the particles are to be sorted with a higher purity (e.g., fewer false positives among the collected particle) or the system may be configured to use a lower threshold if faster and/or more efficient collection of particles (e.g., fewer false negatives) is desired and/or the purity of the collected particles (e.g., the proportion of the collected particles that corresponds to the desired phenotype) is less important.
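The confidence-threshold gating described above can be sketched as a small decision function; the class names and threshold values here are illustrative:

```python
def sort_decision(predicted_class, confidence, target_class, threshold):
    """Return 'target' only when the classifier picks the desired phenotype
    with confidence at or above the configured threshold; everything else
    goes to waste. A high threshold favors purity (fewer false positives);
    a low threshold favors yield (fewer false negatives)."""
    if predicted_class == target_class and confidence >= threshold:
        return "target"
    return "waste"

# Example: sorting for a hypothetical phenotype_B at 0.90 purity threshold
d1 = sort_decision("phenotype_B", 0.97, "phenotype_B", 0.90)  # confident match
d2 = sort_decision("phenotype_B", 0.80, "phenotype_B", 0.90)  # below threshold
```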
- the system may be configured to implement one or more other selection criteria when assigning particles to a particular group. For example, the system may be configured to discard particles based on image quality.
- Out-of-focus particles may confound the classification process and reduce the accuracy of the result; thus, the system may determine whether the particle is in focus or out of focus, and assign out-of-focus particles to the first group regardless of their classification (and/or discard them prior to classification).
- particles that emit a weak fluorescence signal may be represented as dim pixels (e.g., close to the noise in the image) in the fluorescence image, which may also confound the classification process.
- the image may be processed to determine whether the SNR of the fluorescence signal in the image is above a threshold, and assign a particle to the first group if the SNR is below the threshold.
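One common, generic way to implement the focus check is a variance-of-Laplacian sharpness score; this is an illustrative choice, as the specification does not name a specific focus metric:

```python
import numpy as np

def laplacian_variance(img):
    """Variance of a 4-neighbor Laplacian: a common sharpness score.
    In-focus images have strong local contrast and score high; blurred
    or featureless images score low."""
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

sharp = np.indices((16, 16)).sum(axis=0) % 2 * 1.0  # checkerboard: high contrast
flat = np.full((16, 16), 0.5)                       # featureless: no contrast
in_focus = laplacian_variance(sharp) > laplacian_variance(flat)
```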
- the method 6300 may include determining (6350) whether the particle has been assigned to group 2 (e.g., the second group) in the Stage 6340. If not (“No” at 6350), the system may allow (6360) the particle to flow to the first destination channel.
- the first destination channel may lead to, for example, a waste destination reservoir from which particles may be discarded.
- the microfluidic device may be configured such that flow through the main channel naturally flows mostly or entirely from the main channel into the first destination channel when a sorting device of the system (e.g., the sorting device 170) is not actuated (e.g., by configuring a cross section and/or shape of the first destination channel to be larger than the cross section and/or shape of the second destination channel).
- the particle may flow to the waste destination reservoir.
- the system may be configured to actuate (6370) the sorting device to direct the particle to the second destination channel.
- the second destination channel may lead to a second destination reservoir (e.g., a target destination reservoir).
- Particles of interest, e.g., those determined to express a desired phenotype, may be directed into the second destination channel for collection.
- FIG. 64 is a schematic diagram of an Image Based Cell Sorter (IBCS) application 6400, according to embodiments of the present disclosure.
- the IBCS application 6400 may include various processes including a user interface (UI) process, an ETS handler process, a video display process, a state machine process, a cell tracking process, an image capture process, a fluidic control process, and/or one or more cell sort worker processes (e.g., labeled cell sort worker 1, cell sort worker 2, ... cell sort worker n).
- a process may include threads as indicated in FIG. 64; for example, the UI process may include an event handler thread and a UI updater thread.
- Certain processes may include one or more sub-processes; for example, in implementations in which the imaging device 160 includes multiple imaging devices, the image capture process may include brightfield camera and fluorescence camera sub-processes, each having an event handler thread and an image capture thread.
- the fluidic control process may have a flask server sub-process with multiple automation handler threads.
- the fluidic control process may include other threads for control of various actuators including a distribution valve, a sample valve, one or more sample pumps, and one or more sheath pumps, etc.
- a user may configure the number of cell sort worker processes in the IBCS application 6400. Using multiple sort worker processes may increase throughput of the system by reducing the latency between image acquisition and sort decision. For example, one cell sort worker process may be configured to process an image and send out a first sorting decision while a second cell sort worker process may be configured to process the same image (and/or a second image corresponding to the same particle) during an overlapping time period (e.g., partially or fully in parallel), and so on for additional cell sort worker processes.
- Use of multiple cell sort worker processes may allow for more complex image analysis and sorting statistics to be calculated in real-time; that is, during a duration of time it takes a particle to flow from the imaging region (e.g., the first section 136) of the microfluidic device to a sorting region (e.g., a region of the main channel 130 just upstream of the bifurcation 134).
- the IBCS application 6400 may include an additional process to control the order of sort decisions, and match sort decisions to the correct events (e.g., the same particle).
- the processing time of a cell sort worker process may vary based on one or more factors including, for example, a size of the particle (e.g., image data corresponding to a larger particle may include more pixels), complexity of the processing, and/or size of a model employed to determine a phenotype.
- a cell sort worker control process may match the output of a cell sort worker with the proper event (e.g., particle).
- the cell sort worker control process may send the sorting decision to the ETS 180. Once a cell sort worker process has returned its sorting decision, it may wait for a next available image to process.
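The multiple-worker arrangement with event matching might be sketched with threads and queues as below. The real application uses separate processes and a richer control flow, so this is only a schematic illustration; the classifier is a stand-in for the ML model:

```python
import queue
import threading

work_q, result_q = queue.Queue(), queue.Queue()

def classify(image):
    # stand-in for the ML inference engine (hypothetical rule)
    return "target" if sum(image) > 10 else "waste"

def worker():
    """Pull (event_id, image) items, classify, and tag results with the
    event id so out-of-order completions can be re-associated later."""
    while True:
        item = work_q.get()
        if item is None:   # sentinel: shut down
            break
        event_id, image = item
        result_q.put((event_id, classify(image)))

threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for event_id, image in enumerate([[1, 2], [20, 5], [0, 0], [9, 9]]):
    work_q.put((event_id, image))
for _ in threads:
    work_q.put(None)
for t in threads:
    t.join()

# Match sort decisions back to events regardless of completion order
decisions = dict(result_q.get() for _ in range(4))
```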
- the IBCS application 6400 may include separate sub-processes for each imaging device.
- Each camera sub-process may execute its own event handler and image capture threads.
- a separate image capture thread of the image capture process may handle synchronizing images acquired from the different imaging devices and sending them to the cell sort worker processes. For example, the image capture thread may send a first image from a first imaging device to a first cell sort worker process, or distribute the first image to a first plurality of cell sort worker processes. Similarly, the image capture thread may send a second image from a second imaging device to a second cell sort worker process, or distribute the second image to a second plurality of cell sort worker processes, and so on.
- the image capture thread may send a brightfield image and a corresponding fluorescence image to a single cell sort worker process (e.g., for dual-channel processing).
- FIG. 65 is a schematic diagram illustrating the operation of an example ETS 180, digital delay line (DDL) 6510, and camera triggers for performing multi-gated fluorescence exposure imaging, according to embodiments of the present disclosure.
- the system may include a camera configured for high-speed imaging, including of low-intensity signals (e.g., fluorescence emitted by particles suspended in fluid flowing through the microfluidic device).
- High-speed imaging may be beneficial in a microfluidic system in which particles move across the field of view of the imaging device(s).
- Image captures with a short exposure time may allow for imaging of fast-moving particles with reduced blur and/or may enable the capture of multiple images of the particle at different positions within the field of view.
- Such a camera may include a high-speed complementary metal-oxide semiconductor (CMOS) image sensor combined with a multi-channel plate (MCP) electron multiplier.
- Light may enter the camera via a photocathode configured to generate photoelectrons, which may be amplified (e.g., multiplied) by the MCP before striking a phosphor screen. Electrons striking the phosphor screen cause it to emit photons, which are detected by the CMOS image sensor and converted to image data.
- the MCP of the camera may be used to achieve extremely short exposure times; for example, on the order of tens of nanoseconds, which is considerably faster than exposure times for a conventional CMOS shutter, which is on the order of several microseconds.
- An MCP works by creating a large voltage potential across an insulator arrayed with holes (e.g., channels) aligned with the electric field created by the voltage potential.
- An electron traveling through a channel may strike an inside surface of a channel, stimulating the release of more electrons, which may themselves strike the surface and generate yet more electrons.
- the resulting electron avalanche may include hundreds to hundreds of thousands of output electrons (e.g., a gain of up to and potentially exceeding 100,000x).
- the voltage potential accelerates the electrons through and out of the channel where they strike the phosphor screen to stimulate emission of photons.
- the corresponding voltage potential created by the MCP may be on the order of hundreds or thousands of volts.
- the output of the MCP rapidly reduces to a low (e.g., negligible) value.
- the voltage potential may be gated, allowing the MCP to be used in a manner analogous to a very fast shutter to reduce the exposure time of the camera and/or increase the frequency of exposures.
- the system may gate the MCP to reduce the minimum exposure time to ~40 ns or some other exposure time at least one order of magnitude less than is possible with a conventional CMOS global shutter.
- the system may include a DDL 6510 configured to send control signals to the cameras.
- the DDL 6510 may include logic (e.g., a microcontroller or microprocessor) configured to receive from the ETS 180 an event signal 6505, represented by the event signal waveform 6515, and generate various camera triggers.
- the DDL 6510 may generate a brightfield camera trigger 6520 represented by the waveform 6525, a fluorescence camera trigger 6530 represented by the waveform 6535, and a fluorescence gate trigger 6540 represented by the waveform 6545.
- the brightfield camera trigger 6520 may control the timing of brightfield image acquisition by a first imaging device.
- the fluorescence camera trigger 6530 may control timing of the CMOS global shutter of the second imaging device.
- the CMOS global shutter may be held open for the duration of the sequence of multi-gate exposures, as shown by the waveform 6535.
- the fluorescence gate trigger 6540 may control gating of the MCP of the second imaging device to capture a single image, and/or an image consisting of multiple sub-images, with brief exposures as shown by the multiple shorter pulses of the waveform 6545 during the prolonged pulse of the waveform 6535.
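The gating sequence (one long global-shutter frame containing several short MCP gates) can be described by a simple timing schedule; the function name and the parameter values below are illustrative:

```python
def gate_schedule(frame_delay_us, n_gates, gate_width_us, gate_period_us):
    """Open/close timestamps (in microseconds, relative to the ETS event)
    for each short MCP gate within one long CMOS global-shutter frame."""
    return [(frame_delay_us + i * gate_period_us,
             frame_delay_us + i * gate_period_us + gate_width_us)
            for i in range(n_gates)]

# e.g., an 11-gate sequence of 2 us gates spaced 150 us apart, starting
# 10 us after the ETS event (assumed values for illustration)
gates = gate_schedule(frame_delay_us=10.0, n_gates=11,
                      gate_width_us=2.0, gate_period_us=150.0)
```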
- a delay time of the DDL 6510 from receipt of the ETS signal 6505 to the start of the frame indicated by the arrow 6550 may be configured to shift a position of the particle in the image and/or sub-images.
- the waveforms 6515 through 6545 are provided as illustrative examples only, are not necessarily drawn to scale, and in practice some or all of the triggers may use reverse logic (e.g., shutter open when waveform is at zero, and closed when waveform is at potential).
- FIG. 66 is a diagram illustrating particle flow as captured using multi-gated imaging, according to embodiments of the present disclosure.
- the system may be configured to use the trigger scheme illustrated in FIG. 65 to acquire multiple exposures of a particle moving across a field of view of one or more cameras configured to capture images of the first section 136 of the main channel 130.
- the particle may be located at an upstream end of the first section 136.
- a first gated exposure of the second imaging device may capture a first sub-image 6610 of the particle in that position.
- the particle may be located in the middle of the first section 136.
- a second gated exposure of the second imaging device may capture a second sub-image 6620 of the particle in that position.
- the particle may be located at a downstream end of the first section 136.
- a third gated exposure of the second imaging device may capture a third sub-image 6630 of the particle in that position.
- the final image 6640 may include the three sub-images of the particle combined into a single image.
- although the example image 6640 shown in FIG. 66 includes three sub-images, in various implementations, the system may be configured to capture any number of sub-images depending on particle velocity, particle size, distance between particles, and the required exposure time.
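The number of sub-images per frame is bounded by how long the particle stays in the field of view relative to the gate period. A minimal sketch of that bound, with hypothetical geometry (the field-of-view length and velocity below are illustrative only):

```python
def max_subimages(fov_length_um, velocity_um_per_s, gate_period_us):
    """Upper bound on gated sub-images of one particle while it remains
    in the field of view (first gate fires on entry)."""
    transit_us = fov_length_um / velocity_um_per_s * 1e6
    # small epsilon guards against float rounding at exact multiples
    return int((transit_us + 1e-6) // gate_period_us) + 1

# Hypothetical: 450 um field of view, 30 cm/s flow, 150 us gate period.
print(max_subimages(450, 3e5, 150))  # 11 sub-images
```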
- FIG. 67 shows a brightfield image 6710 and a fluorescence image 6720 representing a sequence of particle sub-images captured using multi-gated fluorescence exposure imaging, according to embodiments of the present disclosure.
- the particle shown in the brightfield image 6710 and fluorescence image 6720 is a 15 µm fluorescent microbead.
- the fluorescence image 6720 was acquired using an 11-shot gated-exposure-per-frame sequence with a 2 µs gate width (e.g., corresponding to the CMOS global shutter) and a 150 µs delay between gated exposures.
- the fluorescence image 6720 represents eleven sub-images of the microbead as it transits from left to right through the first section 136 of the microfluidic device.
- the multi-gated exposure technique may be modified and/or expanded to add more imaging capabilities and flexibility to the system 100.
- the technique may be used to capture multiple representations of the same particle.
- Sub-images may have the same or different exposure time (e.g., as illustrated below with reference to FIGS. 68 and 69).
- Sub-images may be captured using the same or different color channels, allowing the system to capture different colors at different times (e.g., as illustrated below with reference to FIGS. 70 and 71) rather than employing filters or other optics that may attenuate a fluorescence signal.
- the system may be configured to rotate a particle during transit (e.g., between gated exposures) using shear forces applied via controlled fluid introduction into the microfluidic device (e.g., asymmetric introduction of a sheath fluid), allowing the system to capture multiple representations of the same particle to further characterize aspects (e.g., three-dimensional aspects) of the particle, which may be helpful for sorting. For instance, capturing multiple representations of the same particle having different rotations may enable constructing a three-dimensional representation of a particle, capturing volumetric information about the particle, and/or determining sub-cellular localization of a signal associated with the particle.
- the system may be configured to add a tilted light-sheet excitation to capture sub-images corresponding to multiple z-positions (e.g., slices) within the first section 136 and/or the particle itself.
- the multiple exposures may be processed in various ways to improve image quality and/or the accuracy of sorting decisions based on the images.
- sub-images may be combined to increase SNR for images in which the particle has a weak signal.
- Multiple exposure times may be used to capture sub-images of different intensity to, for example, increase the dynamic range of the imaging and/or sorting systems without reconfiguring image acquisition parameters, such as exposure time or excitation illumination intensity, between events.
- Sub-images may be acquired using different color channels to acquire multi-color fluorescence images, where different colors may correspond to a same label or different labels.
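The SNR benefit of combining sub-images follows from signal adding linearly while uncorrelated noise averages out. A minimal numpy sketch with synthetic data (the particle, noise level, and sub-image count are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.zeros((16, 16))
signal[6:10, 6:10] = 10.0  # synthetic dim particle

# Nine aligned sub-images of the same particle, each with independent noise.
subs = [signal + rng.normal(0.0, 5.0, signal.shape) for _ in range(9)]

# Averaging preserves the signal while shrinking noise by ~1/sqrt(9).
combined = np.mean(subs, axis=0)

noise_single = np.std(subs[0] - signal)
noise_combined = np.std(combined - signal)
ratio = noise_single / noise_combined
print(ratio)  # close to sqrt(9) = 3
```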
- FIG. 68 is a schematic diagram illustrating the operation of an example ETS 180, DDL 6510, and camera triggers performing multi-gated fluorescence exposure imaging with variable gated exposure time, according to embodiments of the present disclosure.
- the example multigate operation shown in FIG. 68 is similar to the example operation illustrated in FIG. 65, except that the fluorescence gate trigger 6840 (e.g., corresponding to the MCP gate activation) has a different waveform 6845.
- the fluorescence gate trigger 6840 may gate the MCP for a different duration of time with each pulse, resulting in the capture of sub-images having different exposure times.
- although the waveform 6845 shows MCP gate pulses of increasing duration, other configurations of the waveform may be used, including decreasing duration, rising then falling durations, falling then rising durations, etc.
- FIG. 69 shows a brightfield image 6910 and a fluorescence image 6920 representing a sequence of particle sub-images captured using multi-gated fluorescence exposure imaging with variable gated exposure time, according to embodiments of the present disclosure.
- the fluorescence image 6920 may correspond to a multi-exposure acquisition similar to the one represented by the example waveform 6845.
- the particle may be a 15 µm fluorescent microbead and the imaging device 160 may acquire eleven sub-images.
- the fluorescence image 6920 represents variable-time gated exposures.
- the successive sub-images from left to right increase in intensity as the exposure time for a particular sub-image capture increases.
- the example fluorescence image 6920 was captured using MCP gate times (exposure times) corresponding to 400ns, 600ns, 800ns, 1000ns, 1200ns, 1500ns, 1800ns, 2000ns, 2500ns, 3000ns, and 4000ns.
- fluid flow may be on the order of 30 cm/s; thus, above a 4 µs exposure, image quality may begin to degrade and sorting accuracy may be affected.
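The exposure limit above can be sanity-checked as motion blur, i.e., the distance a particle travels during one gated exposure. A one-line sketch using the figures from the text:

```python
def motion_blur_um(velocity_cm_per_s, exposure_us):
    """Distance (um) a particle travels during one gated exposure."""
    # cm/s -> um/s is *1e4; exposure in us -> s is *1e-6
    return velocity_cm_per_s * 1e4 * exposure_us * 1e-6

print(motion_blur_um(30, 4))  # ~1.2 um of smear at 30 cm/s with a 4 us gate
```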
- FIG. 70 is a schematic diagram illustrating the operation of an example ETS 180, DDL 6510, and camera triggers performing multi-color channel fluorescence imaging, according to embodiments of the present disclosure.
- the DDL 6510 may additionally be configured to generate one or more excitation triggers (e.g., Excitation 1, Excitation 2, etc.).
- An excitation trigger may correspond to an excitation source of a particular wavelength (e.g., 480 nm, 670 nm, 960 nm, etc.).
- the excitation source controlled by the excitation trigger may include, for example, a modulated laser, an electro-optic modulator (EOM), an acousto-optic modulator (AOM), a microelectromechanical system (MEMS) mirror, etc.
- the excitation trigger may control the generation, modulation, and/or direction of excitation illumination to illuminate the particle during the duration of time in which excitation is triggered by the DDL 6510.
- the system may include more or fewer excitation sources.
- the excitation sources may correspond to different wavelengths, overlapping wavelengths, and/or the same wavelengths.
- the excitation sources may be broad and/or narrow in bandwidth.
- the waveforms 6515 through 6535 may be similar to or the same as previously described.
- the DDL 6510 may generate a fluorescence gate trigger having a waveform 7045, in which the exposure time of the sub-image capture overlaps with a corresponding excitation pulse.
- the DDL 6510 may generate a first excitation pulse (e.g., shown in the waveform 7055) that corresponds temporally to the first excitation trigger and a second excitation pulse (e.g., shown in the waveform 7065) that corresponds temporally to the second excitation trigger.
- the acquired image may include two sub-images of the particle, each in a different position in the field of view and representing a different color fluorescence (e.g., representing a different label).
- the sub-images may be reproduced in representative and/or false colors by the video display process of the IBCS app 6400.
- FIG. 71 is a schematic diagram illustrating an example control scheme for combinations of multi-color imaging, variable exposure time, and multiple images per channel, according to embodiments of the present disclosure.
- the triggering sequence shown in FIG. 71 may generate sub-images corresponding to two different excitation sources (e.g., represented by the Excitation 1 trigger and the Excitation 2 trigger).
- the first two pulses of the fluorescence gate trigger waveform 7145 correspond to two different exposure times during excitation using the first excitation source (e.g., represented by the waveform 7155).
- the third pulse of the fluorescence gate trigger waveform 7145 corresponds to a longer duration exposure during excitation using the second excitation source (e.g., represented by the waveform 7165).
- the fourth pulse of the fluorescence gate trigger waveform 7145 corresponds to a very short duration exposure during excitation using the first excitation source.
- the resulting image may include four sub-images of the particle, each in a different position in the field of view and representing different intensities and colors corresponding to the first and second excitation sources, respectively.
- the example imaging operations described above with reference to FIGS. 65 through 71 are provided as examples and are not intended to be limiting with regard to timing, the number of excitation channels, the number of sub-images, or number of sub-images per channel. Various other imaging parameters and system configurations may be employed depending on the particular application.
- FIG. 72 is a schematic diagram of a first example microfluidic device 7200 for sorting particles suspended in a fluid, according to embodiments of the present disclosure.
- the microfluidic device 7200 includes a flow focusing system and fluidic sorting system.
- the microfluidic device 7200 may be similar to the microfluidic devices previously described with reference to FIGS. 3, 6, 9, 19, etc.
- the microfluidic device 7200 may include a sample inlet 242, a main channel 130, a bifurcation 134, a first destination channel 110 leading to a first destination reservoir 112 (e.g., a waste reservoir), and a second destination channel 120 leading to a second destination reservoir 122 (e.g., a target reservoir for collecting target particles).
- the microfluidic device 7200 may include a flow focusing system.
- the flow focusing system may include features for introducing flows of fluid around the sample fluid. This so-called “sheath fluid” may surround the sample fluid, promote laminar flow, and confine particles to a central region of the main channel 130 to, for example, focus particles to flow within the field of view and/or in-focus depth of the imaging device 160.
- the flow focusing system may include a central sheath and/or lateral sheaths.
- the lateral sheaths may focus flow of the sample fluid, and thus the particle suspended therein, in a central region of the main channel 130 with respect to the Y-axis, or vertical axis, as observed by the imaging device 160.
- the central sheaths may focus flow of the sample fluid in a central region of the main channel with respect to the Z-axis, perpendicular to the Y-axis and the X-axis, which may correspond to the direction of fluid flow.
- the central sheath may be created by introducing fluid (e.g., a so-called “sheath fluid”) into the first central sheath inlet 244 and the second central sheath inlet 246, where the first and second central sheath inlets are disposed on the sample channel 230 on either side of the sample inlet 242.
- the lateral sheath may be created by introducing fluid into a first lateral sheath inlet 7240 and a second lateral sheath inlet 7242.
- the sheath fluid may travel through the first lateral sheath channel 210 and the second lateral sheath channel 220 before joining the sample fluid at the inlet 132 of the main channel 130.
- the first lateral sheath channel 210, second lateral sheath channel 220, and sample channel 230 may be co-planar on the microfluidic device 7200, with the sample channel 230 arranged between the first lateral sheath channel 210 and the second lateral sheath channel 220.
- the system may be configured to introduce the sheath fluid asymmetrically to, for example, create a shear force in the main channel 130 that may rotate a particle moving therethrough.
- the system may introduce the sheath fluid into the first central sheath inlet 244 at a faster rate than into the second central sheath inlet 246.
- the system may introduce the sheath fluid into the first lateral sheath inlet 7240 at a faster rate than into the second lateral sheath inlet 7242.
- a sample source 140 may introduce a sample fluid into the sample inlet 242 of the microfluidic device 7200.
- the sample fluid may consist of particles suspended in a fluid.
- the sample fluid may flow into the inlet 132 of the main channel 130.
- the ETS 180 (not shown) may detect a particle as it enters the main channel 130 (or shortly before/after).
- the imaging device 160 may capture one or more images and/or sub-images of the particle as it passes through the first section 136 of the main channel 130.
- the system may process the image(s) and make a sorting decision (e.g., whether to route the particle to the waste reservoir or target reservoir).
- the system may either allow the particle to flow through the first destination channel 110 to the first destination reservoir 112 (e.g., if the particle is to be discarded) or actuate a sorting mechanism to direct the particle into the second destination channel 120 (e.g., by temporarily diverting flow in the main channel) to collect the particle in the second destination reservoir 122.
- the sorting mechanism may be a fluidic sorting system.
- the fluidic sorting system may include a pump 174 (e.g., as shown in FIG. 19) for introducing a controlled volume of sorting fluid into a sorting inlet 7274.
- the sorting fluid may travel through the sorting channel 176 and divert the flow of fluid through the main channel 130 from the first destination channel 110 to the second destination channel 120.
- Throughput of the microfluidic device 7200 may depend on several factors including how quickly, upon actuation, the fluidic sorting system can divert the flow of fluid to the second destination channel 120 and how quickly, upon deactivation of the fluidic sorting system, the flow of fluid returns to the first destination channel 110 as the default destination for particles flowing in main channel 130. If a particle to be collected arrives at the sorting region (e.g., a region just upstream of the bifurcation 134) before the fluidic sorting system has diverted the flow, the particle may inadvertently flow into the first destination channel 110, resulting in a false negative.
- if a second particle (e.g., to be discarded) arrives at the sorting region before the flow has had time to return to the first destination channel 110, the second particle may inadvertently flow into the second destination channel 120, potentially resulting in a false positive.
- the fluid sorting mechanism may be able to divert flow to the second destination channel 120 (e.g., to collect a particle) more quickly than the flow returns on its own to the first destination channel 110 when the fluid sorting mechanism is deactivated.
- FIG. 73 is a schematic diagram of a second example microfluidic device 7300 in which the fluidic sorting system includes the first sorting channel 176 and a second sorting channel 7376, according to embodiments of the present disclosure.
- the second sorting channel 7376 may be associated with a second sorting inlet 7374.
- the second sorting inlet 7374 may couple to the main channel 130 directly and without the second sorting channel 7376.
- the fluidic sorting system may operate in a push-push manner (e.g., by introducing sorting fluid through the first sorting channel 176 to collect a particle and introducing a sorting fluid through the second sorting channel 7376 to return the flow to the first destination channel 110). Timing of respective introduction of the sorting fluid through the first sorting inlet 7274 and second sorting inlet 7374 may be configured to minimize the transition time between redirecting and returning the flow of fluid.
- the fluidic sorting system may operate in a push-pull manner.
- the microfluidic device 7300 may include the second sorting channel 7376 without a corresponding second sorting inlet 7374.
- the microfluidic device 7300 may include the second sorting channel 7376 and the second sorting inlet 7374, but without a dedicated pump or sorting fluid source coupled to the second sorting inlet 7374.
- the system may not introduce sorting fluid through the second sorting channel 7376. Rather, the presence of the second sorting channel 7376 and/or the second sorting inlet 7374 alone may cause fluid flow to return to the first destination channel 110 more quickly.
- the push-pull configuration may reduce the separation resolution (e.g., between sort and non-sort events), for example, from 5ms to 1ms, which may substantially increase the sorting rate of the system.
- the sorting rate may be increased from approximately 33 sorts per second without the second sorting inlet/channel to 200 sorts per second or more when the second sorting inlet/channel is present, thereby increasing the maximum event rate from 200 events per second to 1000 events per second.
- additional sorting channels, inlets, and/or sources may be added to improve the sort rate and/or event rates further and/or to provide for sorting into more than two destination channels.
- FIG. 74 is a diagram illustrating the operation of an example fluidic sorting system, according to embodiments of the present disclosure.
- the example fluidic sorting system shown in FIG. 74 is configured to implement a push-pull sorting technique; that is, the system may be configured to selectively introduce sorting fluid into the first sorting channel 176, but not the second sorting channel 7376.
- the fluidic sorting system may include a piezoelectric actuator 7410 acting as a pump and coupled to a syringe 7420 filled with a sorting fluid.
- Actuation of the piezoelectric actuator 7410 may exert pressure on the plunger of the syringe 7420, displacing the plunger and thus introducing the sorting fluid into the first sorting channel 176 (e.g., by way of the first sorting inlet 7274).
- at rest, the piezoelectric actuator 7410 does not exert a pressure on the syringe 7420 and no sorting fluid is introduced into the first sorting channel 176.
- Sample and/or sheath fluid flowing through the main channel 130 continues to flow through the first destination channel 110 and into the first destination reservoir 112.
- the ETS 180 and/or control system 150 may generate a trigger pulse 7485.
- the trigger pulse 7485 may cause a function generator 7490 to generate a piezoelectric signal 7495.
- the piezoelectric signal 7495 may cause actuation of the piezoelectric actuator 7410 such that it exerts a pressure on the plunger of the syringe 7420, causing the syringe to introduce the sorting fluid into the main channel 130 via the first sorting channel 176.
- the introduction of sorting fluid redirects the flow of fluid in the main channel 130 from the first destination channel 110 to the second destination channel 120 and eventually to the second destination reservoir 122 where particles of interest may be collected.
- when the trigger pulse 7485 ceases, the piezoelectric signal 7495 returns to a low state, halting the introduction of sorting fluid. This allows the flow through the main channel 130 to return to the first destination channel 110.
- although the system does not introduce sorting fluid into the main channel 130 via the second sorting channel 7376 to actively push the fluid flow from the main channel 130 back into the first destination channel 110, the presence of the second sorting channel 7376 may cause the flow through the main channel 130 to return to the first destination channel 110 more quickly than would occur in the absence of the second sorting channel 7376.
- FIG. 75 is a timing diagram illustrating the operation of an example fluidic sorting system, according to embodiments of the present disclosure.
- the timing diagram includes a graph 7510 representing the piezoelectric signal 7495, a graph 7520 representing a sorting profile of particles transiting the microfluidic device, and timing profile 7530 representing the different stages of a sorting operation.
- a sort value of 1 represents a particle collection
- a sort value of 0 represents a non-sort or discarding of a particle.
- the control system 150 may initiate an actuation event, and the function generator 7490 generates a piezoelectric signal 7495 to transition from a low value to a high value as shown in the graph 7510.
- the piezoelectric signal 7495 causes the piezoelectric actuator 7410 to exert a pressure on the syringe 7420, causing the syringe 7420 to introduce the sorting fluid.
- the graph 7520 shows, however, that particles are not sorted into the second destination channel 120 immediately upon actuating the piezoelectric actuator 7410; rather, collection begins after a delay of approximately 1.5ms. Similarly, collection continues for a period of time following deactivation of the piezoelectric actuator 7410 during which particles may continue to be collected in the target destination channel. Following an actuation, the piezoelectric actuator 7410 may require a recovery period before it may be actuated again, during which the system may not be able to collect particles.
- the delay between actuation and collection may be approximately 1.5ms
- the collection window may be approximately 1ms
- the recovery period may be approximately 2.5ms.
- the total duration may be about 5ms, thus allowing for about 200 sort operations per second.
- These parameters may also be used to determine a particle spacing and flow rate that may lead to reliable collection of desired particles and discarding of particles other than the desired particles. For example, if the collection window is 1ms, a second particle following a particle to be collected by less than 1ms may be inadvertently collected.
- a second particle following a particle to be collected by more than 1ms but less than 5ms may be discarded even if the system determines that the second particle corresponds to the desired phenotype, due to the recovery period of the piezoelectric actuator 7410.
- particles may be introduced at a rate at which they may be sorted or discarded with an accuracy that corresponds to the desired purity (e.g., avoiding false positives that may potentially contaminate the collection) and/or efficiency (e.g., avoiding false negatives that may potentially discard desired particles).
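The timing budget above implies simple rules for the fate of a closely following particle. The sketch below encodes the example numbers (1.5 ms delay, 1 ms collection window, 2.5 ms recovery); they are illustrative values from the discussion, not limits of the disclosure.

```python
# Example timing budget from the discussion above, in milliseconds.
DELAY, WINDOW, RECOVERY = 1.5, 1.0, 2.5
CYCLE = DELAY + WINDOW + RECOVERY  # 5 ms -> about 200 sort operations per second

def follower_outcome(gap_ms, follower_is_target):
    """Fate of a second particle arriving `gap_ms` after a sorted particle."""
    if gap_ms < WINDOW:
        return "collected"  # still inside the collection window (false-positive risk)
    if gap_ms < CYCLE:
        return "discarded"  # actuator recovering; even a target is lost (false negative)
    return "collected" if follower_is_target else "discarded"

print(1000.0 / CYCLE)                # 200.0 sorts per second
print(follower_outcome(0.5, False))  # collected
print(follower_outcome(3.0, True))   # discarded
```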
- FIG. 76 is a schematic diagram illustrating a transformation matrix H 7620 for mapping between two views of the same object 7600, according to embodiments of the present disclosure.
- the imaging device 160 may include two imaging devices 160a and 160b.
- the first imaging device 160a may be a camera configured for brightfield imaging and the second imaging device 160b may be a camera configured for fluorescence imaging.
- the second imaging device 160b may include a gated MCP configured for rapid gating as previously described.
- Using separate imaging devices 160 for different types of imaging may be beneficial because each device and illumination/excitation conditions may be independently optimized (e.g., for speed, sensitivity, magnification, etc.).
- the first imaging device 160a may be optimized for speed whereas the second imaging device 160b may be optimized for sensitivity.
- Respective images captured by the imaging devices 160 may be processed separately and/or together (e.g., as described below with reference to FIGS. 81A and 81B) to make sorting decisions.
- Images obtained from the first imaging device 160a and the second imaging device 160b may have different positions, fields of view, magnifications, pixel sizes, etc. To process the images together, they may need to be aligned or otherwise mapped to a common coordinate system.
- the system may be configured to determine a mapping between respective views of the imaging devices 160 so that a pixel-by-pixel correspondence between the images may be determined. This may allow the system to, for example, use image processing techniques to identify a first region of interest encompassing a particle in a brightfield image obtained by the first imaging device 160a, and use the mapping to determine a second region of interest that encompasses the particle in a fluorescence image obtained by the second imaging device 160b.
- the first imaging device 160a may have a first view of an object 7600 and the second imaging device 160b may have a second view of the object 7600.
- the system may identify first keypoints and descriptors 7610a corresponding to the first imaging device 160a and second keypoints and descriptors 7610b corresponding to the second imaging device 160b.
- the system may use the keypoints and descriptors 7610a and 7610b to calculate a homography transformation matrix H 7620, or “WarpMatrix”.
- the system may use the transformation matrix H 7620 to map features of the object 7600 represented in an image from the first imaging device 160a to features of the object 7600 represented in an image from the second imaging device 160b, and vice-versa.
- the homography transformation matrix H 7620 may take into consideration both temporal and spatial information associated with the image captures and imaging devices. Thus, features may be correlated between images from different imaging devices 160, even if a particle has moved between capture of respective images.
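Applying the homography to a pixel coordinate is a matrix multiply followed by a projective divide. A minimal numpy sketch (the matrix values are invented for illustration; a real H would come from the calibration protocol described below):

```python
import numpy as np

def map_point(H, xy):
    """Map a pixel coordinate through a 3x3 homography matrix."""
    x, y, w = H @ np.array([xy[0], xy[1], 1.0])
    return np.array([x / w, y / w])  # projective divide

# Hypothetical H: scale by 2 with a (10, 5) translation between views.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0, 5.0],
              [0.0, 0.0, 1.0]])
print(map_point(H, (3.0, 4.0)))  # [16. 13.]
```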
- FIG. 77 shows a brightfield image 7700 and a fluorescence image 7710 of a calibration device having a constellation of fluorescent microbeads.
- An example calibration device may be a transparent or partially transparent slide having dimensions similar to the example microfluidic device described herein.
- the slide may include randomly distributed fluorescent microbeads disposed on its surface and/or embedded within it.
- the fluorescent microbeads may have a diameter of 4 µm. Random distribution of the fluorescent microbeads may prevent the system from determining an incorrect correspondence between beads of a separate but repeating pattern.
- the portion of the image captured by the second imaging device 160b that corresponds to the field of view of the image captured by the first imaging device 160a is highlighted in the fluorescence image 7710.
- the system may use positions of the beads in each image as keypoints for determining the homography transformation matrix H 7620.
- FIG. 78 is a flow diagram of an example homography calibration protocol 7800, according to embodiments of the present disclosure.
- the system may obtain (7810) a first image and a second image using the respective imaging devices (e.g., the imaging devices 160a and 160b), and determine (7820) the positions of the beads in each image.
- the system may use Hough circle detection to determine the centers and radii of the beads in the images.
- the system may use the bead positions as keypoints, and determine corresponding descriptors.
- the system may use a scale-invariant feature transform (SIFT) algorithm to generate the descriptors.
- the system may take the bead positions (keypoints) of the respective images and leverage local statistics of the regions around the keypoints in each image to programmatically generate informative local descriptors.
- the system may use a fast library for approximate nearest neighbors (FLANN) algorithm to filter out low-quality matches between respective keypoints and descriptors.
- the system may use the filtered keypoints and descriptors to compute (7830) the transformation matrix H.
- once the transformation matrix H is computed, the system may use it to transform the size, shape, and orientation of a brightfield image of an object such that it can overlay a corresponding fluorescence image of the object with alignment between features represented in both images.
- the system can use a bounding box defining a region of interest in an image from the first imaging device to derive a corresponding bounding box in an image from the second imaging device (e.g., as illustrated in FIG. 80).
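One standard way to compute H from matched keypoints, such as the filtered bead positions, is the direct linear transform (DLT). The numpy-only sketch below is illustrative; a production system would more likely use a robust estimator (e.g., RANSAC-based homography fitting) on the FLANN-filtered matches.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: solve for H such that dst ~ H @ src
    (homogeneous). `src`, `dst` are (N, 2) matched keypoints, N >= 4."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # the null vector of the constraint matrix gives H up to scale
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix scale so H[2, 2] == 1

# Synthetic check: recover a known transform from invented bead positions.
true_H = np.array([[1.5, 0.0, 20.0], [0.0, 1.5, -7.0], [0.0, 0.0, 1.0]])
src = np.array([[0, 0], [50, 3], [8, 60], [45, 55], [30, 20]], dtype=float)
dst = np.array([(true_H @ np.append(p, 1.0))[:2] for p in src])
H_est = estimate_homography(src, dst)
print(np.allclose(H_est, true_H, atol=1e-6))  # True
```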
- the calibration device may be replaced (7840) with a microfluidic device.
- the position of the microfluidic device with reference to the imaging devices may not perfectly correspond to the position of the calibration device used to obtain the transformation matrix H.
- the system may be configured to execute (7850) an auto-calibration protocol to correct for an X-Y shift of keypoints, generate a global offset, and/or correct the transformation matrix H, if needed.
- the auto-calibration protocol may include a pre-run of several hundred to several thousand brightfield and fluorescence images from which the system can estimate the global offset values.
- the system may be able to obtain this number of image pairs of particles flowing through the microfluidic device and calculate the global offsets over a brief amount of time (e.g., one or a few minutes).
- the system may obtain an image pair of a particle, and determine a center of mass or centroid of the particle in each image.
- the system may calculate an X and Y offset between the particle center as represented in each image of the image pair.
- the system may repeat the offset calculation for additional particles represented in additional image pairs, and calculate mean X and Y offsets.
- the system may then use the mean X and Y offsets to adjust the transformation matrix H and improve the centering of particles in the fluorescence image bounding box derived from the brightfield image.
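The offset refinement described above reduces to averaging centroid differences and folding them into the translation terms of H. A minimal sketch with invented centroid pairs (the coordinates are illustrative only):

```python
import numpy as np

def mean_offsets(bf_centroids, fl_centroids):
    """Mean X/Y offset between particle centroids in paired images."""
    d = np.asarray(fl_centroids, float) - np.asarray(bf_centroids, float)
    return d.mean(axis=0)

def apply_offset_to_H(H, dx, dy):
    """Fold a global translation into the homography."""
    T = np.array([[1.0, 0.0, dx],
                  [0.0, 1.0, dy],
                  [0.0, 0.0, 1.0]])
    return T @ H

bf = [(10.0, 10.0), (40.0, 22.0), (75.0, 31.0)]
fl = [(12.0, 9.0), (42.5, 21.0), (77.5, 30.0)]  # systematically shifted pairs
dx, dy = mean_offsets(bf, fl)
H_adj = apply_offset_to_H(np.eye(3), dx, dy)
print(dx, dy)  # ~2.33, -1.0
```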
- the system may be calibrated such that it may detect (7860) a particle in a brightfield image, determine a region of interest corresponding to the particle, and apply the updated transformation matrix H to extract (7860) the particle from the corresponding fluorescence image.
- the system may refine and/or update the global offsets as further image pairs are acquired during normal use of the system.
- FIG. 79 is a diagram illustrating an example process for defining a region of interest around a particle represented in a brightfield image, according to embodiments of the present disclosure.
- the system may use the transformation matrix H 7620 and/or global offsets calculated using the protocol 7800 to derive the corresponding region of interest in a fluorescence image.
- the system may obtain a brightfield image 7910 and process it to detect edges as shown in the image 7920.
- the system may use Canny edge detection to generate the image 7920.
- the system may dilate the detected edges to generate the dilated image 7930.
- the system may define the region of interest as a square or rectangle around the image moment (e.g., centroid) of the dilated edges in the dilated image 7930, as shown in the image 7940.
- the region of interest may vary based on particle size. In some implementations, dimensions of the region of interest may be defined by a user.
- a larger bounding box may allow for some misalignment between the brightfield and fluorescence images without inadvertently cropping out features of the particle.
- the system may then crop the original brightfield image 7910 according to the region of interest to generate the cropped brightfield image 7950.
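The edge-detect / dilate / bounding-box sequence can be approximated without OpenCV using a gradient threshold. This numpy-only stand-in (Canny edge detection and proper morphological dilation would be used in practice) illustrates the flow from image to cropped region of interest:

```python
import numpy as np

def particle_roi(img, grad_thresh=1.0, margin=6):
    """Crude stand-in for edge detection + dilation + bounding box:
    threshold the gradient magnitude, then box the result with a margin."""
    gy, gx = np.gradient(img.astype(float))
    ys, xs = np.nonzero(np.hypot(gx, gy) > grad_thresh)
    if ys.size == 0:
        return None  # no edges found
    # the margin plays the role of dilation: it tolerates slight misalignment
    y0 = max(int(ys.min()) - margin, 0)
    x0 = max(int(xs.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin, img.shape[0] - 1)
    x1 = min(int(xs.max()) + margin, img.shape[1] - 1)
    return (y0, x0, y1, x1)

img = np.zeros((64, 64))
img[28:36, 30:38] = 50.0  # synthetic bright particle
roi = particle_roi(img)
crop = img[roi[0]:roi[2] + 1, roi[1]:roi[3] + 1]  # cropped image per the ROI
print(roi)
```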
- FIG. 80 shows a first region of interest defined in a brightfield image 8010 translated to a second region of interest in a fluorescence image 8020 using the transformation matrix H 7620, according to embodiments of the present disclosure.
- Each image 8010 and 8020 may be cropped according to the region of interest to determine a cropped brightfield image 8015 and a cropped fluorescence image 8025.
- the cropped brightfield image 8015 and cropped fluorescence image 8025 may be processed using, for example, the machine-learning models illustrated in FIGS. 81A and 81B as described below, to inform a sorting decision of the system.
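Translating a region of interest between the two images using the transformation matrix H can be sketched as below, assuming H is a 3×3 homography; the helper names are hypothetical, and the transformed corners are re-boxed into an axis-aligned rectangle.

```python
# Hypothetical sketch: map an ROI from the brightfield frame into the
# fluorescence frame via a 3x3 homography H (nested lists).
def apply_homography(H, pt):
    x, y = pt
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return ((H[0][0]*x + H[0][1]*y + H[0][2]) / w,
            (H[1][0]*x + H[1][1]*y + H[1][2]) / w)

def translate_roi(H, roi):
    """Transform the four ROI corners and re-box them axis-aligned."""
    x0, y0, x1, y1 = roi
    corners = [apply_homography(H, p)
               for p in [(x0, y0), (x1, y0), (x0, y1), (x1, y1)]]
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    return (min(xs), min(ys), max(xs), max(ys))

# Pure-translation homography: shift by (+5, -3).
H = [[1, 0, 5], [0, 1, -3], [0, 0, 1]]
roi_fl = translate_roi(H, (10, 10, 20, 20))
```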
- FIG. 81A is a schematic diagram illustrating a first dual-channel machine-learning model architecture 8100 and FIG. 81B is a schematic diagram illustrating a second dual-path machine-learning model architecture 8150, according to embodiments of the present disclosure.
- the model architectures may include a CNN and a fully connected layer (e.g., as previously described with reference to the model architectures shown in FIGS. 38, 42, and 52B); however, the machine-learning model architectures 8100 and 8150 may be configured for dual-channel and/or multi-channel imaging.
- the dual-channel architecture 8100 may be configured to receive multiple channels of image data representing, for example, a brightfield image 8101 and a fluorescence image 8102 corresponding to the same particle.
- the images may be overlaid as in, for example, processing RGB (red, green, blue) images, but with two channels rather than all three color channels.
- the dual-channel architecture 8100 may be configured to receive a third channel or additional channels of image data.
- the CNN 8110 may be trained to extract features from the image(s) through successive convolution layers and subsampling operations to refine a feature map.
- the output of the CNN 8110 may be a flattened feature map 8115 (e.g., reduced to a one-dimensional vector).
- a fully-connected layer 8140 may process the flattened feature map to calculate an embedding 8145 representing one or more characteristics (e.g., features) extracted from the brightfield image 8101 and/or fluorescence image 8102.
- the dual-path architecture 8150 may include a separate CNN 8110 for each channel of input data.
- a first CNN 8110a may process image data representing brightfield images 8101 while a second CNN 8110b may process image data representing fluorescence images 8102.
- the dual-path architecture 8150 may be configured as a multi-path model with additional CNNs 8110 for processing image data from the additional channels.
- the CNNs 8110 may output flattened feature maps 8115a and 8115b, respectively.
- the flattened feature maps 8115a and 8115b may be concatenated 8130 and processed with the fully connected layer 8140 to calculate an embedding 8145 representing one or more characteristics (e.g., features) extracted from the brightfield image 8101 and/or fluorescence image 8102.
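The concatenation 8130 and fully connected layer 8140 of the dual-path architecture can be sketched as follows. The convolutional paths are elided and replaced with stand-in flattened feature maps, and the weight values are toy numbers chosen for illustration, not trained parameters.

```python
# Hypothetical sketch of the dual-path tail: concatenate the two flattened
# feature maps, then apply one dense (fully connected) layer.
def fully_connected(weights, bias, x):
    """Dense layer: weights is an (out x in) nested list."""
    return [sum(w*v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

feat_bf = [0.2, 0.5]   # stand-in for flattened map 8115a (brightfield path)
feat_fl = [0.1, 0.9]   # stand-in for flattened map 8115b (fluorescence path)
concatenated = feat_bf + feat_fl          # concatenation 8130

W = [[1, 0, 0, 0],                        # toy 2x4 weight matrix
     [0, 0, 0, 1]]
b = [0.0, 0.0]
embedding = fully_connected(W, b, concatenated)  # embedding 8145
```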
- the machine-learning model(s) 8100 and/or 8150 may be trained on pure populations in a supervised manner to learn which features/characteristics represented in the image data are important for proper classification and to optimize model parameters for best discrimination.
- the embeddings 8145 may be similar (e.g., calculated by cosine similarity) for particles corresponding to a single phenotype and different for particles corresponding to different phenotypes.
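Cosine similarity between two embeddings is a standard computation and may be sketched as below; the example vectors are illustrative only.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x*y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x*x for x in a))
    norm_b = math.sqrt(sum(y*y for y in b))
    return dot / (norm_a * norm_b)

# Embeddings of two particles of the same phenotype score near 1;
# embeddings of different phenotypes score lower.
same = cosine_similarity([1.0, 0.0], [0.9, 0.1])
diff = cosine_similarity([1.0, 0.0], [0.0, 1.0])
```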
- the system may use the embedding 8145 to assign the particle to a group and make a collect/discard (e.g., sort, non-sort) decision.
- the system may process the embedding 8145 with a classifier to make a sorting decision.
- the system may process the embedding 8145 to determine a confidence score associated with the sorting decision. The ultimate sorting decision may be based on whether the confidence score meets a confidence threshold.
- the system may perform a principal component analysis to map embeddings in a lower-dimensional space (e.g., a 2D space).
- the system may then group similar particles together in clusters, which may correspond to different phenotypes.
- a user may configure the system to define a gate in the 2D space that corresponds to a phenotype of interest.
- the system may calculate the embedding and principal components, determine whether the principal components fall within the gate of interest, and determine whether to collect (or discard) the particle accordingly.
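Assuming principal components have been precomputed offline (e.g., during catalog analysis), projecting an embedding into the 2D space and testing it against a user-defined gate might look like the sketch below; the rectangular gate shape and all names are assumptions for illustration.

```python
# Hypothetical sketch: project an embedding onto precomputed principal
# components, then test membership in a rectangular 2D gate.
def project(embedding, components):
    """Dot the embedding against each principal-component vector."""
    return tuple(sum(e*c for e, c in zip(embedding, comp))
                 for comp in components)

def in_gate(point, gate):
    """Gate given as (x_min, y_min, x_max, y_max) in the 2D PCA space."""
    x, y = point
    x0, y0, x1, y1 = gate
    return x0 <= x <= x1 and y0 <= y <= y1

components = [[0.0, 1.0, 0.0],   # toy precomputed principal axes
              [0.0, 0.0, 1.0]]
gate = (-1.0, -1.0, 1.0, 1.0)    # hypothetical gate for a target phenotype

point = project([0.5, 0.3, -0.2], components)
decision = "collect" if in_gate(point, gate) else "discard"
```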
- FIG. 82 shows example out-of-focus cell images 8200 and example in-focus cell images 8210 that may be used to train a classifier to determine whether an image of a particle is out of focus, and to use the determination to inform a sorting decision.
- Particles that are out-of-focus may contain less morphological information relative to in-focus cells.
- a model trained on out-of-focus cell images may exhibit suboptimal performance relative to a model trained on in-focus cell images when presented with classification, representation learning, and/or sorting tasks.
- out-of-focus cells may confound classification due to the out-of-focus cells lacking discriminative details.
- filtering out-of-focus cells from the training data set used to train a classifier may improve the ability of the trained classifier to accurately classify cells, thereby increasing the purity of the collected populations when the trained classifier is used to make sorting decisions during collection.
- the system may employ machine-learning and/or classical techniques to determine whether a cell is out of focus.
- a machine-learning-based approach may involve training a machine-learning model (such as the machine-learning model(s) 8100 and/or 8150 described previously) to predict whether a particle is in focus or out of focus. Training may be performed using images such as those shown in FIG. 82, where the images 8200 represent out-of-focus cells and the images 8210 represent in-focus cells.
- a classical approach may include calculating the Laplacian of an image via convolution kernels to estimate the amount of blurring in the image. Out-of-focus images may be more blurred and thus have a lower variance relative to in-focus images.
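The Laplacian-variance blur estimate can be implemented minimally as below, using a 4-neighbour Laplacian kernel in pure Python; a practical system would likely use an optimized image-processing library, so this is illustrative only.

```python
# Variance of the 4-neighbour Laplacian over the image interior;
# lower variance suggests a more blurred (out-of-focus) image.
def laplacian_variance(img):
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

sharp = [[0] * 5 for _ in range(5)]
sharp[2][2] = 100                       # sharp point feature
blurred = [[10] * 5 for _ in range(5)]  # uniform (fully blurred) patch
```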
- Both the machine-learning and classical techniques described herein may be implemented in real-time to inform sorting decisions. For example, classification of image data as out of focus may override (and potentially precede) other processing related to the sorting decision.
- FIG. 83 shows example images of cells having different signal-to-noise ratios (SNR).
- FIG. 83 shows example images 8300 of cells exhibiting relatively low SNR and example images 8310 of cells exhibiting relatively high SNR.
- particles that emit a weak fluorescence signal may result in low-SNR images, which may confound the classification process (e.g., in heterogeneous cell populations such as transfected cell libraries).
- One situation where this can present a problem is when trying to sort out a subpopulation in which the ratio of perimeter to interior fluorescence is > 1.0, indicating enriched membrane fluorescence relative to interior fluorescence.
- the system may calculate the SNR of acquired images, and filter out images of particles exhibiting an SNR below a threshold.
- the system may use masking to determine cell and non-cell regions, and calculate the SNR as a ratio of the fluorescence signal intensity from each.
- the low-SNR filtering may be used to, for example, remove images from a training dataset prior to training a machine-learning model and/or discard particles imaged by the system 100 that exhibit low SNR.
- the system may process a brightfield image 8320 to determine a mask as shown in the image 8330.
- the mask may be a convex hull mask.
- a convex hull may represent the smallest convex set that contains the points of interest; that is, pixels corresponding to the cell.
- Convex hull masking may work better than a classical dilation-based mask for determining cell and non-cell regions of an out-of-focus image for purposes of SNR filtering.
- a Sobel filter may be used to emphasize pixels corresponding to edges in the image data, and the convex hull may be determined to contain the pixels.
- the mask may be used to define cell region 8332 from non-cell region 8334.
- the SNR may be calculated as the mean fluorescence value from the cell region 8332 divided by the mean fluorescence value of the non-cell region 8334 of the image 8330.
- the system may then employ a filter to discard images/cells having an SNR below a threshold, which may be set to, for example, 1.5 to protect against false positives/negatives.
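The SNR computation described above reduces to a ratio of mean fluorescence inside and outside the cell mask. The sketch below follows that description; the example image, mask, and the 1.5 threshold mirror the text, while the function name is hypothetical.

```python
# Mean fluorescence inside the cell mask divided by mean fluorescence
# in the non-cell (background) region.
def snr(fluor, mask):
    cell, bg = [], []
    for row_f, row_m in zip(fluor, mask):
        for v, m in zip(row_f, row_m):
            (cell if m else bg).append(v)
    return (sum(cell) / len(cell)) / (sum(bg) / len(bg))

fluor = [[10, 10, 10],
         [10, 60, 10],
         [10, 60, 10]]
mask  = [[0, 0, 0],     # 1 = cell region, 0 = non-cell region
         [0, 1, 0],
         [0, 1, 0]]

value = snr(fluor, mask)   # mean 60 inside / mean 10 outside
keep = value >= 1.5        # hypothetical low-SNR filter threshold
```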
- FIG. 84 is a schematic diagram illustrating example processing of image-derived sorting parameters 8400 and event-derived sorting parameters 8430 processed by the control system 150 to inform a sorting decision, according to embodiments of the present disclosure.
- the sorting parameters may be defined using one or more of the analysis modules shown in FIG. 84. In various implementations, different combinations of analyses/modules may be configured for different use cases.
- a final sorting decision 8490 may be based on a single assessment or a combination of assessments of modules from software and/or hardware components.
- the control system 150 may be configured to process image data received from the imaging device 160 representing brightfield images 8010 and/or fluorescence images 8020 to determine the image-derived sorting parameters 8400. Additionally, the control system 150 may process event data from the ETS 180 to determine event-derived sorting parameters 8430 (e.g., particle spacing in the microfluidic device). The control system 150 may base the sorting decision 8490 (e.g., collect or discard) on the image-derived and/or event-derived sorting parameters.
- the image-derived sorting parameters 8400 may include quality metrics 8410 and/or analysis metrics 8420.
- Quality metrics 8410 may include, for example, fluorescence image saturation detection 8412, fluorescence image low-SNR detection 8414, and/or focus assessment 8416, where focus may be assessed using the machine-learning and/or classical techniques discussed herein.
- Analysis metrics 8420 may relate to particle features and phenotyping. Determining the analysis metrics 8420 may include performing object tracking 8422 to, for example, detect a particle moment and/or determine a region of interest. Determining the analysis metrics 8420 may include executing one or more classical analysis pipelines 8424 to, for example, determine signal localization (e.g., interior vs perimeter) and/or perform masking.
- Determining the analysis metrics 8420 may include executing one or more machine-learning inference pipelines 8426 using one or more of the machine learning models described herein, including a CNN, YOLO, and/or VAE, etc. Determining the analysis metrics 8420 may include executing one or more hybrid pipelines 8428 to, for example, bootstrap feature extraction.
- the system may use classical (e.g., non-machine-learning-based) methods to classify an image. For example, the system may use Sobel gradient image computation and Hough transform circle detection to label images as corresponding to a first phenotype or a second phenotype (e.g., as described previously with reference to FIGS. 49A through 49C and 50 to detect yeast cells).
- the labeled images may be used to perform supervised training of a machine learning model such as the CNN, YOLO, and/or VAE models described herein.
- the trained machine learning model may generalize to detect more yeast in unseen images than the Hough model as illustrated in FIG. 50, in which the green circles denote yeast cells detected by the machine learning model, which are more numerous than the red circles denoting cells detected using the Hough model.
- the event-derived sorting parameters 8430 may include event timing criteria 8432.
- the event timing criteria 8432 may relate to, for example, a sorting window conflict, which may result in a particle being missorted based on its proximity to a preceding particle.
- the event timing criteria 8432 may further include active sort blocking, late sorting decision, and misaligned events, etc.
- the control system 150 may receive positive sorting decisions for two particles based on image-derived sorting parameters 8400, but the event-derived sorting parameters 8430 may indicate that the timing required to sort and reset the sorting mechanism only allows the system to sort one of the two particles. Accordingly, the control system 150 may output a positive sorting decision 8490 for the first particle but a negative sorting decision 8490 for the second particle. Although the system may not collect both particles due to practical constraints of the sorting mechanism, the control system 150 may still record accurate metadata for both events (e.g., that both particles corresponded to the collection target).
- control system 150 may determine that an inference for an event is taking too long.
- the control system 150 may determine, based on the event-derived sorting parameters 8430, that the particle may have passed the sorting mechanism (e.g., a bifurcation of a microfluidic device), and thus output a negative sorting decision 8490 to avoid attempting to sort the particle even if the image-derived sorting parameters 8400 eventually indicate that the event corresponds to the collection target.
- the control system 150 may avoid sorting errors due to image-based sorting decisions that have been delayed and/or received out-of-order due to a long inference time, cell sort worker error, serial communication issues between components, etc.
- An illustrative implementation of a computer device / system 8500 that may be used in connection with any of the embodiments of the technology described herein is shown in FIG. 85.
- the computer system 8500 includes one or more processors 8510 and one or more articles of manufacture that comprise non-transitory computer-readable storage media (e.g., memory 8520 and one or more non-volatile storage devices 8530).
- the processor 8510 may control writing data to and reading data from the memory 8520 and the non-volatile storage device(s) 8530 in any suitable manner, as the aspects of the technology described herein are not limited in this respect.
- the processor 8510 may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., the memory 8520), which may serve as non-transitory computer-readable storage media storing processor-executable instructions for execution by the processor 8510.
- Computing device / system 8500 may also include a network input/output (I/O) interface 8540 via which the computing device may communicate with other computing devices (e.g., over a network), and may also include one or more user I/O interfaces 8550, via which the computing device may provide output to and receive input from a user.
- the user I/O interfaces may include devices such as a keyboard, a mouse, a microphone, a display device (e.g., a monitor or touch screen), speakers, a camera, and/or various other types of I/O devices.
- the image-based cell sorting system may be applied to various use cases of image-based cell sorting.
- the system may be used to determine signal internalization (e.g., as a ratio of signal from a plasma membrane versus the cell interior).
- the system may assess internalization using classical methods, machine-learning techniques, or both.
- the system may include a CNN (e.g., as shown in FIGS. 81A and 81B) to recognize image phenotypes related to fluorescent stains of various organelles.
- the system may apply a generalized internalization metric using image analysis techniques to measure the fraction of internalized versus perimeter localized fluorescence.
- the system may use either or both approaches in applications such as internalization, mammalian cell library characterization, etc.
- FIGS. 86A and 86B show images of cells obtained using confocal imaging (top rows) and the image-based cell sorter (bottom rows), according to embodiments of the present disclosure.
- the system may obtain images of stained cells and cells with stained organelles, as shown in the bottom rows of FIGS. 86A and 86B.
- the confocal images shown in the top rows of FIGS. 86A and 86B may be used to confirm the phenotype of the images obtained using the IBCS system.
- the confirmed phenotypes may be used as ground-truth labels for the images in the image catalog.
- the labeled image catalog may be used to train a machine learning model of the system (e.g., the CNNs shown in FIGS. 81A and 81B) and/or machine learning models of other systems.
- FIG. 87 illustrates stained organelles appearing as distinct phenotypes that may be distinguished using a principal component analysis (PCA), according to embodiments of the present disclosure.
- the PCA graph 8700 shown on the right side of the figure shows a dot for each cell.
- Each cell has a stain applied to its nucleic acid, plasma membrane, endoplasmic reticulum (ER), mitochondria, or Golgi apparatus.
- the PCA graph shows good separation between clusters corresponding to each (e.g., nucleic acid 8710, plasma membrane 8720, ER 8730, mitochondria 8740, or Golgi apparatus 8750).
- Regions may be defined in the PCA graph corresponding to individual phenotypes. The size and/or extent of the defined region may be set depending on whether purer collection or more efficient collection is desired (e.g., a smaller defined region may yield a purer collection while a larger defined region may collect more cells).
- FIG. 88 illustrates a classical approach to assess fractional localization on a plasma membrane versus an interior of a cell, according to embodiments of the present disclosure.
- the classical image analysis approach directly measures the fraction of fluorescent signal at the perimeter versus interior of the cell.
- the IBCS is used to acquire brightfield (left) and fluorescence (right) image pairs.
- the image pairs in the left-hand column are of a cell to which a plasma membrane stain has been applied.
- the image pairs in the right-hand column are of a cell to which a nucleic acid stain has been applied.
- the system may detect a cell perimeter in a brightfield (BF) image.
- the system may use the perimeter to determine masks corresponding to the perimeter (outer circle) and interior (inner circle).
- the system may measure the fluorescence signal corresponding to each masked region (e.g., as shown by the bar graphs below the images), and determine a fraction of perimeter versus interior signal.
- the benefit of this method is that the system may be able to quickly and easily determine the cell perimeter using the brightfield image.
- the system may map the masks to the fluorescence image, and measure the fluorescence signal corresponding to each mask.
- the system may determine the masks using dilation-based masking, convex hull masking, and/or other technique.
- FIG. 89 illustrates a classical dilation-based masking algorithm, according to embodiments of the present disclosure.
- the system may capture a brightfield image of a particle (e.g., a cell).
- the system may apply Canny edge detection to detect edges in the image.
- the edges may correspond to some or all of the perimeter, but also to other features within the particle.
- the system may dilate the detected edges until they merge and create an outline of the particle.
- the system may fill holes within the dilated edges to determine a full cell mask.
- the system may determine interior and perimeter masks. To determine the interior mask, the system may erode the outer edge inward.
- To determine the perimeter mask the system may identify the outer edge of the full cell mask to determine edge pixels. The system may dilate the edge pixels to determine the edge mask.
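The dilation/erosion steps of the classical masking algorithm can be sketched with 4-neighbour binary morphology as below; a production system would likely rely on an optimized image-processing library, and the function names are hypothetical.

```python
# Hypothetical sketch of binary morphology used in dilation-based masking.
NEIGHBOURS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def dilate(mask):
    """4-neighbour binary dilation (grows the mask outward)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] or any(
                    0 <= y+dy < h and 0 <= x+dx < w and mask[y+dy][x+dx]
                    for dy, dx in NEIGHBOURS):
                out[y][x] = 1
    return out

def erode(mask):
    """4-neighbour binary erosion (used to derive the interior mask)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and all(
                    0 <= y+dy < h and 0 <= x+dx < w and mask[y+dy][x+dx]
                    for dy, dx in NEIGHBOURS):
                out[y][x] = 1
    return out

def perimeter_mask(full):
    """Edge pixels: in the full cell mask but not in its erosion."""
    interior = erode(full)
    return [[1 if (f and not i) else 0 for f, i in zip(rf, ri)]
            for rf, ri in zip(full, interior)]

full = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
interior = erode(full)       # only the centre pixel survives
edge = perimeter_mask(full)  # the ring of 8 boundary pixels
```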
- FIG. 90 illustrates convex hull masking, according to embodiments of the present disclosure.
- the convex hull masking process may be more effective for out-of-focus cells and lead to better masks overall than classical dilation-based masks.
- the convex hull-based technique may start with a brightfield image of a cell.
- the system may apply, for example, a Sobel filter to create an image based on an emphasis of the edges in the brightfield image.
- the system may determine a convex hull around the pixels detected by the Sobel filter.
- a convex hull may be defined as the smallest convex set that contains the detected pixels (e.g., as if a rubber band were stretched around them).
- the system may determine the interior mask by infilling the convex hull, and detect an edge of the convex hull to determine the perimeter mask.
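The convex hull itself may be computed with a standard algorithm such as Andrew's monotone chain; the algorithm choice is an assumption, as the disclosure does not specify one. The edge pixels below stand in for pixels emphasised by, for example, a Sobel filter.

```python
# Andrew's monotone chain: returns the vertices of the smallest convex
# set containing the input points, in counter-clockwise order.
def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0]-o[0]) * (b[1]-o[1]) - (a[1]-o[1]) * (b[0]-o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Stand-in edge pixels; the interior point (1, 1) is not a hull vertex.
edge_pixels = [(0, 0), (4, 0), (4, 4), (0, 4), (1, 1)]
hull = convex_hull(edge_pixels)
```

Infilling the hull then yields the interior mask, and its boundary yields the perimeter mask, per the text above.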
- FIG. 91 illustrates how brightfield masking enables localization of signals from specific regions in fluorescence images, according to embodiments of the present disclosure.
- the figure includes brightfield and fluorescent image pairs obtained using the system. Highlighting has been applied to the interior of the cells, and the outer perimeter of the cells has been bounded by a line.
- the images in FIG. 91 illustrate how masking can enable localization of signals from specific regions in FL images, including across various cell morphology.
- FIG. 92 illustrates fluorescence signal from a plasma membrane and from a nucleic acid stain, according to embodiments of the present disclosure.
- the figure includes brightfield and fluorescent image pairs obtained using the system.
- the left two columns show cells with staining applied to the plasma membrane, and the right two columns show cells with staining applied to the nucleic acid.
- FIG. 93 illustrates an example of how the image-based cell sorter may classify and/or sort particles based on a ratio of perimeter to interior fluorescence signal, according to embodiments of the present disclosure.
- the graph 9300 shows the distribution of perimeter to interior fluorescence ratio for the cells with the nucleic acid stain 9310 versus cells with the plasma membrane stain 9320.
- the peaks show good separation, allowing the IBCS to sort cells based on phenotype as determined from the brightfield and fluorescence images.
- the IBCS may be used to collect cells corresponding to one or the other phenotype based on a ratio threshold between the peaks corresponding to each phenotype.
- the threshold 9315 may be used to collect nucleic acid-stained cells.
- the threshold 9325 may be used to collect cells corresponding to plasma membrane-stained cells.
- Ratio thresholds may be configured based on the desired purity of the collected cells.
- the threshold 9315 may be moved to the left to collect a purer sample or to the right to collect more cells, but with the inclusion of more false positives (e.g., cells not having the desired phenotype).
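The ratio-threshold sorting decision described above can be sketched as follows; the signal values and the threshold of 1.0 are illustrative, and the function name is hypothetical.

```python
# Hypothetical sketch: collect/discard decision from the
# perimeter-to-interior fluorescence ratio.
def sort_decision(perimeter_signal, interior_signal, threshold,
                  collect_above):
    """collect_above=True collects membrane-enriched cells
    (ratio > threshold); False collects interior-enriched cells."""
    ratio = perimeter_signal / interior_signal
    if collect_above:
        return "collect" if ratio > threshold else "discard"
    return "collect" if ratio < threshold else "discard"

# Illustrative numbers: a membrane-stained cell vs a nucleic-acid-stained
# cell, sorting for the membrane-stained phenotype.
membrane = sort_decision(120.0, 40.0, 1.0, collect_above=True)  # ratio 3.0
nucleic = sort_decision(30.0, 90.0, 1.0, collect_above=True)    # ratio 0.33
```

Moving the threshold trades purity against yield, as described in the text.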
- Graph 9350 shows the proportion of the target phenotype in the sort channel and the proportion of the non-target phenotype in the waste channel for the different phenotypes (nucleic acid stain versus plasma membrane stain) and different thresholds (e.g., set for higher purity or higher efficiency).
- FIG. 94 illustrates an application of image-based cell sorting to a pooled mixture of cells with different protein expression phenotypes, according to embodiments of the present disclosure.
- the mixture may include cells exhibiting different viability and/or fluorescence having different localization (e.g., plasma membrane, organelles, everywhere, etc.).
- the fluorescent localization may correspond to selective or general staining and/or activation (e.g., as observed in a fluorescence image, in some cases using masks determined from a brightfield image).
- Cell viability may correspond to morphological properties such as the presence and/or degree of blebbing exhibited by a cell (e.g., as observed in a brightfield image).
- the IBCS may be used to sort the mixture by phenotype to collect pure or substantially pure samples corresponding to the individual phenotypes.
- the samples may be sequenced (e.g., using massive parallel sequencing), with the resulting sequences stored in a phenotype database (e.g., a catalog).
- the system may apply this process to a mixture of two proteins or many proteins.
- the catalog may be used as a target library for training a machine learning model and/or other type of classifier component.
- FIG. 95 illustrates localization of surface-bound or internalized antibodies based on pH- dependent fluorophores versus image-based signal localization, according to embodiments of the present disclosure.
- the system may be used to assess fluorescence localization.
- Some applications may use fluorescence localization to determine, for example, antibody penetration into a cell.
- One technique for determining antibody uptake is to use a pH-sensitive fluorophore such as FabFluor or pHrodo, as shown in the top half of the figure.
- a pH-sensitive fluorophore may exhibit little or no fluorescence outside of a cell, but may begin to exhibit fluorescence once inside the cell (e.g., within lysosomes) due to the slightly different pH relative to the exterior of the cell. In some cases, however, fluorophores outside of the cell surface may emit a weak fluorescence signal that may be difficult to distinguish from the internalized signal.
- the IBCS is capable of localizing fluorescence signals to the perimeter/interior of a cell using a non-pH-dependent secondary, as shown in the bottom of the figure, which may offer an improved method of assessing antibody uptake.
- FIG. 96 illustrates an application of the image-based cell sorter to emulsion droplets, according to embodiments of the present disclosure.
- An emulsion droplet may be a droplet of water, including one or more cells, suspended in an oil.
- an emulsion droplet may be a double-emulsion droplet of water in oil in water.
- Other materials may be added to the water to interact with the cell(s), enabling each droplet to act as a miniature bioreactor.
- FIG. 97 shows example images of emulsion cell droplets, according to embodiments of the present disclosure.
- the IBCS can combine the benefits of droplet-based assays with image phenotype-based analysis and/or sorting.
- the IBCS may extend the segmentation and masking techniques described above to identifying a droplet in a brightfield image, identifying a cell within the droplet using the brightfield image, and masking the fluorescence image to determine perimeter vs interior fluorescence.
- the IBCS may thus allow assessment of the amount of attachment of reagents in the droplet to the exterior of the cell, and/or penetration of the reagents into the cell interior.
- the IBCS may be used to perform label-free identification of immune cell activation; that is, using only brightfield images and without fluorescent labels.
- Label-free T-Cell activation may be applied to complex cell assays including, but not limited to, activation upon binding (e.g., cell-cell binding assays), sample/tissue profiling (e.g., in complex mixtures of cells such as peripheral blood mononuclear cells (PBMCs) or a tumor microenvironment), label-free sorting for downstream culture applications (e.g., memory, persistence of activation, etc.), good manufacturing practice (GMP) applications where fluorescent labels are not desired (e.g., screening/sorting for T-Cell-related therapies), bispecific molecules (e.g., bispecific T-Cell engagers (BiTEs)), immune-synapse-bound T-Cell-tumor pairs, organoid infiltration, fast and label-free blood donor screening (e.g., PBMCs), etc.
- FIG. 98 illustrates an application of the image-based cell sorter to T-Cell activation, according to embodiments of the present disclosure.
- a stimulant cocktail of varying concentration was applied to T-Cells.
- the concentration of the stimulant cocktail applied to each population of cells (e.g., from 1 to 8) is shown by the wedge, where population 1 was immersed in the highest concentration and population 8 the lowest.
- Each population of T- Cells was cataloged separately.
- Each population was analyzed using the IBCS, implementing a CNN as described herein, to determine whether T-Cell activation is detectable using morphology (e.g., without a fluorescence label).
- the figure shows an example cell exposed to the lowest concentration of the stimulant cocktail (population 8) and an example cell exposed to the highest concentration of the stimulant (population 1).
- FIG. 99 is a graph illustrating an example principal component analysis of activated T- Cells imaged using the system, according to embodiments of the present disclosure. Each point corresponds to a cell, and nearby cells exhibit similar morphology. The cells are coded for their population, from 0 to 8, corresponding to exposure to a different concentration of stimulant cocktail.
- FIG. 100 illustrates different phenotypes represented by the principal components, according to embodiments of the present disclosure.
- FIG. 99 illustrates PCA of all populations together, while FIG. 100 illustrates the PCA of each population separately. Despite the overlap shown in FIG. 99, boxes may be drawn to collect T-Cells corresponding to each population individually and with reasonable purity.
- FIG. 101 illustrates a comparison of T-Cell activation as measured using image-based phenotyping versus flow cytometry, according to embodiments of the present disclosure.
- Traditional flow cytometry was used to validate the brightfield (label-free) phenotyping of the populations using the IBCS.
- the label-free phenotyping of T-Cell activation corresponded closely to the T-Cell activation as measured by flow.
- the label-free phenotyping by the IBCS was also shown to be comparable to label-based measurement of T-Cell activation using fluorescence activated cell sorting (FACS).
- FIG. 102 is a flowchart illustrating example image-based and traditional T-Cell activation assay workflows, according to embodiments of the present disclosure.
- Image-based determination of the half maximal inhibitory concentration (IC50) for the T-Cells / reagent using the IBCS has several advantages over traditional methods; for example, the IBCS does not require a known activation marker to be pre-specified, it may be faster/more efficient due to obviating fixation and/or staining steps, and it may be cheaper due to not needing labeling reagents.
- Both workflows may begin with incubation of T-Cells with an antigen of interest (e.g., a dilution series of the stimulant cocktail) (10210).
- the traditional FACS workflow may include fixing the T-Cells (10220), staining the T-Cells with fluorophore-conjugated antibodies specific to an activation biomarker (10230), imaging each population in FACS (10240), and counting the fraction of activated cells by FACS for each antigen concentration and determining the IC50 by curve fitting (10250).
- the label-free workflow may include imaging each population of cells using the IBCS (10260), processing the images through a pre-trained T-Cell activation machine learning model to extract embeddings and determine principal components (10270), and computing the mean of the principal component of embeddings for each population and determining the IC50 by curve fitting (10280).
- FIG. 103 illustrates results of image-based phenotyping of naive T-Cells, activated T- Cells, and tumor cells, according to embodiments of the present disclosure.
- a machine learning model was first trained on data sets including images of naive T-Cells, activated T-Cells, and tumor cells as shown at the top of the figure. The IBCS and model were then tested using mixtures of T-Cells and tumor cells to determine the number of each in the mixture. For mixtures of 1:1 tumor cells to T-Cells, the model reproduced the correct ratio, as shown in the graph at the bottom right. In a 6:1 mixture, however, the model showed bias towards classifying the more numerous tumor cells as activated T-Cells.
- FIG. 104 illustrates results of image-based functional screening of a complex cell mixture, according to embodiments of the present disclosure.
- the figure illustrates how the IBCS may be used as a readout for functional screens in complex cell mixtures such as T-Cell tumor killing assays.
- a mixture of T-Cells and tumor cells may be co-incubated with a functional T-Cell engager (TCE) and with a non-functional T-Cell engager.
- a TCE may bind to a T-Cell and a tumor cell, as shown at the top of the figure. This may result in death of the tumor cell, which may alter its size and/or shape.
- FIG. 105 shows a comparison of images of naive T-Cells, activated T-Cells, and tumor cells in a complex cell mixture with a functional T-Cell engager and a non-functional T-Cell engager, according to embodiments of the present disclosure.
- the graph at the bottom left of FIG. 104 shows a PCA of the different phenotypes (e.g., naive and activated T-Cells, and tumor cells).
- the total percentage of tumor cells may be reduced due to T-Cell activation and subsequent tumor cell killing as expected from the introduction of the functional TCE relative to the non-functional TCE, as shown in the graph at the bottom right.
- FIG. 106 illustrates an example of image-based phenotyping of human monocyte-derived dendritic cells (moDCs), according to embodiments of the present disclosure.
- MoDCs isolated from frozen PBMCs may show differential activation patterns as observed by the IBCS after TLR4 stimulation. Unlike T-Cells, moDC size may not change during activation. Yet the IBCS may still identify distinct subpopulations within donors in response to LPS-induced activation, as shown by the images, PCAs, and boundary boxes in FIG. 106.
- FIGS. 107A through 107D illustrate the results of image-based phenotyping corresponding to morphological changes associated with B-Cell activation, according to embodiments of the present disclosure.
- B-Cells increase in size (e.g., as reflected by PCA2 on the vertical axis) and become more irregularly shaped (e.g., as reflected by PCA1 on the horizontal axis) with activation time (e.g., from FIG. 107A to FIG. 107C) until eventually shrinking again (e.g., as shown in FIG. 107D).
- FIG. 108 illustrates an application of the image-based cell sorter in multi-cellular spheroid sorting, according to embodiments of the present disclosure.
- a spheroid may be a mass of dozens to tens of thousands of cells. An image of an example spheroid is shown in the top left of the figure. The imaged spheroid is about 500 µm across.
- the IBCS may be modified for use as a high-throughput screening system for micro physiological systems (MPS). In particular, the IBCS may be modified to handle such spheroids gently to avoid breaking them (e.g., from shear forces in the microfluidic device) and/or clogging the microfluidic device.
- the microfluidic device may be scaled up from channels having dimensions of 100×30 µm to, for example, 1000×1000 µm.
- the wider channel may reduce the shear forces experienced by the spheroids as they pass through the channel.
- other dimensions may be used to accommodate a range of spheroid sizes; for example, the channel(s) may be fabricated to have a width and/or depth of 500, 750, 1000, or 1500 µm, etc.
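The shear-reduction rationale above can be illustrated with an order-of-magnitude estimate. This sketch assumes the wide rectangular-channel approximation for wall shear stress, τ ≈ 6μQ/(wh²), water-like viscosity, and an arbitrary flow rate; none of these values are taken from the disclosure, and the approximation is rough for a near-square channel.

```python
def wall_shear_stress(q_m3_s, width_m, height_m, mu=1e-3):
    """Approximate wall shear stress (Pa) in a wide rectangular channel:
    tau ~ 6 * mu * Q / (w * h^2), with mu defaulting to water (1e-3 Pa*s).
    Order-of-magnitude only, especially for near-square channels."""
    return 6.0 * mu * q_m3_s / (width_m * height_m ** 2)

q = 1e-9  # 1 uL/s, an arbitrary illustrative flow rate
tau_small = wall_shear_stress(q, 100e-6, 30e-6)     # 100 x 30 um channel
tau_large = wall_shear_stress(q, 1000e-6, 1000e-6)  # 1000 x 1000 um channel
reduction = tau_small / tau_large
```

At equal flow rate, the scaled-up channel cuts wall shear stress by roughly four orders of magnitude, consistent with the gentler spheroid handling described above.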
- larger 1mm internal diameter tubing may be used for sample loading, and the tubing may be connected to the microfluidic chip using 19-gauge pins.
- the sample pumps and/or sheath pumps may be modified for higher volumetric flow rates; for example, the system may be scaled with large 25ml syringes for PSD6 sheath pumps.
- Optics of the imaging device(s) 160 may be modified to reduce magnification (e.g., from a 20x objective to a 10x objective). Images may be acquired using the full sensor of the brightfield and/or fluorescent camera (e.g., which may be approximately 500 × 350 µm).
- FIGS. 109A through 109C show example images of multi-cellular spheroids, according to embodiments of the present disclosure.
- the images of fibroblasts were taken using an exposure time of 1 µs at a total flow rate of 9–10 ml/min.
- the volumetric flow rate of sheath fluids 1-4 (e.g., two lateral and two central sheath flows) was 2 ml/min each.
- the volumetric flow rate of the sample fluid was 1–2 ml/min.
- the images reflect a dramatic difference in appearance of the 5%, 10%, and 25% spheroids.
- the 5% and 10% fibroblasts had mostly large whole intact spheroids, with the 10% showing a prominent fibroblast core.
- the 25% fibroblast appeared to be mostly small fragments of spheroids.
- the 25% fibroblasts may have been fragmented during transfer or flow.
- the embodiments described herein can be implemented in any of numerous ways.
- the embodiments may be implemented using hardware, software or a combination thereof.
- the software code can be executed on any suitable processor (e.g., a microprocessor) or collection of processors, whether provided in a single computing device or distributed among multiple computing devices.
- any component or collection of components that perform the functions described above can be generically considered as one or more controllers that control the above-discussed functions.
- the one or more controllers can be implemented in numerous ways, such as with dedicated hardware, or with general purpose hardware (e.g., one or more processors) that is programmed using microcode or software to perform the functions recited above.
- one implementation of the embodiments described herein comprises at least one computer-readable storage medium (e.g., RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or other tangible, non-transitory computer-readable storage medium) encoded with a computer program (i.e., a plurality of executable instructions) that, when executed on one or more processors, performs the above-discussed functions of one or more embodiments.
- the computer-readable medium may be transportable such that the program stored thereon can be loaded onto any computing device to implement aspects of the techniques discussed herein.
- The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that can be employed to program a computer or other processor to implement various aspects of embodiments as discussed above. Additionally, it should be appreciated that, according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor, but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein.
- Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices.
- program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
- functionality of the program modules may be combined or distributed as desired in various embodiments.
- data structures may be stored in one or more non-transitory computer-readable storage media in any suitable form.
- data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a non-transitory computer-readable medium that convey relationships between the fields.
- any suitable mechanism may be used to establish relationships among information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationships among data elements.
- inventive concepts may be embodied as one or more processes, of which examples have been provided.
- the acts performed as part of each process may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
- the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
- This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
- “at least one of A and B” can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
- a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
- Item 1 A system for handling a particle suspension in a volume of fluid, the system comprising:
- a microfluidic channel in fluid communication with the sample source, the first destination reservoir, and the second destination reservoir, the microfluidic channel comprising: a main channel having an inlet disposed downstream of the sample source, and a downstream end comprising a bifurcation having a distal end in fluid communication with a first destination channel having a first destination outlet and a second destination channel having a second destination outlet, the first destination reservoir disposed downstream of the first destination outlet, and the second destination reservoir disposed downstream of the second destination outlet;
- a sorting device disposed upstream of the bifurcation and downstream of the first section, the sorting device configured to selectively direct the volume of fluid exiting the main channel away from the first destination channel to the second destination channel;
- a control system in electronic communication with the one or more imaging devices and the sorting device, the control system configured to transmit a control signal to the sorting device to actuate the selective direction of the volume of fluid
- the control system comprising: a processor; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: receive, from the one or more imaging devices, a digital image of a particle in the particle suspension; process the digital image by executing an algorithm to extract at least one characteristic associated with the particle, the at least one characteristic derived from a machine-learning model trained on image data; determine, based on the at least one characteristic, a classification of the particle; assign, based on the classification, the particle to at least a first group or a second group; and generate, if the particle belongs to the first group, the control signal.
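The receive → process → classify → assign → generate chain recited in item 1 can be summarized as control flow. The `embed` and `classify` callables below are hypothetical stand-ins for the machine-learning model and the classification rule; only the decision structure mirrors the claim text.

```python
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class SortDecision:
    group: int     # 1 = first group, 2 = second group
    actuate: bool  # whether to generate the control signal

def decide_sort(image: np.ndarray,
                embed: Callable[[np.ndarray], np.ndarray],
                classify: Callable[[np.ndarray], int]) -> SortDecision:
    """Extract at least one characteristic from the digital image,
    classify the particle, and generate the control signal only when
    the particle is assigned to the first group."""
    characteristic = embed(image)     # model-derived characteristic
    group = classify(characteristic)  # classification -> group assignment
    return SortDecision(group=group, actuate=(group == 1))
```

A trivial usage example: with `embed` returning mean pixel intensity and `classify` thresholding it, bright particles land in group 1 and trigger actuation while dark particles do not.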
- Item 2 The system of item 1, wherein the instructions, when executed by the processor, cause the processor to: if the particle belongs to the first group, transmit no actuation signal to the sorting device.
- Item 3 The system as in any one of items 1-2, wherein the instructions, when executed by the processor, cause the processor to: if the particle belongs to the second group, transmit an actuation signal to the sorting device.
- Item 4 The system of item 1, wherein the instructions, when executed by the processor, cause the processor to: if the particle belongs to the first group, transmit a first actuation signal to the sorting device and if the particle belongs to the second group, transmit a second actuation signal to the sorting device.
- Item 5 The system as in any one of items 1-4, wherein the first destination channel is a default channel and the second destination channel is a target channel.
- Item 6 The system as in any one of items 1-5, wherein the first destination reservoir is a waste reservoir, and the second destination reservoir is a target reservoir for collection of particles of interest.
- Item 7 The system as in any one of items 1-6, wherein the sample source holds a volume of source fluid comprising a plurality of particles.
- Item 8 The system as in any one of items 1-7, wherein the sample source comprises a pump in electronic communication with the control system.
- Item 9 The system as in any one of items 1-8, wherein the first destination channel and the second destination channel each have a cross-sectional area perpendicular to a direction of flow, the cross-sectional area of the first destination channel being larger than the cross-sectional area of the second destination channel.
- Item 10 The system as in any one of items 1-9, wherein the microfluidic channel comprises a flow focusing system disposed upstream of the inlet and configured to control a trajectory of the particle conveyed in the volume of fluid.
- Item 11 The system of item 10, wherein the flow focusing system comprises a sample channel, a first lateral sheath channel, a second lateral sheath channel, and at least one sheath fluid source.
- Item 12 The system of item 11, wherein the sample channel has a sample inlet downstream of the sample source, a first central sheath inlet upstream of the sample inlet, and a second central sheath inlet downstream of the sample inlet, the first central sheath inlet and the second central sheath inlet each being disposed downstream of the at least one sheath fluid source.
- Item 13 The system of item 12, wherein the sample channel has an outlet region upstream of the inlet of the main channel.
- Item 14 The system of item 13, wherein the first lateral sheath channel and the second lateral sheath channel each have a sheath inlet downstream of the at least one sheath fluid source, and wherein the first lateral sheath channel and second lateral sheath channel each have a downstream outlet at the outlet region of the sample channel.
- Item 15 The system of item 14, wherein the sample channel, the first lateral sheath channel and the second lateral sheath channel are coplanar, and the sample channel is disposed between the first lateral sheath channel and the second lateral sheath channel.
- Item 16 The system of item 15, wherein an angle between a direction of flow in the sample channel, and a direction of flow in the first lateral sheath channel is less than 90 degrees.
- Item 17 The system of item 15 or item 16, wherein an angle between the direction of flow in the sample channel, and a direction of flow in the second lateral sheath channel is less than 90 degrees.
- Item 18 The system as in any one of items 14-17, wherein the first lateral sheath channel has a sheath inlet downstream of a first lateral sheath fluid source, and the second lateral sheath channel has a sheath inlet downstream of a second lateral sheath fluid source.
- Item 19 The system of item 18, wherein the first lateral sheath fluid source and the second lateral sheath fluid source each comprise a pump in electronic communication with the control system.
- Item 20 The system of item 19, wherein the flow focusing system is configured such that actuating the pump of the first lateral sheath fluid source or the pump of the second lateral sheath fluid source, or both, by the control system, alters the trajectory of the particle conveyed in the volume of fluid.
- Item 21 The system as in any one of items 1-20, comprising an event tracking system comprising a processor and a memory having instructions stored thereon, in electronic communication with the control system and configured to trigger image acquisition by at least one of the one or more imaging devices.
- Item 22 The system of item 21, wherein the event tracking system comprises a laser beam illuminating a second section of the main channel upstream of the first section.
- Item 23 The system of item 22, wherein the event tracking system is configured to detect a presence of a particle in the second section by detecting at least one of absorption, attenuation, or scatter of the laser beam, by the particle.
- Item 24 The system of item 21, wherein the event tracking system comprises at least one pair of electrodes and electronic circuitry configured to detect electric impedance in a second section of the main channel upstream of the first section.
- Item 25 The system of item 24, wherein the event tracking system comprises three or more electrodes.
- Item 26 The system as in any one of items 24-25, wherein the event tracking system is configured to detect the presence of a particle in the second section based on a change in detected impedance.
- Item 27 The system as in any one of items 24-26, wherein the event tracking system is configured to measure a velocity of a particle in the second section based on a change in detected impedance.
- Item 28 The system of item 27, wherein the event tracking system is configured to determine sort delay time, a sort time window, or both.
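One plausible realization of items 27 and 28 is to time a particle across two impedance-detection points a known distance apart, then convert the resulting velocity into a sort delay and a tolerance window. The electrode spacing, detector-to-sorter distance, and window fraction below are illustrative assumptions, not disclosed values.

```python
def sort_timing(t_first_s, t_second_s, electrode_spacing_m,
                detector_to_sorter_m, window_fraction=0.2):
    """Estimate the sort delay time and sort time window (both in seconds)
    from two impedance-detection timestamps at electrode pairs a known
    distance apart."""
    dt = t_second_s - t_first_s
    if dt <= 0:
        raise ValueError("second detection must follow the first")
    velocity = electrode_spacing_m / dt        # particle velocity, m/s
    delay = detector_to_sorter_m / velocity    # time to reach the sorter
    window = window_fraction * delay           # tolerance around the delay
    return delay, window

# Example: 200 um electrode spacing crossed in 1 ms -> 0.2 m/s;
# a sorter 2 mm downstream then implies a 10 ms sort delay.
delay, window = sort_timing(0.0, 1e-3, 200e-6, 2e-3)
```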
- Item 29 The system as in any one of items 24-28, comprising at least one pair of electrodes and electronic circuitry configured to detect electric impedance in the first destination channel or the second destination channel.
- Item 30 The system of item 29, comprising at least one pair of electrodes and electronic circuitry configured to detect electric impedance in each of the first destination channel and the second destination channel.
- Item 31 The system as in any one of items 21-30, wherein the event tracking system is configured to transmit, upon detection of a particle, a first trigger signal to at least one of the one or more imaging devices, causing the at least one of the one or more imaging devices to acquire an image of the first section of the main channel.
- Item 32 The system of item 31, wherein the instructions, when executed by the processor, cause the processor to: transmit, upon completion of image acquisition, a second trigger signal to the control system, thereby causing the control system to collect the image and commence image processing.
- Item 33 The system as in any one of items 31-32, wherein the event tracking system is configured to, if a time interval between detection of a first particle and a second particle is less than a camera rate limiter value, cause the at least one of the one or more imaging devices to not acquire an image of the second particle.
- Item 34 The system as in any one of items 31-33, wherein the event tracking system is configured to, if a time interval between detection of a first particle and a second particle is less than a camera rate limiter value, cause an actuation signal to be transmitted to the sorting device.
- Item 35 The system as in any one of items 31-33, wherein the event tracking system is configured to, if a time interval between detection of a first particle and a second particle is less than a camera rate limiter value, cause no actuation signal to be transmitted to the sorting device.
- Item 36 The system as in any one of items 21-35, wherein the event tracking system is configured to receive, from the control system, a first actuation input and provide, based on the first actuation input, an actuation signal to the sorting device.
- Item 37 The system of item 36, wherein the event tracking system is configured to execute a secondary determination algorithm to generate a second actuation input and provide, based on the first actuation input and second actuation input, an actuation signal to the sorting device.
- Item 38 The system of item 37, wherein executing the secondary determination algorithm comprises one or more of: (a) determining a time interval between two particles, (b) determining a relative position of at least two particles to each other, and/or (c) determining if actuating the sorting device would conflict with an ongoing or active sorting event.
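Checks (a) through (c) of item 38 might be combined into a simple veto over the primary actuation input, as sketched below. The timestamp-based interface and the specific veto policy (suppress actuation on particle coincidence or sort conflict) are assumptions for illustration, not the claimed logic.

```python
def secondary_determination(primary_actuate: bool,
                            t_particle_s: float,
                            t_previous_s: float,
                            min_separation_s: float,
                            active_sort_until_s: float) -> bool:
    """Return True only if the sorting device should actually be actuated.
    Vetoes the primary decision when the particle arrived too close to the
    previous one (checks (a)/(b)) or when actuation would overlap an
    ongoing or active sorting event (check (c))."""
    if not primary_actuate:
        return False
    too_close = (t_particle_s - t_previous_s) < min_separation_s
    conflict = t_particle_s < active_sort_until_s
    return not (too_close or conflict)
```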
- Item 39 The system as in any one of items 1-38, wherein the sorting device comprises a sorting channel in fluid communication with the main channel, the sorting channel having an inlet downstream of a sorting reservoir and an outlet upstream of the bifurcation.
- Item 40 The system of item 39, wherein the sorting device comprises a pump comprising a piezoelectric actuator, the piezoelectric actuator configured to receive an actuation signal from the control system or the event tracking system and cause, upon receipt of the actuation signal, a flow of sorting fluid from the sorting reservoir through the sorting channel into the main channel.
- Item 41 The system of item 40, wherein the microfluidic channel is configured such that sorting fluid entering the main channel causes a direction of flow out of the main channel to change from the first destination channel to the second destination channel.
- Item 42 The system as in any one of items 1-41, wherein the sorting device comprises a valve and a piezoelectric actuator configured to actuate the valve.
- Item 43 The system as in any one of items 1-41, wherein the sorting device comprises one or more of a push-pull mechanism, an air pressure actuation system, a fluid pressure actuation system, a solenoid valve, an electrostatic cell sorting system, a dielectrophoretic cell sorting system, an acoustic cell sorting system, a surface wave generator, a laser trap, an optical trap, a cavitation-induced sorting system, a laser ablation system for positive or negative cell selection, or a system for light-induced gelation encapsulation of cells for positive/negative selection.
- Item 44 The system as in any one of items 1-43, wherein the one or more imaging devices comprise one or more of a brightfield camera, a machine-vision camera, a charge-coupled device (CCD) camera, a complementary metal-oxide-semiconductor (CMOS) camera, a photomultiplier tube (PMT), a photodiode, or an Avalanche Photodiode (APD).
- Item 45 The system as in any one of items 1-44, wherein the one or more imaging devices comprise one or more of a fluorescence detector, a spectroscopy system, or an impedance detection system.
- Item 46 The system as in any one of items 1-45, wherein the one or more imaging devices are configured for Differential Interference Contrast (DIC), Phase Contrast, Light-field, or Darkfield imaging.
- Item 47 The system as in any one of items 1-46, wherein the one or more imaging devices are configured for Coherent Anti-Stokes Raman Scattering (CARS) or Fluorescence Lifetime Imaging Microscopy (FLIM).
- Item 48 The system as in any one of items 45-47, wherein the fluorescence detector is a CCD camera, an electron-multiplying CCD camera, or a CMOS camera.
- Item 49 The system as in any one of items 45-47, wherein the fluorescence detector is an image intensified fluorescence camera.
- Item 50 The system as in any one of items 1-49, wherein the particle displays an antibody or antibody fragment, an antigen, or other proteins of interest.
- Item 51 The system as in any one of items 1-50, wherein the particle comprises an internalized bispecific or monospecific antibody-drug conjugate.
- Item 52 The system of item 51, wherein the antibody-drug conjugate is labeled with a fluorophore or dye.
- Item 53 The system as in any one of items 1-52, wherein the particle is connected to one or more other particles via a T-Cell engager.
- Item 54 The system as in any one of items 1-53, wherein the particle is a cell.
- Item 55 The system as in any one of items 1-54, wherein the particle is labeled.
- Item 56 The system of item 55, wherein the particle is labeled with a protein expression label, a spatial distribution and localization label, cell debris, a cell fragment, a labeling cell, a bead, a translocation label, a fluorophore, a dye, a stain, a cell painting dye, a luminescent label, a live/dead stain, a lanthanide, a quantum dot, or a lasing particle.
- Item 57 The system of item 56, wherein the labeling cell is fluorescently labeled.
- Item 58 The system of item 56 or item 57, wherein the labeling cell is a dead cell.
- Item 59 The system as in any one of items 56-58, wherein the label is not attached to the particle.
- Item 60 The system as in any one of items 1-54, wherein the particle is unlabeled.
- Item 61 The system as in any one of items 1-60, wherein the at least one characteristic is based on a cell feature comprising one or more of cell morphology, cell size, cell area, texture, cell-cell binding, or cell-cell spatial association.
- Item 62 The system of item 55, wherein the particle is a fluorescently labeled cell, and the at least one characteristic is based on a fluorescent label comprising a protein expression label, a label indicating spatial distribution or localization of a molecule of interest, or a label indicating translocation of a molecule of interest.
- Item 63 The system of item 54, wherein the at least one characteristic is a second particle present within an imaged region of interest comprising the cell.
- Item 64 The system as in any one of items 1-53, wherein the particle is a multicellular spheroid.
- Item 66 The system of item 64, wherein the multicellular spheroid is labeled.
- Item 67 The system as in any one of items 1-66, comprising an impedance spectroscopy system comprising at least one pair of electrodes and electronic circuitry configured to detect electric impedance in at least one of the main channel, the first destination channel, or the second destination channel to determine a dielectric property of a particle.
- Item 68 The system of item 67, wherein an electric carrier wave frequency of a current between the electrodes of the at least one pair is about 0.1 MHz, about 1 MHz, about 10 MHz, or over 10 MHz.
- Item 69 The system as in any one of items 67-68, wherein the instructions, when executed by the processor, cause the processor to: receive, from the impedance spectroscopy system, electric frequency data; and process the electric frequency data to extract a spectral fingerprint of the particle.
- Item 70 The system of item 69, wherein the instructions, when executed by the processor, cause the processor to: determine, from the spectral fingerprint, a particle type.
- Item 71 The system as in any one of items 69-70, wherein the instructions, when executed by the processor, cause the processor to: determine, from the spectral fingerprint, a particle size.
- Item 72 The system as in any one of items 69-71, wherein the instructions, when executed by the processor, cause the processor to: determine, from the spectral fingerprint, particle morphology.
- Item 73 The system as in any one of items 69-72, wherein the particle is a cell and the instructions, when executed by the processor, cause the processor to: determine, from the spectral fingerprint, cell health or cell viability.
- Item 74 The system as in any one of items 69-73, wherein the instructions, when executed by the processor, cause the processor to: classify the particle based on the spectral fingerprint.
- Item 75 The system of item 74, wherein the image data comprises a plurality of sets of one or more images, wherein each set is associated with a corresponding set of electric frequency data.
- Item 76 The system as in any one of items 69-75, wherein processing the electric frequency data comprises executing an algorithm to extract a spectral fingerprint derived from a machine-learning model trained on previously obtained electric frequency data.
- Item 77 The system as in any one of items 69-76, wherein processing the image data or the electric frequency data, or both, comprises executing an algorithm to extract at least one characteristic from the digital image or the spectral fingerprint, or both, derived from an integrated machine-learning model trained on image data comprising a plurality of sets of one or more images, wherein each set is associated with a corresponding set of electric frequency data.
- Item 78 A system for handling a particle suspension in a volume of fluid, the system comprising:
- a microfluidic channel in fluid communication with the sample source, the first destination reservoir, and the second destination reservoir, the microfluidic channel comprising: a main channel having an inlet disposed downstream of the sample source, and a downstream end comprising a bifurcation having a distal end in fluid communication with a first destination channel having a first destination outlet and a second destination channel having a second destination outlet, the first destination reservoir disposed downstream of the first destination outlet, and the second destination reservoir disposed downstream of the second destination outlet;
- an impedance spectroscopy system comprising at least one pair of electrodes and electronic circuitry configured to detect electric impedance in a first section of the main channel;
- a sorting device disposed upstream of the bifurcation and downstream of the first section, the sorting device configured to selectively direct the volume of fluid exiting the main channel away from the first destination channel to the second destination channel;
- a control system in electronic communication with the impedance spectroscopy system and the sorting device, the control system configured to transmit a control signal to the sorting device to actuate the selective direction of the volume of fluid
- the control system comprising: a processor; and a memory having instructions stored thereon, wherein the instructions, when executed by the processor, cause the processor to: receive, from the impedance spectroscopy system, electric frequency data; process the electric frequency data to extract a spectral fingerprint of a particle; determine, based on the spectral fingerprint, a classification of the particle; assign, based on the classification, the particle to at least a first group or a second group; and generate, if the particle belongs to the first group, the control signal.
- Item 79 A system as in any of items 1-78, wherein the machine-learning model is trained using a computer implemented method of training a neural network for microfluidic cell sorting, the training comprising: collecting a first set of digital images, each digital image depicting a particle suspended in a fluid flowing through a microfluidic device; applying at least one first transformation to each digital image to create a first modified set of digital images; classifying each modified digital image to create a first catalogued set of digital images; creating a first training set comprising the first catalogued set of digital images; and training the neural network using the first training set.
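The training procedure of item 79 can be sketched as follows, with horizontal/vertical flips standing in for the "at least one first transformation" and a small scikit-learn MLP standing in for the neural network. Both substitutions, and the flattened-pixel input representation, are illustrative assumptions rather than the claimed method.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def augment(images):
    """Apply a first transformation to each digital image: identity plus
    horizontal and vertical flips (3 modified variants per image)."""
    out = []
    for img in images:
        out.extend([img, np.fliplr(img), np.flipud(img)])
    return np.array(out)

def create_training_set(images, labels):
    """Classify (label) each modified image and flatten to feature rows,
    forming the catalogued training set."""
    modified = augment(images)
    modified_labels = np.repeat(labels, 3)  # label carries to each variant
    return modified.reshape(len(modified), -1), modified_labels

def train_sorter_model(images, labels):
    """Train the stand-in network on the first training set."""
    X, y = create_training_set(images, labels)
    model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                          random_state=0)
    model.fit(X, y)
    return model
```

The second training pass of item 80 would repeat `create_training_set` on a second image set and continue fitting the same model.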
- Item 80 The system of item 79, further comprising: collecting a second set of digital images, each digital image depicting a particle suspended in a fluid flowing through a microfluidic device; applying at least one first transformation to each digital image of the second set of digital images to create a second modified set of digital images; classifying each modified digital image to create a second catalogued set of digital images; creating a second training set comprising the second catalogued set of digital images; and training the neural network using the second training set.
- Item 81 The system of item 79 or item 80, wherein at least one set of digital images comprises at least one brightfield image.
- Item 82 The system of item 81, wherein each set of digital images comprises two or more brightfield images.
- Item 83 The system as in any one of items 79-82, wherein each set of digital images comprises at least one fluorescence image.
- Item 84 The system of item 83, wherein each set of digital images comprises two or more fluorescence images, wherein the two or more fluorescence images comprise images from two or more fluorescence color channels.
- Item 85 The system as in any one of items 80-82, wherein the first set of digital images comprises at least one brightfield image and the second set of digital images comprises at least one fluorescence image.
- Item 86 The system as in any one of items 80-82, wherein the first and second sets of digital images depict the same particle.
- Item 87 The system as in any one of items 80-82, wherein the first and second sets of digital images depict different particles.
- Item 88 The system as in any one of items 79-87, wherein at least one set of digital images comprises at least one additional image of the same particle taken at a different focal plane ("z-slice").
- Item 89 The system as in any one of items 79-87, wherein at least one set of digital images comprises at least one image acquired via Differential Interference Contrast (DIC), Light-field, Phase Contrast, or Darkfield imaging.
- Item 90 The system as in any one of items 79-87, wherein at least one set of digital images comprises at least one image acquired via Coherent Anti-Stokes Raman Scattering (CARS) or Fluorescence Lifetime Imaging Microscopy (FLIM).
- Item 91 The system as in any one of items 79-90, wherein the particle is a cell.
- Item 92 The system as in any one of items 79-91, wherein the at least one first transformation comprises at least one of thresholding, erosion, dilation, filtering, or region extraction.
- Item 93 The system of item 92, wherein the filtering is based on at least one of size, shape, or texture of the particle.
- Item 94 The system as in any one of items 79-93, comprising applying at least a second transformation comprising at least one of image normalization or image gradient determination.
- Item 95 The system as in any one of items 79-94, wherein the classifying is feature-agnostic.
- Item 96 The system as in any one of items 79-95, wherein the classifying is based on at least one of size, shape, or texture of the particle.
- Item 97 The system as in any one of items 79-96, comprising labeling each image of the first and second catalogued set of digital images.
- Item 98 The system as in any one of items 79-97, wherein the neural network is a convolutional neural network.
- Item 99 The system as in any one of items 79-98, wherein training comprises learning weights that minimize a loss function.
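Item 99's recitation that "training comprises learning weights that minimize a loss function" can be illustrated with a minimal gradient-descent sketch. This is a generic example under stated assumptions (a logistic loss and a toy one-feature dataset), not the claimed training procedure; the names `train_weights`, `lr`, and `steps` are hypothetical.

```python
import numpy as np

def train_weights(X, y, lr=0.1, steps=500):
    # Learn weights w that (locally) minimize a mean logistic loss
    # via plain gradient descent.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probability
        grad = X.T @ (p - y) / len(y)      # gradient of the mean log-loss
        w -= lr * grad
    return w

# Linearly separable toy data: positive feature -> class 1.
X = np.array([[-2.0], [-1.0], [1.0], [2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = train_weights(X, y)
```

A convolutional network as in Item 98 would replace the linear model, but the weight-update structure (compute loss gradient, step against it) is the same.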
- Item 100 The system as in any one of items 79-99, wherein classifying comprises using a variational autoencoder (VAE).
- Item 101 The system as in any one of items 79-100, wherein classifying comprises using a you-only-look-once (YOLO) model.
- Item 102 A method for handling a particle suspension in a volume of fluid, the method comprising: providing a microfluidic channel in fluid communication with a sample source, a first destination reservoir, and a second destination reservoir, the microfluidic channel comprising a main channel having an inlet disposed downstream of the sample source, and a downstream end comprising a bifurcation having a distal end in fluid communication with a first destination channel having a first destination outlet and/or a second destination channel having a second destination outlet, the first destination reservoir disposed downstream of the first destination outlet, and the second destination reservoir disposed downstream of the second destination outlet.
- Item 103 The method of item 102, comprising, if the particle belongs to the first group, transmitting no actuation signal to the sorting device.
- Item 104 The method of item 102, comprising, if the particle belongs to the second group, transmitting an actuation signal to the sorting device.
- Item 105 The method of item 102, comprising, if the particle belongs to the first group, transmitting a first actuation signal to the sorting device and if the particle belongs to the second group, transmitting a second actuation signal to the sorting device.
- Item 106 The method as in any one of items 102-105, wherein the first destination channel is a default channel and the second destination channel is a target channel.
- Item 107 The method as in any one of items 102-106, wherein the first destination reservoir is a waste reservoir, and the second destination reservoir is a target reservoir for collection of particles of interest.
- Item 108 The method as in any one of items 102-107, wherein the sample source holds a volume of source fluid comprising a plurality of the particles.
- Item 109 The method as in any one of items 102-108, wherein the first destination channel and the second destination channel each have a cross-sectional area perpendicular to the direction of flow, the cross-sectional area of the first destination channel being larger than the cross-sectional area of the second destination channel.
- Item 110 The method as in any one of items 102-109, comprising maintaining a trajectory of the particle conveyed in the volume of fluid at or near a center line of the main channel.
- Item 112. The method of item 111, wherein the sample channel has a sample inlet downstream of the sample source, a first central sheath inlet upstream of the sample inlet, and a second central sheath inlet downstream of the sample inlet, the first central sheath inlet and the second central sheath inlet each being disposed downstream of the sheath fluid source.
- Item 113 The method of item 112, wherein the sample channel has an outlet region upstream of the inlet of the main channel.
- Item 114 The method of item 113, wherein the first lateral sheath channel and the second lateral sheath channel each have a sheath inlet downstream of a sheath fluid source, and wherein the first lateral sheath channel and second lateral sheath channel each have a downstream outlet at the outlet region of the sample channel.
- Item 115 The method of item 114, wherein the sample channel, the first lateral sheath channel and the second lateral sheath channel are coplanar, and the sample channel is disposed between the first lateral sheath channel and the second lateral sheath channel.
- Item 116 The method of item 115, wherein an angle between a direction of flow in the sample channel, and a direction of flow in the first lateral sheath channel is less than 90 degrees.
- Item 117 The method of item 115 or item 116, wherein an angle between the direction of flow in the sample channel, and a direction of flow in the second lateral sheath channel is less than 90 degrees.
- Item 118 The method as in any one of items 111-117, wherein the first lateral sheath channel has a sheath inlet downstream of a first lateral sheath fluid source, and the second lateral sheath channel has a sheath inlet downstream of a second lateral sheath fluid source.
- Item 119 The method of item 118, wherein the first lateral sheath fluid source and the second lateral sheath fluid source each comprise a pump in electronic communication with the control system.
- Item 120 The method of item 119, comprising actuating the pump of the first lateral sheath fluid source or the pump of the second lateral sheath fluid source, or both, by the control system, to alter the trajectory of the particle conveyed in the volume of fluid.
- Item 121 The method of any one of items 102-120, comprising triggering the imaging using an event tracking system in electronic communication with the control system.
- Item 122 The method of item 121, wherein triggering comprises illuminating a second section of the main channel upstream of the first section by a laser beam and detecting a presence of a particle.
- Item 123 The method of item 122, wherein detecting the presence of a particle in the second section comprises detecting at least one of absorption, attenuation, or scatter of the laser beam, by the particle.
- Item 124 The method of item 121, wherein the event tracking system comprises at least one pair of electrodes and electronic circuitry configured to detect electric impedance in a second section of the main channel upstream of the first section.
- Item 125 The method of item 124, wherein the event tracking system comprises three or more electrodes.
- Item 126 The method of item 124 or item 125, comprising detecting, by the event tracking system, the presence of a particle in the second section based on a change in detected impedance.
- Item 127 The method as in any one of items 124-126, comprising measuring, by the event tracking system, a velocity of a particle in the second section based on a change in detected impedance.
- Item 128 The method of item 127, comprising determining, by the event tracking system, sort delay time, a sort time window, or both.
- Item 129 The method as in any one of items 124-128, wherein the event tracking system comprises at least one pair of electrodes and electronic circuitry configured to detect electric impedance in the first destination channel or the second destination channel.
- Item 130 The method of item 129, wherein the event tracking system comprises at least one pair of electrodes and electronic circuitry configured to detect electric impedance in each of the first destination channel and the second destination channel.
- Item 131 The method as in any one of items 121-130, comprising transmitting, upon detection of a particle by the event tracking system, a first trigger signal to the one or more imaging devices, causing the one or more imaging devices to acquire an image of the first section of the main channel.
- Item 132 The method of item 131, comprising transmitting, by the imaging system, upon completion of image acquisition, a second trigger signal to the control system, thereby causing the control system to collect the image and commence image processing.
- Item 133 The method as in any one of items 131-132, wherein, if a time interval between detection of a first particle and a second particle is less than a camera rate limiter value, the event tracking system causes at least one of the one or more imaging devices not to acquire an image of the second particle.
- Item 134 The method as in any one of items 131-133, wherein, if a time interval between detection of a first particle and a second particle is less than a camera rate limiter value, the event tracking system causes an actuation signal to be transmitted to the sorting device.
- Item 135. The method as in any one of items 131-134, wherein, if a time interval between detection of a first particle and a second particle is less than a camera rate limiter value, the event tracking system causes no actuation signal to be transmitted to the sorting device.
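The camera rate limiter logic of Items 133-135 can be sketched as follows. This is an illustrative, non-limiting example; the function and parameter names (`rate_limiter_decision`, `rate_limit_interval`) are hypothetical, and the downstream actuation policy when an image is skipped is configurable (Item 134 transmits an actuation signal, Item 135 transmits none).

```python
def rate_limiter_decision(t_first, t_second, rate_limit_interval):
    # If the interval between two detections falls below the camera
    # rate limiter value, skip image acquisition for the second
    # particle (Item 133); otherwise acquire normally.
    interval = t_second - t_first
    return "acquire" if interval >= rate_limit_interval else "skip"
```

A "skip" outcome would then be mapped to either a default actuation signal or no signal at all, depending on which of Items 134/135 the system implements.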
- Item 136 The method as in any one of items 121-135, comprising receiving, by the event tracking system, from the control system, a first actuation input and providing, by the event tracking system, based on the first actuation input, an actuation signal to the sorting device.
- Item 137 The method of item 136, comprising executing, by the event tracking system, a secondary determination algorithm to generate a second actuation input and providing, based on the first actuation input and the second actuation input, an actuation signal to the sorting device.
- Item 138 The method of item 137, wherein executing the secondary determination algorithm comprises one or more of: (a) determining a time interval between two particles; (b) determining a relative position of at least two particles with respect to each other; or (c) determining whether actuating the sorting device would conflict with an ongoing or active sorting event.
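The secondary determination checks above (time interval between particles, conflict with an ongoing sorting event) can be illustrated with a minimal sketch. This is a hypothetical example, not the claimed algorithm; the names `secondary_determination`, `min_interval`, and `sort_active_until` are assumptions introduced here.

```python
def secondary_determination(t_a, t_b, min_interval, sort_active_until):
    # (a) time interval between two particles: reject if they are
    #     too close together to sort cleanly.
    if (t_b - t_a) < min_interval:
        return False
    # (c) conflict check: reject if an earlier sort pulse is still active
    #     at the candidate actuation time.
    if t_b < sort_active_until:
        return False
    return True
```

The boolean result would serve as the "second actuation input" combined with the control system's first actuation input before any signal reaches the sorting device.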
- Item 140 The method of item 139, comprising receiving, by the piezoelectric actuator of the pump, an actuation signal from the control system or the event tracking system and pumping, upon receipt of the actuation signal, sorting fluid from the sorting reservoir through the sorting channel into the main channel.
- Item 141 The method of item 140, wherein the pumping causes the direction of flow out of the main channel to change from the first destination channel to the second destination channel.
- Item 142 The method as in any one of items 102-141, wherein the sorting device comprises a valve and a piezoelectric actuator configured to actuate the valve.
- Item 143 The method as in any one of items 102-142, wherein the sorting device comprises one or more of a push-pull mechanism, an air pressure actuation system, a fluid pressure actuation system, a solenoid valve, an electrostatic cell sorting system, a dielectrophoretic cell sorting system, an acoustic cell sorting system, a surface wave generator, a laser trap, an optical trap, a cavitation-induced sorting system, a laser ablation system for positive or negative cell selection, or a system for light-induced gelation encapsulation of cells for positive/negative selection.
- Item 144 The method as in any one of items 102-143, wherein the one or more imaging devices comprise one or more of a brightfield camera, a machine-vision camera, a charge-coupled device (CCD) camera, a complementary metal-oxide-semiconductor (CMOS) camera, a photomultiplier tube (PMT), a photodiode, or an avalanche photodiode (APD).
- Item 145 The method as in any one of items 102-144, wherein the one or more imaging devices comprise one or more of a fluorescence detector, a spectroscopy system, or an impedance detection system.
- Item 146 The method as in any one of items 102-145, wherein the one or more imaging devices are configured for Differential Interference Contrast (DIC), Phase Contrast, Light-field, or Darkfield imaging.
- Item 147 The method as in any one of items 102-145, wherein the one or more imaging devices are configured for Coherent Anti-Stokes Raman Scattering (CARS) or Fluorescence Lifetime Imaging Microscopy (FLIM).
- Item 148 The method as in any one of items 145-147, wherein the fluorescence detector is a CCD camera, an electron-multiplying CCD camera, or a CMOS camera.
- Item 149 The method as in any one of items 145-147, wherein the fluorescence detector is an image intensified fluorescence camera.
- Item 150 The method as in any one of items 102-149, wherein the particle displays an antibody or antibody fragment, an antigen, or other proteins of interest.
- Item 151 The method as in any one of items 102-150, wherein the particle comprises an internalized bispecific or monospecific antibody-drug conjugate.
- Item 152 The method of item 151, wherein the antibody-drug conjugate is labeled with a fluorophore or dye.
- Item 153 The method as in any one of items 102-152, wherein the particle is connected to one or more other particles via a T-Cell engager.
- Item 154 The method as in any one of items 102-153, wherein the particle is a cell.
- Item 155 The method as in any one of items 102-154, wherein the particle is labeled.
- Item 156 The method of item 155, wherein the label is a protein expression label, a spatial distribution and localization label, cell debris, a cell fragment, a labeling cell, a bead, a translocation label, a fluorophore, a dye, a stain, a cell painting dye, a luminescent label, a live/dead stain, a lanthanide, a quantum dot, or a lasing particle.
- Item 157 The method of item 156, wherein the labeling cell is fluorescently labeled.
- Item 158 The method of item 157, wherein the labeling cell is a dead cell.
- Item 159 The method as in any one of items 156-158, wherein the label is not attached to the particle.
- Item 160 The method as in any one of items 102-159, wherein the particle is unlabeled.
- Item 161. The method as in any one of items 102-160, wherein the at least one characteristic is based on a cell feature comprising one or more of cell morphology, cell size, cell area, texture, cell-cell binding, or cell-cell spatial association.
- Item 162 The method of item 155, wherein the labeled particle is a fluorescently labeled cell, and the at least one characteristic is based on a fluorescent label comprising a protein expression label, a label indicating spatial distribution or localization of a molecule of interest, or a label indicating translocation of a molecule of interest.
- Item 163. The method of item 154, wherein the at least one characteristic is a second particle present within an imaged region of interest comprising the cell.
- Item 164 The method as in any one of items 102-163, wherein the particle is a multicellular spheroid.
- Item 165 The method of item 164, wherein the spheroid is unlabeled.
- Item 166 The method of item 164, wherein the spheroid is labeled.
- Item 167 The method as in any one of items 102-166, comprising: providing an impedance spectroscopy system comprising at least one pair of electrodes and electronic circuitry configured to detect electric impedance; detecting, in at least one of the main channel, the first destination channel, or the second destination channel, electric impedance; and determining a dielectric property of a particle.
- Item 168 The method of item 167, wherein an electric carrier wave frequency of a current between the two electrodes of the at least one pair is about 0.1 MHz, about 1 MHz, about 10 MHz, or over 10 MHz.
- Item 169 The method of item 167 or item 168, comprising: receiving, from the impedance spectroscopy system, electric frequency data; and processing the electric frequency data to extract a spectral fingerprint of the particle.
- Item 170 The method of item 169, comprising determining, from the spectral fingerprint, a particle type.
- Item 171. The method of item 169 or item 170, comprising determining, from the spectral fingerprint, a particle size.
- Item 172 The method as in any one of items 169-171, comprising determining, from the spectral fingerprint, particle morphology.
- Item 173. The method as in any one of items 169-172, comprising determining, from the spectral fingerprint, cell health or cell viability.
- Item 174 The method as in any one of items 169-173, comprising classifying the particle based on the spectral fingerprint.
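The fingerprint-extraction and classification steps of Items 169-174 can be sketched with a simple frequency-domain example. This is an illustrative assumption, not the claimed processing: a "spectral fingerprint" is stood in for by FFT magnitudes at the probe (carrier) frequencies, and classification by a nearest-centroid rule; all names (`spectral_fingerprint`, `classify_fingerprint`, `probe_freqs`) are hypothetical.

```python
import numpy as np

def spectral_fingerprint(signal, sample_rate, probe_freqs):
    # Magnitudes of the impedance signal at the probe carrier
    # frequencies, a simple stand-in for a spectral fingerprint.
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return np.array([spectrum[np.argmin(np.abs(freqs - f))] for f in probe_freqs])

def classify_fingerprint(fp, centroids):
    # Nearest-centroid classification of a fingerprint (Item 174).
    return min(centroids, key=lambda label: np.linalg.norm(fp - centroids[label]))
```

A machine-learning model trained on previously obtained frequency data, as recited later in this item set, would replace the nearest-centroid rule while keeping the same fingerprint-in, class-out interface.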
- Item 175. The method of item 174, wherein the image data comprises a plurality of sets of one or more images, wherein each set is associated with a corresponding set of electric frequency data.
- Item 176 The method as in any one of items 169-175, wherein processing the electric frequency data comprises executing an algorithm to extract a spectral fingerprint derived from a machine-learning model trained on previously obtained electric frequency data.
- Item 177 The method of item 176, wherein processing the image data or the electric frequency data, or both, comprises executing an algorithm to extract at least one characteristic from the digital image or a spectral fingerprint, or both, derived from an integrated machine-learning model trained on image data comprising a plurality of sets of one or more images, wherein each set is associated with a corresponding set of electric frequency data.
- Item 178. A method for handling a particle suspension in a volume of fluid, the method comprising: providing a microfluidic channel in fluid communication with a sample source, a first destination reservoir, and a second destination reservoir, the microfluidic channel comprising a main channel having an inlet disposed downstream of the sample source, and a downstream end comprising a bifurcation having a distal end in fluid communication with a first destination channel having a first destination outlet and a second destination channel having a second destination outlet, the first destination reservoir disposed downstream of the first destination outlet, and the second destination reservoir disposed downstream of the second destination outlet.
- Item 179 The method as in any one of items 102-178, wherein classifying comprises using a variational autoencoder (VAE).
- Item 180 The method as in any one of items 102-179, wherein classifying comprises using a you-only-look-once (YOLO) model.
- Item 181. The method as in any one of items 102-180, comprising hydrodynamically aligning a plurality of cells in a flow-focused stream through the microfluidic channel and classifying the plurality of cells at a processing rate of at least 100 cells per second, or between 100 and 1000 cells per second, or more than 1000 cells per second.
- Item 182 The method of item 181, wherein determining the classification is a feature-agnostic process.
- Item 183 A system for sorting particles suspended in fluid, comprising: a microfluidic device comprising a first inlet, a main channel disposed downstream from the first inlet, and a bifurcation disposed downstream from the main channel, the bifurcation coupled to a first destination channel and a second destination channel; a first imaging device configured to observe a first section of the main channel; a fluidic sorting device configured to selectively direct particles exiting the main channel from the first destination channel to the second destination channel; and a control system configured to receive data from the first imaging device and control the fluidic sorting device to actuate selective direction of particles exiting the main channel, the control system comprising: one or more processors, and a memory having instructions stored thereon that, when executed by the one or more processors, cause the one or more processors to: receive first image data from the first imaging device, assign, based on the first image data, a first particle represented in the first image data to a first group, and in response to assigning the first particle to the first group, send a first control signal to the fluidic sorting device.
- Item 184 The system of item 1, the microfluidic device further comprising: a first sorting channel coupled to a second section of the main channel downstream of the first section and upstream of the bifurcation, the first sorting channel being arranged such that when a sorting fluid is provided into the main channel via the first sorting channel, a flow of the sorting fluid will direct particles into the second destination channel.
- Item 185 The system of item 184, further comprising: a piezoelectric actuator configured to, in response to receiving the first control signal, introduce a flow of the sorting fluid into the first sorting channel.
- Item 186 The system of item 184, the microfluidic device further comprising: a second sorting channel coupled to the second section, wherein the main channel is disposed between the first sorting channel and the second sorting channel.
- Item 187 The system of item 1, the microfluidic device further comprising: a sample channel coupled to the first inlet and the main channel, the sample channel disposed downstream of the first inlet and upstream of the first section; a first lateral sheath channel coupled to the main channel and disposed upstream of the first section; and a second lateral sheath channel coupled to the main channel and disposed upstream of the first section, wherein the sample channel is disposed between the first lateral sheath channel and the second lateral sheath channel.
- Item 188 The system of item 187, further comprising: a first pump coupled to the first lateral sheath channel; and a second pump coupled to the second lateral sheath channel, wherein the instructions further cause the one or more processors to: actuate the first pump to introduce fluid into the first lateral sheath channel at a first flow rate, and actuate the second pump to introduce fluid into the second lateral sheath channel at a second flow rate different from the first flow rate.
- Item 189 The system of item 1, the microfluidic device further comprising: a sample channel coupled to the first inlet and the main channel, the sample channel disposed upstream of the first section, the sample channel having: a first central sheath inlet upstream of the first inlet, and a second central sheath inlet downstream of the first inlet.
- Item 190 The system of item 189, further comprising: a first pump coupled to the first central sheath inlet; and a second pump coupled to the second central sheath inlet, wherein the instructions further cause the one or more processors to: actuate the first pump to introduce fluid into the first central sheath inlet at a first flow rate, and actuate the second pump to introduce fluid into the second central sheath inlet at a second flow rate different from the first flow rate.
- Item 191. The system of item 1, further comprising: a second imaging device configured to observe the first section, wherein the instructions further cause the one or more processors to: receive second image data from the second imaging device, the second image data captured contemporaneously with the first image data; determine, using the first image data, a first region of interest (ROI) of the first image data corresponding to the first particle; determine, based on the first ROI, a second ROI of the second image data corresponding to the first particle; and determine, based on the second ROI, that the first particle corresponds to the first group.
- Item 192 The system of item 191, wherein: the first image data represents a brightfield image; and the second image data represents a fluorescence image.
- Item 193 The system of item 191, wherein the instructions further cause the one or more processors to, prior to receiving the first image data: receive third image data from the first imaging device, the third image data corresponding to a calibration device arranged in place of the microfluidic device; receive fourth image data from the second imaging device, the fourth image data corresponding to the calibration device and captured contemporaneously with the third image data; determine first positional data corresponding to a first plurality of particles represented in the third image data; determine second positional data corresponding to a second plurality of particles represented in the fourth image data; and determine, using the first positional data and the second positional data, a transformation matrix, wherein determining the second ROI includes applying the transformation matrix to the first ROI.
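The calibration procedure of Item 193 (matched positional data from two imaging devices yielding a transformation matrix applied to ROIs) admits a standard least-squares sketch. This is an illustrative assumption, not the claimed method: an affine 2x3 transform is fitted here, and the names `estimate_affine` and `apply_affine` are hypothetical.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    # Least-squares 2x3 affine transform mapping calibration centroids
    # seen by the first imaging device (src) onto those seen by the
    # second (dst); needs at least three non-collinear point pairs.
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)
    return M.T                                   # 2x3 transformation matrix

def apply_affine(M, pts):
    # Map first-image ROI coordinates into the second image's frame.
    A = np.hstack([pts, np.ones((len(pts), 1))])
    return A @ M.T
```

At runtime, `apply_affine` would convert the first ROI's coordinates into the second image, per the final clause of the item.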
- Item 194 The system of item 191, wherein the instructions further cause the one or more processors to: cause the first imaging device to capture the first image data using a single exposure; cause the second imaging device to capture the second image data using multiple exposures, the second ROI corresponding to a first exposure of the multiple exposures; and determine a third ROI corresponding to a second exposure of the multiple exposures.
- Item 195 The system of item 194, wherein: the first exposure corresponds to a first duration of time; and the second exposure corresponds to a second duration of time different from the first duration of time.
- Item 196 The system of item 1, wherein the instructions further cause the one or more processors to: process the first image data using a neural network trained using a first plurality of images of particles corresponding to a first phenotype and a second plurality of images of particles corresponding to a second phenotype, wherein determining to assign the first particle to the first group includes determining, using the neural network, that the first particle corresponds to the first phenotype.
- Item 197 The system of item 196, wherein the neural network is a convolutional neural network.
- Item 198 The system of item 1, wherein the instructions further cause the one or more processors to: determine, using a neural network trained using a first plurality of images of out-of-focus particles and a second plurality of images of in-focus particles, that the first image data corresponds to the second plurality of images, wherein sending the first control signal is additionally based on determining that the first image data corresponds to the second plurality of images.
- Item 199 The system of item 198, wherein the instructions further cause the one or more processors to: receive fourth image data obtained using the first imaging device and representing a second particle; determine, using the neural network, that the fourth image data corresponds to the first plurality of images; and in response to determining that the fourth image data corresponds to the first plurality of images, allow the second particle to exit the main channel into the first destination channel.
- Item 200 The system of item 1, wherein the instructions further cause the one or more processors to: process the first image data to determine a first value representing a variance of the Laplacian of the first image data; and determine that the first value satisfies a condition, wherein assigning the first particle to the first group is additionally based on determining that the first value satisfies the condition.
- Item 201 The system of item 200, wherein the instructions further cause the one or more processors to: receive second image data from the first imaging device, the second image data including a representation of a second particle; process the second image data to determine a second value representing a second variance of the Laplacian of the second image data; determine that the second value fails to satisfy the condition; and in response to determining that the second value fails to satisfy the condition, assign the second particle to a second group.
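The variance-of-the-Laplacian focus metric referenced in Item 201 is a standard sharpness measure and can be sketched directly. This is an illustrative example under stated assumptions (a 3x3 Laplacian kernel and a simple threshold condition), not the claimed implementation; `variance_of_laplacian`, `in_focus`, and `threshold` are hypothetical names.

```python
import numpy as np

LAPLACIAN = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]], dtype=float)

def variance_of_laplacian(image):
    # Convolve with a Laplacian kernel (valid region only) and take the
    # variance of the response; sharp images score higher.
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * LAPLACIAN)
    return out.var()

def in_focus(image, threshold):
    # The "condition" of Item 201, modeled as exceeding a threshold.
    return variance_of_laplacian(image) >= threshold
```

Under this sketch, a particle whose value fails the condition would be assigned to the second group, as Item 201 recites.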
- Item 202 The system of item 1, further comprising: a first pair of electrodes disposed downstream of the first inlet and upstream of the first section, the first pair of electrodes configured to detect electric impedance in a second section of the main channel upstream of the first section, wherein the instructions further cause the one or more processors to: receive a first electric signal from the first pair of electrodes, determine that the first electric signal represents a change in electric impedance indicating detection of the first particle in the second section, and in response to determining that the first electric signal indicates detection of the first particle, causing the first imaging device to capture a first image of the first section.
- Item 203 The system of item 202, further comprising: a second pair of electrodes disposed downstream of the first pair of electrodes and upstream of the first section, wherein the instructions further cause the one or more processors to: receive a second electric signal from the second pair of electrodes, and determine, based on the first electric signal and the second electric signal, a velocity of the first particle through the main channel.
- Item 204 The system of item 203, wherein the instructions further cause the one or more processors to: determine, based on the velocity of the first particle, an estimated time of arrival of the first particle in the first section, and cause the first imaging device to capture the first image at the estimated time of arrival.
- Item 205 The system of item 1, further comprising: a first pair of electrodes configured to detect electric impedance in the second destination channel, wherein the instructions further cause the one or more processors to: receive a first electric signal from the first pair of electrodes, determine that the first electric signal represents a change in electric impedance indicating detection of the first particle in the second destination channel, and determine, based on detection of the first particle in the second destination channel, that the first particle was successfully sorted into the second destination channel.
- Item 206 The system of item 1, further comprising: an impedance spectroscopy system comprising at least one pair of electrodes and electronic circuitry configured to detect electric impedance in at least one of the main channel, the first destination channel, or the second destination channel to determine a dielectric property of a particle.
- Item 207 The system of item 206, wherein the instructions further cause the one or more processors to: receive a first electric signal from the at least one pair of electrodes, the first electric signal corresponding to a second particle, and determine, using the impedance spectroscopy system and the first electric signal, that the second particle corresponds to the first group.
- Item 208 The system of item 207, wherein the at least one pair of electrodes are configured to detect the electric impedance in the main channel and the instructions further cause the one or more processors to: in response to determining that the second particle corresponds to the first group, send a second control signal to the fluidic sorting device.
- Item 209 A method comprising: receiving first image data from a first imaging device observing fluid flowing in a first channel of a microfluidic device; receiving second image data from a second imaging device observing the fluid flowing in the first channel; determining a first region of interest (ROI) of the first image data corresponding to a first particle represented in the first image data; determining, using the first ROI, a second ROI of the second image data corresponding to the first particle; determining, using the second ROI and a component configured to classify particles as corresponding to one of a first group or a second group, that the first particle corresponds to the first group; and in response to determining that the first particle corresponds to the first group, associating third image data representing the second ROI with the first group.
- The method of item 209, further comprising: receiving fourth image data representing a second particle; determining, using the fourth image data and the third image data, that the second particle corresponds to the first group; and in response to determining that the second particle corresponds to the first group, sending a control signal to a fluidic sorting mechanism configured to selectively direct particles exiting the first channel from a first destination channel of the microfluidic device to a second destination channel of the microfluidic device, wherein the control signal causes the fluidic sorting mechanism to selectively direct the second particle to the second destination channel.
- The method of item 209, further comprising: receiving, from at least one pair of electrodes configured to detect electric impedance in the first channel, a first electrical signal corresponding to the first particle; determining, using the first electric signal, a first spectral fingerprint of the first particle; and associating the first spectral fingerprint with the first group.
- The method of item 211, further comprising: receiving a second electrical signal from the at least one pair of electrodes, the at least one pair of electrodes configured to detect electric impedance in the first channel; determining, using the second electric signal, a second spectral fingerprint of a second particle; determining, using the second spectral fingerprint and the first spectral fingerprint, that the second particle corresponds to the first group; and in response to determining that the second particle corresponds to the first group, sending a control signal to a fluidic sorting mechanism configured to selectively direct particles exiting the first channel from a first destination channel of the microfluidic device to a second destination channel of the microfluidic device, wherein the control signal causes the fluidic sorting mechanism to selectively direct the second particle to the second destination channel.
- The method of item 209, further comprising: causing a first pump to introduce a sample fluid into the first channel via a sample channel coupled to the first channel, the sample fluid containing a suspension of particles; causing a second pump to introduce a first fluid into a first lateral sheath channel coupled to the first channel; and causing a third pump to introduce a second fluid into a second lateral sheath channel coupled to the first channel.
- The method of item 213, further comprising: causing the second pump to introduce the first fluid at a first flow rate, and causing the third pump to introduce the second fluid at a second flow rate different from the first flow rate.
- The method of item 209, further comprising: causing a first pump to introduce a sample fluid into a first inlet of a sample channel coupled to the first channel, the sample fluid containing a suspension of particles; causing a second pump to introduce a first fluid into the sample channel at a second inlet upstream of the first inlet; and causing a third pump to introduce a second fluid into the sample channel at a third inlet downstream of the first inlet.
- The method of item 209, further comprising: receiving third image data from the first imaging device, the third image data corresponding to a calibration device arranged in place of the microfluidic device; receiving fourth image data from the second imaging device, the fourth image data corresponding to the calibration device and captured contemporaneously with the third image data; determining first positional data corresponding to a first plurality of particles represented in the third image data; determining second positional data corresponding to a second plurality of particles represented in the fourth image data; and determining, using the first positional data and the second positional data, a transformation matrix, wherein determining the second ROI includes applying the transformation matrix to the first ROI.
- The method of item 209, further comprising: causing the first imaging device to capture the first image data using a single exposure; causing the second imaging device to capture the second image data using multiple exposures, the second ROI corresponding to a first exposure of the multiple exposures; and determining a third ROI corresponding to a second exposure of the multiple exposures.
- The method of item 209, further comprising: receiving fourth image data obtained using the second imaging device and representing a first plurality of particles corresponding to the first group, the fourth image data including the third image data; receiving fifth image data obtained using the second imaging device and representing a second plurality of particles corresponding to the second group; and training, using the fourth image data and the fifth image data, a neural network to process image data representing a particle and classify the particle as corresponding to one of the first group or the second group.
- The method of item 209, further comprising: receiving fourth image data obtained using the first imaging device and representing a first plurality of images of out-of-focus particles; receiving fifth image data obtained using the first imaging device and representing a second plurality of images of in-focus particles; and training, using the fourth image data and the fifth image data, a neural network to process image data representing a particle and classify the image data as in-focus or out-of-focus.
- The method of item 209, further comprising: determining, using a neural network trained using a first plurality of images of out-of-focus particles and a second plurality of images of in-focus particles, that the first image data corresponds to the second plurality of images, wherein associating the third image data with the first group is additionally based on determining that the first image data corresponds to the second plurality of images.
- The method of item 223, further comprising: receiving fourth image data from the first imaging device, the fourth image data representing a second particle; determining, using the neural network, that the fourth image data corresponds to the first plurality of images; and in response to determining that the fourth image data corresponds to the first plurality of images, performing at least one action including one or more of discarding the second particle or discarding image data corresponding to the second particle.
- The method of item 209, further comprising: processing the first image data to determine a first value representing a first variance of the Laplacian of the first image data; and determining that the first value satisfies a condition, wherein associating the third image data with the first group is additionally based on determining that the first value satisfies the condition.
- The method of item 225, further comprising: receiving fourth image data from the first imaging device, the fourth image data including a representation of a second particle; processing the fourth image data to determine a second value representing a second variance of the Laplacian of the fourth image data; determining that the second value fails to satisfy the condition; and in response to determining that the second value fails to satisfy the condition, performing at least one action including one or more of discarding the second particle or discarding image data corresponding to the second particle.
- The method of item 209, further comprising: in response to determining that the first particle corresponds to the first group, applying a pulsed electric field across the first channel to alter the first particle.
- Item 228 A microfluidic device comprising: a first inlet; a main channel downstream of the first inlet, the main channel having a first section and a second section downstream of the first section; a bifurcation downstream of the main channel, the bifurcation splitting the main channel into a first destination channel and a second destination channel; and a first sorting channel coupled to the second section, the first sorting channel being arranged such that when a sorting fluid is provided into the main channel via the first sorting channel, a flow of the sorting fluid will direct particles exiting the main channel from the first destination channel to the second destination channel.
- The microfluidic device of item 228, further comprising: a second sorting channel coupled to the second section, wherein the main channel is disposed between the first sorting channel and the second sorting channel.
- The microfluidic device of item 228, further comprising: a destination reservoir coupled to the second destination channel and disposed downstream of the bifurcation, the destination reservoir configured to collect particles of interest.
- The microfluidic device of item 228, further comprising: a sample channel coupled to the first inlet and the main channel, the sample channel disposed downstream of the first inlet and upstream of the first section; a first lateral sheath channel coupled to the main channel and disposed upstream of the first section; and a second lateral sheath channel coupled to the main channel and disposed upstream of the first section, wherein the first lateral sheath channel and the second lateral sheath channel are coplanar, and the sample channel is disposed between the first lateral sheath channel and the second lateral sheath channel.
- The microfluidic device of item 228, further comprising: a sample channel coupled to the first inlet and the main channel, the sample channel disposed upstream of the first section, the sample channel having: a first central sheath inlet upstream of the first inlet, and a second central sheath inlet downstream of the first inlet.
- The microfluidic device of item 228, further comprising: at least one pair of electrodes disposed downstream of the first inlet and upstream of the first section, the at least one pair of electrodes configured to detect electric impedance in a second section of the main channel upstream of the first section.
- The microfluidic device of item 228, further comprising: at least one pair of electrodes disposed downstream of the bifurcation and configured to detect electric impedance in the second destination channel.
- The microfluidic device of item 228, further comprising: at least one pair of electrodes disposed downstream of the bifurcation and configured to detect electric impedance in the first destination channel.
- Item 238 A method comprising: receiving first image data from a first imaging device observing fluid flowing in a first channel of a microfluidic device, the microfluidic device including a bifurcation that splits the first channel into a first destination channel and a second destination channel; assigning, using a classifier configured to classify particles into one of a first group or a second group, a first particle represented in the first image data to the first group; and in response to assigning the first particle to the first group, controlling a sorting mechanism to direct the first particle to the second destination channel.
- The method of item 238, further comprising: receiving second image data from a second imaging device, the first imaging device having a first field of view and the second imaging device having a second field of view at least partially overlapping the first field of view; determining, using the first image data, a first region of interest of the first image data corresponding to the first particle; determining, based on the first region of interest, a second region of interest of the second image data corresponding to the first particle; and determining, based on the second region of interest, that the first particle corresponds to the first group.
- The method of item 239, further comprising: receiving second image data representing a first plurality of images using the first imaging device; receiving third image data representing a second plurality of images using the second imaging device, the second plurality of images temporally corresponding to the first plurality of images; determining first positional data corresponding to a first plurality of particles represented in the first plurality of images; determining second positional data corresponding to a second plurality of particles represented in the second plurality of images; determining a correspondence between a first subset of the first plurality of particles and a second subset of the second plurality of particles; and determining, based on the first positional data, the second positional data, and the correspondence between the first subset and the second subset, an offset between the first positional data and the second positional data, wherein determining the second region of interest is additionally based on the offset.
- The method of item 239, further comprising: causing the first imaging device to capture the first image data using a single exposure; causing the second imaging device to capture the second image data using multiple exposures, the second region of interest corresponding to a first exposure of the multiple exposures; and determining a third region of interest corresponding to a second exposure of the multiple exposures.
- The method of item 238, further comprising: processing the first image data using a neural network trained using a first plurality of images of particles corresponding to the first group and a second plurality of images of particles corresponding to the second group, wherein assigning the first particle to the first group includes determining, using the neural network, that the first particle corresponds to the first plurality of images.
- The method of item 243, wherein: the neural network is additionally trained using a third plurality of images of out-of-focus particles and a fourth plurality of images of in-focus particles; and assigning the first particle to the first group additionally includes determining, using the neural network, that the first image data corresponds to the fourth plurality of images.
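The two-electrode timing scheme recited in items 202 through 204 — detect a particle at an upstream and a downstream electrode pair, derive its velocity from the two detection times, and trigger the camera at the particle's estimated arrival in the imaging section — can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the claimed implementation; the function name, arguments, and units are all hypothetical.

```python
def estimate_arrival_time(t1: float, t2: float, electrode_spacing: float,
                          distance_to_imaging: float) -> float:
    """Estimate when a particle will reach the imaging section.

    t1, t2: timestamps (s) of the impedance change at the upstream and
    downstream electrode pairs; electrode_spacing and distance_to_imaging
    (from the downstream pair to the imaging section) share a length unit.
    """
    dt = t2 - t1
    if dt <= 0:
        raise ValueError("downstream detection must occur after upstream detection")
    velocity = electrode_spacing / dt            # particle velocity in the channel
    return t2 + distance_to_imaging / velocity   # camera trigger time

# Example: electrode pairs 100 um apart, imaging section 500 um downstream.
eta = estimate_arrival_time(t1=0.000, t2=0.001,
                            electrode_spacing=100.0,
                            distance_to_imaging=500.0)
# velocity = 100/0.001 = 100000 um/s, so eta = 0.001 + 0.005 = 0.006 s
```

In practice the trigger time would also need to absorb camera and sorting-actuator latency, which the claims leave to the implementation.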
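The calibration claims (determining a transformation matrix from positional data of particles seen by two imaging devices, then mapping an ROI from the first device onto the second) amount to fitting a transform by least squares. A minimal NumPy sketch, assuming an affine model and using synthetic points in place of detected calibration particles:

```python
import numpy as np

def fit_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Fit a 2x3 affine transform mapping src (N,2) points to dst (N,2)
    by least squares; needs at least three non-collinear point pairs."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])             # rows are [x, y, 1]
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) solution
    return coeffs.T                                   # 2x3 matrix [A | t]

def apply_affine(M: np.ndarray, pts: np.ndarray) -> np.ndarray:
    """Map (N,2) points through a 2x3 affine matrix."""
    return pts @ M[:, :2].T + M[:, 2]

# Synthetic calibration: second camera view shifted by (5, -3) pixels.
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
dst = src + np.array([5.0, -3.0])
M = fit_affine(src, dst)
mapped = apply_affine(M, np.array([[2.0, 2.0]]))  # ROI corner in camera 2
```

Item 240's simpler "offset" variant corresponds to fixing the linear part to the identity and fitting only the translation term.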
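The focus-gating items compute a value representing the variance of the Laplacian of the image data and test it against a condition. A self-contained NumPy sketch of that focus measure follows; the 3x3 kernel and threshold-style check are conventional choices for illustration, not details taken from the application:

```python
import numpy as np

LAPLACIAN_KERNEL = np.array([[0, 1, 0],
                             [1, -4, 1],
                             [0, 1, 0]], dtype=float)

def variance_of_laplacian(image: np.ndarray) -> float:
    """Focus measure: variance of the Laplacian of a grayscale image.
    Sharp images have strong edge responses and a high variance;
    blurred or defocused images score low."""
    img = image.astype(float)
    h, w = img.shape
    lap = np.zeros((h - 2, w - 2))
    # Valid-mode 3x3 convolution with the discrete Laplacian kernel.
    for dy in range(3):
        for dx in range(3):
            lap += LAPLACIAN_KERNEL[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2]
    return float(lap.var())

def is_in_focus(image: np.ndarray, threshold: float) -> bool:
    """One possible 'condition': value meets or exceeds a threshold."""
    return variance_of_laplacian(image) >= threshold

# A checkerboard (sharp edges) scores far higher than a flat image.
sharp = np.indices((16, 16)).sum(axis=0) % 2 * 255.0
flat = np.full((16, 16), 128.0)
```

A flat image has a Laplacian of zero everywhere, so its variance is exactly zero — the kind of frame items 201 and 226 would discard.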
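The impedance-spectroscopy items associate a spectral fingerprint with a group and later match a new particle's fingerprint against it. One simple reading of that matching step is a nearest-reference comparison; the Euclidean metric, threshold, and function name below are illustrative assumptions rather than the claimed method:

```python
import numpy as np

def match_group(fingerprint, reference_fingerprints, threshold: float) -> bool:
    """Return True if the particle's impedance spectral fingerprint lies
    within `threshold` (Euclidean distance) of any stored reference
    fingerprint previously associated with the group."""
    refs = np.asarray(reference_fingerprints, dtype=float)
    fp = np.asarray(fingerprint, dtype=float)
    distances = np.linalg.norm(refs - fp, axis=1)  # distance to each reference
    return bool(distances.min() <= threshold)

# Hypothetical stored fingerprints (e.g. impedance magnitudes at 3 frequencies).
references = [[1.0, 2.0, 3.0], [10.0, 10.0, 10.0]]
close = match_group([1.1, 2.0, 3.0], references, threshold=0.5)   # True
far = match_group([5.0, 5.0, 5.0], references, threshold=0.5)     # False
```

A match would then drive the same control-signal path as the image-based classification, per item 212.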
Landscapes
- Chemical & Material Sciences (AREA)
- Dispersion Chemistry (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Investigating Or Analysing Biological Materials (AREA)
Abstract
This application relates to systems, methods, and devices for observing and/or manipulating a particle suspended in a fluid. The system may include a control system that receives data from an imaging device and/or an impedance spectroscopy system. The control system may process the data to determine whether a particle in the suspension corresponds to a group and/or a phenotype. In some embodiments, the system may include a sorting mechanism configured to selectively direct a particle to a destination. In some embodiments, a microfluidic device may include a main channel upstream of a bifurcation coupled to a first destination channel and a second destination channel. The microfluidic device may include a sorting channel arranged such that, when a sorting fluid is supplied into the main channel via the sorting channel, the flow of the sorting fluid directs a particle exiting the main channel from the first destination channel to the second destination channel.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202363599016P | 2023-11-15 | 2023-11-15 | |
| US63/599,016 | 2023-11-15 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025106429A1 (fr) | 2025-05-22 |
Family
ID=93656194
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2024/055512 (pending; published as WO2025106429A1) | Image-based cell sorting systems and methods | 2023-11-15 | 2024-11-12 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025106429A1 (fr) |
Citations (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20120204628A1 (en) * | 2010-11-16 | 2012-08-16 | 1087 Systems, Inc. | Use of vibrational spectroscopy for dna content inspection |
| US20150111196A1 (en) * | 2013-07-16 | 2015-04-23 | Premium Genetics (Uk) Ltd. | Microfluidic chip |
| US20150268244A1 (en) * | 2012-10-15 | 2015-09-24 | Nanocellect Biomedical, Inc. | Systems, apparatus, and methods for sorting particles |
| JP2017116558A (ja) * | 2014-08-28 | 2017-06-29 | Sysmex Corporation | Particle imaging apparatus and particle imaging method |
| WO2018148194A1 (fr) * | 2017-02-07 | 2018-08-16 | Nodexus Inc. | Microfluidic system with combined electrical and optical detection for high-precision particle sorting, and associated methods |
| EP3708674A1 (fr) * | 2010-11-16 | 2020-09-16 | 1087 Systems, Inc. | System for identifying and sorting living cells |
| EP3180738B1 (fr) * | 2014-08-15 | 2020-11-25 | IMEC vzw | System and method for cell recognition |
2024
- 2024-11-12 WO PCT/US2024/055512 patent/WO2025106429A1/fr active Pending
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11668641B2 (en) | Image-based cell sorting systems and methods | |
| US11059044B2 (en) | Microfluidic determination of low abundance events | |
| Heo et al. | Real-time image processing for microscopy-based label-free imaging flow cytometry in a microfluidic chip | |
| JP7557067B2 | Classification workflow for flexible image-based particle sorting | |
| US20250069384A1 (en) | Droplet processing methods and systems | |
| JP7489968B2 | Cell sorting apparatus and method | |
| JP2006517292A | Multiparametric cell identification and sorting method, and corresponding apparatus | |
| JP7700220B2 | Framework for image-based unsupervised cell clustering and sorting | |
| WO2025106429A1 | Image-based cell sorting systems and methods | |
| WO2018215624A1 | Method for image-based flow cytometry and cell sorting using subcellular colocalization of proteins within cells as a sorting parameter | |
| KR102785966B1 | Image-based microfluidic cell sorter and microfluidic cell sorting method using a deep learning algorithm | |
| US20250299339A1 (en) | Automatic annotation of event types in iacs workflow | |
| US20250299338A1 (en) | Extension of iacs framework to secretome applications | |
| JP7596534B2 | Efficient and robust fast neural network for cell image classification | |
| US20250299326A1 (en) | Cell enumeration module for secretion sorting | |
| HK40076939A (en) | Classification workflow for flexible image based particle sorting | |
| WO2025196570A1 | Automatic annotation of event types in IACS workflow | |
| WO2025196569A1 | Extension of IACS framework to secretome applications | |
| JP2025537381A | Image-based unsupervised multi-model cell clustering | |
| CN117581087A | Biological sample analysis system, information processing device, information processing method, and biological sample analysis method | |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 24813319; Country of ref document: EP; Kind code of ref document: A1 |