WO2025165595A1 - Depth-assisted sample container characterization - Google Patents
Depth-assisted sample container characterization
- Publication number
- WO2025165595A1 (PCT/US2025/012279)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sample container
- sample
- images
- physical characteristics
- imaging sensor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/02—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
- G01N35/04—Details of the conveyor system
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
- G01N2035/00742—Type of codes
- G01N2035/00752—Type of codes bar codes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
- G01N2035/00742—Type of codes
- G01N2035/00772—Type of codes mechanical or optical code other than bar code
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
- G01N2035/00742—Type of codes
- G01N2035/00782—Type of codes reprogrammable code
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
- G01N2035/00821—Identification of carriers, materials or components in automatic analysers nature of coded information
- G01N2035/00831—Identification of carriers, materials or components in automatic analysers nature of coded information identification of the sample, e.g. patient identity, place of sampling
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/00584—Control arrangements for automatic analysers
- G01N35/00722—Communications; Identification
- G01N35/00732—Identification of carriers, materials or components in automatic analysers
- G01N2035/00821—Identification of carriers, materials or components in automatic analysers nature of coded information
- G01N2035/00851—Identification of carriers, materials or components in automatic analysers nature of coded information process control parameters
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/02—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
- G01N35/04—Details of the conveyor system
- G01N2035/0401—Sample carriers, cuvettes or reaction vessels
- G01N2035/0406—Individual bottles or tubes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N35/00—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor
- G01N35/02—Automatic analysis not limited to methods or materials provided for in any single one of groups G01N1/00 - G01N33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
- G01N35/04—Details of the conveyor system
- G01N2035/0401—Sample carriers, cuvettes or reaction vessels
- G01N2035/0406—Individual bottles or tubes
- G01N2035/041—Individual bottles or tubes lifting items out of a rack for access
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
Definitions
- This disclosure relates to automated diagnostic analysis systems and methods.
- automated diagnostic analysis systems may be used to analyze a biological sample to identify an analyte or other constituent in the sample.
- the biological sample may be, e.g., urine, whole blood, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, and the like.
- sample containers e.g., test tubes, vials, etc.
- sample transport system comprising automated tracks to and from various modules.
- the various modules may perform, e.g., sample container handling, sample pre-processing, sample analysis, and sample post-processing within the automated diagnostic analysis system.
- the number of sample carriers present in an automated diagnostic analysis system at any one time may be hundreds or even thousands.
- Sample containers are usually arranged in one or more sample container holders that are received at an input module of an automated diagnostic analysis system. Sample containers may not all be the same size and type. Sample containers are usually therefore first “characterized” in an automated diagnostic analysis system to determine their physical features (e.g., container type, height, diameter, position and/or orientation within the sample container holder, type and/or color of the sample container cap, a sample’s volume or height within the sample container, barcode/label position and/or condition thereof on the sample container, etc.). Sample container characterization usually involves capturing and processing one or more images of each sample container.
- sample container characterization in some known automated diagnostic analysis systems may have one or more of the following disadvantages: inaccurate characterization because the sample container images may not have captured sufficient content, delayed sample container processing because each sample container may need to be individually and/or sequentially imaged, and/or increased system floorspace dedicated to sample container imaging apparatus, such as, e.g., one or more quality check modules each connected to the sample transport system and each having an enclosure with multiple imaging sensors and lighting apparatus.
- an automated diagnostic analysis system includes an input module operative to receive a sample container holder including one or more sample containers.
- the input module comprises an imaging sensor that is positioned within the input module to capture one or more images of the one or more sample containers at a tilted angle relative to a horizontal plane.
- the system also includes a computer processor and a trained machine learning model executable thereon operative to: (1) capture one or more images of the one or more sample containers with the imaging sensor at the tilted angle; (2) analyze the captured one or more images, including inferring or refining three-dimensional depth information from the captured one or more images, to determine physical characteristics of each sample container; and (3) direct one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
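- The capture-analyze-direct loop above can be summarized in pseudocode. A minimal sketch follows; the `sensor`, `model`, and `robot` objects and their method names (`capture`, `characterize`, `pick_and_place`) are hypothetical stand-ins for illustration, not interfaces defined by this disclosure:

```python
def process_holder(sensor, model, robot):
    """Hedged sketch of the three-step flow: capture tilted-angle images,
    infer depth-assisted characteristics, then direct downstream actions.
    All object interfaces here are hypothetical stand-ins."""
    images = sensor.capture()              # (1) tilted-angle image(s) of all tubes
    traits = model.characterize(images)    # (2) per-container physical characteristics
    for tube_id, t in traits.items():      # (3) direct respective actions
        robot.pick_and_place(tube_id, height=t["height"], diameter=t["diameter"])
```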
- a method of operating an automated diagnostic analysis system includes capturing with an imaging sensor one or more images of one or more sample containers at a tilted angle relative to a horizontal plane.
- the one or more sample containers are held in a sample container holder received in an input module of the automated diagnostic analysis system.
- the method also includes analyzing, via a computer processor executing a trained machine learning model, the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container.
- the method further includes directing, via the computer processor or another computer processor, one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
- FIG. 1 illustrates a top schematic view of an automated diagnostic analysis system configured to perform one or more biological sample analyses according to embodiments provided herein.
- FIG. 2 illustrates a side view of a sample container and related physical characteristics thereof according to embodiments provided herein.
- FIG. 3 illustrates a side view of the sample container of FIG. 2 loaded into a sample carrier of the automated diagnostic analysis system of FIG. 1 according to embodiments provided herein.
- FIG. 4 illustrates a more detailed top schematic view of an input module of FIG. 1 according to embodiments provided herein.
- FIG. 5 illustrates a side schematic view of an input module robot and imaging sensor assembly according to embodiments provided herein.
- FIG. 6 illustrates a top perspective view of a sample container holder and sample containers held therein having an imaging sensor of an input module movable thereabout to various viewpoints via an arced path according to embodiments provided herein.
- FIG. 7 illustrates a block diagram of a depth-assisted sample container characterization process according to embodiments provided herein.
- FIG. 8 illustrates a synthetic image of sample containers having depth and center annotations generated by computer-generated rendering according to embodiments provided herein.
- FIG. 9 illustrates an embodiment of a sample container characterization process according to embodiments provided herein.
- FIG. 10 illustrates a flowchart of a method of operating an automated diagnostic analysis system according to embodiments provided herein.
- Automated diagnostic analysis systems may include a large number of sample carriers each carrying a sample container therein.
- Each sample container may include a biological sample to be analyzed.
- the biological sample may be, e.g., urine, whole blood, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, and the like.
- Automated diagnostic analysis systems may also include a sample transport system for transporting the sample carriers throughout the system via an automated track.
- Automated diagnostic analysis systems may further include a number of modules for performing sample container handling, sample pre-processing, sample analysis, and sample post-processing. Each of the modules is connected to the sample transport system for receiving and returning sample containers via the sample carriers.
- One of the modules included in an automated diagnostic analysis system is an input module.
- the input module is configured to receive a plurality of sample containers that are to be processed by the system.
- the plurality of sample containers may be held in one or more sample container holders wherein the sample containers may be arranged in an array of rows and columns.
- the sample container holders are typically loaded manually into the input module.
- sample containers held in a sample container holder may be of different size and type.
- Sample containers are usually therefore first “characterized” in an automated diagnostic analysis system to determine their physical features (e.g., container type, height, diameter, position and/or orientation within a slot of the sample container holder, type and/or color of the sample container cap, a sample’s volume or height within the sample container, barcode/label position and/or condition thereof on the sample container, etc.).
- an automated diagnostic analysis system can more accurately and safely handle each sample container and the biological sample contained therein. For example, by characterizing a sample container’s type, height, diameter, and position and/or orientation within a slot of a sample container holder, a robot of an input module can be accurately aligned and maneuvered to grasp a sample container from the sample container holder and move it to a sample carrier received at the input module without damaging or mishandling the sample container. Similarly, e.g., by characterizing the type of cap on a sample container, a decapper module can be directed to correctly remove that particular cap from the sample container to provide access to the biological sample therein without damaging the sample container or cap and/or spilling the sample.
- characterizing a biological sample's height or volume within a sample container allows the automated diagnostic analysis system to first determine whether a sufficient amount of the biological sample is present in the sample container and then, if there is a sufficient amount, accurately direct an aspiration probe into the sample container to remove a specific amount of the biological sample without the aspiration probe becoming contaminated and/or aspirating air or any unintended portion of the biological sample.
- characterizing a barcode/label position and/or condition thereof on a sample container allows the automated diagnostic analysis system to identify problematic barcode/labels (e.g., unreadable) and to divert the sample container offline to address the issue.
- Sample container characterization usually involves capturing and processing one or more images of each sample container.
- a plurality of sample containers held in a sample container holder received in an input module may be imaged together from a top view by an imaging sensor positioned directly above the sample container holder.
- large portions of the sample containers below their caps may be self-occluded and/or not visible in the top-view images, thus limiting the amount of characterization information that can be obtained.
- sample containers may need to be transported via respective sample carriers to a quality check module, where they are individually imaged one at a time from one or more side views by one or more imaging sensors.
- sample processing may be delayed as each sample container waits in turn to be imaged.
- Automated diagnostic analysis systems may advantageously improve sample container characterization and system performance (e.g., system throughput - that is, the number of sample containers processed per hour, per shift, per day, etc.) by imaging together all or most of the sample containers held in a sample container holder received in an input module.
- the sample containers are imaged with an imaging sensor positioned at a tilted angle relative to the plurality of sample containers (e.g., relative to a horizontal plane).
- the imaging sensor may be a digital camera and, more particularly, may be a color or a color-plus-depth imaging sensor.
- the imaging sensor may be an RGB (red-green-blue) or an RGB-D (red-green-blue plus depth) imaging sensor.
- the captured image(s) may be analyzed via a computer processor executing a trained machine learning model that, among other things, infers or refines three-dimensional depth information from the captured image(s) to determine more precisely physical characteristics of each sample container.
- depth may be calculated by conventional multi-view stereo imaging or may be acquired by a conventional depth imaging sensor
- such depth information may not represent the true depth of the sample containers well, because the sample containers of automated diagnostic analysis systems are mostly transparent or semi-transparent, which can adversely affect conventionally obtained depth information.
- Embodiments of the trained machine learning model described herein are operative to perform more precise sample container characterization by inferring or refining the true depth of a sample container from captured image data and optional noisy depth data acquired by a conventional depth imaging sensor.
- FIG. 1 illustrates an automated diagnostic analysis system 100 configured to automatically analyze biological samples according to one or more embodiments.
- Automated diagnostic analysis system 100 may include a plurality of sample carriers 102 (only three labeled in FIG. 1 to maintain clarity), a sample transport system 104 that includes an automated track 105 and track sensors 105-S (only three labeled), a plurality of modules M0-M5, and a system controller 106. Automated diagnostic analysis system 100 may include more or fewer modules and/or other components. (Note that modules M0-M5, while illustrated as all having the same size and shape, are not limited to all having the same size and/or shape.)
- Modules M0-M5 may each be configured to perform one or more actions on a sample container or a biological sample contained in the sample container.
- one or more modules M0-M5 may be configured to perform sample container handling, sample preprocessing, sample analysis, or sample post-processing.
- module M0 may be an input module including an input module controller 108.
- Module M1 may be a decapper module
- module M2 may be a centrifuge
- module M3 may be a chemistry analyzer
- module M4 may be an immunoassay analyzer
- module M5 may be a sealer module.
- Modules M1-M5 may each include a respective module controller (not shown) and may perform different functions in other embodiments.
- FIG. 2 illustrates a sample container 203 according to one or more embodiments.
- Sample container 203 may include a cap 203C, a tubular body 203T, and a label 203L.
- Tubular body 203T has a height HT measured from the bottom-most part of tubular body 203T to the bottom of cap 203C.
- Tubular body 203T also has a container wall thickness TW, an outer diameter or width W, and an inner diameter or width Wl.
- Cap 203C may have different shapes and/or colors (e.g., red, royal blue, light blue, green, grey, tan, yellow, etc., or combinations thereof), which may indicate what sample analysis(es) the sample container is to be used for, the type of additive contained therein (e.g., a reagent or diluent), the type of cap and how to remove it, and/or the like. Characterization of cap 203C may facilitate cap removal and safe handling of the sample container. In some embodiments, characterization of cap 203C may be used to check that the correct type of sample container 203 has been used for the analysis to be performed on the biological sample contained therein.
- Label 203L is attached to tubular body 203T and may include identification information 203I (e.g., indicia) thereon in the form of, e.g., a barcode, alphabetic characters, numeric characters, an RF (radio frequency) ID tag, or combinations thereof.
- the identification information 203I may be machine readable at various locations within automated diagnostic analysis system 100, such as, e.g., at each of modules M0-M5 (which may include a scanning/imaging apparatus) and at various locations around automated track 105 such as where sensors 105-S are located.
- the identification information 203I may indicate a sample container identification number (to be correlated by system controller 106 with analysis instructions residing in, e.g., a database) or may directly include information regarding one or more analyses to be performed by the automated diagnostic analysis system on the biological sample contained therein.
- a biological sample 212 to be analyzed may be contained in sample container 203.
- the biological sample may be, e.g., urine, whole blood, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, or the like.
- biological sample 212 may include a blood serum or plasma portion 212SP and a settled blood portion 212SB.
- Air 212A may be present above the blood serum and plasma portion 212SP and the line of demarcation between air 212A and the blood serum and plasma portion 212SP is referred to as the liquid-air interface LA.
- the line of demarcation between the blood serum or plasma portion 212SP and the settled blood portion 212SB is referred to as the serum-blood interface SB.
- the interface between air 212A and cap 203C is referred to as the tube-cap interface TC.
- the height HSP of the blood serum or plasma portion 212SP is measured from the top of the settled blood portion 212SB to the top of the blood serum or plasma portion 212SP (i.e., from SB to LA).
- the height HSB of the settled blood portion 212SB is measured from the bottom-most part of tubular body 203T to the top of the settled blood portion 212SB at the serum-blood interface SB.
- Total sample height HTOT of biological sample 212 in sample container 203 is HSP + HSB.
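- As a worked example of how these heights translate into volumes, the serum/plasma volume can be approximated as a cylinder of height HSP and inner diameter Wi. This is a hedged simplification (it ignores tube-bottom curvature), not a formula stated in this disclosure:

```python
import math

def serum_volume_ml(h_sp_mm: float, wi_mm: float) -> float:
    """Approximate the serum/plasma volume as a cylinder of height HSP and
    inner diameter Wi (1 mL = 1000 mm^3)."""
    return math.pi * (wi_mm / 2.0) ** 2 * h_sp_mm / 1000.0

# e.g., HSP = 30 mm in a tube with Wi = 12 mm gives about 3.4 mL
print(round(serum_volume_ml(30.0, 12.0), 2))  # 3.39
```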
- FIG. 3 illustrates a sample container 203 loaded into a sample carrier 302, which is an embodiment of sample carrier 102.
- sample carrier 302 may be a passive, non-motorized puck configured to carry a single sample container 203 on automated track 105 of sample transport system 104 (via, e.g., a magnet in sample carrier 302).
- sample carrier 302 may be an automated carrier including an onboard drive motor, such as a linear motor, that is programmed via system controller 106 or input module controller 108 to move about the track and stop at pre-programmed locations (e.g., one or more of modules M0-M5).
- Sample carrier 302 may include a holder 302H configured to hold sample container 203 in a defined upright position and orientation. Holder 302H may include a plurality of fingers or leaf springs that secure sample container 203 in and on sample carrier 302, wherein some fingers or leaf springs may be moveable or flexible to accommodate different sizes of sample containers. Sample carrier 302 may also include a transceiver 310 for communicating with system controller 106, input module controller 108, and other components in system 100. Sample carrier 302 may further include one or more sensors 302-S, which in some embodiments may be a camera and/or a collision or position sensor. Other types of sensors may be included. Sample carrier 302 may be of other types and/or configurations, and system 100 may include multiple types or configurations of sample carriers.
- sample transport system 104 may be configured to transport sample containers to and from each of modules M0-M5 via respective sample carriers 102 and track 105.
- Track 105 may include multiple interconnected sections configured to allow unidirectional or bidirectional sample container transport.
- Track 105 may be a railed track (e.g., a monorail or multi-rail), a collection of conveyor belts, conveyor chains, moveable platforms, or any other suitable type of conveyance mechanism.
- Track 105 may be circular, oval, or any other suitable shape or configuration and combinations thereof and, in some embodiments, may be a closed track.
- System controller 106 may be in communication either directly via wired and/or wireless connections as shown or via a network 114 with each of sample carriers 102, sample transport system 104, and modules M0-M5, each of which includes suitable communications apparatus (e.g., transceivers).
- Network 114 may be, e.g., a local area network (LAN), wide area network (WAN), or other suitable communication network, including wired and wireless networks.
- System controller 106 may be housed as part of automated diagnostic analysis system 100 or may be remote therefrom.
- System controller 106 may be in communication with one or more databases or like sources, represented in FIG. 1 as a laboratory information system (LIS) 116 for receiving sample information including, e.g., one or more of patient information, analyses to be performed on each sample, time and date each sample was obtained, medical facility information, tracking and routing information, and/or any other information relevant to the samples to be analyzed.
- System controller 106 may include a user interface 118, which may include a display, to enable a user to access a variety of control and status display screens and to enter commands and/or data into system controller 106.
- System controller 106 may also include a computer processor 106P, memory 106M, and programming instructions 106PI (e.g., software, programs, algorithms, and the like). Programming instructions 106PI may be stored in memory 106M and executed by processor 106P. A workflow planning (WFP) algorithm 106WFP also may be stored in memory 106M and executed by processor 106P. Memory 106M may further have one or more artificial intelligence (AI) and/or machine learning algorithms and/or trained machine learning models stored therein to perform or facilitate various pre- and post-processing actions and/or sample analyses. System controller 106 may alternatively or additionally include other processing devices/circuits (including microprocessors, A/D converters, amplifiers, filters, etc.), transceivers, interfaces, device drivers, and/or other electronics.
- System controller 106 may be configured to operate and/or control the various components of system 100, including sample carriers 102, sample transport system 104, and modules M0-M5 via communication therewith. In particular, e.g., system controller 106 may control movement of each sample carrier 102 to and from any of modules M0-M5 and to and from any other components (not shown) in system 100 via sample transport system 104. System controller 106 may plan the workflow of system 100 based on information received from, e.g., LIS 116, user interface 118, and/or information obtained from scanned or imaged sample container indicia (e.g., identification information 203I).
- system controller 106 may be operative to schedule and direct one or more analyses of each sample contained in a respective sample container 102 to be performed at one or more of modules M0-M5 and, in some cases, to be performed pursuant to a particular time schedule via WFP algorithm 106WFP.
- FIG. 4 illustrates a more detailed view of input module M0 of FIG. 1 according to one or more embodiments.
- Input module M0 may be configured to receive one or more sample container holders 420A, 420B each having sample containers 403 (some labelled) held therein, one or more of which may be identical or similar to sample container 203.
- Input module M0 may also include a sensor (not shown in FIG. 4) and a robot 422, wherein robot 422 may be directed to grasp each sample container 403 and move it from sample container holder 420A or 420B to an empty sample carrier 102B received at input module M0 via a track segment 405B.
- the sensor may be an imaging sensor as described herein to characterize sample containers 403 held in sample container holders 420A, 420B and to facilitate operation of robot 422 based on that characterization.
- a sample carrier 102A loaded with a sample container 403 from input module M0 may exit input module M0 via track segment 405A.
- sample carriers 102 are not limited to the directions of travel as described herein for sample carriers 102A and 102B.
- input module M0 may be an input/output module (IOM) wherein sample containers to be processed may be received and loaded into sample carriers 102 and, after processing, may be returned to the IOM and unloaded from sample carriers 102 back into sample container holders 420A, 420B for removal from the IOM (and automated diagnostic analysis system 100).
- input module M0 may be a bulk input module (BIM) or a refrigeration/storage module (RSM).
- input module controller 108 may include a computer processor 108P, memory 108M, and programming instructions 108PI (e.g., one or more software programs, AI or machine learning algorithms, or the like) that are stored in memory 108M and executable by computer processor 108P.
- programming instructions 108PI may include a machine learning model trained to perform sample container characterization, as described in more detail below.
- Input module controller 108 may also include a user interface 124, which may include a display, to enable a user to access one or more control and status display screens and to enter commands and/or data into input module controller 108.
- Input module controller 108 may further include suitable communications apparatus (e.g., transceivers) to communicate either directly via wired and/or wireless connections or via network 114 with system controller 106, sample carriers 102, sample transport system 104, and modules M1-M5.
- FIG. 5 illustrates an example input module robot and imaging sensor assembly 500 that may be included in input module M0 according to one or more embodiments.
- Assembly 500 may be controlled by input module controller 108. In other embodiments, assembly 500 may be controlled directly by system controller 106 (of FIG. 1) or another (e.g., remote) controller.
- Assembly 500 includes a robot 522 and an imaging sensor 526 attached to robot 522.
- Robot 522 is operative to grasp and transfer sample containers 503 from/to a sample container holder 520 and to/from a sample carrier 102 or 302.
- Robot 522 includes a gripper 528 operative to move in three dimensions (e.g., X, Y, and Z or R, θ, and Z).
- Gripper 528 is coupled to a telescoping arm 530 movable in horizontal directions (-/+ X) as shown via a translational motor 530M.
- Telescoping arm 530 is attached to an upright portion 532, which is movable in vertical directions (-/+ Y) as shown via a vertical motor 532M.
- Telescoping arm 530 is also capable of rotating about upright portion 532 in angular directions (+/- θ) via a rotational motor 532R.
- Upright portion 532 may be mounted to a frame 534 of an input module.
- Gripper 528 may include two gripper fingers 528A, 528B that may be driven open and closed by an actuation mechanism 528M.
- a rotary actuator 528R is operative to rotate gripper fingers 528A, 528B in angular directions (+/- θ2) about axis 536 to any prescribed rotational position/orientation.
- Robot 522 may be any suitable robot capable of moving a sample container received at an input module to/from a sample carrier also received at the input module.
- Imaging sensor 526 may include a digital camera 526C and, more particularly in some embodiments, may include a color imaging sensor, such as, e.g., an RGB (red-green-blue) imaging sensor, or a color-plus-depth imaging sensor, such as, e.g., an RGB-D (red-green-blue plus depth) imaging sensor, which may provide depth and color via per-pixel depth information aligned with corresponding image pixels.
- Digital camera 526C may be mounted to and movable via a support structure such as, e.g., a vertical support 526S attached to telescoping arm 530.
- digital camera 526C may be movable via vertical support 526S and telescoping arm 530 (neither shown in FIG. 6) along an arced path 640 of sample container holder viewpoints 600 about sample container holder 520 at which digital camera 526C may be positioned to capture images of sample containers 503 according to one or more embodiments.
- digital camera 526C may be positioned at one or more of viewpoints VP1, VP2, and/or VP3 to capture one or more images of sample containers 503 at each viewpoint.
- imaging sensor 526 is operative to image sample containers 503 at a tilted angle TA as measured downward from the horizontal (e.g., horizontal plane H).
- the tilted angle TA is greater than 0 degrees (i.e., imaging sensor 526 is not aimed at sample containers 503 horizontally from a side) and is less than 90 degrees (i.e., imaging sensor 526 is not aimed at sample containers 503 vertically downward from the top).
- the height of imaging sensor 526 above sample container holder 520, which is controlled by vertical motor 532M, and the tilted angle TA are chosen such that each sample container 503 (or, in some embodiments, most sample containers 503) in sample container holder 520 is captured in a single image by imaging sensor 526.
- the height of imaging sensor 526 above sample container holder 520 may range from 7 inches (17.8 cm) to 15 inches (38.1 cm) and, depending on the height of imaging sensor 526 above sample container holder 520, tilted angle TA may range from 30 degrees to 60 degrees and in other embodiments from 10 degrees to 80 degrees.
- Other tilted angle ranges and heights above sample container holder 520 may be possible based on the sizes of the sample containers and the features of imaging sensor 526 and the desired image quality.
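- A quick geometric check of these ranges can be sketched as follows. For a camera at height h above the holder plane aimed downward at tilted angle TA, the horizontal standoff to the viewed point on the holder plane is h / tan(TA). This relation is basic trigonometry under the stated camera geometry, not a formula from this disclosure:

```python
import math

def standoff_cm(height_cm: float, tilt_deg: float) -> float:
    """Horizontal distance from the camera to the viewed point on the holder
    plane, given camera height above the holder and downward tilt angle."""
    return height_cm / math.tan(math.radians(tilt_deg))

# e.g., a camera 25.4 cm (10 in) above the holder:
for ta in (30, 45, 60):
    print(ta, round(standoff_cm(25.4, ta), 1))  # 44.0, 25.4, 14.7 cm
```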
- Other embodiments of the input module robot and imaging sensor assembly 500 are possible.
- As illustrated in FIG. 7, a depth-assisted sample container characterization process 700 may include image acquisition 742, depth inference/refinement 744, and sample container characterization 746. Image acquisition 742 may take place in input module M0 (of FIGS. 1 and 4) using, in some embodiments, the input module robot and imaging sensor assembly 500, wherein imaging sensor 526 may capture one or more images of sample containers 503 at, e.g., one or more sample container holder viewpoints 600, such as, e.g., viewpoints VP1, VP2, and/or VP3 (of FIG. 6).
- when imaging sensor 526 is a color-plus-depth or an RGB-D (red-green-blue plus depth) imaging sensor, a single image pair may be captured (wherein an image pair refers to a color image and a corresponding depth image).
- Color-plus-depth imaging sensors provide both depth and color image data and may include either a stereo depth sensor or a time-of-flight depth sensor.
- a color-plus-depth imaging sensor may also provide the color (e.g., RGB) and depth image data in a single frame by merging pixel-to-pixel color and depth data (wherein a frame herein refers to a single image with four channels, the first three channels storing, e.g., R, G, and B color values, respectively, and the fourth channel storing a depth value).
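- A minimal sketch of the four-channel merge described above, assuming the color image and depth map are already pixel-aligned (NumPy is used for illustration only):

```python
import numpy as np

def merge_rgbd_frame(color: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Stack an aligned color image (H, W, 3) and depth map (H, W) into a
    single four-channel frame (H, W, 4): R, G, B in the first three
    channels and per-pixel depth in the fourth."""
    assert color.shape[:2] == depth.shape, "color and depth must be pixel-aligned"
    return np.dstack([color.astype(np.float32), depth.astype(np.float32)])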
- when imaging sensor 526 is a color (e.g., an RGB (red-green-blue)) imaging sensor, multiple images at different viewpoints 600 may be captured.
- at least one image at each of two different viewpoints may be captured by a color imaging sensor 526.
- Color imaging sensors provide colored images by capturing light in, e.g., red, green, and blue wavelengths.
- Depth inference/refinement 744 may include a trained machine learning software model that infers or refines the depth of an imaged scene in the image sensor’s coordinate system.
- Training data for the trained depth inference/refinement model may include color images of sample containers and synthetic images corresponding to the sample containers with respective depth and center annotations generated by CG (computer generated) rendering, as illustrated in FIG. 8.
- synthetic image 800 includes sample containers 803 (only two labelled) each having a depth and center annotation 850 (only two labelled).
- the trained model in depth inference/refinement 744 may infer depth from multiple color images (having no depth data) by running structure-from-motion (SfM) and multi-view stereo (MVS) algorithms such as COLMAP (see, e.g., colmap.github.io). Because these algorithms may be computationally intensive, they may be processed offline, which is suitable for generating training data.
- the trained model in depth inference/refinement 744 may also refine depth information from an imaged scene that has been estimated/provided by one of the following:
- a depth imaging sensor (e.g., a stereo or time-of-flight depth sensor), which may provide noisy depth data;
- a multi-view stereo algorithm relying on point or dense feature correspondences across multiple color (e.g., RGB) images captured at different viewpoints; or
- a depth estimation algorithm based on a single monocular color (e.g., RGB) image.
- Depth information obtained from these three approaches, which may be suitable for opaque objects with matte surfaces, may not be accurate for transparent/semi-transparent objects with reflective surfaces, such as the sample containers (e.g., sample containers 203, 403, and 503) used in automated diagnostic analysis systems.
- the trained model may leverage synthetic color and depth pairs to learn how to refine the depth obtained from these three approaches.
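- One plausible way to realize such learning is sketched below, assuming a generic PyTorch refinement network supervised by synthetic ground-truth depth; the L1 loss and training-loop structure are illustrative assumptions, not details taken from this disclosure:

```python
import torch
import torch.nn as nn

def refinement_step(model: nn.Module, rgbd: torch.Tensor,
                    gt_depth: torch.Tensor, opt: torch.optim.Optimizer) -> float:
    """One training step for a depth-refinement network: the input is a
    four-channel color-plus-noisy-depth frame (N, 4, H, W) and the target
    is synthetic ground-truth depth (N, 1, H, W), teaching the model to
    correct depth errors on transparent/reflective tube surfaces."""
    opt.zero_grad()
    refined = model(rgbd)
    loss = nn.functional.l1_loss(refined, gt_depth)
    loss.backward()
    opt.step()
    return loss.item()
```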
- the trained model in depth inference/refinement 744 may reconstruct the depth up to an unknown scale - that is, the reconstructed depth/three-dimensional scene is only proportionally correct (up to a scale).
- the scaling factor is unknown if no additional information is provided. For example, an object looks identical when it is scaled twice as big but placed twice as far away from the camera. To resolve the unknown scale, multiple images may be used if the amount of movement of the object from one image to another image is known. Further, because monocular depth estimation may not be consistent across multiple frames, the trained model may process monocular depth estimations from multiple frames to ensure that the refined depth information is consistent across those multiple frames. To ensure scale consistency, the scale of the reconstructed depth is adjusted such that the depth corresponding to the same object is consistent across two or more viewpoints.
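- A minimal sketch of such a scale adjustment follows. It assumes the two up-to-scale depth maps have already been warped into a common view so the shared object occupies the same pixels, a simplification of the cross-frame matching implied above:

```python
import numpy as np

def rescale_to_reference(depth_ref: np.ndarray, depth_other: np.ndarray,
                         shared_mask: np.ndarray) -> np.ndarray:
    """Rescale an up-to-scale depth map so the depth it assigns to an object
    seen in both frames agrees with the reference frame; the median ratio
    over shared-object pixels is robust to outliers."""
    s = np.median(depth_ref[shared_mask] /
                  np.clip(depth_other[shared_mask], 1e-6, None))
    return depth_other * s
```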
- the trained model in depth inference/refinement 744 may also receive as input auxiliary viewpoint information to ensure a metric reconstruction. Given the known relative poses between two frames obtained from the robot motion, the depth from the viewpoint of the first frame to the viewpoint of the second frame can be transformed. The scales of these two frames can then be adjusted such that the transformed depth matches the depth of the second frame.
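- Given a known metric relative pose (R, t) between two frames from the robot motion, the shared global scale s of an up-to-scale reconstruction can be solved in closed form. The sketch below assumes matched up-to-scale 3-D points have already been back-projected in each camera frame; it illustrates the least-squares solve s = Σ(d_i · t) / Σ(d_i · d_i) for s·(p2_i − R·p1_i) = t, an assumed formulation rather than one stated in this disclosure:

```python
import numpy as np

def metric_scale(p1: np.ndarray, p2: np.ndarray,
                 R: np.ndarray, t: np.ndarray) -> float:
    """Least-squares solve for the scale s in s * (p2_i - R @ p1_i) = t over
    matched up-to-scale points p1 (frame 1) and p2 (frame 2), where (R, t)
    is the metric relative pose reported by the robot."""
    d = p2 - (R @ p1.T).T          # (N, 3) scaled-direction residuals
    return float((d * t).sum() / (d * d).sum())
```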
- sample container characterization 746 receives image data 743 from image acquisition 742 and inferred or refined depth data 745 from depth inference/refinement 744 to determine physical characteristics of each sample container captured via image acquisition 742.
- Image data 743 and/or depth data 745 may be stored in memory 108M of input module controller 108 or, alternatively, in memory 106M of system controller 106 or another (e.g., remote) memory.
- Sample container characterization 746 may perform segmentation using a multi-class classifier to identify and determine various physical features of a sample container based on the image and depth data.
- FIG. 9 illustrates an embodiment of sample container characterization 746.
- Sample container characterization 946 performs a segmentation process 952, which receives image data 743 from image acquisition 742 and depth data 745 from depth inference/refinement 744.
- image data 743 received at segmentation process 952 may first undergo image consolidation 956, wherein optimally exposed pixels from the image data 743 are selected and consolidated in the image data provided to segmentation process 952 along with the depth data.
- Segmentation process 952 may include a semantic segmentation network (SSN) 954 that processes the output from image consolidation 956 to perform and output pixel class identification 960 for each pixel. That is, semantic segmentation network 954 may identify a class for each pixel of image data 743.
- pixels may be classified as (referring to FIG. 2) a sample container cap 203C, a sample container label 203L, a sample container tubular body 203T, liquid 212, air 212A, blood serum or plasma portion 212SP, or settled blood portion 212SB.
- Semantic segmentation network 954 may be, e.g., a Dense U-Net or an object detection network such as Mask R-CNN (region-based convolutional neural network) that takes the color and depth pair as input and outputs a semantic image where each pixel is assigned to a class.
- Other machine learning algorithms that may be employed include one or more of: a hybrid CNN-CRF method (e.g., L.C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A.L. Yuille, Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs); an FCN (fully convolutional network) (e.g., J. Long, E. Shelhamer, T. Darrell, Fully Convolutional Networks for Semantic Segmentation, arXiv:1411.4038 [cs] (2014), available at arxiv.org/abs/1411.4038 (submitted on 14 Nov 2014 (v1), last revised 8 Mar 2015 (v2))); a FastFCN (e.g., H. Wu, J. Zhang, K. Huang, K. Liang, Y. Yu, FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation, arXiv:1903.11816 [cs] (2019), available at arxiv.org/abs/1903.11816); and vision transformer-based models such as Depthformer (e.g., Z. Li, Z. Chen, X. Liu, J. Jiang, DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation).
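- A minimal PyTorch sketch of a segmentation network taking the four-channel color-plus-depth frame as input. This toy encoder-decoder is a stand-in for the Dense U-Net or Mask R-CNN named above, and the class count merely follows the pixel classes listed earlier:

```python
import torch
import torch.nn as nn

class RGBDSegNet(nn.Module):
    """Toy encoder-decoder over 4-channel RGB-D input producing per-pixel
    logits for classes such as cap, label, tube body, serum/plasma,
    settled blood, air, and background."""
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_classes, 4, stride=2, padding=1),
        )

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(rgbd))

# usage: logits = RGBDSegNet()(torch.rand(1, 4, 128, 128))  # (1, 7, 128, 128)
```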
- Based on the pixel class identification 960, all pixels identified as being in a same class may be grouped, as shown in settled blood portion 962, liquid region 964, barcode/label 966, container tubular body 968, and container cap 970. From the pixel groupings, various physical features (e.g., LA, SB, Wi, HSB, HT, W, volume of liquid region, volume of settled blood portion, barcode/label condition, cap type, and cap color) may be determined or calculated, as illustrated in the sketch below.
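- As one illustration of turning a pixel grouping plus depth into a metric feature, the sketch below estimates a region's vertical extent. It assumes the camera Y axis is roughly aligned with the tube axis; a fuller implementation would correct for the tilted angle TA:

```python
import numpy as np

def region_height_mm(mask: np.ndarray, depth_mm: np.ndarray,
                     K: np.ndarray) -> float:
    """Metric vertical extent of a segmented region (e.g., the serum/plasma
    grouping): back-project the top and bottom mask rows at their median
    depths using camera intrinsics K and measure the Y separation."""
    rows = np.where(mask.any(axis=1))[0]
    if rows.size == 0:
        return 0.0
    y_at = lambda r: (r - K[1, 2]) / K[1, 1] * np.median(depth_mm[r][mask[r]])
    return float(abs(y_at(rows.max()) - y_at(rows.min())))
```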
- the depth image records the physical distance of each pixel in the Z-axis (see FIG. 8). If the region of an object (e.g., a sample container) in an image is known, then the corresponding depth information in that object region provides the geometry information of the object surface. The depth information is used as a fourth channel in addition to the three RGB channels by depth inference/refinement 744 to estimate a better depth image. Thereafter, the semantic segmentation network may also leverage the depth information in addition to RGB information to perform object segmentation.
- a three-dimensional model (e.g., a sample container model) may be fit to the object region to estimate the object's geometry, such as height, diameter, and tilt angle. That is, variously-sized three-dimensional models, which may be, e.g., simple cylindrical models each having known height, diameter, three-dimensional orientation, and tilt angle, may be applied to an object region in an image identified as a sample container (and more particularly, the sample container body) until one of the models matches (fits) the object region.
- Other three-dimensional models may include different types of caps that can be used to estimate a sample container cap’s geometry in the same way. The cap models may be separate from the sample container models or may be combined into sample container/cap models. Also, the barcode label condition may be inferred by examining the surface smoothness from the estimated depth.
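- A hedged sketch of the model-fitting step, using a hypothetical catalog of cylinder geometries and selecting the entry closest to the measured extent of back-projected tube-body points; a real implementation would also estimate the axis orientation and tilt angle as described above:

```python
import numpy as np

# Hypothetical catalog of tube geometries as (height mm, diameter mm).
TUBE_MODELS = [(75.0, 13.0), (100.0, 13.0), (100.0, 16.0), (75.0, 16.0)]

def fit_tube_model(points_mm: np.ndarray) -> tuple:
    """Choose the catalog cylinder whose height/diameter best matches the
    bounding extent of 3-D points (N, 3) from a tube-body region."""
    height = points_mm[:, 1].max() - points_mm[:, 1].min()
    diameter = points_mm[:, 0].max() - points_mm[:, 0].min()
    return min(TUBE_MODELS,
               key=lambda m: (m[0] - height) ** 2 + (m[1] - diameter) ** 2)
```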
- FIG. 10 illustrates a method 1000 of operating an automated diagnostic analysis system according to one or more embodiments.
- method 1000 may include capturing with an imaging sensor one or more images of one or more sample containers at a tilted angle relative to the one or more sample containers (e.g., relative to a horizontal plane), wherein the one or more sample containers are held in a sample container holder received at an input module of the automated diagnostic analysis system.
- imaging sensor 526 arranged at tilted angle TA may capture one or more images of sample containers 503 held in sample container holder 520 received in input module M0.
- method 1000 may include analyzing, via a computer processor executing a trained machine learning model, the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container.
- computer processor 108P executing trained machine learning model 108PI of input module controller 108 may analyze one or more captured images to perform sample container characterization via depth-assisted sample container characterization process 700 including, in some embodiments, sample container characterization 946.
- method 1000 may include directing, via the computer processor or another computer processor, one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
- computer processor 108P of input module controller 108 may direct robot 422 of input module M0 to grasp and move a sample container 203 or 403 to a sample carrier 102B of the automated diagnostic analysis system based on the determined physical characteristics of that sample container 203 or 403.
- Illustrative embodiment 1 An automated diagnostic analysis system comprising: an input module operative to receive a sample container holder including one or more sample containers, the input module comprising an imaging sensor positioned within the input module to capture one or more images of the one or more sample containers at a tilted angle relative to a horizontal plane; and a computer processor and a trained machine learning model executable thereon operative to: capture one or more images of the one or more sample containers with the imaging sensor at the tilted angle; analyze the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container; and direct one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
- Illustrative embodiment 2 The automated diagnostic analysis system of illustrative embodiment 1, wherein the one or more system components include: an input module robot configured to grasp a sample container and move it to a sample carrier of the automated diagnostic analysis system based on the determined physical characteristics of that sample container; or a pre-processing module configured to remove a cap from a sample container based on the determined physical characteristics of that sample container.
- Illustrative embodiment 3 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the imaging sensor is a color or a color-plus-depth imaging sensor.
- Illustrative embodiment 4 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the imaging sensor is movable within the input module to a first viewpoint to capture the one or more images of the one or more sample containers with the imaging sensor at the tilted angle.
- Illustrative embodiment 5 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained machine learning model executable thereon are further operative to: position the imaging sensor at a second viewpoint at the tilted angle relative to the one or more sample containers; capture one or more images of the one or more sample containers with the imaging sensor at the second viewpoint and the tilted angle; and analyze the captured one or more images from the first and second viewpoints including inferring or refining three-dimensional depth information from the captured one or more images from the first and second viewpoints to determine physical characteristics of each sample container.
- Illustrative embodiment 6 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained machine learning model executable thereon are further operative to analyze the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using noisy output received from a depth sensor to determine physical characteristics of each sample container.
- Illustrative embodiment 7 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained machine learning model executable thereon are further operative to analyze the captured one or more images using inconsistent output received from a monocular depth algorithm across multiple image frames to determine physical characteristics of each sample container.
- Illustrative embodiment 8 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the determined physical characteristics include one or more of sample container type, height, diameter, position or orientation within the sample container holder, cap type, cap color, a sample’s volume or height within the sample container, and barcode/label position or condition thereof on the sample container.
- Illustrative embodiment 9 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the tilted angle as measured downward relative to the horizontal plane is greater than 0 degrees and less than 90 degrees.
- Illustrative embodiment 10 The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained machine learning model executable thereon are further operative to: identify a sample container in an object region of the one or more images; and fit a three-dimensional sample container model to the object region to estimate physical characteristics of the sample container identified in the object region.
- Illustrative embodiment 11 A method of operating an automated diagnostic analysis system comprising: capturing with an imaging sensor one or more images of one or more sample containers at a tilted angle relative to a horizontal plane, the one or more sample containers held in a sample container holder received at an input module of the automated diagnostic analysis system; analyzing, via a computer processor executing a trained machine learning model, the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container; and directing, via the computer processor or another computer processor, one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
- Illustrative embodiment 12 The method of illustrative embodiment 11, wherein the capturing comprises capturing the one or more images of the one or more sample containers at the tilted angle with a color or a color-plus-depth imaging sensor.
- Illustrative embodiment 13 The method according to one of the preceding embodiments, wherein the capturing comprises positioning the imaging sensor via a support structure at a first viewpoint at the tilted angle relative to the horizontal plane.
- Illustrative embodiment 14 The method according to one of the preceding embodiments, further comprising: positioning the imaging sensor via the support structure at a second viewpoint at the tilted angle relative to the horizontal plane; capturing one or more images of the one or more sample containers at the second viewpoint and the tilted angle with the imaging sensor; and analyzing, via the computer processor executing the trained machine learning model, the captured one or more images from the first and second viewpoints including inferring or refining three-dimensional depth information from the captured one or more images from the first and second viewpoints to determine physical characteristics of each sample container.
- Illustrative embodiment 15 The method according to one of the preceding embodiments, wherein the directing comprises one or more of: directing an input module robot to grasp a sample container and move it to a sample carrier of the automated diagnostic analysis system based on the determined physical characteristics of that sample container; and directing a pre-processing module to remove a cap from a sample container based on the determined physical characteristics of that sample container.
- Illustrative embodiment 16 The method according to one of the preceding embodiments, wherein the analyzing comprises analyzing, via the computer processor, the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using noisy output received from a depth sensor to determine physical characteristics of each sample container.
- Illustrative embodiment 17 The method according to one of the preceding embodiments, wherein the analyzing comprises analyzing, via the computer processor, the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using inconsistent output received from a monocular depth algorithm across multiple image frames to determine physical characteristics of each sample container.
- Illustrative embodiment 18 The method according to one of the preceding embodiments, wherein the determined physical characteristics include one or more of sample container type, height, diameter, position or orientation within the sample container holder, cap type, cap color, a sample’s volume or height within the sample container, and barcode/label position or condition thereof on the sample container.
- Illustrative embodiment 19 The method according to one of the preceding embodiments, wherein the tilted angle as measured downward from the horizontal plane ranges from 30 degrees to 60 degrees.
- Illustrative embodiment 20 The method according to one of the preceding embodiments, wherein the analyzing, via the computer processor executing the trained machine learning model, further comprises: identifying a sample container in an object region of the one or more images; and fitting a three-dimensional sample container model to the object region to estimate physical characteristics of the sample container identified in the object region.
Landscapes
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Chemical & Material Sciences (AREA)
- Analytical Chemistry (AREA)
- Biochemistry (AREA)
- General Health & Medical Sciences (AREA)
- General Physics & Mathematics (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Automatic Analysis And Handling Materials Therefor (AREA)
Abstract
An automated diagnostic analysis system includes an input module that receives at least one sample container holder that includes one or more sample containers to be processed by the system. The input module includes an imaging sensor for capturing at a tilted angle one or more images of the one or more sample containers. The system also includes a computer processor executing a trained machine learning model for analyzing the one or more images to obtain or refine three-dimensional depth information from the images to determine physical characteristics of each sample container. Methods of operating an automated diagnostic analysis system are also provided, as are other aspects.
Description
DEPTH-ASSISTED SAMPLE CONTAINER CHARACTERIZATION
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims benefit under 35 USC § 119(e) of U.S. Provisional Patent Application No. 63/548,886, filed on February 2, 2024, the disclosure of which is hereby incorporated by reference herein in its entirety.
FIELD
[0002] This disclosure relates to automated diagnostic analysis systems and methods.
BACKGROUND
[0003] In medical testing, automated diagnostic analysis systems may be used to analyze a biological sample to identify an analyte or other constituent in the sample. The biological sample may be, e.g., urine, whole blood, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, and the like. Such biological samples are usually contained in sample containers (e.g., test tubes, vials, etc.) that may be transported in sample carriers via a sample transport system comprising automated tracks to and from various modules. The various modules may perform, e.g., sample container handling, sample pre-processing, sample analysis, and sample post-processing within the automated diagnostic analysis system. The number of sample carriers present in an automated diagnostic analysis system at any one time may be hundreds or even thousands.
[0004] Sample containers are usually arranged in one or more sample container holders that are received at an input module of an automated diagnostic analysis system. Sample containers may not all be the same size and type. Sample containers are usually therefore first “characterized” in an automated diagnostic analysis system to determine their physical features (e.g., container type, height, diameter, position and/or orientation within the sample container holder, type and/or color of the sample container cap, a sample’s volume or height within the sample container, barcode/label position and/or condition thereof on the sample container, etc.). Sample container characterization usually involves capturing and processing one or more images of each sample container.
[0005] However, sample container characterization in some known automated diagnostic analysis systems may have one or more of the following disadvantages: inaccurate characterization because the sample container images may not have captured sufficient content, delayed sample container processing because each sample container may need to be individually and/or sequentially imaged, and/or increased system floorspace dedicated to
sample container imaging apparatus, such as, e.g., one or more quality check modules each connected to the sample transport system and each having an enclosure with multiple imaging sensors and lighting apparatus.
[0006] Accordingly, improved sample container characterization in an automated diagnostic analysis system is desired.
SUMMARY
[0007] In some embodiments, an automated diagnostic analysis system is provided. The system includes an input module operative to receive a sample container holder including one or more sample containers. The input module comprises an imaging sensor that is positioned within the input module to capture one or more images of the one or more sample containers at a tilted angle relative to a horizontal plane. The system also includes a computer processor and a trained machine learning model executable thereon operative to: (1) capture one or more images of the one or more sample containers with the imaging sensor at the tilted angle; (2) analyze the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container; and (3) direct one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
[0008] In some embodiments, a method of operating an automated diagnostic analysis system is provided. The method includes capturing with an imaging sensor one or more images of one or more sample containers at a tilted angle relative to a horizontal plane. The one or more sample containers are held in a sample container holder received in an input module of the automated diagnostic analysis system. The method also includes analyzing, via a computer processor executing a trained machine learning model, the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container. The method further includes directing, via the computer processor or another computer processor, one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
[0009] Still other aspects, features, and advantages of this disclosure may be readily apparent from the following detailed description and illustration of a number of example embodiments and implementations, including the best mode contemplated for carrying out the invention. This disclosure may also be capable of other and different embodiments, and
its several details may be modified in various respects, all without departing from the scope of the invention. For example, although the description below relates to automated diagnostic analysis systems, the depth-assisted sample container characterization systems and methods may readily be adapted to other imaging systems tasked with identifying physical features of imaged objects. This disclosure is intended to cover all modifications, equivalents, and alternatives falling within the scope of the appended claims below.
BRIEF DESCRIPTION OF DRAWINGS
[0010] The drawings described below are for illustrative purposes and are not necessarily drawn to scale. Accordingly, the drawings and descriptions are to be regarded as illustrative in nature, and not as restrictive. The drawings are not intended to limit the scope of the invention in any way.
[0011] FIG. 1 illustrates a top schematic view of an automated diagnostic analysis system configured to perform one or more biological sample analyses according to embodiments provided herein.
[0012] FIG. 2 illustrates a side view of a sample container and related physical characteristics thereof according to embodiments provided herein.
[0013] FIG. 3 illustrates a side view of the sample container of FIG. 2 loaded into a sample carrier of the automated diagnostic analysis system of FIG. 1 according to embodiments provided herein.
[0014] FIG. 4 illustrates a more detailed top schematic view of an input module of FIG. 1 according to embodiments provided herein.
[0015] FIG. 5 illustrates a side schematic view of an input module robot and imaging sensor assembly according to embodiments provided herein.
[0016] FIG. 6 illustrates a top perspective view of a sample container holder and sample containers held therein having an imaging sensor of an input module movable thereabout to various viewpoints via an arced path according to embodiments provided herein.
[0017] FIG. 7 illustrates a block diagram of a depth-assisted sample container characterization process according to embodiments provided herein.
[0018] FIG. 8 illustrates a top perspective view of an image of sample containers with depth and center annotations according to embodiments provided herein.
[0019] FIG. 9 illustrates a block diagram of a sample container characterization subprocess of the depth-assisted sample container characterization process of FIG. 7 according to embodiments provided herein.
[0020] FIG. 10 illustrates a flowchart of a method of operating an automated diagnostic analysis system according to embodiments provided herein.
DETAILED DESCRIPTION
[0021] Automated diagnostic analysis systems according to embodiments described herein may include a large number of sample carriers each carrying a sample container therein. Each sample container may include a biological sample to be analyzed. The biological sample may be, e.g., urine, whole blood, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, and the like. Automated diagnostic analysis systems may also include a sample transport system for transporting the sample carriers throughout the system via an automated track.
Automated diagnostic analysis systems may further include a number of modules for performing sample container handling, sample pre-processing, sample analysis, and sample post-processing. Each of the modules is connected to the sample transport system for receiving and returning sample containers via the sample carriers.
[0022] One of the modules included in an automated diagnostic analysis system is an input module. The input module is configured to receive a plurality of sample containers that are to be processed by the system. The plurality of sample containers may be held in one or more sample container holders wherein the sample containers may be arranged in an array of rows and columns. The sample container holders are typically loaded manually into the input module.
[0023] As described above, sample containers held in a sample container holder may be of different size and type. Sample containers are usually therefore first “characterized” in an automated diagnostic analysis system to determine their physical features (e.g., container type, height, diameter, position and/or orientation within a slot of the sample container holder, type and/or color of the sample container cap, a sample’s volume or height within the sample container, barcode/label position and/or condition thereof on the sample container, etc.).
[0024] By characterizing each sample container, an automated diagnostic analysis system can more accurately and safely handle each sample container and the biological sample contained therein. For example, by characterizing a sample container’s type, height, diameter, and position and/or orientation within a slot of a sample container holder, a robot of an input module can be accurately aligned and maneuvered to grasp a sample container from the
sample container holder and move it to a sample carrier received at the input module without damaging or mishandling the sample container. Similarly, e.g., by characterizing the type of cap on a sample container, a decapper module can be directed to correctly remove that particular cap from the sample container to provide access to the biological sample therein without damaging the sample container or cap and/or spilling the sample. In another example, characterizing a biological sample’s height or volume within a sample container allows the automated diagnostic analysis system to first determine whether a sufficient amount of the biological sample is present in the sample container and then, if there is a sufficient amount, accurately direct an aspiration probe into the sample container to remove a specific amount of the biological sample without the aspiration probe becoming contaminated and/or aspirating air or any unintended portion of the biological sample. In still another example, characterizing a barcode/label position and/or condition thereof on a sample container allows the automated diagnostic analysis system to identify problematic barcode/labels (e.g., unreadable) and to divert the sample container offline to address the issue.
[0025] Sample container characterization usually involves capturing and processing one or more images of each sample container. In some known automated diagnostic analysis systems, a plurality of sample containers held in a sample container holder received in an input module may be imaged together from a top view by an imaging sensor positioned directly above the sample container holder. However, large portions of the sample containers below their caps may be self-occluded and/or not visible in the top-view images, thus limiting the amount of characterization information that can be obtained. In other known systems, sample containers may need to be transported via respective sample carriers to a quality check module, where they are individually imaged one at a time from one or more side views by one or more imaging sensors. Although more characterization information can be obtained from side-view images, sample processing may be delayed as each sample container waits in turn to be imaged. Furthermore, such known systems require additional floorspace dedicated to sample container imaging apparatus, such as, e.g., one or more quality check modules each connected to the sample transport system and each having an enclosure with multiple imaging sensors and lighting apparatus.
[0026] Automated diagnostic analysis systems according to embodiments described herein may advantageously improve sample container characterization and system performance (e.g., system throughput - that is, the number of sample containers processed per hour, per shift, per day, etc.) by imaging together all or most of the sample containers held in a sample container holder received in an input module. The sample containers are imaged with an imaging sensor positioned at a tilted angle relative to the plurality of sample containers (e.g., relative to a
horizontal plane). The imaging sensor may be a digital camera and, more particularly, may be a color or a color-plus-depth imaging sensor. In some embodiments, the imaging sensor may be an RGB (red-green-blue) or an RGB-D (red-green-blue plus depth) imaging sensor. The captured image(s) may be analyzed via a computer processor executing a trained machine learning model that, among other things, infers or refines three-dimensional depth information from the captured image(s) to more precisely determine the physical characteristics of each sample container.
[0027] Although “depth” may be calculated by conventional multi-view stereo imaging or may be acquired by a conventional depth imaging sensor, such conventionally calculated or acquired depth information will not accurately represent the true depth of the sample containers because sample containers of automated diagnostic analysis systems are mostly transparent or semi-transparent, which can adversely affect conventionally obtained depth information. Embodiments of the trained machine learning model described herein are operative to perform more precise sample container characterization by inferring or refining the true depth of a sample container from captured image data and, optionally, noisy depth data acquired by a conventional depth imaging sensor.
[0028] In accordance with one or more embodiments, automated diagnostic analysis systems having improved sample container characterization and system performance will be explained in greater detail below in connection with FIGS. 1-10.
[0029] FIG. 1 illustrates an automated diagnostic analysis system 100 configured to automatically analyze biological samples according to one or more embodiments.
Automated diagnostic analysis system 100 may include a plurality of sample carriers 102 (only three labeled in FIG. 1 to maintain clarity), a sample transport system 104 that includes an automated track 105 and track sensors 105-S (only three labeled), a plurality of modules M0-M5, and a system controller 106. Automated diagnostic analysis system 100 may include more or fewer modules and/or other components. (Note that modules M0-M5, while illustrated as all having the same size and shape, are not limited to all having the same size and/or shape.)
[0030] Modules M0-M5 may each be configured to perform one or more actions on a sample container or a biological sample contained in the sample container. In particular, one or more modules M0-M5 may be configured to perform sample container handling, sample preprocessing, sample analysis, or sample post-processing. For example, in some embodiments, module M0 may be an input module including an input module controller 108. Module M1 may be a decapper module, module M2 may be a centrifuge, module M3 may be a chemistry analyzer, module M4 may be an immunoassay analyzer, and module M5 may be a sealer
module. Modules M1-M5 may each include a respective module controller (not shown) and may perform different functions in other embodiments.
[0031] Each sample carrier 102 may be configured to carry at least one sample container thereon. FIG. 2 illustrates a sample container 203 according to one or more embodiments. Sample container 203 may include a cap 203C, a tubular body 203T, and a label 203L. Tubular body 203T has a height HT measured from the bottom-most part of tubular body 203T to the bottom of cap 203C. Tubular body 203T also has a container wall thickness TW, an outer diameter or width W, and an inner diameter or width Wi.
[0032] Cap 203C may have different shapes and/or colors (e.g., red, royal blue, light blue, green, grey, tan, yellow, etc., or combinations thereof), which may indicate what sample analysis(es) the sample container is to be used for, the type of additive contained therein (e.g., a reagent or diluent), the type of cap and how to remove it, and/or the like. Characterization of cap 203C may facilitate cap removal and safe handling of the sample container. In some embodiments, characterization of cap 203C may be used to check that the correct type of sample container 203 has been used for the analysis to be performed on the biological sample contained therein.
[0033] Label 203L is attached to tubular body 203T and may include identification information 203I (e.g., indicia) thereon in the form of, e.g., a barcode, alphabetic characters, numeric characters, an RF (radio frequency) ID tag, or combinations thereof. The identification information 203I may be machine readable at various locations within automated diagnostic analysis system 100, such as, e.g., at each of modules M0-M5 (which may include a scanning/imaging apparatus) and at various locations around automated track 105 such as where sensors 105-S are located. The identification information 203I may indicate a sample container identification number (to be correlated by system controller 106 with analysis instructions residing in, e.g., a database) or may directly include information regarding one or more analyses to be performed by the automated diagnostic analysis system on the biological sample contained therein.
[0034] A biological sample 212 to be analyzed may be contained in sample container 203. The biological sample may be, e.g., urine, whole blood, blood serum, blood plasma, interstitial liquid, cerebrospinal liquid, or the like. In some embodiments, as shown in FIG. 2, biological sample 212 may include a blood serum or plasma portion 212SP and a settled blood portion 212SB. Air 212A may be present above the blood serum and plasma portion 212SP and the line of demarcation between air 212A and the blood serum and plasma portion 212SP is referred to as the liquid-air interface LA. The line of demarcation between the blood serum or plasma portion 212SP and the settled blood portion 212SB is referred to as the serum-blood
interface SB. The interface between air 212A and cap 203C is referred to as the tube-cap interface TC. The height HSP of the blood serum or plasma portion 212SP is defined as from the top of the settled blood portion 212SB to the top of the blood serum or plasma portion 212SP (i.e., from SB to LA). The height HSB of the settled blood portion 212SB is defined as from the bottom-most part of tubular body 203T to the top of the settled blood portion 212SB at the serum-blood interface SB. Total sample height HTOT of biological sample 212 in sample container 203 is HSP + HSB.
[0035] FIG. 3 illustrates a sample container 203 loaded into a sample carrier 302, which is an embodiment of sample carrier 102. In some embodiments, sample carrier 302 may be a passive, non-motorized puck configured to carry a single sample container 203 on automated track 105 of sample transport system 104 (via, e.g., a magnet in sample carrier 302). In other embodiments, sample carrier 302 may be an automated carrier including an onboard drive motor, such as a linear motor, that is programmed via system controller 106 or input module controller 108 to move about the track and stop at pre-programmed locations (e.g., one or more of modules M0-M5). Sample carrier 302 may include a holder 302H configured to hold sample container 203 in a defined upright position and orientation. Holder 302H may include a plurality of fingers or leaf springs that secure sample container 203 in and on sample carrier 302, wherein some fingers or leaf springs may be moveable or flexible to accommodate different sizes of sample containers. Sample carrier 302 may also include a transceiver 310 for communicating with system controller 106, input module controller 108, and other components in system 100. Sample carrier 302 may further include one or more sensors 302-S, which in some embodiments may be a camera and/or a collision or position sensor. Other types of sensors may be included. Sample carrier 302 may be of other types and/or configurations, and system 100 may include multiple types or configurations of sample carriers.
[0036] Returning to FIG. 1 , sample transport system 104 may be configured to transport sample containers to and from each of modules M0-M5 via respective sample carriers 102 and track 105. Track 105 may include multiple interconnected sections configured to allow unidirectional or bidirectional sample container transport. Track 105 may be a railed track (e.g., a monorail or multi-rail), a collection of conveyor belts, conveyor chains, moveable platforms, or any other suitable type of conveyance mechanism. Track 105 may be circular, oval, or any other suitable shape or configuration and combinations thereof and, in some embodiments, may be a closed track.
[0037] System controller 106 may be in communication either directly via wired and/or wireless connections as shown or via a network 114 with each of sample carriers 102, sample
transport system 104, and modules M0-M5, each of which includes suitable communications apparatus (e.g., transceivers). Network 114 may be, e.g., a local area network (LAN), wide area network (WAN), or other suitable communication network, including wired and wireless networks. System controller 106 may be housed as part of automated diagnostic analysis system 100 or may be remote therefrom.
[0038] System controller 106 may be in communication with one or more databases or like sources, represented in FIG. 1 as a laboratory information system (LIS) 116 for receiving sample information including, e.g., one or more of patient information, analyses to be performed on each sample, time and date each sample was obtained, medical facility information, tracking and routing information, and/or any other information relevant to the samples to be analyzed.
[0039] System controller 106 may include a user interface 118, which may include a display, to enable a user to access a variety of control and status display screens and to enter commands and/or data into system controller 106.
[0040] System controller 106 may also include a computer processor 106P, memory 106M, and programming instructions 106PI (e.g., software, programs, algorithms, and the like). Programming instructions 106PI may be stored in memory 106M and executed by processor 106P. A workflow planning (WFP) algorithm 106WFP also may be stored in memory 106M and executed by processor 106P. Memory 106M may further have one or more artificial intelligence (AI) and/or machine learning algorithms and/or trained machine learning models stored therein to perform or facilitate various pre- and post-processing actions and/or sample analyses. System controller 106 may alternatively or additionally include other processing devices/circuits (including microprocessors, A/D converters, amplifiers, filters, etc.), transceivers, interfaces, device drivers, and/or other electronics.
[0041] System controller 106 may be configured to operate and/or control the various components of system 100, including sample carriers 102, sample transport system 104, and modules M0-M5 via communication therewith. In particular, e.g., system controller 106 may control movement of each sample carrier 102 to and from any of modules M0-M5 and to and from any other components (not shown) in system 100 via sample transport system 104. System controller 106 may plan the workflow of system 100 based on information received from, e.g., LIS 116, user interface 118, and/or information obtained from scanned or imaged sample container indicia (e.g., identification information 203I). That is, system controller 106 may be operative to schedule and direct one or more analyses of each sample contained in a respective sample container to be performed at one or more of modules M0-M5 and,
in some cases, to be performed pursuant to a particular time schedule via WFP algorithm 106WFP.
[0042] FIG. 4 illustrates a more detailed view of input module MO of FIG. 1 according to one or more embodiments. Input module MO may be configured to receive one or more sample container holders 420A, 420B each having sample containers 403 (some labelled) held therein, one or more of which may be identical or similar to sample container 203. Input module M0 may also include a sensor (not shown in FIG. 4) and a robot 422, wherein robot 422 may be directed to grasp each sample container 403 and move it from sample container holder 420A or 420B to an empty sample carrier 102B received at input module M0 via a track segment 405B. The sensor may be an imaging sensor as described herein to characterize sample containers 403 held in sample container holders 420A, 420B and to facilitate operation of robot 422 based on that characterization. As shown in FIG. 4, a sample carrier 102A loaded with a sample container 403 from input module M0 may exit input module M0 via track segment 405A. Note that in some embodiments, sample carriers 102 are not limited to the directions of travel as described herein for sample carriers 102A and 102B.
[0043] In some embodiments input module M0 may be an input/output module (IOM) wherein sample containers to be processed may be received and loaded into sample carriers 102 and, after processing, may be returned to the IOM and unloaded from sample carriers 102 back into sample container holders 420A, 420B for removal from the IOM (and automated diagnostic analysis system 100). In other embodiments, input module M0 may be a bulk input module (BIM) or a refrigeration/storage module (RSM).
[0044] Returning to FIG. 1, input module controller 108 according to one or more embodiments may include a computer processor 108P, memory 108M, and programming instructions 108PI (e.g., one or more software programs, AI or machine learning algorithms, or the like) that are stored in memory 108M and executable by computer processor 108P. In some embodiments, programming instructions 108PI may include a machine learning model trained to perform sample container characterization, as described in more detail below. Input module controller 108 may also include a user interface 124, which may include a display, to enable a user to access one or more control and status display screens and to enter commands and/or data into input module controller 108. Input module controller 108 may further include suitable communications apparatus (e.g., transceivers) to communicate either directly via wired and/or wireless connections or via network 114 with system controller 106, sample carriers 102, sample transport system 104, and modules M1-M5.
[0045] FIG. 5 illustrates an example input module robot and imaging sensor assembly 500 that may be included in input module M0 according to one or more embodiments.
Assembly 500 may be controlled by input module controller 108. In other embodiments, assembly 500 may be controlled directly by system controller 106 (of FIG. 1 ) or another (e.g., remote) controller. Assembly 500 includes a robot 522 and an imaging sensor 526 attached to robot 522.
Robot 522 is operative to grasp and transfer sample containers 503 from/to a sample container holder 520 and to/from a sample carrier 102 or 302. Robot 522 includes a gripper 528 operative to move in three dimensions (e.g., X, Y, and Z or R, θ, and Z). Gripper 528 is coupled to a telescoping arm 530 movable in horizontal directions (-/+ X) as shown via a translational motor 530M. Telescoping arm 530 is attached to an upright portion 532, which is movable in vertical directions (-/+ Y) as shown via a vertical motor 532M.
Telescoping arm 530 is also capable of rotating about upright portion 532 in angular directions (+/- θ) via a rotational motor 532R. Upright portion 532 may be mounted to a frame 534 of an input module. Gripper 528 may include two gripper fingers 528A, 528B that may be driven open and closed by an actuation mechanism 528M. A rotary actuator 528R is operative to rotate gripper fingers 528A, 528B in angular directions (+/- θ2) about axis 536 to any prescribed rotational position/orientation. Robot 522 may be any suitable robot capable of moving a sample container received at an input module to/from a sample carrier also received at the input module.
[0047] Imaging sensor 526 may include a digital camera 526C and, more particularly in some embodiments, may include a color imaging sensor, such as, e.g., an RGB (red-green-blue) imaging sensor, or a color-plus-depth imaging sensor, such as, e.g., an RGB-D (red-green-blue plus depth) imaging sensor, which may provide depth and color via per-pixel depth information aligned with corresponding image pixels. Digital camera 526C may be mounted to and movable via a support structure such as, e.g., a vertical support 526S attached to telescoping arm 530.
[0048] As shown in FIG. 6, digital camera 526C may be movable via vertical support 526S and telescoping arm 530 (neither shown in FIG. 6) along an arced path 640 of sample container holder viewpoints 600 about sample container holder 520 at which digital camera 526C may be positioned to capture images of sample containers 503 according to one or more embodiments. For example, in some embodiments, digital camera 526C may be positioned at one or more of viewpoints VP1, VP2, and/or VP3 to capture one or more images of sample containers 503 at each viewpoint.
[0049] Returning to FIG. 5, imaging sensor 526 is operative to image sample containers 503 at a tilted angle TA as measured downward from the horizontal (e.g., horizontal plane H). The tilted angle TA is greater than 0 degrees (i.e., imaging sensor 526 is not aimed at
sample containers 503 horizontally from a side) and is less than 90 degrees (i.e., imaging sensor 526 is not aimed at sample containers 503 vertically downward from the top). The height of imaging sensor 526 above sample container holder 520, which is controlled by vertical motor 532M, and the tilted angle TA are chosen such that each sample container 503 (or most sample containers 503 in some embodiments) in sample container holder 520 is captured in a single image by imaging sensor 526. For example, in some embodiments, the height of imaging sensor 526 above sample container holder 520 may range from 7 inches (17.8 cm) to 15 inches (38.1 cm) and, depending on that height, tilted angle TA may range from 30 degrees to 60 degrees and, in other embodiments, from 10 degrees to 80 degrees. Other tilted angle ranges and heights above sample container holder 520 are possible depending on the sizes of the sample containers, the features of imaging sensor 526, and the desired image quality. Other embodiments of the input module robot and imaging sensor assembly 500 are possible.
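By way of illustration only, whether a given sensor height and tilted angle TA cover an entire sample container holder can be checked with simple trigonometry: the near and far edges of the image intersect the holder plane at depression angles of TA plus and minus half the sensor's vertical field of view. The following Python sketch is not part of this disclosure; the function name and the example field-of-view value are assumptions.

```python
# A small geometry check (illustrative; not from this disclosure): compute the
# near and far horizontal distances covered on the holder plane by a sensor
# at a given height, tilted angle, and vertical field of view.
import math

def ground_footprint(height_cm: float, tilt_deg: float, vfov_deg: float):
    """Return (near, far) horizontal distances (cm) covered on the holder plane."""
    near = height_cm / math.tan(math.radians(tilt_deg + vfov_deg / 2.0))
    # The far edge requires tilt_deg > vfov_deg / 2; otherwise the view
    # extends to the horizon and never closes on the holder plane.
    far = height_cm / math.tan(math.radians(tilt_deg - vfov_deg / 2.0))
    return near, far

# E.g., a sensor 30 cm above the holder at a 45-degree tilt with a 40-degree
# vertical field of view covers roughly 14.0 cm to 64.3 cm ahead of it:
print(ground_footprint(30.0, 45.0, 40.0))
```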
[0050] FIG. 7 illustrates a depth-assisted sample container characterization process 700 according to one or more embodiments. Process 700 may be implemented via a trained machine learning model executable in a computer processor, such as, e.g., computer processor 108P of input module controller 108, computer processor 106P of system controller 106 (both of FIG. 1 ), or another computer processor in communication with input module controller 108, system controller 106, or both. Process 700 may include image acquisition 742, depth inference/refinement 744, and sample container characterization 746.
[0051] Image acquisition 742 may take place in input module M0 (of FIGS. 1 and 4) using, in some embodiments, the input module robot and imaging sensor assembly 500, wherein imaging sensor 526 may capture one or more images of sample containers 503 at, e.g., one or more sample container holder viewpoints 600, such as, e.g., viewpoints VP1, VP2, and/or VP3 (of FIG. 6). In those embodiments where imaging sensor 526 is a color-plus-depth or an RGB-D (red-green-blue plus depth) imaging sensor, a single image pair may be captured (wherein an image pair refers to a color image and a corresponding depth image). Color-plus-depth imaging sensors provide both depth and color image data and may include either a stereo depth sensor or a time-of-flight depth sensor. A color-plus-depth imaging sensor may also provide the color (e.g., RGB) and depth image data in a single frame by merging pixel-to-pixel color and depth data (wherein a frame herein refers to a single image with four channels, the first three channels storing, e.g., R, G, and B color values, respectively, and the fourth channel storing a depth value). In those embodiments where imaging sensor 526 is a color (e.g., an RGB (red-green-blue)) imaging sensor, multiple images at different viewpoints 600 may be captured. For example, at least one
image at each of two different viewpoints may be captured by a color imaging sensor 526. Color imaging sensors provide colored images by capturing light in, e.g., red, green, and blue wavelengths.
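For illustration, the four-channel frame described above may be assembled as in the following minimal Python sketch, which assumes the depth map has already been registered pixel-to-pixel with the color image; the function and variable names are illustrative, not taken from this disclosure.

```python
# Pack a pixel-aligned color image and depth map into one four-channel frame:
# channels 0-2 store R, G, and B values; channel 3 stores a depth value.
import numpy as np

def pack_rgbd_frame(rgb: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 uint8 color image; depth: HxW depth map (e.g., meters)."""
    if rgb.shape[:2] != depth.shape:
        raise ValueError("color and depth images must be pixel-aligned")
    frame = np.empty((*depth.shape, 4), dtype=np.float32)
    frame[..., :3] = rgb.astype(np.float32) / 255.0  # normalize color to [0, 1]
    frame[..., 3] = depth.astype(np.float32)         # keep depth in native units
    return frame
```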
[0052] Depth inference/refinement 744 may include a trained machine learning software model that infers or refines the depth of an imaged scene in the image sensor’s coordinate system. Training data for the trained depth inference/refinement model may include color images of sample containers and synthetic images corresponding to the sample containers with respective depth and center annotations generated by CG (computer generated) rendering, as illustrated in FIG. 8. As shown in FIG. 8, synthetic image 800 includes sample containers 803 (only two labelled) each having a depth and center annotation 850 (only two labelled).
[0053] The trained model in depth inference/refinement 744 may infer depth from multiple color images (having no depth data) by running structure-from-motion (SfM) and multi-view stereo (MVS) algorithms such as COLMAP (see, e.g., colmap.github.io). Because these algorithms may be computationally intensive, they may be processed offline, which is suitable for generating training data.
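As a hedged example of such offline processing, the COLMAP command-line pipeline could be driven from Python roughly as follows; the paths are placeholders, and the exact options available depend on the installed COLMAP version.

```python
# Run COLMAP's end-to-end reconstruction (feature extraction, matching,
# sparse SfM, and dense MVS depth estimation) offline over the captured
# viewpoint images. Illustrative sketch only; verify options for your version.
import subprocess

def reconstruct_depth_offline(image_dir: str, workspace_dir: str) -> None:
    subprocess.run(
        ["colmap", "automatic_reconstructor",
         "--workspace_path", workspace_dir,
         "--image_path", image_dir,
         "--dense", "1"],  # enable MVS so per-image depth maps are produced
        check=True,
    )
```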
[0054] The trained model in depth inference/refinement 744 may also refine depth information from an imaged scene that has been estimated/provided by one of the following:
(1) an imaging depth sensor based on a single color-plus-depth (e.g., RGB-D) image pair,
(2) a multi-view stereo algorithm relying on point or dense feature correspondences across multiple color (e.g., RGB) images captured at different viewpoints, or (3) a depth estimation algorithm based on a single monocular color (e.g., RGB) image. Depth information obtained from these three approaches, which may be suitable for opaque objects with matte surfaces, may not be accurate for transparent/semi-transparent objects with reflective surfaces such as sample containers (e.g., sample containers 203, 403, and 503) used in automated diagnostic analysis systems. The trained model may leverage synthetic color and depth pairs to learn how to refine the depth obtained from these three approaches.
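One possible form of such a refinement model, offered as an assumption rather than as the network actually used, is a small convolutional network that consumes the four-channel color-plus-(noisy)-depth frame and predicts a residual correction to the depth channel, trained against the synthetic depth annotations described above.

```python
# A minimal PyTorch sketch of depth refinement (an assumption, not the
# disclosed network): predict a per-pixel correction to a noisy depth channel.
import torch
import torch.nn as nn

class DepthRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # rgbd: (N, 4, H, W); channel 3 holds the noisy depth to be refined.
        noisy_depth = rgbd[:, 3:4]
        return noisy_depth + self.net(rgbd)  # refined depth = noisy + residual
```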
[0055] For the depth output from a monocular depth algorithm, the trained model in depth inference/refinement 744 may reconstruct the depth up to an unknown scale - that is, the reconstructed depth/three-dimensional scene is only proportionally correct (up to a scale). The scaling factor is unknown if no additional information is provided. For example, an object looks identical when it is scaled twice as big but placed twice as far away from the camera. To resolve the unknown scale, multiple images may be used if the amount of movement of the object from one image to another image is known. Further, because monocular depth estimation may not be consistent across multiple frames, the trained model
may process monocular depth estimations from multiple frames to ensure that the refined depth information is consistent across those multiple frames. To ensure scale consistency, the scale of the reconstructed depth is adjusted such that the depth corresponding to the same object is consistent across two or more viewpoints.
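One simple realization of this scale adjustment, sketched below under the assumption that pixel correspondences between two frames are already available (e.g., from feature matching), is a robust median-ratio rescaling of one frame's depth to the other's; all names are illustrative.

```python
# Rescale frame B's up-to-scale monocular depth so that corresponding pixels
# (assumed given) agree with frame A. A median ratio is robust to outliers.
import numpy as np

def align_depth_scale(depth_a, depth_b, pix_a, pix_b):
    """depth_a, depth_b: HxW depth maps; pix_a, pix_b: (N, 2) (row, col) matches."""
    da = depth_a[pix_a[:, 0], pix_a[:, 1]]
    db = depth_b[pix_b[:, 0], pix_b[:, 1]]
    valid = (da > 0) & (db > 0)               # ignore invalid/missing depth
    scale = np.median(da[valid] / db[valid])  # robust per-frame scale factor
    return depth_b * scale
```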
[0056] The trained model in depth inference/refinement 744 may also receive as input auxiliary viewpoint information to ensure a metric reconstruction. Given the known relative poses between two frames obtained from the robot motion, the depth can be transformed from the viewpoint of the first frame to the viewpoint of the second frame. The scales of these two frames can then be adjusted such that the transformed depth matches the depth of the second frame.
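Under an assumed pinhole camera model, this metric alignment can be sketched as back-projecting the first frame's up-to-scale depth, moving the points through the known robot pose, and solving in least squares for the scale that makes the transformed depth agree with the second frame; the function below is illustrative only.

```python
# Solve for the metric scale s of frame 1's depth, given the known relative
# pose (R, t, with t metric) from frame 1 to frame 2 and matched pixels.
import numpy as np

def metric_scale(depth1, depth2_at_matches, pix1, K, R, t):
    """depth1: HxW up-to-scale depth; depth2_at_matches: (N,) frame-2 depths
    at the matches; pix1: (N, 2) (row, col) pixels; K: 3x3 intrinsics."""
    z = depth1[pix1[:, 0], pix1[:, 1]]
    rays = np.linalg.inv(K) @ np.stack([pix1[:, 1], pix1[:, 0], np.ones_like(z)])
    pts1 = rays * z                  # back-projected 3-D points in frame 1
    a = (R @ pts1)[2]                # transformed depth equals s * a + t[2]
    # Least-squares scale minimizing sum((s * a + t[2] - depth2)^2):
    return np.sum(a * (depth2_at_matches - t[2])) / np.sum(a * a)
```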
[0057] Returning to FIG. 7, sample container characterization 746 receives image data 743 from image acquisition 742 and inferred or refined depth data 745 from depth inference/refinement 744 to determine physical characteristics of each sample container captured via image acquisition 742. Image data 743 and/or depth data 745 may be stored in memory 108M of input module controller 108 or, alternatively, in memory 106M of system controller 106 or another (e.g., remote) memory. Sample container characterization 746 may perform segmentation using a multi-class classifier to identify and determine various physical features of a sample container based on the image and depth data.
FIG. 9 illustrates an embodiment of sample container characterization 746. Sample container characterization 946 performs a segmentation process 952, which receives image data 743 from image acquisition 742 and depth data 745 from depth inference/refinement 744. In some embodiments, image data 743 received at segmentation process 952 may first undergo image consolidation 956, wherein optimally exposed pixels from the image data 743 are selected and consolidated in the image data provided to segmentation process 952 along with the depth data. Segmentation process 952 may include a semantic segmentation network (SSN) 954 that processes the output from image consolidation 956 to perform and output pixel class identification 960 for each pixel. That is, semantic segmentation network 954 may identify a class for each pixel of image data 743. For example, pixels may be classified as (referring to FIG. 2) a sample container cap 203C, a sample container label 203L, a sample container tubular body 203T, liquid 212, air 212A, blood serum or plasma portion 212SP, or settled blood portion 212SB. Semantic segmentation network 954 may be, e.g., a Dense U-Net or an object detection network such as Mask R-CNN (region-based convolutional neural network) that takes the color and depth pair as input and outputs a semantic image where each pixel is assigned to a class. Other machine learning algorithms that may be employed include one or more of a hybrid CNN-CRF method (e.g., L.C. Chen,
G. Papandreou, I. Kokkinos, K. Murphy, A. Yuille, DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs. arXiv:1606.00915 [cs] (2016) (available at arxiv.org/abs/1606.00915) (submitted on 2 Jun 2016 (v1), last revised 12 May 2017 (v2))), a fully convolutional network (FCN) (e.g., J. Long, E. Shelhamer, T. Darrell, Fully Convolutional Networks for Semantic Segmentation. arXiv:1411.4038 [cs] (2014) (available at arxiv.org/abs/1411.4038) (submitted on 14 Nov 2014 (v1), last revised 8 Mar 2015 (v2))), a FastFCN (e.g., H. Wu, J. Zhang, K. Huang, K. Liang, Y. Yu, FastFCN: Rethinking Dilated Convolution in the Backbone for Semantic Segmentation. arXiv:1903.11816 [cs] (2019) (available at arxiv.org/abs/1903.11816) (submitted on 28 Mar 2019)), vision transformer-based models such as DepthFormer (e.g., Z. Li, Z. Chen, X. Liu, J. Jiang, DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation. arXiv:2203.14211 [cs] (2022) (available at arxiv.org/abs/2203.14211) (submitted on 27 Mar 2022)), a dense vision transformer (DPT) model (e.g., R. Ranftl, A. Bochkovskiy, V. Koltun, Vision Transformers for Dense Prediction. arXiv:2103.13413 [cs] (2021) (available at arxiv.org/abs/2103.13413) (submitted on 24 Mar 2021)), etc. Other models and/or algorithms may be employed.
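Purely by way of illustration (a stand-in, not the network of this disclosure), an off-the-shelf torchvision FCN can be adapted to the four-channel color-plus-depth input by widening its first convolution; the class count below assumes the seven pixel classes listed above.

```python
# Adapt a torchvision FCN (a stand-in for the Dense U-Net / Mask R-CNN
# variants named above) to accept a four-channel RGB+depth frame.
import torch
import torch.nn as nn
from torchvision.models.segmentation import fcn_resnet50

NUM_CLASSES = 7  # cap, label, tube body, liquid, air, serum/plasma, settled blood

model = fcn_resnet50(weights=None, num_classes=NUM_CLASSES)
model.backbone.conv1 = nn.Conv2d(4, 64, kernel_size=7, stride=2,
                                 padding=3, bias=False)  # 3 -> 4 input channels

rgbd = torch.randn(1, 4, 480, 640)   # one packed color+depth frame
logits = model(rgbd)["out"]          # (1, NUM_CLASSES, 480, 640)
pred = logits.argmax(dim=1)          # per-pixel class identification
```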
[0058] Based on the pixel class identification 960, all pixels identified as being in a same class may be grouped as shown in settled blood portion 962, liquid region 964, barcode/label 966, container tubular body 968, and container cap 970. From the pixel groupings, various physical features (e.g., LA, SB, Wi, HSB, HT, W, volume of liquid region, volume of settled blood portion, barcode/label condition, cap type, and cap color) may be determined or calculated.
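As a simplified sketch of how such groupings can yield measurements, the heights HSP and HSB can be read off the row extents of the serum/plasma and settled-blood masks, given a vertical millimeters-per-pixel factor at the container's depth. This ignores the perspective of the tilted view, and all names are illustrative.

```python
# Estimate HSP, HSB, and HTOT = HSP + HSB from boolean segmentation masks,
# assuming a known vertical mm-per-pixel scale (a simplification of the
# tilted-view geometry).
import numpy as np

def region_heights_mm(serum_mask, settled_mask, mm_per_px):
    rows_sp = np.where(serum_mask.any(axis=1))[0]    # rows with serum/plasma
    rows_sb = np.where(settled_mask.any(axis=1))[0]  # rows with settled blood
    hsp = (rows_sp.max() - rows_sp.min() + 1) * mm_per_px
    hsb = (rows_sb.max() - rows_sb.min() + 1) * mm_per_px
    return hsp, hsb, hsp + hsb
```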
[0059] In some embodiments, the depth image records the physical distance of each pixel in the Z-axis (see FIG. 8). If the region of an object (e.g., a sample container) in an image is known, then the corresponding depth information in that object region provides the geometry information of the object surface. The depth information is used as a fourth channel in addition to the three RGB channels by depth inference/refinement 744 to estimate a better depth image. Thereafter, the semantic segmentation network may also leverage the depth information in addition to RGB information to perform object segmentation. Once the object (e.g., a sample container) has been identified, a three-dimensional model (e.g., a sample container model) may be fit to the object region to estimate the object’s geometry such as height, diameter, and tilt angle. That is, variously-sized three-dimensional models, which may be, e.g., simple cylindrical models each having known height, diameter, three- dimensional orientation, and tilt angle, may be applied to an object region in an image identified as a sample container (and more particularly, the sample container body) until one
of the models matches (fits) the object region. Other three-dimensional models may include different types of caps that can be used to estimate a sample container cap’s geometry in the same way. The cap models may be separate from the sample container models or may be combined into sample container/cap models. Also, the barcode label condition may be inferred by examining the surface smoothness from the estimated depth.
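The model-matching step can be sketched as scoring a small library of candidate cylinder geometries against the three-dimensional points recovered for the tube-body region and keeping the best fit. The candidate dimensions below are illustrative, the points are assumed to be expressed in a holder-aligned frame with Z vertical, and a fuller implementation would also search over three-dimensional orientation and tilt angle.

```python
# Pick the best-fitting cylindrical container model by comparing each
# candidate's height/diameter against the observed extents of the region's
# 3-D points (illustrative candidate dimensions; Z assumed vertical).
import numpy as np

CANDIDATES = [  # (height_mm, diameter_mm) of known container types
    (75.0, 13.0), (100.0, 13.0), (75.0, 16.0), (100.0, 16.0),
]

def fit_container_model(region_points: np.ndarray):
    """region_points: (N, 3) points from the segmented tube-body region (mm)."""
    height = region_points[:, 2].max() - region_points[:, 2].min()
    xy = region_points[:, :2] - region_points[:, :2].mean(axis=0)
    diameter = 2.0 * np.median(np.linalg.norm(xy, axis=1))  # surface radius
    return min(CANDIDATES, key=lambda c: abs(c[0] - height) + abs(c[1] - diameter))
```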
[0060] FIG. 10 illustrates a method 1000 of operating an automated diagnostic analysis system according to one or more embodiments. At process block 1002, method 1000 may include capturing with an imaging sensor one or more images of one or more sample containers at a tilted angle relative to the one or more sample containers (e.g., relative to a horizontal plane), wherein the one or more sample containers are held in a sample container holder received at an input module of the automated diagnostic analysis system. For example, referring to FIGS. 1 , 4, and 5, imaging sensor 526 arranged at tilted angle TA may capture one or more images of sample containers 503 held in sample container holder 520 received in input module M0.
[0061] At process block 1004, method 1000 may include analyzing, via a computer processor executing a trained machine learning model, the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container. For example, referring to FIGS. 1 , 7, and 9, computer processor 108P executing trained machine learning model 108PI of input module controller 108 may analyze one or more captured images to perform sample container characterization via depth-assisted sample container characterization process 700 including, in some embodiments, sample container characterization 946.
[0062] At process block 1006, method 1000 may include directing, via the computer processor or another computer processor, one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container. For example, referring to FIGS. 1-4, computer processor 108P of input module controller 108 may direct robot 422 of input module M0 to grasp and move a sample container 203 or 403 to a sample carrier 102B of the automated diagnostic analysis system based on the determined physical characteristics of that sample container 203 or 403.
[0063] While this disclosure is susceptible to various modifications and alternative forms, specific method and apparatus embodiments have been shown by way of example in the drawings and are described in detail herein. It should be understood, however, that the
particular methods and apparatus disclosed herein are not intended to limit the disclosure or the following claims.
[0064] Independent of the grammatical term usage, individuals with male, female or other gender identities are included within the term.
NON-LIMITING ILLUSTRATIVE EMBODIMENTS
[0065] The following is a list of non-limiting illustrative embodiments disclosed herein:
[0066] Illustrative embodiment 1. An automated diagnostic analysis system, comprising: an input module operative to receive a sample container holder including one or more sample containers, the input module comprising an imaging sensor positioned within the input module to capture one or more images of the one or more sample containers at a tilted angle relative to a horizontal plane; and a computer processor and a trained machine learning model executable thereon operative to: capture one or more images of the one or more sample containers with the imaging sensor at the tilted angle; analyze the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container; and direct one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
[0067] Illustrative embodiment 2. The automated diagnostic analysis system of illustrative embodiment 1 , wherein the one or more system components include: an input module robot configured to grasp a sample container and move it to a sample carrier of the automated diagnostic analysis system based on the determined physical characteristics of that sample container; or a pre-processing module configured to remove a cap from a sample container based on the determined physical characteristics of that sample container.
[0068] Illustrative embodiment 3. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the imaging sensor is a color or a color-plus-depth imaging sensor.
[0069] Illustrative embodiment 4. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the imaging sensor is movable within the input module to a first viewpoint to capture the one or more images of the one or more sample containers with the imaging sensor at the tilted angle.
[0070] Illustrative embodiment 5. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained
machine learning model executable thereon are further operative to: position the imaging sensor at a second viewpoint at the tilted angle relative to the one or more sample containers; capture one or more images of the one or more sample containers with the imaging sensor at the second viewpoint and the tilted angle; and analyze the captured one or more images from the first and second viewpoints including inferring or refining three- dimensional depth information from the captured one or more images from the first and second viewpoints to determine physical characteristics of each sample container.
[0071] Illustrative embodiment 6. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained machine learning model executable thereon are further operative to analyze the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using noisy output received from a depth sensor to determine physical characteristics of each sample container.
[0072] Illustrative embodiment 7. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained machine learning model executable thereon are further operative to analyze the captured one or more images using inconsistent output received from a monocular depth algorithm across multiple image frames to determine physical characteristics of each sample container.
[0073] Illustrative embodiment 8. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the determined physical characteristics include one or more of sample container type, height, diameter, position or orientation within the sample container holder, cap type, cap color, a sample’s volume or height within the sample container, and barcode/label position or condition thereof on the sample container.
[0074] Illustrative embodiment 9. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the tilted angle as measured downward relative to the horizontal plane is greater than 0 degrees and less than 90 degrees.
[0075] Illustrative embodiment 10. The automated diagnostic analysis system according to one of the preceding embodiments, wherein the computer processor and the trained machine learning model executable thereon are further operative to: identify a sample container in an object region of the one or more images; and fit a three-dimensional sample container model to the object region to estimate physical characteristics of the sample container identified in the object region.
[0076] Illustrative embodiment 11. A method of operating an automated diagnostic analysis system, the method comprising: capturing with an imaging sensor one or more images of one or more sample containers at a tilted angle relative to a horizontal plane, the one or more sample containers held in a sample container holder received at an input module of the automated diagnostic analysis system; analyzing, via a computer processor executing a trained machine learning model, the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container; and directing, via the computer processor or another computer processor, one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
[0077] Illustrative embodiment 12. The method of illustrative embodiment 11 , wherein the capturing comprises capturing the one or more images of the one or more sample containers at the tilted angle with a color or a color-plus-depth imaging sensor.
[0078] Illustrative embodiment 13. The method according to one of the preceding embodiments, further comprising moving the imaging sensor via a support structure to a first viewpoint in the input module prior to the capturing with the imaging sensor the one or more images of the one or more sample containers.
[0079] Illustrative embodiment 14. The method according to one of the preceding embodiments, further comprising: positioning the imaging sensor via the support structure at a second viewpoint at the tilted angle relative to the horizontal plane; capturing one or more images of the one or more sample containers at the second viewpoint and the tilted angle with the imaging sensor; and analyzing, via the computer processor executing the trained machine learning model, the captured one or more images from the first and second viewpoints including inferring or refining three-dimensional depth information from the captured one or more images from the first and second viewpoints to determine physical characteristics of each sample container.
[0080] Illustrative embodiment 15. The method according to one of the preceding embodiments, wherein the directing comprises one or more of: directing an input module robot to grasp a sample container and move it to a sample carrier of the automated diagnostic analysis system based on the determined physical characteristics of that sample container; and directing a pre-processing module to remove a cap from a sample container based on the determined physical characteristics of that sample container.
[0081] Illustrative embodiment 16. The method according to one of the preceding embodiments, wherein the analyzing comprises analyzing, via the computer processor, the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using noisy output received from a depth sensor to determine physical characteristics of each sample container.
[0082] Illustrative embodiment 17. The method according to one of the preceding embodiments, wherein the analyzing comprises analyzing, via the computer processor, the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using inconsistent output received from a monocular depth algorithm across multiple image frames to determine physical characteristics of each sample container.
[0083] Illustrative embodiment 18. The method according to one of the preceding embodiments, wherein the determined physical characteristics include one or more of sample container type, height, diameter, position or orientation within the sample container holder, cap type, cap color, a sample’s volume or height within the sample container, and barcode/label position or condition thereof on the sample container.
[0084] Illustrative embodiment 19. The method according to one of the preceding embodiments, wherein the tilted angle as measured downward from the horizontal plane ranges from 30 degrees to 60 degrees.
[0085] Illustrative embodiment 20. The method according to one of the preceding embodiments, wherein the analyzing, via the computer processor executing the trained machine learning model, further comprises: identifying a sample container in an object region of the one or more images; and fitting a three-dimensional sample container model to the object region to estimate physical characteristics of the sample container identified in the object region.
Claims
1. An automated diagnostic analysis system, comprising: an input module operative to receive a sample container holder including one or more sample containers, the input module comprising an imaging sensor positioned within the input module to capture one or more images of the one or more sample containers at a tilted angle relative to a horizontal plane; and a computer processor and a trained machine learning model executable thereon operative to: capture one or more images of the one or more sample containers with the imaging sensor at the tilted angle; analyze the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container; and direct one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
2. The automated diagnostic analysis system of claim 1, wherein the one or more system components include: an input module robot configured to grasp a sample container and move it to a sample carrier of the automated diagnostic analysis system based on the determined physical characteristics of that sample container; or a pre-processing module configured to remove a cap from a sample container based on the determined physical characteristics of that sample container.
3. The automated diagnostic analysis system of claim 1, wherein the imaging sensor is a color or a color-plus-depth imaging sensor.
4. The automated diagnostic analysis system of claim 1, wherein the imaging sensor is movable within the input module to a first viewpoint to capture the one or more images of the one or more sample containers with the imaging sensor at the tilted angle.
5. The automated diagnostic analysis system of claim 4, wherein the computer processor and the trained machine learning model executable thereon are further operative to: position the imaging sensor at a second viewpoint at the tilted angle relative to the one or more sample containers; capture one or more images of the one or more sample containers with the imaging sensor at the second viewpoint and the tilted angle; and analyze the captured one or more images from the first and second viewpoints including inferring or refining three-dimensional depth information from the captured one or more images from the first and second viewpoints to determine physical characteristics of each sample container.
6. The automated diagnostic analysis system of claim 1, wherein the computer processor and the trained machine learning model executable thereon are further operative to analyze the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using noisy output received from a depth sensor to determine physical characteristics of each sample container.
7. The automated diagnostic analysis system of claim 1, wherein the computer processor and the trained machine learning model executable thereon are further operative to analyze the captured one or more images using inconsistent output received from a monocular depth algorithm across multiple image frames to determine physical characteristics of each sample container.
8. The automated diagnostic analysis system of claim 1, wherein the determined physical characteristics include one or more of sample container type, height, diameter, position or orientation within the sample container holder, cap type, cap color, a sample’s volume or height within the sample container, and barcode/label position or condition thereof on the sample container.
9. The automated diagnostic analysis system of claim 1, wherein the tilted angle as measured downward relative to the horizontal plane is greater than 0 degrees and less than 90 degrees.
10. The automated diagnostic analysis system of claim 1, wherein the computer processor and the trained machine learning model executable thereon are further operative to: identify a sample container in an object region of the one or more images; and fit a three-dimensional sample container model to the object region to estimate physical characteristics of the sample container identified in the object region.
11. A method of operating an automated diagnostic analysis system, the method comprising: capturing with an imaging sensor one or more images of one or more sample containers at a tilted angle relative to a horizontal plane, the one or more sample containers held in a sample container holder received at an input module of the automated diagnostic analysis system; analyzing, via a computer processor executing a trained machine learning model, the captured one or more images including inferring or refining three-dimensional depth information from the captured one or more images to determine physical characteristics of each sample container; and directing, via the computer processor or another computer processor, one or more system components to take respective actions related to each sample container based on the determined physical characteristics of each sample container.
12. The method of claim 11, wherein the capturing comprises capturing the one or more images of the one or more sample containers at the tilted angle with a color or a color-plus-depth imaging sensor.
13. The method of claim 11, further comprising moving the imaging sensor via a support structure to a first viewpoint in the input module prior to the capturing with the imaging sensor the one or more images of the one or more sample containers.
14. The method of claim 13, further comprising: positioning the imaging sensor via the support structure at a second viewpoint at the tilted angle relative to the horizontal plane; capturing one or more images of the one or more sample containers at the second viewpoint and the tilted angle with the imaging sensor; and analyzing, via the computer processor executing the trained machine learning model, the captured one or more images from the first and second viewpoints including inferring or refining three-dimensional depth information from the captured one or more images from the first and second viewpoints to determine physical characteristics of each sample container.
15. The method of claim 11, wherein the directing comprises one or more of: directing an input module robot to grasp a sample container and move it to a sample carrier of the automated diagnostic analysis system based on the determined physical characteristics of that sample container; and directing a pre-processing module to remove a cap from a sample container based on the determined physical characteristics of that sample container.
16. The method of claim 11, wherein the analyzing comprises analyzing, via the computer processor, the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using noisy output received from a depth sensor to determine physical characteristics of each sample container.
17. The method of claim 11, wherein the analyzing comprises analyzing, via the computer processor, the captured one or more images by inferring or refining three-dimensional depth information from the captured one or more images using inconsistent output received from a monocular depth algorithm across multiple image frames to determine physical characteristics of each sample container.
18. The method of claim 11, wherein the determined physical characteristics include one or more of sample container type, height, diameter, position or orientation within the sample container holder, cap type, cap color, a sample’s volume or height within the sample container, and barcode/label position or condition thereof on the sample container.
19. The method of claim 11, wherein the tilted angle as measured downward from the horizontal plane ranges from 30 degrees to 60 degrees.
20. The method of claim 11, wherein the analyzing, via the computer processor executing the trained machine learning model, further comprises: identifying a sample container in an object region of the one or more images; and fitting a three-dimensional sample container model to the object region to estimate physical characteristics of the sample container identified in the object region.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463548886P | 2024-02-02 | 2024-02-02 | |
| US63/548,886 | 2024-02-02 | 2024-02-02 | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025165595A1 (en) | 2025-08-07 |
Family
ID: 96591111
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2025/012279 (WO2025165595A1, pending) | Depth-assisted sample container characterization | 2024-02-02 | 2025-01-20 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025165595A1 (en) |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150145966A1 (en) * | 2013-11-27 | 2015-05-28 | Children's National Medical Center | 3d corrected imaging |
| WO2023230024A1 (en) * | 2022-05-23 | 2023-11-30 | Siemens Healthcare Diagnostics Inc. | Methods and apparatus for determining a viewpoint for inspecting a sample within a sample container |
Similar Documents
| Publication | Title |
|---|---|
| CN110573859B (en) | Method and apparatus for HILN characterization using convolutional neural networks |
| EP3408641B1 (en) | Methods and apparatus for multi-view characterization |
| CN107003124B (en) | Drawer Vision System |
| JP7324757B2 (en) | Method and Apparatus for Biofluid Specimen Characterization Using Reduced-Training Neural Networks |
| JP6870826B2 (en) | Methods and devices configured to quantify samples from lateral multi-viewpoints |
| CN110199172B (en) | Method, apparatus and quality inspection module for detecting hemolysis, icterus, lipemia, or normality of a sample |
| JP7216069B2 (en) | Deep learning volume quantification method and apparatus |
| US11763461B2 (en) | Specimen container characterization using a single deep neural network in an end-to-end training fashion |
| CN108603817A (en) | Method and apparatus suitable for identifying sample containers from multiple side views |
| US11927736B2 (en) | Methods and apparatus for fine-grained HIL index determination with advanced semantic segmentation and adversarial training |
| CN116917939A (en) | Method and apparatus suitable for identifying the 3D center position of a sample container using a single image capture device |
| US20250321241A1 (en) | Methods and apparatus for determining a viewpoint for inspecting a sample within a sample container |
| US20240169517A1 (en) | Methods and apparatus for automated specimen characterization using diagnostic analysis system with continuous performance based training |
| JP2025523000A (en) | Device and method for training sample characterization algorithms in diagnostic laboratory systems |
| WO2025165595A1 (en) | Depth-assisted sample container characterization |
| WO2024054894A1 (en) | Devices and methods for training sample container identification networks in diagnostic laboratory systems |
| JP7177917B2 (en) | Visualization analyzer and visual learning method |
| US12287320B2 (en) | Methods and apparatus for hashing and retrieval of training images used in HILN determinations of specimens in automated diagnostic analysis systems |
| CN114298963A (en) | Method for determining at least one state of at least one cavity of a transfer interface configured to transfer a sample tube |
| HK40013553B (en) | Methods and apparatus for HILN characterization using convolutional neural network |
| HK40013553A (en) | Methods and apparatus for HILN characterization using convolutional neural network |
| HK40041703A (en) | Specimen container characterization using a single deep neural network in an end-to-end training fashion |
| HK40026665A (en) | Methods and apparatus for bio-fluid specimen characterization using neural network having reduced training |
| HK40013224A (en) | Methods and apparatus for label compensation during specimen characterization |
| HK1235856B (en) | Drawer vision system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25748896; Country of ref document: EP; Kind code of ref document: A1 |