
WO2025174726A1 - Systems and methods for identifying grasping locations on sample containers in diagnostic laboratory systems - Google Patents

Systems and methods for identifying grasping locations on sample containers in diagnostic laboratory systems

Info

Publication number
WO2025174726A1
WO2025174726A1 (PCT/US2025/015351)
Authority
WO
WIPO (PCT)
Prior art keywords
sample container
semantic
grasping
keypoints
sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2025/015351
Other languages
French (fr)
Inventor
Nikhil SHENOY
Yao-Jen Chang
Ankur KAPOOR
Benjamin S. Pollack
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Siemens Healthcare Diagnostics Inc
Original Assignee
Siemens Healthcare Diagnostics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Siemens Healthcare Diagnostics Inc filed Critical Siemens Healthcare Diagnostics Inc
Publication of WO2025174726A1 publication Critical patent/WO2025174726A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1612Programme controls characterised by the hand, wrist, grip control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/09Supervised learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • G06N20/20Ensemble learning
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06Recognition of objects for industrial automation

Definitions

  • Diagnostic laboratory systems conduct clinical chemistry tests that identify analytes or other constituents in biological samples such as blood serum, blood plasma, urine, interstitial liquid, cerebrospinal liquids, and the like.
  • the biological samples are collected in sample containers, such as test tubes, and are transported to a diagnostic laboratory system.
  • the sample containers are loaded into one or more sample container carriers (e.g., tube tray or racks).
  • the sample container carriers are then loaded into a sample handler (e.g., an input/output module) of the laboratory system that enables the laboratory system to receive and discharge the sample containers.
  • Robots within the sample handler and elsewhere in the laboratory system may grasp and transfer the sample containers between various locations and components within the laboratory system. However, the robots may operate using fixed grasping rules that may not account for variations in the type and geometry of different sample containers that may be used in the laboratory system.
  • sample containers may have different diameters and heights.
  • the robot may descend to a fixed height above the sample container carrier.
  • the robot may not be able to grasp that shorter sample container.
  • the robot may collide with that sample container or grasp the sample container at a sensitive area, which may break the sample container.
  • FIG. 1 illustrates a perspective view of a diagnostic laboratory system located in a laboratory according to one or more embodiments.
  • FIG. 2 illustrates a detailed view of the computer of FIG. 1 in communication with a sample handler of the diagnostic laboratory system according to one or more embodiments.
  • FIG. 3 illustrates a robot and a sample container carrier located in a sample handler of a diagnostic laboratory system, wherein an imaging device is attached to a gripper of a robot within the sample handler according to one or more embodiments.
  • FIG. 4A illustrates a side perspective view of nodes of a robot gripper grasping a sample container on a barcode label, wherein an imaging device is attached to the robot gripper according to one or more embodiments.
  • FIG. 4B illustrates a side elevation view of the robot gripper of FIG. 4A extending along an x-axis proximate a sample container according to one or more embodiments.
  • FIG. 4C illustrates a top plan view of a sample container showing locations where nodes of a robot gripper may contact an exterior surface of a sample container according to one or more embodiments.
  • FIG. 4D illustrates nodes of a robot gripper grasping a sample container at a higher vertical position than the nodes grasping the sample container shown in FIG. 4A according to one or more embodiments.
  • FIG. 4E illustrates a robot gripper configured with a large number of degrees of freedom according to one or more embodiments.
  • FIG. 4F illustrates a top plan view of a sample container wherein three nodes of a robot gripper are contacting an exterior of the sample container according to one or more embodiments.
  • FIG. 5 illustrates a side elevation view of an image of a sample container and different semantic keypoints located on the image of the sample container according to one or more embodiments.
  • FIGS. 6A-6E illustrate examples of real-world images of different sample containers and areas on the sample containers that should be avoided during grasping operations by a robot in a diagnostic laboratory system according to one or more embodiments.
  • FIG. 7 illustrates a synthetic image of a plurality of sample containers that may be used to train a semantic keypoints identification model in a diagnostic laboratory system according to one or more embodiments.
  • FIG. 8 illustrates a real-world image of a plurality of sample containers that may be used to train a semantic keypoints identification model in a diagnostic laboratory system according to one or more embodiments.
  • FIG. 9 illustrates a heatmap showing centers of mass of sample containers overlaid onto the images of sample containers in a synthetic image according to one or more embodiments.
  • FIG. 10 illustrates a flowchart of a method of grasping a sample container in a diagnostic laboratory system using a robot according to one or more embodiments.
  • FIG. 11 illustrates a flowchart of a method of grasping a sample container in a diagnostic laboratory system using a robot according to one or more embodiments.
  • Automated diagnostic laboratory systems perform analyses (e.g., tests) on various biological samples, such as blood, blood serum, urine, and other bodily fluids.
  • the samples are collected from patients and placed into sample containers, such as test tubes.
  • the sample containers along with testing instructions are then sent to an automated diagnostic laboratory system.
  • the testing instructions may indicate which tests are to be performed on the samples by instruments located in the diagnostic laboratory system.
  • a technician or software executing on a computer may determine which instruments in the diagnostic laboratory are to perform each test on each of the samples per the instructions.
  • the components in diagnostic laboratory systems can be broadly characterized as sample transport systems, sample movers, and instruments.
  • the sample transport systems may include hardware, such as tracks, that are configured to move the sample movers throughout the laboratory systems.
  • the sample movers may receive the sample containers and move the sample containers on the tracks.
  • the instruments may be modules and/or analyzers that the sample movers may be directed to, wherein processes and analyses may be performed on the samples by the instruments. Examples of the instruments include centrifuges, chemistry analyzers, decappers, storage modules, and refrigeration modules.
  • Typical workflow in a laboratory system may include loading sample containers into sample movers and then instructing the sample transport system to transport the sample movers to one or more of the instruments.
  • Many laboratory systems use robot systems to move the sample containers into and out of the sample movers.
  • the robot systems may use robot grippers (e.g., end effectors) to grasp and move the sample containers.
  • the robot systems may move quality control packs, calibrator packs, and other items throughout the laboratory systems.
  • Some laboratory systems include one or more input/output modules (e.g., sample handlers) where sample containers are loaded into and removed from the laboratory systems via sample container carriers.
  • a robot picks up the sample containers one at a time from the sample container carriers and places the sample containers into sample movers that move the sample containers to other modules via the transport system, such as the tracks. After tests have been performed, the sample movers move the sample containers back to the sample handler where a robot transfers the sample containers one at a time from the sample movers back to the sample container carriers, which are then removed from the sample handlers.
  • Robots configured to move the sample containers may be used in many locations in the laboratory systems.
  • some laboratory systems include transfer stations where robots transfer sample containers from one track to another track.
  • a first sample mover containing a sample container arrives at a transfer station via a first track.
  • An empty second sample mover arrives at the transfer station via a second track.
  • a robot grasps the sample container from the first sample mover and transfers the sample container to the second sample mover.
  • sample containers may stop partially before or after the precise transfer position, which shifts the sample container slightly away from where the robot fingers are expected to grasp the sample containers. The result is that the fingers can descend on top of the sample containers and puncture tops or break the sample containers. This may be a significant problem should biohazardous liquids spill from the sample containers onto the instruments and the sample movers. The sample movers may then spread the biohazardous liquids to other parts of the laboratory systems. Even if the misalignment of sample containers at a transfer position results in only a relatively small percentage of damaged sample containers, the aggregate effect can seriously hamper the operation of a laboratory system.
  • the robots may have grippers configured to grasp the sample containers.
  • the robots and/or robot controllers may use grasping rules to operate and move the grippers. Given all the different items (e.g., different sample container types) that may be grasped by the robots, it may not be advantageous to use the same grasping rules for every item. For example, different sample containers may have different geometries and surface properties, so using the same grasping rules for the different sample containers is inefficient. Furthermore, should a gripper grasp all the different sample container types using the same grasping rules, there is a risk that short sample containers may be grasped at unstable locations, such as too close to their tops, which may cause breaks or spillage of the sample container contents.
  • sample containers may have different diameters and/or heights.
  • the height of the sample container may not be considered when the robot descends downward to grasp the sample container. Instead, the robot may attempt to grasp the sample container at a fixed height. When sample containers are shorter than this fixed height, the robot will grasp nothing. When sample containers are taller than the fixed height, the robot may grasp the sample containers at sensitive or unstable areas, which may cause the sample containers to break or result in other problems.
  • sample containers may have indicia, such as barcodes and barcode labels, attached to the exterior surfaces of the sample containers.
  • the indicia may reference patient information or testing criteria, for example. These barcodes and barcode labels should remain readable or scannable as the sample containers are moved throughout the laboratory system. The locations of the indicia may vary between different types of sample containers. Thus, if the same grasping rules are used for all the different types of sample containers, the grippers may grasp and consequently damage the indicia.
  • the exteriors of some sample containers may have exposed adhesives, such as from barcode labels, and/or liquids spilled on the exterior surfaces. These adhesives and/or liquids may hinder grasping actions of the robot grippers should the robot grippers grasp these regions. For example, the adhesives may cause the robot grippers to adhere to the sample containers and the liquids may cause the robot grippers to slide relative to the sample containers.
  • the methods and apparatus described herein overcome the issues with conventional laboratory systems by using imaging devices to capture images of the sample containers before the robots grasp the sample containers. Visual information generated by the imaging devices is used to determine the best (optimal) grasping locations on the sample containers for the robots to grasp.
  • the methods and apparatus described herein reduce instances of breaks, spills, and other failures during sample container handling.
  • the methods and apparatus described herein may direct the robots to grasp sample containers at different heights to accommodate sample containers with different geometric configurations.
  • Machine learning models or networks may identify and locate semantic keypoints on certain objects in an image.
  • a semantic keypoint is a predetermined feature of or “point of interest” on a sample container to be analyzed in the determination of an optimum grasping location on the sample container.
  • Examples of semantic keypoints may include corners or edges of barcode labels, corners or edges of a sample container, corners or edges of a cap on the sample container, height of a liquid in a sample container, and a center of mass of the sample container.
  • the semantic keypoints may be analyzed, such as by a machine learning model, to determine the optimal grasping locations of the sample containers.
  • the methods may also consider the degrees-of-freedom of the robot gripper to provide the optimal grasping locations for the grippers to grasp the sample containers.
  • the optimal grasping locations may result in safe and reliable grasps while abating potential damage to the sample containers and the system through breaks and spills of the sample containers.
  • the semantic keypoints may be analyzed to identify one or more surface and/or geometric properties of the sample containers.
  • the surface properties may include the heights of the sample containers, the materials from which the sample containers are made, indicia located on the exteriors of the sample containers, surface anomalies, and liquid levels in the sample containers.
  • the grasping rules of the robot grippers may be determined.
  • the grasping rules may avoid blocking indicia or contacting liquids, for example.
  • Grasping locations for the robot grippers on the sample containers may be determined in response to the grasping rules.
  • the grasping locations may be locations where the grippers will not adversely affect the sample containers.
  • the grasping locations also may be locations that enable the grippers to grasp the sample containers without sliding or sticking to the sample containers.
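  • As an illustration of how such rules might be applied in software (a minimal sketch only; the data structure, margins, and function names below are assumptions and are not taken from this disclosure), keypoint-derived properties such as tube height, cap position, and barcode label extent can be reduced to an allowed grasp-height band on the tube:

```python
# Illustrative sketch only: derive an allowed grasp-height band on a tube from
# keypoint-derived properties (avoid the barcode label, stay below the cap).
# All names and margins are hypothetical, not taken from the disclosure.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class ContainerProperties:
    tube_height_mm: float           # tube height derived from keypoints
    cap_bottom_mm: Optional[float]  # height of cap bottom, None if uncapped
    label_top_mm: Optional[float]   # vertical extent of barcode label
    label_bottom_mm: Optional[float]


def allowed_grasp_band(props: ContainerProperties,
                       margin_mm: float = 3.0) -> Optional[Tuple[float, float]]:
    """Return (low, high) grasp heights that avoid the label and the cap."""
    high = (props.cap_bottom_mm if props.cap_bottom_mm is not None
            else props.tube_height_mm) - margin_mm
    low = margin_mm
    # If a barcode label is present, prefer grasping below it.
    if props.label_bottom_mm is not None and props.label_bottom_mm - margin_mm > low:
        high = min(high, props.label_bottom_mm - margin_mm)
    return (low, high) if high > low else None


if __name__ == "__main__":
    props = ContainerProperties(tube_height_mm=100.0, cap_bottom_mm=92.0,
                                label_top_mm=85.0, label_bottom_mm=40.0)
    print(allowed_grasp_band(props))  # (3.0, 37.0)
```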
  • the diagnostic laboratory system 102 may include a plurality of diagnostic instruments 104 (a few labelled) that are configured to perform the same or different tests on the biological samples.
  • the diagnostic instruments 104 may be interconnected by a transport system (e.g., track 216 - FIG. 2).
  • the transport system may be configured to transport the biological samples between the diagnostic instruments 104 and/or other devices in the laboratory system 102, such as centrifuges and decappers.
  • the configuration of the laboratory system 102 may be different from the configuration shown in FIG. 1.
  • the laboratory system 102 may only include a single one of the diagnostic instruments 104.
  • the diagnostic laboratory system 102 may be coupled to a computer 120 that may be located within the laboratory 100 or external to the laboratory 100. In some embodiments, portions of the computer 120 may be located within the laboratory 100 and other portions of the computer 120 may be located external to the laboratory 100.
  • the computer 120 may include a processor 122 and a memory 124, wherein the memory 124 stores programs 126 configured to be executed or run on the processor 122. In some embodiments, the memory 124 and/or the programs 126 may be located external to the computer 120. For example, the computer 120 may be connected to the Internet to access external data and the like.
  • the programs 126 may operate the diagnostic instruments 104 and process data generated by the diagnostic instruments 104.
  • the memory 124 may be any suitable type of memory, such as, but not limited to one or more of a volatile memory and/or a non-volatile memory.
  • the memory 124 may have a plurality of programs 126 that include instructions stored therein that, when executed by processor 122, cause the processor 122 to perform various actions specified by one or more of the stored instructions.
  • the program instructions may be provided to the processor 122 to perform operations in accordance with the present systems and methods specified in the flowcharts and/or block diagrams described herein.
  • the processor 122 so configured, may become a special purpose machine particularly suited for performing in accordance with the present systems and methods.
  • the program instructions which may be stored in a computer readable medium such as the memory 124, can direct the processor 122 to function in a particular manner.
  • the term "memory" as used herein can refer to both non-transitory and transitory memory.
  • At least one of the diagnostic instruments 104 or other components may be a sample handler 130, which is described in greater detail below.
  • the sample handler 130 is a component in the laboratory system 102.
  • the laboratory system 102 may include a plurality of sample handlers.
  • the operations performed by the sample handler 130 may be implemented in one or more of the diagnostic instruments 104.
  • the sample handler 130 may be located in various locations in the diagnostic laboratory system 102, such as in individual ones of the diagnostic instruments 104.
  • FIG. 2 illustrates a more detailed embodiment of the computer 120 in communication with the sample handler 130.
  • the computer 120 may include a plurality of programs 126 that may be run on the processor 122.
  • One of the programs 126 may be a robot controller 204 that may be configured to direct a robot 206 to move to specific locations as described herein.
  • the robot controller 204 executing on the processor 122, may direct the robot 206 or portions of the robot 206 to move within the sample handler 130 and to perform certain operations.
  • the robot controller 204 may also direct the robot 206 to move sample containers 210 between sample container carriers 212 and sample movers 214.
  • the sample movers 214 may move the sample containers 210 between the diagnostic instruments 104 (FIG. 1) and the sample handler 130 by way of a transport system, which in the embodiment of FIG. 2 may include a track 216 configured to move the sample movers 214.
  • the embodiment of the sample handler 130 shown in FIG. 2 has three sample container carriers 212, which are referred to individually as a first container carrier 212A, a second container carrier 212B, and a third container carrier 212C. Some of the container slots 232 may be occupied with sample containers 210 and are identified with dark fill. A sample container 210A is shown occupying a container slot in the third container carrier 212C and will be referenced in examples herein.
  • An image processor 220 may be coupled to the computer 120 and may be configured to receive real-world image data 222 generated by an imaging device 224 (e.g., a digital camera). In some embodiments, one or more portions of the image processor 220 may be implemented in the imaging device 224. In some embodiments, the imaging device 224 may be configured to capture three-dimensional (3D) images of the sample containers 210, the sample container carriers 212, and other items. The image processor 220 may be configured to direct the imaging device 224 to capture images, such as images of the sample containers 210 and other items in the laboratory system 102.
  • the laboratory system 102 may include a plurality of imaging devices. Some imaging devices may be stationary, and some may be mobile, such as imaging devices affixed to the robot 206 as described herein.
  • the real-world image data 222 may be data generated by the imaging device 224 and may include color data indicative of colors present in the captured images.
  • the image data 222 may be representative of 3D scenes.
  • the image data 222 may be representative of two images captured from two adjacent viewpoints.
  • the imaging device 224 may be a 3D camera that generates image data that includes data indicating distances between objects in the captured images and the imaging device 224.
  • the computer 120 may include a semantic keypoints identification model 226 configured or trained to identify semantic keypoints in images captured by the imaging device 224.
  • the semantic keypoints identification model 226 may be a machine learning model (e.g., a software model) or algorithm, such as a trained model or a network.
  • the semantic keypoints identification model 226 is or includes a deep neural network.
  • the semantic keypoints identification model 226 may be or include a convolutional neural network (CNN) trained or configured to identify semantic keypoints in images.
  • the semantic keypoints identification model 226 may be trained to identify properties such as dimensions of the sample containers 210, sample container geometry, whether the sample containers 210 have caps, barcodes and other indicia, and locations of these items.
  • the semantic keypoints identification model 226 may also be trained to identify and locate anomalies such as sample contents that have spilled from the sample containers 210, adhesives used to affix barcode labels to the sample containers 210, damage or markings on barcode labels, and other anomalies.
  • the semantic keypoints identification model 226 may also determine heights of the sample containers 210 and/or heights (e.g., levels) of samples in the sample containers 210.
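  • A minimal sketch of a fully convolutional heatmap network of the kind that could serve as such a model is shown below; the layer sizes, channel counts, number of keypoint categories, and the assumption of a four-channel RGB-D input are illustrative only and are not the architecture described in this disclosure.

```python
# Minimal sketch of a fully convolutional keypoint-heatmap network; all sizes
# and the keypoint categories are illustrative assumptions.
import torch
import torch.nn as nn


class KeypointHeatmapNet(nn.Module):
    def __init__(self, in_channels: int = 4, num_keypoint_types: int = 8):
        # in_channels=4 assumes an RGB-D input (3 color channels + depth).
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, num_keypoint_types, 4, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output: one heatmap per keypoint category, same spatial size as input.
        return torch.sigmoid(self.decoder(self.encoder(x)))


if __name__ == "__main__":
    net = KeypointHeatmapNet()
    rgbd = torch.randn(1, 4, 128, 128)   # batch of one RGB-D image
    heatmaps = net(rgbd)
    print(heatmaps.shape)                # torch.Size([1, 8, 128, 128])
```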
  • the robot 206 may include an arm 320 to which the gripper 304 may be attached.
  • the arm 320 may be affixed to the third gantry 314.
  • the robot 206 and the imaging device 224 may be arranged in an eye-in-hand configuration wherein the imaging device 224 moves with the gripper 304.
  • the imaging device 224 also may be affixed to the arm 320, which may move with the gripper 304.
  • the robot 206 may be configured to move the imaging device 224 throughout the sample handler 130 to capture images of items, such as the sample containers 210 (FIG. 2), from various viewpoints as described herein.
  • Different embodiments of the imaging device 224 may have many different physical configurations that enable the imaging device 224 to be affixed to the gripper 304 without interfering with the operation of the gripper 304.
  • FIG. 4A illustrates an enlarged view of the gripper 304 of FIG. 3.
  • the gripper 304 illustrated in FIG. 4A includes two fingers 400 that are referred to individually as a first finger 400A and a second finger 400B. Ends of the fingers 400 may have nodes 404 that are configured to contact the sample container 210A. Friction between the nodes 404 and the sample container 210A enables the gripper 304 to grasp the sample container 210A and move the sample container 210A as described herein.
  • each of the two fingers 400 includes two nodes, which are referred to as node 404A, node 404B, node 404C, and node 404D.
  • the gripper 304 is described as moving herein by way of the robot 206.
  • the robot controller 204 may include instructions that when executed by the processor 122 cause predetermined forces to be applied by the nodes 404 of the gripper 304 to the sample container 210A.
  • the forces may be at least partially dependent on the material of the sample container 210A and may be determined by the grasping location algorithm 230.
  • the imaging device 224 may include an RGBD sensor, which generates red, green, and blue color data and depth data. In some embodiments, the imaging device 224 may include an RGB sensor with a separate depth or distance sensor. The imaging device 224 is shown attached to the gripper 304, and the depth information may be measured between the items being captured and the location of the gripper 304.
  • Additional reference is made to FIG. 4B, which illustrates the gripper 304 positioned horizontally on the x-axis. In this embodiment, the gripper 304 may have several degrees of freedom, such as six degrees of freedom, that may enable the gripper 304 to move as shown in FIG. 4B.
  • the imaging device 224 may capture elevation images of objects in the laboratory system 102, such as the sample container 210A, using the first field of view 412. In such embodiments, the imaging device 224 may only need one field of view (e.g., the first field of view 412).
  • FIG. 4C illustrates a top plan view of the sample container 210A showing locations where the nodes 404 may contact the exterior surface of the sample container 210A.
  • the gripper 304 may be configured to rotate or pivot in an arc R41 relative to the sample container 210A.
  • the nodes 404 shown as solid lines indicate first grasping locations where the nodes 404 may contact the sample container 210A.
  • the nodes 404 shown as dashed lines indicate second grasping locations where the nodes 404 may contact the sample container 210A.
  • the gripper 304 and thus the fingers 400 may rotate as shown by the arc R41 so that the nodes 404 may contact the sample container 210A at various arcuate locations on the surface of the sample container 210A.
  • the grasping location algorithm 230 may determine an optimal grasping location on the surface of the sample container 210A and the gripper 304 may rotate about the arc R41 to the optimal grasping location.
  • the gripper 304 and thus the fingers 400 and the nodes 404 may be configured to move in the z-direction to grasp the sample container 210A at various vertical locations or heights on the sample container 210A.
  • FIG. 4D illustrates the nodes 404 grasping the sample container 210A at a higher vertical position than the nodes 404 grasping the sample container 210A shown in FIG. 4A.
  • the gripper 304 and the fingers 400 may have positioned the nodes 404 so that the nodes 404 do not contact the barcode label 410 located on the surface of the sample container 210A.
  • in FIG. 4A, the nodes 404 are located vertically lower than in FIG. 4D and contact the barcode label 410.
  • the methods and apparatus described herein may prevent the nodes 404 from contacting barcode labels as shown in FIG. 4A.
  • the grasping location algorithm 230 may direct the gripper 304 to grasp the sample container 210A to avoid the barcode label 410.
  • the nodes 404 may contact the barcode labels, but avoid direct contact with the barcodes on the label.
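  • One way such avoidance could be computed (a hypothetical sketch; the angular parameterization, node offsets, and search step are assumptions) is to search over gripper rotations about the tube axis and pick the rotation whose contact nodes stay farthest from the angular span occupied by the barcode label:

```python
# Illustrative sketch: choose a gripper rotation about the tube axis so that
# the contact nodes of a parallel-jaw gripper avoid the angular span occupied
# by a barcode label. Angles and node layout are assumptions.
import math


def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two angles in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)


def choose_grasp_angle(label_start_deg: float, label_end_deg: float,
                       node_offsets_deg=(0.0, 180.0), step_deg: float = 5.0):
    """Return a gripper angle whose nodes stay farthest from the label span."""
    label_mid = (label_start_deg + label_end_deg) / 2.0
    best_angle, best_clearance = None, -1.0
    for k in range(int(360 / step_deg)):
        angle = k * step_deg
        # Clearance is the smallest distance from any node to the label centre.
        clearance = min(angular_distance(angle + off, label_mid)
                        for off in node_offsets_deg)
        if clearance > best_clearance:
            best_angle, best_clearance = angle, clearance
    return best_angle


if __name__ == "__main__":
    # Label occupies roughly 150 deg to 210 deg around the tube in this example.
    print(choose_grasp_angle(150.0, 210.0))
```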
  • FIG. 4E shows the gripper 304 configured with a large number of degrees of freedom, such as six degrees of freedom.
  • the degrees of freedom enable the gripper 304 to have many different poses to grasp and move the sample container 210A.
  • the degrees of freedom may also be referred to as degrees of pose.
  • the gripper 304 may have other degrees of freedom.
  • the gripper 304 of FIG. 4E may be configured to move or rotate in an arc R42.
  • the gripper 304, the fingers 400, and the nodes 404 may grasp the sample container 210A when the sample container 210 is askew or in a plurality of different poses.
  • the ability of the gripper 304 to move in the arc R42 may be in addition to movements in the z-direction and along the arc R41 as described in FIGS. 4C and 4D.
  • the gripper 304 may also be configured to move in an arc that is perpendicular to the arc R42.
  • the gripper 304 has been described as having two fingers 400 and four nodes 404.
  • the gripper 304 may have different configurations of fingers and nodes.
  • FIG. 4F illustrates a top plan view of the sample container 210A where three nodes 418 are contacting the exterior of the sample container 210A.
  • the three nodes 418 may be spaced equally around the circumference of the outer surface of the sample container 210A.
  • the gripper 304 may have other numbers of nodes that are configured to contact the outer surface of the sample container 210A.
  • Different types of sample containers 210 may be used in the sample handler 130.
  • the different types of sample containers 210 may have different surface properties and geometric properties.
  • the different types of sample containers 210 may be grasped and moved throughout the laboratory system 102 (FIG. 2) in a similar manner as the sample container 210A and as determined by the programs 126 (FIG. 2).
  • the semantic keypoints may be defined as category-level semantic points on images of the sample containers 210, such as 3D images of sample containers 210. Semantic keypoints may be points of interest with semantic meanings for images of the sample containers 210. The semantic keypoints may include corners of barcode labels, edges of sample container tubes, heights of liquids in the sample containers, center of mass of the sample containers, and other characteristics. In some embodiments, semantic keypoints may be referred to as category-level semantic points on 3D objects, wherein the categories are objects such as barcodes, caps, liquids, and other objects that may be identified in the images.
  • Semantic keypoints may provide concise abstractions for a variety of visual understanding tasks, such as grasping operations performed by the robot 206 (FIG. 3).
  • the semantic keypoints identification model 226 may define semantic keypoints separately for each category in the images of the sample containers 210, such as edges or corners on tubes, caps, and barcode labels and may provide concise abstractions of these objects regarding their compositions, shapes, and poses.
  • Semantic keypoints may be identified using deep learning methods, such as Mask R-CNN (ICCV 2017) and PifPaf (CVPR 2019). In other embodiments, convolutional neural networks (CNNs) may be employed to identify the semantic keypoints. Semantic keypoint identification may involve simultaneously detecting sample containers 210 and localizing their semantic keypoints.
  • FIG. 5 illustrates an image of an elevation view of a sample container 500, which may be similar to one or more of the sample containers 210 (FIG. 2).
  • the image of the sample container 500 may be a real-world image captured by the imaging device 224 or a synthetic or computer-generated image generated by the computer 120 or another computer.
  • the image of the sample container 500 may be used to train the semantic keypoints identification model 226 and/or the grasping location algorithm 230.
  • the sample container 500 shown in FIG. 5 includes a tube 502 and a cap 504 that seals the tube 502. Some versions of the sample containers 210 do not include caps.
  • the tube 502 may have indicia, which in the embodiment of FIG. 5 includes a barcode label 506 bearing a barcode 508 affixed to the exterior of the tube 502.
  • the tube 502 may contain a liquid 510, which may be a biological sample.
  • the sample container 500 may have a height H51 extending between a tube bottom 502A and a cap top 504A.
  • the tube 502 may have a height H52 extending between the tube bottom 502A and a cap bottom 504B.
  • the liquid 510 may have a height H53 extending between the tube bottom 502A and the top of the liquid 510.
  • FIG. 5 includes a plurality of dots that represent semantic keypoints 514.
  • the semantic keypoints identification model 226 may identify the semantic keypoints 514 and information or data derived from the semantic keypoints 514 as described herein.
  • the semantic keypoints 514 are shown as individual single dots for illustration purposes.
  • the individual semantic keypoints 514 may be a plurality of keypoints that define or outline portions of objects in the image of the sample container 500.
  • a semantic keypoint 516A and a semantic keypoint 516B identify bottom corners of the tube bottom 502A. The distance between the semantic keypoint 516A and the semantic keypoint 516B is the width W51 of the lower portion of the tube 502.
  • Semantic keypoints 522A, 522B, 522C, and 522D mark corners or edges of the cap 504.
  • a line between the semantic keypoint 522A and the semantic keypoint 522B defines the cap top 504A.
  • a line between the semantic keypoint 522C and the semantic keypoint 522D defines the cap bottom 504B.
  • the distance between the cap top 504A and the cap bottom 504B represents a height H54 of the cap 504.
  • the distance between the semantic keypoint 522A and the semantic keypoint 522B is the width W52 of the cap 504.
  • a semantic keypoint 524 may identify the color and/or texture of the cap 504.
  • a semantic keypoint 526 marks the top of the liquid 510, and the distance between the semantic keypoint 526 and the semantic keypoint 516A is the height H53 of the liquid 510.
  • Semantic keypoints 530A, 530B, 530C, and 530D mark edges or corners of the barcode label 506.
  • Semantic keypoints 532A, 532B, 532C, and 532D mark edges or corners of the barcode 508 itself.
  • the locations of the semantic keypoints 530A, 530B, 530C, and 530D may identify the location of the barcode label 506 on the tube 502.
  • the location of the barcode label 506 may be identified in addition to the size of the barcode label 506.
  • the semantic keypoints 532A, 532B, 532C, and 532D may identify the location of the barcode 508 on the barcode label 506 in addition to the size of the barcode 508.
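  • For illustration, the heights and widths called out above (H51-H54, W51, W52) could be recovered from 2D keypoint coordinates as in the following sketch, assuming a known millimetres-per-pixel scale; the coordinate values and the dictionary-style keypoint naming are hypothetical:

```python
# Illustrative sketch: recover the FIG. 5 heights and widths from 2D keypoint
# coordinates, assuming a known millimetres-per-pixel scale. Coordinates are
# made up; keypoint names mirror the figure labels.
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in pixels, y increasing downward


def container_dimensions(kp: Dict[str, Point], mm_per_px: float) -> Dict[str, float]:
    return {
        "H51_total":  (kp["516A"][1] - kp["522A"][1]) * mm_per_px,  # tube bottom to cap top
        "H52_tube":   (kp["516A"][1] - kp["522C"][1]) * mm_per_px,  # tube bottom to cap bottom
        "H53_liquid": (kp["516A"][1] - kp["526"][1]) * mm_per_px,   # tube bottom to liquid top
        "H54_cap":    (kp["522C"][1] - kp["522A"][1]) * mm_per_px,  # cap bottom to cap top
        "W51_tube":   abs(kp["516B"][0] - kp["516A"][0]) * mm_per_px,
        "W52_cap":    abs(kp["522B"][0] - kp["522A"][0]) * mm_per_px,
    }


if __name__ == "__main__":
    keypoints = {"516A": (100, 400), "516B": (160, 400), "526": (130, 250),
                 "522A": (95, 60), "522B": (165, 60), "522C": (95, 90),
                 "522D": (165, 90)}
    print(container_dimensions(keypoints, mm_per_px=0.25))
```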
  • Semantic keypoint detection does not have to be limited to the semantic keypoints 514 shown in FIG. 5.
  • Additional semantic keypoints may identify and locate instances of liquids (e.g., biological fluids) on the exteriors of the sample containers 210.
  • biological liquids may escape from a sample container that has broken or during a handling procedure, such as a de-capping procedure.
  • the biological liquids may flow onto the outside of the sample container and dry, which can affect the appearance of the sample container.
  • the biological liquids can change the shape of the image of the sample container 500 as processed by the programs 126 (FIG. 2).
  • the leaked liquid may spread onto surrounding equipment and possibly onto other sample containers.
  • Embodiments of the grasping location algorithm 230 may be trained to avoid these areas and identify the shapes of the sample containers that are obscured by the liquids.
  • FIGS. 6A-6E illustrate examples of images of different sample containers and areas on the sample containers that may be avoided during grasping operations.
  • the sample container 600A is relatively short, has a barcode label 604 positioned in the middle of the tube portion, and does not have a cap.
  • a barcode 606 may be centrally located on the barcode label 604.
  • the sample container 600A is almost fully filled with a sample 608 as shown by hatching in the sample container 600A.
  • An anomaly 610 is located on the upper portion of the barcode label 604 and the barcode 606. The anomaly 610 may be spilled liquid or adhesive from the barcode label 604, for example.
  • a real-world image of the sample container 600A may be captured by the imaging device 224 (FIG. 2) and analyzed by the image processor 220, which may process the image data 222 generated by the imaging device 224.
  • the semantic keypoints identification model 226 may identify semantic keypoints on the sample container 600A in a similar manner as the identification of semantic keypoints 514 (FIG. 5) described with reference to the sample container 500.
  • the anomaly 610 may be identified and/or located by semantic keypoints 612 using the semantic keypoints identification model 226, which has been trained, e.g., to identify the anomaly 610.
  • the semantic keypoints identification model 226 may also identify the height of the sample 608 by a semantic keypoint 613.
  • the grasping location algorithm 230 may be trained to avoid the anomaly 610 identified by the semantic keypoints identification model 226 when generating instructions to the robot controller 204 to direct the gripper 304 (FIG. 3) to grasp the sample container 600A as described in greater detail herein.
  • the semantic keypoints identification model 226 may determine the center of mass of the sample container 600A based at least in part on the level of the sample 608 in the sample container 600A.
  • the sample container 600B is relatively tall, includes a cap 615, and has a barcode label 614 located on the upper portion of the tube 616. A barcode 618 may be located on the barcode label 614.
  • An image of the sample container 600B may be captured by the imaging device 224 (FIG. 2) and analyzed to identify surface and/or geometric properties of the sample container 600B.
  • the semantic keypoints identification model 226 may identify the surface and/or geometric properties in the image data 222 including the height and width of the sample container 600B, cap status, and barcode label size and position as described above.
  • the semantic keypoints identification model 226 may generate a semantic keypoint 620 that identifies or locates the height of the sample 622 in the sample container 600B.
  • the semantic keypoints identification model 226 may determine the center of mass of the sample container 600B based at least in part on the height of the sample 622. Based on these properties, the grasping location algorithm 230 may determine the optimal grasping location as described in greater detail below.
  • the sample container 600C does not include a cap and is empty.
  • the semantic keypoints identification model 226 may identify the height, width, cap status, and/or liquid height (empty). Based on these properties, the grasping location algorithm 230 may determine the optimal grasping location as described in greater detail below.
  • the sample container 600E is skewed.
  • the sample container 600E may have moved within a sample mover (e.g., one of the sample movers 214 - FIG. 2).
  • the semantic keypoints identification model 226 may locate semantic keypoints 644 at the corners of the image of the sample container 600E and analyze the semantic keypoints 644 to calculate the skew of the sample container 600E.
  • the image data 222 (FIG. 2) of the sample container 600E may also be analyzed by the semantic keypoints identification model 226 to identify a semantic keypoint 640 indicating a height of a sample 642 stored in the sample container 600E.
  • the semantic keypoints identification model 226 may use the height of the sample 642 to determine the center of mass of the sample container 600E.
  • the grasping location algorithm 230 may determine the optimal grasping location based at least in part on the height of the sample 642 and the skew of the sample container 600E.
  • the apparatus and methods described herein may consider the condition of barcodes and barcode labels on the sample containers when determining optimal grasping locations. Barcode labels imaged in the laboratory system 102 may be in a variety of different states. For example, some of the barcode labels may be applied properly to the sample containers, but other barcode labels may be skewed. In some embodiments, the barcode labels can lose their adhesiveness and peel off the sample containers as shown in FIG. 6D.
  • the semantic keypoints identification model 226 may analyze the image data 222 and/or some of the already identified/located semantic keypoints to determine centers of mass and/or center of gravity of the sample containers 210.
  • the following description refers to centers of mass calculations and uses, but the same description may apply to center of gravity calculations and uses.
  • Center of mass information may be used by the grasping location algorithm 230 to determine the optimal grasping locations where the gripper 304 grasps the sample containers 210.
  • a center of mass algorithm may be a separately trained model or network in addition to the semantic keypoints identification model 226.
  • the semantic keypoints 514 may also be used during pick-up and placement tasks performed by the robot 206, such as when calculating how far the sample container 500 should be inserted into a container slot 232 (FIG. 2) before the tube bottom 502A of the sample container 500 hits a surface or obstacle in a sample container carrier.
  • the location of the cap top 504A and/or the height H51 may indicate a minimum height that the gripper 304 must clear before attempting to move along the x-axis and the y-axis.
  • training the semantic keypoints identification model 226 may be performed, at least in part, by applying a convolutional neural network (CNN) to the training images or image data representative of the training images.
  • CNNs may accurately classify, regress, and segment training images of the sample containers.
  • architectures such as U-Net may be used to segment areas of interest in the training images as well as for detection of the semantic keypoints 514 (FIG. 5) and other semantic keypoints described herein. The segments, classifications, and regression may identify the items described with reference to the semantic keypoints described herein.
  • the input to the CNN or other network or model may include depth images (e.g., data generated by an RGBD sensor) of a scene that includes real-world and/or synthetic training images.
  • the real-world training images may be captured from the imaging device 224 mounted on the gripper 304 or the arm 320 (FIG. 3). Other imaging devices, such as imaging devices external to the laboratory system 102 (FIG. 1) may be used to capture the real-world training images.
  • the synthetic image data may be generated by the computer 120 or another computer.
  • the training image data may include depth data indicating distances between objects in the scene, such as the sample containers 210 and the imaging device 224.
  • the depth data may be distances between the synthesized objects in the scene and synthetic viewpoints.
  • the depth data combined with intrinsic parameters of the imaging device 224 enables the grasping location algorithm 230 to evaluate 3D information of the scene when learning about the grasping locations of the sample containers 210.
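  • A standard way to obtain such 3D information is pinhole back-projection of a pixel and its depth through the camera intrinsics, sketched below with placeholder intrinsic values (a real system would use the calibrated parameters of the imaging device 224):

```python
# Sketch: convert a pixel and its depth into a 3D camera-frame point using
# pinhole-camera intrinsics. Intrinsic values here are placeholders.
import numpy as np


def backproject(u: float, v: float, depth_m: float,
                fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Convert a pixel (u, v) with depth (metres) into a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])


if __name__ == "__main__":
    # Hypothetical intrinsics for a 640x480 RGB-D sensor.
    fx = fy = 525.0
    cx, cy = 320.0, 240.0
    # A detected keypoint at pixel (400, 300) with 0.45 m depth.
    print(backproject(400, 300, 0.45, fx, fy, cx, cy))
```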
  • the grasping location algorithm 230 can evaluate the captured training images of the scene from the first viewpoint.
  • the missing information can be captured by manipulating the robot 206 so that the imaging device 224 may observe the scene from a second viewpoint and the grasping location algorithm 230 may evaluate the scene from the second viewpoint.
  • the training images from the two viewpoints may be analyzed to generate training images of entire sample containers.
  • FIG. 7 is a synthetic image 700 of a plurality of sample containers 702 (a few labelled).
  • the synthetic image 700 includes an overlaid representative heatmap 706 that may be used to train the grasping location algorithm 230.
  • the synthetic image 700 may be computer-generated (and may be referred to as a virtual image) and may include synthesized training data. Accordingly, the training images of the sample containers 702 may be computer-generated, or they may be real-world images that are placed into the synthetic image 700 using computer algorithms.
  • the synthetic image 700 and variations thereof may be used to train the semantic keypoints identification model 226 and/or the grasping location algorithm 230.
  • the heatmap 706 shows neighborhoods 708 indicating possible locations for the gripper 304 to grasp the sample containers 702 as described herein.
  • the sample container height and diameter, the cap height and diameter, the amount of liquid in synthetic sample container images, and other items of each of the sample containers 702 can be varied.
  • a virtual camera or viewpoint may be moved around the scene to virtually capture synthetic color and depth of the images of the sample containers 702.
  • Semantic keypoints may be labeled in three dimensions on the sample containers 702, wherein the semantic keypoints correspond to the semantic keypoints described herein.
  • each of the sample containers 702 may be generated using pre-selected sample container characteristics that are included in or projected into the images using intrinsic and extrinsic parameters of a virtual imaging device.
  • images of the sample containers 702 may be automatically generated with a variety of labeled synthetic data that can augment an overall dataset and guide the semantic keypoints identification model 226 to effectively learn to identify the objects and features thereof in the images of the sample containers 702.
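  • A simple sketch of this kind of parameter randomization is shown below; the parameter ranges and field names are arbitrary assumptions, and a full generator would also render the images and project the labeled keypoints into them:

```python
# Illustrative sketch of randomizing synthetic sample-container parameters for
# training-scene generation. Ranges and fields are arbitrary assumptions.
import random
from dataclasses import dataclass


@dataclass
class SyntheticContainer:
    tube_height_mm: float
    tube_diameter_mm: float
    has_cap: bool
    cap_height_mm: float
    liquid_fill_fraction: float
    label_bottom_mm: float


def sample_container(rng: random.Random) -> SyntheticContainer:
    height = rng.uniform(75.0, 120.0)
    return SyntheticContainer(
        tube_height_mm=height,
        tube_diameter_mm=rng.uniform(12.0, 17.0),
        has_cap=rng.random() < 0.7,
        cap_height_mm=rng.uniform(10.0, 20.0),
        liquid_fill_fraction=rng.uniform(0.0, 0.9),
        label_bottom_mm=rng.uniform(0.1, 0.5) * height,
    )


if __name__ == "__main__":
    rng = random.Random(0)
    scene = [sample_container(rng) for _ in range(5)]  # five varied containers
    for c in scene:
        print(c)
```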
  • the synthetic image 700 may be generated with known distances between the sample containers 702 and the first viewpoint 710.
  • a first sample container 702A is generated as being a distance D71 from the first viewpoint 710.
  • a second sample container 702B is generated as being a distance D72 from the first viewpoint 710.
  • the distance data may be input to a CNN to train the grasping location algorithm 230 and other models and networks.
  • the imaging device 224 (FIG. 2) or another device may provide the distances or depths as described herein.
  • the real-world image 800 may be captured with the imaging device 224 at a first viewpoint 810, which may be known or predetermined within the sample handler 130 (FIG. 2).
  • the first viewpoint 810 may orient the imaging device 224 to be a distance D81 from a first sample container 802A and a distance D82 from a second sample container 802B.
  • the distances may be changed by moving the imaging device 224 (FIG. 2) relative to the sample containers.
  • the distance data and image data may be input to a CNN to train the grasping location algorithm 230 and other models and networks.
  • the imaging device 224 (FIG. 2) or another device may provide the distance or depth data.
  • Training the semantic keypoints identification model 226 may include analyzing both synthetic image data of the synthetic image 700 (FIG. 7) and real-world image data representative of the sample containers 802 in FIG. 8. With the synthetic image data, the height and diameter, cap geometry, the amount of liquid in each sample container, and other characteristics can be varied virtually. A virtual imaging device may be moved around the sample container images to virtually capture color and depth in the images. Annotated synthetic data can be generated automatically in software, such as by the programs 126 (FIG. 1) at scales required for deep learning with less effort than by manual annotation. For example, scene characteristics such as the object geometry and orientation, lighting, environmental conditions, and other features may be precisely controlled and varied as needed for proper training by software that generates the synthetic images.
  • a sample container 802C may obscure an image of the sample container 802B from the imaging device 224 when the imaging device 224 is positioned at the first viewpoint 810.
  • the robot controller 204 (FIG. 2) may direct the robot 206 (FIG. 2) to move to a second viewpoint (not shown) or a plurality of other viewpoints, which may enable the imaging device 224 to capture images of portions of the sample container 802C that were obscured from the first viewpoint 810.
  • Processing performed by the computer 120 may form complete images of the sample container 802C.
  • machine learning or other algorithms may generate missing portions of images of the sample containers 802. For example, portions of images of the sample containers 802 that are obscured by the container slots 232 may be generated.
  • the images of the sample containers 802 may be segmented and annotated.
  • the images may be manually segmented and/or annotated.
  • the images may be segmented and/or augmented by machine learning algorithms, such as by the semantic keypoints identification model 226.
  • Deep learning networks may be employed to regress the images to identify geometric features and other parameters of the sample containers using the semantic keypoints identification model 226.
  • a CNN running in the grasping location algorithm 230 may be trained to detect grasping locations on the sample container 210.
  • the grasping locations may be at least partially based on geometric features of the gripper 304 (FIG. 3) as determined by the regression.
  • the diagnostic laboratory system 102 may include a plurality of different types of robots that have grippers with different geometric features and grasping capabilities.
  • the grasping location algorithm 230 may be trained to identify different grasping locations on the different sample containers 210 based on the specific geometric features, such as the geometric features identified during regression.
  • one or more of the grippers may be parallel-jaw grippers as illustrated in FIG. 4A that include two fingers 400 configured to close simultaneously around the sample containers 210 (FIG. 2) and exert forces in opposing directions to create the grasps.
  • the two fingers 400 may have the four nodes 404 (FIG. 4A) that contact the sample containers 210.
  • the gripper 304 may have two nodes 404 that contact the sample containers 210.
  • the gripper 304 may have three nodes 418 that contact the sample containers 210 as shown in FIG. 4F.
  • Other embodiments may include robots that operate using suction and may be referred to as suction robots.
  • the selection of the grasping locations may determine the success and stability of the grasps and the transfer of the sample containers 210 between different locations. For example, if the grasping locations are not selected in a configuration that results in an antipodal grasp, then the grasp may fail and the sample containers 210 may not be picked up or placed properly.
  • the sample containers 210 are typically symmetric, which may be a variable in determining the grasping locations. For example, if a parallel-jaw gripper is used to grasp sample containers 210 that are cylindrical, there may be an infinite number of potential pairs of grasping locations along the circumference of the sample containers 210 at a single height.
  • When the grasping location algorithm 230 generates ground truth grasping locations, there may be a strategy for selecting the appropriate grasping locations based at least partially on the kinematics of the robot 206 and characteristics of the sample containers 210, such as locations of the barcodes and anomalies.
  • One method of selecting the optimal grasping locations is to select a height on a sample container at which the gripper 304 will always grasp a sample container. The method may then detect locations along the circumference of the sample container as candidate points for grasping. From the candidate grasping locations, a pair of grasping locations that are furthest apart from each other but still within a view of the imaging device 224 may be selected as the ground truth grasping location.
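  • The pair-selection strategy just described might be sketched as follows, with simplified candidate generation and a placeholder visibility test standing in for the actual field-of-view check of the imaging device 224:

```python
# Sketch of selecting an approximately antipodal grasp pair: among candidate
# points on the tube circumference at a fixed grasp height, pick the visible
# pair that is farthest apart. Visibility and candidate spacing are simplified.
import math
from itertools import combinations
from typing import List, Tuple

Point3 = Tuple[float, float, float]


def candidate_points(radius_mm: float, grasp_height_mm: float,
                     n: int = 36) -> List[Point3]:
    """Points spaced around the circumference at one grasp height."""
    return [(radius_mm * math.cos(2 * math.pi * k / n),
             radius_mm * math.sin(2 * math.pi * k / n),
             grasp_height_mm) for k in range(n)]


def visible(p: Point3) -> bool:
    # Placeholder visibility test: assume the camera sees the +y half of the tube.
    return p[1] >= 0.0


def select_grasp_pair(points: List[Point3]) -> Tuple[Point3, Point3]:
    viewable = [p for p in points if visible(p)]
    return max(combinations(viewable, 2),
               key=lambda pair: math.dist(pair[0], pair[1]))


if __name__ == "__main__":
    pair = select_grasp_pair(candidate_points(radius_mm=7.5, grasp_height_mm=35.0))
    print(pair)
```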
  • the grasping location algorithm 230 may generate a heatmap representation of grasping locations overlaid on images of the sample containers 210 in order to regress candidate neighborhoods within which the optimal grasping locations may be located. By considering the probability that an optimal grasping location lies in some neighborhood around the predicted location, the grasping location algorithm 230 may be trained to be more robust to slight perturbations in the optimal grasping locations, which may help the grasping location algorithm 230 predict more accurate grasping locations.
  • The ground truth heatmaps for training the grasping location algorithm 230 may be generated using precise grasping locations as well as probability functions centered on the grasping locations, which define the neighborhoods around the precise locations. For example, referring to FIG. 7, two-dimensional Gaussian functions with a set mean and variance may control where the heatmap 706 locates the optimal grasping locations and how far the neighborhoods 708 extend beyond the peaks of the Gaussian functions.
  • the entire heatmap 706 may be normalized to be within the range of 0.0 to 1.0, so the Gaussian functions will also be normalized to the same range.
  • the precise locations of the landmarks in the neighborhoods 708 may be assigned a value of 1.0 to indicate that optimal grasping locations are definitely at those locations.
  • the values in the neighborhoods around the points slowly decrease to 0.0 according to the shapes of the Gaussian functions and their variances. The lower the values in the neighborhoods 708, the less likely that optimal grasping locations exist there.
  • the result is the heatmap 706 containing many local functions that robustly describe where the ground truth keypoints or optimal grasping locations are located.
  • the grasping location algorithm 230 may be trained to locate these neighborhoods 708.
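  • A minimal sketch of generating such a ground truth heatmap is shown below: a normalized 2D Gaussian is placed at each precise grasping location, giving a value of 1.0 at the location and a smoothly decaying neighborhood around it. The image size, locations, and variance are illustrative assumptions.

```python
# Minimal sketch: ground truth heatmap with a normalized 2D Gaussian at each
# precise grasping location. Image size, locations, and sigma are illustrative.
import numpy as np


def ground_truth_heatmap(shape, locations, sigma_px: float = 6.0) -> np.ndarray:
    """Return an (H, W) heatmap in [0, 1] with a Gaussian peak per location."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for (cx, cy) in locations:
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma_px ** 2))
        heatmap = np.maximum(heatmap, g)   # keep each local peak at exactly 1.0
    return heatmap


if __name__ == "__main__":
    hm = ground_truth_heatmap((128, 128), locations=[(40, 64), (90, 64)])
    print(hm.max(), hm[64, 40])  # both 1.0 at the ground truth locations
```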
  • the gripper 304 may be directed to grasp optimal grasping locations of the sample containers 210, which may include many different types of sample containers 210.
  • the robot 206 may move the gripper 304 to a location of the sample container 210A.
  • the imaging device 224 may then capture an image of the sample container 210A.
  • the semantic keypoints identification model 226 may identify semantic keypoints in the image of the sample container 210A.
  • the grasping location algorithm 230 which has been trained as described herein, may determine an optimal grasping location for the sample container 210A.
  • the optimal grasping location may be used by the robot controller 204 to direct the gripper 304 to grasp the sample container 210A as described herein.
  • determinations made by the grasping location algorithm 230 may be at least partially based on centers of mass.
  • the synthetic image 700 (FIG. 7) and/or the real-world image 800 (FIG. 8) may be used to train the semantic keypoints identification model 226 to determine or calculate centers of mass.
  • the sample containers 702 may be analyzed using the same or similar methods as the analysis of the sample container 500 of FIG. 5.
  • Semantic keypoints on the images of the sample containers 702 may be located.
  • a CNN executed within the semantic keypoints identification model 226 may locate the semantic keypoints as described herein.
  • a user may place the semantic keypoints on the images of the sample containers 702.
  • the images of the sample containers 702 as generated in the synthetic image 700 may be displayed on the display 242 (FIG. 2).
  • a user may use the keyboard 244 or another input device to mark or identify the semantic keypoints.
  • FIG. 9 illustrates a heatmap 900 that indicates centers of mass overlaid onto the images of the sample containers 702 in the synthetic image 700.
  • the heatmap 900 may be generated by analyzing the semantic keypoints 514 (FIG. 5) and other data.
  • the heatmap 900 may include a plurality of neighborhoods 902 indicating centers of mass. Centers of the neighborhoods 902, shown as dark dots, indicate the highest values of the Gaussian functions (e.g., closest to normalized 1.0 values) indicative of the centers of mass. The lower values of the Gaussian functions are shown as hatched circles in the neighborhoods 902 surrounding the dark dots. Ground truth heatmaps for the training may be generated using methods similar or identical to those used for the optimal grasping points.
  • the semantic keypoints identification model 226 may be able to determine centers of mass of the sample containers 702 by analyzing images of the sample containers 702 or other semantic keypoints in images of the sample containers 702.
  • the computer 120 may generate grasp parameters or rules that direct the gripper 304 to grasp the sample containers 210 based on geometry of an individual sample container 210.
  • the processor 122 executing the robot controller 204, may direct the gripper 304 to move to specific locations and grasp sample containers 210 on the optimal grasping locations.
  • the grasping parameters may include, but are not limited to: three-dimensional (3D) positions of grasp points or regions on the surfaces of the sample containers 210; a (3D) vector to indicate the direction of approach of the gripper 304 to the sample containers 210; joint angles of the gripper 304 to achieve the optimal grasp locations within the robot constraints; joint trajectories of the gripper 304 to characterize the approach to the sample containers 210; and forces applied by the gripper 304 to reliably hold the sample containers 210.
  • the programs 126 may determine that certain surface properties may require that certain forces be applied to the sample containers 210 by the gripper 304.
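The grasp parameters listed above could be collected into a simple data structure; the sketch below uses hypothetical field names and example values and is not the structure actually used by the robot controller 204.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GraspParameters:
    """Illustrative container for the grasp parameters described above;
    the field names and example values are assumptions."""
    grasp_points_3d: List[Tuple[float, float, float]]  # contact points on the tube surface
    approach_vector: Tuple[float, float, float]        # 3D direction of gripper approach
    joint_angles: List[float]                          # gripper joint angles at the grasp
    joint_trajectory: List[List[float]]                # way-point joint angles for the approach
    grip_force_newtons: float                          # force applied to hold the container
    notes: str = ""                                    # e.g., surface property that set the force

params = GraspParameters(
    grasp_points_3d=[(0.015, 0.0, 0.06), (-0.015, 0.0, 0.06)],
    approach_vector=(0.0, 0.0, -1.0),
    joint_angles=[0.3, 0.3],
    joint_trajectory=[[0.6, 0.6], [0.45, 0.45], [0.3, 0.3]],
    grip_force_newtons=4.0,
    notes="plastic tube; reduced force",
)
```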
  • the image of the sample container 600A may be a real-world image captured by the imaging device 224.
  • a portion of the sample container 600A may be a real-world image captured by the imaging device 224 and the remaining portion of the image of the sample container 600A may be synthetically constructed by software.
  • the semantic keypoints identification model 226 may identify semantic keypoints as shown in FIG. 5.
  • the semantic keypoints identification model 226 may identify the keypoints 612 to locate and/or identify the anomaly 610 and also locate the center of mass of the sample container 600A as described herein.
  • the grasping location algorithm 230 may analyze the data generated by the semantic keypoints identification model 226 to determine the optimal grasping locations.
  • the optimal grasping locations are shown as a first semantic keypoint 650A and a second semantic keypoint 650B, which are locations where the nodes 404 (FIG. 4A) may contact the sample container 600A.
  • the optimal grasping location may be above the barcode label 604 and the anomaly 610 in order to prevent the nodes 404 from damaging the barcode label 604 while avoiding the anomaly 610.
  • the sample container 600A is relatively full, so the optimal grasping location may be high to prevent the sample container 600A from tipping during grasping and transportation.
  • FIG. 6B provides another example of the operation of the diagnostic laboratory system 102 (FIG. 2).
  • the image of the sample container 600B may be captured by the imaging device 224.
  • a portion of the sample container 600B may be captured by the imaging device 224 and the remaining portion of the image of the sample container 600B may be constructed by software.
  • the semantic keypoints identification model 226 may identify semantic keypoints as shown in FIG. 5.
  • the semantic keypoints identification model 226 may identify the keypoint 620 to locate the height of the sample 622 and a keypoint 623 to locate the cap 615.
  • the semantic keypoints identification model 226 may also locate the center of mass of the sample container 600B as described herein. The center of mass may be centered in the sample container 600B because the height of the sample 622 is low and the sample container 600B has a cap 615.
  • the grasping location algorithm 230 may analyze the data generated by the semantic keypoints identification model 226 to determine the optimal grasping location.
  • the optimal grasping locations are shown as a first semantic keypoint 652A and a second semantic keypoint 652B, which are locations where the nodes 404 (FIG. 4A) may contact the sample container 600B.
  • the optimal grasping location may be above the barcode label 614 and below the cap 615 in order to prevent the nodes 404 from damaging the barcode label 614 while avoiding the cap 615.
  • the sample container 600B is relatively empty, so the optimal grasping location may alternatively be below the barcode label 614, which may avoid tipping the sample container 600B.
  • the determination of the optimal grasping location may also include the materials (e.g., glass or plastic) of the sample containers 210.
  • the optimal grasping locations may be located away from a top opening, which may be fragile. That is, if a grasping location is located close to the top opening of a sample container made of a weak material, the grasping may cause the sample container to break.
  • the grasping properties may consider other factors, such as mechanical constraints of the robot 206, which includes joint limits, force/torque considerations, and available workspace for the robot 206.
  • the surface friction of a sample container at the optimal grasping locations may be a factor in determining a force applied by the gripper 304 to the sample container.
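One common way to relate surface friction to the applied force is the Coulomb friction model, in which the total friction force at the contacts must exceed the weight of the sample container. The sketch below applies that model with assumed masses and friction coefficients; it is an illustration, not the force computation used by the grasping location algorithm 230.

```python
def required_grip_force(mass_kg, friction_coefficient, n_contacts=2,
                        safety_factor=1.5, g=9.81):
    """Minimum normal force per contact under the Coulomb friction model:
    the total friction force (mu * N per contact) must exceed the
    container's weight, scaled by a safety factor."""
    weight = mass_kg * g
    return safety_factor * weight / (n_contacts * friction_coefficient)

# Example with assumed values: a 60 g filled tube held by two opposing contacts.
force_glass = required_grip_force(0.060, friction_coefficient=0.4)    # lower friction, more force
force_plastic = required_grip_force(0.060, friction_coefficient=0.6)  # higher friction, less force
```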
  • FIG. 10 illustrates a flowchart of a method 1000 of grasping a sample container (e.g., sample containers 210) in a diagnostic laboratory system (e.g., laboratory system 102) using a robot (e.g., robot 206).
  • the method 1000 includes, in block 1002, capturing a real-world image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured real-world image includes image data (e.g., image data 222).
  • the image data may be provided by the imaging device 224 affixed to the gripper 304 of the robot 206.
  • the method 1000 includes, in block 1004, executing a machine learning model (e.g., semantic keypoints identification model 226) to analyze the image data and locate one or more semantic keypoints (e.g., semantic keypoints 514) on the image of the sample container.
  • the semantic keypoints may identify and/or locate geometric features and other characteristics of the sample container.
  • Training data for the machine learning model may include real-world image data and/or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon.
  • the method 1000 includes, in block 1006, determining a grasping location on the sample container at least partially based on the locations of the one or more semantic keypoints.
  • the grasping locations may be optimal grasping locations determined by the grasping location algorithm 230.
  • the method 1000 includes, in block 1008, directing, via a robot controller (e.g., robot controller 204) a gripper (e.g., gripper 304) of a robot (e.g., robot 206) to grasp the sample container at the grasping location.
  • the robot controller 204 executing on the processor 122 may direct the robot 206 to grasp the sample container 210A at an optimal grasping location.
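The blocks of the method 1000 can be read as a simple pipeline; the sketch below strings them together with placeholder object and method names that are not APIs of the described system.

```python
def grasp_sample_container(imaging_device, keypoint_model, grasp_algorithm, robot_controller):
    """Illustrative pipeline for method 1000; all call names are placeholders."""
    # Block 1002: capture a real-world image of the sample container.
    image = imaging_device.capture()

    # Block 1004: locate semantic keypoints on the image.
    keypoints = keypoint_model.locate_keypoints(image)

    # Block 1006: determine a grasping location from the keypoints.
    grasp_location = grasp_algorithm.determine_grasp(image, keypoints)

    # Block 1008: direct the gripper to grasp at that location.
    robot_controller.grasp_at(grasp_location)
    return grasp_location
```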
  • FIG. 11 illustrates a method 1100 of grasping a sample container (e.g., sample containers 210) in a diagnostic laboratory system (e.g., laboratory system 102) using a robot (e.g., robot 206).
  • the method 1100 includes, in block 1102, capturing a real-world image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data (e.g., image data 222).
  • the method 1100 includes, in block 1104, executing a machine learning model (e.g., semantic keypoints identification model 226) to analyze the image data and locate one or more semantic keypoints (e.g., semantic keypoints 514) on the image of the sample container.
  • Training data for the machine learning model may include real-world image data and/or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon.
  • the method 1100 includes, in block 1106, determining one or more surface properties or geometric properties of the sample container at least partially based on the one or more semantic keypoints.
  • the method 1100 includes, in block 1108, determining a grasping location on the sample container at least partially based on the one or more surface properties or geometric properties.
  • the method 1100 includes, in block 1110, directing, via a robot controller (e.g., robot controller 204), a gripper (e.g., gripper 304) of a robot (e.g., robot 206) to grasp the sample container at the grasping location.
  • Illustrative embodiment 1 A method of grasping a sample container in a diagnostic laboratory system using a robot, the method comprising: capturing an image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining a grasping location on the sample container, via a grasping location algorithm, at least partially based on the locations of the one or more semantic keypoints; and directing, via a robot controller, a gripper of a robot to grasp the sample container at the grasping location.
  • Illustrative embodiment 2 The method according to the preceding illustrative embodiment, wherein the executing the machine learning model comprises executing a convolutional neural network to analyze the image data and locate one or more semantic keypoints on the sample container.
  • Illustrative embodiment 3 The method according to one of the preceding illustrative embodiments, wherein the image is captured using an imaging device, wherein the image data includes depth data, wherein the depth data includes a distance between the sample container and the imaging device, and wherein the determining the grasping location comprises determining the grasping location at least partially based on the depth data.
  • Illustrative embodiment 4 The method according to one of the preceding illustrative embodiments, wherein the image data includes color data, and wherein the determining a grasping location comprises determining the grasping location at least partially based on the color data.
  • Illustrative embodiment 5 The method according to one of the preceding illustrative embodiments, wherein the executing the machine learning model comprises executing a deep neural network to analyze the image data and locate the one or more semantic keypoints on the sample container.
  • Illustrative embodiment 6 The method according to one of the preceding illustrative embodiments, wherein the determining the grasping location via the grasping location algorithm comprises executing a deep neural network trained to determine the grasping location on the sample container at least partially based on analyzing the locations of the one or more semantic keypoints.
  • Illustrative embodiment 7 The method according to one of the preceding illustrative embodiments, further comprising moving the sample container from a first location to a second location using the gripper of the robot in response to the directing.
  • Illustrative embodiment 8. The method according to one of the preceding illustrative embodiments, wherein: the gripper has at least one degree of freedom; and the determining the grasping location via the grasping location algorithm comprises determining the grasping location for the gripper at least partially based on the at least one degree of freedom.
  • Illustrative embodiment 9 The method according to one of the preceding illustrative embodiments, wherein the sample container includes a biological sample.
  • Illustrative embodiment 10 The method according to one of the preceding illustrative embodiments, wherein the sample container has one or more geometric properties and wherein the determining the grasping location via the grasping location algorithm comprises determining the grasping location for the gripper based on the one or more geometric properties.
  • Illustrative embodiment 11 The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate an indicia; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the indicia.
  • Illustrative embodiment 12 The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate a barcode; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the barcode.
  • Illustrative embodiment 13 The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate an anomaly; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the anomaly.
  • Illustrative embodiment 14 The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate a cap; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the location of the cap.
  • Illustrative embodiment 15 The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container identify a height of a liquid in the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the height of the liquid.
  • Illustrative embodiment 16 The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the container identify a height of the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the height of the sample container.
  • Illustrative embodiment 17 The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the container identify a center of mass of the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the center of mass.
  • Illustrative embodiment 19 The apparatus according to the preceding illustrative embodiment, wherein the imaging device is attached to the robot and wherein the imaging device is movable with the robot.
  • Illustrative embodiment 20 A method of grasping a sample container in a diagnostic laboratory system using a robot, the method comprising: capturing an image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data and synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining one or more surface properties or geometric properties at least partially based on the one or more semantic keypoints; determining a grasping location on the sample container, via a grasping location algorithm, at least partially based on the one or more surface properties or geometric properties; and directing, via a robot controller, a gripper of a robot to grasp the sample container at the grasping location.

Abstract

A method of grasping a container in a diagnostic laboratory system using a robot includes capturing an image of a sample container to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container; determining a grasping location on the sample container at least partially based on the locations of the one or more semantic keypoints; and generating instructions to direct a gripper of a robot to grasp the sample container at the grasping location. Other methods and systems are disclosed.

Description

SYSTEMS AND METHODS FOR IDENTIFYING GRASPING LOCATIONS ON SAMPLE CONTAINERS IN DIAGNOSTIC LABORATORY SYSTEMS
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims benefit under 35 USC § 119(e) of U.S. Provisional Patent Application No. 63/552,865, filed on February 13, 2024, the disclosure of which is hereby incorporated by reference herein in its entirety.
FIELD
[0002] This disclosure relates to systems and methods for identifying grasping locations on sample containers in diagnostic laboratory systems.
BACKGROUND
[0003] Diagnostic laboratory systems conduct clinical chemistry tests that identify analytes or other constituents in biological samples such as blood serum, blood plasma, urine, interstitial liquid, cerebrospinal liquids, and the like. The biological samples are collected in sample containers, such as test tubes, and are transported to a diagnostic laboratory system. After the sample containers are received at the laboratory system, the sample containers are loaded into one or more sample container carriers (e.g., tube tray or racks). The sample container carriers are then loaded into a sample handler (e.g., an input/output module) of the laboratory system that enables the laboratory system to receive and discharge the sample containers. Robots within the sample handler and elsewhere in the laboratory system may grasp and transfer the sample containers between various locations and components within the laboratory system. However, the robots may operate using fixed grasping rules that may not account for variations in the type and geometry of different sample containers that may be used in the laboratory system.
[0004] For example, in some conventional laboratory systems, sample containers may have different diameters and heights. When a robot moves to pick up a sample container from a sample container carrier, the robot may descend to a fixed height above the sample container carrier. When the robot attempts to grasp a sample container shorter than the fixed height, the robot may not be able to grasp that shorter sample container. When the robot attempts to grasp a sample container taller than the fixed height, the robot may collide with that sample container or grasp the sample container at a sensitive area, which may break the sample container.
[0005] In some situations, the robots may grasp sample containers at locations covered by barcode labels or other indicia, wherein the grasping may damage the barcode labels or indicia. This may cause information in the barcode label or indicia to be unreadable or misinterpreted, which may disrupt sample testing and/or cause ordered tests to be missed or erroneous tests to be performed instead.
[0006] Based on the foregoing, improved methods and apparatus for identifying precise grasping locations on sample containers of various types and geometries are sought.
SUMMARY
[0007] According to a first aspect, a method of grasping a container in a diagnostic laboratory system using a robot is provided. The method includes capturing an image including a container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining a grasping location on the sample container via a grasping location algorithm at least partially based on the locations of the one or more semantic keypoints; and directing, via a robot controller, a gripper of a robot to grasp the container at the grasping location.
[0008] In another aspect, an apparatus is provided. The apparatus includes a robot having a gripper; an image capture device configured to capture images of sample containers within the diagnostic laboratory system; a processor; a memory coupled to the processor and including computer program instructions that, when executed by the processor, cause the processor to: receive image data of a sample container captured by the image capture device; execute a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determine a grasping location on the sample container at least partially based on the locations of the one or more semantic keypoints; and direct the robot to grasp the sample container at the grasping location using the gripper of the robot.
[0009] In a further aspect, a method of grasping a sample container in a diagnostic laboratory system using a robot is provided. The method includes capturing an image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining one or more surface properties or geometric properties of the sample container at least partially based on the one or more semantic keypoints; determining a grasping location on the sample container via a grasping location algorithm at least partially based on the one or more surface properties or geometric properties; and directing, via a robot controller, a gripper of a robot to grasp the sample container at the grasping location.
[0010] Still other aspects, features, and advantages of this disclosure may be readily apparent from the following description and illustration of a number of example embodiments, including the best mode contemplated for carrying out the disclosure. This disclosure may also be capable of other and different embodiments, and its several details may be modified in various respects, all without departing from the scope of the disclosure.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The drawings described below are provided for illustrative purposes and are not necessarily drawn to scale. Accordingly, the drawings and descriptions are to be regarded as illustrative in nature, and not as restrictive. The drawings are not intended to limit the scope of the disclosure in any way.
[0012] FIG. 1 illustrates a perspective view of a diagnostic laboratory system located in a laboratory according to one or more embodiments.
[0013] FIG. 2 illustrates a detailed view of the computer of FIG. 1 in communication with a sample handler of the diagnostic laboratory system according to one or more embodiments.
[0014] FIG. 3 illustrates a robot and a sample container carrier located in a sample handler of a diagnostic laboratory system, wherein an imaging device is attached to a gripper of a robot within the sample handler according to one or more embodiments.
[0015] FIG. 4A illustrates a side perspective view of nodes of a robot gripper grasping a sample container on a barcode label, wherein an imaging device is attached to the robot gripper according to one or more embodiments.
[0016] FIG. 4B illustrates a side elevation view of the robot gripper of FIG. 4A extending along an x-axis proximate a sample container according to one or more embodiments.
[0017] FIG. 4C illustrates a top plan view of a sample container showing locations where nodes of a robot gripper may contact an exterior surface of a sample container according to one or more embodiments.
[0018] FIG. 4D illustrates nodes of a robot gripper grasping a sample container at a higher vertical position than the nodes grasping the sample container shown in FIG. 4A according to one or more embodiments.
[0019] FIG. 4E illustrates a robot gripper configured with a large number of degrees of freedom according to one or more embodiments.
[0020] FIG. 4F illustrates a top plan view of a sample container wherein three nodes of a robot gripper are contacting an exterior of the sample container according to one or more embodiments.
[0021] FIG. 5 illustrates a side elevation view of an image of a sample container and different semantic keypoints located on the image of the sample container according to one or more embodiments.
[0022] FIGS. 6A-6E illustrate examples of real-world images of different sample containers and areas on the sample containers that should be avoided during grasping operations by a robot in a diagnostic laboratory system according to one or more embodiments.
[0023] FIG. 7 illustrates a synthetic image of a plurality of sample containers that may be used to train a semantic keypoints identification model in a diagnostic laboratory system according to one or more embodiments.
[0024] FIG. 8 illustrates a real-world image of a plurality of sample containers that may be used to train a semantic keypoints identification model in a diagnostic laboratory system according to one or more embodiments.
[0025] FIG. 9 illustrates a heatmap showing centers of mass of sample containers overlaid onto the images of sample containers in a synthetic image according to one or more embodiments.
[0026] FIG. 10 illustrates a flowchart of a method of grasping a sample container in a diagnostic laboratory system using a robot according to one or more embodiments.
[0027] FIG. 11 illustrates a flowchart of a method of grasping a sample container in a diagnostic laboratory system using a robot according to one or more embodiments.
DETAILED DESCRIPTION
[0028] Independent of the grammatical term usage, individuals with male, female or other gender identities are included within the term.
[0029] Automated diagnostic laboratory systems perform analyses (e.g., tests) on various biological samples, such as blood, blood serum, urine, and other bodily fluids. The samples are collected from patients and placed into sample containers, such as test tubes. The sample containers along with testing instructions are then sent to an automated diagnostic laboratory system. The testing instructions may indicate which tests are to be performed on the samples by instruments located in the diagnostic laboratory system. A technician or software executing on a computer may determine which instruments in the diagnostic laboratory are to perform each test on each of the samples per the instructions.
[0030] The components in diagnostic laboratory systems can be broadly characterized as sample transport systems, sample movers, and instruments. The sample transport systems may include hardware, such as tracks, that are configured to move the sample movers throughout the laboratory systems. The sample movers may receive the sample containers and move the sample containers on the tracks. The instruments may be modules and/or analyzers that the sample movers may be directed to, wherein processes and analyses may be performed on the samples by the instruments. Examples of the instruments include centrifuges, chemistry analyzers, decappers, storage modules, and refrigeration modules.
[0031] Typical workflow in a laboratory system may include loading sample containers into sample movers and then instructing the sample transport system to transport the sample movers to one or more of the instruments. Many laboratory systems use robot systems to move the sample containers into and out of the sample movers. The robot systems may use robot grippers (e.g., end effectors) to grasp and move the sample containers. In addition to sample containers, the robot systems may move quality control packs, calibrator packs, and other items throughout the laboratory systems.
[0032] Some laboratory systems include one or more input/output modules (e.g., sample handlers) where sample containers are loaded into and removed from the laboratory systems via sample container carriers. A robot picks up the sample containers one at a time from the sample container carriers and places the sample containers into sample movers that move the sample containers to other modules via the transport system, such as the tracks. After tests have been performed, the sample movers move the sample containers back to the sample handler where a robot transfers the sample containers one at a time from the sample movers back to the sample container carriers, which are then removed from the sample handlers.
[0033] Robots configured to move the sample containers may be used in many locations in the laboratory systems. For example, some laboratory systems include transfer stations where robots transfer sample containers from one track to another track. In such situations, a first sample mover containing a sample container arrives at a transfer station via a first track. An empty second sample mover arrives at the transfer station via a second track. A robot grasps the sample container from the first sample mover and transfers the sample container to the second sample mover.
[0034] One issue affecting robots moving sample containers from one track to another track is the variability of the arrival position of the sample containers. Conventional laboratory systems operate under the assumption that the sample containers stop at a precise transfer position every time such that a robot can grasp a sample container from that precise transfer position in order to perform sample container transfers. In reality, sample containers may stop partially before or after the precise transfer position, which shifts the sample container slightly away from where the robot fingers are expected to grasp the sample containers. The result is that the fingers can descend on top of the sample containers and puncture tops or break the sample containers. This may be a significant problem should biohazardous liquids spill from the sample containers onto the instruments and the sample movers. The sample movers may then spread the biohazardous liquids to other parts of the laboratory systems. Even if the misalignment of sample containers at a transfer position results in only a relatively small percentage of damaged sample containers, the aggregate effect can seriously hamper the operation of a laboratory system.
[0035] As described above, the robots may have grippers configured to grasp the sample containers. The robots and/or robot controllers may use grasping rules to operate and move the grippers. Given all the different items (e.g., different sample container types) that may be grasped by the robots, it may not be advantageous to use the same grasping rules for every item that may be grasped by the robots. For example, different sample containers may have different geometries and surface properties. Using the same grasping rules for the different sample containers is inefficient. Furthermore, should a gripper grasp all the different sample container types using the same grasping rules, there is a risk that short sample containers may be grasped at unstable locations, such as too close to their tops, which may cause breaks or spillage of the sample container contents.
[0036] Different types of sample containers may have different diameters and/or heights. When a robot picks up a sample container from a sample container carrier, the height of the sample container may not be considered when the robot descends downward to grasp the sample container. Instead, the robot may attempt to grasp the sample container at a fixed height. When sample containers are shorter than this fixed height, the robot will grasp nothing. When sample containers are taller than the fixed height, the robot may grasp the sample containers at sensitive or unstable areas, which may cause the sample containers to break or result in other problems.
[0037] Some sample containers may have indicia, such as barcodes and barcode labels, attached to the exterior surfaces of the sample containers. The indicia may include reference patient information or testing criteria, for example. These barcodes and barcode labels should remain readable or scannable as the sample containers are moved throughout the laboratory system. The locations of the indicia may vary between different types of sample containers. Thus, if the same grasping rules are used for all the different types of sample containers, the grippers may grasp and consequently damage the indicia.
[0038] In some situations, the exteriors of some sample containers may have exposed adhesives, such as from barcode labels, and/or liquids spilled on the exterior surfaces. These adhesives and/or liquids may hinder grasping actions of the robot grippers should the robot grippers grasp these regions. For example, the adhesives may cause the robot grippers to adhere to the sample containers and the liquids may cause the robot grippers to slide relative to the sample containers.
[0039] The methods and apparatus described herein overcome the issues with conventional laboratory systems by using imaging devices to capture images of the sample containers before the robots grasp the sample containers. Visual information generated by the imaging devices is used to determine the best (optimal) grasping locations on the sample containers for the robots to grasp. The methods and apparatus described herein reduce instances of breaks, spills, and other failures during sample container handling. The methods and apparatus described herein may direct the robots to grasp sample containers at different heights to accommodate sample containers with different geometric configurations.
[0040] Machine learning models or networks may identify and locate semantic keypoints on certain objects in an image. A semantic keypoint is a predetermined feature of or “point of interest” on a sample container to be analyzed in the determination of an optimum grasping location on the sample container. Examples of semantic keypoints may include corners or edges of barcode labels, corners or edges of a sample container, corners or edges of a cap on the sample container, height of a liquid in a sample container, and a center of mass of the sample container. The semantic keypoints may be analyzed, such as by a machine learning model, to determine the optimal grasping locations of the sample containers. The methods may also consider the degrees-of-freedom of the robot gripper to provide the optimal grasping locations for the grippers to grasp the sample containers. The optimal grasping locations may result in safe and reliable grasps while abating potential damage to the sample containers and the system through breaks and spills of the sample containers.
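For illustration, the semantic keypoint classes named above could be enumerated as labels for annotation and training; the exact label set is an assumption and may differ from the one used in practice.

```python
from enum import Enum, auto

class SemanticKeypoint(Enum):
    """Illustrative semantic keypoint classes for annotating sample container images."""
    TUBE_TOP_LEFT = auto()
    TUBE_TOP_RIGHT = auto()
    TUBE_BOTTOM_LEFT = auto()
    TUBE_BOTTOM_RIGHT = auto()
    CAP_TOP = auto()
    BARCODE_LABEL_CORNER = auto()
    LIQUID_SURFACE = auto()
    CENTER_OF_MASS = auto()
```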
[0041] The semantic keypoints may be analyzed to identify one or more surface and/or geometric properties of the sample containers. The surface properties may include the heights of the sample containers, the materials from which the sample containers are made, indicia located on the exteriors of the sample containers, surface anomalies, and liquid levels in the sample containers. Based on the analyses, the grasping rules of the robot grippers may be determined. The grasping rules may avoid blocking indicia or contacting liquids, for example. Grasping locations for the robot grippers on the sample containers may be determined in response to the grasping rules. The grasping locations may be locations where the grippers will not adversely affect the sample containers. The grasping locations also may be locations that enable the grippers to grasp the sample containers without sliding or sticking to the sample containers. These and other systems, methods, and devices that determine grasping rules of robot grippers used in diagnostic laboratory systems are described in greater detail below in connection with FIGS. 1-11.
[0042] Reference is made to FIG. 1, which is a perspective view of a laboratory 100. A diagnostic laboratory system 102 may be located within the laboratory 100. The diagnostic laboratory system 102 may be configured to perform a plurality of analyses or tests on a plurality of different biological samples. For example, the tests may determine levels of constituents or chemicals present in biological samples, such as blood, urine, cerebral fluid, and other biological samples. In other embodiments, the diagnostic laboratory system 102 may be configured to perform a plurality of different tests on a single biological sample type, such as blood serum. In other embodiments, the diagnostic laboratory system 102 may be configured to perform a single type of test on a single biological sample type, such as blood serum.
[0043] The diagnostic laboratory system 102 may include a plurality of diagnostic instruments 104 (a few labelled) that are configured to perform the same or different tests on the biological samples. In some embodiments, the diagnostic instruments 104 may be interconnected by a transport system (e.g., track 216 - FIG. 2). The transport system may be configured to transport the biological samples between the diagnostic instruments 104 and/or other devices in the laboratory system 102, such as centrifuges and decappers. The configuration of the laboratory system 102 may be different than the configuration shown in FIG. 1. In some embodiments, the laboratory system 102 may only include a single one of the diagnostic instruments 104.
[0044] The diagnostic laboratory system 102 may be coupled to a computer 120 that may be located within the laboratory 100 or external to the laboratory 100. In some embodiments, portions of the computer 120 may be located within the laboratory 100 and other portions of the computer 120 may be located external to the laboratory 100. The computer 120 may include a processor 122 and a memory 124, wherein the memory 124 stores programs 126 configured to be executed or run on the processor 122. In some embodiments, the memory 124 and/or the programs 126 may be located external to the computer 120. For example, the computer 120 may be connected to the Internet to access external data and the like. The programs 126 may operate the diagnostic instruments 104 and process data generated by the diagnostic instruments 104.
[0045] The memory 124 may be any suitable type of memory, such as, but not limited to one or more of a volatile memory and/or a non-volatile memory. The memory 124 may have a plurality of programs 126 that include instructions stored therein that, when executed by processor 122, cause the processor 122 to perform various actions specified by one or more of the stored instructions. The program instructions may be provided to the processor 122 to perform operations in accordance with the present systems and methods specified in the flowcharts and/or block diagrams described herein. The processor 122, so configured, may become a special purpose machine particularly suited for performing in accordance with the present systems and methods. The program instructions, which may be stored in a computer readable medium such as the memory 124, can direct the processor 122 to function in a particular manner. The term "memory" as used herein can refer to both non-transitory and transitory memory.
[0046] At least one of the diagnostic instruments 104 or other components may be a sample handler 130, which is described in greater detail below. In the embodiment of FIG. 1 , the sample handler 130 is a component in the laboratory system 102. In other embodiments, the laboratory system 102 may include a plurality of sample handlers. The operations performed by the sample handler 130 may be implemented in one or more of the diagnostic instruments 104. The sample handler 130 may be located in various locations in the diagnostic laboratory system 102, such as in individual ones of the diagnostic instruments 104.
[0047] The sample handler 130 may be configured to receive items into the laboratory system 102 and to disperse items from laboratory system 102. The items may include sample containers (e.g., sample containers 210 - FIG. 2) and reagent packages. The sample containers 210 received into the sample handler 130 may contain biological samples that are to be tested by one or more of the diagnostic instruments 104. The sample containers 210 dispersed from the laboratory system 102 may contain residual liquids after the biological samples have been tested.
[0048] Additional reference is made to FIG. 2, which illustrates a more detailed embodiment of the computer 120 in communication with the sample handler 130. The computer 120 may include a plurality of programs 126 that may be run on the processor 122. One of the programs 126 may be a robot controller 204 that may be configured to direct a robot 206 to move to specific locations as described herein. For example, the robot controller 204, executing on the processor 122, may direct the robot 206 or portions of the robot 206 to move within the sample handler 130 and to perform certain operations. The robot controller 204 may also direct the robot 206 to move sample containers 210 between sample container carriers 212 and sample movers 214. The sample movers 214 may move the sample containers 210 between diagnostic instruments 104 (FIG. 1) by way of a transport system, which in the embodiment of FIG. 2 may include a track 216 configured to move the sample movers 214. The embodiment of the sample handler 130 shown in FIG. 2 has three sample container carriers 212, which are referred to individually as a first container carrier 212A, a second container carrier 212B, and a third container carrier 212C. Some of the container slots 232 may be occupied with sample containers 210 and are identified with dark fill. A sample container 210A is shown occupying a container slot in the third container carrier 212C and will be referenced in examples herein.
[0049] An image processor 220 may be coupled to the computer 120 and may be configured to receive real-world image data 222 generated by an imaging device 224 (e.g., a digital camera). In some embodiments, one or more portions of the image processor 220 may be implemented in the imaging device 224. In some embodiments, the imaging device 224 may be configured to capture three-dimensional (3D) images of the sample containers 210, the sample container carriers 212, and other items. The image processor 220 may be configured to direct the imaging device 224 to capture images, such as images of the sample containers 210 and other items in the laboratory system 102. The laboratory system 102 may include a plurality of imaging devices. Some imaging devices may be stationary, and some may be mobile, such as imaging devices affixed to the robot 206 as described herein.
[0050] The real-world image data 222 may be data generated by the imaging device 224 and may include color data indicative of colors present in the captured images. In some embodiments, the image data 222 may be representative of 3D scenes. For example, the image data 222 may be representative of two images captured from two adjacent viewpoints. In other embodiments, the imaging device 224 may be a 3D camera that generates image data that includes data indicating distances between objects in the captured images and the imaging device 224.
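When the image data includes depth, a located keypoint can be back-projected into a 3D position using the standard pinhole camera model; the sketch below assumes calibrated intrinsics and example pixel and depth values, none of which come from the disclosed system.

```python
import numpy as np

def pixel_to_camera_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with a measured depth into a 3D point in
    the camera frame using the pinhole model. The intrinsics (fx, fy, cx, cy)
    would come from calibrating the imaging device."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example with assumed intrinsics: a keypoint at pixel (410, 260), 0.32 m away.
p_cam = pixel_to_camera_point(410, 260, 0.32, fx=615.0, fy=615.0, cx=320.0, cy=240.0)
```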
[0051] The computer 120 may include a semantic keypoints identification model 226 configured or trained to identify semantic keypoints in images captured by the imaging device 224. The semantic keypoints identification model 226 may be a machine learning model (e.g., a software model) or algorithm, such as a trained model or a network. In some embodiments, the semantic keypoints identification model 226 is or includes a deep neural network. In some embodiments, the semantic keypoints identification model 226 may be or include a convolutional neural network (CNN) trained or configured to identify semantic keypoints in images. The semantic keypoints identification model 226 may be trained to identify properties such as dimensions of the sample containers 210, sample container geometry, whether the sample containers 210 have caps, barcodes and other indicia, and locations of these items. The semantic keypoints identification model 226 may also be trained to identify and locate anomalies such as sample contents that have spilled from the sample containers 210, adhesives used to affix barcode labels to the sample containers 210, damage or markings on barcode labels, and other anomalies.
[0052] The semantic keypoints identification model 226 may also determine heights of the sample containers 210 and/or heights (e.g., levels) of samples in the sample containers 210. The heights of the samples may be determined by identifying transitions of color or brightness in images of the sample containers 210. In addition, the semantic keypoints identification model 226 may determine materials (e.g., glass or plastic) of the sample containers 210 by analyzing light reflected from or transmitted through the sample containers 210. When items other than the sample containers 210, such as reagent packages, are imaged, the semantic keypoints identification model 226 may analyze the resulting image data in similar manners as described above.
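As one illustration of locating a sample height from brightness transitions, the sketch below finds the strongest row-to-row brightness change inside a grayscale crop of the tube interior; the threshold and the synthetic example values are assumptions rather than parameters of the model 226.

```python
import numpy as np

def estimate_liquid_level_row(gray_crop, min_jump=20.0):
    """Estimate the image row of the liquid surface inside a tube crop by
    finding the strongest brightness transition between adjacent rows.
    `gray_crop` is a 2D array of grayscale values over the tube interior."""
    row_means = gray_crop.mean(axis=1)   # average brightness per row
    jumps = np.abs(np.diff(row_means))   # row-to-row transitions
    best_row = int(np.argmax(jumps))
    return best_row if jumps[best_row] >= min_jump else None

# Example on synthetic data: bright air above row 60, darker liquid below.
crop = np.vstack([np.full((60, 32), 200.0), np.full((68, 32), 90.0)])
level_row = estimate_liquid_level_row(crop)  # -> 59 (transition between rows 59 and 60)
```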
[0053] The semantic keypoints identification model 226 may also determine or calculate centers of mass of the sample containers 210 at least partially in response to other semantic keypoints identified by the semantic keypoints identification model 226. The center of mass may be calculated based on the sizes of the sample containers 210, whether the sample containers 210 include caps, the heights of samples in the sample containers 210, and other variables. In some embodiments, the semantic keypoints identification model 226 may calculate centroids of the sample containers 210 using other semantic keypoints identified by the semantic keypoints identification model 226.
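A rough center-of-mass estimate from a few semantic keypoints might look like the sketch below, which treats the tube wall, cap, and liquid column as separate contributions; all masses and the per-pixel liquid weight are assumed example values, not parameters of the model 226.

```python
def estimate_center_of_mass_height(tube_bottom_y, tube_top_y, liquid_top_y,
                                   has_cap, tube_mass_g=8.0, cap_mass_g=2.0,
                                   liquid_g_per_px=0.05):
    """Rough center-of-mass height (in the same coordinates as the inputs)
    from a few semantic keypoints: the empty tube contributes at its
    midpoint, the cap near the top, and the liquid column uniformly between
    its surface and the tube bottom."""
    parts = [(tube_mass_g, (tube_bottom_y + tube_top_y) / 2.0)]   # tube wall
    if has_cap:
        parts.append((cap_mass_g, tube_top_y))                     # cap near the top
    liquid_height_px = max(0.0, tube_bottom_y - liquid_top_y)
    if liquid_height_px > 0:
        parts.append((liquid_g_per_px * liquid_height_px,
                      (tube_bottom_y + liquid_top_y) / 2.0))        # liquid column
    total_mass = sum(m for m, _ in parts)
    return sum(m * y for m, y in parts) / total_mass

# Example in image coordinates (larger y is lower): a half-full capped tube.
com_y = estimate_center_of_mass_height(tube_bottom_y=300, tube_top_y=100,
                                       liquid_top_y=220, has_cap=True)
```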
[0054] A grasping location algorithm 230 may be a program, machine learning model, and/or an algorithm configured to identify optimal gripper locations on the sample containers 210 or other items where the robot 206 may grasp the sample containers 210 or the other items. The grasping location algorithm 230 may determine optimal grasping locations by analyzing the semantic keypoints generated by the semantic keypoints identification model 226. In one or more embodiments, the grasping location algorithm 230 may be a convolutional neural network or rule-based network that performs the methods described herein. In some embodiments, the grasping location algorithm 230 may be a deep neural network trained to determine optimal grasping locations based at least partially on the semantic keypoints. In some embodiments, the grasping location algorithm 230 may identify optimal grasping locations by way of heatmaps. In other embodiments, the optimal grasping locations may be determined to be a semantic keypoint, for example.
[0055] Some of the sample containers 210 may have several optimal grasping locations and the grasping location algorithm 230 may select one of these grasping locations as an optimal grasping location. The grasping location algorithm 230 may analyze the data generated by the semantic keypoints identification model 226 to identify optimal or proper locations where the robot 206 may grasp individual ones of the sample containers 210. The optimal grasping locations may avoid barcode labels, indicia, anomalies, caps, spilled samples and other liquids, and other items that may interfere with proper grasping of the sample containers 210.
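One simple way to pick a single grasping location from a predicted heatmap while respecting keep-out regions (e.g., pixels covered by a barcode label, cap, or anomaly) is shown below; this is an illustrative selection rule, not the rule used by the grasping location algorithm 230.

```python
import numpy as np

def select_grasp_from_heatmap(heatmap, keep_out_mask=None):
    """Pick the pixel with the highest predicted grasp score after zeroing
    out keep-out regions. Returns (row, col) or None if nothing remains."""
    scores = heatmap.copy()
    if keep_out_mask is not None:
        scores[keep_out_mask] = 0.0
    if scores.max() <= 0.0:
        return None
    return tuple(np.unravel_index(int(np.argmax(scores)), scores.shape))

# Example: a toy 100 x 100 heatmap whose strongest peak falls on a masked
# label region, so the next-best unmasked location is chosen instead.
hm = np.zeros((100, 100), dtype=np.float32)
hm[50, 20], hm[50, 80] = 0.9, 0.8
mask = np.zeros_like(hm, dtype=bool)
mask[40:60, 10:30] = True                       # barcode label keep-out
grasp_rc = select_grasp_from_heatmap(hm, mask)  # -> (50, 80)
```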
[0056] The computer 120 may be coupled to a workstation 240 that enables users to communicate with the computer 120. The workstation 240 may include a display 242 and a keyboard 244 and/or other input devices. Data generated by the computer 120 may be displayed on the display 242. The user may input data to the computer 120 via the keyboard 244 and/or other input devices. The display 242 may be configured to display images captured by the imaging device 224 and/or other imaging devices. The display 242 may also be configured to display semantic keypoints determined by the semantic keypoints identification model 226, and/or grasping locations determined by the grasping location algorithm 230. The display 242 may also display heatmaps generated by the programs 126 described herein.
[0057] The computer 120 and/or the laboratory system 102 may be coupled to a laboratory information system (LIS) 250. In some embodiments, the LIS 250 may be a program and may be executed by the computer 120. The LIS 250 may receive data and/or instructions from a hospital information system (HIS) 252, which may be at least partially implemented in a program executed by the computer 120. Medical professionals may enter testing requirements for specific patients into the HIS 252. For example, a doctor may require that blood taken from a first patient be tested for a first chemical and blood taken from a second patient be tested for a second chemical. These testing requirements may be input to the HIS 252. The testing requirements may then be transmitted to the LIS 250, which may generate a testing plan to be run by the laboratory system 102, such as on specific ones of the diagnostic instruments 104 (FIG. 1) to perform the tests.
[0058] Additional reference is made to FIG. 3, which illustrates a front perspective view of an embodiment of the robot 206 grasping the sample container 210A. The robot 206 may include a gripper 304 (e.g., an end effector) configured to grasp the sample containers 210 and move the sample containers 210 throughout the sample handler 130, including into and out of the sample container carriers 212 (FIG. 2). In the embodiment of FIG. 3, the robot 206 is illustrated moving the sample container 210A into and out of the third container carrier 212C. The robot 206 may be configured to move all the sample containers 210 into and out of all the sample container carriers 212 (FIG. 2). The robot 206 may also be configured to move all the sample containers 210 into and out of the sample movers 214 (FIG. 2).
[0059] The robot 206 may include a plurality of gantries that enable the gripper 304 to move in an x-direction, a y-direction, and a z-direction. First gantries 310 may be configured to move the gripper 304 in the Y-direction. A second gantry 312 may be configured to move the gripper 304 in the X-direction. A third gantry 314 may be configured to move the gripper 304 in the Z-direction. The gantries may be controlled by motors (not shown) that may receive signals generated by the robot controller 204 (FIG. 2). In some embodiments, the robot 206 may be configured as a selective compliance assembly robot arm (SCARA) or a 6DOF robot arm that can navigate throughout the diagnostic instruments 104 (FIG. 1), including the sample handler 130.
[0060] The robot 206 may include an arm 320 to which the gripper 304 may be attached. In some embodiments, the arm 320 may be affixed to the third gantry 314. The robot 206 and the imaging device 224 may be arranged in an eye-in-hand configuration, wherein the imaging device 224 moves with the gripper 304. The imaging device 224 also may be affixed to the arm 320, which may move with the gripper 304. Thus, the robot 206 may be configured to move the imaging device 224 throughout the sample handler 130 to capture images of items, such as the sample containers 210 (FIG. 2), from various viewpoints as described herein. Different embodiments of the imaging device 224 may have many different physical configurations that enable the imaging device 224 to be affixed to the gripper 304 without interfering with the operation of the gripper 304.
[0061] Additional reference is made to FIG. 4A, which illustrates an enlarged view of the gripper 304 of FIG. 3. The gripper 304 illustrated in FIG. 4A includes two fingers 400 that are referred to individually as a first finger 400A and a second finger 400B. Ends of the fingers 400 may have nodes 404 that are configured to contact the sample container 210A. Friction between the nodes 404 and the sample container 210A enables the gripper 304 to grasp the sample container 210A and move the sample container 210A as described herein. In the embodiment of FIG. 4A, each of the two fingers 400 includes two nodes, which are referred to as node 404A, node 404B, node 404C, and node 404D. The gripper 304 is described as moving herein by way of the robot 206. The robot controller 204 may include instructions that when executed by the processor 122 cause predetermined forces to be applied by the nodes 404 of the gripper 304 to the sample container 210A. In some embodiments, the forces may be at least partially dependent on the material of the sample container 210A and may be determined by the grasping location algorithm 230.
[0062] The imaging device 224 may be attached to the gripper 304 as shown in FIG. 4A. For example, the imaging device 224 may be attached to one of the fingers 400. Thus, the imaging device 224 may move with the fingers 400 of the gripper 304, which enables the imaging device 224 to capture images of the sample containers 210 (FIG. 3) as the gripper 304 is moved throughout the sample handler 130. The imaging device 224 may also be able to capture images of other items. For example, robots located in other components of the laboratory system 102 may have similarly mounted imaging devices that enable images to be captured within those other components. [0063] In the embodiment of FIG. 4A the imaging device 224 may have a first field of view 412 extending generally in the direction of the fingers 400, which in the configuration of FIG. 4A is in the z-direction. The first field of view 412 may enable items, such as the sample container 210A, to be imaged when the gripper is located above the sample container 210A. In some embodiments, the imaging device 224 may have a second field of view 414 that is in a direction other than the direction of the first field of view 412. In the embodiment of FIG. 4A, the second field of view 414 may be orthogonal to the first field of view 412. The second field of view 414 may enable the imaging device 224 to capture elevation views of objects such as the sample containers 210. In other embodiments (not shown), the imaging device 224 may be attached to an inside of a finger 400 above a node 404 such that the imaging device 224 faces the sample container 210 as the fingers 400A,B close around the sample container 210. In still other embodiments (not shown), the imaging device 224 may be attached to another structure in a fixed location such that the imaging device 224 does not move with the robot 206 and may be configured to capture images of the sample containers 210 and the gripper 304.
[0064] In some embodiments, the imaging device 224 may include an RGBD sensor, which generates red, green, and blue color data and depth data. In some embodiments, the imaging device 224 may include an RGB sensor with a separate depth or distance sensor. The imaging device 224 is shown attached to the gripper 304, and the depth information may be measured between the items being captured and the location of the gripper 304. [0065] Additional reference is made to FIG. 4B, which illustrates the gripper 304 positioned horizontally on the x-axis. In this embodiment, the gripper 304 may have several degrees of freedom, such as six degrees of freedom, that may enable the gripper to move as shown in FIG. 4B. In such a position, the imaging device 224 may capture elevation images of objects in the laboratory system 102, such as the sample container 210A, using the first field of view 412. In such embodiments, the imaging device 224 may only need one field of view (e.g., the first field of view 412).
[0066] FIG. 4C illustrates a top plan view of the sample container 210A showing locations where the nodes 404 may contact the exterior surface of the sample container 210A. The gripper 304 may be configured to rotate or pivot in an arc R41 relative to the sample container 210A. The nodes 404 shown as solid lines indicate first grasping locations where the nodes 404 may contact the sample container 210A. The nodes 404 shown as dashed lines indicate second grasping locations where the nodes 404 may contact the sample container 210A. For example, the gripper 304 and thus the fingers 400 may rotate as shown by the arc R41 so that the nodes 404 may contact the sample container 210 at various arcuate locations on the surface of the sample container 210A. The grasping location algorithm 230 may determine an optimal grasping location on the surface of the sample container 210A and the gripper 304 may rotate about the arc R41 to the optimal grasping location.
[0067] The gripper 304 and thus the fingers 400 and the nodes 404 may be configured to move in the z-direction to grasp the sample container 210A at various vertical locations or heights on the sample container 210A. FIG. 4D illustrates the nodes 404 grasping the sample container 210A at a higher vertical position than the nodes 404 grasping the sample container 210A shown in FIG. 4A. In the configuration of FIG. 4D, the gripper 304 and the fingers 400 may have positioned the nodes 404 so that the nodes 404 do not contact the barcode label 410 located on the surface of the sample container 210A. In the configuration of FIG. 4A, the nodes 404 are located vertically lower than in FIG. 4D and are in contact with the barcode label 410. The methods and apparatus described herein may prevent the nodes 404 from contacting barcode labels in the manner shown in FIG. 4A. For example, the grasping location algorithm 230 may direct the gripper 304 to grasp the sample container 210A so as to avoid the barcode label 410. In some embodiments, the nodes 404 may contact the barcode labels, but avoid direct contact with the barcodes on the labels.
[0068] Additional reference is made to FIG. 4E, which shows the gripper 304 configured with a large number of degrees of freedom, such as six degrees of freedom. The degrees of freedom enable the gripper 304 to have many different poses to grasp and move the sample container 210A. The degrees of freedom may also be referred to as degrees of pose, and the gripper 304 may have other numbers of degrees of freedom. The gripper 304 of FIG. 4E may be configured to move or rotate in an arc R42. Thus, the gripper 304, the fingers 400, and the nodes 404 may grasp the sample container 210A when the sample container 210A is askew or in a plurality of different poses. The ability of the gripper 304 to move in the arc R42 may be in addition to movements in the z-direction and along the arc R41 as described in FIGS. 4C and 4D. In some embodiments, the gripper 304 may also be configured to move in an arc that is perpendicular to the arc R42.
[0069] The gripper 304 has been described as having two fingers 400 and four nodes 404. The gripper 304 may have different configurations of fingers and nodes. Reference is made to FIG. 4F, which illustrates a top plan view of the sample container 210A where three nodes 418 are contacting the exterior of the sample container 210A. The three nodes 418 may be spaced equally around the circumference of the outer surface of the sample container 210A. The gripper 304 may have other numbers of nodes that are configured to contact the outer surface of the sample container 210A.
[0070] Different types of sample containers 210 (FIG. 2) may be used in the sample handler 130. The different types of sample containers 210 may have different surface properties and geometric properties. The different types of sample containers 210 may be grasped and moved throughout the laboratory system 102 (FIG. 2) in a similar manner as the sample container 210A and as determined by the programs 126 (FIG. 2).
[0071] As described above, semantic keypoints may be applied to or identified in images of the sample containers 210 (FIG. 2) by the semantic keypoints identification model 226. In some embodiments, the semantic keypoints identification model 226 may identify semantic keypoints using a model (e.g., a CNN), which may be integral with the semantic keypoints identification model 226 and may analyze the image data 222 (FIG. 2) representative of the sample containers 210. The semantic keypoints may enable determinations of sample characteristics such as sample container orientation, edges of the sample containers 210, and edges of items located on the sample containers 210.
[0072] The semantic keypoints may be defined as category-level semantic points on images of the sample containers 210, such as 3D images of the sample containers 210. Semantic keypoints may be points of interest with semantic meanings for images of the sample containers 210. The semantic keypoints may include corners of barcode labels, edges of sample container tubes, heights of liquids in the sample containers, centers of mass of the sample containers, and other characteristics. In some embodiments, semantic keypoints may be referred to as category-level semantic points on 3D objects, wherein the categories are objects such as barcodes, caps, liquids, and other objects that may be identified in the images.
[0073] Semantic keypoints may provide concise abstractions for a variety of visual understanding tasks, such as grasping operations performed by the robot 206 (FIG. 3). The semantic keypoints identification model 226 may define semantic keypoints separately for each category in the images of the sample containers 210, such as edges or corners on tubes, caps, and barcode labels, and may provide concise abstractions of these objects regarding their compositions, shapes, and poses. Semantic keypoints may be identified using deep learning methods, such as Mask R-CNN (ICCV 2017) and PifPaf (CVPR 2019). In other embodiments, other convolutional neural networks (CNNs) may be employed to identify the semantic keypoints. Semantic keypoint identification may involve simultaneously detecting sample containers 210 and localizing their semantic keypoints.
[0074] Additional reference is made to FIG. 5, which illustrates an image of an elevation view of a sample container 500, which may be similar to one or more of the sample containers 210 (FIG. 2). The image of the sample container 500 may be a real-world image captured by the imaging device 224 or a synthetic or computer-generated image generated by the computer 120 or another computer. The image of the sample container 500 may be used to train the semantic keypoints identification model 226 and/or the grasping location algorithm 230. The sample container 500 shown in FIG. 5 includes a tube 502 and a cap 504 that seals the tube 502. Some versions of the sample containers 210 do not include caps. The tube 502 may have an indicia, which in the embodiment of FIG. 5 is a barcode label 506 with a barcode 508 printed on the barcode label 506. The tube 502 may contain a liquid 510, which may be a biological sample. The sample container 500 may have a height H51 extending between a tube bottom 502A and a cap top 504A. The tube 502 may have a height H52 extending between the tube bottom 502A and a cap bottom 504B. The liquid 510 may have a height H53 extending between the tube bottom 502A and the top of the liquid 510.
[0075] FIG. 5 includes a plurality of dots that represent semantic keypoints 514. The semantic keypoints identification model 226 may identify the semantic keypoints 514 and information or data derived from the semantic keypoints 514 as described herein. The semantic keypoints 514 are shown as individual single dots for illustration purposes. In some embodiments, the individual semantic keypoints 514 may be a plurality of keypoints that define or outline portions of objects in the image of the sample container 500. A semantic keypoint 516A and a semantic keypoint 516B identify bottom corners of the tube bottom 502A. The difference between the semantic keypoint 516A and the semantic keypoint 516B is the width W51 of the lower portion of the tube 502. A semantic keypoint 518A and a semantic keypoint 518B identify top corners of the tube 502 at the bottom of the cap 504. The difference between the semantic keypoint 518A and the semantic keypoint 518B is the width of the upper portion of the tube 502. If the width at the upper portion is the same as the width W51 at the lower portion, the tube 502 may be considered to be cylindrical.
[0076] Semantic keypoints 522A, 522B, 522C, and 522D mark corners or edges of the cap 504. A line between the semantic keypoint 522A and the semantic keypoint 522B defines the cap top 504A. A line between the semantic keypoint 522C and the semantic keypoint 522D defines the cap bottom 504B. The difference between the cap top 504A and the cap bottom 504B represents a height H54 of the cap 504. The difference between the semantic keypoint 522A and the semantic keypoint 522B is the width W52 of the cap 504. A semantic keypoint 524 may identify the color and/or texture of the cap 504. A semantic keypoint 526 marks the top of the liquid 510 and a difference between the semantic keypoint 526 and the semantic keypoint 516A is the height H53 of the liquid 510.
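By way of a non-limiting illustration, the following sketch shows how dimensions such as the width W51, the cap height H54, and the liquid height H53 might be computed once 2D semantic keypoints have been located in an image. The keypoint names, pixel coordinates, and the pixel-to-millimeter scale factor are assumptions introduced solely for this example and are not taken from the disclosure.

```python
import numpy as np

# A minimal sketch: deriving container dimensions from 2D semantic keypoints.
# The keypoint names mirror FIG. 5; the coordinates and the pixel-to-mm
# scale factor are placeholder assumptions for illustration only.
keypoints = {
    "tube_bottom_left":  np.array([112.0, 480.0]),   # 516A (x, y in pixels)
    "tube_bottom_right": np.array([188.0, 480.0]),   # 516B
    "tube_top_left":     np.array([112.0, 140.0]),   # 518A
    "tube_top_right":    np.array([188.0, 140.0]),   # 518B
    "cap_top_left":      np.array([104.0,  90.0]),   # 522A
    "cap_top_right":     np.array([196.0,  90.0]),   # 522B
    "cap_bottom_left":   np.array([104.0, 140.0]),   # 522C
    "liquid_top":        np.array([150.0, 300.0]),   # 526
}

MM_PER_PIXEL = 0.25  # assumed calibration constant

def distance_mm(a: str, b: str) -> float:
    """Euclidean distance between two keypoints, converted to millimeters."""
    return float(np.linalg.norm(keypoints[a] - keypoints[b])) * MM_PER_PIXEL

tube_width_w51 = distance_mm("tube_bottom_left", "tube_bottom_right")
cap_width_w52 = distance_mm("cap_top_left", "cap_top_right")
cap_height_h54 = distance_mm("cap_top_left", "cap_bottom_left")

# H53 is a vertical dimension, so only the y-coordinates are compared.
liquid_height_h53 = (
    keypoints["tube_bottom_left"][1] - keypoints["liquid_top"][1]
) * MM_PER_PIXEL

# A roughly equal width at the top and bottom suggests a cylindrical tube.
tube_width_top = distance_mm("tube_top_left", "tube_top_right")
is_cylindrical = abs(tube_width_w51 - tube_width_top) < 2.0
```

In practice, such derived dimensions could be supplied to the grasping location algorithm 230 alongside the raw keypoint locations.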
[0077] Semantic keypoints 530A, 530B, 530C, and 530D mark edges or corners of the barcode label 506. Semantic keypoints 532A, 532B, 532C, and 532D mark edges or corners of the barcode 508 itself. The locations of the semantic keypoints 530A, 530B, 530C, and 530D may identify the location of the barcode label 506 on the tube 502. The location of the barcode label 506 may be identified in addition to the size of the barcode label 506. The semantic keypoints 532A, 532B, 532C, and 532D may identify the location of the barcode 508 on the barcode label 506 in addition to the size of the barcode 508.
[0078] Semantic keypoint detection does not have to be limited to the semantic keypoints 514 shown in FIG. 5. Additional semantic keypoints may identify and locate instances of liquids (e.g., biological fluids) on the exteriors of the sample containers 210. For example, biological liquids may escape from a sample container that has broken or during a handling procedure, such as a de-capping procedure. The biological liquids may flow onto the outside of the sample container and dry, which can affect the appearance of the sample container. In some situations, the biological liquids can change the shape of the image of the sample container 500 as processed by the programs 126 (FIG. 2). In some situations, the leaked liquid may spread onto surrounding equipment and possibly onto other sample containers. Embodiments of the grasping location algorithm 230 may be trained to avoid these areas and identify the shapes of the sample containers that are obscured by the liquids.
[0079] Additional reference is made to FIGS. 6A-6E, which illustrate examples of images of different sample containers and areas on the sample containers that may be avoided during grasping operations. For illustrative purposes, not all the semantic keypoints 514 shown in FIG. 5 are shown in FIGS. 6A-6E. The sample container 600A is relatively short, has a barcode label 604 positioned in the middle of the tube portion, and does not have a cap. A barcode 606 may be centrally located on the barcode label 604. The sample container 600A is almost fully filled with a sample 608 as shown by hatching in the sample container 600A. An anomaly 610 is located on the upper portion of the barcode label 604 and the barcode 606. The anomaly 610 may be spilled liquid or adhesive from the barcode label 604, for example.
[0080] A real-world image of the sample container 600A may be captured by the imaging device 224 (FIG. 2) and analyzed by the image processor 220, which may process the image data 222 generated by the imaging device 224. The semantic keypoints identification model 226 may identify semantic keypoints on the sample container 600A in a similar manner as the identification of semantic keypoints 514 (FIG. 5) described with reference to the sample container 500. The anomaly 610 may be identified and/or located by semantic keypoints 612 using the semantic keypoints identification model 226, which has been trained, e.g., to identify the anomaly 610. The semantic keypoints identification model 226 may also identify the height of the sample 608 by a semantic keypoint 613. The grasping location algorithm 230 may be trained to avoid the anomaly 610 identified by the semantic keypoints identification model 226 when generating instructions to the robot controller 204 to direct the gripper 304 (FIG. 3) to grasp the sample container 600A as described in greater detail herein. The semantic keypoints identification model 226 may determine the center of mass of the sample container 600A based at least in part on the level of the sample 608 in the sample container 600A.
[0081] The sample container 600B is relatively tall, includes a cap 615, and has a barcode label 614 located on the upper portion of the tube 616. A barcode 618 may be located on the barcode label 614. An image of the sample container 600B may be captured by the imaging device 224 (FIG. 2) and analyzed to identify surface and/or geometric properties of the sample container 600B. The semantic keypoints identification model 226 may identify the surface and/or geometric properties in the image data 222 including the height and width of the sample container 600B, cap status, and barcode label size and position as described above. In addition, the semantic keypoints identification model 226 may generate a semantic keypoint 620 that identifies or locates the height of the sample 622 in the sample container 600B. The semantic keypoints identification model 226 may determine the center of mass of the sample container 600B based at least in part on the height of the sample 622. Based on these properties, the grasping location algorithm 230 may determine the optimal grasping location as described in greater detail below.
[0082] The sample container 600C does not include a cap and is empty. The semantic keypoints identification model 226 may identify the height, width, cap status, and/or liquid height (empty). Based on these properties, the grasping location algorithm 230 may determine the optimal grasping location as described in greater detail below.
[0083] The sample container 600D includes a barcode label 630 and is filled with a sample 632. The height of the sample 632 is located by a semantic keypoint 634. The barcode label 630 is skewed and is not properly attached to the sample container 600D. The edges and/or corners of the barcode label 630 may be located by semantic keypoints 636. Image data 222 (FIG. 2) of the sample container 600D may be analyzed by the semantic keypoints identification model 226, which may identify the skewed and partially detached barcode label 630. The grasping location algorithm 230 may determine optimal grasping locations to be well above the barcode label 630 in order to prevent the nodes 404 (FIG. 4A) from further damaging or detaching the barcode label 630 as described herein.
[0084] The sample container 600E is skewed. For example, the sample container 600E may have moved within a sample mover (e.g., one of the sample movers 214 - FIG. 2). The semantic keypoints identification model 226 may locate semantic keypoints 644 at the corners of the image of the sample container 600E and analyze the semantic keypoints 644 to calculate the skew of the sample container 600E. The image data 222 (FIG. 2) of the sample container 600E may also be analyzed by the semantic keypoints identification model 226 to identify a semantic keypoint 640 indicating a height of a sample 642 stored in the sample container 600E. In addition, the semantic keypoints identification model 226 may use the height of the sample 642 to determine the center of mass of the sample container 600E. The grasping location algorithm 230 may determine the optimal grasping location based at least in part on the height of the sample 642 and the skew of the sample container 600E. [0085] The apparatus and methods described herein may consider the condition of barcodes and barcode labels on the sample containers when determining optimal grasping locations. Barcode labels imaged in the laboratory system 102 may be in a variety of different states. For example, some of the barcode labels may be applied properly to the sample containers, but other barcode labels may be skewed. In some embodiments, the barcode labels can lose their adhesiveness and peel off the sample containers as shown in FIG. 6D. Parts of the barcode labels may also tear during processing, which may leave pieces of the barcode labels that can alter the overall composite shape of the images of the sample containers 210. The apparatus and methods described herein may be able to generate semantic keypoints and process the semantic keypoints to identify the barcodes and barcode labels. For example, the semantic keypoints identification model 226 may be trained by or include a CNN that locates and/or identifies barcodes and barcode labels along with problems with the barcodes and barcode labels.
[0086] In some embodiments, the semantic keypoints identification model 226 may analyze the image data 222 and/or some of the already identified/located semantic keypoints to determine centers of mass and/or center of gravity of the sample containers 210. The following description refers to centers of mass calculations and uses, but the same description may apply to center of gravity calculations and uses. Center of mass information may be used by the grasping location algorithm 230 to determine the optimal grasping locations where the gripper 304 grasps the sample containers 210. In some embodiments, a center of mass algorithm may be a separately trained model or network in addition to the semantic keypoints identification model 226.
[0087] The center of mass data may be used by the grasping location algorithm 230 to locate steady grasping locations on the sample containers 210. Referring to FIG. 5, the center of mass of the sample container 500 is a function of the height H53 of the liquid 510. For example, the height H53 can cause the calculated grasping locations to change due to the change in mass distribution caused by the liquid 510. Other points, including the semantic keypoints 514, can also be considered in the determination of the center of mass. For example, the tube bottom 502A, the cap top 504A, the presence of the cap 504, the height H52, the height H54, and other variables may be additional data used by the semantic keypoints identification model 226 to determine the center of mass. [0088] The semantic keypoints 514 may also be used during pick-up and placement tasks performed by the robot 206, such as when calculating how far the sample container 500 should be inserted into a container slot 232 (FIG. 2) before the tube bottom 502A of the sample container 500 hits a surface or obstacle in a sample container carrier. The location of the cap top 504A and/or the height H51 may indicate a minimum height that the gripper 304 must clear before attempting to move along the x-axis and the y-axis.
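As a non-limiting illustration of the center of mass reasoning described above, the sketch below estimates the height of the center of mass from the liquid height H53 and the tube height H52, assuming a cylindrical tube with uniformly distributed wall mass. The mass values, cap dimensions, and liquid density are placeholder assumptions rather than values taken from the disclosure.

```python
import math

def estimate_center_of_mass_height(
    tube_height_mm: float,      # H52: tube bottom to cap bottom
    liquid_height_mm: float,    # H53: tube bottom to liquid surface
    tube_mass_g: float = 8.0,   # assumed empty-tube mass
    cap_mass_g: float = 2.0,    # assumed cap mass; use 0.0 when no cap is present
    cap_height_mm: float = 15.0,
    liquid_density_g_per_mm3: float = 0.001,  # roughly water
    tube_inner_radius_mm: float = 6.0,
) -> float:
    """Rough center-of-mass height above the tube bottom. All default values
    are illustrative assumptions, not measured container properties."""
    liquid_mass_g = (
        math.pi * tube_inner_radius_mm ** 2
        * liquid_height_mm * liquid_density_g_per_mm3
    )
    # Component centers of mass, measured as heights above the tube bottom.
    tube_com = tube_height_mm / 2.0
    liquid_com = liquid_height_mm / 2.0
    cap_com = tube_height_mm + cap_height_mm / 2.0

    total_mass = tube_mass_g + liquid_mass_g + cap_mass_g
    return (
        tube_mass_g * tube_com
        + liquid_mass_g * liquid_com
        + cap_mass_g * cap_com
    ) / total_mass

# Example: a tall tube that is roughly half full.
com_height = estimate_center_of_mass_height(tube_height_mm=90.0, liquid_height_mm=45.0)
```

A higher liquid level raises the estimated center of mass, which is one reason the grasping location may move upward on relatively full containers.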
[0089] In order for the networks, models, and other machine learning algorithms to identify semantic keypoints, grasping locations, and other properties described herein, the semantic keypoints identification model 226 and/or the grasping location algorithm 230 may have to be trained. The semantic keypoints identification model 226 (FIG. 2) may be trained by analyzing or processing training image data representative of a plurality of sample containers. The training image data may be computer-generated (e.g., synthetic) or real-world image data generated by the imaging device 224 or other imaging devices. The plurality of sample container training images may depict sample containers having different geometric characteristics and/or different types and heights of liquids located therein. In addition, the training images of the sample containers may have barcodes and barcode labels in different states, such as properly attached, partially detached, skewed, and torn.
[0090] In some embodiments, training the semantic keypoints identification model 226 may be performed, at least in part, by applying a convolutional neural network (CNN) to the training images or image data representative of the training images. CNNs may accurately classify, regress, and segment training images of the sample containers. In some embodiments, architectures such as U-Net may be used to segment areas of interest in the training images as well as for detection of the semantic keypoints 514 (FIG. 5) and other semantic keypoints described herein. The segments, classifications, and regression may identify the items described with reference to the semantic keypoints described herein.
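The following sketch illustrates, under stated assumptions, what a small encoder-decoder network for regressing per-keypoint heatmaps from RGB-D image data might look like. It is a deliberately simplified stand-in for the U-Net-style architectures mentioned above; the layer sizes, number of keypoint channels, loss function, and optimizer settings are illustrative assumptions and do not describe the actual model 226.

```python
import torch
import torch.nn as nn

class KeypointHeatmapNet(nn.Module):
    """A small encoder-decoder that regresses one heatmap per semantic
    keypoint category from a 4-channel RGB-D image. Layer sizes and the
    number of keypoint channels are illustrative assumptions."""

    def __init__(self, in_channels: int = 4, num_keypoints: int = 12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, num_keypoints, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Keep the output heatmaps in the 0..1 range so they can be compared
        # against normalized Gaussian ground-truth heatmaps.
        return torch.sigmoid(self.decoder(self.encoder(x)))

model = KeypointHeatmapNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

# One illustrative training step on dummy tensors; real training would use
# annotated real-world and/or synthetic RGB-D images.
images = torch.rand(2, 4, 128, 128)            # RGB-D input batch
target_heatmaps = torch.rand(2, 12, 128, 128)  # Gaussian ground truth
loss = criterion(model(images), target_heatmaps)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```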
[0091] A similar training approach may be employed to train the grasping location algorithm 230 to provide optimal grasping location information to the robot controller 204 for the gripper 304 (FIG. 3). The processor 122, executing the robot controller 204, may direct the gripper 304 to contact the sample containers 210 as described herein. Based on the foregoing, the grasping location algorithm 230 may be trained to account for the sample container shapes and characteristics identified and/or located by the semantic keypoints 514 in order to determine optimal grasping points on the sample containers 210.
[0092] The input to the CNN or other network or model may include depth images (e.g., data generated by an RGBD sensor) of a scene that includes real-world and/or synthetic training images. The real-world training images may be captured from the imaging device 224 mounted on the gripper 304 or the arm 320 (FIG. 3). Other imaging devices, such as imaging devices external to the laboratory system 102 (FIG. 1) may be used to capture the real-world training images. The synthetic image data may be generated by the computer 120 or another computer. As described above, the training image data may include depth data indicating distances between objects in the scene, such as the sample containers 210 and the imaging device 224. When the training image data is synthetic, the depth data may be distances between the synthesized objects in the scene and synthetic viewpoints.
[0093] The depth data combined with intrinsic parameters of the imaging device 224 enables the grasping location algorithm 230 to evaluate 3D information of the scene when learning about the grasping locations of the sample containers 210. Some parts of a scene, such as the back portions of one or more of the sample containers 210, may not be visible to the imaging device 224 from a first viewpoint. However, the grasping location algorithm 230 can evaluate the captured training images of the scene from the first viewpoint. The missing information can be captured by manipulating the robot 206 so that the imaging device 224 may observe the scene from a second viewpoint and the grasping location algorithm 230 may evaluate the scene from the second viewpoint. The training images from the two viewpoints may be analyzed to generate training images of entire sample containers.
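A minimal sketch of the back-projection implied by the preceding paragraph is shown below: a pixel with a measured depth is mapped to a 3D point using the intrinsic parameters of the imaging device and then transformed toward the robot frame. The numeric calibration values and the identity extrinsic transform are placeholders introduced for illustration only.

```python
import numpy as np

def pixel_to_camera_frame(u: float, v: float, depth_m: float,
                          fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Back-project an image pixel (u, v) with a measured depth into a 3D
    point in the camera frame using a pinhole model. The intrinsic values
    would come from calibration of the imaging device."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Assumed calibration values, not taken from the disclosure.
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
point_cam = pixel_to_camera_frame(u=352.0, v=198.0, depth_m=0.41,
                                  fx=fx, fy=fy, cx=cx, cy=cy)

# An eye-in-hand (hand-eye) extrinsic transform would map the point from the
# camera frame into the gripper or robot base frame; the identity matrix is
# used here purely as a placeholder.
T_base_from_cam = np.eye(4)
point_base = (T_base_from_cam @ np.append(point_cam, 1.0))[:3]
```

Points back-projected from two viewpoints can be expressed in the same frame in this way, which is one means of combining views of portions that are occluded from a single viewpoint.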
[0094] Reference is made to FIG. 7, which is a synthetic image 700 of a plurality of sample containers 702 (a few labelled). The synthetic image 700 includes an overlaid representative heatmap 706 that may be used to train the grasping location algorithm 230. The synthetic image 700 may be computer-generated and may be referred to as a virtual image and may include synthesized training data. Accordingly, the training images of the sample containers 702 may be computer-generated or they may be real-world images that are placed into the synthetic image 700 using computer algorithms. The synthetic image 700 and variations thereof may be used to train the semantic keypoints identification model 226 and/or the grasping location algorithm 230. In the embodiment of FIG. 7, the heatmap 706 shows neighborhoods 708 indicating possible locations for the gripper 304 to grasp the sample containers 702 as described herein.
[0095] The heatmap 706 may include two neighborhoods 708 (a few labelled) on each of the sample containers 702. The centers of the neighborhoods 708, shown as dark dots, indicate the highest values of Gaussian functions (e.g., closest to normalized 1.0 values) indicative of optimal grasping locations. The lower values of the Gaussian functions are shown as hatched circles surrounding the dark dots of the neighborhoods 708. The Gaussian functions may be calculated based on the properties of the sample containers 702 identified by the semantic keypoints as described herein. A similar heatmap may be generated with respect to a real-world image, such as the real-world image 800 of FIG. 8. [0096] In some embodiments, the sample container height and diameter, the cap height and diameter, the amount of liquid in synthetic sample container images, and other characteristics of each of the sample containers 702 can be varied. A virtual camera or viewpoint may be moved around the scene to virtually capture synthetic color and depth of the images of the sample containers 702. Semantic keypoints may be labeled in three dimensions on the sample containers 702, wherein the semantic keypoints correspond to the semantic keypoints described herein. For example, each of the sample containers 702 may be generated using pre-selected sample container characteristics that are included in or projected into the images using intrinsic and extrinsic parameters of a virtual imaging device. Thus, images of the sample containers 702 may be automatically generated with a variety of labeled synthetic data that can augment an overall dataset and guide the semantic keypoints identification model 226 to effectively learn to identify the objects and features thereof in the images of the sample containers 702.
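The following sketch illustrates one way the synthetic-data generation described in the preceding paragraph could be organized: container parameters are randomized and 3D keypoints are projected into a virtual camera using intrinsic and extrinsic parameters. The parameter ranges, intrinsic matrix, and camera pose are assumptions made for this example only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_synthetic_container() -> dict:
    """Randomize container geometry for one synthetic training example.
    The ranges are illustrative assumptions only."""
    return {
        "tube_height_mm": rng.uniform(75.0, 100.0),
        "tube_radius_mm": rng.uniform(6.0, 8.0),
        "liquid_fill_fraction": rng.uniform(0.0, 0.9),
        "has_cap": bool(rng.integers(0, 2)),
    }

def project_keypoint(p_world: np.ndarray, K: np.ndarray,
                     T_cam_from_world: np.ndarray) -> np.ndarray:
    """Project a 3D keypoint (millimeters, world frame) into the virtual
    camera image using intrinsics K and extrinsics T_cam_from_world."""
    p_cam = (T_cam_from_world @ np.append(p_world, 1.0))[:3]
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]

# Example: project the tube-bottom keypoint of one synthetic container.
container = sample_synthetic_container()
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])   # assumed virtual camera intrinsics
T = np.eye(4)
T[2, 3] = 400.0                   # camera placed 400 mm from the scene origin
uv_tube_bottom = project_keypoint(np.array([0.0, 0.0, 0.0]), K, T)
```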
[0097] The synthetic image 700 may be generated with known distances between the sample containers 702 and the first viewpoint 710. A first sample container 702A is generated as being a distance D71 from the first viewpoint 710. A second sample container 702B is generated as being a distance D72 from the first viewpoint 710. The distance data may be input to a CNN to train the grasping location algorithm 230 and other models and networks. When real-world images are used, the imaging device 224 (FIG. 2) or another device may provide the distances or depths as described herein.
[0098] Additional reference is made to FIG. 8, which illustrates a real-world image 800 of a plurality of sample containers 802 (a few labelled) located in a sample container carrier 804. The training may use real-world image data and/or the synthetic training data of FIG. 7. The sample containers 802 may include a plurality of different types of the sample containers 210 (FIG. 2) and may have been imaged by the imaging device 224 located at a first imaging location. Training image data 222 generated by the imaging device 224 may be analyzed by a CNN or other model, such as the semantic keypoints identification model 226 to annotate the images of the sample containers 802. In some embodiments, the real-world image 800 may be displayed, such as on the display 242 (FIG. 2) where an operator may manually annotate the images of the sample containers 802.
[0099] The real-world image 800 may be captured with the imaging device 224 at a known first viewpoint 810, which may be known or predetermined in the sample handler 130 (FIG. 2). The first viewpoint 810 may orient the imaging device 224 to be a distance D81 from a first sample container 802A and a distance D82 from a second sample container 802B. The distances may be changed by moving the imaging device 224 (FIG. 2) relative to the sample containers. The distance data and image data may be input to a CNN to train the grasping location algorithm 230 and other models and networks. When real-world images are used, the imaging device 224 (FIG. 2) or another device may provide the distance or depth data.
[00100] Training the semantic keypoints identification model 226 may include analyzing both synthetic image data of the synthetic image 700 (FIG. 7) and real-world image data representative of the sample containers 802 in FIG. 8. With the synthetic image data, the height and diameter, cap geometry, the amount of liquid in each sample container, and other characteristics can be varied virtually. A virtual imaging device may be moved around the sample container images to virtually capture color and depth in the images. Annotated synthetic data can be generated automatically in software, such as by the programs 126 (FIG. 1) at scales required for deep learning with less effort than by manual annotation. For example, scene characteristics such as the object geometry and orientation, lighting, environmental conditions, and other features may be precisely controlled and varied as needed for proper training by software that generates the synthetic images.
[00101] In the embodiment of FIG. 8, a sample container 802C may obscure an image of the sample container 802B from the imaging device 224 when the imaging device 224 is positioned at the first viewpoint 810. The robot controller 204 (FIG. 2) may direct the robot 206 (FIG. 2) to move to a second viewpoint (not shown) or a plurality of other viewpoints, which may enable the imaging device 224 to capture images of portions of the sample container 802B that were obscured from the first viewpoint 810. Processing performed by the computer 120 may form complete images of the sample container 802B. In some embodiments, machine learning or other algorithms may generate missing portions of images of the sample containers 802. For example, portions of images of the sample containers 802 that are obscured by the container slots 232 may be generated. The images of the sample containers 802 may be segmented and annotated. In some embodiments, the images may be manually segmented and/or annotated. In other embodiments, the images may be segmented and/or augmented by machine learning algorithms, such as by the semantic keypoints identification model 226.
[00102] Deep learning networks may be employed to regress the images to identify geometric features and other parameters of the sample containers using the semantic keypoints identification model 226. In some embodiments, a CNN running in the grasping location algorithm 230 may be trained to detect grasping locations on the sample container 210. The grasping locations may be at least partially based on geometric features of the gripper 304 (FIG. 3) as determined by the regression. The diagnostic laboratory system 102 may include a plurality of different types of robots that have grippers with different geometric features and grasping capabilities. The grasping location algorithm 230 may be trained to identify different grasping locations on the different sample containers 210 based on the specific geometric features, such as the geometric features identified during regression. [00103] In some embodiments, one or more of the grippers may be parallel-jaw grippers as illustrated in FIG. 4A that include two fingers 400 configured to close simultaneously around the sample containers 210 (FIG. 2) and exert forces in opposing directions to create the grasps. The two fingers 400 may have the four nodes 404 (FIG. 4A) that contact the sample containers 210. In other embodiments, the gripper 304 may have two nodes 404 that contact the sample containers 210. In yet other embodiments, the gripper 304 may have three nodes 418 that contact the sample containers 210 as shown in FIG. 4F. Other embodiments may include robots that operate using suction and may be referred to as suction robots.
[00104] The selection of the grasping locations may determine the success and stability of the grasps and the transfer of the sample containers 210 between different locations. For example, if the grasping locations are not selected in a configuration that results in an antipodal grasp, then the grasp may fail and the sample containers 210 may not be picked up or placed properly. The sample containers 210 are typically symmetric, which may be a variable in determining the grasping locations. For example, if a parallel-jaw gripper is used to grasp sample containers 210 that are cylindrical, there may be an infinite number of potential pairs of grasping locations along the circumference of the sample containers 210 at a single height. When the grasping location algorithm 230 generates ground truth grasping locations, there may be a strategy for selecting the appropriate grasping locations based at least partially on the kinematics of the robot 206 and characteristics of the sample containers 210, such as locations of the barcodes and anomalies.
[00105] One method of selecting the optimal grasping locations is to select a height on a sample container at which the gripper 304 will always grasp a sample container. The method may then detect locations along the circumference of the sample container as candidate points for grasping. From the candidate grasping locations, a pair of grasping locations that are furthest apart from each other but still within a view of the imaging device 224 may be selected as the ground truth grasping location.
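A minimal sketch of the candidate selection strategy described in this paragraph is shown below: candidate grasp points are sampled around the circumference at a fixed grasp height, and the pair of points that is farthest apart while remaining on the camera-facing side of the tube is returned. The simple front-facing visibility test and the function names are illustrative assumptions, not the specific selection rule of the disclosure.

```python
import numpy as np
from itertools import combinations

def select_grasp_pair(center_xy: np.ndarray, radius_mm: float,
                      camera_xy: np.ndarray, num_candidates: int = 36):
    """Sample candidate grasp points on the tube circumference at a fixed
    grasp height and return the visible pair that is farthest apart.
    Visibility is approximated by a front-facing check toward the camera."""
    angles = np.linspace(0.0, 2.0 * np.pi, num_candidates, endpoint=False)
    points = center_xy + radius_mm * np.stack(
        [np.cos(angles), np.sin(angles)], axis=1)

    view_dir = camera_xy - center_xy
    view_dir = view_dir / np.linalg.norm(view_dir)
    normals = (points - center_xy) / radius_mm
    visible = points[normals @ view_dir > 0.0]   # roughly facing the camera

    best_pair, best_dist = None, -1.0
    for a, b in combinations(visible, 2):
        d = np.linalg.norm(a - b)
        if d > best_dist:
            best_pair, best_dist = (a, b), d
    return best_pair

pair = select_grasp_pair(center_xy=np.array([0.0, 0.0]), radius_mm=7.0,
                         camera_xy=np.array([0.0, 300.0]))
```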
[00106] In some embodiments, the grasping location algorithm 230 may generate a heatmap representation of grasping locations overlaid on images of the sample containers 210 in order to regress candidate neighborhoods within which the optimal grasping locations may be located. By considering the probability that an optimal grasping location is located in some neighborhood around the predicted location, the grasping location algorithm 230 may be trained to be more robust to slight perturbations in the optimal grasping locations, which may help the grasping location algorithm 230 to predict more accurate grasping locations. [00107] The ground truth heatmaps for training the grasping location algorithm 230 may be generated using precise grasping locations as well as probability functions centered on the grasping locations, which define the neighborhoods around the precise locations. For example, referring to FIG. 7, two-dimensional Gaussian functions with a set mean and variance may control where the heatmap 706 locates the optimal grasping locations and how far the neighborhoods 708 extend beyond peaks of the Gaussian functions. The entire heatmap 706 may be normalized to be within the range of 0.0 to 1.0, so the Gaussian functions will also be normalized to the same range. The precise locations of the landmarks in the neighborhoods 708 may be assigned a value of 1.0 to indicate that optimal grasping locations are definitely at those locations. The values in the neighborhoods around the points slowly decrease to 0.0 according to the shapes of the Gaussian functions and their variances. The lower the values in the neighborhoods 708, the less likely that optimal grasping locations exist there. The result is the heatmap 706 containing many local functions that robustly describe where the ground truth keypoints or optimal grasping locations are located. The grasping location algorithm 230 may be trained to locate these neighborhoods 708.
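The sketch below shows one way such a normalized ground truth heatmap could be constructed, with a 2D Gaussian neighborhood centered on each precise grasping location; the image size, grasp coordinates, and sigma are placeholder values chosen for illustration.

```python
import numpy as np

def make_ground_truth_heatmap(shape_hw: tuple, grasp_points_xy: list,
                              sigma_px: float = 6.0) -> np.ndarray:
    """Build a ground-truth heatmap: a 2D Gaussian neighborhood is centered
    on each precise grasp location, with the peak at 1.0 and values decaying
    toward 0.0 with distance. The sigma value is an assumed parameter."""
    h, w = shape_hw
    ys, xs = np.mgrid[0:h, 0:w]
    heatmap = np.zeros((h, w), dtype=np.float32)
    for gx, gy in grasp_points_xy:
        g = np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma_px ** 2))
        heatmap = np.maximum(heatmap, g)  # keep every peak normalized to 1.0
    return heatmap

heatmap = make_ground_truth_heatmap((240, 320), [(120.0, 90.0), (200.0, 90.0)])
```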
[00108] After the grasping location algorithm 230 is trained, the gripper 304 may be directed to grasp optimal grasping locations of the sample containers 210, which may include many different types of sample containers 210. For example, the robot 206 may move the gripper 304 to a location of the sample container 210A. The imaging device 224 may then capture an image of the sample container 210A. The semantic keypoints identification model 226 may identify semantic keypoints in the image of the sample container 210A. Based on the semantic keypoints, the grasping location algorithm 230, which has been trained as described herein, may determine an optimal grasping location for the sample container 210A. The optimal grasping location may be used by the robot controller 204 to direct the gripper 304 to grasp the sample container 210A as described herein.
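As a non-limiting illustration of the runtime sequence described in this paragraph, the sketch below strings the steps together. The collaborator objects and their method names are hypothetical placeholders standing in for the imaging device 224, the semantic keypoints identification model 226, the grasping location algorithm 230, and the robot controller 204; they do not reflect an actual API of the system.

```python
def grasp_sample_container(imaging_device, keypoint_model,
                           grasp_algorithm, robot_controller):
    """Illustrative runtime sequence only; all four collaborators and their
    method names are hypothetical placeholders."""
    image_data = imaging_device.capture()                     # RGB-D frame
    keypoints = keypoint_model.locate_keypoints(image_data)   # semantic keypoints
    grasp = grasp_algorithm.determine_grasp(keypoints, image_data)
    robot_controller.move_gripper_to(grasp.approach_pose)
    robot_controller.close_gripper(force=grasp.grip_force)
    return grasp
```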
[00109] In some embodiments, determinations made by the grasping location algorithm 230 may be at least partially based on centers of mass. The synthetic image 700 (FIG. 7) and/or the real-world image 800 (FIG. 8) may be used to train the semantic keypoints identification model 226 to determine or calculate centers of mass. With reference to FIG. 7, the sample containers 702 may be analyzed in the same or similar methods as the analyses of the sample container 500 of FIG. 5. Semantic keypoints on the images of the sample containers 702 may be located. For example, a CNN executed within the semantic keypoints identification model 226 may locate the semantic keypoints as described herein. In some embodiments, a user may place the semantic keypoints on the images of the sample containers 702. In other situations, the images of the sample containers 702 as generated in the synthetic image 700 may be displayed on the display 242 (FIG. 2). A user may use the keyboard 244 or another input device to mark or identify the semantic keypoints.
[00110] Additional reference is made to FIG. 9, which illustrates a heatmap 900 that indicates centers of mass overlaid onto the images of the sample containers 702 in the synthetic image 700. The heatmap 900 may be generated by analyzing the semantic keypoints 514 (FIG. 5) and other data. The heatmap 900 may include a plurality of neighborhoods 902 indicating centers of mass. Centers of the neighborhoods 902, shown as dark dots, indicate the highest values of the Gaussian functions (e.g., closest to normalized 1 .0 values) indicative of the centers of mass. The lower values of the Gaussian functions are shown as hatched circles in the neighborhoods 902 surrounding the dark dots. Ground truth heatmaps for the training may be generated using similar or identical methods as with the optimal grasping points. Once trained, the semantic keypoints identification model 226 may be able to determine centers of mass of the sample containers 702 by analyzing images of the sample containers 702 or other semantic keypoints in images of the sample containers 702.
[00111] In use, the computer 120 may generate grasp parameters or rules that direct the gripper 304 to grasp the sample containers 210 based on geometry of an individual sample container 210. For example, the processor 122, executing the robot controller 204, may direct the gripper 304 to move to specific locations and grasp sample containers 210 on the optimal grasping locations. The grasping parameters may include, but are not limited to: three-dimensional (3D) positions of grasp points or regions on the surfaces of the sample containers 210; a (3D) vector to indicate the direction of approach of the gripper 304 to the sample containers 210; joint angles of the gripper 304 to achieve the optimal grasp locations within the robot constraints; joint trajectories of the gripper 304 to characterize the approach to the sample containers 210; and forces applied by the gripper 304 to reliably hold the sample containers 210. In some embodiments, the programs 126 may determine that certain surface properties may require that certain forces be applied to the sample containers 210 by the gripper 304.
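The grasp parameters enumerated above might be grouped as in the following sketch; the field names, types, and units are assumptions introduced for illustration and are not the disclosure's data format.

```python
from dataclasses import dataclass
from typing import Sequence, Tuple

@dataclass
class GraspParameters:
    """One possible grouping of the grasp parameters listed above; field
    names and units are illustrative assumptions."""
    grasp_points_3d: Sequence[Tuple[float, float, float]]  # contact points on the container surface (m)
    approach_vector: Tuple[float, float, float]            # unit direction of approach to the container
    joint_angles_rad: Sequence[float]                       # gripper/arm joint targets within robot limits
    joint_trajectory: Sequence[Sequence[float]]             # waypoints characterizing the approach
    grip_force_n: float                                     # holding force, possibly material-dependent
```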
[00112] Reference is made to FIG. 6A to provide an example of the operation of the diagnostic laboratory system 102 (FIG. 2). The image of the sample container 600A may be a real-world image captured by the imaging device 224. In some embodiments, a portion of the sample container 600A may be a real-world image captured by the imaging device 224 and the remaining portion of the image of the sample container 600A may be synthetically constructed by software. The semantic keypoints identification model 226 may identify semantic keypoints as shown in FIG. 5. In addition, the semantic keypoints identification model 226 may identify the keypoints 612 to locate and/or identify the anomaly 610 and also locate the center of mass of the sample container 600A as described herein.
[00113] The grasping location algorithm 230 may analyze the data generated by the semantic keypoints identification model 226 to determine the optimal grasping locations. In the embodiment of FIG. 6A, the optimal grasping locations are shown as a first semantic keypoint 650A and a second semantic keypoint 650B, which are locations where the nodes 404 (FIG. 4A) may contact the sample container 600A. The optimal grasping location may be above the barcode label 604 and the anomaly 610 in order to prevent the nodes 404 from damaging the barcode label 604 while avoiding the anomaly 610. The sample container 600A is relatively full, so the optimal grasping location may be high to prevent the sample container 600A from tipping during grasping and transportation.
[00114] Reference is made to FIG. 6B to provide another example of the operation of the diagnostic laboratory system 102 (FIG. 2). The image of the sample container 600B may be captured by the imaging device 224. In some embodiments, a portion of the sample container 600B may be captured by the imaging device 224 and the remaining portion of the image of the sample container 600B may be constructed by software. The semantic keypoints identification model 226 may identify semantic keypoints as shown in FIG. 5. In addition, the semantic keypoints identification model 226 may identify the keypoint 620 to locate the height of the sample 622 and a keypoint 623 to locate the cap 615. The semantic keypoints identification model 226 may also locate the center of mass of the sample container 600B as described herein. The center of mass may be centered in the sample container 600B because the height of the sample 622 is low and the sample container 600B has a cap 615.
[00115] The grasping location algorithm 230 may analyze the data generated by the semantic keypoints identification model 226 to determine the optimal grasping location. In the embodiment of FIG. 6B, the optimal grasping locations are shown as a first semantic keypoint 652A and a second semantic keypoint 652B, which are locations where the nodes 404 (FIG. 4A) may contact the sample container 600B. The optimal grasping location may be above the barcode label 614 and below the cap 615 in order to prevent the nodes 404 from damaging the barcode label 614 while avoiding the cap 615. The sample container 600B is relatively empty, so the optimal grasping location may alternatively be below the barcode label 614, which may avoid tipping the sample container 600B.
[00116] The determination of the optimal grasping location may also include the materials (e.g., glass or plastic) of the sample containers 210. For example, if the materials make a sample container susceptible to breaking, the optimal grasping locations may be located away from a top opening, which may be fragile. That is, if a grasping location is located close to the top opening of a sample container made of a weak material, the grasping may cause the sample container to break. The grasping properties may consider other factors, such as mechanical constraints of the robot 206, which includes joint limits, force/torque considerations, and available workspace for the robot 206. The surface friction of a sample container at the optimal grasping locations may be a factor in determining a force applied by the gripper 304 to the sample container.
[00117] Reference is now made to FIG. 10, which illustrates a flowchart of a method 1000 of grasping a sample container (e.g., sample containers 210) in a diagnostic laboratory system (e.g., laboratory system 102) using a robot (e.g., robot 206). The method 1000 includes, in block 1002, capturing a real-world image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured real-world image includes image data (e.g., image data 222). The image data may be provided by the imaging device 224 affixed to the gripper 304 of the robot 206.
[00118] The method 1000 includes, in block 1004, executing a machine learning model (e.g., semantic keypoints identification model 226) to analyze the image data and locate one or more semantic keypoints (e.g., semantic keypoints 514) on the image of the sample container. The semantic keypoints may identify and/or locate geometric features and other characteristics of the sample container. Training data for the machine learning model may include real-world image data and/or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon. The method 1000 includes, in block 1006, determining a grasping location on the sample container at least partially based on the locations of the one or more semantic keypoints. The grasping locations may be optimal grasping locations determined by the grasping location algorithm 230. The method 1000 includes, in block 1008, directing, via a robot controller (e.g., robot controller 204), a gripper (e.g., gripper 304) of a robot (e.g., robot 206) to grasp the sample container at the grasping location. For example, the robot controller 204 executing on the processor 122 may direct the robot 206 to grasp the sample container 210A at an optimal grasping location.
[00119] Reference is now made to FIG. 11, which illustrates a method 1100 of grasping a sample container (e.g., sample containers 210) in a diagnostic laboratory system (e.g., laboratory system 102) using a robot (e.g., robot 206). The method 1100 includes, in block 1102, capturing a real-world image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data (e.g., image data 222). The method 1100 includes, in block 1104, executing a machine learning model (e.g., semantic keypoints identification model 226) to analyze the image data and locate one or more semantic keypoints (e.g., semantic keypoints 514) on the image of the sample container. Training data for the machine learning model may include real-world image data and/or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon. The method 1100 includes, in block 1106, determining one or more surface properties or geometric properties of the sample container at least partially based on the one or more semantic keypoints. The method 1100 includes, in block 1108, determining a grasping location on the sample container at least partially based on the one or more surface properties or geometric properties. The method 1100 includes, in block 1110, directing, via a robot controller (e.g., robot controller 204), a gripper (e.g., gripper 304) of a robot (e.g., robot 206) to grasp the sample container at the grasping location. While the disclosure is susceptible to various modifications and alternative forms, specific method and apparatus embodiments have been shown by way of example in the drawings and are described in detail herein. It should be understood, however, that the particular methods and apparatus disclosed herein are not intended to limit the disclosure.

NON-LIMITING ILLUSTRATIVE EMBODIMENTS
[00120] Illustrative embodiment 1. A method of grasping a sample container in a diagnostic laboratory system using a robot, the method comprising: capturing an image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining a grasping location on the sample container, via a grasping location algorithm, at least partially based on the locations of the one or more semantic keypoints; and directing, via a robot controller, a gripper of a robot to grasp the sample container at the grasping location.
[00121] Illustrative embodiment 2. The method according to the preceding illustrative embodiment, wherein the executing the machine learning model comprises executing a convolutional neural network to analyze the image data and locate one or more semantic keypoints on the sample container.
[00122] Illustrative embodiment 3. The method according to one of the preceding illustrative embodiments, wherein the image is captured using an imaging device, wherein the image data includes depth data, wherein the depth data includes a distance between the sample container and the imaging device, and wherein the determining the grasping location comprises determining the grasping location at least partially based on the depth data.
[00123] Illustrative embodiment 4. The method according to one of the preceding illustrative embodiments, wherein the image data includes color data, and wherein the determining a grasping location comprises determining the grasping location at least partially based on the color data.
[00124] Illustrative embodiment 5. The method according to one of the preceding illustrative embodiments, wherein the executing the machine learning model comprises executing a deep neural network to analyze the image data and locate the one or more semantic keypoints on the sample container.
[00125] Illustrative embodiment 6. The method according to one of the preceding illustrative embodiments, wherein the determining the grasping location via the grasping location algorithm comprises executing a deep neural network trained to determine the grasping location on the sample container at least partially based on analyzing the locations of the one or more semantic keypoints.
[00126] Illustrative embodiment 7. The method according to one of the preceding illustrative embodiments, further comprising moving the sample container from a first location to a second location using the gripper of the robot in response to the directing. [00127] Illustrative embodiment 8. The method according to one of the preceding illustrative embodiments, wherein: the gripper has at least one degree of freedom; and the determining the grasping location via the grasping location algorithm comprises determining the grasping location for the gripper at least partially based on the at least one degree of freedom.
[00128] Illustrative embodiment 9. The method according to one of the preceding illustrative embodiments, wherein the sample container includes a biological sample.
[00129] Illustrative embodiment 10. The method according to one of the preceding illustrative embodiments, wherein the sample container has one or more geometric properties and wherein the determining the grasping location via the grasping location algorithm comprises determining the grasping location for the gripper based on the one or more geometric properties.
[00130] Illustrative embodiment 11. The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate an indicia; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the indicia.
[00131] Illustrative embodiment 12. The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate a barcode; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the barcode. [00132] Illustrative embodiment 13. The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate an anomaly; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the anomaly.
[00133] Illustrative embodiment 14. The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container locate a cap; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the location of the cap.
[00134] Illustrative embodiment 15. The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the sample container identify a height of a liquid in the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the height of the liquid.
[00135] Illustrative embodiment 16. The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the container identify a height of the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the height of the sample container.
[00136] Illustrative embodiment 17. The method according to one of the preceding illustrative embodiments, wherein: the one or more semantic keypoints on the container identify a center of mass of the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the center of mass.
[00137] Illustrative embodiment 18. An apparatus configured to grasp a container in a diagnostic laboratory system comprising: a robot having a gripper; an imaging device configured to capture images of sample containers within the diagnostic laboratory system; a processor; and a memory coupled to the processor and including computer program instructions that, when executed by the processor, cause the processor to: receive image data of a sample container captured by the imaging device; execute a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determine a grasping location on the sample container at least partially based on the locations of the one or more semantic keypoints; and direct the robot to grasp the sample container at the grasping location using the gripper of the robot.
[00138] Illustrative embodiment 19. The apparatus according to the preceding illustrative embodiment, wherein the imaging device is attached to the robot and wherein the imaging device is movable with the robot.
[00139] Illustrative embodiment 20. A method of grasping a sample container in a diagnostic laboratory system using a robot, the method comprising: capturing an image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data and synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining one or more surface properties or geometric properties at least partially based on the one or more semantic keypoints; determining a grasping location on the sample container, via a grasping location algorithm, at least partially based on the one or more surface properties or geometric properties; and directing, via a robot controller, a gripper of a robot to grasp the sample container at the grasping location.
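[00140] The illustrative embodiments above describe selecting a grasping location from semantic keypoints that locate features such as a cap, a barcode or other indicia, a liquid height, a container height, and a center of mass. The following is a minimal, non-limiting sketch (in Python) of one way such a grasping location algorithm could be organized; the keypoint names, the fixed gripper width, and the interval-avoidance heuristic are illustrative assumptions and are not the specific grasping location algorithm of this disclosure.

```python
# Illustrative sketch only: the keypoint fields, the fixed gripper width, and the
# simple exclusion-interval heuristic are assumptions for explanation, not the
# grasping location algorithm of this disclosure.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class SemanticKeypoints:
    """Keypoint heights along the container axis, in millimeters from the tube bottom."""
    tube_top: float
    tube_bottom: float
    cap_bottom: Optional[float] = None                   # lower edge of the cap, if capped
    liquid_level: Optional[float] = None                 # height of the liquid surface
    barcode_span: Optional[Tuple[float, float]] = None   # (low, high) extent of the label


def grasp_height(kp: SemanticKeypoints, gripper_width_mm: float = 10.0) -> float:
    """Pick a grasp height that stays below the cap, avoids the barcode, and sits
    near the center of mass implied by the liquid column."""
    low, high = kp.tube_bottom, kp.tube_top
    if kp.cap_bottom is not None:
        high = min(high, kp.cap_bottom)                   # do not grasp the cap
    # Target roughly the middle of the liquid column, clamped to the tube body.
    target = (kp.liquid_level if kp.liquid_level is not None else high) / 2.0
    target = max(low + gripper_width_mm, min(target, high - gripper_width_mm))
    # If the target overlaps the barcode, shift it outside the label extent.
    if kp.barcode_span is not None:
        b_low, b_high = kp.barcode_span
        if b_low <= target <= b_high:
            below, above = b_low - gripper_width_mm, b_high + gripper_width_mm
            target = below if below > low else min(above, high - gripper_width_mm)
    return target


# Example: a 100 mm tube with a cap, a half-full liquid column, and a label.
kp = SemanticKeypoints(tube_top=100.0, tube_bottom=0.0, cap_bottom=85.0,
                       liquid_level=50.0, barcode_span=(20.0, 60.0))
print(grasp_height(kp))  # grasps below the label and above the tube bottom
```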

Claims

WHAT IS CLAIMED IS:
1. A method of grasping a sample container in a diagnostic laboratory system using a robot, the method comprising: capturing an image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining a grasping location on the sample container, via a grasping location algorithm, at least partially based on the locations of the one or more semantic keypoints; and directing, via a robot controller, a gripper of a robot to grasp the sample container at the grasping location.
2. The method of claim 1, wherein the executing the machine learning model comprises executing a convolutional neural network to analyze the image data and locate one or more semantic keypoints on the sample container.
3. The method of claim 1, wherein the image is captured using an imaging device, wherein the image data includes depth data, wherein the depth data includes a distance between the sample container and the imaging device, and wherein the determining the grasping location comprises determining the grasping location at least partially based on the depth data.
4. The method of claim 1, wherein the image data includes color data, and wherein the determining a grasping location comprises determining the grasping location at least partially based on the color data.
5. The method of claim 1, wherein the executing the machine learning model comprises executing a deep neural network to analyze the image data and locate the one or more semantic keypoints on the sample container.
6. The method of claim 1, wherein the determining the grasping location via the grasping location algorithm comprises executing a deep neural network trained to determine the grasping location on the sample container at least partially based on analyzing the locations of the one or more semantic keypoints.
7. The method of claim 1, further comprising moving the sample container from a first location to a second location using the gripper of the robot in response to the directing.
8. The method of claim 7, wherein: the gripper has at least one degree of freedom; and the determining the grasping location via the grasping location algorithm comprises determining the grasping location for the gripper at least partially based on the at least one degree of freedom.
9. The method of claim 1, wherein the sample container includes a biological sample.
10. The method of claim 1, wherein the sample container has one or more geometric properties and wherein the determining the grasping location via the grasping location algorithm comprises determining the grasping location for the gripper based on the one or more geometric properties.
11. The method of claim 1, wherein: the one or more semantic keypoints on the sample container locate an indicia; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the indicia.
12. The method of claim 1, wherein: the one or more semantic keypoints on the sample container locate a barcode; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the barcode.
13. The method of claim 1, wherein: the one or more semantic keypoints on the sample container locate an anomaly; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location such that the gripper avoids contacting the anomaly.
14. The method of claim 1, wherein: the one or more semantic keypoints on the sample container locate a cap; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the location of the cap.
15. The method of claim 1, wherein: the one or more semantic keypoints on the sample container identify a height of a liquid in the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the height of the liquid.
16. The method of claim 1, wherein: the one or more semantic keypoints on the sample container identify a height of the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the height of the sample container.
17. The method of claim 1, wherein: the one or more semantic keypoints on the sample container identify a center of mass of the sample container; and the determining the grasping location on the sample container via the grasping location algorithm comprises determining the grasping location at least partially based on the center of mass.
18. An apparatus configured to grasp a container in a diagnostic laboratory system comprising: a robot having a gripper; an imaging device configured to capture images of sample containers within the diagnostic laboratory system; a processor; and a memory coupled to the processor and including computer program instructions that, when executed by the processor, cause the processor to: receive image data of a sample container captured by the imaging device; execute a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data or synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determine a grasping location on the sample container at least partially based on the locations of the one or more semantic keypoints; and direct the robot to grasp the sample container at the grasping location using the gripper of the robot.
19. The apparatus of claim 18, wherein the imaging device is attached to the robot and wherein the imaging device is movable with the robot.
20. A method of grasping a sample container in a diagnostic laboratory system using a robot, the method comprising: capturing an image including a sample container configured to be used in a diagnostic laboratory system, wherein the captured image includes image data; executing a machine learning model to analyze the image data and locate one or more semantic keypoints on the image of the sample container, wherein training data for the machine learning model includes real-world image data and synthetic image data of sample containers having different geometries and each having semantic keypoints annotated thereon; determining one or more surface properties or geometric properties at least partially based on the one or more semantic keypoints; determining a grasping location on the sample container, via a grasping location algorithm, at least partially based on the one or more surface properties or geometric properties; and directing, via a robot controller, a gripper of a robot to grasp the sample container at the grasping location.
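The independent claims recite executing a machine learning model, for example a convolutional neural network (claim 2) or a deep neural network (claim 5), to locate semantic keypoints in the captured image data. One common way such a keypoint locator is realized is heatmap regression, in which the network predicts one heatmap per semantic keypoint and each keypoint location is read off as the maximum of its heatmap. The PyTorch sketch below is an assumed, simplified illustration of that general technique; the layer sizes, the keypoint count, and the decoding step are illustrative and are not taken from this application.

```python
# Assumed illustration of heatmap-regression keypoint detection for sample
# container images (e.g., cap, liquid level, tube top/bottom, barcode corners);
# not the network architecture described in this application.
import torch
import torch.nn as nn


class KeypointNet(nn.Module):
    def __init__(self, num_keypoints: int = 6):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # One heatmap per semantic keypoint, at 1/4 of the input resolution.
        self.head = nn.Conv2d(64, num_keypoints, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))


def heatmaps_to_pixels(heatmaps: torch.Tensor, stride: int = 4) -> torch.Tensor:
    """Convert (N, K, H, W) heatmaps to (N, K, 2) pixel coordinates (x, y)."""
    n, k, h, w = heatmaps.shape
    flat = heatmaps.view(n, k, -1).argmax(dim=-1)
    ys = torch.div(flat, w, rounding_mode="floor")
    xs = flat % w
    return torch.stack((xs, ys), dim=-1) * stride


# Example: one RGB image of a sample container, six predicted semantic keypoints.
model = KeypointNet(num_keypoints=6)
image = torch.rand(1, 3, 256, 256)           # stand-in for captured image data
coords = heatmaps_to_pixels(model(image))    # (1, 6, 2) keypoint pixel locations
print(coords.shape)
```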
PCT/US2025/015351 2024-02-13 2025-02-11 Systems and methods for identifying grasping locations on sample containers in diagnostic laboratory systems Pending WO2025174726A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202463552865P 2024-02-13 2024-02-13
US63/552,865 2024-02-13

Publications (1)

Publication Number Publication Date
WO2025174726A1 (en) 2025-08-21

Family

ID=96773943

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2025/015351 Pending WO2025174726A1 (en) 2024-02-13 2025-02-11 Systems and methods for identifying grasping locations on sample containers in diagnostic laboratory systems

Country Status (1)

Country Link
WO (1) WO2025174726A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200338722A1 (en) * 2017-06-28 2020-10-29 Google Llc Machine learning methods and apparatus for semantic robotic grasping
US20230070495A1 (en) * 2021-09-07 2023-03-09 Mujin, Inc. Robotic gripper assemblies for openable object(s) and methods for picking objects
WO2024015534A1 (en) * 2022-07-14 2024-01-18 Siemens Healthcare Diagnostics Inc. Devices and methods for training sample characterization algorithms in diagnostic laboratory systems

Similar Documents

Publication Publication Date Title
JP7657879B2 (en) Robotic system for performing pattern recognition based inspection of pharmaceutical containers
CA2950296C (en) Drawer vision system
EP2776844B1 (en) Specimen container detection
JP6282060B2 (en) Specimen automation system
WO2025048995A1 (en) Systems and methods for grasping containers in diagnostic laboratory systems
EP4071485A1 (en) Sample analysis system and method, cell image analyzer, and storage medium
US20240230694A9 (en) Methods and apparatus adapted to identify 3d center location of a specimen container using a single image capture device
CN112557676A (en) Sample information providing method
US20250321241A1 (en) Methods and apparatus for determining a viewpoint for inspecting a sample within a sample container
US20250372236A1 (en) Devices and methods for training sample characterization algorithms in diagnostic laboratory systems
WO2025174726A1 (en) Systems and methods for identifying grasping locations on sample containers in diagnostic laboratory systems
CN105408917A (en) System and method for facilitating manual sorting of objects
US20250271454A1 (en) Sample handlers of diagnostic laboratory analyzers and methods of use
US20240385204A1 (en) Apparatus and methods of monitoring items in diagnostic laboratory systems
JP6522608B2 (en) Sample test automation system and sample check module
US20240230695A9 (en) Apparatus and methods of aligning components of diagnostic laboratory systems
CN119780402A (en) Sample analysis system and sample analysis method, electronic device and storage medium
WO2025178816A1 (en) Methods and apparatus for sample tube tilt orientation determination and correction
US12287320B2 (en) Methods and apparatus for hashing and retrieval of training images used in HILN determinations of specimens in automated diagnostic analysis systems
Logan et al. Autonomous Integration of Bench-Top Wet Lab Equipment
HK1235856B (en) Drawer vision system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 25755475

Country of ref document: EP

Kind code of ref document: A1