
EP4275174B1 - Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system - Google Patents

Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system

Info

Publication number
EP4275174B1
EP4275174B1 (application EP22736980.8A)
Authority
EP
European Patent Office
Prior art keywords
sensor
hypotheses
carton
radiation
cartons
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP22736980.8A
Other languages
German (de)
English (en)
Other versions
EP4275174A4 (fr)
EP4275174A1 (fr)
Inventor
G. Neil Haven
Michael Kallay
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liberty Robotics Inc
Original Assignee
Liberty Robotics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US17/141,593 (US11436753B2)
Application filed by Liberty Robotics Inc
Publication of EP4275174A1
Publication of EP4275174A4
Application granted
Publication of EP4275174B1

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B65: CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65G: TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
    • B65G 59/00: De-stacking of articles
    • B65G 59/02: De-stacking from the top of the stack
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1679: Programme controls characterised by the tasks executed
    • B25J 9/1687: Assembly, peg and hole, palletising, straight line, weaving pattern movement
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00: Programme-controlled manipulators
    • B25J 9/16: Programme controls
    • B25J 9/1694: Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697: Vision controlled systems
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/77: Determining position or orientation of objects or cameras using statistical methods
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B 11/25: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05B: CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B 2219/00: Program-control systems
    • G05B 2219/30: Nc systems
    • G05B 2219/40: Robotics, robotics mapping to robotics vision
    • G05B 2219/40006: Placing, palletize, un palletize, paper roll placing, box stacking
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10028: Range image; Depth image; 3D point clouds
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection

Definitions

  • At least one embodiment of the present invention generally relates to machine vision-based methods and systems to facilitate the unloading of a pile or stack of cartons or boxes in a material handling system.
  • The document EP2751748B1 is known and relates to packaging that is digitally watermarked over most of its extent for high-throughput identification at retail checkouts, wherein an image captured by cameras is processed to derive several different perspective views so as to minimize the need to manually reposition items for identification.
  • Objects are typically boxes of varying volumes and weights that must be placed into a receptacle or conveyor line based on a set of rules, such as the size of the object and the size of the destination tote or conveyance. Additional rules may be inferred from material printed on the box, or from the kind of box. Box types vary widely, including partial openings in the box top, the shape of the box top, and whether the box is plain cardboard or printed. In material handling there are several processes for moving these kinds of objects.
  • In a manual single box pick process, operators are presented with an assembly of box-like objects and select an individual object to move from a plane or other conveyance to a tote or other conveyance for further processing.
  • In automated versions of these processes, the box handling is typically performed by a robot.
  • A decanting process builds on the single box pick process. Again, manual labor is typically used to "decant" objects from a plane of objects. These objects are a set of box-like objects, which may or may not be adjacent to each other, that must be moved onto a conveyance or tote, either singly or as a group.
  • The pose of an object is the position and orientation of the object in space relative to some reference position and orientation.
  • The location of the object can be expressed in terms of X, Y, and Z.
  • The orientation of an object can be expressed in terms of Euler angles describing its rotation about the x-axis (hereinafter RX), rotation about the y-axis (hereinafter RY), and then rotation about the z-axis (hereinafter RZ) relative to a starting orientation.
  • Alternatively, position coordinates might be expressed in spherical coordinates rather than in Cartesian coordinates of three mutually perpendicular axes; rotational coordinates may be expressed in terms of quaternions rather than Euler angles; 4x4 homogeneous matrices may be used to combine position and rotation representations; etc.
  • In any case, six variables X, Y, Z, RX, RY, and RZ suffice to describe the pose of a rigid object in 3D space (see the sketch below).
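As a concrete illustration of these conventions, the following sketch builds a 4x4 homogeneous matrix from the six pose variables. The x-then-y-then-z rotation order and the function name are assumptions chosen for illustration, not a convention prescribed by the patent.

```python
import numpy as np

def pose_to_matrix(x, y, z, rx, ry, rz):
    """Build a 4x4 homogeneous transform from a pose (X, Y, Z, RX, RY, RZ).

    Angles are in radians, applied as rotations about the x-, y-, and
    z-axes in that order (one of several possible Euler conventions).
    """
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # rotate about x first, then y, then z
    T[:3, 3] = [x, y, z]       # translation component
    return T
```

The same six numbers can thus be carried either as (X, Y, Z, RX, RY, RZ) or as a single matrix, whichever is more convenient for composing transforms.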
  • Automated single box pick and decanting have some clear issues that humans can easily overcome. For instance, a human might easily recognize that a box is tipped, rotated, or otherwise not in a preset location on the plane of boxes. Additionally, a human can easily see how many box objects can be moved at a time. Humans can also quickly understand when one object overlaps another and still move the objects.
  • The automation may include the introduction of robotic systems, sensors, conveyors, or other automated techniques to improve object processing. No prior art handles these kinds of systems without a large cost in development and training, or without significant maintenance when new products are added.
  • Feature extraction attempts to locate and extract features from the image. Segmentation processes use the extracted features to separate the foreground from the background to isolate the portion of an image with desirable data. The processes of feature extraction and segmentation may iterate to extract a final set of features for use in the detection phase. In the final detection phase, a classification, object recognition or measurement is given based on the features in the foreground.
  • The system must discover features during the training phase and look for things that are important to calculate about the images. Some of these features might include edges, color, depth, gradients, or some other feature that is known only to the training process. Humans gather this information by parallax. An additional drawback of this method is that the more types of input the ML receives, the longer the training phase.
  • Too little training of an ML model means that the system does not have sufficient data for a trained set. Too much, and the dataset becomes oversized and degrades performance. A balance between the two is required, and depends on a holdout dataset to validate that the data are sufficient.
  • An additional issue with ML training is accounting for new objects to be added into the process. Any time a new object is introduced, or an existing object is changed, the system must be retrained for the new data.
  • An object of at least one embodiment of the present invention is to provide a machine vision-based method and system which overcome the above-noted shortcomings of Classical Image Processing and/or ML to provide a faster, more reliable process and system.
  • A machine vision-based method to facilitate the unloading of a pile of cartons within a work cell in an automated carton handling system includes the step of providing at least one 3-D or depth sensor having a field of view at the work cell.
  • The at least one sensor has a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data.
  • The 3-D sensor data includes a plurality of pixels. For each possible pixel location and each possible carton orientation, the method includes generating a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation to obtain a plurality of hypotheses. The method further includes ranking the plurality of hypotheses.
  • The step of ranking includes calculating a surprisal for each of the hypotheses to obtain a plurality of surprisals.
  • The step of ranking is based on the surprisals of the hypotheses. At least one carton of interest is unloaded from the pile based on the ranked hypotheses. (A sketch of this hypothesize-and-rank loop follows.)
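To make the hypothesize-and-rank loop concrete, here is a minimal sketch under stated assumptions: `pixel_probability` is a hypothetical stand-in for the patent's probability estimate, and surprisal is taken in its standard information-theoretic form, -log2 p. This illustrates the idea, not the patented implementation.

```python
import numpy as np

def surprisal(p, eps=1e-12):
    """Shannon surprisal (self-information) of a probability, in bits."""
    return -np.log2(max(p, eps))

def rank_hypotheses(pixel_probability, pixel_locations, orientations):
    """Enumerate (pixel location, carton orientation) hypotheses and rank
    them by the surprisal of their estimated probability.

    pixel_probability(loc, rot) -> probability that a carton of known
    structure appears at `loc` with orientation `rot` (assumed scoring
    function; the patent derives its estimate from observed sensor data).
    """
    scored = []
    for loc in pixel_locations:
        for rot in orientations:
            scored.append((surprisal(pixel_probability(loc, rot)), loc, rot))
    scored.sort(key=lambda h: h[0])  # rank by surprisal
    return scored
```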
  • The method may further include utilizing an approximation algorithm to unload a plurality of cartons at a time from the pile in a minimum number of picks.
  • The work cell may be a robot work cell.
  • The sensor may be a hybrid 2-D/3-D sensor.
  • The sensor may include a pattern emitter for projecting a known pattern of radiation and a detector for detecting the known pattern of radiation reflected from a surface of the carton.
  • The pattern emitter may emit a non-visible pattern of radiation and the detector may detect the reflected non-visible pattern of radiation.
  • The sensor may be a volumetric sensor capable of capturing thousands of individual points in space.
  • At least one of the hypotheses may be based on print on at least one of the cartons.
  • Also provided is a machine vision-based system to facilitate the unloading of a pile of cartons within a work cell in an automated carton handling system.
  • The system includes at least one 3-D or depth sensor having a field of view at the work cell.
  • The at least one sensor has a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data.
  • The 3-D sensor data includes a plurality of pixels.
  • The system also includes at least one processor to process the 3-D sensor data and, for each possible pixel location and each possible carton orientation, generate a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation to obtain a plurality of hypotheses.
  • The at least one processor ranks the plurality of hypotheses.
  • Ranking includes calculating a surprisal for each of the hypotheses to obtain a plurality of surprisals. Ranking is based on the surprisals of the hypotheses.
  • The system further includes a vision-guided robot for unloading at least one carton of interest from the pile based on the ranked hypotheses.
  • The at least one processor may utilize an approximation algorithm so that the vision-guided robot unloads a plurality of cartons at a time from the pile in a minimum number of picks.
  • One or more 3-D or depth sensors 32 (Figure 16) of at least one embodiment of the invention measure distance via massively parallel triangulation using a projected pattern (a "multi-point disparity" method).
  • The specific types of active depth sensors which are preferred are called multipoint disparity depth sensors.
  • Multipoint refers to the laser projector which projects thousands of individual beams (aka pencils) onto a scene. Each beam intersects the scene at a point.
  • Disparity refers to the method used to calculate the distance from the sensor to objects in the scene. Specifically, "disparity" refers to the way a laser beam's intersection with a scene shifts when the laser beam projector's distance from the scene changes.
  • Depth refers to the fact that these sensors are able to calculate the X, Y and Z coordinates of the intersection of each laser beam from the laser beam projector with a scene.
  • Passive Depth Sensors determine the distance to objects in a scene without affecting the scene in any way; they are pure receivers.
  • Active Depth Sensors determine the distance to objects in a scene by projecting energy onto the scene and then analyzing the interactions of the projected energy with the scene. Some active sensors project a structured light pattern onto the scene and analyze how the pattern deforms; others emit light pulses and analyze how long the pulses take to return; and so on. Active depth sensors are both emitters and receivers. (The triangulation sketch below makes the disparity-to-depth relation concrete.)
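The disparity-to-depth relation these sensors rely on is the textbook triangulation formula Z = f * B / d, where f is the focal length in pixels, B the projector-to-camera baseline, and d the observed disparity. This is standard stereo geometry, not quoted from the patent; the names below are illustrative.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Triangulate depth from the shift of a projected point: Z = f * B / d.

    disparity_px: shift of a projected dot between the reference pattern
    and the observed image, in pixels; focal_length_px: focal length in
    pixels; baseline_m: projector-to-camera baseline in meters.
    """
    if disparity_px <= 0:
        return float("inf")  # no measurable shift: point effectively at infinity
    return focal_length_px * baseline_m / disparity_px
```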
  • Each sensor 32 is preferably based on active monocular, multipoint disparity technology and is referred to as a "multipoint disparity" sensor herein.
  • This terminology, though serviceable, is not standard.
  • A preferred monocular (i.e., single infrared camera) multipoint disparity sensor is disclosed in U.S. Patent No. 8,493,496.
  • A binocular multipoint disparity sensor, which uses two infrared cameras to determine depth information from a scene, is also preferred.
  • Multiple volumetric sensors 32 may be placed in key locations around and above the piles or stacks of cartons 25 (Figure 16). Each of these sensors 32 typically captures hundreds of thousands of individual points in space. Each of these points has a Cartesian position in space. Before measurement, each of these sensors 32 is registered into a common coordinate system. This gives the present system the ability to correlate a location on the image of a sensor with a real-world position. When an image is captured from each sensor 32, the pixel information, along with the depth information, is converted by a computer into a collection of points in space, called a "point cloud" (see the sketch below).
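A minimal sketch of that conversion, assuming a pinhole model with intrinsics (fx, fy, cx, cy) and a 4x4 registration transform obtained when the sensor was registered into the common coordinate system; all names here are illustrative assumptions.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy, T_sensor_to_world):
    """Convert an HxW depth image (meters) into a point cloud expressed in
    the common world frame shared by all registered sensors."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx            # back-project through the pinhole model
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)
    pts = pts[pts[:, 2] > 0]             # drop pixels with no depth return
    return (pts @ T_sensor_to_world.T)[:, :3]
```

Concatenating the clouds from each registered sensor then yields a single point cloud of the pile in one shared coordinate system.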
  • The Cumulative Distribution Function (CDF), which is the integral of a histogram, gives one the ability to observe the probability of one pixel: one looks at a histogram of one or more images and assigns a probability to one particular pixel.
  • The distribution of the random variable G is found by observation.
  • The algorithm is good enough that observation of a single image is sufficient, but by continuously updating the CDF as we go, the performance of the algorithm improves.
  • At least one embodiment of the present invention eliminates the parameters in favor of a maximum computation on the entropy image: aside from the structuring model of Figure 8, there are no parameters. (A sketch of the CDF-to-entropy-image pipeline follows.)
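The following sketch shows one plausible reading of that pipeline: build an empirical CDF from image histograms, assign each pixel the probability mass of its bin, and map it to surprisal, forming an "entropy image" whose maximum can then be located. The binning and probability estimate are assumptions; the patent's exact construction may differ.

```python
import numpy as np

def cdf_from_images(images, bins=256):
    """Empirical CDF of pixel values, observed over one or more images."""
    values = np.concatenate([im.ravel() for im in images])
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist) / hist.sum()   # integral of the histogram
    return edges, cdf

def entropy_image(image, edges, cdf):
    """Map each pixel to the surprisal (in bits) of its value's bin,
    recovering per-bin probability mass as differences of the CDF."""
    mass = np.diff(np.concatenate(([0.0], cdf)))
    idx = np.clip(np.searchsorted(edges, image) - 1, 0, len(mass) - 1)
    return -np.log2(np.maximum(mass[idx], 1e-12))

# The parameter-free step in the text then reduces to a maximum computation:
# peak = np.unravel_index(np.argmax(entropy_image(img, edges, cdf)), img.shape)
```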
  • The algorithm for Pallet Decomposition will ideally partition a layer of boxes such that the number of partitions is minimal: one wants to empty the layer of boxes, by picking multiple boxes at a time, in the minimum number of picks.
  • A legal pick does not overlap existing boxes.
  • The pick tool must not partially overlap any box.
  • Illegal picks are those in which the tool partially overlaps boxes. (A greedy sketch of this decomposition follows.)
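Partitioning a layer into the fewest multi-box picks is a set-cover-like problem, which is NP-hard in general, so a greedy approximation is a natural fit. The sketch below is an assumed illustration of such a strategy, not the patent's actual algorithm; candidate generation and the legality test are application-specific stand-ins.

```python
def decompose_layer(boxes, candidate_picks, is_legal):
    """Greedily empty a layer of boxes in few multi-box picks.

    boxes: ids of boxes remaining in the layer;
    candidate_picks: frozensets of box ids the tool could grab together;
    is_legal(pick, remaining): True if the tool footprint for `pick`
    does not partially overlap any remaining box (hypothetical check).
    """
    remaining, plan = set(boxes), []
    while remaining:
        legal = [p for p in candidate_picks
                 if p <= remaining and is_legal(p, remaining)]
        if not legal:
            raise RuntimeError("no legal pick can empty the rest of the layer")
        best = max(legal, key=len)   # greedy: most boxes per pick
        plan.append(best)
        remaining -= best
    return plan
```

Singleton picks should normally appear among the candidates, so the loop terminates as long as every box can at least be picked alone.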
  • The system includes vision-guided robots 21 and one or more cameras 32 having a field of view 30.
  • The cameras 32 and the robots 21 may be mounted on support beams of a support frame structure of the system 10 or may rest on a base.
  • One of the cameras 32 may be mounted on one of the robots 21 to move therewith.
  • The vision-guided robots 21 have the ability to pick up any part within a specified range of allowable cartons using multiple end-of-arm tooling or grippers.
  • The robots pick up the cartons and orient them at a conveyor or other apparatus.
  • Each robot 21 precisely positions self-supporting cartons on a support or stage.
  • The robots 21 are preferably six-axis robots. Each robot 21 is vision-guided to identify, pick, orient, and present the cartons so that they are self-supporting on the stage.
  • The grippers 17 accommodate multiple part families.
  • Benefits of Vision-based Robot Automation include but are not limited to the following:
  • A master control station or system controller determines locations and orientations of the cartons or boxes in the pile or stack of cartons using any suitable machine vision system having at least one camera (i.e., camera 32). Any one or more of various arrangements of vision systems may be used for providing visual information from image processors (Figure 16) to the master controller.
  • The vision system includes two three-dimensional cameras 32 that provide light over fields of view 30. In various embodiments, the light may be infrared.
  • Multiple cameras such as the cameras 32 can be situated at fixed locations on the frame structure at the station, or may be mounted on the arms of the robot 21. Two cameras 32 may be spaced apart from one another on the frame structure.
  • The cameras 32 are operatively connected to the master controller via their respective image processors.
  • The master controller also controls the robots of the system through their respective robot controllers. Based on the information received from the cameras 32, the master controller then provides control signals to the robot controllers that actuate the robotic arm(s) of the one or more robots 21 used in the method and system.
  • The master controller can include a processor and a memory on which is recorded instructions or code for communicating with the robot controllers, the vision systems, the robotic system sensor(s), etc.
  • The master controller is configured to execute the instructions from its memory, via its processor.
  • The master controller can be a host machine or a distributed system, e.g., a computer such as a digital computer or microcomputer, acting as a control module having a processor and, as the memory, tangible, non-transitory computer-readable memory such as read-only memory (ROM) or flash memory.
  • The master controller can also have random access memory (RAM), electrically-erasable programmable read-only memory (EEPROM), a high-speed clock, analog-to-digital (A/D) and/or digital-to-analog (D/A) circuitry, and any required input/output circuitry and associated devices, as well as any required signal conditioning and/or signal buffering circuitry. Therefore, the master controller can include all software, hardware, memory, algorithms, connections, sensors, etc., necessary to monitor and control the vision subsystem, the robotic subsystem, etc. As such, a control method can be embodied as software or firmware associated with the master controller. It is to be appreciated that the master controller can also include any device capable of analyzing data from various sensors, comparing data, and making the necessary decisions required to control and monitor the vision subsystem, the robotic subsystem, sensors, etc.
  • An end effector on the robot arm may include a series of grippers supported to pick up the cartons.
  • The robotic arm is then actuated by its controller to pick up the cartons with the particular gripper, positioning the gripper 17 relative to the cartons using the determined location and orientation from the visual position and orientation data of the particular vision subsystem, including its camera and image processor, as in the sketch below.
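For illustration, positioning the gripper from the vision data reduces to composing the measured carton pose with a tool-specific grasp offset; both matrix names below are hypothetical labels for quantities the text implies.

```python
import numpy as np

def gripper_target(T_carton_in_world, T_grasp_in_carton):
    """Compose the vision-measured carton pose with a fixed grasp offset
    (an assumed, tool-specific calibration constant) to obtain the 4x4
    target pose commanded to the robot controller."""
    return T_carton_in_world @ T_grasp_in_carton
```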

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)

Claims (16)

  1. A machine vision-based method to facilitate the unloading of a pile of cartons from within a work cell in an automated carton handling system, the method comprising the steps of:
    using at least one 3-D or depth sensor having a field of view covering the work cell, the at least one sensor having a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data, the 3-D sensor data including a plurality of pixels;
    generating, for each possible pixel location and each possible carton orientation, a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation to obtain a plurality of hypotheses;
    ranking the plurality of hypotheses, the ranking step including calculating a surprisal (Shannon entropy) for each of the hypotheses to obtain a plurality of surprisals, and the ranking step being based on the surprisals of the hypotheses; and
    unloading at least one carton of interest from the pile based on the ranked hypotheses.
  2. The method of claim 1, further comprising utilizing an approximation algorithm to unload a plurality of cartons at a time from the pile in a minimum number of picks.
  3. The method of claim 1, wherein the work cell is a robot work cell.
  4. The method of claim 1, wherein the at least one sensor is a hybrid 2-D/3-D sensor.
  5. The method of claim 1, wherein the at least one sensor includes a pattern emitter for projecting a known pattern of radiation and a detector for detecting the known pattern of radiation reflected from the surface of the carton.
  6. The method of claim 5, wherein the pattern emitter emits a non-visible pattern of radiation and the detector detects the reflected non-visible pattern of radiation.
  7. The method of claim 1, wherein the at least one sensor is at least one volumetric sensor capable of capturing thousands of individual points in space.
  8. The method of claim 1, wherein at least one of the hypotheses is based on the presence of print on at least one of the cartons.
  9. A machine vision-based system to facilitate the unloading of a pile of cartons from within a work cell in an automated carton handling system, the system comprising:
    at least one 3-D or depth sensor having a field of view covering the work cell, the at least one sensor having a set of radiation sensing elements which detect reflected, projected radiation to obtain 3-D sensor data, the 3-D sensor data including a plurality of pixels;
    at least one processor to process the 3-D sensor data and, for each possible pixel location and each possible carton orientation, to generate a hypothesis that a carton with a known structure appears at that pixel location with that carton orientation to obtain a plurality of hypotheses;
    the at least one processor ranking the plurality of hypotheses, wherein the ranking includes calculating a surprisal (Shannon entropy) for each of the hypotheses to obtain a plurality of surprisals and wherein the ranking is based on the surprisals of the hypotheses; and
    a vision-guided robot for unloading at least one carton of interest from the pile based on the ranked hypotheses.
  10. The system of claim 9, wherein the at least one processor utilizes an approximation algorithm so that the vision-guided robot unloads a plurality of cartons at a time from the pile in a minimum number of picks.
  11. The system of claim 9, wherein the work cell is a robot work cell.
  12. The system of claim 9, wherein the at least one sensor is a hybrid 2-D/3-D sensor.
  13. The system of claim 9, wherein the at least one sensor includes a pattern emitter for projecting a known pattern of radiation and a detector for detecting the known pattern of radiation reflected from the surface of the carton.
  14. The system of claim 13, wherein the pattern emitter emits a non-visible pattern of radiation and the detector detects the reflected non-visible pattern of radiation.
  15. The system of claim 9, wherein the at least one sensor is a volumetric sensor capable of capturing thousands of individual points in space.
  16. The system of claim 9, wherein at least one of the hypotheses is based on the presence of print on at least one of the cartons.
EP22736980.8A 2021-01-05 2022-01-04 Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system Active EP4275174B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US17/141,593 US11436753B2 (en) 2018-10-30 2021-01-05 Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
PCT/US2022/011081 WO2022150280A1 (fr) 2021-01-05 2022-01-04 Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system

Publications (3)

Publication Number Publication Date
EP4275174A1 (fr) 2023-11-15
EP4275174A4 (fr) 2024-10-30
EP4275174B1 (fr) 2025-07-30

Family

ID=82358078

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22736980.8A Active EP4275174B1 (fr) Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system

Country Status (4)

Country Link
EP (1) EP4275174B1 (fr)
CA (1) CA3204014A1 (fr)
ES (1) ES3038195T3 (fr)
WO (1) WO2022150280A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US12450773B2 (en) 2021-01-05 2025-10-21 Liberty Robotics Inc. Method and system for manipulating a target item supported on a substantially horizontal support surface
US12437441B2 (en) 2021-01-05 2025-10-07 Liberty Robotics Inc. Method and system for decanting a plurality of items supported on a transport structure at one time with a picking tool for placement into a transport container
US12444080B2 (en) 2021-01-05 2025-10-14 Liberty Robotics Inc. Method and system for manipulating a multitude of target items supported on a substantially horizontal support surface one at a time
EP4639079A2 (fr) * 2022-12-20 2025-10-29 Liberty Robotics Inc. Method and system for manipulating a target item supported on a substantially horizontal support surface
CN117583281B (zh) * 2023-11-29 2024-04-19 广州赛志系统科技有限公司 Optimization method, control system and sorting production line for robotic sorting and palletizing of panels

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013033442A1 (fr) * 2011-08-30 2013-03-07 Digimarc Corporation Methods and arrangements for identifying objects
US20130329012A1 (en) * 2012-06-07 2013-12-12 Liberty Reach Inc. 3-d imaging and processing system including at least one 3-d or depth sensor which is continually calibrated during use
US10775505B2 (en) * 2015-01-30 2020-09-15 Trinamix Gmbh Detector for an optical detection of at least one object
WO2019039996A1 (fr) * 2017-08-25 2019-02-28 Maker Trading Pte Ltd Machine vision system and method for identifying locations of target elements
US11314220B2 (en) * 2018-04-26 2022-04-26 Liberty Reach Inc. Non-contact method and system for controlling an industrial automation machine
US11328380B2 (en) * 2018-10-27 2022-05-10 Gilbert Pinter Machine vision systems, illumination sources for use in machine vision systems, and components for use in the illumination sources
US10776949B2 (en) * 2018-10-30 2020-09-15 Liberty Reach Inc. Machine vision-based method and system for measuring 3D pose of a part or subassembly of parts

Also Published As

Publication number Publication date
ES3038195T3 (en) 2025-10-10
EP4275174A4 (fr) 2024-10-30
WO2022150280A1 (fr) 2022-07-14
EP4275174A1 (fr) 2023-11-15
CA3204014A1 (fr) 2022-07-14

Similar Documents

Publication Publication Date Title
US11557058B2 (en) Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
EP4275174B1 Machine vision-based method and system to facilitate the unloading of a pile of cartons in a carton handling system
EP3683721B1 (fr) Procédé de manipulation de matériau, appareil et système d'identification d'une région d'intérêt
Zeng et al. Multi-view self-supervised deep learning for 6d pose estimation in the amazon picking challenge
US20210012524A1 (en) Learning dataset creation method and device
US12444080B2 (en) Method and system for manipulating a multitude of target items supported on a substantially horizontal support surface one at a time
US12248299B2 (en) Control system unit for use in a 3-dimensional object manufacturing system and a corresponding method of operating
US20230121334A1 (en) Method and System for Efficiently Packing a Transport Container with Items Picked from a Transport Structure
EP4584054A1 (fr) Étalonnage oeil-main pour un manipulateur robotique
EP4487269A1 (fr) Système et procédés de surveillance pour entrepôts automatisés
US12450773B2 (en) Method and system for manipulating a target item supported on a substantially horizontal support surface
US12437441B2 (en) Method and system for decanting a plurality of items supported on a transport structure at one time with a picking tool for placement into a transport container
US20230120703A1 (en) Method and System for Quickly Emptying a Plurality of Items from a Transport Structure
CN117011817B Segmentation and positioning method and device for forklift pallets, and intelligent forklift
CN115457494A Object recognition method and system based on fusion of infrared images and depth information
US20230118445A1 (en) Method and System for Optimizing Pose of a Picking Tool with Respect to an Item to be Picked from a Transport Structure
EP4639480A2 Method and system for manipulating a multitude of target items supported on a substantially horizontal support surface one at a time
WO2024136988A1 Method and system for quickly emptying a plurality of items from a transport structure
WO2024136989A1 Method and system for efficiently packing a transport container with items picked from a transport structure
WO2024137732A2 Method and system for manipulating a target item supported on a substantially horizontal support surface
EP4638322A1 Method and system for decanting a plurality of items supported on a transport structure at one time with a picking tool for placement into a transport container
WO2024137642A1 Method and system for optimizing pose of a picking tool
Károly et al. Robotic Manipulation of Pathological Slides Powered by Deep Learning and Classical Image Processing
US20240375277A1 (en) Robotic Multi-Pick Detection
Kang et al. Implementation of Intelligent Robotic Systems for Pick and Place based on Image Processing

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230703

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20231116

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: LIBERTY ROBOTICS INC

A4 Supplementary search report drawn up and despatched

Effective date: 20240930

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/73 20170101ALI20240924BHEP

Ipc: G01B 11/25 20060101ALI20240924BHEP

Ipc: B65G 47/26 20060101ALI20240924BHEP

Ipc: B65G 1/04 20060101ALI20240924BHEP

Ipc: B25J 9/16 20060101ALI20240924BHEP

Ipc: B07C 5/10 20060101ALI20240924BHEP

Ipc: G06T 7/00 20170101AFI20240924BHEP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Free format text: PREVIOUS MAIN CLASS: G06T0007000000

Ipc: B25J0009160000

Ref document number: 602022018506

Country of ref document: DE

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G01B 11/25 20060101ALN20250206BHEP

Ipc: G06T 7/77 20170101ALI20250206BHEP

Ipc: B65G 59/02 20060101ALI20250206BHEP

Ipc: B25J 9/16 20060101AFI20250206BHEP

INTG Intention to grant announced

Effective date: 20250227

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: NL

Ref legal event code: FP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602022018506

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: ES

Ref legal event code: FG2A

Ref document number: 3038195

Country of ref document: ES

Kind code of ref document: T3

Effective date: 20251010

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1818459

Country of ref document: AT

Kind code of ref document: T

Effective date: 20250730

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20251202