EP4395965A1 - Robotic system with overlap processing mechanism and methods for operating the same - Google Patents
Robotic system with overlap processing mechanism and methods for operating the same
- Publication number
- EP4395965A1 (application number EP22865576.7A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- detection result
- detection
- occlusion
- information
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1679—Programme controls characterised by the tasks executed
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G1/00—Storing articles, individually or in orderly arrangement, in warehouses or magazines
- B65G1/02—Storage devices
- B65G1/04—Storage devices mechanical
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G37/00—Combinations of mechanical conveyors of the same kind, or of different kinds, of interest apart from their application in particular machines or use in particular manufacturing processes
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G43/00—Control devices, e.g. for safety, warning or fault-correcting
- B65G43/10—Sequence control of conveyors operating in combination
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G47/00—Article or material-handling devices associated with conveyors; Methods employing such devices
- B65G47/74—Feeding, transfer, or discharging devices of particular kinds or types
- B65G47/90—Devices for picking-up and depositing articles or materials
- B65G47/91—Devices for picking-up and depositing articles or materials incorporating pneumatic, e.g. suction, grippers
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G67/00—Loading or unloading vehicles
- B65G67/02—Loading or unloading land vehicles
- B65G67/24—Unloading land vehicles
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/0014—Image feed-back for automatic industrial control, e.g. robot with camera
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/54—Extraction of image or video features relating to texture
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B65—CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
- B65G—TRANSPORT OR STORAGE DEVICES, e.g. CONVEYORS FOR LOADING OR TIPPING, SHOP CONVEYOR SYSTEMS OR PNEUMATIC TUBE CONVEYORS
- B65G2203/00—Indexing code relating to control or detection of the articles or the load carriers during conveying
- B65G2203/02—Control or detection
- B65G2203/0208—Control or detection relating to the transported articles
- B65G2203/0233—Position of the article
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G05B2219/39—Robotics, robotics to robotics hand
- G05B2219/39469—Grip flexible, deformable plate, object and manipulate it
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Definitions
- FIG. 2 is a block diagram illustrating the robotic system in accordance with one or more embodiments of the present technology.
- FIG. 3 illustrates a robotic transfer configuration in accordance with one or more embodiments of the present technology.
- Embodiments of the technology described below can process such imprints and deformations depicted in the corresponding images (e.g., top view images of the overlaid or stacked objects) for recognition.
- the robotic system can process the images to effectively distinguish the surface deformations and/or any visual features/images on the object surfaces from actual edges (e.g., peripheral edges) of the objects.
- the robotic system can derive and implement motion plans that transfer the objects while accounting for and adjusting for the overlaps.
- The terms “connected” and “coupled” can be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” can be used to indicate that two or more elements are in direct contact with each other. Unless otherwise made apparent in the context, the term “coupled” can be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) contact with each other, or that the two or more elements co-operate or interact with each other (e.g., as in a cause-and-effect relationship, such as for signal transmission/reception or for function calls), or both.
Suitable Environments
- the robotic system 100 can include and/or communicate with an unloading unit 102, a transfer unit 104 (e.g., a palletizing robot and/or a piece-picker robot), a transport unit 106, a loading unit 108, or a combination thereof in a warehouse or a distribution/shipping hub.
- Each of the units in the robotic system 100 can be configured to execute one or more tasks.
- the tasks can be combined in sequence to perform an operation that achieves a goal, such as to unload objects from a truck or a van and store them in a warehouse or to unload objects from storage locations and prepare them for shipping.
- the task can include placing the objects on a target location (e.g., on top of a pallet and/or inside a bin/cage/box/case).
- the robotic system can detect the objects and derive plans (e.g., placement locations/orientations, sequence for transferring the objects, and/or corresponding motion plans) for picking, placing, and/or stacking the objects.
- Each of the units can be configured to execute a sequence of actions (e.g., by operating one or more components therein) according to one or more of the derived plans to execute a task.
- the task can include manipulation (e.g., moving and/or reorienting) of a target object 112 (e.g., one of the packages, boxes, cases, cages, pallets, etc., corresponding to the executing task), such as to move the target object 112 from a start location 114 to a task location 116.
- the unloading unit 102 (e.g., a devanning robot)
- a carrier (e.g., a truck)
- the transport unit 106 can transfer the target object 112 from an area associated with the transfer unit 104 to an area associated with the loading unit 108, and the loading unit 108 can transfer the target object 112 (e.g., by moving the pallet, the container, and/or the rack carrying the target object 112) from the transfer unit 104 to a storage location (e.g., a location on the shelves).
- the robotic system 100 is described in the context of a shipping center; however, it is understood that the robotic system 100 can be configured to execute tasks in other environments/for other purposes, such as for manufacturing, assembly, packaging, healthcare, and/or other types of automation. It is also understood that the robotic system 100 can include and/or communicate with other units, such as manipulators, service robots, modular robots, etc., not shown in FIG. 1.
- other units can include a palletizing unit for placing objects onto a pallet, a depalletizing unit for transferring the objects from cage carts or pallets onto conveyors or other pallets, a container-switching unit for transferring the objects from one container to another, a packaging unit for wrapping the objects, a sorting unit for grouping objects according to one or more characteristics thereof, a piece-picking unit for manipulating (e.g., for sorting, grouping, and/or transferring) the objects differently according to one or more characteristics thereof, or a combination thereof.
- the robotic system 100 can include and/or be coupled to physical or structural members (e.g., robotic manipulator arms) that are connected at joints for motion (e.g., rotational and/or translational displacements).
- the structural members and the joints can form a kinetic chain configured to manipulate an end-effector (e.g., the gripper) configured to execute one or more tasks (e.g., gripping, spinning, welding, etc.) depending on the use/operation of the robotic system 100.
- the robotic system 100 can include and/or communicate with the actuation devices (e.g., motors, actuators, wires, artificial muscles, electroactive polymers, etc.) configured to drive or manipulate (e.g., displace and/or reorient) the structural members about or at a corresponding joint.
- the robotic units can include transport motors configured to transport the corresponding units/chassis from place to place.
- the robotic system 100 can include and/or communicate with sensors configured to obtain information used to implement the tasks, such as for manipulating the structural members and/or for transporting the robotic units.
- the sensors can include devices configured to detect or measure one or more physical properties of the robotic system 100 (e.g., a state, a condition, and/or a location of one or more structural members/joints thereof) and/or of a surrounding environment.
- Some examples of the sensors can include accelerometers, gyroscopes, force sensors, strain gauges, tactile sensors, torque sensors, position encoders, etc.
- the sensors can include one or more imaging devices (e.g., visual and/or infrared cameras, 2D and/or 3D imaging cameras, distance measuring devices such as lidars or radars, etc.) configured to detect the surrounding environment.
- the imaging devices can generate representations of the detected environment, such as digital images and/or point clouds, that may be processed via machine/computer vision (e.g., for automatic inspection, robot guidance, or other robotic applications).
- the robotic system 100 can process the digital image and/or the point cloud to identify the target object 112 and/or a pose thereof, the start location 114, the task location 116, or a combination thereof.
- the robotic system 100 can capture and analyze an image of a designated area (e.g., a pickup location, such as inside the truck or on the conveyor belt) to identify the target object 112 and the start location 114 thereof.
- the robotic system 100 can capture and analyze an image of another designated area (e.g., a drop location for placing objects on the conveyor, a location for placing objects inside the container, or a location on the pallet for stacking purposes) to identify the task location 116.
- the imaging devices can include one or more cameras configured to generate images of the pickup area and/or one or more cameras configured to generate images of the task area (e.g., drop area).
- the robotic system 100 can determine the start location 114, the task location 116, the object detection results including the associated poses, the packing/placement plan, the transfer/packing sequence, and/or other processing results.
- FIG. 2 is a block diagram illustrating components of the robotic system 100 in accordance with one or more embodiments of the present technology.
- the robotic system 100 (e.g., at one or more of the units or assemblies and/or robots described above) can include electronic/electrical devices, such as one or more processors 202, one or more storage devices 204, one or more communication devices 206, one or more input-output devices 208, one or more actuation devices 212, one or more transport motors 214, one or more sensors 216, or a combination thereof.
- the various devices can be coupled to each other via wire connections and/or wireless connections.
- the wireless connections can be based on, for example, cellular communication protocols (e.g., 3G, 4G, LTE, 5G, etc.), wireless local area network (LAN) protocols (e.g., wireless fidelity (WIFI)), peer-to-peer or device-to-device communication protocols (e.g., Bluetooth, Near-Field Communication (NFC), etc.), Internet of Things (IoT) protocols (e.g., NB-IoT, Zigbee, Z-wave, LTE-M, etc.), and/or other wireless communication protocols.
- the processors 202 can include data processors (e.g., central processing units (CPUs), special-purpose computers, and/or onboard servers) configured to execute instructions (e.g., software instructions) stored on the storage devices 204 (e.g., computer memory).
- the processors 202 can implement the program instructions to control/interface with other devices, thereby causing the robotic system 100 to execute actions, tasks, and/or operations.
- the storage devices 204 can include non-transitory computer-readable mediums having stored thereon program instructions (e.g., software).
- Some examples of the storage devices 204 can include volatile memory (e.g., cache and/or random-access memory (RAM)) and/or non-volatile memory (e.g., flash memory and/or magnetic disk drives). Other examples of the storage devices 204 can include portable memory drives and/or cloud storage devices.
- the storage devices 204 can be used to further store and provide access to master data, processing results, and/or predetermined data/thresholds.
- the storage devices 204 can store master data that includes descriptions of objects (e.g., boxes, cases, containers, and/or products) that may be manipulated by the robotic system 100.
- the master data can include a dimension, a shape (e.g., templates for potential poses and/or computer-generated models for recognizing the object in different poses), a color scheme, an image, an identification information (e.g., bar codes, quick response (QR) codes, logos, etc., and/or expected locations thereof), an expected mass or weight, or a combination thereof for the objects expected to be manipulated by the robotic system 100.
- the master data can include information about surface patterns (e.g., printed images and/or visual aspects of corresponding material), surface roughness, or any features associated with surfaces of the objects.
- the master data can include manipulation-related information regarding the objects, such as a center-of-mass location on each of the objects, expected sensor measurements (e.g., force, torque, pressure, and/or contact measurements) corresponding to one or more actions/maneuvers, or a combination thereof.
- the robotic system can look up pressure levels (e.g., vacuum levels, suction levels, etc.), gripping/pickup areas (e.g., areas or banks of vacuum grippers to be activated), and other stored master data for controlling transfer robots.
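As an illustration of how such a master-data lookup for transfer control might be structured, the following Python sketch models a registry keyed by object identifier. The object IDs, field names, and values are hypothetical examples, not taken from the patent.

```python
# Hypothetical master-data registry; IDs, fields, and values are illustrative.
MASTER_DATA = {
    "SKU-001": {
        "dimensions_mm": (300, 200, 15),
        "vacuum_level_kpa": -60,      # target suction pressure
        "gripper_banks": (1, 2),      # banks of vacuum grippers to activate
        "grip_area_mm": (120, 120),   # usable gripping/pickup area
    },
}


def lookup_grip_settings(object_id, master_data=MASTER_DATA):
    """Return stored gripping parameters for a registered object, if any."""
    entry = master_data.get(object_id)
    if entry is None:
        return None  # unregistered object: caller falls back to defaults
    return {
        "vacuum_level_kpa": entry["vacuum_level_kpa"],
        "gripper_banks": entry["gripper_banks"],
        "grip_area_mm": entry["grip_area_mm"],
    }
```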
- the storage devices 204 can also store object tracking data.
- the tracking data includes registration data that indicates objects that are registered in the master data.
- the registration data can include information of objects that are expected to be stored at a start location and/or are expected to be moved at a drop location.
- the object tracking data can include a log of scanned or manipulated objects.
- the object tracking data can include image data (e.g., a picture, point cloud, live video feed, etc.) of the objects at one or more locations (e.g., designated pickup or drop locations and/or conveyor belts).
- the object tracking data can include locations and/or orientations of the objects at the one or more locations.
- the communication devices 206 can include circuits configured to communicate with external or remote devices via a network.
- the communication devices 206 can include receivers, transmitters, modulators/demodulators (modems), signal detectors, signal encoders/decoders, connector ports, network cards, etc.
- the communication devices 206 can be configured to send, receive, and/or process electrical signals according to one or more communication protocols (e.g., the Internet Protocol (IP), wireless communication protocols, etc.).
- the robotic system 100 can use the communication devices 206 to exchange information between units of the robotic system 100 and/or exchange information (e.g., for reporting, data gathering, analyzing, and/or troubleshooting purposes) with systems or devices external to the robotic system 100.
- the input-output devices 208 can include user interface devices configured to communicate information to and/or receive information from human operators.
- the input-output devices 208 can include a display 210 and/or other output devices (e.g., a speaker, a haptics circuit, or a tactile feedback device, etc.) for communicating information to the human operator.
- the input-output devices 208 can include control or receiving devices, such as a keyboard, a mouse, a touchscreen, a microphone, a user interface (UI) sensor (e.g., a camera for receiving motion commands), a wearable input device, etc.
- the robotic system 100 can use the input-output devices 208 to interact with the human operators in executing an action, a task, an operation, or a combination thereof.
- the structural members and the joints can form a kinetic chain configured to manipulate an end-effector (e.g., the gripper) configured to execute one or more tasks (e.g., gripping, spinning, welding, etc.) depending on the use/operation of the robotic system 100.
- the kinetic chain can include the actuation devices 212 (e.g., motors, actuators, wires, artificial muscles, electroactive polymers, etc.) configured to drive or manipulate (e.g., displace and/or reorient) the structural members about or at a corresponding joint.
- the kinetic chain can include the transport motors 214 configured to transport the corresponding units/chassis from place to place.
- the sensors 216 can be configured to obtain information used to implement the tasks, such as for manipulating the structural members and/or for transporting the robotic units.
- the sensors 216 can include devices configured to detect or measure one or more physical properties of the controllers, the robotic units (e.g., a state, a condition, and/or a location of one or more structural members/joints thereof), and/or for a surrounding environment.
- Some examples of the sensors 216 can include contact sensors, proximity sensors, accelerometers, gyroscopes, force sensors, strain gauges, torque sensors, position encoders, pressure sensors, vacuum sensors, etc.
- the sensors 216 can include one or more imaging devices 222 (e.g., 2-dimensional and/or 3-dimensional imaging devices), configured to detect the surrounding environment.
- the imaging devices can include cameras (including visual and/or infrared cameras), lidar devices, radar devices, and/or other distance-measuring or detecting devices.
- the imaging devices 222 can generate a representation of the detected environment, such as a digital image and/or a point cloud, used for implementing machine/computer vision (e.g., for automatic inspection, robot guidance, or other robotic applications).
- the robotic system 100 (via, e.g., the processors 202) can process image data and/or the point cloud to identify the target object 112 of FIG. 1, the start location 114 of FIG. 1, the task location 116 of FIG. 1, a pose of the target object 112 of FIG. 1, or a combination thereof.
- the robotic system 100 can use image data from the imaging device 222 to determine how to access and pick up objects. Images of the objects can be analyzed to detect the objects and determine a motion plan for positioning a vacuum gripper assembly to grip detected objects.
- the sensors 216 of FIG. 2 can include position sensors 224 of FIG. 2 (e.g., position encoders, potentiometers, etc.) configured to detect positions of structural members (e.g., the robotic arms and/or the end-effectors) and/or corresponding joints of the robotic system 100.
- the robotic system 100 can use the position sensors 224 to track locations and/or orientations of the structural members and/or the joints during execution of the task.
- the unloading unit, transfer unit, transport unit/assembly, and the loading unit disclosed herein can include the sensors 216.
- FIG. 3 illustrates a robotic transfer configuration in accordance with one or more embodiments of the present technology.
- the robotic transfer configuration can include a robotic arm assembly 302 having an end effector 304 (e.g., a gripper) configured to pick objects from a source container 306 (e.g., a bin having low and/or clear walls) and transfer them to a destination location.
- the robotic arm assembly 302 can have structural members and joints that function as a kinetic chain.
- the end effector 304 can include a vacuum-based gripper coupled to the distal end of the kinetic chain and configured to draw in air and create a vacuum between a gripping interface (e.g., a bottom portion of the end effector 304) and a surface of the objects to grasp the objects.
- the robotic transfer configuration can be adapted to grasp and transfer flexible objects 310 (also referred to as deformable objects, e.g., objects having physical traits, such as thickness and/or rigidity, that satisfy a predetermined threshold) out of the source container 306.
- the robotic transfer configuration can be adapted to use the vacuum-based gripper to grasp plastic pouches or clothes that may or may not be plastic-wrapped or bagged items from within the source container 306.
- an object may be considered flexible when the object lacks structural rigidity such that an overhanging or ungripped portion (e.g., a portion extending beyond a footprint of the grasping end-effector) of an object bends, folds, or otherwise fails to maintain a constant shape/pose when the object is lifted/moved.
- the source container 306 includes five flexible objects.
- Objects C1, C2, and C3 are located on (e.g., directly contacting and supported by) a bottom inner surface of the source container 306.
- An intermediate “object B” (e.g., upper portions thereof as illustrated in FIG. 4A)
- the remaining portion(s) of object B (e.g., lower portions)
- a top “object A” may overlap and be supported by objects C1, C2, C3, and B.
- the imprint, bulge, or crease can be formed because object C3 has a top surface that is higher than the top surface of object C1 causing object A to bend.
- the obtained images can also depict any 2-dimensional printed surface features such as pictures or text printed on a surface of an object (e.g., logo 404).
- one or more of the objects in FIGs. 4A-4B and/or portions thereof can be transparent or translucent (e.g., packages having clear plastic wrappings, envelopes, or sacks).
- the different dashed lines correspond to edges and/or surface prints of the underlying objects that are seen through the upper transparent or translucent object.
- the image processing described herein can be applied to transparent objects as well as flexible objects.
- the flexible objects may be referred to as thin flexible objects that have an average thickness below a thickness threshold or edge portions with a tapered shape, where the center of the object is thicker than the edge portions of the thin flexible objects.
- the thickness threshold can, in some embodiments, be one centimeter or less and, in other embodiments, the thickness threshold can be one millimeter or less.
- when the thin flexible objects are stacked or piled on one another at random orientations with varying degrees of overlap, it may be difficult to determine which of the thin flexible objects or portions of the thin flexible objects are on top of or above the other thin flexible objects in the stack.
- the robotic system can process the image data based on identifying contested portions (e.g., portions of the image processed as having probabilities of being associated with or belonging to one of multiple objects), generating one or more types of masks and analyzing the different types of portions to determine the grippable regions or areas for the grip locations.
- FIG. 5 is flow diagram of an example method 500 for operating a robotic system in accordance with one or more embodiments of the present technology.
- the method 500 can be implemented by the robotic system 100 of FIG. 1 (via, e.g., the controller and/or the processor 202 of FIG. 2) to process the obtained images and plan/perform tasks involving flexible objects.
- the method 500 is described below using the example illustrated in FIGs. 4A and 4B.
- the robotic system can generate detection features from the image data.
- the detection features can be elements of the image data that are processed and used for generating a detection hypothesis/result (e.g., an estimate of one or more object identities, corresponding poses/locations, and/or relative arrangements thereof associated with a portion of the image data) corresponding to the flexible objects depicted in the image data.
- the detection features can include edge features (e.g., lines in the image data that can correspond with the peripheral edges, or a portion thereof, of the flexible objects depicted in the image data) and key points that are generated from pixels in the 2D image data, as well as depth values and/or surface normals for three-dimensional (3D) points in the 3D point cloud image data.
- the robotic system can detect lines depicted in the obtained image data using one or more circuits and/or algorithms (e.g., Sobel filters).
- the detected lines can be further processed to generate the edge features.
- the robotic system can process the detected lines to identify lines 406a-406d as the edge features that correspond to the peripheral edges of object C1.
- the robotic system may calculate a confidence measure for edge features as a representation of certainty/likelihood that the edge feature corresponds with the peripheral edge of one of the flexible objects.
- the robotic system can calculate the confidence measure based on a thickness/width, an orientation, a length, a shape/curvature, degree of continuity and/or other detected aspects of the edge features.
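A minimal Python/OpenCV sketch of this edge-feature step is shown below. The text above mentions Sobel-style filtering and several confidence factors; this illustration substitutes a Canny edge map plus a probabilistic Hough transform and scores each line by length only, so it is an assumption-laden stand-in rather than the patent's method.

```python
import cv2
import numpy as np


def detect_edge_features(gray_image, min_len_px=40):
    """Detect candidate edge lines in a grayscale (uint8) image and score them.

    Confidence here considers only line length; the patent also describes
    thickness, orientation, curvature, and continuity as factors.
    """
    edges = cv2.Canny(gray_image, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                            minLineLength=min_len_px, maxLineGap=5)
    features = []
    if lines is None:
        return features
    diag = float(np.hypot(*gray_image.shape[:2]))
    for x1, y1, x2, y2 in lines[:, 0]:
        length = float(np.hypot(x2 - x1, y2 - y1))
        confidence = min(1.0, length / (0.25 * diag))  # longer lines score higher
        features.append({"endpoints": ((x1, y1), (x2, y2)),
                         "confidence": confidence})
    return features
```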
- the robotic system can process the pixels of the 2D image data using one or more circuits and/or algorithms, such as scale-invariant feature transform (SIFT) algorithms.
- the robotic system can generate the detection features to include estimates of sections or continuous surfaces bounded/defined by the edge features such as by identifying junctions between lines having different orientations.
- the edge features in FIG. 4A can intersect with each other in locations where the objects overlap with each other thereby forming junctions.
- the robotic system can estimate the sections based on a set of the edge features and junctions. In other words, the robotic system can estimate each section as an area bounded/defined by the set of joined/connected edges. Additionally, the robotic system may estimate each section based on relative orientations of connected edges (e.g., parallel opposing edges, orthogonal connections, at predefined angles corresponding to templates representing the flexible object, or the like).
- the robotic system can determine the edge features based on the depth values. For example, the robotic system can identify exposed peripheral edges and corners based on the detected edges. The peripheral edges and corners can be identified based on depth values from the three-dimensional image data. When a difference between the depth values at different sides of a line is greater than a predetermined threshold difference, the robotic system can identify the line to be the edge feature.
- The robotic system can calculate, such as based on the edge confidence measure, relative orientations of the connected edges, or the like, a confidence measure for each estimated section as a representation of certainty/likelihood that the estimated section is a continuous surface and/or a single one of the flexible objects.
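The depth test described above, flagging a line as an edge feature when depth values on its two sides differ by more than a threshold, could be sketched as follows. The sampling offsets and the 10 mm threshold are illustrative assumptions.

```python
import numpy as np


def is_depth_edge(depth_map, p1, p2, offset_px=3, depth_threshold_mm=10.0):
    """Check whether a detected 2D line coincides with a depth discontinuity.

    Samples the depth map a few pixels to either side of the line segment
    (p1, p2) and flags it as an edge feature when the mean depth difference
    exceeds the threshold.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    direction = p2 - p1
    norm = np.linalg.norm(direction)
    if norm == 0:
        return False
    normal = np.array([-direction[1], direction[0]]) / norm  # unit normal
    h, w = depth_map.shape
    samples_a, samples_b = [], []
    for t in np.linspace(0.1, 0.9, 9):
        point = p1 + t * direction
        for sign, bucket in ((+1, samples_a), (-1, samples_b)):
            x, y = (point + sign * offset_px * normal).round().astype(int)
            if 0 <= y < h and 0 <= x < w:
                bucket.append(depth_map[y, x])
    if not samples_a or not samples_b:
        return False
    return abs(np.mean(samples_a) - np.mean(samples_b)) > depth_threshold_mm
```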
- the robotic system can determine positively identified areas (e.g., the robotic system can categorize certain portions of the detection result as positively identified).
- a positively identified area can represent a portion of the detection result that has been verified to match one of the registered objects.
- the robotic system can identify the portions of the detection result as the positively identified area when detection features in a corresponding portion of the image data match the detection features of a template corresponding to the registered object and/or other physical attributes thereof (e.g., shape, a set of dimensions, and/or surface texture of the registered object).
- the robotic system can calculate a score representative of a degree of match/difference between the received image and the template/texture image of registered objects.
- the remaining portions of the estimated surface can correspond to portions that were not compared or that did not sufficiently match the template.
- the robotic system can identify the compared/matching portions within the received image data as the positively identified area.
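One hedged way to realize the match score for positively identified areas is normalized template matching, as in the sketch below. The 0.8 acceptance threshold is an assumption rather than a value from the patent, and both inputs are assumed to be grayscale images of the same type.

```python
import cv2


def match_score(image_region, template):
    """Score how well an image region matches a registered object's template.

    Uses normalized cross-correlation as a stand-in for the patent's
    match/difference score; returns (best score, accepted?).
    """
    if (image_region.shape[0] < template.shape[0]
            or image_region.shape[1] < template.shape[1]):
        return 0.0, False  # region too small to contain the template
    result = cv2.matchTemplate(image_region, template, cv2.TM_CCOEFF_NORMED)
    best = float(result.max())
    return best, best >= 0.8
```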
- the robotic system can process each of the one or more detection results to identify contested portions of the one or more detection results.
- the robotic system can process an instance of the one or more detection results individually as a target detection result.
- the robotic system can identify contested portions of a detection result.
- the contested portions can represent areas of the detection result having one or more uncertainty factors (e.g., insufficient confidence values, insufficient amount of matching pixels, overlapping detection footprints, and/or the like).
- the robotic system can determine the occlusion state as a combination of each of the detection features in the occlusion region. More specifically, the robotic system can calculate the correspondence score to determine whether the edge features, the key points, and/or the depth values correspond or belong to the target detection result or the adjacent detection result.
- the correspondence score can be a single composite score over the detection features, while in other embodiments the correspondence scores can be calculated individually (e.g., an edge feature correspondence score, a key point correspondence score, and/or a depth value correspondence score) for each of the detection features and combined to calculate the overall correspondence score.
- the robotic system can include weights for each of the detection features to increase or decrease the contribution of the corresponding detection feature in calculating the correspondence score.
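The weighted combination of per-feature correspondence scores might look like the following sketch; the weight values and the assumption that each score is normalized to [0, 1] are illustrative.

```python
def correspondence_score(edge_score, keypoint_score, depth_score,
                         weights=(0.4, 0.4, 0.2)):
    """Combine per-feature correspondence scores into one composite score.

    Each input is assumed to be in [0, 1] and to express how strongly that
    feature type supports assigning the occluded region to the target
    detection result rather than the adjacent one. Weights are illustrative
    and would be tuned per deployment.
    """
    w_edge, w_kp, w_depth = weights
    total = w_edge + w_kp + w_depth
    return (w_edge * edge_score + w_kp * keypoint_score
            + w_depth * depth_score) / total
```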
- the robotic system can identify, based on the comparison with the known objects in the master data, that such a rectangularly shaped area having the dimensions d1 and d2 does not match any of the known objects in the master data with a certain confidence measure (e.g., a confidence measure that is above a certain threshold confidence measure).
- the robotic system can, therefore, identify the area as contested portion 1.
- the contested portion 2 defined by lines 408a and 408b corresponds to an area having an irregular shape (e.g., line 408b defines the shape of a rectangle while line 408a cuts off a portion of a top-left corner of the rectangle).
- the robotic system can also analyze the contested portions for surface features/textures, such as pictures or text on a surface of an object.
- the analyzing of the contested portions can include comparing the detected edges in a contested portion to known images, patterns, logos and/or pictures in the master data.
- the robotic system can determine that the surface corresponding to the contested portion belongs to a single object. For example, the robotic system can compare the logo 404 (e.g., corresponding to the contested portion 3) to known logos and pictures in the master data.
- the robotic system can identify the rectangular enclosed areas as contested portions since the dimensions (e.g., d1 and d2) are less than the minimum object dimension listed in the master data. These uncertainties may result in confidence levels that fall below one or more predetermined thresholds, thereby causing the robotic system to generate the occlusion mask A1 to block the contested portions from processing.
- the robotic system can similarly process the overlapped portions on the bottom of object A to generate the occlusion masks A2 and B.
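As a sketch of how contested portions could be rasterized into an occlusion mask that blocks those pixels from further processing, assuming each contested portion is supplied as a pixel-space polygon:

```python
import numpy as np
import cv2


def build_occlusion_mask(image_shape, contested_polygons):
    """Rasterize contested portions into a boolean occlusion mask.

    Each contested portion is given as a polygon in pixel coordinates; the
    resulting mask can be used to exclude those pixels from grip-location
    processing. The polygon input format is an assumption for illustration.
    """
    mask = np.zeros(image_shape[:2], dtype=np.uint8)
    for polygon in contested_polygons:
        pts = np.asarray(polygon, dtype=np.int32).reshape(-1, 1, 2)
        cv2.fillPoly(mask, [pts], 255)
    return mask.astype(bool)
```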
- the robotic system can use the processing sequence that first determines whether one or more of the positively identified areas have a shape and/or a set of dimensions sufficient to encompass a footprint of the gripper. If such locations exist, the robotic system can process a set of grip poses within the positively identified areas according to other gripping requirements (e.g., locations/pose relative to CoM) to determine the grip location for the target object. If the positively identified areas for one object are each insufficient to surround the gripper footprint, the robotic system can then consider grip locations/poses that overlap and extend beyond the positively identified areas (e.g., into the uncertain regions). The robotic system can eliminate locations/poses that extend into or overlap the occlusion region. The robotic system can process the remaining locations/poses according to the other gripping requirements to determine the grip location for the target object.
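The footprint check described above, first requiring the gripper footprint to fit inside a positively identified area and then eliminating poses that overlap the occlusion region, can be approximated with morphological operations, as in this sketch. The rectangular footprint and its size are assumptions for illustration.

```python
import numpy as np
import cv2


def candidate_grip_centers(positive_area_mask, occlusion_mask,
                           gripper_footprint_px=(60, 60)):
    """Find gripper center positions fully inside a positively identified area.

    Eroding the area mask by the gripper footprint leaves exactly the centers
    whose footprint stays inside the area; centers whose footprint would touch
    the occlusion mask are then discarded.
    """
    kernel = np.ones(gripper_footprint_px[::-1], dtype=np.uint8)  # (h, w)
    area = positive_area_mask.astype(np.uint8)
    inside = cv2.erode(area, kernel) > 0
    blocked = cv2.dilate(occlusion_mask.astype(np.uint8), kernel) > 0
    valid = inside & ~blocked
    ys, xs = np.nonzero(valid)
    return list(zip(xs.tolist(), ys.tolist()))  # (x, y) candidate centers
```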
- the robotic system can derive the motion plan by placing a modeled footprint for the end-effector at the grip location and iteratively calculating approach trajectories to the target object, depart trajectories from the start location after grasping the target object, transfer trajectories between the start location and the destination location, or other trajectories for transfer of the target object between the start location and the destination location.
- the robotic system can consider other movement directions or maneuvers when the trajectories overlap obstacles or are predicted to cause a collision or other errors.
- the robotic system can use the trajectories and/or corresponding commands, settings, etc. as the motion plan for transferring the target object from the start location to the destination location.
- the robotic system can determine the grippable objects when the positively identified areas have dimensions that exceed the minimum grip requirements and when the robotic system can determine that the trajectories can be calculated to transfer the detected objects. In other words, if the robotic system is unable to calculate trajectories to transfer the detected object within a specified period of time and/or the positively identified areas for the detected object do not meet the minimum grip requirements, then the robotic system will not determine the detected object as the grippable object.
- the robotic system can select the target object from the grippable objects. In some embodiments, the robotic system can select the target object as the grippable object that does not include the occlusion regions. In other embodiments, the robotic system can select the target object as the grippable object for which trajectory calculations are completed first. In yet further embodiments, the robotic system can select the target object as the grippable object with the fastest transfer time.
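A compact sketch of one of the selection policies above (prefer grippable objects without occlusion regions, then break ties by the shortest transfer time); the dictionary field names are hypothetical.

```python
def select_target(grippable_objects):
    """Pick the next target among grippable objects.

    Each entry is assumed to be a dict with 'has_occlusion' (bool) and
    'transfer_time_s' (float) keys; these field names are illustrative.
    """
    if not grippable_objects:
        return None
    return min(grippable_objects,
               key=lambda obj: (obj["has_occlusion"], obj["transfer_time_s"]))
```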
- the robotic system can be configured to derive the motion plan for lifting the target object first and then laterally transferring the object. In some embodiments, the robotic system can derive the motion plan for sliding or laterally displacing the target object, such as to clear any object overlaps, before transferring the target object and/or re-obtaining and processing image data.
- the robotic system can implement the derived motion plan(s).
- the robotic system (via, e.g., the processor and the communication device) can implement the motion plan(s) by communicating the path and/or the corresponding commands, settings, etc. to the robotic arm assembly.
- the robotic arm assembly can execute the motion plan(s) to transfer the target object(s) from the start location to the destination location as indicated by the motion plan(s).
- the robotic system can obtain a new set of images after implementing the motion plan(s) and repeat the processes described above for blocks 502-512.
- the robotic system can repeat the processes until the source container is empty, until all targeted objects have been transferred, or when no viable solutions remain (e.g., error condition where the detected edges do not form at least one viable surface portion).
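Putting the blocks together, the repeat-until-empty behavior could be organized as the loop below. The camera, detector, planner, and robot objects are hypothetical interfaces standing in for the imaging devices, detection/occlusion processing, motion planning, and robotic arm assembly described above, so this is a structural sketch rather than the patent's implementation.

```python
def run_picking_cycle(camera, detector, planner, robot, max_iterations=100):
    """Repeat detect-plan-transfer until the source container is cleared."""
    for _ in range(max_iterations):
        image_data = camera.capture()                 # re-obtain images each cycle
        detections = detector.detect(image_data)      # detection results + masks
        grippable = [d for d in detections if d.get("grippable")]
        if not grippable:
            break  # container empty or no viable surface portions remain
        # simple policy: pick the grippable object with the shortest transfer
        target = min(grippable,
                     key=lambda d: d.get("transfer_time_s", float("inf")))
        plan = planner.plan(target)
        if plan is None:
            break  # error condition: no viable trajectory could be derived
        robot.execute(plan)                           # transfer to destination
```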
- the present technology is illustrated, for example, according to various aspects described below. Various examples of aspects of the present technology are described as numbered examples (1, 2, 3, etc.) for convenience. These are provided as examples and do not limit the present technology. It is noted that any of the dependent examples can be combined in any suitable manner, and placed into a respective independent example. The other examples can be presented in a similar manner.
- 1. An example method of operating a robotic system, comprising: obtaining image data representative of at least a first object and a second object at a start location; based on the image data, determining that the first and second objects overlap each other; identifying an overlapping region based on the image data in response to the determination, wherein the overlapping region represents an area where at least a portion of the first object overlaps at least a portion of the second object; and categorizing the overlapping region based on one or more depicted characteristics for motion planning.
- identifying that there is insufficient evidence includes: identifying that the first object is under the second object in the overlapping region; and generating a second detection result based on the image data, wherein the second detection result identifies the second object and a location thereof and further indicates that the second object is under the first object in the overlapping region.
- generating the first detection result further includes identifying one or more portions of the image data corresponding to the first object as:
- Any robotic system comprising: at least one processor; and at least one memory including processor instructions that, when executed, cause the at least one processor to perform any one or more of example methods 1-13 and/or a combination of one or more portions thereof.
- A non-transitory computer readable medium including processor instructions that, when executed by one or more processors, cause the one or more processors to perform any one or more of example methods 1-13 and/or a combination of one or more portions thereof.
Landscapes
- Engineering & Computer Science (AREA)
- Mechanical Engineering (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Robotics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Computing Systems (AREA)
- Artificial Intelligence (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Aviation & Aerospace Engineering (AREA)
- Manipulator (AREA)
- Image Analysis (AREA)
- De-Stacking Of Articles (AREA)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202163239795P | 2021-09-01 | 2021-09-01 | |
| PCT/US2022/042387 WO2023034533A1 (en) | 2021-09-01 | 2022-09-01 | Robotic system with overlap processing mechanism and methods for operating the same |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| EP4395965A1 (en) | 2024-07-10 |
| EP4395965A4 (en) | 2025-07-30 |
Family
ID=85386627
Family Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| EP22865576.7A (pending; published as EP4395965A4) | 2021-09-01 | 2022-09-01 | ROBOT SYSTEM WITH OVERLAP PROCESSING MECHANISM AND METHOD OF OPERATING THE SAME |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20230071488A1 (en) |
| EP (1) | EP4395965A4 (en) |
| JP (2) | JP7398763B2 (en) |
| CN (2) | CN116194256A (en) |
| WO (1) | WO2023034533A1 (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20230249345A1 (en) * | 2022-02-09 | 2023-08-10 | eBots Inc. | System and method for sequencing assembly tasks |
| JP2024155438A (en) * | 2023-04-21 | 2024-10-31 | 株式会社日立製作所 | OBJECT RECOGNITION DEVICE, OBJECT RECOGNITION METHOD, AND TRANSPORTATION ROBOT SYSTEM |
| US20240371127A1 (en) * | 2023-05-07 | 2024-11-07 | Plus One Robotics, Inc. | Machine vision systems and methods for robotic picking and other environments |
| WO2024257546A1 (en) * | 2023-06-15 | 2024-12-19 | 株式会社安川電機 | Robot system, robot control method, and robot control program |
| US20250010489A1 (en) * | 2023-07-03 | 2025-01-09 | Mitsubishi Electric Research Laboratories, Inc. | System and Method for Controlling Operation of Robotic Manipulator with Soft Robotic Touch |
| CN119858168B (en) * | 2025-03-24 | 2025-06-27 | 长春慧程科技有限公司 | Automobile production line safety monitoring system based on industrial Internet |
Family Cites Families (16)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2006097680A1 (en) * | 2005-03-17 | 2006-09-21 | British Telecommunications Public Limited Company | Method of tracking objects in a video sequence |
| JP4199264B2 (en) * | 2006-05-29 | 2008-12-17 | ファナック株式会社 | Work picking apparatus and method |
| FI20106387A7 (en) * | 2010-12-30 | 2012-07-01 | Zenrobotics Oy | Method, computer program and apparatus for determining a gripping location |
| JP6000029B2 (en) | 2012-09-10 | 2016-09-28 | 株式会社アプライド・ビジョン・システムズ | Handling system, handling method and program |
| US9904852B2 (en) * | 2013-05-23 | 2018-02-27 | Sri International | Real-time object detection, tracking and occlusion reasoning |
| US9259844B2 (en) * | 2014-02-12 | 2016-02-16 | General Electric Company | Vision-guided electromagnetic robotic system |
| EP4088889A1 (en) * | 2015-11-13 | 2022-11-16 | Berkshire Grey Operating Company, Inc. | Sortation systems and methods for providing sortation of a variety of objects |
| US10322510B2 (en) * | 2017-03-03 | 2019-06-18 | Futurewei Technologies, Inc. | Fine-grained object recognition in robotic systems |
| CN108319953B (en) * | 2017-07-27 | 2019-07-16 | 腾讯科技(深圳)有限公司 | Occlusion detection method and device, electronic equipment and the storage medium of target object |
| JP7062406B2 (en) * | 2017-10-30 | 2022-05-16 | 株式会社東芝 | Information processing equipment and robot arm control system |
| US11724401B2 (en) * | 2019-11-13 | 2023-08-15 | Nvidia Corporation | Grasp determination for an object in clutter |
| US10759054B1 (en) * | 2020-02-26 | 2020-09-01 | Grey Orange Pte. Ltd. | Method and system for handling deformable objects |
| US11559885B2 (en) * | 2020-07-14 | 2023-01-24 | Intrinsic Innovation Llc | Method and system for grasping an object |
| US11273552B2 (en) * | 2020-07-14 | 2022-03-15 | Vicarious Fpc, Inc. | Method and system for object grasping |
| WO2022015807A1 (en) * | 2020-07-14 | 2022-01-20 | Vicarious Fpc, Inc. | Method and system for object grasping |
| US11911919B2 (en) * | 2021-03-05 | 2024-02-27 | Mujin, Inc. | Method and computing system for performing grip region detection |
-
2022
- 2022-09-01 US US17/901,739 patent/US20230071488A1/en active Pending
- 2022-09-01 JP JP2022575865A patent/JP7398763B2/en active Active
- 2022-09-01 CN CN202280004989.6A patent/CN116194256A/en active Pending
- 2022-09-01 WO PCT/US2022/042387 patent/WO2023034533A1/en not_active Ceased
- 2022-09-01 EP EP22865576.7A patent/EP4395965A4/en active Pending
- 2022-09-01 CN CN202310549157.9A patent/CN116638509A/en active Pending
-
2023
- 2023-11-27 JP JP2023199732A patent/JP2024020532A/en active Pending
Also Published As
| Publication number | Publication date |
|---|---|
| CN116638509A (en) | 2023-08-25 |
| US20230071488A1 (en) | 2023-03-09 |
| JP2024020532A (en) | 2024-02-14 |
| WO2023034533A1 (en) | 2023-03-09 |
| CN116194256A (en) | 2023-05-30 |
| EP4395965A4 (en) | 2025-07-30 |
| JP2023539403A (en) | 2023-09-14 |
| JP7398763B2 (en) | 2023-12-15 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| US20230071488A1 (en) | Robotic system with overlap processing mechanism and methods for operating the same | |
| US11501445B2 (en) | Robotic system with automated package scan and registration mechanism and methods of operating the same | |
| JP7598112B2 (en) | Robot system with cooperative transport mechanism | |
| JP7398662B2 (en) | Robot multi-sided gripper assembly and its operating method | |
| JP7175487B1 (en) | Robotic system with image-based sizing mechanism and method for operating the robotic system | |
| US12390923B2 (en) | Robotic gripper assemblies for openable object(s) and methods for picking objects | |
| US12202145B2 (en) | Robotic system with object update mechanism and methods for operating the same | |
| JP7126667B1 (en) | Robotic system with depth-based processing mechanism and method for manipulating the robotic system | |
| CN111618852B (en) | Robot system with coordinated transfer mechanism | |
| CN115609569A (en) | Robot system with image-based sizing mechanism and method of operating the same | |
| CN115570556B (en) | Robotic system with depth-based processing mechanism and operation method thereof | |
| CN115258510A (en) | Robotic system with object update mechanism and method for operating the same |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
| | PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| | STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
| | 17P | Request for examination filed | Effective date: 20240328 |
| | AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| | DAV | Request for validation of the european patent (deleted) | |
| | DAX | Request for extension of the european patent (deleted) | |
| | A4 | Supplementary search report drawn up and despatched | Effective date: 20250702 |
| | RIC1 | Information provided on ipc code assigned before grant | Ipc: B25J 9/16 20060101AFI20250626BHEP; Ipc: B25J 13/08 20060101ALI20250626BHEP; Ipc: B25J 11/00 20060101ALI20250626BHEP; Ipc: G06T 7/73 20170101ALI20250626BHEP; Ipc: G06T 7/11 20170101ALI20250626BHEP; Ipc: G06T 7/13 20170101ALI20250626BHEP; Ipc: G06T 1/00 20060101ALI20250626BHEP |