Comparing task simplifications to learn closed-loop object picking using deep reinforcement learning (Breyer et al., 2019)
- Document ID: 2577970795328243289
- Authors: Breyer M, Furrer F, Novkovic T, Siegwart R, Nieto J
- Publication year: 2019
- Publication venue: IEEE Robotics and Automation Letters
Snippet
Enabling autonomous robots to interact in unstructured environments with dynamic objects requires manipulation capabilities that can deal with clutter, changes, and objects' variability. This letter presents a comparison of different reinforcement learning-based approaches for …
Classifications
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
      - G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
        - G06K9/62—Methods or arrangements for recognition using electronic means
          - G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N99/00—Subject matter not provided for in other groups of this subclass
        - G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N3/00—Computer systems based on biological models
        - G06N3/02—Computer systems based on biological models using neural network models
- G—PHYSICS
  - G05—CONTROLLING; REGULATING
    - G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
      - G05B2219/00—Program-control systems
        - G05B2219/30—Nc systems
          - G05B2219/39—Robotics, robotics to robotics hand
- B—PERFORMING OPERATIONS; TRANSPORTING
  - B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    - B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
      - B25J9/00—Programme-controlled manipulators
        - B25J9/16—Programme controls
          - B25J9/1674—Programme controls characterised by safety, monitoring, diagnostic
- G—PHYSICS
  - G06—COMPUTING; CALCULATING; COUNTING
    - G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
      - G06N5/00—Computer systems utilising knowledge based models
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| Breyer et al. | Comparing task simplifications to learn closed-loop object picking using deep reinforcement learning | |
| Sadeghi et al. | Sim2real viewpoint invariant visual servoing by recurrent control | |
| Finn et al. | Deep visual foresight for planning robot motion | |
| Seker et al. | Conditional Neural Movement Primitives. | |
| Krishnan et al. | Ddco: Discovery of deep continuous options for robot learning from demonstrations | |
| Kalashnikov et al. | Scaling up multi-task robotic reinforcement learning | |
| Tang et al. | Learning collaborative pushing and grasping policies in dense clutter | |
| Zhang et al. | Grasp for stacking via deep reinforcement learning | |
| Zhou et al. | 6dof grasp planning by optimizing a deep learning scoring function | |
| Piergiovanni et al. | Learning real-world robot policies by dreaming | |
| Breyer et al. | Flexible robotic grasping with sim-to-real transfer based reinforcement learning | |
| Wang et al. | Policy learning in se (3) action spaces | |
| Chen et al. | A probabilistic framework for uncertainty-aware high-accuracy precision grasping of unknown objects | |
| Ren et al. | Fast-learning grasping and pre-grasping via clutter quantization and Q-map masking | |
| Mosbach et al. | Grasp anything: Combining teacher-augmented policy gradient learning with instance segmentation to grasp arbitrary objects | |
| Ito et al. | Integrated learning of robot motion and sentences: Real-time prediction of grasping motion and attention based on language instructions | |
| Mavsar et al. | Simulation-aided handover prediction from video using recurrent image-to-motion networks | |
| Liu et al. | Sim-and-real reinforcement learning for manipulation: A consensus-based approach | |
| Wang et al. | Learning Dual-Arm Push and Grasp Synergy in Dense Clutter | |
| CN116852347A (en) | A state estimation and decision control method for autonomous grasping of non-cooperative targets | |
| Yan et al. | Maniflow: A general robot manipulation policy via consistency flow training | |
| Laezza et al. | Offline Goal-Conditioned Reinforcement Learning for Shape Control of Deformable Linear Objects | |
| Zhang et al. | Auto-conditioned recurrent mixture density networks for learning generalizable robot skills | |
| Aslan et al. | End-to-end learning from demonstration for object manipulation of robotis-Op3 humanoid robot | |
| Zheng et al. | Frame-By-Frame Motion Retargeting With Self-Collision Avoidance From Diverse Human Demonstrations |