Arduengo et al., 2023 - Google Patents
Gaussian-process-based robot learning from demonstration (Arduengo et al., 2023)
- Document ID: 17585604798535736972
- Authors: Arduengo M, Colomé A, Lobo-Prat J, Sentis L, Torras C
- Publication year: 2023
- Publication venue: Journal of Ambient Intelligence and Humanized Computing
Snippet
Learning from demonstration makes it possible to encode task constraints by observing the motion executed by a human teacher. We present a Gaussian-process-based learning from demonstration (LfD) approach that allows robots to learn manipulation skills from …
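The snippet describes the Gaussian-process-based LfD approach only at a high level. As a rough, self-contained illustration of the general idea (not the authors' actual method), the sketch below fits a Gaussian process to time-indexed points pooled from several noisy demonstrations and queries a mean trajectory together with a per-step predictive variance, the kind of uncertainty estimate an LfD controller can exploit. The kernel choice, noise level, synthetic sine-wave demonstrations, and one-dimensional output are all assumptions made for brevity.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=0.1, variance=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

# Hypothetical demonstrations: 3 noisy executions of the same 1-D motion,
# each sampled at 20 time steps in [0, 1], stacked into one training set.
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0.0, 1.0, 20), 3)                        # time stamps
y = np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(t.size)   # positions

# Standard GP regression posterior at query times t_star.
noise = 0.05 ** 2
K = rbf_kernel(t, t) + noise * np.eye(t.size)
t_star = np.linspace(0.0, 1.0, 100)
K_s = rbf_kernel(t_star, t)

alpha = np.linalg.solve(K, y)
mean = K_s @ alpha                                                # reproduced trajectory
cov = rbf_kernel(t_star, t_star) - K_s @ np.linalg.solve(K, K_s.T)
std = np.sqrt(np.clip(np.diag(cov), 0.0, None))                   # per-step uncertainty

print(mean[:5], std[:5])
```

The predictive variance shrinks where demonstrations agree and grows where they diverge or no data exists, which is why GP-based LfD is often used to modulate stiffness or trigger replanning along the reproduced motion.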
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/0635—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/04—Architectures, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computer systems utilising knowledge based models
- G06N5/04—Inference methods or devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6267—Classification techniques
- G06K9/6268—Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computer systems utilising knowledge based models
- G06N5/02—Knowledge representation
- G06N5/022—Knowledge engineering, knowledge acquisition
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G05B13/027—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion using neural networks only
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06K—RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K9/00—Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
- G06K9/62—Methods or arrangements for recognition using electronic means
- G06K9/6217—Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/004—Artificial life, i.e. computers simulating life
- G06N3/008—Artificial life, i.e. computers simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. robots replicating pets or humans in their appearance or behavior
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N7/00—Computer systems based on specific mathematical models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/18—Digital computers in general; Data processing equipment in general in which a programme is changed according to experience gained by the computer itself during a complete run; Learning machines
Similar Documents
| Publication | Title |
|---|---|
| Arduengo et al. | Gaussian-process-based robot learning from demonstration |
| Singh et al. | Cog: Connecting new skills to past experience with offline reinforcement learning |
| LeCun | A path towards autonomous machine intelligence version 0.9.2, 2022-06-27 |
| Kilinc et al. | Reinforcement learning for robotic manipulation using simulated locomotion demonstrations |
| Wu et al. | Model primitives for hierarchical lifelong reinforcement learning |
| Triantafyllidis et al. | Hybrid hierarchical learning for solving complex sequential tasks using the robotic manipulation network ROMAN |
| Pignat et al. | Learning from demonstration using products of experts: Applications to manipulation and task prioritization |
| Akbari et al. | Ontological physics-based motion planning for manipulation |
| Valarezo Anazco et al. | Natural object manipulation using anthropomorphic robotic hand through deep reinforcement learning and deep grasping probability network |
| Toussaint et al. | A Bayesian view on motor control and planning |
| Ting et al. | Locally weighted regression for control |
| Tobin | Real-world robotic perception and control using synthetic data |
| Takahashi | Comparison of high-dimensional neural networks using hypercomplex numbers in a robot manipulator control |
| Liu et al. | Active object recognition using hierarchical local-receptive-field-based extreme learning machine |
| Dash et al. | RETRACTED ARTICLE: Deep belief network-based probabilistic generative model for detection of robotic manipulator failure execution |
| Luo et al. | Endowing robots with longer-term autonomy by recovering from external disturbances in manipulation through grounded anomaly classification and recovery policies |
| Sajwan et al. | A Review on the Effectiveness of Machine Learning and Deep Learning Algorithms for Collaborative Robot |
| Tanwani et al. | Generalizing robot imitation learning with invariant hidden semi-Markov models |
| Gams et al. | Manipulation learning on humanoid robots |
| Afzali et al. | A modified convergence DDPG algorithm for robotic manipulation |
| Deng et al. | Learning visual-based deformable object rearrangement with local graph neural networks |
| Shi et al. | Efficient hierarchical policy network with fuzzy rules |
| Zhang et al. | Multimodal embodied attribute learning by robots for object-centric action policies |
| Arora et al. | I2RL: online inverse reinforcement learning under occlusion |
| Qian et al. | Data-driven physical law learning model for chaotic robot dynamics prediction |