Belkhale et al., 2021 - Google Patents
Model-based meta-reinforcement learning for flight with suspended payloads (Belkhale et al., 2021)
- Document ID: 13256613271192241470
- Authors: Belkhale S, Li R, Kahn G, McAllister R, Calandra R, Levine S
- Publication year: 2021
- Publication venue: IEEE Robotics and Automation Letters
Snippet
Transporting suspended payloads is challenging for autonomous aerial vehicles because the payload can cause significant and unpredictable changes to the robot's dynamics. These changes can lead to suboptimal flight performance or even catastrophic failure. Although …
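The title points to model-based meta-reinforcement learning with online adaptation to the payload's dynamics. The sketch below is only an illustration of that general recipe, not the authors' implementation: a learned dynamics model conditioned on a latent payload variable, adapted from a few recent transitions and then used by a simple random-shooting planner. All names (`LatentDynamicsModel`, `adapt_latent`, `plan_random_shooting`), dimensions, and the cost function are assumptions made for the example.

```python
# Illustrative sketch only: latent-variable dynamics model with online
# adaptation and random-shooting planning. Names, sizes, and the cost
# function are assumptions for this example, not taken from the paper.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM = 12, 4, 8  # assumed quadrotor/payload sizes


class LatentDynamicsModel(nn.Module):
    """Predicts the next state from (state, action, latent payload variable z)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + LATENT_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, STATE_DIM),
        )

    def forward(self, state, action, z):
        return self.net(torch.cat([state, action, z], dim=-1))


def adapt_latent(model, states, actions, next_states, steps=50, lr=1e-2):
    """Infer z from a small batch of recent transitions by minimizing one-step
    prediction error; the (meta-trained) model weights stay frozen."""
    for p in model.parameters():
        p.requires_grad_(False)
    z = torch.zeros(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        pred = model(states, actions, z.expand(states.shape[0], -1))
        loss = torch.mean((pred - next_states) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()


def plan_random_shooting(model, z, state, horizon=10, n_candidates=256):
    """Return the first action of the lowest-cost random action sequence under
    the adapted model; the cost here is a placeholder (distance to origin)."""
    actions = torch.rand(n_candidates, horizon, ACTION_DIM) * 2 - 1
    s = state.expand(n_candidates, -1)
    zc = z.expand(n_candidates, -1)
    cost = torch.zeros(n_candidates)
    with torch.no_grad():
        for t in range(horizon):
            s = model(s, actions[:, t], zc)
            cost += torch.sum(s[:, :3] ** 2, dim=-1)  # penalize position error
    return actions[torch.argmin(cost), 0]


if __name__ == "__main__":
    model = LatentDynamicsModel()  # in practice, meta-trained across payloads
    # Random tensors stand in for a few logged (state, action, next_state) tuples.
    s = torch.randn(32, STATE_DIM)
    a = torch.randn(32, ACTION_DIM)
    s_next = torch.randn(32, STATE_DIM)
    z = adapt_latent(model, s, a, s_next)
    action = plan_random_shooting(model, z, torch.randn(1, STATE_DIM))
    print("chosen action:", action)
```

Adapting only the low-dimensional latent variable, rather than all model weights, is what keeps the online update cheap enough to run inside the control loop; the placeholder cost and dimensions would be replaced by task-specific ones.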
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric involving the use of models or simulators
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B2219/00—Program-control systems
- G05B2219/30—Nc systems
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B13/00—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion
- G05B13/02—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric
- G05B13/0265—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion electric the criterion being a learning criterion
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05B—CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
- G05B19/00—Programme-control systems
- G05B19/02—Programme-control systems electric
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/0011—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement
- G05D1/0044—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot associated with a remote control arrangement by providing the operator with a computer generated representation of the environment of the vehicle, e.g. virtual reality, maps
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computer systems utilising knowledge based models
- G06N5/04—Inference methods or devices
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/10—Simultaneous control of position or course in three dimensions
- G05D1/101—Simultaneous control of position or course in three dimensions specially adapted for aircraft
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0287—Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
Similar Documents
| Publication | Title |
|---|---|
| Belkhale et al. | Model-based meta-reinforcement learning for flight with suspended payloads | |
| Thananjeyan et al. | Safety augmented value estimation from demonstrations (saved): Safe deep model-based rl for sparse cost robotic tasks | |
| EP3924884B1 (en) | System and method for robust optimization for trajectory-centric model-based reinforcement learning | |
| Chowdhary et al. | Bayesian nonparametric adaptive control using Gaussian processes | |
| Franceschetti et al. | Robotic arm control and task training through deep reinforcement learning | |
| US11334085B2 (en) | Method to optimize robot motion planning using deep learning | |
| Chatzilygeroudis et al. | Black-box data-efficient policy search for robotics | |
| Fu et al. | One-shot learning of manipulation skills with online dynamics adaptation and neural network priors | |
| Yang et al. | Path planning for single unmanned aerial vehicle by separately evolving waypoints | |
| Montiel et al. | Optimal path planning generation for mobile robots using parallel evolutionary artificial potential field | |
| JP6884685B2 (en) | Control devices, unmanned systems, control methods and programs | |
| Platt et al. | Efficient planning in non-gaussian belief spaces and its application to robot grasping | |
| Lambert et al. | Learning accurate long-term dynamics for model-based reinforcement learning | |
| Moldovan et al. | Optimism-driven exploration for nonlinear systems | |
| JP7480670B2 (en) | MOTION PLANNING APPARATUS, MOTION PLANNING METHOD, AND MOTION PLANNING PROGRAM | |
| US20210402598A1 (en) | Robot control device, robot control method, and robot control program | |
| Allamaraju et al. | Human aware UAS path planning in urban environments using nonstationary MDPs | |
| Omidshafiei et al. | Graph-based cross entropy method for solving multi-robot decentralized POMDPs | |
| Zhu et al. | Adaptive online distributed optimal control of very-large-scale robotic systems | |
| Power et al. | Keep it simple: Data-efficient learning for controlling complex systems with simple models | |
| Cavalcante et al. | Planning and evaluation of UAV mission planner for intralogistics problems | |
| KR20230171962A (en) | Systems, devices and methods for developing robot autonomy | |
| Kaushik et al. | Safeapt: Safe simulation-to-real robot learning using diverse policies learned in simulation | |
| Rafieisakhaei et al. | Feedback motion planning under non-gaussian uncertainty and non-convex state constraints | |
| CN118752492A (en) | Motion control method for multi-task and multi-robot based on deep reinforcement learning |