Kakish, 2021 - Google Patents
Robotic swarm control using deep reinforcement learning strategies based on mean-field models
- Document ID: 16999143788579883436
- Author: Kakish Z
- Publication year: 2021
Snippet
As technological advancements in silicon, sensors, and actuation continue, the development of robotic swarms is shifting from the domain of science fiction to reality. Many swarm applications, such as environmental monitoring, precision agriculture, disaster response …
Classifications
- G05D1/0268—Control of position or course in two dimensions specially adapted to land vehicles using internal positioning means
- G05D1/0295—Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling; fleet control by at least one leading vehicle of the fleet
- G05B13/04—Adaptive control systems, i.e. systems automatically adjusting themselves to have a performance which is optimum according to some preassigned criterion; electric, involving the use of models or simulators
- G05D1/0044—Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot, associated with a remote control arrangement by providing the operator with a computer generated representation of the environment of the vehicle, e.g. virtual reality, maps
- G05B13/027—Adaptive control systems; electric, the criterion being a learning criterion using neural networks only
- G06N99/005—Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
- G06N3/02—Computer systems based on biological models using neural network models
- G05B2219/30—Program-control systems; Nc systems
- G06N5/00—Computer systems utilising knowledge based models
Similar Documents
| Publication | Title |
|---|---|
| Florence et al. | Integrated perception and control at high speed: Evaluating collision avoidance maneuvers without maps |
| Mavrommati et al. | Real-time area coverage and target localization using receding-horizon ergodic exploration |
| Ergezer et al. | 3D path planning for multiple UAVs for maximum information collection |
| González-Sieira et al. | Autonomous navigation for UAVs managing motion and sensing uncertainty |
| Pairet et al. | Online mapping and motion planning under uncertainty for safe navigation in unknown environments |
| Gómez et al. | Real-time stochastic optimal control for multi-agent quadrotor systems |
| MahmoudZadeh et al. | A hierarchal planning framework for AUV mission management in a spatiotemporal varying ocean |
| Chronis et al. | Dynamic navigation in unconstrained environments using reinforcement learning algorithms |
| Mansouri et al. | A unified NMPC scheme for MAVs navigation with 3D collision avoidance under position uncertainty |
| Lei et al. | Bio-inspired intelligence-based multiagent navigation with safety-aware considerations |
| Theocharous | Hierarchical learning and planning in partially observable Markov decision processes |
| Spasojevic et al. | Active collaborative localization in heterogeneous robot teams |
| Mettler et al. | Agile autonomous guidance using spatial value functions |
| Srikanthan et al. | A data-driven approach to synthesizing dynamics-aware trajectories for underactuated robotic systems |
| Kakish | Robotic swarm control using deep reinforcement learning strategies based on mean-field models |
| Ichter et al. | Perception-aware motion planning via multiobjective search on GPUs |
| Vanegas Alvarez | Uncertainty based online planning for UAV missions in GPS-denied and cluttered environments |
| Jardine | A reinforcement learning approach to predictive control design: autonomous vehicle applications |
| Nagaraj et al. | A concise introduction to reinforcement learning in robotics |
| Kim et al. | Joint detection and tracking of boundaries using cooperative mobile sensor networks |
| Huang | Control Design and Motion Planning for Unmanned Aerial Vehicles: A Data-Driven Scheme |
| Ko et al. | Autonomous Flight of UAV in Complex Multi-Obstacle Environment Using Data-Driven and Vision-Based Deep Reinforcement Learning and AirSim |
| Garg | Reinforcement learning based motion planner and trajectory tracker for unmanned aerial systems |
| Abdouni et al. | Challenges and Constraints in Trajectory Planning for Autonomous Robots |
| Bansal | Safe and Data-Efficient Learning for Robotics: A Control Theoretic Approach |