Paper on arXiv | Live demo (browser) | Documentation | Zoo | Studio
Clone this repo, then build a Zoo example:

    g++ -std=c++17 -Ofast -I include src/rl/zoo/l2f/sac.cpp

Run it with `./a.out 1337` (the number is the seed), then run `python3 -m http.server` to visualize the results. Open http://localhost:8000 and navigate to the ExTrack UI to watch the quadrotor fly.
- macOS: Append `-framework Accelerate -DRL_TOOLS_BACKEND_ENABLE_ACCELERATE` for fast training (~4s on M3).
- Ubuntu: Install OpenBLAS using `apt install libopenblas-dev` and append `-lopenblas -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS` (~6s on Zen 5); the full command is shown below.
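For example, putting the Ubuntu steps together, the full build-and-run sequence looks like this (the Zoo command from above with the OpenBLAS flags appended):

```
sudo apt install libopenblas-dev
g++ -std=c++17 -Ofast -I include src/rl/zoo/l2f/sac.cpp -lopenblas -DRL_TOOLS_BACKEND_ENABLE_OPENBLAS
./a.out 1337
```

The macOS variant with the Accelerate flags is analogous.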

| Algorithm | Examples |
| --- | --- |
| TD3 | Pendulum, Racing Car, MuJoCo Ant-v4, Acrobot |
| PPO | Pendulum, Racing Car, MuJoCo Ant-v4 (CPU), MuJoCo Ant-v4 (CUDA) |
| Multi-Agent PPO | Bottleneck |
| SAC | Pendulum (CPU), Pendulum (CUDA), Acrobot |
- Learning to Fly in Seconds: GitHub / arXiv / YouTube / IEEE Spectrum
- Data-Driven System Identification of Quadrotors Subject to Motor Delays: GitHub / arXiv / YouTube / Project Page
A simple example of how to implement your own environment and train a policy using PPO:

Clone and check out:

    git clone https://github.com/rl-tools/example
    cd example
    git submodule update --init external/rl_tools

Build and run:

    mkdir build
    cd build
    cmake .. -DCMAKE_BUILD_TYPE=Release
    cmake --build .
    ./my_pendulum

Note that this example has no dependencies and should work on any system with CMake and a C++17 compiler.
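To give a rough idea of what defining an environment looks like, the sketch below shows the general shape: a plain struct holding a `State` type plus free functions for sampling an initial state and stepping the dynamics. The names and signatures here are made up for illustration and heavily simplified (toy pendulum dynamics, no parameters or observation function); the actual interface is in the example repository above.

```cpp
// Illustrative sketch only: hypothetical names, not the exact RLtools interface.
#include <cmath>
#include <random>

struct MyPendulum{
    struct State{
        double theta;     // angle [rad]
        double theta_dot; // angular velocity [rad/s]
    };
    static constexpr int OBSERVATION_DIM = 2;
    static constexpr int ACTION_DIM = 1;
};

// Sample a random initial state (hypothetical signature)
template <typename RNG>
void sample_initial_state(const MyPendulum&, MyPendulum::State& state, RNG& rng){
    std::uniform_real_distribution<double> angle(-3.14159, 3.14159);
    state.theta = angle(rng);
    state.theta_dot = 0;
}

// Advance the dynamics by one step and return the reward (hypothetical signature)
double step(const MyPendulum&, const MyPendulum::State& state, const double (&action)[MyPendulum::ACTION_DIM], MyPendulum::State& next_state){
    constexpr double dt = 0.05;
    next_state.theta_dot = state.theta_dot + dt * (action[0] - std::sin(state.theta));
    next_state.theta     = state.theta + dt * next_state.theta_dot;
    return -(next_state.theta * next_state.theta); // reward: stay close to upright
}
```

The actual definitions used by the PPO training in this example live in the rl-tools/example repository referenced above.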
The documentation is available at docs.rl.tools and consists of C++ notebooks. You can also run them locally to tinker around:

    docker run -p 8888:8888 rltools/documentation
After running the Docker container, open the link that is displayed in the CLI (http://127.0.0.1:8888/...) in your browser and enjoy tinkering!
| Chapter | Documentation | Interactive Notebook |
| --- | --- | --- |
| 0 | Overview | - |
| 1 | Containers | |
| 2 | Multiple Dispatch | |
| 3 | Deep Learning | |
| 4 | CPU Acceleration | |
| 5 | MNIST Classification | |
| 6 | Deep Reinforcement Learning | |
| 7 | The Loop Interface | |
| 8 | Custom Environment | |
| 9 | Python Interface | |
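The Multiple Dispatch chapter, for example, covers a central RLtools idea: the device is passed as the first argument and the compiler selects the matching overload, so optimized backends can be swapped in without changing call sites. The snippet below is a minimal, generic sketch of that pattern with made-up types; it is not the library's actual API:

```cpp
// Minimal sketch of compile-time (tag) dispatch; made-up types, not the RLtools API.
#include <iostream>

struct CPU{};          // generic CPU device tag
struct CPU_OPENBLAS{}; // device tag selecting an optimized backend

template <typename DEVICE>
void multiply(DEVICE&, const float* A, const float* B, float* C){
    std::cout << "generic fallback kernel\n";
    C[0] = A[0] * B[0]; // placeholder for a plain loop implementation
}

void multiply(CPU_OPENBLAS&, const float* A, const float* B, float* C){
    std::cout << "BLAS-backed kernel\n";
    C[0] = A[0] * B[0]; // placeholder for a call into the BLAS backend
}

int main(){
    float A[4]{}, B[4]{}, C[4]{};
    CPU cpu;
    CPU_OPENBLAS cpu_blas;
    multiply(cpu, A, B, C);      // resolves to the generic template
    multiply(cpu_blas, A, B, C); // resolves to the specialized overload at compile time
}
```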
To build the examples from source (either in Docker or natively), first clone the repository. Instead of cloning all submodules with `git clone --recursive`, which takes a lot of space and bandwidth, we recommend cloning the main repo, which contains all the standalone RLtools code, and then cloning the required sets of submodules later:

    git clone https://github.com/rl-tools/rl-tools.git rl_tools
There are five classes of submodules:
- External dependencies (in `external/`): e.g. HDF5 for checkpointing, TensorBoard for logging, or MuJoCo for the simulation of contact dynamics
- Examples/code for embedded platforms (in `embedded_platforms/`)
- Redistributable dependencies (in `redistributable/`)
- Test dependencies (in `tests/lib`)
- Test data (in `tests/data`)
These sets of submodules can be cloned incrementally and independently of each other. For most use cases (e.g. most of the Docker examples) you should clone the submodules for the external dependencies:

    cd rl_tools
    git submodule update --init --recursive -- external

The submodules for the embedded platforms, the redistributable binaries, and the test dependencies/data can be cloned in the same fashion, by replacing `external` with the appropriate folder from the list above.
Note: For the redistributable dependencies and the test data, make sure that `git-lfs` is installed (e.g. `sudo apt install git-lfs` on Ubuntu) and activated (`git lfs install`), otherwise only the metadata of the blobs is downloaded.
If you would like to take advantage of the features that require additional dependencies, but don't want to install them on your machine yet, you can use Docker. In our experiments on Linux using the NVIDIA container runtime we were able to achieve close to native performance.
Docker instructions & examples
While it depends on personal preferences, we believe that there are good reasons (ease of debugging, usage of IDEs, etc.) to run everything natively when developing. We make sure that the additional dependencies required for the full feature set are not invasive and are usually available through your system's package manager. We believe `sudo ./setup.sh` is harmful and should not exist. Instead, we make the setup explicit so that users maintain agency over their systems.
For maximum performance and malleability in research and development, we recommend running RLtools natively. Since RLtools itself is dependency-free, the most basic examples don't need any platform setup. However, for an improved experience, we support HDF5 checkpointing and TensorBoard logging as well as optimized BLAS libraries, which come with some system-dependent requirements.
Pro tip: Enable the `lldb` data formatters to get nicely formatted, human- (and machine-) readable output for `rl_tools::Matrix` and `rl_tools::Tensor` while debugging: Instructions to use .lldbinit for CLion & VS Code
We provide Python bindings that are available as `rltools` through PyPI (the pip package index). Note that using Python Gym environments can slow down training significantly compared to native RLtools environments.

    pip install rltools gymnasium
Usage:

    from rltools import SAC
    import gymnasium as gym
    from gymnasium.wrappers import RescaleAction

    seed = 0xf00d
    def env_factory():
        # create a fresh environment instance with the action space rescaled to [-1, 1]
        env = gym.make("Pendulum-v1")
        env = RescaleAction(env, -1, 1)
        env.reset(seed=seed)
        return env

    sac = SAC(env_factory)
    state = sac.State(seed)

    # step() advances training by one step and returns True once training is finished
    finished = False
    while not finished:
        finished = state.step()
You can find more details in the Python Interface documentation and in the rl-tools/python-interface repository.
We use `snake_case` for variables/instances, functions, and namespaces, and `PascalCase` for structs/classes. Furthermore, we use upper-case `SNAKE_CASE` for compile-time constants.
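For illustration, a short snippet with made-up names that follows these conventions:

```cpp
// Hypothetical names, purely to illustrate the naming conventions
namespace my_module{                              // namespaces: snake_case
    constexpr int BATCH_SIZE = 32;                // compile-time constants: SNAKE_CASE
    struct ReplayBuffer{ int num_samples = 0; };  // structs/classes: PascalCase
    void add_sample(ReplayBuffer& replay_buffer){ // functions & variables: snake_case
        replay_buffer.num_samples += 1;
    }
}
```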
When using RLtools in academic work, please cite our publication using the following BibTeX entry:

    @misc{eschmann2023rltools,
        title={RLtools: A Fast, Portable Deep Reinforcement Learning Library for Continuous Control},
        author={Jonas Eschmann and Dario Albani and Giuseppe Loianno},
        year={2023},
        eprint={2306.03530},
        archivePrefix={arXiv},
        primaryClass={cs.LG}
    }