Tutorial: Tools for Robotic Reinforcement Learning

Hands-on RL for Robotics with EAGERx and Stable-Baselines3

ICRA 2022, 8:30 AM - 5:20 PM (UTC−4), May 23, 2022, Philadelphia (PA), USA - Room 115A

Motivation

Reinforcement learning (RL) methods have received much attention due to impressive results in many robotic applications. While RL promises learning-based control of near-optimal behaviors in theory, successful learning can elude practitioners due to various implementation challenges. Even if the best-suited learning method is selected, learning performance can still disappoint due to poorly chosen hyperparameters or an unreliable implementation of the algorithm. Furthermore, a learning task can be made unnecessarily hard by an incorrect task specification.
This full-day tutorial points out these practical pitfalls and introduces the audience to tools for robotic RL that will aid roboticists in successfully solving robotic learning tasks, both in simulation and in the real world.

Goals

We will cover the use of the Engine Agnostic Gym Environment for Robotics (EAGERx) to define and create tasks that work both in simulation and on a real robot, and then learn to use the Stable-Baselines3 (SB3) library to solve them with state-of-the-art algorithms, following best practices.
This tutorial will cover: creating tasks in EAGERx, basic usage of SB3, automatic hyperparameter optimization, and managing RL experiments.

Requirements

Basic knowledge of reinforcement learning and Python programming is required.

Videos

A replay of the tutorial can be found on YouTube.

Discord server

Join the Discord server

Schedule

May 23, 8:30 AM - 5:20 PM (UTC−4), Room 115A

Time Talk Comments
8:30-8:35 Introduction
8:35-9:30 Getting Started with Gym and RL in practice Presenter: Antonin Raffin
9:30-10:30 Accelerating physics simulators for Robotics Reinforcement Learning Invited Speaker: Erwin Coumans
10:45-11:15 Hands-on Session with Gym and SB3 with colab notebooks
11:15-11:25 Break
11:25-11:55 Getting Started with EAGERx Part I - Part II Presenter: Jelle Luijkx
11:55-12:30 Hands-on Session with EAGERx with colab notebooks
12:30-13:30 Lunch Break
13:30-14:30 safe-control-gym: a Unified Benchmark Suite for Safe Learning-based Control and Reinforcement Learning Invited Speaker: Angela Schoellig
14:30-15:15 Automatic Hyperparameter Optimization Presenter: Antonin Raffin
15:15-16:00 Hyperparameter Tuning with Optuna with colab notebooks
16:00-16:15 Break
16:15-17:15 EAGERx Advanced usage with colab notebooks
17:15-17:20 Closing remarks

Speakers

Erwin Coumans @erwincoumans
Erwin Coumans is the creator of the Bullet physics engine, a former member of the Google Brain team, and now works on the NVIDIA Omniverse team. His interests include real-time physics simulation research and development, with a focus on robotics and machine learning.

Angela Schoellig @angelaschoellig
Angela Schoellig is an Associate Professor at the University of Toronto Institute for Aerospace Studies and a Faculty Member of the Vector Institute for Artificial Intelligence. She conducts research at the intersection of robotics, controls, and machine learning. Her goal is to enhance the performance, safety, and autonomy of robots by enabling them to learn from past experiments and from each other.

Organizers

Jelle Luijkx @jelledouwe
Jelle is a PhD candidate in the Cognitive Robotics department of Delft University of Technology. He works on deep learning tools for robot control within the OpenDR project and is a co-creator of the Engine Agnostic Gym Environment for Robotics (EAGERx) toolkit.

Bas Van der Heijden @bheijden
Bas is a PhD candidate at TU Delft working on robotics and reinforcement learning. He is a co-creator of the Engine Agnostic Gym Environment for Robotics (EAGERx) toolkit.

Antonin Raffin @araffin2
Antonin Raffin is a Research Engineer in Robotics and Machine Learning at the German Aerospace Center (DLR). He previously worked on state representation learning in the ENSTA robotics lab (U2IS), where he co-created the Stable-Baselines library with Ashley Hill. His research now focuses on applying reinforcement learning directly to real robots, for which he continues to maintain the Stable-Baselines3 library.

Jens Kober
Jens Kober is an associate professor in the Cognitive Robotics department, 3mE, TU Delft, Netherlands. He worked as a postdoctoral scholar jointly at the CoR-Lab, Bielefeld University, Germany, and at the Honda Research Institute Europe, Germany. He graduated in 2012 with a PhD degree in engineering from TU Darmstadt and the MPI for Intelligent Systems. For his research he received the Georges Giralt PhD Award for the best PhD thesis in robotics in Europe and the 2018 IEEE RAS Early Academic Career Award, and he has received an ERC Starting Grant. His research interests include motor skill learning, (deep) reinforcement learning, imitation learning, interactive learning, and machine learning for control.

Acknowledgements

The tutorial is supported by the EU H2020 projects VERtical Innovation in the Domain of Robotics Enabled by Artificial intelligence Methods and OpenDR.

The tutorial is also supported by the IEEE RAS Technical Committee on Robot Learning.