Robots. Machine Learning. Blues Dance.
A set of improved implementations of reinforcement learning algorithms in PyTorch.
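For context, training an agent with such a library takes only a few lines. A minimal sketch, assuming a recent Stable-Baselines3 release built on Gymnasium:

```python
# Minimal sketch: train and roll out a PPO agent with Stable-Baselines3.
# Assumes stable-baselines3 and gymnasium are installed.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=50_000)

# Run the trained policy for a few steps.
obs, info = env.reset()
for _ in range(200):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```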
Learning to drive smoothly in minutes using reinforcement learning on a Donkey Car.
A collection of 70+ pre-trained RL agents using Stable Baselines.
S-RL Toolbox: Reinforcement Learning (RL) and State Representation Learning (SRL) for Robotics.
A fork of OpenAI Baselines with implementations of reinforcement learning algorithms.
Autonomous racing robot with an Arduino, a Raspberry Pi and a Pi Camera.
A simple and robust serial communication protocol. Implementations in C (Arduino), C++, Python and Rust.
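A common design for such a protocol is to frame each message as a one-byte order code followed by fixed-size little-endian payloads. A hedged sketch of what sending a command could look like in Python over pyserial (the framing, `Order` values, and helper functions below are illustrative assumptions, not necessarily the library's actual API):

```python
# Hedged sketch of an order-based serial protocol; Order values and helpers
# are hypothetical illustrations, not the library's actual API.
import struct
from enum import IntEnum

import serial  # pyserial

class Order(IntEnum):
    HELLO = 0
    MOTOR = 1
    SERVO = 2

def write_order(conn: serial.Serial, order: Order) -> None:
    # One order byte identifies the message type.
    conn.write(struct.pack("<B", order))

def write_i16(conn: serial.Serial, value: int) -> None:
    # Fixed-size little-endian payload: nothing to escape, trivial to parse
    # on a microcontroller.
    conn.write(struct.pack("<h", value))

with serial.Serial("/dev/ttyACM0", baudrate=115200, timeout=1) as conn:
    write_order(conn, Order.MOTOR)
    write_i16(conn, 37)  # hypothetical motor speed command
```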
Spring-based actuators in legged locomotion provide energy efficiency and improved performance, but increase the difficulty of controller design. Whereas previous work has focused on extensive modeling and simulation to find optimal controllers for such systems, we propose to learn model-free controllers directly on the real robot. In our approach, gaits are first synthesized by central pattern generators (CPGs), whose parameters are optimized to quickly obtain an open-loop controller that achieves efficient locomotion. Then, to make that controller more robust and further improve performance, we use reinforcement learning to close the loop and learn corrective actions on top of the CPGs. We evaluate the proposed approach on DLR’s elastic quadruped bert. Our results in learning trotting and pronking gaits show that exploitation of the spring actuator dynamics emerges naturally from optimizing for dynamic motions, yielding high-performing locomotion despite being model-free. The whole process takes no more than 1.5 hours on the real robot and results in natural-looking gaits.
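A conceptual sketch of this control structure (illustrative only, not the paper's code or gait parameters): a phase-oscillator CPG produces nominal joint targets, and a learned policy adds small, bounded corrections on top.

```python
# Conceptual sketch of CPG + residual RL control (not the paper's code;
# frequencies, amplitudes, and phase offsets are illustrative).
import numpy as np

class CPG:
    """One phase oscillator per leg, with fixed offsets for a trot gait."""
    def __init__(self, freq_hz=2.0, amplitude=0.3,
                 phase_offsets=(0.0, np.pi, np.pi, 0.0)):
        self.freq = freq_hz
        self.amp = amplitude
        self.phases = np.asarray(phase_offsets)

    def target(self, t):
        # Open-loop nominal joint target per leg at time t.
        return self.amp * np.sin(2.0 * np.pi * self.freq * t + self.phases)

def control_step(cpg, policy, obs, t, residual_scale=0.1):
    """Close the loop: add a bounded learned correction to the CPG target."""
    correction = residual_scale * np.clip(policy(obs), -1.0, 1.0)
    return cpg.target(t) + correction

# Example with a dummy (untrained) policy:
cpg = CPG()
dummy_policy = lambda obs: np.zeros(4)
joint_targets = control_step(cpg, dummy_policy, obs=None, t=0.25)
```

In the approach described above, the open-loop CPG parameters are optimized first; reinforcement learning then trains the corrective policy on top.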
We extend the original state-dependent exploration (SDE) to apply deep reinforcement learning algorithms directly on real robots. The resulting method, gSDE, yields competitive results in simulation but outperforms unstructured exploration on the real robot.
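A conceptual sketch of the idea (not the paper's implementation): instead of drawing independent Gaussian action noise at every step, SDE samples a noise matrix once, or every few steps, and derives the perturbation from the current state (gSDE generalizes this to learned policy features), so exploration stays smooth enough for real hardware.

```python
# Conceptual sketch of state-dependent exploration (not the paper's code).
# The noise matrix theta_eps is resampled only occasionally; between
# resamplings, the same state always produces the same perturbation.
import numpy as np

state_dim, action_dim = 3, 1
sigma = 0.5  # noise scale; learned in the full method, fixed here

rng = np.random.default_rng(0)
theta_eps = rng.normal(0.0, sigma, size=(action_dim, state_dim))

def exploration_noise(state):
    # Deterministic in the state for a fixed theta_eps -> smooth actions.
    return theta_eps @ state

def act(policy_mean, state):
    return policy_mean(state) + exploration_noise(state)

# Example with a dummy linear policy:
state = np.array([0.1, -0.2, 0.05])
action = act(lambda s: np.tanh(s[:1]), state)
```

In practice, gSDE ships with Stable-Baselines3 and can be enabled by passing `use_sde=True` to algorithms such as SAC or PPO.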