Unsupervised learning of state representations for multiple tasks

Abstract

We present an approach for learning state representations in multi-task reinforcement learning. Our method learns multiple low-dimensional state representations from raw observations in an unsupervised fashion, without any knowledge of which task is being executed or of how many tasks are involved. The method is based on a gated neural network architecture, trained with an extension of the learning with robotic priors objective. In simulated experiments, we show that our method learns better state representations for reinforcement learning, and we analyze why and when it manages to do so.
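To make the gated architecture concrete, below is a minimal sketch of how such a model could be set up. It is an illustrative assumption, not the paper's actual implementation: a task-detector branch produces a soft gate over `n_tasks` candidate encoders, the gated mixture is taken as the learned state, and one robotic-priors-style loss term (temporal coherence) is shown as an example. All names (`GatedStateEncoder`, `temporal_coherence_loss`) and dimensions are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedStateEncoder(nn.Module):
    """Illustrative gated architecture: a task detector softly selects
    among several task-specific encoders; the mixture is the state."""

    def __init__(self, obs_dim: int, state_dim: int = 2, n_tasks: int = 2, hidden: int = 64):
        super().__init__()
        # One small encoder per candidate task, mapping observations to a state.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, state_dim))
            for _ in range(n_tasks)
        )
        # Task detector: soft assignment of each observation to one encoder.
        self.task_detector = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_tasks)
        )

    def forward(self, obs: torch.Tensor):
        gate = F.softmax(self.task_detector(obs), dim=-1)              # (B, K)
        states = torch.stack([enc(obs) for enc in self.encoders], 1)   # (B, K, D)
        state = (gate.unsqueeze(-1) * states).sum(dim=1)               # gated mixture (B, D)
        return state, gate


def temporal_coherence_loss(s_t: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
    """One robotic-priors-style term: consecutive states should change slowly."""
    return (s_next - s_t).pow(2).sum(dim=-1).mean()
```

Because the gate is learned jointly with the encoders and no task labels are used, this kind of setup can, in principle, separate observations from different tasks without being told which task is active or how many tasks exist.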

Publication
Deep Learning Workshop at the Conference on Neural Information Processing Systems (NIPS)
Antonin Raffin
Research Engineer in Robotics and Machine Learning

Robots. Machine Learning. Blues Dance.