András Béres and Bálint Gyires-Tóth

Enhancing Visual Domain Randomization with Real Images for Sim-to-Real Transfer

Training reinforcement learning algorithms requires a significant amount of experience, so it is common practice to train them in simulation even when they are intended to be applied in the real world. To improve robustness, camera-based agents can be trained with visual domain randomization, in which the visual characteristics of the simulator are changed between training episodes to make the agents more resilient to visual changes in their environment. In this work, we propose a method that incorporates real-world images alongside visual domain randomization in the reinforcement learning training procedure to further enhance performance after sim-to-real transfer. We train variational autoencoders on both real and simulated frames, and the representations produced by the encoders are then used to train reinforcement learning agents. The proposed method is evaluated against a variety of baselines, including direct and indirect visual domain randomization, end-to-end reinforcement learning, and supervised and unsupervised state representation learning. The method is tested in the Duckietown self-driving car environment, where a differential drive vehicle is controlled using only camera images. Our experimental results demonstrate that the proposed method improves the effectiveness and robustness of the learnt representations, achieving the best performance of all tested methods.
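
The pipeline summarized above can be illustrated with a minimal sketch, which is not the authors' implementation: a convolutional variational autoencoder is trained on mixed batches of domain-randomized simulator frames and unlabeled real camera frames, and its encoder output then serves as the low-dimensional observation for the reinforcement learning agent. The frame size, network layout, and loss weighting below are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    # Convolutional VAE over 64x64 RGB frames; layer sizes are illustrative.
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.fc_mu(h), self.fc_logvar(h)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        h = self.fc_dec(z).view(-1, 128, 8, 8)
        return self.decoder(h), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Pixel reconstruction loss plus KL divergence to the standard normal prior.
    recon_loss = F.mse_loss(recon, x, reduction="sum") / x.size(0)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp()) / x.size(0)
    return recon_loss + beta * kl

vae = ConvVAE()
optimizer = torch.optim.Adam(vae.parameters(), lr=1e-4)

# Random tensors stand in for domain-randomized simulator frames and unlabeled
# real camera frames; in practice these come from the simulator and a recorded
# real-world dataset.
sim_frames = torch.rand(32, 3, 64, 64)
real_frames = torch.rand(32, 3, 64, 64)

for step in range(10):
    batch = torch.cat([sim_frames, real_frames], dim=0)  # mixed sim/real batch
    recon, mu, logvar = vae(batch)
    loss = vae_loss(recon, batch, mu, logvar)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the encoder's mean vector serves as the compact observation
# on which the reinforcement learning agent is trained.
with torch.no_grad():
    obs_latent, _ = vae.encode(real_frames[:1])

Training the representation on both simulated and real frames is what distinguishes this setup from purely simulation-based domain randomization: the encoder sees real-world appearance during training, so the latent observations remain informative after sim-to-real transfer.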

Reference:

DOI: 10.36244/ICJ.2023.1.3


Please cite this paper as follows:

András Béres and Bálint Gyires-Tóth, "Enhancing Visual Domain Randomization with Real Images for Sim-to-Real Transfer", Infocommunications Journal, Vol. XV, No 1, March 2023, pp. 15-25., https://doi.org/10.36244/ICJ.2023.1.3
