We consider the problem of reinforcement learning under safety requirements, in which an agent is trained to complete a given task, typically formalized as the maximization of a reward signal over time, while concurrently avoiding undesirable actions or states, associated with lower rewards or penalties. Constructing and balancing the different reward components can be difficult in the presence of multiple objectives, yet is crucial for producing a satisfactory policy. For example, when reaching a target while avoiding obstacles, low collision penalties can lead to reckless movements, while high penalties can discourage exploration. To circumvent this limitation, we examine the safety outcomes of past actions to estimate which actions are acceptable and which should be avoided in the future. We then actively reshape the action space of the agent during reinforcement learning, so that reward-driven exploration is constrained within safety limits. We propose an algorithm that learns such safety constraints in parallel with reinforcement learning and demonstrate its effectiveness in terms of both task completion and training time.
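To give a rough sense of how past experience can restrict exploration, the sketch below (plain Python/NumPy) filters candidate exploratory actions against transitions previously labeled unsafe in similar states. It is not the CERES algorithm described in the paper; the class name, distance thresholds, and rejection-sampling scheme are hypothetical simplifications meant only to illustrate the general idea of constraining the action space from experience.

import numpy as np

rng = np.random.default_rng(0)

class ExperienceShapedActionSpace:
    """Toy illustration: restrict exploration using past safety outcomes.

    Stores (state, action, safe?) triples and, for a new state, rejects
    candidate actions that lie close to actions previously labeled unsafe
    in similar states. Illustrative simplification, not the paper's method.
    """

    def __init__(self, state_radius=0.5, action_radius=0.2):
        self.memory = []                 # list of (state, action, safe_flag)
        self.state_radius = state_radius # how close two states must be to compare
        self.action_radius = action_radius

    def record(self, state, action, safe):
        # Label a past transition as safe or unsafe after observing its outcome.
        self.memory.append((np.asarray(state), np.asarray(action), bool(safe)))

    def is_allowed(self, state, action):
        # Disallow an action if a similar action was unsafe in a similar state.
        for s, a, safe in self.memory:
            if (not safe
                    and np.linalg.norm(s - state) < self.state_radius
                    and np.linalg.norm(a - action) < self.action_radius):
                return False
        return True

    def sample(self, state, propose, max_tries=50):
        # Draw exploratory actions from `propose()` until one passes the
        # safety filter, falling back to the last proposal if none does.
        for _ in range(max_tries):
            action = propose()
            if self.is_allowed(state, action):
                return action
        return action

if __name__ == "__main__":
    shaper = ExperienceShapedActionSpace()
    state = np.array([0.0, 0.0])

    # Suppose a previous rollout found that moving right from the origin
    # caused a collision: label that action unsafe.
    shaper.record(state, np.array([1.0, 0.0]), safe=False)

    # Exploration near the origin now avoids actions close to [1, 0].
    propose = lambda: rng.uniform(-1.0, 1.0, size=2)
    action = shaper.sample(state, propose)
    print("sampled action:", action, "allowed:", shaper.is_allowed(state, action))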

Figure: Unconstrained random exploration vs. constrained random exploration.

Reference

[pdf], [code]

@article{arxiv:pham:2018:ceres,
  title={Constrained Exploration and Recovery from Experience Shaping},
  author={Pham, Tu-Hoa and De Magistris, Giovanni and Agravante, Don Joven and Chaudhury, Subhajit and Munawar, Asim and Tachibana, Ryuki},
  journal={arXiv preprint arXiv:1809.08925},
  year={2018}
}