The Feasibility of Constrained Reinforcement Learning Algorithms: A Tutorial Study

Yujie Yang*1, Zhilong Zheng*1, Shengbo Eben Li†1, Masayoshi Tomizuka2, Changliu Liu3

*Equal contribution
†Corresponding author. Email: lishbo@tsinghua.edu.cn
1School of Vehicle and Mobility and State Key Lab of Intelligent Green Vehicle and Mobility, Tsinghua University
2Department of Mechanical Engineering, University of California, Berkeley
3Robotics Institute, Carnegie Mellon University

paper | code


Abstract

Satisfying safety constraints is a priority concern when solving optimal control problems (OCPs). Due to the existence of the infeasibility phenomenon, where a constraint-satisfying solution cannot be found, it is necessary to identify a feasible region before implementing a policy. Existing feasibility theories built for model predictive control (MPC) consider only the feasibility of the optimal policy. However, reinforcement learning (RL), another important control method, solves for the optimal policy iteratively, which produces a series of non-optimal intermediate policies. Feasibility analysis of these non-optimal policies is also necessary for iteratively improving constraint satisfaction, but it is not available under existing MPC feasibility theories. This paper proposes a feasibility theory that applies to both MPC and RL by filling in the missing part: feasibility analysis of an arbitrary policy. The basis of our theory is to decouple policy solving and implementation into two temporal domains: the virtual-time domain and the real-time domain. This allows us to separately define initial and endless, state and policy feasibility, together with their corresponding feasible regions. Based on these definitions, we analyze the containment relationships between different feasible regions, which enables us to describe the feasible region of an arbitrary policy. We further provide virtual-time constraint design rules along with a practical design tool called the feasibility function, which helps to achieve the maximum feasible region. We review most existing constraint formulations and point out that they are essentially applications of feasibility functions in different forms. Finally, we demonstrate our feasibility theory by visualizing the different feasible regions under both MPC and RL policies in an emergency braking control task.
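To make the notion of a feasible region concrete, here is a minimal sketch (not from the paper) of the emergency braking task the abstract refers to. It assumes double-integrator dynamics: the state is (d, v), the distance to the obstacle and the ego speed; the control is the acceleration u in [-a_max, 0]; and the pointwise constraint is d >= 0. All parameter values are illustrative assumptions.

```python
a_max = 5.0   # assumed maximum braking deceleration (m/s^2)
dt = 0.01     # Euler integration step (s)

def endlessly_feasible(d, v):
    """Analytic feasible-region test for the pointwise constraint d >= 0.

    Full braking stops the vehicle within distance v^2 / (2 * a_max), so a
    state is endlessly feasible iff d >= v^2 / (2 * a_max).
    """
    return d >= v * v / (2.0 * a_max)

def survives_full_braking(d, v):
    """Roll out the maximum-braking policy; check d >= 0 at every step."""
    while v > 0.0:
        d -= v * dt                   # d_dot = -v
        v = max(v - a_max * dt, 0.0)  # v_dot = u = -a_max
        if d < 0.0:
            return False
    return True
```

For instance, (d, v) = (40, 15) lies inside this feasible region (the stopping distance is 22.5 m), while (20, 15) lies outside it: no admissible policy, optimal or not, can keep the latter state constraint-satisfying.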

Containment Relationships of Feasible Regions

Example of Emergency Braking Control


MPC with pointwise constraint

RL with pointwise constraint

MPC with control barrier function (CBF) constraint

RL with CBF constraint

MPC with safety index (SI) constraint

RL with SI constraint

MPC with Hamilton-Jacobi (HJ) reachability constraint

RL with HJ reachability constraint
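The CBF-constrained variants above can be illustrated with a hedged sketch (not the paper's implementation) of a discrete-time CBF-style safety filter for the same braking task. It reuses the stopping-distance margin h(d, v) = d - v^2 / (2 * a_max) as the barrier, so h >= 0 marks the feasible region, and the filter enforces the decay condition h_dot >= -lam * h along trajectories; dynamics and parameters are assumed.

```python
a_max = 5.0   # assumed maximum braking deceleration (m/s^2)
lam = 1.0     # assumed CBF decay rate
dt = 0.01     # Euler integration step (s)

def barrier(d, v):
    """Barrier candidate: signed distance margin to the stopping boundary."""
    return d - v * v / (2.0 * a_max)

def cbf_filter(d, v, u_des):
    """Clip the desired acceleration so that h_dot >= -lam * h.

    With d_dot = -v and v_dot = u, h_dot = -v - (v / a_max) * u, so the CBF
    condition is equivalent to u <= a_max * (lam * h - v) / v for v > 0.
    """
    if v <= 1e-6:
        return u_des  # constraint inactive at standstill (sketch simplification)
    u_bound = a_max * (lam * barrier(d, v) - v) / v
    return max(min(u_des, u_bound), -a_max)

def rollout(d, v, steps=2000, use_filter=True):
    """Cruise (u_des = 0) with or without the filter; track the worst h."""
    h_min = barrier(d, v)
    for _ in range(steps):
        u = cbf_filter(d, v, 0.0) if use_filter else 0.0
        d -= v * dt
        v = max(v + u * dt, 0.0)
        h_min = min(h_min, barrier(d, v))
    return d, h_min
```

Starting from the feasible state (d, v) = (40, 15), the unfiltered cruising policy drives d negative, while the filtered policy brakes just enough to keep h non-negative up to discretization error, so the real-time pointwise constraint is preserved.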

SI synthesis via evolutionary optimization

SI synthesis via reinforcement learning

SI synthesis via adversarial optimization
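The synthesis variants above can be sketched with a hedged toy example (not the paper's method) of safety-index synthesis by random search, in the spirit of the evolutionary approach. It assumes a linear safety index phi(d, v) = sigma - d + k * v for the braking task, with phi <= 0 treated as safe: synthesis picks parameters (k, sigma) such that every state admits a control u in [-a_max, 0] keeping phi non-increasing, while the certified region {phi <= 0} stays as large as possible. All bounds and the index form are illustrative assumptions.

```python
import random

a_max, v_max, d_max = 5.0, 20.0, 50.0  # assumed control and state bounds

def synthesizable(k, sigma):
    """phi_dot = v + k * u, so min over admissible u is v - k * a_max.

    Requiring this to be <= 0 for every v in [0, v_max] collapses to the
    closed-form condition k >= v_max / a_max (sigma does not enter).
    """
    return v_max - k * a_max <= 0.0

def certified_fraction(k, sigma, n=50):
    """Grid estimate of the fraction of the state box with phi <= 0."""
    pts = [(i * d_max / n, j * v_max / n)
           for i in range(n + 1) for j in range(n + 1)]
    return sum(1 for d, v in pts if sigma - d + k * v <= 0.0) / len(pts)

# Random (evolutionary-style) search over the index parameters.
random.seed(0)
best, best_score = None, -1.0
for _ in range(500):
    k, sigma = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    if not synthesizable(k, sigma):
        continue  # index cannot be enforced by any admissible control
    score = certified_fraction(k, sigma)
    if score > best_score:
        best, best_score = (k, sigma), score
```

The search converges toward small sigma and k near v_max / a_max: the least conservative index that is still enforceable by the bounded braking control, which matches the virtual-time constraint design goal of maximizing the feasible region.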