Invited Session on Safe and Risk-Aware Planning and Control for Learning-Enabled Systems

ACC 2021 Invited Session ThA17

Thursday, May 27, 2021, 10:15-12:15, Track 17, Churchill 6

 

Organizers:

 

Mohammad Javad Khojasteh (Massachusetts Institute of Technology)

Chuchu Fan (Massachusetts Institute of Technology)

Nikolay Atanasov (University of California, San Diego)

Ali-akbar Agha-mohammadi (NASA-Jet Propulsion Laboratory, California Institute of Technology)

Enabling autonomous systems to operate safely and robustly in unstructured, dynamic real-world environments is an important yet challenging problem. Existing techniques, however, rely on delicate hand-designed dynamics models and safety rules that regularly fail to account for both the uncertainty and complexity of real-world operation. Recent advances in machine learning (ML) have inspired a major new research direction in which learning-augmented planning and control can accomplish complex tasks that were previously intractable.

For example, ML inference techniques based on Gaussian Processes (GPs) and Neural Networks (NNs) have been used to estimate endogenous system states and dynamics as well as exogenous environment states, allowing safety constraints to be enforced via control-theoretic techniques such as Control Barrier Functions (CBFs) and Model Predictive Control (MPC). However, these early approaches either lack uncertainty quantification in the estimation process or do not exploit it to make the safety constraints resilient to that uncertainty. The predictions of the learning algorithm, their generalization error, and the robustness of the resulting ML-based controller have received limited attention in the literature. While ML-based planning and control techniques promise to shape the future of autonomy, their safety and robustness require careful theoretical and empirical analysis.
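As a rough illustration of this pipeline (a minimal sketch only, not the method of any paper in this session), the following Python snippet pairs a hypothetical learned scalar dynamics model, returning a predictive mean and standard deviation, with a robustified CBF-style safety filter; the dynamics model, barrier function, and constants are illustrative placeholders, and a coarse grid search stands in for the quadratic program typically solved in practice.

import numpy as np

def learned_dynamics(x, u):
    """Stand-in for a GP/NN prediction of x_dot: returns (mean, std)."""
    mean = -0.5 * x + u            # hypothetical learned drift plus input term
    std = 0.1 * (1.0 + abs(x))     # hypothetical predictive uncertainty
    return mean, std

def barrier(x):
    """CBF h(x) >= 0 encodes the safe set, here |x| <= 2."""
    return 4.0 - x ** 2

def barrier_grad(x):
    return -2.0 * x

def safe_filter(x, u_nominal, alpha=1.0, kappa=2.0, u_bounds=(-3.0, 3.0)):
    """Pick the input closest to u_nominal satisfying a robustified CBF
    condition: dh/dx * (mean - kappa*std*sign(dh/dx)) + alpha*h(x) >= 0.
    A coarse grid search replaces the usual QP solve."""
    candidates = np.linspace(u_bounds[0], u_bounds[1], 121)
    best_u, best_cost = None, np.inf
    for u in candidates:
        mean, std = learned_dynamics(x, u)
        g = barrier_grad(x)
        worst_xdot = mean - kappa * std * np.sign(g)  # worst case within kappa*std
        if g * worst_xdot + alpha * barrier(x) >= 0.0:
            cost = (u - u_nominal) ** 2
            if cost < best_cost:
                best_u, best_cost = u, cost
    return best_u  # None if no candidate satisfies the robust constraint

x, u_nom = 1.8, 2.5  # state near the safety boundary, aggressive nominal input
print("filtered input:", safe_filter(x, u_nom))

In this toy setting the filter trims the nominal input so that, even under the assumed uncertainty margin, the barrier condition still holds near the boundary of the safe set.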

The objective of this invited session is to present the most recent results on, and cultivate new ideas for, safe and risk-aware planning and control when learning is a major design component. The session comprises papers that propose novel techniques for uncertainty-aware adaptation and learning for safe autonomy. These papers also provide worst-case or probabilistic safety bounds and investigate the conservatism and efficiency of the proposed safe learning-enabled controllers in high-data and low-data regimes. The broader goal of this session is to foster a community of researchers who blend multidisciplinary ideas, pose new questions, and build the foundations of this emerging scientific area. The session has a strong multidisciplinary flavor, combining ideas from control, planning, optimization, formal methods, and machine learning.

List of papers to be presented:

 

Rationally Inattentive Path-Planning Via RRT*
Pedram, Ali Reza (University of Texas at Austin)
Stefan, Jeb (University of Texas at Austin)
Funada, Riku (Tokyo Institute of Technology)
Tanaka, Takashi (University of Texas at Austin)  

Counterexample-Guided Synthesis of Perception Models and Control
Ghosh, Shromona (University of California, Berkeley)
Pant, Yash Vardhan (University of California, Berkeley)
Ravanbakhsh, Hadi (University of Colorado Boulder)
Seshia, Sanjit A. (University of California, Berkeley)

Risk-Sensitive Rendezvous Algorithm for Heterogeneous Agents in Urban Environments
Barsi Haberfeld, Gabriel (University of Illinois at Urbana-Champaign)
Gahlawat, Aditya (University of Illinois at Urbana-Champaign)
Hovakimyan, Naira (University of Illinois at Urbana-Champaign)  

Recurrent Neural Network Controllers for Signal Temporal Logic Specifications Subject to Safety Constraints
Liu, Wenliang (Boston University)
Mehdipour, Noushin (Boston University)
Belta, Calin (Boston University)  

Adaptive Shielding under Uncertainty
Pranger, Stefan (Graz University of Technology)
Koenighofer, Bettina (Graz University of Technology)
Tappler, Martin (Schaffhausen Institute of Technology)
Deixelberger, Martin (Graz University of Technology)
Jansen, Nils (Radboud University Nijmegen)
Bloem, Roderick (Graz University of Technology)  

Deep Learning-Based Approximate Nonlinear Model Predictive Control with Offset-Free Tracking for Embedded Applications
Chan, Kimberly J. (University of California, Berkeley)
Paulson, Joel (The Ohio State University)
Mesbah, Ali (University of California, Berkeley) 

Barrier-Certified Learning-Based Control of Systems with Uncertain Safe Set
Marvi, Zahra (Michigan State University)
Kiumarsi, Bahare (Michigan State University)  

Safe Reinforcement Learning with Nonlinear Dynamics via Model Predictive Shielding
Bastani, Osbert (University of Pennsylvania)  

 

Acknowledgement:

 

We thank Prof. Richard Murray for his suggestions and guidance in planning this invited session.