Learning for Control
Monday, July 13, 11:00-13:00
Organizers
Manfred Morari, University of Pennsylvania, USA
Carsten Scherer, University of Stuttgart, Germany
Summary
The era of machine learning has brought strong renewed interest in data-driven control and optimization of dynamical systems. This tutorial session provides insight into the mechanisms for designing control systems that learn from real-time data to improve performance despite uncertainties in an a priori unknown environment. The first part of the session focuses on the use of Gaussian Processes for developing learning controllers, while the second is devoted to predictive control and scalable reinforcement learning techniques.
Program
Safe Learning for Control with Gaussian Processes
Speaker: Andreas Krause, ETH Zurich, Switzerland
Monday, July 13, 11:00-11:35
A key modern challenge is to design control systems that can learn from data, while still retaining reliability guarantees such as stability. In this tutorial, I will provide an introduction to a family of approaches designed towards this goal. The techniques rely on concentration bounds for rich, nonparametric Gaussian process models, and integrate them with methods from robust optimization and control, as well as formal verification. Crucially, since the bounds are data-driven, safe exploration techniques enable improved performance over time. Besides providing an overview of the methodology, I will discuss several applications in cyber-physical systems.
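To illustrate the kind of data-driven confidence bound such approaches build on, here is a minimal sketch using scikit-learn's GaussianProcessRegressor. The fixed beta scaling and the safety threshold are illustrative assumptions, not the concentration results the talk refers to:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Fit a GP to noisy observations of an unknown safety-relevant function.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(15, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(15)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1**2)
gp.fit(X, y)

# High-probability confidence bound: mean +/- beta * std.
# beta is hand-picked here; in the literature it comes from
# concentration results so the bound holds with high probability.
beta = 2.0
X_cand = np.linspace(-3, 3, 200).reshape(-1, 1)
mu, std = gp.predict(X_cand, return_std=True)

# Safe exploration: only consider candidate inputs whose worst-case
# value (lower confidence bound) stays above a safety threshold.
safety_threshold = -0.5  # illustrative assumption
safe = (mu - beta * std) >= safety_threshold
print(f"{safe.sum()} of {len(X_cand)} candidate inputs certified safe")
```

Because the bound tightens as data accumulates, the certified-safe region grows over time, which is what enables the safe exploration the abstract describes.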
Achieving Safety, Performance and Reliability by Combining Models and Data in a Closed-Loop System Architecture
Speaker: Angela Schöllig, University of Toronto, Canada
Monday, July 13, 11:35-11:55
This tutorial focuses on our recent work using Gaussian Processes (GPs) to model uncertainties and gradually learn unknown effects from data. We show how GPs can be combined with robust, nonlinear, and predictive control approaches to achieve safe, high-performance system behavior. We will show how theoretical guarantees can be derived for such approaches, demonstrate the algorithms' performance on a toy example, and present real-world application results from robotics. We will share the code used for the toy example.
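To give a flavor of the GP-as-uncertainty-model idea, the sketch below learns the residual between a rough nominal model and the true dynamics. It assumes scikit-learn; the nominal model, the unmodeled nonlinearity, and all numbers are invented for illustration and are not the toy-example code shared in the talk:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Nominal model: x_next = a * x + b * u, with (a, b) only roughly known.
a_nom, b_nom = 0.9, 0.5

def true_dynamics(x, u):
    # "Real" system with an unmodeled nonlinearity (illustrative).
    return 0.95 * x + 0.5 * u - 0.2 * np.sin(x)

# Collect data and fit a GP to the residual x_next - nominal prediction.
rng = np.random.default_rng(1)
X = rng.uniform(-2, 2, size=(30, 2))            # columns: state, input
x_next = true_dynamics(X[:, 0], X[:, 1])
residual = x_next - (a_nom * X[:, 0] + b_nom * X[:, 1])

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
gp.fit(X, residual)

# Corrected one-step prediction = nominal model + learned residual;
# the GP standard deviation quantifies the remaining uncertainty,
# which a robust or predictive controller can then account for.
x, u = 1.0, -0.3
mean_res, std_res = gp.predict([[x, u]], return_std=True)
x_pred = a_nom * x + b_nom * u + mean_res[0]
print(f"predicted next state {x_pred:.3f} +/- {2 * std_res[0]:.3f}")
```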
Learning in Autonomous Systems: Predictions and Hierarchies
Speaker: Francesco Borrelli, UC Berkeley, USA
Monday, July 13, 11:55-12:15
Our research over the past decade has focused on control design for autonomous systems that systematically incorporates predictions and learning. In this talk I will first provide an overview of the theory and tools we have developed for designing learning predictive controllers. I will then present recent results on using hierarchies to formulate and solve learning control problems for tasks executed in an unknown environment, using data stored from previous task executions. Throughout the talk I will demonstrate the effectiveness of the presented methods with experiments on autonomous racing cars and robotic manipulation.
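As a rough schematic of how stored data from previous task executions can enter a short-horizon controller, here is a brute-force toy sketch: states visited in earlier executions, together with their realized cost-to-go, act as a terminal set and terminal cost. The dynamics, costs, and stored values are all invented, and actual learning predictive controllers solve this as a constrained optimization rather than by enumeration:

```python
import numpy as np
from itertools import product

# Toy scalar system and task: drive x to 0 at minimum total cost.
def step(x, u):
    return 0.8 * x + u

def stage_cost(x, u):
    return x**2 + 0.1 * u**2

# States visited in previous task executions, each stored with the
# cost-to-go realized from that state onward (illustrative values).
stored = {2.0: 5.0, 1.0: 1.5, 0.5: 0.4, 0.0: 0.0}

def short_horizon_control(x0, horizon=3, inputs=np.linspace(-1, 1, 21)):
    """Pick the first input of the sequence minimizing stage costs plus
    the stored cost-to-go, requiring the terminal state to land near a
    previously visited state."""
    best_u, best_cost = None, np.inf
    for seq in product(inputs, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            cost += stage_cost(x, u)
            x = step(x, u)
        s_near = min(stored, key=lambda s: abs(x - s))
        if abs(x - s_near) > 0.1:
            continue                      # terminal state not covered by data
        cost += stored[s_near]            # learned terminal cost
        if cost < best_cost:
            best_u, best_cost = seq[0], cost
    return best_u

print(short_horizon_control(x0=2.0))
```

After each execution, the newly visited states and their realized costs are added to the stored set, so performance can improve from one task execution to the next.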
Reinforcement Learning for Extended Intelligence
Speaker: Shie Mannor, Technion, Israel
Monday, July 13, 12:15-12:50
In this part of the tutorial I will discuss the essential elements needed to scale reinforcement learning to real-world problems. I will present a scheme called "extended intelligence" that concerns the design of systems that participate as responsible, aware, and robust elements of more complex systems. I will then take a deep dive into the question of how to create control policies from existing historical data, and how to sample trajectories so that future control policies have a less uncertain return. This question has been central to reinforcement learning for the last decade, if not longer, and involves methods from statistics, optimization, and control theory.
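One textbook way to obtain a control policy from a fixed historical dataset is fitted Q-iteration; the sketch below, with synthetic transitions, an invented reward, and a scikit-learn regressor, illustrates the batch setting the abstract refers to, not the speaker's method:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Fixed historical dataset of transitions (s, a, r, s'); no further
# environment interaction is allowed -- the batch RL setting.
rng = np.random.default_rng(2)
n, actions = 500, [0, 1]
S = rng.uniform(-1, 1, size=(n, 2))
A = rng.choice(actions, size=n)
R = -np.abs(S[:, 0]) + 0.1 * A * S[:, 1]        # illustrative reward
S_next = S + rng.normal(0, 0.05, size=S.shape)

gamma, q = 0.95, None
for _ in range(20):                              # fitted Q-iteration
    if q is None:
        target = R                               # first pass: immediate reward
    else:
        # Bellman backup: r + gamma * max_a' Q(s', a')
        q_next = np.column_stack(
            [q.predict(np.column_stack([S_next, np.full(n, a)]))
             for a in actions])
        target = R + gamma * q_next.max(axis=1)
    q = RandomForestRegressor(n_estimators=50, random_state=0)
    q.fit(np.column_stack([S, A]), target)

# Greedy policy extracted from the learned Q-function.
def policy(s):
    return max(actions, key=lambda a: q.predict([[*s, a]])[0])

print(policy([0.3, -0.5]))
```

The quality of the resulting policy hinges on how well the historical data covers the state-action space, which is exactly why the question of how to sample trajectories matters.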
Interactive Live Session
Monday, July 13, 12:50-13:00