Hybrid Dynamical Systems
Mathematical models for evolution (dynamical systems) are fundamental tools in applied mathematics. When facing a physical system whose features (concentrations, positions, velocities, temperatures) change over time, these models make it possible to predict, compute, and control its evolution.
Historically, mathematical evolution models have been described by sets of nonlinear ordinary differential equations of the form dx(t)/dt = f(x(t)), or x' = f(x) in short. In these so-called “continuous-time” (CT) systems, the variable x is the n-dimensional vector comprising (an approximation of) the current state of the system, and the dynamics f is the velocity of the evolution of x. When the dynamics is sufficiently regular, the evolution is a continuous curve in the phase space.
In parallel, strong mathematical attention has been devoted to nonlinear difference equations xn+1 = g(xn), or x+ = g(x) in short. The advantage of these so-called “discrete-time” (DT) systems is that their solutions “jump” from one discrete instant to the following one, thereby possibly capturing discontinuous behavior.
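To fix ideas, a minimal numerical sketch of the two settings follows. The specific dynamics (linear decay for the CT case, a halving map for the DT case) and the forward-Euler discretization are illustrative choices of ours, not taken from the text:

```python
import numpy as np

def simulate_ct(f, x0, t_final, dt=1e-3):
    """Approximate the CT system x' = f(x) by forward-Euler steps."""
    x = np.asarray(x0, dtype=float)
    for _ in range(int(t_final / dt)):
        x = x + dt * f(x)
    return x

def simulate_dt(g, x0, n_steps):
    """Iterate the DT system x+ = g(x): the solution jumps step to step."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_steps):
        x = g(x)
    return x

# Illustrative CT dynamics: linear decay x' = -x (exact solution e^{-t} x0).
x_ct = simulate_ct(lambda x: -x, [1.0], t_final=1.0)
# Illustrative DT dynamics: halving map x+ = x/2.
x_dt = simulate_dt(lambda x: x / 2, [1.0], n_steps=3)
```

The CT solution traces a continuous curve (here approximated on a fine time grid), while the DT solution is simply the sequence of iterates.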
While these two research strands have historically developed in parallel but remained largely disconnected, recent efforts have been devoted to building hybrid dynamical systems, whose solutions may evolve continuously, just like in the CT case, but may also experience jumps, thereby exhibiting DT behavior.
Examples of such systems are switching power electronics, neuronal bursting, impacting systems, switching systems, computer networks, hybrid automata and others.
Among the several existing analytical formulations of hybrid dynamics, a particularly powerful one is the theory reported in the recent book [1; R. Goebel, R. Sanfelice, and A. Teel. Hybrid Dynamical Systems: Modeling, Stability, and Robustness. Princeton University Press, 2012]. The authors propose the formulation x' = f(x), x ∈ C and x+ = g(x), x ∈ D, which allows either CT or DT evolution of a solution (or both), depending on whether its value x lies in the flow set C, in the jump set D, or in their intersection.
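A classical illustration of this flow-set/jump-set formulation is the bouncing ball; the sketch below is ours, with arbitrarily chosen constants and a naive Euler discretization of the flow, and is not the numerical machinery of [1]:

```python
import numpy as np

# Bouncing ball as a hybrid system: state x = (height, velocity).
# Flow set C: ball above the ground.  Jump set D: ball at the ground, falling.
G = 9.81      # gravity (illustrative constant)
LAM = 0.8     # restitution coefficient: velocity retained at each impact

def f(x):                       # flow map: free fall
    return np.array([x[1], -G])

def g(x):                       # jump map: instantaneous velocity reversal
    return np.array([0.0, -LAM * x[1]])

def in_D(x):                    # jump set D: on the ground and moving down
    return x[0] <= 0.0 and x[1] < 0.0

def simulate_hybrid(x0, t_final, dt=1e-4):
    x, t, jumps = np.asarray(x0, dtype=float), 0.0, 0
    while t < t_final:
        if in_D(x):
            x = g(x)            # discrete jump: time does not advance
            jumps += 1
        else:
            x = x + dt * f(x)   # Euler step of the continuous flow
            t += dt
    return x, jumps

# Drop from height 1 m at rest; count impacts over 2 seconds.
x_final, n_jumps = simulate_hybrid([1.0, 0.0], t_final=2.0)
```

A solution of this system alternates CT arcs (parabolic flight in C) with DT jumps (impacts in D), which is exactly the mixed behavior the formulation is designed to capture.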
An important feature of [1] is that powerful Lyapunov tools have been successfully developed for the hybrid context.
Hybrid Optimal Control
An optimal control problem consists of selecting “optimally” a control input function u(·), which is available for design, in such a way that a differential equation x' = f(x, u) (or a difference equation x+ = g(x, u)) produces solutions behaving “optimally”. Optimality is typically characterized as the minimization of a functional, or cost, J(x(·), u(·)) (representing, e.g., power consumption, energy consumption, pollution level, or time to completion) that depends on the selected input u(·) and the resulting solution.
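As a minimal sketch of what such a cost functional looks like in discrete time (the dynamics x+ = x + u and the quadratic stage cost are arbitrary illustrative choices, not from the text):

```python
# Evaluate a quadratic cost J = sum of (x^2 + u^2) along a trajectory
# of the controlled difference equation x+ = x + u (illustrative model).
def cost(x0, u_seq):
    x, J = float(x0), 0.0
    for u in u_seq:
        J += x**2 + u**2   # stage cost: penalize state error and control effort
        x = x + u          # controlled dynamics
    return J

# Driving x from 1 toward 0: one large control step vs. two smaller ones.
J_one = cost(1.0, [-1.0, 0.0])    # J = 1 + 1 = 2.0
J_two = cost(1.0, [-0.5, -0.5])   # J = 1.25 + 0.5 = 1.75
```

Here the gentler control sequence achieves a lower cost, illustrating how the choice of u(·) shapes the value of J.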
Optimal control has been studied in the CT and DT settings since the 1950s, and there are two classical solution approaches: the Pontryagin Maximum Principle (PMP), which gives necessary conditions for optimality useful to synthesize an optimal control, and the Dynamic Programming Principle (DPP), which leads to the Hamilton-Jacobi-Bellman (HJB) partial differential equation, whose solution is a scalar function of x: the so-called “value function” V. Knowing the value function may also allow one to apply the PMP and synthesize an optimal control.
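A minimal sketch of the DPP idea in the DT setting follows: the value function of a finite-horizon problem is computed by backward induction on a state grid. The grid, dynamics, and cost below are illustrative choices of ours, not from the text:

```python
import numpy as np

# Finite-horizon DPP on a 1-D grid for x+ = x + u with stage cost x^2 + u^2:
#   V_N(x) = 0,   V_k(x) = min_u [ x^2 + u^2 + V_{k+1}(x + u) ].
xs = np.linspace(-2.0, 2.0, 81)   # state grid (x = 0 at index 40)
us = np.linspace(-1.0, 1.0, 41)   # control grid (includes u = 0)
N = 10                            # horizon length

V = np.zeros_like(xs)             # terminal condition V_N = 0
for _ in range(N):
    V_next = np.empty_like(V)
    for i, x in enumerate(xs):
        # Bellman backup: try every control, interpolate V at the next state
        # (next states are clipped to the grid, a crude boundary treatment).
        costs = [x**2 + u**2 +
                 np.interp(np.clip(x + u, xs[0], xs[-1]), xs, V)
                 for u in us]
        V_next[i] = min(costs)
    V = V_next
```

With this symmetric setup the computed value function is even in x and vanishes at x = 0 (staying at the origin with u = 0 costs nothing), as one expects from the Bellman recursion.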
Concerning hybrid controlled dynamics, several results in the direction of the PMP can be found in the literature (see for example [2; M. Garavello and B. Piccoli. Hybrid necessary principle. SIAM J. Control Optim., 43, 2005] and references therein), whereas the literature on the DPP and HJB approach is less extensive.
The application of the promising formalism of [1] to optimal control problems for hybrid systems is still largely unexplored. One of the goals of this project is therefore to fill this gap by exploiting the complementary competences of the three departments: Dipartimento di Ingegneria Industriale (DII), Dipartimento di Ingegneria e Scienze dell'Informazione (DISI), and Dipartimento di Matematica (MAT).
In particular, we will draw inspiration from relevant and challenging hybrid control applications (mostly provided by the DISI team), such as a complex robotic application consisting of an autonomous car racing on a known track, with the aim of developing hybrid optimal control theory and algorithms general enough to address a broad range of problems.
Tools and techniques
Necessary conditions of optimality for hybrid systems control.
Dynamic Programming Principle and Hamilton-Jacobi equation approach for hybrid optimal control.
Lyapunov analysis for hybrid systems.
Stochastic hybrid systems.
Numerical solutions of hybrid optimal control problems.
Experimental tests on a robotic car.