Day - 2
29 July 2022
9 AM – 10 AM
Engineers and scientists engaged in building artificially intelligent systems have resolved many challenging technical problems and have demonstrated the practical viability of autonomous driving on test tracks and carefully selected roads. These are major engineering milestones and a clear harbinger of a transformative new era in moving goods, supplies, and people from point A to point B. Yet along with these accomplishments come many new challenges that are not only technical, but also social, legal, and even “ethical” in nature. Such issues grow more urgent and important as collisions and accidents involving self-driving or semi-autonomous vehicles occur more often – injuring and even killing humans in the real world. A key challenge is ensuring that these engineered automobiles and humans coexist in a harmonious, safe, and secure manner. For researchers, this provides the exciting opportunity to pursue important problems across a broad range of topics in distributed perception, cognition, planning, and control. We will present a “Human-Centered” approach to the development of highly automated vehicle technologies. We will also present a brief sampling of contributions to systems and algorithms that perceive situational criticalities, predict the intentions of intelligent agents, and plan/execute actions for safe and smooth maneuvers and control transitions. We will highlight major research milestones in the autonomous vehicles area and discuss issues that require deeper, critical examination and careful resolution to assure safe, reliable, and robust operation of these highly complex systems in the real world.
10 AM – 11 AM
[Tea Break + TCS Research Cafe]
11 AM – 11:45 AM
The immense versatility of animal locomotion, made possible by exploiting a variety of physics at different scales, suggests that engineered motion barely exploits what is possible. Modeling and intentional design for complex dynamics are important for the locomotion of robots in unstructured environments – off-road, on or under water – and for soft and compliant robots with many degrees of freedom. In the case of fish-like swimming, I will discuss a novel means of propulsion and maneuvering made possible through purely internal actuators, and the role of passive degrees of freedom in improving a swimmer’s agility and enabling hydrodynamic sensing of the surrounding flow for a swimmer that is otherwise ‘blind’. I will discuss two alternative means of controlling the motion of such a robot. The first involves training a physics-informed reinforcement learning agent in steps (a curriculum) to learn a policy to track a path. The second approach uses a linear representation of the dynamics via the Koopman operator. This approach is amenable to linear MPC (albeit in high dimensions), and I will discuss its use in stabilizing the roll of such a nonholonomic robot. Another example of complex dynamics in robots arises in off-road ground vehicles, where the terramechanics-vehicle interaction can distort sensor data (such as camera images) and significantly influence the motion of the vehicle itself. In particular, I will talk about the role that suspensions and heave play in determining the vehicle motion, using standard half-car or bicycle models with suspensions. I will discuss some preliminary results on stabilizing a model vehicle and on path tracking using a Koopman operator approach.
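The data-driven Koopman representation mentioned in the abstract above can be sketched with a generic recipe. The following is a minimal illustration of Extended Dynamic Mode Decomposition (EDMD) on a toy nonlinear oscillator – not the speaker's swimmer or vehicle models; the dynamics and the dictionary of observables are assumptions made purely for the example.

```python
import numpy as np

def lift(x):
    """Illustrative dictionary of observables: constant, state, monomials."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x2, x1**2, x2**2])

def step(x, dt=0.01):
    """A toy damped nonlinear oscillator (stand-in for the true dynamics)."""
    x1, x2 = x
    return np.array([x1 + dt * x2,
                     x2 + dt * (-0.5 * x2 - x1 - x1**3)])

# Collect snapshot pairs (x_k, x_{k+1}) from short random trajectories.
rng = np.random.default_rng(0)
X, Y = [], []
for _ in range(200):
    x = rng.uniform(-1, 1, size=2)
    for _ in range(50):
        xn = step(x)
        X.append(lift(x))
        Y.append(lift(xn))
        x = xn
X, Y = np.array(X).T, np.array(Y).T     # columns are lifted snapshots

# Least-squares Koopman approximation in the lifted space: Y ≈ K X.
K = Y @ np.linalg.pinv(X)

# One-step prediction through the lifted linear model vs. the true map.
x0 = np.array([0.3, -0.2])
pred = (K @ lift(x0))[1:3]              # recover the state coordinates
true = step(x0)
print(np.linalg.norm(pred - true))      # small one-step error
```

Once `K` is in hand, the lifted dynamics are linear, which is what makes (high-dimensional) linear MPC formulations applicable, as the abstract notes.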
11:45 AM – 12:30 PM
Modern infrastructure systems are complex, interconnected cyber-physical systems (CPS) that form the lifeline of our society. Recent trends in security indicate an increasing threat of cyber-based attacks, in both number and sophistication, on critical infrastructure systems worldwide. In this talk, I will address resource-constrained attack-defense scenarios on systems modelled as graphs. Specifically, I will consider utility functions that are additive. By leveraging necessary structural properties of Nash equilibria, I will present a characterization of the possible Nash equilibria and demonstrate the existence of a game exhibiting each type. Further, I will present a novel O(m^3) algorithm, based on our structural analysis, to compute equilibria, where m is the number of assets to be protected. Under the assumption that the defender can perturb the payoffs to the attacker, I will show that the problem of maximizing the defender's expected outcome is weakly NP-hard in the case of Stackelberg equilibria with multiple attacker resources, and I will propose a pseudopolynomial-time procedure, based on our theory of types, to find a globally optimal solution for maximizing the defender's expected utility in the case of Nash equilibria under a disjointness assumption. I will conclude my talk with implications of these results for a zero-sum game on a restricted model of the utility function.
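As a toy illustration of the attacker-defender setting (not the talk's O(m^3) structural algorithm), the single-resource case can be written as a zero-sum matrix game over m assets and its mixed Nash equilibrium approximated by fictitious play. The asset values below are hypothetical and chosen only for the example.

```python
import numpy as np

# Hypothetical values of m = 3 assets; the defender protects one asset,
# the attacker hits one. The attacker collects the asset's value only if
# it is unprotected, so additive utilities collapse to this matrix form.
values = np.array([4.0, 2.0, 1.0])
m = len(values)
# A[i, j]: attacker payoff when the defender protects i and attacker hits j.
A = np.where(np.eye(m, dtype=bool), 0.0, values[None, :])

def fictitious_play(A, iters=20000):
    """Approximate a mixed Nash equilibrium of a zero-sum matrix game."""
    d_counts = np.zeros(A.shape[0])     # defender's empirical play
    a_counts = np.zeros(A.shape[1])     # attacker's empirical play
    d_counts[0] = a_counts[0] = 1
    for _ in range(iters):
        # Each player best-responds to the opponent's empirical mixture.
        d_counts[np.argmin(A @ (a_counts / a_counts.sum()))] += 1
        a_counts[np.argmax((d_counts / d_counts.sum()) @ A)] += 1
    return d_counts / d_counts.sum(), a_counts / a_counts.sum()

d, a = fictitious_play(A)
value = d @ A @ a                       # approximate game value
print(d.round(3), a.round(3), round(value, 3))
```

For these values the low-value asset drops out of both supports, and the defender randomizes over the two high-value assets in inverse proportion to their values – the kind of structural property that the talk's equilibrium characterization exploits at scale.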
12:30 PM – 2 PM
2 PM – 4:30 PM
Reinforcement Learning (RL) is an effective way of designing model-free linear quadratic regulator (LQR) controllers for linear time-invariant (LTI) networks with unknown state-space models. However, when the network size is exceptionally large, as in many practical systems such as electric power grids or transportation networks, conventional RL can result in unacceptably long learning times, making it impossible to control the network in real time. In this tutorial, I will present three new designs, each with its own distinct formulation and control method, that resolve this problem by combining three different variants of dimensionality reduction with RL theory:
(1) The first design will consider the case when the network model exhibits a time-scale separation in its dynamics. Singular perturbation theory will be used for dimensionality reduction and approximation of the RL controller, followed by stability proofs and suboptimality analysis.
(2) The second design will consider the case when the network model does not exhibit any time-scale separation, but its controllability gramian exhibits a low-rank property, implying redundancy in the control input. A method for compressing the state vector and designing an RL controller with this compressed state will be developed.
(3) The third design will consider the case when the network model has no time-scale separation and the control input has no redundancy, but the control objective is separable into multiple sub-objectives. A nested RL controller will be developed by combining parallelized local controllers with dimensionality reduction of a global controller.
For all three designs, parallels will be drawn from wide-area oscillation damping control of electric power system networks. Validation examples will be shown for RL-based wide-area control of the IEEE prototype power system models. The tutorial will end with a list of promising future research directions for model-free control of power grids with renewables.
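The model-free LQR theme underlying all three designs can be sketched under assumed toy dynamics (this is a generic baseline, not any of the tutorial's designs): treat the rollout cost as a black box and improve a static feedback gain by derivative-free search, so the learner never uses the state-space matrices directly.

```python
import numpy as np

# Toy 2-state LTI system x_{k+1} = A x_k + B u_k; A and B appear only
# inside the simulator, never inside the learning loop.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
X0 = rng.standard_normal((8, 2))        # fixed evaluation initial states

def rollout_cost(K, horizon=100):
    """Average finite-horizon quadratic cost of the policy u = -K x."""
    total = 0.0
    for x0 in X0:
        x = x0.copy()
        for _ in range(horizon):
            u = -K @ x
            total += x @ Q @ x + u @ R @ u
            x = A @ x + B @ u
    return total / len(X0)

# Zeroth-order hill climbing: perturb the gain, keep the candidate only
# if the measured rollout cost improves (monotone by construction).
K = np.array([[1.0, 1.0]])              # initial stabilizing gain
cost = rollout_cost(K)
for _ in range(300):
    cand = K + 0.1 * rng.standard_normal(K.shape)
    c = rollout_cost(cand)
    if c < cost:
        K, cost = cand, c

print(K, cost)
print(max(abs(np.linalg.eigvals(A - B @ K))))   # closed loop stays stable
```

The curse of dimensionality is visible even here: each cost query needs full-state rollouts, which is exactly what the tutorial's three dimensionality-reduction strategies are designed to tame for large networks.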
4:30 PM – 5 PM
5 PM – 6 PM
| Time | Speaker | Affiliation | Title |
| --- | --- | --- | --- |
| 5:00 PM | Samarth Brahmbhatt | Intel Labs | Learning Compliant Object Insertion |
| 5:15 PM | Ashok Urlana | IIIT Hyderabad | Butterflies: A New Source of Inspiration for Futuristic Aerial Robotics |
| 5:30 PM | Gopika R | BITS Pilani, Goa Campus | Cluster Consensus in Multi-partitioned Matrix-weighted Graphs |
| 5:45 PM | Aruna M V | IITM | Velocity Tracking and Pitch Depth Regulation of Biomimetic Autonomous Underwater Vehicle using Different Control Strategies |