Mark your calendar! 2-5 September 2025 - Engelberg, Switzerland
Autonomous systems have undergone significant changes over the past five to ten years, as technological advances have been leveraged to meet a diverse set of interaction requirements driven by performance and capability needs. Conventional control strategies were typically designed for robustness and speed of the automated system within a controlled, well-regulated environment. However, recent demands for shared interaction between an automated system (including its controller), which can be modeled using first-principles techniques, and a human operator, whose behavior is best understood through other modeling frameworks, have created the need for alternative control approaches. Making matters more challenging, the optimal blend of human input and input from the automatic control system depends sensitively on characteristics specific to the automated system, the environment, and the task. This talk will focus on methods for using cognitive hierarchy theory to characterize the bidirectional interactions between a human operator and an underlying system (vehicle plus its controller), along with iterative learning techniques for deducing an optimal arbitration level between human and autonomous inputs, in the context of a driver training simulator in which a closed course is driven repeatedly. The talk will include initial driver-in-the-loop simulation results that illustrate the efficacy of the proposed approaches, as well as several of the fundamental research challenges that lie ahead.
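As a rough illustration of an arbitration level (our notation, not the talk's), the applied input can be a convex blend of the human and autonomous commands, with the blend weight refined over repeated laps by an iterative-learning update:

    u_k(t) = \alpha_k\, u_k^{\mathrm{human}}(t) + (1 - \alpha_k)\, u_k^{\mathrm{auto}}(t), \qquad \alpha_{k+1} = \alpha_k + \gamma\, \Delta_k, \qquad \alpha_k \in [0, 1],

where k indexes laps of the closed course and \Delta_k is a correction computed from the previous lap's performance; the symbols \alpha, \gamma, and \Delta_k are illustrative assumptions rather than notation from the talk.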
Kira Barton is a Professor in the Robotics and Mechanical Engineering Departments at the University of Michigan. She received her B.Sc. in Mechanical Engineering from the University of Colorado at Boulder in 2001, and her M.Sc. and Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign in 2006 and 2010. She also serves as the Associate Director of the Automotive Research Center, a University-based U.S. Army Center of Excellence for modeling and simulation of military and civilian ground systems. She was a Miller Faculty Scholar at the University of Michigan from 2017 to 2020. Prof. Barton's research focuses on advancements in modeling, sensing, and control for applications in smart manufacturing and robotics, with a specialization in learning and multi-agent systems. Kira is the recipient of a 2014 NSF CAREER Award, the 2015 SME Outstanding Young Manufacturing Engineer Award, the 2015 Outstanding Young Alumni Award from the Department of Mechanical Science and Engineering at the University of Illinois, the 2016 Department Achievement Award from the Department of Mechanical Engineering at the University of Michigan, and the 2017 ASME Dynamic Systems and Control Young Investigator Award. Kira was named one of 25 leaders transforming manufacturing by SME in 2022, and was selected as one of the 2022 winners of the Manufacturing Leadership Award from the Manufacturing Leadership Council. She became an ASME Fellow in 2024.
TBC
Alexandre Bayen is the Associate Provost for Moffett Field Program Development, the Liao-Cho Innovation Endowed Chair, and a Professor of Civil and Environmental Engineering and of Electrical Engineering and Computer Science at UC Berkeley. Bayen's research focuses on modeling and control of distributed parameter systems, with applications to transportation systems (air traffic control, highway systems) and distribution systems (water distribution networks). His research involves the control of systems modeled by partial differential equations, combinatorial optimization, viability theory, and optimal control. He is a member of several professional organizations, including the Institute of Electrical and Electronics Engineers (IEEE) and the American Institute of Aeronautics and Astronautics (AIAA). Bayen has authored two books and over 200 articles in peer-reviewed journals and conferences. He is the recipient of the Ballhaus Award from Stanford University in 2004 and the CAREER award from the National Science Foundation in 2009, and he was named one of NASA's Top 10 Innovators on Water Sustainability in 2010.
Deploying robots in human-inhabited environments requires that they react rapidly, robustly, and safely to changes in their surroundings. Recent advances in machine learning for analyzing and modeling a variety of data offer powerful solutions for real-time control. For these techniques to be deployed efficiently and endorsed, they must be accompanied by explicit guarantees on the learned model.
This talk will give an overview of methods for endowing robots with the reactivity needed to adapt their path in time-critical situations. The learned control laws come with theoretical guarantees of stability and boundedness. Paucity of data is a reality in many robotics tasks; I will present methods by which robots can learn control laws from only a handful of examples while generalizing to the entire state space. I will present a variety of applications, from dynamic manipulation in interaction with humans to reactive navigation in crowded pedestrian environments.
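Such guarantees are typically stated in Lyapunov terms (a sketch in standard notation, not necessarily the talk's exact formulation): the robot's motion is generated by a learned dynamical system \dot{x} = f(x) with target x^*, together with a function V certifying global asymptotic stability,

    f(x^*) = 0, \qquad V(x) > 0 \ \text{and} \ \dot{V}(x) = \nabla V(x)^{\top} f(x) < 0 \quad \forall x \neq x^*,

so that trajectories started anywhere in the state space, including after a perturbation, converge to the target.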
Aude Billard is a full professor, head of the LASA laboratory, and the Associate Dean for Education at the School of Engineering of the Swiss Federal Institute of Technology in Lausanne (EPFL). Prof. Billard currently serves as President of the IEEE Robotics and Automation Society, director of the ELLIS Robot Learning Program, and co-director of the Robot Learning Foundation, the non-profit corporation that serves as the governing body behind the Conference on Robot Learning (CoRL). She also leads the Innovation Booster Robotics, a program funding technology transfer in robotics, powered by the Swiss Innovation Agency, Innosuisse.
Prof. Billard holds a BSc and MSc in Physics from EPFL and a PhD in Artificial Intelligence from the University of Edinburgh. She is an IEEE Fellow and the recipient of numerous recognitions, among them the Intel Corporation Teaching Award, the Swiss National Science Foundation career award, the Outstanding Young Person in Science and Innovation award from the Swiss Chamber of Commerce, the IEEE RAS Distinguished Service Award, and the IEEE RAS Best Reviewer Award. Dr. Billard has been a plenary speaker at major robotics, AI, and control conferences (ICRA, AAAI, CoRL, HRI, CASE, ICDL, ECML, L4DC, IFAC symposia, RO-MAN, Humanoids, and many others) and has served in various roles on the organizing committees of numerous international conferences in robotics. Her research spans machine learning and robotics, with particular emphasis on fast and reactive control and on safe human-robot interaction. This research has received numerous best conference paper awards, as well as the prestigious King-Sun Fu Memorial Award for the best paper in the IEEE Transactions on Robotics, and is regularly featured in premier venues (BBC, IEEE Spectrum, Wired).
Jonas Buchli is a Senior Research Scientist at DeepMind, London. He has worked at the intersection of machine learning and control for most of his career, and has contributed to a variety of interdisciplinary research projects in disaster assistance, architecture, biomedical technology, and paleoanthropology, among others.
In this talk, we introduce methods that remove the barrier to applying neural networks in real-life power systems. To date, neural networks have been applied in power systems as a black box; given the high risks associated with power system operation, this has been a major barrier to their adoption in practice. The talk first gives a short overview of the use of AI in power systems and of methods that lead to explainable and trustworthy AI. It then introduces a rigorous framework for trustworthy machine learning in power systems, with methods for (i) physics-informed neural networks for power systems and (ii) obtaining provable guarantees on neural network performance. Such methods have the potential to build the missing trust of power system operators in neural networks and to unlock a series of new applications in power systems and other safety-critical systems.
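To make the idea behind (i) concrete, here is a minimal sketch of a physics-informed training loop (our illustration, not the speaker's code), using the linear DC power-flow model P = B\theta: in addition to fitting labeled data, the loss penalizes predictions that violate the physics. The network size, synthetic data, stand-in susceptance matrix, and loss weighting are all illustrative assumptions.

    import torch

    torch.manual_seed(0)
    n_bus = 4
    B = torch.randn(n_bus, n_bus)
    B = 0.5 * (B + B.T) + n_bus * torch.eye(n_bus)  # symmetric, well-conditioned stand-in susceptance matrix

    model = torch.nn.Sequential(
        torch.nn.Linear(n_bus, 32), torch.nn.Tanh(), torch.nn.Linear(32, n_bus)
    )

    P = torch.randn(64, n_bus)                 # synthetic bus power injections (illustrative data)
    theta_true = torch.linalg.solve(B, P.T).T  # angles consistent with P = B @ theta, used as labels

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(500):
        theta = model(P)
        data_loss = ((theta - theta_true) ** 2).mean()
        physics_loss = ((P - theta @ B.T) ** 2).mean()  # DC power-flow residual: penalize violations of P = B @ theta
        loss = data_loss + 0.1 * physics_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

The physics term steers the network toward physically consistent predictions even where labeled data is scarce, which is the core appeal of physics-informed learning in this setting.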
Spyros Chatzivasileiadis is a Full Professor at the Technical University of Denmark (DTU). He served as Head of the Power Systems Section at the DTU Department of Wind and Energy Systems until 2025. Before joining DTU, he was a postdoctoral researcher at the Massachusetts Institute of Technology (MIT) and at Lawrence Berkeley National Laboratory, USA. Spyros holds a PhD from ETH Zurich, Switzerland (2013) and a Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece (2007). He currently works on trustworthy machine learning for power systems, quantum computing, and the optimization, dynamics, and control of power systems. Spyros has received the Best Teacher of the Semester Award at DTU Electrical Engineering and is the recipient of a 2020 ERC Starting Grant.
A key challenge in reinforcement learning (RL), in both single-agent and multi-agent settings, is how to tame uncertainty in a practically implementable and theoretically grounded form, one that remains amenable in the presence of complex function approximation such as large foundation models. In this talk, we develop both model-based and model-free frameworks that incentivize exploration via regularization, and show that they provably achieve the same rates as their standard RL counterparts, bypassing the need for sophisticated uncertainty quantification.
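As one concrete instance of regularization-driven exploration (an illustrative example in standard notation, not necessarily the talk's exact formulation), entropy regularization augments the discounted return with a policy-entropy bonus,

    \max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\left(r(s_t, a_t) + \tau\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big)\right)\right],

where the temperature \tau > 0 keeps the policy stochastic, and hence exploratory, without any explicit uncertainty bonus.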
Dr. Yuejie Chi is a Professor in the Department of Statistics and Data Science at Yale University. She received her Ph.D. and M.A. from Princeton University, and B. Eng. (Hon.) from Tsinghua University, all in Electrical Engineering. Her research interests lie in the theoretical and algorithmic foundations of data science, signal processing, machine learning and inverse problems, with applications in sensing, imaging, decision making, and AI systems. Among others, Dr. Chi received the Presidential Early Career Award for Scientists and Engineers (PECASE), SIAM Activity Group on Imaging Science Best Paper Prize, IEEE Signal Processing Society Young Author Best Paper Award, and the inaugural IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing. She is an IEEE Fellow (Class of 2023) for contributions to statistical signal processing with low-dimensional structures.
A key challenge in increasing renewable energy penetration is the limited utility-scale storage capacity of the power grid. Transportation electrification offers a promising solution, as idle electric vehicles (EVs) can provide battery storage services to the grid. This concept, known as EV-power grid integration, has the potential to significantly advance decarbonization efforts in both the electricity and transportation sectors. Additionally, flexible EV charging can help mitigate distribution network capacity risks.
However, ineffective scheduling of EV charging can paradoxically lead to higher operational costs and exacerbate capacity constraints. This issue arises from the inherent randomness in EV usage patterns and from the strategic behavior of EV users.
To address these challenges, we propose a market-based solution for energy storage management. Our mechanism allows the system operator to efficiently integrate strategic EV fleets with unpredictable usage patterns, leveraging them as storage assets to meet EV demand, reduce costs, and maintain grid flexibility. We use this application to demonstrate the importance of information design with elicitation, a new area of research that enables the sale of information to support the operations of digital platforms.
We present computational results that demonstrate the effectiveness of this market-driven scheduling framework in enhancing the integration of time-flexible EVs for grid storage.
Munther A. Dahleh received his Ph.D. in Electrical and Computer Engineering from Rice University in 1987. Since then, he has been with the Department of Electrical Engineering and Computer Science (EECS) at MIT, where he is now the William A. Coolidge Professor of EECS. He is also a faculty affiliate of the Sloan School of Management and the founding director of the MIT Institute for Data, Systems, and Society (IDSS). He serves on multiple advisory boards, including the AI advisory boards of Samsung and Ikigai. His research program at MIT focuses on decision-making under uncertainty, spanning a wide range of domains, and he is a leader in online education focusing on data science and AI. He is the author of the recent book Data, Systems, and Society: Harnessing AI for Societal Good (Cambridge University Press, April 2025).
Maryam Fazel is the Moorthy Family Professor of Electrical and Computer Engineering at the University of Washington, with adjunct appointments in Computer Science and Engineering, Mathematics, and Statistics. Maryam received her MS and PhD from Stanford University and her BS from Sharif University of Technology in Iran, and was a postdoctoral scholar at Caltech before joining UW. She is a recipient of the NSF CAREER Award, the UWEE Outstanding Teaching Award, and a UAI conference Best Student Paper Award (with her student). She directs the Institute for Foundations of Data Science (IFDS), a multi-site NSF TRIPODS institute. She serves on the editorial board of the MOS-SIAM Book Series on Optimization and as an Action Editor of the Journal of Machine Learning Research. Her current research interests are in optimization in machine learning and control.
Ernst Fehr has been Professor of Microeconomics and Experimental Economics at the University of Zurich since 1994. He has served as director of the Institute for Empirical Research in Economics and as chairman of the Department of Economics at the University of Zurich, and currently serves as director of the UBS International Center of Economics in Society. He has been a Global Distinguished Professor at New York University since 2011 and was an affiliated faculty member of the Department of Economics at the Massachusetts Institute of Technology from 2003 to 2011. He is a former president of the Economic Science Association and of the European Economic Association, an honorary member of the American Academy of Arts and Sciences, and a John Kenneth Galbraith Fellow of the American Academy of Political and Social Sciences. He was the recipient of the Marcel Benoist Prize in 2008 and the Gottlieb Duttweiler Prize in 2013. Ernst Fehr was born in Hard (Vorarlberg, Austria) in 1956. He studied economics at the University of Vienna, where he later earned his doctorate and completed his habilitation. He has published extensively in international top journals, including Science, Nature, Neuron, the Quarterly Journal of Economics, the American Economic Review, Econometrica, the Journal of Political Economy, and Psychological Science. His research focuses on the proximate patterns and evolutionary origins of human altruism and on the interplay between social preferences, social norms, and strategic interactions. He has conducted extensive research on the impact of social preferences on competition and cooperation, and on the psychological foundations of incentives. More recently he has worked on the role of bounded rationality in strategic interactions and on the neurobiological foundations of social and economic behavior. Fehr's work is characterized by the combination of game-theoretic tools with experimental methods, and by the use of insights from economics, social psychology, sociology, biology, and neuroscience for a better understanding of human social behavior.
Large Language Model (LLM) agents are increasingly making choices on behalf of humans in scenarios such as recommending news stories, searching for relevant research papers, or deciding which product to buy. What drives LLMs' choices in subjective decision-making scenarios, where reasonable humans exercising their free will could have chosen differently? In this talk, I will explore how the choices of LLM-powered agents are affected by the models' latent trust in, and preferences for, the brand identity of an information source (e.g., the author or publisher of news stories or research papers), the source's credentials (e.g., reputation or dis-reputation badges and measures such as awards or PageRank), and endorsements from other influential sources (e.g., recommendations from critics and reviewers). I will present extensive experiments with 10 LLMs from 6 major providers, showing that LLMs exhibit clear latent trust in, and preferences for, information from certain sources over others, recognizing the sources' domain expertise. I will make the case for better understanding the origins of LLMs' latent trust and preferences (i.e., whether they arise during pre-training or through fine-tuning and instruction tuning) and for better control over these implicit biases (i.e., eliminating undesired biases and aligning desired biases with the humans or societies the LLM agents represent).
Krishna Gummadi is a scientific director at the Max Planck Institute for Software Systems (MPI-SWS) in Germany. He also holds a professorship at Saarland University. He received his Ph.D. (2005) and B.Tech. (2000) degrees in Computer Science and Engineering from the University of Washington and the Indian Institute of Technology, Madras, respectively. Krishna's research interests are in the measurement, analysis, design, and evaluation of complex Internet-scale systems. His current projects focus on the pressing scientific and engineering challenges arising from our increasingly AI-driven, computing-mediated societies. Krishna's works have been widely cited, and his papers have received twelve awards, including Test of Time Awards at ACM SIGCOMM and AAAI ICWSM.
After pre-training, large language models are aligned with human preferences based on pairwise comparisons. State-of-the-art alignment methods (such as PPO-based RLHF and DPO) are built on the assumption of aligning with a single preference model, despite being deployed in settings where users have diverse preferences. As a result, it is not even clear that these alignment methods produce models that satisfy users on average, a minimal requirement. Drawing on social choice theory and modeling users' comparisons through individual Bradley-Terry (BT) models, we introduce an alignment method's distortion: the worst-case ratio between the optimal achievable average utility and the average utility of the learned policy. The notion of distortion helps draw sharp distinctions between alignment methods: Nash Learning from Human Feedback achieves the minimax-optimal distortion of a constant, while the most commonly used methods of RLHF (PPO- or DPO-based) can suffer unbounded distortion.
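In symbols (a paraphrase of the definitions above), user i with utility u_i compares outputs a and b via the BT model, and the distortion of a learned policy \hat{\pi} is the worst case over problem instances:

    \Pr[a \succ b] = \frac{e^{u_i(a)}}{e^{u_i(a)} + e^{u_i(b)}}, \qquad \mathrm{dist}(\hat{\pi}) = \sup_{\text{instances}} \; \frac{\max_{\pi} \frac{1}{n}\sum_{i=1}^{n} U_i(\pi)}{\frac{1}{n}\sum_{i=1}^{n} U_i(\hat{\pi})},

where U_i(\pi) denotes user i's expected utility under policy \pi.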
Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She works broadly on the theoretical aspects of machine learning, artificial intelligence, and algorithmic economics. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University, where her thesis won the CMU School of Computer Science Dissertation Award (ACM nomination) and the SIGecom Dissertation Honorable Mention. She is a co-founder of Learning Theory Alliance (LeT-All). Among her honors are an NSF CAREER award, Sloan fellowship, Schmidt Sciences AI2050 fellowship, NeurIPS and ICAPS best paper awards, an EC exemplary in AI track award, and several industry awards and fellowships.
Can we build neural architectures that go beyond Transformers by leveraging principles from dynamical systems? In this talk, I will introduce a novel approach to sequence modeling that draws inspiration from the emerging paradigm of online control to achieve efficient long-range memory, fast inference, and provable robustness. I will present theoretical insights, empirical results, and recent advances in fast sequence generation and provable length generalization. The talk will be self-contained and accessible to researchers across STEM disciplines; no prior background in control theory or sequence prediction is required.
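As a concrete anchor for the dynamical-systems connection (our addition, in standard notation), the primitive behind this line of work is the linear dynamical system, which carries long-range memory in a hidden state:

    x_{t+1} = A x_t + B u_t, \qquad y_t = C x_t + D u_t.

The online-control viewpoint asks how to predict or regulate the outputs y_t when the system matrices are unknown and the disturbances may be adversarial.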
Elad Hazan is a professor of computer science at Princeton University. His research focuses on the design and analysis of algorithms for basic problems in machine learning and optimization. Among his contributions are the co-invention of the AdaGrad algorithm for deep learning, the first sublinear-time algorithms for convex optimization, and online nonstochastic control theory. He is the recipient of the Bell Labs Prize, the IBM Goldberg Best Paper Award (twice), a European Research Council grant, a Marie Curie fellowship, and a Google Research Award, and he is an ACM Fellow. He served on the steering committee of the Association for Computational Learning and was program chair of the Conference on Learning Theory in 2015. He is the co-founder and director of Google AI Princeton.
Stefanie Jegelka is an Associate Professor at MIT EECS and a Humboldt Professor at TU Munich. At MIT, she is a member of CSAIL, IDSS, and the Center for Statistics and Machine Learning, and is also affiliated with the ORC. Before that, she was a postdoc in the AMPLab and the computer vision group at UC Berkeley, and a PhD student at the Max Planck Institutes in Tuebingen and at ETH Zurich.
Stefanie Jegelka's research is in algorithmic machine learning and spans modeling, optimization algorithms, theory, and applications. In particular, she works on exploiting mathematical structure for discrete and combinatorial machine learning problems, for robustness, and for scaling machine learning algorithms.
Predictions in the social world generally influence the target of prediction, a phenomenon known as performativity. In machine learning applications, performativity surfaces as distribution shift. A predictive model deployed on a digital platform, for example, influences consumption and thereby changes the data-generating distribution that forms the basis for future predictions. Performative prediction offers a conceptual framework for studying performativity in machine learning. One consequence is a natural equilibrium notion that corresponds to a fixed point of retraining. Another is a distinction between learning and steering, two mechanisms at play in performative prediction. In this talk I will present some key technical results in performative prediction and discuss the role of user agency in contesting algorithmic systems, highlighting connections to control along the way.
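The fixed point of retraining admits a compact statement in the standard notation of the performative prediction literature: a model \theta_{PS} is performatively stable if it is optimal on the very distribution its own deployment induces,

    \theta_{PS} \in \arg\min_{\theta} \; \mathbb{E}_{z \sim \mathcal{D}(\theta_{PS})} \, \ell(z; \theta),

where \mathcal{D}(\theta) is the data-generating distribution that arises when model \theta is deployed; repeated retraining, when it converges, lands at such a point.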
Celestine Mendler-Dünner is a Principal Investigator at the ELLIS Institute Tübingen, co-affiliated with the MPI for Intelligent Systems and the Tübingen AI Center. Her research focuses on machine learning in social context and the role of prediction in digital economies. Celestine obtained her PhD in Computer Science from ETH Zurich. She spent two years as an SNSF postdoctoral fellow at UC Berkeley and two years as a group leader at the MPI-IS. For the industrial impact of her work she was awarded the IBM Research Division Award, the Fritz Kutter Prize, and the ETH Medal. She is an ELLIS Scholar and a fellow of the Elisabeth Schiemann Kolleg.
Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident, but that contain factual inaccuracies, inaccurate summaries, misleading references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time. This talk examines the existence and feasibility of a legal duty for LLM providers to create models that "tell the truth." LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of "ground truth" in LLMs and with previously discussed truth-related risks in LLMs, including hallucinations, misinformation, and disinformation. EU human rights law and liability frameworks contain some truth-related obligations for products and platforms, but they are relatively limited in scope and sectoral reach. The talk concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs, and discusses "zero-shot translation" as a prompting method to constrain LLMs and better align their outputs with verified, truthful information.
Prof. Brent Mittelstadt is Professor of Data Ethics and Policy at the Oxford Internet Institute, University of Oxford, and a Principal Investigator at the Weizenbaum Institute in Berlin. He is a data ethicist specializing in AI ethics, algorithmic fairness and explainability, and technology law and policy. At Oxford he founded the Governance of Emerging Technologies (GET) research programme, which works across ethics, law, and emerging information technologies. At the Weizenbaum Institute he leads the Ethics and Governance of Innovation research group. Prof. Mittelstadt is the author of highly cited foundational works addressing the ethics of algorithms, AI, and Big Data; truth and accuracy in large language models (LLMs); fairness, accountability, and transparency in machine learning; data protection and non-discrimination law; group privacy; and ethical auditing of automated systems. His work in these areas is widely cited and has been implemented by researchers, policy-makers, and companies internationally, featuring in policy proposals and guidelines from the European Commission, European Parliament, United Nations, and US White House, as well as in products from Google, Amazon, and Microsoft.
Data-driven and learning-based methods have attracted considerable attention in recent years both for the analysis of dynamical systems and for control design. While there are many interesting and exciting results in this direction, our understanding of fundamental limitations of learning for control is lagging. This talk will focus on the question of when learning can be hard or impossible in the context of dynamical systems and control. In the first part of the talk, I will discuss a new observation on immersions and how it reveals some potential limitations in learning Koopman embeddings. In the second part of the talk, I will show what makes it hard to learn to stabilize linear systems from a sample-complexity perspective. While these results might seem negative, I will conclude the talk with some thoughts on how they can inspire interesting future directions.
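As background for the first part (standard definitions, not results from the talk): a Koopman embedding seeks an immersion \varphi that renders the dynamics x^{+} = f(x) linear in lifted coordinates,

    \varphi(f(x)) = A\, \varphi(x) \quad \text{for all } x,

so that z = \varphi(x) evolves linearly as z^{+} = A z; the observation in the talk concerns when such immersions can, or cannot, be recovered by learning.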
Necmiye Ozay is the Chen-Luan Family Faculty Development Professor of Electrical and Computer Engineering, and an associate professor of Electrical Engineering and Computer Science and of Robotics, at the University of Michigan, Ann Arbor. She received her PhD in Electrical Engineering from Northeastern University in 2010. After a postdoctoral position in Computing and Mathematical Sciences at Caltech, she joined Michigan in 2013. Her research interests include dynamical systems, control, optimization, and formal methods, with applications in learning-enabled cyber-physical systems, system identification, verification and validation, and safe autonomy. She received the 1938E Award and a Henry Russel Award from the University of Michigan for her contributions to teaching and research, as well as five young investigator awards, including an NSF CAREER Award. She is also the recipient of the 2021 Antonio Ruberti Young Researcher Prize from the IEEE Control Systems Society for her fundamental contributions to the control and identification of hybrid and cyber-physical systems.
Jan Peters is a full professor (W3) for Intelligent Autonomous Systems at the Computer Science Department of the Technische Universität Darmstadt. At the same time, he heads the research department on Systems AI for Robot Learning (SAIROL) at the German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI). He is also a founding research faculty member of the Hessian Center for Artificial Intelligence.
Alberto Sangiovanni-Vincentelli is the Edgar L. and Harold H. Buttner Chair of Electrical Engineering and Computer Sciences at the University of California, Berkeley. In 1980-1981, he was a Visiting Scientist at the Mathematical Sciences Department of the IBM T.J. Watson Research Center, and in 1987 he was a Visiting Professor at MIT. He is an author of over 800 papers, 17 books, and 2 patents in the areas of design tools and methodologies, large-scale systems, embedded systems, hybrid systems, and innovation.
He was a co-founder of Cadence and Synopsys, the two leading companies in the area of Electronic Design Automation, and the founder and Scientific Director of the PARADES Research Center in Rome.
The framework of game-theoretic (or multi-agent) learning explores how individual agent strategies evolve in response to the strategies of others. A central question is whether these evolving strategies converge to classical solution concepts, such as Nash equilibrium. This talk adopts a control-theoretic perspective by recognizing that learning agents interacting with one another form a feedback system. Learning dynamics are modeled as open dynamical systems that map payoffs, regardless of their source, into strategy updates, while the game itself provides the feedback interconnection. The focus is on uncoupled learning, where agents update strategies based solely on observed payoffs, without explicit knowledge of utility functions (their own or those of others). This perspective enables the use of control-theoretic tools to both analyze and synthesize learning dynamics. Leveraging the fact that convergence to Nash equilibrium corresponds to feedback stability, we show that uncoupled learning can, in general, lead to mixed-strategy Nash equilibria, while highlighting that the required learning dynamics are not universal and may sometimes involve seemingly irrational behavior.
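In symbols (our paraphrase of the feedback picture above), each agent's learning rule is an open system driven by its payoff stream, and the game closes the loop:

    \dot{x}_i = f_i(x_i, p_i), \qquad p_i = G_i(x_1, \dots, x_n), \qquad i = 1, \dots, n,

where x_i is agent i's strategy state and G maps the joint strategy profile to payoffs; convergence of the interconnection to Nash equilibrium is then exactly a feedback-stability question.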
Jeff Shamma is Department Head of Industrial and Enterprise Systems Engineering and Jerry S. Dobrovolny Chair at the University of Illinois Urbana-Champaign. He previously held faculty positions at the King Abdullah University of Science and Technology (KAUST) and at Georgia Tech, where he was the Julian T. Hightower Chair in Systems and Controls. Jeff received a PhD in Systems Science and Engineering from MIT in 1988. He is a Fellow of IEEE and IFAC, a past Distinguished Lecturer of the IEEE Control Systems Society, and a recipient of the IFAC High Impact Paper Award, the AACC Donald P. Eckman Award, and the NSF Young Investigator Award. Jeff has been a plenary/semi-plenary speaker at NeurIPS, the World Congress of the Game Theory Society, and the IEEE Conference on Decision and Control. He was Editor-in-Chief of the IEEE Transactions on Control of Network Systems from 2020 to 2024. Jeff's research focuses on decision and control, game theory, and multi-agent systems.