Mark your calendar! 2-5 September 2025 - Engelberg, Switzerland
Autonomous systems have changed significantly over the past five to ten years, as technological advances have been leveraged to meet a diverse set of interaction requirements driven by performance and capability needs. Conventional control strategies were typically designed for robustness and speed of the automated system within a controlled, well-regulated environment. However, recent demands for shared interaction between an automated system (including its controller), which can be modeled using first-principles techniques, and a human operator, whose behavior is best understood through other modeling frameworks, have pushed the need for alternative control approaches. Making matters more challenging, the optimal blend of human input and automatic-control input depends sensitively on system-, environment-, and task-specific characteristics. This talk will focus on methods that use cognitive hierarchy theory to characterize the bidirectional interactions between a human operator and an underlying system (a vehicle and its controller), along with iterative learning techniques for deducing an optimal arbitration level between human and autonomous inputs, in the context of a driver-training simulator environment in which a closed course is repeated. The talk will include initial driver-in-the-loop simulation results that illustrate the efficacy of the proposed approaches, as well as several of the fundamental research challenges that lie ahead.
Kira Barton is a Professor in the Robotics and Mechanical Engineering Departments at the University of Michigan. She received her B.Sc. in Mechanical Engineering from the University of Colorado at Boulder in 2001, and her M.Sc. and Ph.D. in Mechanical Engineering from the University of Illinois at Urbana-Champaign in 2006 and 2010. She is also serving as the Associate Director for the Automotive Research Center, a University-based U.S. Army Center of Excellence for modeling and simulation of military and civilian ground systems. She was a Miller Faculty Scholar at the University of Michigan from 2017 to 2020. Prof. Barton’s research focuses on advancements in modeling, sensing, and control for applications in smart manufacturing and robotics, with a specialization in learning and multi-agent systems. Kira is the recipient of a 2014 NSF CAREER Award, the 2015 SME Outstanding Young Manufacturing Engineer Award, the 2015 University of Illinois Department of Mechanical Science and Engineering Outstanding Young Alumni Award, the 2016 University of Michigan Department of Mechanical Engineering Department Achievement Award, and the 2017 ASME Dynamic Systems and Control Young Investigator Award. Kira was named one of 25 leaders transforming manufacturing by SME in 2022, and was selected as one of the 2022 winners of the Manufacturing Leadership Award from the Manufacturing Leadership Council. She became an ASME Fellow in 2024.
This talk summarizes recent rapid progress in integrating machine learning techniques with cloud computing and automation in the context of mixed autonomy (i.e., contexts in which humans interact with machines). The results are developed in the context of traffic automation. We present a new platform in which a cloud-based system broadcasts high-level “speed plans” (generated through optimal control and neural approximations of hyperbolic partial differential equations) via lossy communication channels, such as the cellular network, to a fleet of vehicles with a given level of automation. The vehicles, using their local automation stacks, collaboratively run deep reinforcement learning algorithms to control surrounding traffic. The algorithms and the platform are designed to smooth “stop-and-go” waves on freeways, which are a significant cause of energy waste and accidents. Results from a large-scale experiment involving 100 automated vehicles are presented, in which Nissan, Toyota, and GM vehicles collectively ran the algorithms on the I-24 freeway in Nashville, TN, demonstrating up to a 10% reduction in overall energy consumption.
Alexandre Bayen is the Associate Provost for Moffett Field Program Development, the Liao-Cho Innovation Endowed Chair, and a Professor of Civil and Environmental Engineering and of Electrical Engineering and Computer Sciences at UC Berkeley. Bayen’s research focuses on modeling and control of distributed parameter systems, with applications to transportation systems (air traffic control, highway systems) and distribution systems (water distribution networks). His research involves the control of systems modeled by partial differential equations, combinatorial optimization, viability theory, and optimal control. He is a member of several professional organizations, including the Institute of Electrical and Electronics Engineers (IEEE) and the American Institute of Aeronautics and Astronautics (AIAA). Bayen has authored two books and over 200 articles in peer-reviewed journals and conferences. He received the Ballhaus Award from Stanford University in 2004 and the CAREER award from the National Science Foundation in 2009, and he received the NASA Top 10 Innovators on Water Sustainability award in 2010.
Deploying robots in human-inhabited environments requires robots that can react rapidly, robustly, and safely to changes in their environment. Recent advances in machine learning for analyzing and modeling a variety of data offer powerful solutions for real-time control. For these techniques to be efficiently deployed and endorsed, they must be accompanied by explicit guarantees on the learned model.
This talk will give an overview of a variety of methods to endow robots with the reactivity necessary to adapt their paths in time-critical situations. The learned control laws are accompanied by theoretical guarantees of stability and boundedness. Paucity of data is a reality in many robotics tasks; I will present methods by which robots can learn control laws from only a handful of examples while generalizing to the entire state space. I will present a variety of applications, from dynamic manipulation in interaction with humans to reactive navigation in crowded pedestrian environments.
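As a generic illustration of what a stability guarantee on a learned control law can look like (a minimal sketch under simplifying assumptions, not the nonlinear learning methods presented in the talk): fit a linear dynamical system ẋ = Ax to demonstrations, then project A so that A + Aᵀ is negative definite, which makes V(x) = ‖x‖² a Lyapunov function and guarantees global convergence to the target.

```python
# Minimal sketch: learn a linear dynamical system xdot = A x from demonstrations
# and enforce a stability guarantee by projection. Illustrative only; not the
# (nonlinear) learning methods presented in the talk.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations of motions converging to the target at the origin.
A_true = np.array([[-1.0, -2.0], [2.0, -1.5]])
X = rng.standard_normal((500, 2))
Xdot = X @ A_true.T + 0.05 * rng.standard_normal((500, 2))

# Least-squares fit: solve Xdot ~ X @ A^T for A.
A_hat, *_ = np.linalg.lstsq(X, Xdot, rcond=None)
A_hat = A_hat.T

# Stability projection: shift the spectrum of the symmetric part so that
# A + A^T < 0, making V(x) = ||x||^2 a Lyapunov function
# (dV/dt = x^T (A + A^T) x < 0 for all x != 0).
sym = 0.5 * (A_hat + A_hat.T)
lam_max = np.linalg.eigvalsh(sym).max()
if lam_max >= 0:
    A_hat -= (lam_max + 1e-3) * np.eye(2)   # guarantees A + A^T negative definite

assert np.linalg.eigvalsh(A_hat + A_hat.T).max() < 0
```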
Aude Billard is a full professor, head of the LASA laboratory, and the Associate Dean for Education at the School of Engineering of the Swiss Federal Institute of Technology in Lausanne (EPFL). Prof. Billard currently serves as the President of the IEEE Robotics and Automation Society, director of the ELLIS Robot Learning Program, and co-director of the Robot Learning Foundation, a non-profit corporation that serves as the governing body behind the Conference on Robot Learning (CoRL). She also leads the Innovation Booster Robotics, a program funding technology transfer in robotics and powered by the Swiss Innovation Agency, Innosuisse.
Prof. Billard holds a BSc and MSc in Physics from EPFL and a PhD in Artificial Intelligence from the University of Edinburgh. She is an IEEE Fellow and the recipient of numerous recognitions, among them the Intel Corporation Teaching Award, the Swiss National Science Foundation Career Award, the Outstanding Young Person in Science and Innovation Award from the Swiss Chamber of Commerce, the IEEE RAS Distinguished Award, and the IEEE RAS Best Reviewer Award. Dr. Billard has been a plenary speaker at major robotics, AI, and control conferences (ICRA, AAAI, CoRL, HRI, CASE, ICDL, ECML, L4DC, IFAC Symposium, RO-MAN, Humanoids, and many others) and has served in various roles on the organizing committees of numerous international conferences in robotics. Her research spans machine learning and robotics, with a particular emphasis on fast and reactive control and on safe human-robot interaction. This research has received numerous best conference paper awards, as well as the prestigious King-Sun Fu Memorial Award for the best paper in the IEEE Transactions on Robotics, and is regularly featured in premier media venues (BBC, IEEE Spectrum, Wired).
Jonas Buchli is a Senior Research Scientist with DeepMind, London. He has been working at the intersection of machine learning and control for most of his career. He has been a contributor to a variety of interdisciplinary research projects in disaster assistance, architecture, biomedical technology, and paleoanthropology, among others.
In this talk, we introduce methods that remove the barriers to applying neural networks in real-life power systems. To date, neural networks have been applied in power systems as black boxes; given the high risks associated with power system operation, this has presented a major barrier to their adoption in practice. The talk first presents a short overview of the use of AI in power systems and of methods that lead to explainable and trustworthy AI. It then introduces a rigorous framework for trustworthy machine learning in power systems, with methods for (i) physics-informed neural networks for power systems and (ii) obtaining provable guarantees on neural network performance. Such methods have the potential to build the missing trust of power system operators in neural networks, and to unlock a series of new applications in power systems and other safety-critical systems.
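To give a flavor of item (i), here is a minimal sketch of a physics-informed training loop, assuming a toy DC power-flow model P = Bθ (a hypothetical 4-bus susceptance matrix B and voltage angles θ, not the models or methods of the talk): a network predicts angles from injections, and a physics residual penalizing violations of P = Bθ is added to the data loss.

```python
# Minimal sketch of a physics-informed loss for a toy DC power-flow model,
# P = B @ theta (susceptance matrix B, voltage angles theta). All settings
# here are illustrative assumptions, not the methods presented in the talk.
import torch

torch.manual_seed(0)
n_bus = 4
# Hypothetical Laplacian-like susceptance matrix of a small 4-bus network.
B = torch.tensor([[ 2., -1.,  0., -1.],
                  [-1.,  2., -1.,  0.],
                  [ 0., -1.,  2., -1.],
                  [-1.,  0., -1.,  2.]])

# Synthetic data: random angles (bus 0 as slack/reference), injections P = B theta.
theta_true = 0.1 * torch.randn(256, n_bus)
theta_true[:, 0] = 0.0
P = theta_true @ B.T

model = torch.nn.Sequential(
    torch.nn.Linear(n_bus, 64), torch.nn.Tanh(), torch.nn.Linear(64, n_bus))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    theta_hat = model(P)
    data_loss = torch.mean((theta_hat - theta_true) ** 2)
    # Physics residual: predicted angles must reproduce the injections P = B theta.
    physics_loss = torch.mean((theta_hat @ B.T - P) ** 2)
    loss = data_loss + physics_loss
    opt.zero_grad(); loss.backward(); opt.step()
```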
Spyros Chatzivasileiadis is a Full Professor at the Technical University of Denmark (DTU). He served as Head of the Power Systems Section at the DTU Department of Wind and Energy Systems until 2025. Before joining DTU, he was a postdoctoral researcher at the Massachusetts Institute of Technology (MIT) and at Lawrence Berkeley National Laboratory in the USA. Spyros holds a PhD from ETH Zurich, Switzerland (2013), and a Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece (2007). He currently works on trustworthy machine learning for power systems, quantum computing, and the optimization, dynamics, and control of power systems. Spyros has received the Best Teacher of the Semester Award at DTU Electrical Engineering and is the recipient of a 2020 ERC Starting Grant.
A key challenge in reinforcement learning (RL), in both single-agent and multi-agent settings, is how to tame uncertainty in a practically implementable and theoretically grounded form, one that remains amenable in the presence of complex function approximation such as large foundation models. In this talk, we develop both model-based and model-free frameworks that incentivize exploration via regularization, and show that they provably achieve the same rates as their standard RL counterparts, bypassing the need for sophisticated uncertainty quantification.
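As a toy illustration of "exploration via regularization" (a minimal sketch on a random finite MDP, not the frameworks developed in the talk), entropy regularization replaces the hard max in the Bellman backup with a soft, log-sum-exp backup, so the resulting policy stays stochastic and hence exploratory:

```python
# Toy sketch of "exploration via regularization": entropy-regularized (soft)
# value iteration on a random finite MDP. Illustrative only; not the
# model-based/model-free frameworks developed in the talk.
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma, tau = 10, 3, 0.9, 0.1          # states, actions, discount, temperature

P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] is a distribution over next states
R = rng.uniform(size=(S, A))                # rewards

V = np.zeros(S)
for _ in range(1000):
    Q = R + gamma * P @ V                   # Q[s, a]
    # Soft (log-sum-exp) backup in place of the hard max: the entropy bonus
    # keeps the optimal policy stochastic, hence exploratory.
    V_new = tau * np.log(np.exp(Q / tau).sum(axis=1))
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

Q = R + gamma * P @ V
pi = np.exp(Q / tau)                        # softmax (Boltzmann) policy
pi /= pi.sum(axis=1, keepdims=True)         # rows of pi sum to 1
```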
Dr. Yuejie Chi is a Professor in the Department of Statistics and Data Science at Yale University. She received her Ph.D. and M.A. from Princeton University, and B. Eng. (Hon.) from Tsinghua University, all in Electrical Engineering. Her research interests lie in the theoretical and algorithmic foundations of data science, signal processing, machine learning and inverse problems, with applications in sensing, imaging, decision making, and AI systems. Among others, Dr. Chi received the Presidential Early Career Award for Scientists and Engineers (PECASE), SIAM Activity Group on Imaging Science Best Paper Prize, IEEE Signal Processing Society Young Author Best Paper Award, and the inaugural IEEE Signal Processing Society Early Career Technical Achievement Award for contributions to high-dimensional structured signal processing. She is an IEEE Fellow (Class of 2023) for contributions to statistical signal processing with low-dimensional structures.
A key challenge in increasing renewable energy penetration is the limited utility-scale storage capacity of the power grid. Transportation electrification offers a promising solution, as idle electric vehicles (EVs) can provide battery storage services to the grid. This concept, known as EV-power grid integration, has the potential to significantly advance decarbonization efforts in both the electricity and transportation sectors. Additionally, flexible EV charging can help mitigate distribution network capacity risks.
However, ineffective scheduling of EV charging can paradoxically lead to higher operational costs and exacerbate capacity constraints. This issue arises from the inherent randomness in EV usage patterns and the strategic behavior of EV users.
To address these challenges, we propose a market-based solution for energy storage management. Our mechanism allows the system operator to efficiently integrate strategic EV fleets with unpredictable usage patterns, leveraging them as storage assets to meet EV demand, reduce costs, and maintain grid flexibility. We use this application to demonstrate the importance of information design with elicitation, a new area of research that enables the sale of information to support the operations of digital platforms.
We present computational results that demonstrate the effectiveness of this market-driven scheduling framework in enhancing the integration of time-flexible EVs for grid storage.
Munther A. Dahleh received his Ph.D. in Electrical and Computer Engineering from Rice University in 1987. Since then, he has been with the Department of Electrical Engineering and Computer Science (EECS) at MIT, where he is now the William A. Coolidge Professor of EECS. He is also a faculty affiliate of the Sloan School of Management and the founding director of the MIT Institute for Data, Systems, and Society (IDSS). He serves on multiple advisory boards, including the AI advisory boards of Samsung and Ikigai. His research program at MIT focuses on decisions under uncertainty, spanning a wide range of domains, and he is a leader in online education focusing on data science and AI. He is the author of the recent book Data, Systems, and Society: Harnessing AI for Societal Good (Cambridge University Press, April 2025).
TBA
Raffaello D’Andrea is an Italian–Canadian–Swiss professor, engineer, artist, and entrepreneur whose work bridges robotics, autonomous systems, and new media art. At Cornell University, he co-founded the Systems Engineering program and led the Robot Soccer Team to four RoboCup world championships, then the world’s leading robotics and AI competition. In 2002, he received the U.S. Presidential Early Career Award for Scientists and Engineers. In 2003, while on sabbatical, he co-founded Kiva Systems, whose adaptive, AI-driven fleets of mobile robots transformed warehouse logistics. Amazon acquired the company in 2012, rebranding it as Amazon Robotics in 2015; today, the technology powers over a million robots in Amazon facilities worldwide. In 2008, he joined the ETH Zurich faculty and founded the Institute for Dynamic Systems and Control. In 2013, he co-founded ROBO Global, the world’s first Robotics and AI exchange-traded fund. The following year, he founded Verity, a pioneer in spatial intelligence for industrial facilities whose cloud–edge solutions are deployed at nearly 200 sites worldwide. His honors include induction into the Logistics Hall of Fame, the National Academy of Engineering, and the National Inventors Hall of Fame. As a new media artist, his creations have been exhibited internationally, and his choreographed aerial robotics have illuminated live events for Metallica, Drake, Céline Dion, and Cirque du Soleil.
Learning Gaussian Mixture Models (GMMs) is a fundamental problem in machine learning, and the Expectation-Maximization (EM) algorithm and its variant, gradient-EM, are the most widely used algorithms in practice. When the ground-truth GMM and the learning model have the same number of components m, a line of prior work has attempted to establish rigorous recovery guarantees; however, these have been shown only for the case m = 2, and EM methods are known to fail to recover the ground truth when m > 2.
This talk considers the "over-parameterized" case, where the learning model uses n>m components to fit an m-component GMM. I will show that gradient-EM converges globally: for a well-separated GMM, I prove that with only mild over-parameterization n = \Omega(m log m), randomly initialized gradient-EM converges to the ground truth at a polynomial rate with polynomial samples. The analysis relies on novel techniques to characterize the geometric landscape of the likelihood loss. This is the first global convergence result for EM methods beyond the special case of m=2.
Maryam Fazel is the Moorthy Family Professor of Electrical and Computer Engineering at the University of Washington, with adjunct appointments in Computer Science and Engineering, Mathematics, and Statistics. Her current research is in optimization in machine learning, deep learning theory, and learning and control. Maryam received her PhD from Stanford University and her BS from Sharif University of Technology in Iran, and was a postdoctoral scholar at Caltech before joining UW. She is a recipient of the NSF CAREER Award, the UWEE Outstanding Teaching Award, and a UAI Best Student Paper Award (with her student). She is the lead PI of the Institute for Foundations of Data Science (IFDS), a multi-site NSF TRIPODS Institute.
My presentation explores the deep-rooted human quest for fairness and equality, drawing on evidence from anthropology, behavioral experiments, and large-scale representative studies. For most of human history, egalitarian norms shaped hunter-gatherer societies, and traces of this ethos remain evident in small-scale communities today. Experimental research demonstrates that deviations from equal sharing trigger strong sanctions, highlighting the persistence of fairness concerns across cultures. Using representative Swiss and Danish data, the analysis identifies three dominant preference types—inequality averse, altruistic, and selfish—each with distinct behavioral and political implications. These social preferences significantly influence support for redistribution, charitable giving, and reactions to information about inequality. Moreover, fairness concerns shape labor market outcomes, from pay compression to resistance against wage cuts. Overall, the findings underscore that egalitarian motives are not only a legacy of human evolution but also a powerful force shaping political attitudes, economic behavior, and institutional design in modern societies.
Ernst Fehr has been Professor of Microeconomics and Experimental Economics at the University of Zurich since 1994. He served as director of the Institute for Empirical Research in Economics and chairman of the Department of Economics at the University of Zurich. He currently serves as director of the UBS International Center of Economics in Society. He has been a Global Distinguished Professor at New York University since 2011 and was an affiliated faculty member of the Department of Economics at the Massachusetts Institute of Technology from 2003 to 2011. He is a former president of the Economic Science Association and of the European Economic Association, an honorary member of the American Academy of Arts and Sciences, and a John Kenneth Galbraith Fellow of the American Academy of Political and Social Sciences. He was the recipient of the Marcel Benoist Prize in 2008 and the Gottlieb Duttweiler Prize in 2013. Ernst Fehr was born in Hard (Vorarlberg, Austria) in 1956. He studied economics at the University of Vienna, where he later earned his doctorate and completed his habilitation. He has numerous publications in international top journals, including Science, Nature, Neuron, the Quarterly Journal of Economics, the American Economic Review, Econometrica, the Journal of Political Economy, and Psychological Science. His research focuses on the proximate patterns and evolutionary origins of human altruism and on the interplay between social preferences, social norms, and strategic interactions. He has conducted extensive research on the impact of social preferences on competition and cooperation, and on the psychological foundations of incentives. More recently, he has worked on the role of bounded rationality in strategic interactions and on the neurobiological foundations of social and economic behavior. Fehr’s work is characterized by the combination of game-theoretic tools with experimental methods, and by the use of insights from economics, social psychology, sociology, biology, and neuroscience for a better understanding of human social behavior.
Large Language Model (LLM) agents are increasingly making choices on behalf of humans in different scenarios, such as recommending news stories, searching for relevant related research papers, or deciding which product to buy. What drives LLMs' choices in subjective decision-making scenarios, where reasonable humans could have made different choices exercising their free will? In this talk, I will explore how LLMs' latent trust in (and preferences for) the brand identities of information sources (e.g., the author or publisher of news stories or research papers), the credentials of those sources (e.g., reputation or dis-reputation badges and measures such as awards or PageRank), and endorsements from other influential sources (e.g., recommendations from critics and reviewers) impacts the choices of agents powered by the LLMs. I will present extensive experiments using 10 LLMs from 6 major providers, which show that LLMs exhibit clear latent trust in (and preferences for) information from certain sources over others, recognizing the domain expertise of those sources. I will make the case for better understanding the origins of LLMs' latent trust and preferences (i.e., whether they arise during pre-training or through fine-tuning and instruction tuning) and for better control over these implicit biases (i.e., eliminating undesired biases and aligning desired biases with the humans or societies represented by the LLM agents).
Krishna Gummadi is a scientific director at the Max Planck Institute for Software Systems (MPI-SWS) in Germany. He also holds a professorship at Saarland University. He received his Ph.D. (2005) and B.Tech. (2000) degrees in Computer Science and Engineering from the University of Washington and the Indian Institute of Technology, Madras, respectively. Krishna's research interests are in the measurement, analysis, design, and evaluation of complex Internet-scale systems. His current projects focus on addressing the pressing scientific and engineering challenges arising out of our increasingly AI-driven, computing-mediated societies. Krishna's works have been widely cited, and his papers have received numerous (12) awards, including Test of Time Awards at ACM SIGCOMM and AAAI ICWSM.
After pre-training, large language models are aligned with human preferences based on pairwise comparisons. State-of-the-art alignment methods (such as PPO-based RLHF and DPO) are built on the assumption of aligning with a single preference model, despite being deployed in settings where users have diverse preferences. As a result, it is not even clear that these alignment methods produce models that satisfy users on average — a minimal requirement. Drawing on social choice theory and modeling users’ comparisons through individual Bradley-Terry (BT) models, we introduce an alignment method’s distortion: the worst-case ratio between the optimal achievable average utility and the average utility of the learned policy. The notion of distortion helps draw sharp distinctions between alignment methods: Nash Learning from Human Feedback achieves the minimax optimal distortion of a constant, while the most commonly used methods of RLHF (PPO- or DPO-based) can suffer unbounded distortion.
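In symbols (a sketch of the definition; the exact formalization in the talk may differ in details): with users i = 1, ..., N whose comparisons follow individual Bradley-Terry models with utilities u_i, and π_alg the policy produced by the alignment method,

```latex
% Sketch of an alignment method's distortion: N users with individual
% Bradley-Terry utilities u_1, ..., u_N, and \pi_{\mathrm{alg}} the policy
% learned from their pairwise comparisons. The distortion is the worst case,
% over utility profiles, of the ratio of optimal to achieved average utility:
\[
  \mathrm{distortion}
  \;=\;
  \sup_{u_1, \dots, u_N}\;
  \frac{\displaystyle \max_{\pi}\; \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{y \sim \pi}\!\left[u_i(y)\right]}
       {\displaystyle \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{y \sim \pi_{\mathrm{alg}}}\!\left[u_i(y)\right]} .
\]
% A constant distortion (as for Nash Learning from Human Feedback) means the
% learned policy's average utility is within a constant factor of optimal.
```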
Nika Haghtalab is an Assistant Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. She works broadly on the theoretical aspects of machine learning, artificial intelligence, and algorithmic economics. She received her Ph.D. from the Computer Science Department of Carnegie Mellon University, where her thesis won the CMU School of Computer Science Dissertation Award (ACM nomination) and the SIGecom Dissertation Honorable Mention. She is a co-founder of the Learning Theory Alliance (LeT-All). Among her honors are an NSF CAREER award, a Sloan fellowship, a Schmidt Sciences AI2050 fellowship, NeurIPS and ICAPS best paper awards, an exemplary paper award in the AI track at ACM EC, and several industry awards and fellowships.
Can we build neural architectures that go beyond Transformers by leveraging principles from dynamical systems? In this talk, I will introduce a novel approach to sequence modeling that draws inspiration from the emerging paradigm of online control to achieve efficient long-range memory, fast inference, and provable robustness. I will present theoretical insights, empirical results, and recent advances in fast sequence generation and provable length generalization. The talk will be self-contained and accessible to researchers across STEM disciplines—no prior background in control theory or sequence prediction is required.
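As background for the dynamical-systems view of sequence modeling (a generic sketch, not the architecture introduced in the talk): a linear state-space layer processes a sequence through an LTI recurrence, which is what gives such models long-range memory at constant cost per generated token.

```python
# Generic sketch of a linear state-space (LTI) sequence layer:
#   h_{t+1} = A h_t + B x_t,   y_t = C h_t.
# Illustrative background only; not the architecture introduced in the talk.
import numpy as np

rng = np.random.default_rng(0)
d_state, d_in, d_out, T = 16, 4, 4, 100

# A stable random state matrix (spectral radius < 1) gives decaying memory.
A = rng.standard_normal((d_state, d_state))
A *= 0.95 / np.max(np.abs(np.linalg.eigvals(A)))
B = rng.standard_normal((d_state, d_in))
C = rng.standard_normal((d_out, d_state))

x = rng.standard_normal((T, d_in))           # input sequence
h = np.zeros(d_state)
y = np.empty((T, d_out))
for t in range(T):                           # O(1) work per step at inference
    h = A @ h + B @ x[t]
    y[t] = C @ h
```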
Elad Hazan is a professor of computer science at Princeton University. His research focuses on the design and analysis of algorithms for basic problems in machine learning and optimization. Among his contributions are the co-invention of the AdaGrad algorithm for deep learning, the first sublinear-time algorithms for convex optimization, and online nonstochastic control theory. He is the recipient of the Bell Labs Prize, the IBM Goldberg Best Paper Award (twice), a European Research Council grant, a Marie Curie Fellowship, and a Google Research Award, and he is an ACM Fellow. He served on the steering committee of the Association for Computational Learning and was program chair of the Conference on Learning Theory in 2015. He is the co-founder and director of Google AI Princeton.
Understanding the behavior of neural networks includes understanding the computation they perform. In this talk, I will give two examples where a graph perspective on this computation offers avenues toward analyzing and understanding properties of training and prediction.
First, transformers (LLMs) are known to preferably focus on certain positions in a long input sequence; for instance, they tend to "get lost" in the middle and focus on the beginning and end. By taking a graph perspective on the flow of computation inside the transformer, we can better understand how these biases arise from masking and positional encodings, and quantify them.
Second, parameter symmetries within neural networks -- automorphisms in the computation graph -- affect properties related to training and model merging. We derive methods for removing such symmetries and empirically investigate the effects.
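For concreteness, the simplest such symmetry (a generic illustration, not the talk's construction): permuting the hidden units of an MLP layer, and applying the matching permutation to the next layer's weights, is an automorphism of the computation graph that leaves the network's function unchanged.

```python
# Generic illustration of a parameter symmetry: permuting hidden neurons of a
# two-layer ReLU MLP (and compensating in the next layer) leaves its
# input-output function unchanged.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 3)), rng.standard_normal(8)
W2 = rng.standard_normal((2, 8))

def mlp(x, W1, b1, W2):
    return W2 @ np.maximum(W1 @ x + b1, 0.0)   # two-layer ReLU network

perm = rng.permutation(8)
# Permute the hidden units, and apply the same permutation to W2's columns.
W1p, b1p, W2p = W1[perm], b1[perm], W2[:, perm]

x = rng.standard_normal(3)
assert np.allclose(mlp(x, W1, b1, W2), mlp(x, W1p, b1p, W2p))
```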
Stefanie Jegelka is an Associate Professor at MIT EECS and a Humboldt Professor at TU Munich. At MIT, she is a member of CSAIL, IDSS, and the Center for Statistics and Machine Learning, and is also affiliated with the ORC. Before that, Stefanie was a postdoc in the AMPLab and the computer vision group at UC Berkeley, and a PhD student at the Max Planck Institutes in Tübingen and at ETH Zurich.
Stefanie Jegelka's research is in algorithmic machine learning and spans modeling, optimization algorithms, theory, and applications. In particular, she works on exploiting mathematical structure in discrete and combinatorial machine learning problems, on robustness, and on scaling machine learning algorithms.
Accelerated by rapid advances in machine learning and AI, there has been tremendous success in the design of learning-enabled autonomous systems in areas such as autonomous driving, intelligent transportation, and robotics. However, these exciting developments are accompanied by new fundamental challenges that arise regarding the safety and reliability of these increasingly complex control systems in which sophisticated algorithms interact with unknown environments. In this talk, I will provide new insights and discuss exciting opportunities to address these challenges.
Imperfect learning algorithms, system unknowns, and uncertain environments require design techniques that rigorously account for uncertainty. I advocate for the use of conformal prediction (CP) — a statistical tool for uncertainty quantification — due to its simplicity, generality, and efficiency, as opposed to existing optimization-based neural network verification techniques, which are either conservative or not scalable, especially during runtime. I first provide an introduction to CP for the non-expert who is interested in applying CP to real-world engineering problems. My goal is then to show how CP can be used to predict failures of learning-enabled systems during their operation. In particular, we leverage CP to design two predictive runtime verification algorithms (an accurate and an interpretable version) that compute the probability that a high-level system specification is violated. Finally, we will discuss how robust versions of CP can deal with distribution shifts that arise when the deployed learning-enabled system differs from the system at design time.
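For the non-expert, the core CP recipe is short enough to state in full. The following is a minimal sketch of split conformal prediction for regression (the generic recipe, not the talk's runtime verification algorithms): calibrate a score quantile on held-out data, then any new prediction interval built with that quantile covers the truth with probability at least 1 − α, assuming exchangeable data.

```python
# Minimal sketch of split conformal prediction for regression.
# Generic recipe; not the talk's predictive runtime verification algorithms.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1                                  # target miscoverage level

# Toy data and a crude fixed predictor: y ~ 2x with noise, f(x) = 2x.
x = rng.uniform(-1, 1, size=1000)
y = 2 * x + 0.3 * rng.standard_normal(1000)
predict = lambda x: 2 * x

# Use half the data for calibration.
x_cal, y_cal = x[:500], y[:500]
scores = np.abs(y_cal - predict(x_cal))      # nonconformity scores

# Conformal quantile: the ceil((n + 1) * (1 - alpha))-th smallest score.
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha)))
q = np.sort(scores)[k - 1]

# Prediction interval for a new input; under exchangeability, it contains
# the true y_new with probability at least 1 - alpha.
x_new = 0.25
interval = (predict(x_new) - q, predict(x_new) + q)
print(interval)
```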
Lars Lindemann is currently an Assistant Professor for Algorithmic Systems Theory in the Automatic Control Laboratory at ETH Zürich. From 2023 to 2025 he was an Assistant Professor in the Thomas Lord Department of Computer Science at the University of Southern California. From 2020 to 2022 he was a Postdoctoral Fellow in the Department of Electrical and Systems Engineering at the University of Pennsylvania. He received his Ph.D. degree in Electrical Engineering from KTH Royal Institute of Technology in 2020. His research interests include systems and control theory, formal methods, machine learning, and autonomous systems. Professor Lindemann received the Outstanding Student Paper Award at the 58th IEEE Conference on Decision and Control and the Student Best Paper Award (as an advisor) at the 60th IEEE Conference on Decision and Control. He was finalist for the Best Paper Award (as an advisor) at the 2024 International Conference on Cyber-Physical Systems, the Best Paper Award at the 2022 Conference on Hybrid Systems: Computation and Control, and the Best Student Paper Award at the 2018 American Control Conference.
Predictions in the social world generally influence the target of prediction, a phenomenon known as performativity. In machine learning applications, performativity surfaces as distribution shift. A predictive model deployed on a digital platform, for example, influences consumption and thereby changes the data-generating distribution that forms the basis for future predictions. Performative prediction offers a conceptual framework to study performativity in machine learning. A consequence is a natural equilibrium notion that corresponds to a fixed point of retraining. Another consequence is a distinction between learning and steering, two mechanisms at play in performative prediction. In this talk I will focus on presenting some key technical results in performative prediction and discuss the role of user agency in contesting algorithmic systems, while highlighting connections to control along the way.
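To fix ideas, the basic formalism (following Perdomo et al., ICML 2020, which introduced performative prediction) makes both the fixed-point notion and the learning-versus-steering distinction precise: deploying a model θ induces a data distribution D(θ), and retraining iterates until the deployed model is optimal for the distribution it itself induces.

```latex
% Performative prediction in symbols (following Perdomo et al., ICML 2020):
% deploying a model \theta induces the data distribution D(\theta).
% Retraining iterates
\[
  \theta_{t+1} \;=\; \arg\min_{\theta}\; \mathbb{E}_{z \sim D(\theta_t)}\big[\ell(\theta; z)\big],
\]
% and a fixed point of retraining, a performatively stable point, satisfies
\[
  \theta_{\mathrm{PS}} \;=\; \arg\min_{\theta}\; \mathbb{E}_{z \sim D(\theta_{\mathrm{PS}})}\big[\ell(\theta; z)\big],
\]
% which in general differs from the performative optimum
\[
  \theta_{\mathrm{PO}} \;=\; \arg\min_{\theta}\; \mathbb{E}_{z \sim D(\theta)}\big[\ell(\theta; z)\big].
\]
```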
Celestine Mendler-Dünner is a Principal Investigator at the ELLIS Institute Tübingen, co-affiliated with the MPI for Intelligent Systems and the Tübingen AI Center. Her research focuses on machine learning in social context and the role of prediction in digital economies. Celestine obtained her PhD in Computer Science from ETH Zurich. She spent two years as an SNSF postdoctoral fellow at UC Berkeley and two years as a group leader at the MPI-IS. For the industrial impact of her work she was awarded the IBM Research Division Award, the Fritz Kutter Prize, and the ETH Medal. She is an ELLIS Scholar and a fellow of the Elisabeth Schiemann Kolleg.
Alberto Sangiovanni-Vincentelli is the Edgar L. and Harold H. Buttner Chair of Electrical Engineering and Computer Sciences at the University of California at Berkeley. In 1980-1981, he was a Visiting Scientist at the Mathematical Sciences Department of the IBM T.J. Watson Research Center, and in 1987 he was a Visiting Professor at MIT. He is an author of over 800 papers, 17 books, and 2 patents in the areas of design tools and methodologies, large-scale systems, embedded systems, hybrid systems, and innovation.
He was a co-founder of Cadence and Synopsys, the two leading companies in the area of Electronic Design Automation, and the founder and Scientific Director of the PARADES Research Center in Rome.
The framework of game-theoretic (or multi-agent) learning explores how individual agent strategies evolve in response to the strategies of others. A central question is whether these evolving strategies converge to classical solution concepts, such as Nash equilibrium. This talk adopts a control-theoretic perspective by recognizing that learning agents interacting with one another form a feedback system. Learning dynamics are modeled as open dynamical systems that map payoffs, regardless of their source, into strategy updates, while the game itself provides the feedback interconnection. The focus is on uncoupled learning, where agents update strategies based solely on observed payoffs, without explicit knowledge of utility functions (their own or others'). This perspective enables the use of control-theoretic tools to both analyze and synthesize learning dynamics. Leveraging the fact that convergence to Nash equilibrium corresponds to feedback stability, we show that uncoupled learning can, in general, lead to mixed-strategy Nash equilibria, while highlighting that the required learning dynamics are not universal and may sometimes involve seemingly irrational behavior.
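As one standard (illustrative, not the talk's construction) instance of this feedback viewpoint, consider smoothed fictitious play: payoff averaging followed by a softmax choice rule is an uncoupled open system, the game closes the loop, and the rest points of the interconnection are smoothed (logit) equilibria.

```latex
% A standard uncoupled learning dynamic written as a feedback system
% (smoothed fictitious play; illustrative, not the specific dynamics
% constructed in the talk). Agent i holds a payoff estimate p_i and plays
% the softmax strategy x_i:
\[
  \dot{p}_i = u_i(x_{-i}) - p_i,
  \qquad
  x_i = \sigma_\tau(p_i),
  \qquad
  \big(\sigma_\tau(p)\big)_k = \frac{e^{p_k/\tau}}{\sum_{j} e^{p_j/\tau}},
\]
% where u_i(x_{-i}) is the vector of expected payoffs of agent i's actions
% against the others' strategies: the open system maps payoffs to strategy
% updates, and the game supplies the feedback u_i = u_i(x_{-i}). Rest points
% satisfy x_i = \sigma_\tau(u_i(x_{-i})), i.e., logit (smoothed Nash)
% equilibria, which approach mixed-strategy Nash equilibria as \tau \to 0.
```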
Jeff Shamma is Department Head of Industrial and Enterprise Systems Engineering and Jerry S. Dobrovolny Chair at the University of Illinois Urbana-Champaign. He previously held faculty positions at the King Abdullah University of Science and Technology (KAUST) and at Georgia Tech, where he was the Julian T. Hightower Chair in Systems and Controls. Jeff received a PhD in Systems Science and Engineering from MIT in 1988. He is a Fellow of IEEE and IFAC, a past Distinguished Lecturer of the IEEE Control Systems Society, and a recipient of the IFAC High Impact Paper Award, the AACC Donald P. Eckman Award, and an NSF Young Investigator Award. Jeff has been a plenary/semi-plenary speaker at NeurIPS, the World Congress of the Game Theory Society, and the IEEE Conference on Decision and Control. He was Editor-in-Chief of the IEEE Transactions on Control of Network Systems from 2020 to 2024. Jeff’s research focuses on decision and control, game theory, and multi-agent systems.