Focus period Linköping 2022

Seminar series by visiting scholars

The visiting scholars participating in the 2022 ELLIIT Focus Period will all give a public seminar on campus. The seminars are open to everyone and no registration is needed.


October 21, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Minimizing Entropy to Discover Good Solutions to Recurrent Mixed Integer Programs

Charly Robinson La Rocca: PhD student at Université de Montréal, Canada

Charly Robinson La Rocca

Abstract

Current state-of-the-art solvers for mixed-integer programming (MIP) problems are designed to perform well on a wide range of problems. However, for many real-world use cases, problem instances come from a narrow distribution. This has motivated the development of methods that can exploit the information in historical datasets to guide the design of heuristics. Recent works have shown that machine learning (ML) can be integrated with the MIP solver to inject domain knowledge and efficiently close the optimality gap. This hybridization is usually done with deep learning (DL), which requires a large dataset and extensive hyperparameter tuning to perform well. This presentation proposes a method that uses the notion of entropy to efficiently build a model with minimal training data and tuning. We test our method on the locomotive assignment problem (LAP), a recurring real-world problem that is challenging to solve at scale. Experimental results show a speed-up of an order of magnitude over CPLEX, with an optimality gap of less than 2%.
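As background on the role entropy can play here, the sketch below is illustrative only (not the speaker's algorithm): it computes the Shannon entropy of each binary variable across a small, made-up set of historical MIP solutions. Variables whose values hardly vary across solutions (low entropy) are natural candidates for fixing when warm-starting a new instance from the same distribution.

```python
# Illustrative sketch, not the method presented in the talk: per-variable
# Shannon entropy over a hypothetical archive of historical MIP solutions.
import numpy as np

# Rows = historical solutions, columns = binary decision variables.
solutions = np.array([
    [1, 0, 1, 1],
    [1, 0, 1, 0],
    [1, 0, 1, 1],
    [1, 0, 0, 1],
])

p = solutions.mean(axis=0)  # empirical P(x_j = 1)
with np.errstate(divide="ignore", invalid="ignore"):
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
entropy = np.nan_to_num(entropy)  # convention: 0 * log 0 = 0

print(np.round(entropy, 3))  # [0.    0.    0.811 0.811]
```

Variables with zero entropy (the first two columns) take the same value in every historical solution, so fixing them shrinks the search space at little risk.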

Biography

Charly Robinson La Rocca is a 30-year-old Ph.D. student in Computer Science supervised by Emma Frejinger at Université de Montréal. He is half French Canadian, half Italian and grew up in Montreal, where he spent most of his childhood learning about music and developed a deep interest in the classical guitar. At the beginning of his career, he acquired a degree in engineering physics and then pursued a master’s in operations research under the supervision of Jean-François Cordeau. Charly researched the electric taxi dispatching problem with a local start-up, for which he worked as an employee to develop and implement the algorithms in production. He discovered the power of AI in 2018 and decided to do his Ph.D. in 2019. Charly’s thesis is on the integration of learning algorithms to accelerate the discovery of solutions for combinatorial optimization problems such as the locomotive assignment problem.

Machine learning-supported decomposition algorithms for a large scale hub location problem

Alexander Helber: PhD student at RWTH Aachen University, Germany

Alexander Helber

Abstract

Decomposition methods like Dantzig-Wolfe (DW) or Benders decomposition allow solving large-scale optimization problems exactly, but rely on various heuristic decisions during the solving process. For example, when a DW-reformulated problem is solved with branch-price-and-cut, such decisions include which pricing problems to solve or whether to branch before column generation terminates. Using machine learning to improve these heuristic decisions can speed up the solution process considerably.
We present a recently started industry project dealing with large-scale hub location problems, which are often solved with decomposition methods. This talk covers concepts from the literature and preliminary ideas for integrating machine learning into the solving process for this class of problems.
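For readers less familiar with the terminology, the textbook Dantzig-Wolfe setting behind these decisions (a generic sketch, not the project's specific model) is a restricted master problem over columns \(p\) generated for each block \(k\),

\[
\min_{\lambda \ge 0} \; \sum_{k} \sum_{p \in P_k} c_p \lambda_p
\quad \text{s.t.} \quad
\sum_{k} \sum_{p \in P_k} a_p \lambda_p \ge b, \qquad
\sum_{p \in P_k} \lambda_p = 1 \;\; \forall k,
\]

together with one pricing problem per block that searches for a new column with negative reduced cost \(\bar c_p = c_p - \pi^\top a_p - \mu_k\), where \(\pi\) and \(\mu_k\) are the current master duals. The heuristic decisions mentioned above concern, for instance, which of these pricing problems to solve in a given iteration and when to branch.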

Biography

Alexander Helber is a Ph.D. student at the Chair of Operations Research at RWTH Aachen, Germany, since 2022. He likes modeling and solving real-world problems using mixed-integer programming and decomposition methods. His main research focus is optimization in logistics, where AI may be used to predict input data or speed up the solving process.

Alexander did his B.Sc. and M.Sc. in Business Administration and Electrical Power Engineering at RWTH Aachen with a focus on Operations Research and Management.


October 24, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Graph Neural Networks as Fast Global Search Heuristics for Constraint Satisfaction

Jan Tönshoff: PhD student at RWTH Aachen University, Germany

Jan Tönshoff

Abstract

We propose a universal Graph Neural Network architecture which can be trained as an end-to-end search heuristic for any Constraint Satisfaction Problem (CSP). Our architecture can be trained unsupervised with policy gradient descent to generate problem-specific heuristics for any CSP in a purely data-driven manner. The approach is based on a novel graph representation for CSPs that is both generic and compact and enables us to process every possible CSP instance with one GNN, regardless of constraint arity, relations or domain size. Unlike previous RL-based methods, we operate on a global search action space and allow our GNN to modify any number of variables in every step of the stochastic search. This enables our method to properly leverage the inherent parallelism of GNNs. We perform a thorough empirical evaluation where we learn heuristics for well-known and important CSPs from random data, including graph coloring, MaxCut, 3-SAT and MAX-k-SAT. Our approach outperforms prior approaches for neural combinatorial optimization by a substantial margin. It can compete with, and even improve upon, conventional search heuristics on test instances that are several orders of magnitude larger and structurally more complex than those seen during training.
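To make the global search action space concrete, the toy sketch below (illustrative only; a uniform random policy stands in for the trained GNN) runs a stochastic search for a small graph-coloring CSP in which every variable may change its value in each step.

```python
# Illustrative stand-in for a GNN-guided global search on a CSP instance.
import numpy as np

rng = np.random.default_rng(0)

# CSP: graph 3-coloring. Vertices are variables with domain {0, 1, 2};
# edges are binary "not-equal" constraints.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n_vars, n_colors = 4, 3

def violations(assignment):
    """Number of constraints violated by an assignment (to be minimized)."""
    return sum(assignment[u] == assignment[v] for u, v in edges)

assignment = rng.integers(0, n_colors, size=n_vars)
for _ in range(100):
    # A trained GNN policy would output value scores for every variable;
    # here random scores take its place.
    scores = rng.normal(size=(n_vars, n_colors))
    proposal = scores.argmax(axis=1)          # all variables move jointly
    if violations(proposal) <= violations(assignment):
        assignment = proposal
    if violations(assignment) == 0:
        break

print("conflicts:", violations(assignment), "assignment:", assignment)
```

The point of the global action space is visible in the proposal step: the policy scores all variable-value pairs at once, so the search can change any subset of variables in parallel rather than flipping one variable at a time.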

Biography

Jan Tönshoff is a PhD student at the Chair for Computer Science 7 of RWTH Aachen, where he works with Prof. Martin Grohe. His research focuses on applying machine learning to relational data, such as graphs and databases. A special focus of his research is developing end-to-end neural networks for combinatorial optimization problems, such as CSPs.


October 25, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Learning-based model predictive control with applications to autonomous racing and multi-agent coverage control

Andrea Carron: Senior Lecturer at ETH Zürich, Switzerland


Abstract

In recent years, the combination of learning-based techniques and model predictive control has received considerable attention from the control and learning communities due to its ability to handle constrained nonlinear systems and to improve performance using data. In this talk, we present two applications of learning-based control to a robotic platform composed of miniature RC race cars. After introducing the robotics platform, we will study the problem of changing racing conditions in an autonomous racing setup, e.g., the weight of the car or the tire grip. We show that, in order to achieve satisfactory performance, it is not enough to adapt the predictive model; the controller parameters also need to be adjusted. We leverage Bayesian regression to learn an improved predictive model and contextual Bayesian optimization to data-efficiently learn the controller parameters for different contexts. We perform an extensive evaluation with more than 3’000 driven laps, demonstrating that our approach successfully optimizes the lap time across different contexts faster than standard Bayesian optimization. The second problem deals with multi-agent coverage control, where a fleet of agents covers an area, also called the environment, in which different regions can have different importance according to a so-called utility function. We will discuss how it is possible to deal with constrained nonlinear agents as well as learn the utility function from data and still converge to an optimal partitioning.

Biography

Andrea Carron is a Senior Lecturer at ETH Zürich. He received the bachelor’s, master’s, and Ph.D. degrees in control engineering from the University of Padova, Padova, Italy. During his master’s and Ph.D. studies, he spent three stays abroad as a Visiting Researcher: the first at the University of California at Riverside, Riverside, CA, USA, the second at the Max Planck Institute, Tübingen, Germany, and the third at the University of California at Santa Barbara, Santa Barbara, CA, USA. From 2016 to 2019, he was a Post-Doctoral Fellow with the Intelligent Control Systems Group, ETH Zürich, Zürich, Switzerland. His research interests include safe learning, learning-based control, multi-agent systems, and robotics.

Sample Complexity Analysis and Self-Regularization in Identification of Overparameterized ARX Models

Zexiang Liu: PhD student at the University of Michigan, USA

Zexiang Liu

Abstract

AutoRegressive eXogenous (ARX) models are widely studied in control theory, econometrics, and statistics, but their finite-sample identification properties are not yet well understood due to the strong temporal statistical dependency between data samples. In this work, for ARX models with potentially unknown orders, we study the sample complexity of the ordinary least squares (OLS) estimator that identifies the ARX parameters from either a single trajectory or multiple i.i.d. trajectories. Our results show that as long as the orders of the model are chosen greater than the ground-truth ones (that is, we overparameterize the ARX model), the OLS estimator will converge to the true (low-order) ARX parameters with high probability. This occurs without the aid of any regularization and is thus referred to as the self-regularization property of the OLS estimator.
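A minimal numerical illustration of the phenomenon (a sketch with made-up numbers, not taken from the talk): fitting an overparameterized ARX model by ordinary least squares recovers the true low-order coefficients, with the redundant ones driven toward zero.

```python
# Fit an overparameterized ARX model with plain least squares.
import numpy as np

rng = np.random.default_rng(1)
T = 5000
u = rng.normal(size=T)
y = np.zeros(T)
# Ground truth ARX(1, 1):  y_t = 0.8 y_{t-1} + 0.5 u_{t-1} + w_t
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1] + 0.1 * rng.normal()

p = q = 4  # chosen orders larger than the true ones (overparameterization)
rows, targets = [], []
for t in range(max(p, q), T):
    rows.append(np.concatenate([y[t - p:t][::-1], u[t - q:t][::-1]]))
    targets.append(y[t])
X, Y = np.asarray(rows), np.asarray(targets)

theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print("AR coefficients:", np.round(theta[:p], 3))  # approx. [0.8, 0, 0, 0]
print("X  coefficients:", np.round(theta[p:], 3))  # approx. [0.5, 0, 0, 0]
```

No regularization term is added anywhere; the redundant coefficients shrink on their own, which is the self-regularization behavior the abstract refers to.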

Biography

Zexiang Liu is a fifth-year PhD student at the University of Michigan, under the supervision of Prof. Necmiye Ozay. Before that, he received a B.S. degree in Engineering from Shanghai Jiao Tong University in 2016, and an M.S. degree in Electrical and Computer Engineering from the University of Michigan in 2018.

His research interests include formal methods in control, system identification, and data-driven control, and he is currently working on sample complexity analysis for identifying ARMAX models, and safety control for systems with preview information.


October 26, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Identification of nonlinear stochastic differential-algebraic equation models

Håkan Hjalmarsson: Professor at KTH Royal Institute of Technology, Sweden

Håkan Hjalmarsson

Abstract

Differential-algebraic equations, commonly used to model physical systems, are the basis for many equation-based object-oriented modeling languages. In this talk we discuss how to estimate parameters in such models from systems that are subject to process disturbances. Statistically optimal estimation methods, such as maximum likelihood and the prediction error method using the optimal one-step-ahead predictor, are generically expensive to compute. Here we take a step back, sacrificing statistical optimality for computational tractability, and propose a simpler version of the prediction error method which is still consistent.
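For reference, the prediction error method mentioned here selects the parameter estimate by minimizing the sample average of squared one-step-ahead prediction errors,

\[
\hat{\theta}_N = \arg\min_{\theta} \; \frac{1}{N} \sum_{t=1}^{N} \big\lVert y_t - \hat{y}_{t \mid t-1}(\theta) \big\rVert^2 ,
\]

where \(\hat{y}_{t \mid t-1}(\theta)\) is a one-step-ahead predictor. For stochastic differential-algebraic equation models the statistically optimal predictor is generically expensive to compute, which is why the talk trades it for a simpler predictor while retaining consistency of the estimate.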

Biography

Håkan Hjalmarsson was born in 1962. He received the M.S. degree in Electrical Engineering in 1988, and the Licentiate and Ph.D. degrees in Automatic Control in 1990 and 1993, respectively, all from Linköping University, Sweden. He has held visiting research positions at the California Institute of Technology, Louvain University, and the University of Newcastle, Australia. He has served as an Associate Editor for Automatica (1996-2001) and IEEE Transactions on Automatic Control (2005-2007), and has been Guest Editor for the European Journal of Control and Control Engineering Practice. He is Professor at the School of Electrical Engineering, KTH, Stockholm, Sweden. He is an IEEE Fellow and past Chair of the IFAC Coordinating Committee CC1 Systems and Signals. In 2001 he received the KTH award for outstanding contribution to undergraduate education. His research interests include system identification, learning of dynamical systems for control, and estimation in communication networks.

Mixed-integer optimization for AI

Jan Kronqvist: Assistant Professor at KTH Royal Institute of Technology, Sweden

Jan Kronqvist

Abstract

In this talk, we discuss how mixed-integer programming (MIP) can be used for analyzing deep neural networks (DNNs). MIP can, e.g., be used to analyze the robustness of DNNs and to find adversarial examples. In the talk, we also look at how MIP can provide some degree of explainability for DNNs in certain types of AI/ML problems. We also discuss the challenges of solving MIP optimization problems to identify inputs that produce extreme outputs of the DNN given some constraints. We discuss some shortcomings of the classical approaches for encoding a DNN as a MIP, e.g., the weak relaxations of big-M formulations and the problem size of the so-called “ideal” node formulation. For these MIP problems it is possible to obtain significant advantages through a new class of formulations called P-split formulations, and we will go through these formulations in more detail.
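As a concrete example of the classical encodings discussed here, a single ReLU unit \(y = \max(0, w^\top x + b)\) with valid bounds \(L \le w^\top x + b \le U\) is commonly modeled with a binary activation variable \(z\) (a standard textbook big-M formulation, not specific to this talk):

\[
y \ge w^\top x + b, \qquad y \ge 0, \qquad
y \le w^\top x + b - L\,(1 - z), \qquad y \le U z, \qquad z \in \{0, 1\}.
\]

Here \(z = 1\) forces \(y = w^\top x + b\) (the active case) and \(z = 0\) forces \(y = 0\) (the inactive case). When \(|L|\) and \(U\) are large, the continuous relaxation \(z \in [0, 1]\) becomes weak, which is one of the shortcomings the P-split formulations aim to address.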

Biography

Jan Kronqvist is an Assistant Professor in Optimization and Systems Theory at the Department of Mathematics at KTH Royal Institute of Technology in Sweden. His research focuses on mixed-integer optimization, specifically on algorithm development, strong convex relaxations, and mixed-integer optimization for AI/ML. Kronqvist is a leading researcher in convex mixed-integer nonlinear optimization, having developed several state-of-the-art algorithms and award-winning software.

Jan Kronqvist received the Howard Rosenbrock Prize in 2022 for the best paper in the journal Optimization and Engineering. He was awarded best paper at the CPAIOR 2021 conference in Vienna for his work on intermediate relaxations of disjunctive constraints. The intermediate relaxations have resulted in great benefits for AI/ML applications, such as optimization over ReLU DNNs and mixed-integer clustering. He has papers in AAAI-20 and NeurIPS-21 on mixed-integer optimization for robust verification and optimal adversarial inputs of DNNs. He is founder and developer of the open-source MINLP solver SHOT, for which he won the COIN-OR Cup in 2018 during the INFORMS Annual Meeting in Phoenix.

Jan Kronqvist was awarded a Newton International Fellowship in 2018 by the Royal Society in the UK, and he spent two years as a postdoctoral researcher at Imperial College London. He graduated from Åbo Akademi University in Finland in 2018 and received the prize for best PhD thesis at the Faculty of Science and Engineering. He has served as a cluster chair at the INFORMS Annual Meeting, CORS-INFORMS, and EUROPT. He was an organizer of the MINLP (Virtual) Workshop 2021 and the head organizer of Stockholm Optimization Days 2022.


October 27, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Learning Probabilistic Circuits Using Stochastic Computation Graphs

Pedro Zuidberg Dos Martires: Postdoc at Örebro University, Sweden

Pedro Zuidberg Dos Martires

Abstract

Probabilistic circuits are an expressive class of probabilistic models that allow, under mild conditions, for closed-form parameter learning and the tractable computation of probabilistic queries, e.g. marginal probabilities. However, learning the structure of probabilistic circuits from data is a computationally hard task that is usually solved by greedy search approaches or other non-differentiable techniques. In this talk we introduce continuous relaxations of the discrete structure search using the concept of stochastic computation graphs. The main advantage of our approach is its amenability to gradient-based learning and deployment on modern coprocessors (e.g., GPUs).
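As one standard example of such a continuous relaxation (not necessarily the construction used in the talk), a discrete choice among k options with logits θ can be replaced by a Gumbel-softmax sample, which is a smooth function of θ and can therefore sit inside a stochastic computation graph trained by gradient descent.

```python
# Gumbel-softmax relaxation of a categorical choice (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Smooth surrogate for sampling one option from Categorical(softmax(logits))."""
    u = rng.uniform(low=1e-12, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))              # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = np.exp(z - z.max())              # numerically stable softmax
    return z / z.sum()

print(gumbel_softmax(np.array([2.0, 0.5, -1.0])))  # nearly one-hot for small tau
```

As the temperature tau is lowered, the samples approach one-hot selections, so discrete structural choices can be relaxed during training and discretized afterwards.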

Biography

Pedro Zuidberg Dos Martires is currently a postdoctoral researcher in the Machine Perception and Interaction lab at the University of Örebro. His research is best described as “probabilistic neuro-symbolic artificial intelligence”, as he focuses on performing probabilistic inference on sensor data, while taking symbolic constraints into account. Prior to joining the University of Örebro, Pedro obtained his PhD under the supervision of Luc De Raedt from the KU Leuven in 2020, where he was subsequently also employed as a PostDoc for a year.

Multilayer Hybrid Models for Power Outage Analysis of a Brazilian Distribution Service Operator

Marcelo Azevedo Costa: Professor at Federal University of Minas Gerais, UFMG/Brazil

Marcelo Azevedo Costa

Abstract

The quality of electrical energy distribution services is undoubtedly negatively correlated with power outages. Nevertheless, power outages are affected by both managerial and non-managerial factors. For instance, environmental factors are known drivers of power outages worldwide. To tackle the complex behavior of power outages, a multilayer hybrid model combining spatial modelling, statistical regression and machine learning models is proposed. The case study, and the main motivation, is the prediction of the power outage indicator for the largest Brazilian distribution service operator, located in the southeast region. The estimated model will drive future management decisions to reduce compensation paid to consumers and fines charged by the regulator, consequently increasing investments in network expansion and quality of service.

Biography

Prof. Marcelo Azevedo Costa is with the Federal University of Minas Gerais (UFMG/Brazil), currently as a Full Professor in the Department of Production Engineering. He is a collaborating professor in the Laboratory of Intelligence and Computational Technology (LITC), developing research projects in artificial neural networks and hybrid models. He is the coordinator of Decision Support and Reliability Laboratory (LADEC), developing research projects in economic regulation in the electricity sector. His research interests include statistical models applied to the electrical sector, applied statistical methods, network analysis, spatial statistics, time series analysis, artificial neural network theory and applications.


November 3, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

(Reinforcement) Learning for Guiding Metaheuristics

Günther Raidl: Professor at the Institute of Logic and Computation, TU Wien, Austria

Günther Raidl

Abstract

The machine learning boom of recent years has also led to interesting new developments in the area of metaheuristics.
Classical heuristic techniques for approaching hard combinatorial optimization problems are frequently based on construction heuristics, local search, and tree search, sometimes in combination with (mixed-integer) linear programming or constraint programming principles. While end-to-end machine learning approaches are still far from replacing these established techniques in combinatorial optimization, it has been recognized that the latter may benefit from incorporating learning for certain purposes.
One may say the aim is to “learn how to better optimize”. This talk will give an overview of some promising recent developments in this direction. For tree-search-based approaches, for example, variants have been proposed that learn better-performing branching and node selection strategies. In beam search, guidance heuristics may be learned that yield better results than leading manually crafted heuristics. Large neighborhood search approaches have been proposed in which the construction of the neighborhoods to be applied is learned. Some of these methods rely on imitation or supervised learning, where labeled training data or some other powerful method to learn from needs to be available. Methods based on reinforcement learning principles may be more versatile, and we will also take a look at these.

Biography

Günther Raidl is Professor at the Institute of Logic and Computation, TU Wien, Austria, and member of the Algorithms and Complexity Group. He received his PhD in 1994 and completed his habilitation in Practical Computer Science in 2003 at TU Wien. In 2005 he received a professorship position for combinatorial optimization at TU Wien.

His research interests include algorithms and data structures in general and combinatorial optimization in particular, with a specific focus on metaheuristics, mathematical programming, intelligent search methods, and hybrid optimization approaches. His research work typically combines theory and practice for application areas such as scheduling, network design, transport optimization, logistics, and cutting and packing.

Günther Raidl is an associate editor for the INFORMS Journal on Computing and the International Journal of Metaheuristics, and serves on the editorial boards of several journals, including Algorithms, Metaheuristics, and Evolutionary Computation. He is co-founder and steering committee member of the annual European Conference on Evolutionary Computation in Combinatorial Optimization (EvoCOP). Since 2016 he has also been a faculty member of the Vienna Graduate School on Computational Optimization.

Günther Raidl has co-authored a textbook on hybrid metaheuristics and over 180 reviewed articles in scientific journals, books, and conference proceedings. Moreover, he has co-edited 13 books. More information can be found at http://www.ac.tuwien.ac.at/raidl.

Using learning and model-based methods for combinatorial problem solving

Herke van Hoof: Assistant Professor at the University of Amsterdam, the Netherlands

Herke van Hoof

Abstract

Large-scale combinatorial problems usually cannot be solved exactly. Engineered heuristics are a popular alternative. I will present joint work with Wouter Kool and Max Welling on learning such heuristics from data rather than hand-engineering them. In our work, we focus on finding methods and models that suit the characteristic properties of such problems, such as symmetries and determinism. The resulting approach shows strong performance on routing problems such as the TSP. Rather than replacing traditional model-based methods, we also propose a hybrid approach combining data-driven and model-based elements.

Biography

Herke van Hoof is currently an assistant professor at the University of Amsterdam in the Netherlands, where he is part of AMLab. He is interested in reinforcement learning with structured data and prior knowledge. Reinforcement learning is a very general framework, but this generality tends to result in extremely data-hungry algorithms. Exploiting structured prior knowledge, or using value function or policy parametrizations that respect known structural properties, is a promising avenue to learn more with less data. Examples of this line of work include reinforcement learning (RL) for combinatorial optimisation, RL with symbolic prior knowledge, and equivariant RL.

Before joining the University of Amsterdam, Herke van Hoof was a postdoc at McGill University in Montreal, Canada, where he worked with Professors Joelle Pineau, Dave Meger, and Gregory Dudek. He obtained his PhD at TU Darmstadt, Germany, under the supervision of Professor Jan Peters, and graduated in November 2016. Herke received his bachelor’s and master’s degrees in Artificial Intelligence at the University of Groningen in the Netherlands.


November 4, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Machine learning-based algorithms for dynamic patient scheduling problems with uncertainty

Tu-San Pham: Postdoc at Polytechnique Montréal, Canada

Tu-San Pham

Abstract

In dynamic patient scheduling problems, scheduling decisions are taken periodically (daily, weekly, …) and are evaluated in a rolling horizon. As patient arrivals are stochastic, it is important to take uncertainty into account. Machine learning can provide useful tools for extracting knowledge from the historical distribution of patient arrivals to assist scheduling decisions. In this talk, we will present two case studies where machine learning-based algorithms successfully solve dynamic scheduling problems in the healthcare domain.
The first case study is an online dynamic appointment scheduling problem in the context of radiotherapy treatment. We solve the problem in a rolling-horizon setting where treatment requests of different priorities and deadlines are revealed daily. A prediction-based approach is proposed, where a regression model learns from offline solutions to smartly delay treatments of non-urgent patients and make space for emergencies.
In the second case study, we investigate a dynamic home healthcare scheduling and routing problem, where caregivers travel to patients’ homes to provide care multiple times a week. When a new service request is received, they have to decide whether to accept or refuse the request. The objective is to maximize the number of accepted requests over a period of time. A reinforcement learning algorithm is proposed for this purpose.

Biography

After obtaining her PhD in Operations Research at KU Leuven, Belgium, Tu-San Pham worked as a postdoctoral fellow at the same university before joining Polytechnique Montréal in November 2020 under the supervision of Prof. Louis-Martin Rousseau. During her PhD, Tu-San studied hybrid methods that combine techniques from both Artificial Intelligence and Operations Research. At Polytechnique Montréal, as a member of HANALOG (the Canadian research chair on analytics and logistics of healthcare), she applies her knowledge of hybrid methods to optimize the design and delivery of healthcare. She has been working on multiple projects to improve the quality of care at several hospitals and health centres in Montréal, Canada. Her latest project involves applying Reinforcement Learning in dynamic multi-appointment patient scheduling under uncertainty. Her current research interest focuses on incorporating stochastic factors in decision-making under uncertainty and their applications in appointment scheduling.

Title TBC

Giuseppe Nicosia: Associate Professor at the University of Catania, Italy

Giuseppe Nicosia

Abstract

TBC

Biography

Giuseppe Nicosia is Associate Professor in Bioengineering and Artificial Intelligence at the Department of Biomedical & Biotechnological Sciences – School of Medicine, University of Catania, Italy.

Giuseppe Nicosia received the PhD degree in Computer Science and Synthetic Biology. Since 2017 he has been a Visiting Professor at the University of Cambridge, UK. Giuseppe Nicosia has obtained the National Scientific Qualification for Full Professor in the areas of Bioengineering and Computer Science. He is currently involved in the design and development of deep learning algorithms for systems and synthetic biology, and bioengineering. He is author and co-author of more than 175 papers in international journals and conference proceedings, book chapters and 20 edited books. He has chaired several international conferences, summer schools, advanced courses and workshops in the research areas of artificial intelligence, machine learning, data science, computational biology, synthetic biology, and bioengineering. He has founded and chaired the Advanced Course on Data Science and Machine Learning – ACDL (2018-’22), the Synthetic and Systems Biology Summer School – SSBSS (2014-’19), the Conference on Machine Learning, Optimization and Data Science – LOD (2015-’22) and the International Advanced Course on Artificial Intelligence & Neuroscience – ACAIN (2021-’22). He has received several research grants as PI in the areas of Bioengineering, Synthetic Biology, Systems Biology, Artificial Intelligence, Machine Learning and Data Science. His research has received awards at several premier conferences and journals. Giuseppe Nicosia regularly serves as Area Chair or Senior Program Committee member for several top conferences. He has been a visiting professor and scholar at MIT, the University of Cambridge, the University of Florida, Nicolaus Copernicus University and New York University.


November 7, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Learning Planning Representations As A Combinatorial Optimization Problem: Formulations, Techniques and Results

Andrés Occhipinti Liberman: Postdoc at Universitat Pompeu Fabra, Spain

Andrés Occhipinti Liberman

Abstract

Many automated planners require planning problems to be represented in a first-order, symbolic language. Such planning representations are typically hand-coded. In this talk, I will look at the problem of learning such representations from data. In particular, I will discuss how this learning problem can be cast and solved as a combinatorial optimization task. I will give an overview of specific learning tasks tackled with this approach, introduce specific solution techniques used so far (such as answer set programming), and present examples of learned representations.

Biography

Andrés Occhipinti Liberman is a postdoc at the AI&ML Group of Universitat Pompeu Fabra in Barcelona, Spain. He is involved in the Representation Learning for Planning (RLeap) project. Currently, he is working on learning dynamics models for automated planning and/or model-based reinforcement learning.

Before coming to Barcelona, Andrés did his Ph.D. in AI and logic at the Department of Applied Mathematics and Computer Science of the Technical University of Denmark (DTU). Earlier, he completed the M.Sc. in Logic at the Institute for Logic, Language and Computation (ILLC) of the University of Amsterdam.


November 8, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Bayesian Inference with Projected Densities

Martin S. Andersen: Associate Professor at DTU, Denmark

Abstract

Constraints are a natural form of prior information in Bayesian inference. In some applications, the parameters of interest are known to lie on the boundary of some set, but existing models based on truncated priors such as a truncated normal distribution result in posterior distributions that assign zero probability to the boundary. To address this issue, we construct a posterior distribution that assigns positive probability to the boundary of the constraint set by imposing an implicit prior that corresponds to a projection of posterior mass onto the constraint set. We apply the method to Bayesian linear inverse problems and show that samples from the posterior distribution can be obtained by solving a sequence of perturbed constrained least-squares problems. Finally, we derive a Gibbs sampler for a hierarchical extension of the model in the special case where the constraint set is a polyhedral cone.

Joint work with Jasper Everink (DTU) and Yiqiu Dong (DTU).
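As a point of reference for the unconstrained Gaussian case (a standard identity, not the talk's contribution), exact posterior samples for a linear inverse problem \(b = Ax + e\), \(e \sim \mathcal{N}(0, \Sigma_e)\), with prior \(x \sim \mathcal{N}(\mu, \Sigma_x)\), can be obtained by repeatedly solving perturbed least-squares problems

\[
x^{(k)} = \arg\min_{x} \;
\lVert A x - (b + \varepsilon^{(k)}) \rVert_{\Sigma_e^{-1}}^2
+ \lVert x - (\mu + \nu^{(k)}) \rVert_{\Sigma_x^{-1}}^2,
\qquad \varepsilon^{(k)} \sim \mathcal{N}(0, \Sigma_e), \;\;
\nu^{(k)} \sim \mathcal{N}(0, \Sigma_x).
\]

Constraining each of these problems to \(x \in C\) is, as the abstract indicates, the route by which samples from the projected posterior are obtained.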

Biography

Martin Andersen is with the Department of Applied Mathematics and Computer Science at the Technical University of Denmark. He received his PhD degree in Electrical Engineering from the University of California, Los Angeles, in 2011. His research interests include numerical optimization and linear algebra, and he is currently focusing on hierarchical approximation methods for optimization with applications in data science and machine learning.

Formulas for inverse optimality: H∞ control, rate of convergence, and constrained problems

Taouba Jouini: Postdoc at Karlsruhe Institute of Technology (KIT), Germany

Taouba Jouini

Abstract

In this talk, we extend the study of inverse optimal control problems in three directions. First, we consider a robust control formulation, where the disturbance enters both the cost and the system dynamics. We explicitly derive the cost functional that renders a given stabilizing controller the solution of the resulting H∞ optimal control problem.
Second, we draw a link between second-order methods, known for their high speed of convergence, and the tuning of inverse optimal controllers for discrete-time input-affine dynamical systems. Third, we study a class of input- and state-constrained inverse optimal control problems and suggest an approach to solve them explicitly.
Our results apply to the control of inverters in power systems. Namely, we show that angular droop control is inverse optimal and stabilizing, suggest an appropriate tuning for it, and accommodate angle and power constraints.

Biography

Taouba Jouini is a postdoctoral researcher at the Institute for Automation and Applied Informatics (IAI) at Karlsruhe Institute of Technology (KIT) in Germany. She was a graduate research assistant and doctoral student at the Automatic Control Laboratory (IfA) at ETH Zürich in Switzerland and at the Department of Automatic Control at LTH, Lund University in Sweden from December 2016 to January 2022, where she worked on system-theoretic solutions to frequency synchronization in high-order oscillators and on inverse optimality for nonlinear systems affected by disturbances, mainly applied to power systems. She received her Bachelor and Master of Science degrees in Cybernetics Engineering (Technische Kybernetik) from the University of Stuttgart in Germany in 2013 and 2016, respectively.

For more information, please visit https://sites.google.com/view/taoubajouini/home.


November 9, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Graph Neural Networks as Fast Global Search Heuristics for Constraint Satisfaction

Jan Tönshoff: PhD student at RWTH Aachen University, Germany

Jan Tönshoff

Abstract

We propose a universal Graph Neural Network architecture which can be trained as an end-to-end search heuristic for any Constraint Satisfaction Problem (CSP). Our architecture can be trained unsupervised with policy gradient descent to generate problem-specific heuristics for any CSP in a purely data-driven manner. The approach is based on a novel graph representation for CSPs that is both generic and compact and enables us to process every possible CSP instance with one GNN, regardless of constraint arity, relations or domain size. Unlike previous RL-based methods, we operate on a global search action space and allow our GNN to modify any number of variables in every step of the stochastic search. This enables our method to properly leverage the inherent parallelism of GNNs. We perform a thorough empirical evaluation where we learn heuristics for well-known and important CSPs from random data, including graph coloring, MaxCut, 3-SAT and MAX-k-SAT. Our approach outperforms prior approaches for neural combinatorial optimization by a substantial margin. It can compete with, and even improve upon, conventional search heuristics on test instances that are several orders of magnitude larger and structurally more complex than those seen during training.

Biography

Jan Tönshoff is a PhD student at the Chair for Computer Science 7 of RWTH Aachen, where he works with Prof. Martin Grohe. His research focuses on applying machine learning to relational data, such as graphs and databases. A special focus of his research is developing end-to-end neural networks for combinatorial optimization problems, such as CSPs.

Learning Stationary Nash Equilibrium Policies in n-Player Stochastic Games with Independent Chains

Rasoul Etesami: Assistant Professor at the University of Illinois Urbana-Champaign, USA

Rasoul Etesami

Abstract

We consider a subclass of n-player stochastic games, in which players have their own internal state/action spaces while they are coupled through their payoff functions. It is assumed that players’ internal chains are driven by independent transition probabilities. Moreover, players can receive only realizations of their payoffs, not the actual functions, and cannot observe each other’s states/actions. Under some assumptions on the structure of the payoff functions, we develop efficient learning algorithms based on dual averaging and dual mirror descent, which provably converge almost surely or in expectation to the set of ε-Nash equilibrium policies. In particular, we derive upper bounds on the number of iterates that scale polynomially in terms of the game parameters to achieve an ε-Nash equilibrium policy. In addition to Markov potential games and linear-quadratic stochastic games, this work provides another subclass of n-player stochastic games that provably admit polynomial-time learning algorithms for finding their ε-Nash equilibrium policies.
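For orientation, the generic dual averaging template behind such algorithms (a sketch, not the paper's exact update) lets each player accumulate payoff-gradient estimates \(\hat g_k\) and map them back to the probability simplex through a regularizer \(h\):

\[
z_{k+1} = z_k + \hat g_k, \qquad
\pi_{k+1} = \arg\max_{\pi \in \Delta} \Big\{ \langle z_{k+1}, \pi \rangle - \tfrac{1}{\eta_{k+1}}\, h(\pi) \Big\}.
\]

With the negative-entropy regularizer \(h(\pi) = \sum_i \pi_i \log \pi_i\) this reduces to an exponential-weights (softmax) update, and the abstract's result is that such updates, run by each player using only realized payoffs, converge to the set of ε-Nash equilibrium policies under the stated assumptions.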

Biography

Rasoul Etesami is an Assistant Professor in the Department of Industrial and Systems Engineering at the University of Illinois Urbana-Champaign. He is also affiliated with the Coordinated Science Laboratory (CSL), where he is a member of the Decision and Control (DCL) group. From 2016 to 2017, he was a Postdoctoral Research Fellow in the Department of Electrical Engineering at Princeton University and WINLAB. He received his Ph.D. in Electrical and Computer Engineering from the University of Illinois Urbana-Champaign in December 2015, during which he spent one summer as a Research Intern at Alcatel-Lucent Bell Labs. His research interests include analysis of complex social, communication, and decision-making systems using tools from control and game theory, optimization, and learning theory. He was a recipient of the Best CSL Ph.D. Thesis Award at the Engineering College of the University of Illinois Urbana–Champaign in 2016, Springer Outstanding Ph.D. Thesis Award in 2017, and NSF CAREER Award in 2020. He currently serves as an Associate Editor of the Journal of IET Smart Grid.


November 10, 10:15 - 12:00

Ada Lovelace, B-house, Campus Valla

Dynamic time scan forecasting for multi-step wind speed prediction

Marcelo Azevedo Costa: Professor at Federal University of Minas Gerais, UFMG/Brazil

Marcelo Azevedo Costa

Abstract

Multi-step forecasting of wind speed time series, especially for day-ahead and longer time horizons, is still a challenging problem in the wind energy sector. In this work, a novel analog-based methodology for multi-step forecasting of univariate time series, named dynamic time scan forecasting (DTSF), is presented. DTSF is a fast time series forecasting methodology for large data sets. This makes the proposed method well suited for forecasting renewable energy variables such as wind speed, for which standard statistical and soft computing methods have limitations.

Biography

Prof. Marcelo Azevedo Costa is with the Federal University of Minas Gerais (UFMG/Brazil), currently as a Full Professor in the Department of Production Engineering. He is a collaborating professor in the Laboratory of Intelligence and Computational Technology (LITC), developing research projects in artificial neural networks and hybrid models. He is the coordinator of Decision Support and Reliability Laboratory (LADEC), developing research projects in economic regulation in the electricity sector. His research interests include statistical models applied to the electrical sector, applied statistical methods, network analysis, spatial statistics, time series analysis, artificial neural network theory and applications.