Thursday May 19 at 10.15 in Ada Lovelace, Campus Valla, Linköping

Professor Jeannette Wing, Columbia University

Abstract

Recent years have seen an astounding growth in the deployment of AI systems in critical domains such as autonomous vehicles, criminal justice, and healthcare, where decisions taken by AI agents directly impact human lives. Consequently, there is increasing concern about whether these decisions can be trusted. How can we deliver on the promised benefits of AI while addressing scenarios that have life-critical consequences for people and society? In short, how can we achieve trustworthy AI?

Under the umbrella of trustworthy computing, employing formal methods to ensure trust properties such as reliability and security has led to success at scale. Just as for trustworthy computing, formal methods could be an effective approach for building trust in AI-based systems. However, we would need to extend the set of properties to include fairness, robustness, and interpretability, and to develop new verification techniques to handle new kinds of artifacts, e.g., data distributions and machine-learned models. This talk proposes a new research agenda, from a formal methods perspective, for increasing trust in AI systems.

Jeannette M. Wing is Executive Vice President for Research and Professor of Computer Science at Columbia University. She previously served as Avanessians Director of the Data Science Institute. She has also been on the international scientific advisory board of WASP.