May 13 – May 15, 2025

Focus Period Symposium: Visualization-Empowered Human-in-the-Loop Artificial Intelligence

Elite Grand Hotel, Norrköping

In recent years, experts in visualization, human-computer interaction and related fields have contributed substantially to this topic, for instance by developing visualization approaches that open up the typically closed, black-box design of popular machine learning methods. However, the rapid developments in AI/ML potentially trigger a fundamental change in our understanding of the capabilities and applicability of the models, as they are now also able to “interact” with the general population. What are the implications in terms of trust in the analytical results and potential biases that may occur? How should visualization research react and adapt to increase trust and call our attention to critical biases so that they can be avoided?

The ELLIIT Focus Period Symposium is the highlight of the five-week focus period, during which young international scholars, ELLIIT researchers and other well-established international academics gather in Norrköping to work together on these joint research challenges.

Focus Period
Linköping University 2025
Campus Norrköping

Detailed program

Please note that the program is still subject to change.

May 12, 2025


17:00 - 19:00

Elite Grand Hotel

Tyska Torget 2, 600 41 Norrköping

Welcome reception at Elite Grand Hotel

A welcome drink and some hors d’oeuvres will be served.

Day 1 – May 13, 2025


08:30 - 09:00

Registration


09:00 - 09:15

Opening

Professor Andreas Kerren, Linköping University
Associate Professor Kostiantyn Kucher, Linköping University
Associate Professor Katerina Vrotsou, Linköping University


09:15 - 09:30

Introduction to ELLIIT and the Division of Media and Information Technology at Linköping University

Professor Anders Ynnerman, Linköping University


09:30 - 10:30

Presentations from the Focus Period Visiting Scholar Program


10:30 - 11:00

Coffee


11:00 - 11:45

Fostering Mixed-Initiative Visual Analytics through AI Guidance

Alex Endert, Georgia Institute of Technology (USA)

Abstract

Visual analytic tools emphasize the importance of combining interactive visualizations with data analytic models to give people insight into data and AI. Through user interactions with these systems, people prepare data, explore and analyze it, and make decisions. Often, various computational or AI models attempt to guide or assist users throughout their exploration. However, designing such systems is challenging, from both a UI and an AI design perspective. This talk will discuss the challenges and opportunities in designing and deploying mixed-initiative visual analytic tools with guidance capabilities. It will give examples from prior research and reflect on how the field is moving closer to the goals and principles of mixed-initiative systems.

Biography

Alex Endert is an Associate Professor and the Associate Chair of Operations and Special Initiatives at the School of Interactive Computing at the Georgia Institute of Technology. He directs the Visual Analytics Lab and conducts research to help people make sense of data and AI through interactive visualizations and visual analytic systems. His lab’s research is also often tested in practice in domains such as intelligence analysis, cybersecurity, manufacturing safety, and others. The lab’s work receives support from NSF, DARPA, DOD, DHS, NIJ, and generous industry partners. In 2018, Endert was awarded an NSF CAREER Award for his work on Visual Analytics by Demonstration. In 2013, his work on Semantic Interaction was awarded the IEEE VGTC VPG Pioneers Group Doctoral Dissertation Award and the Virginia Tech Computer Science Best Dissertation Award.


11:45 - 12:30

Interactive Visual Exploration of Rule-Based ML Models

Natalia and Gennady Andrienko, Fraunhofer Institute IAIS (Germany) and City, University of London (UK)

Abstract

The talk focuses on enhancing the interpretability of machine learning models, particularly Random Forest. The project aims to enable users to understand a model’s internal workings, check its alignment with human logic and domain knowledge, and evaluate the trustworthiness of machine learning models. We introduce Rule Explorer, a software tool designed to support visual exploration of ML models represented by systems of decision rules. The tool addresses challenges such as the impracticality of examining a large number of rules and the interdependencies between features involved in the rules.

A running example uses a classification model for COVID-19 prediction, which predicts future COVID-19 incidence levels based on historical pandemic and mobility data. We demonstrate the representation of individual rules, an overview of the distribution of feature value intervals, and interactive filtering of the rule ensemble. Rule Explorer is also capable of iterative aggregation and generalization of rules, detection and removal of contradictory rules, and checking model performance by applying rules to data. We conclude that a model with reasonable performance may not be fully trustworthy when its internal workings are considered.

Finally, a second case study is presented, focusing on a regression model for predicting the potency of chemical compounds, developed in collaboration with the Life Sciences applied research area of the Lamarr Institute. We experimented with topic modelling to find combinations of components pertinent to molecules with high potency.

Biography

Prof. Dr. Natalia Andrienko is a lead scientist responsible for visual analytics research at Fraunhofer Institute IAIS, a part-time professor at City, University of London, and Co-PI of the Lamarr Institute for Machine Learning and Artificial Intelligence. Results of her research have been published in two monographs, Exploratory Analysis of Spatial and Temporal Data: A Systematic Approach (Springer, 2006) and Visual Analytics of Movement (Springer, 2013), and a textbook, Visual Analytics for Data Scientists (Springer, 2020). Natalia Andrienko is an associate editor of Visual Informatics and IEEE Transactions on Visualization and Computer Graphics. She received a test-of-time award at IEEE VAST (2018) and best paper awards at the AGILE (2006), IEEE VAST (2011 and 2012) and EuroVis (2015) conferences and the EuroVA workshops (2018 and 2019).

Prof. Dr. Gennady Andrienko is a lead scientist responsible for visual analytics research at Fraunhofer Institute IAIS, Sankt Augustin, Germany, Co-PI of the Lamarr Institute for Machine Learning and Artificial Intelligence, and professor at City, University of London, UK. His research interests are focused on visual analytics methods for different kinds of temporal and spatio-temporal data.


12:30 - 14:00

Lunch


14:00 - 14:45

Human–AI interaction: what are explanations from AI systems good for anyway?

Maria Riveiro, Jönköping University (Sweden)

Abstract

Artificial intelligence (AI) is rapidly integrating into daily life, fundamentally transforming how we work, access services, and interact with technology. In this talk, I will present some results from empirical studies evaluating the impact of explanations on users interacting with AI systems that support particular tasks. I will cover aspects such as expectations, needs, presentation modalities, and the impact of explanations on problem-solving and decision-making. Finally, I will also discuss the value of these results and other empirical work for informing theory.

Biography

Maria Riveiro received a Ph.D. degree in Computer Science from Örebro University, Sweden, in 2011. She is currently a Professor of Computer Science in the Department of Computer Science and Informatics at the School of Engineering, Jönköping University, where she is part of the Human-Centered Technology Group. Her main research interests are human-centered AI, explainable AI, and visual analytics; she has worked on designing, developing, and evaluating technologies that make our lives easier and better, with people in mind. Prof. Riveiro is the recipient of a starting grant and a consolidator grant from the Swedish Research Council, investigating how to tailor explanations from AI systems to users’ expectations and how to evaluate explainable AI systems.


14:45 - 15:30

The challenge and paradox of data quality

Irina Shklovski, University of Copenhagen (Denmark) and Linköping University (Sweden)

Abstract

Despite the many critical debates, conferences and special issues on the topic of data, little attention has been paid to the notion of data quality. Amid the calls for responsible AI development and data justice, data quality is simply presumed as something that, while ultimately necessary, can remain under-defined. There is a glaring lack of systematic research on assessing data quality across fields and domains that have spilled much ink on technical data management and on critical considerations of data. Analyses of existing industry standards on data quality demonstrate significant disparities in terminology and scope with respect to the apparent expectations of the EU AI Act. Among AI developers, issues of data quality tend to be dismissed in favor of model considerations. The little existing research in this domain rarely considers how data quality is achieved. Where such considerations are extended, they tend to focus on data provenance and the reputation of data providers as proxies for quality assurance. Yet it is clear that datasets and their flaws are deeply situated in the contexts of their production. I will discuss data quality as an under-theorized and largely ignored problem that underpins current challenges in the data visualization domain.

Biography

Irina Shklovski is Professor of Communication and Computing in the Department of Computer Science at the University of Copenhagen. She holds a WASP-HS visiting professorship at Linköping University in TEMA GENUS. Her main research areas include speculative AI futures, responsible and ethical technology design, information privacy, creepy technologies and the sense of powerlessness people experience in the face of massive personal data collection. Current projects explore problems of data quality and synthetic data, explainable AI, and the moral stress people experience when attempting to design, develop, and implement AI systems responsibly.


15:30 - 16:00

Coffee


16:00 - 16:45

Show me your model!

Przemysław Biecek, Warsaw University of Technology and University of Warsaw (Poland)

Abstract

Big data has become fuel for data science, in which exploratory analysis using data visualization plays an important role.
Similarly, big models have become fuel for model science, in which exploratory analysis using model visualization plays an important role.
In this talk, I will show how a sequence of complementary explanations allows a better understanding of how ML models work, how it helps human-model interaction, and what questions users ask of the model during such interactions.

We will also see how generative models open up new possibilities for explaining computer vision ML models, and present further paths for developing methods from the GenXAI area.

Biography

Przemyslaw Biecek is the leader of the MI2.AI research team conducting research in the area of explainable artificial intelligence. Over the years, he has conducted research related to visualization and exploration of data, and today, he uses these experiences to create new approaches to visualizing and exploring predictive models. Before academia, he worked as a principal data scientist at Samsung and led the Data Visualization Group at Netezza.
His main achievements are summarised in the Explanatory Model Analysis monograph (https://ema.drwhy.ai/) and the DALEX open-source software (https://dalex.drwhy.ai/), which provides descriptive and model-agnostic local explanations for machine learning models.
Trivia: in his spare time, he creates comics about data exploration and visualization, such as Charts Runners (https://betaandbit.github.io/Charts/) or The Hitchhiker’s Guide to Responsible Machine Learning (https://rml.mi2.ai/).


16:45 - 17:30

Q&A / Panel

Day 2 – May 14, 2025


09:00 - 09:45

What is a Good Projection – and Why We Should Care About It

Alexandru Telea, Utrecht University (The Netherlands)

Abstract

Projections, also known as dimensionality reduction (DR), are key techniques for creating visualizations of high-dimensional data and, as such, are present in the vast majority of visual analytics (VA) systems for explainable AI. Yet it is well known that projections cannot capture all aspects of large, complex, and high-dimensional data. To help users understand what a projection can show (or not), various quality metrics have been designed. In this talk, I will present an overview of such quality metrics and ways to use them in actual VA exploratory workflows. More importantly, I will cover some fundamental limitations that many current quality metrics share. These limitations expose deeper questions that relate to the actionable usage of projections as tools to understand high-dimensional data: When is one projection better than another? When, and what for, can we actually use projections to reason about such data? What do we miss, in visualization research, to answer such questions?

Biography

Alexandru Telea is Professor of Visual Data Analytics at the Department of Information and Computing Sciences, Utrecht University, where he leads the Visualization and Graphics (VIG) group. He holds a PhD from Eindhoven University and has been active in the visualization field for over 25 years. He has been the program co-chair, general chair, or steering committee member of several conferences and workshops in visualization, including EuroVis, VISSOFT, SoftVis, EGPGV, and SIBGRAPI. His main research interests cover unifying information visualization and scientific visualization, high-dimensional visualization, and visual analytics for machine learning. He has authored over 350 papers. He is the author of the textbook “Data Visualization: Principles and Practice” (CRC Press, 2014), a worldwide reference in teaching data visualization.


09:45 - 10:30

Large Language Models and Europe

Magnus Sahlgren, AI Sweden (Sweden)

Abstract

Large Language Models (LLMs) are currently at the top of the AI hype curve, with new and increasingly powerful models being produced at an unprecedented pace. This development is mainly driven by private companies with access to substantial economic and computational resources, and as a result, models have largely been kept proprietary and made accessible only via commercial APIs. Last year saw a surprising shift in this trend, with an increasing number of model developers releasing their models with open weights. Such open models are becoming increasingly competent, prompting some commentators to argue that the performance gap between proprietary and open models may be slowly closing. But how is Europe doing in this development? This talk presents a brief overview of the current developments of open LLMs with a special focus on the European region. Are we catching up, or are we lagging behind?

Biography

Magnus Sahlgren is Head of Research for Natural Language Understanding at AI Sweden. Sahlgren has a PhD in computational linguistics, and his research lies at the intersection between computational linguistics, philosophy, and artificial intelligence. He is primarily known for his work on computational models of meaning, and he is currently leading the initiative to train large language models in Sweden. Sahlgren has previously held positions at the Research Institutes of Sweden (RISE), the Swedish Defense Research Agency (FOI), the Swedish Institute of Computer Science (SICS), and Stockholm University.


10:30 - 11:00

Coffee


11:00 - 11:45

Balancing Trust and Over-Reliance in Visual Analytics: The AI-in-the-Loop Dilemma

Alvitta Ottley, Washington University in St. Louis (USA)

Abstract

When AI is embedded within visual analytics systems, how do we ensure users don’t trust it too much? While visualization has long been used to enhance AI transparency—helping users “peek inside the black box” and even interact with models—these same visual tools can inadvertently foster over-reliance and automation bias. Unlike traditional AI applications, where the concern is often building trust, AI-in-the-loop visual analytics systems demand a different approach: tempering trust to prevent users from blindly following AI-generated insights.

This talk explores the evolving role of AI in visual analytics and the unique trust dynamics it creates. We present empirical findings on how users perceive AI-integrated visual interfaces, uncovering key factors influencing trust and decision-making. We highlight how conventional wisdom about AI transparency may not fully apply in these systems and discuss strategies for balancing trust while mitigating over-reliance. By recognizing the risks of misplaced confidence in AI-driven visual analytics, we can design more effective, reliable, and user-aware decision-support systems.

Biography

Dr. Alvitta Ottley is an Associate Professor in the Computer Science & Engineering Department at Washington University in St. Louis, Missouri, USA. She also holds a courtesy appointment in the Psychological and Brain Sciences Department. Her research uses interdisciplinary approaches to solve problems such as how best to display information for effective decision-making and how to design human-in-the-loop visual analytics interfaces that are more attuned to how people think. Dr. Ottley received an NSF CRII Award in 2018 for using visualization to support medical decision-making, an NSF CAREER Award for creating context-aware visual analytics systems, and the 2022 EuroVis Early Career Award. In addition, her work has appeared in leading conferences and journals such as CHI, VIS, and TVCG, receiving best paper and honorable mention awards.


11:45 - 12:30

Trustworthy AI by Human-AI-Collaboration

Hendrik Strobelt, MIT-IBM AI Lab (USA)

Abstract

With the increasing adoption of machine learning models across domains, we have to think about the roles of humans and AI to build trustworthy systems. In the last few years, my collaborators and I have created a series of tools that utilize visualization and visual user interaction to help investigate behavior of machine learning models. I will present a quick intro to trustworthy AI in general and will show some of these tools that demonstrate how intuitive ideas can foster human-AI collaboration − and therefore contribute to building trustworthy systems.

Biography

Hendrik Strobelt is the Explainability Lead at the MIT-IBM Watson AI Lab and a Senior Research Scientist at IBM Research. His recent research is on visualization for, and human collaboration with, AI models to foster explainability and intuition. His work involves NLP models and generative models, and he advocates using a mix of data modalities to solve real-world problems. His research is applied to tasks in machine learning, NLP, the biomedical domain, and chemistry. Hendrik joined IBM in 2017 after postdoctoral positions at Harvard SEAS and NYU Tandon. He received a Ph.D. (Dr. rer. nat.) in computer science (visualization) from the University of Konstanz and holds an MSc (Diplom) in computer science from TU Dresden. His work has been published at venues such as IEEE VIS, ICLR, ACM SIGGRAPH, ACL, NeurIPS, ICCV, PNAS, Nature BME, and Science Advances. He has received multiple best paper and honorable mention awards at EuroVis, BioVis, VAST, ACL Demo, and NeurIPS Demo. He received the Lohrmann Medal from TU Dresden as the highest student honor. Hendrik has served on program and organization committees for IEEE VIS, BioVis, and EuroVis, and on organization committees for VISxAI, ICLR, ICML, and NeurIPS. He is a visiting researcher at MIT CSAIL. (more: http://hendrik.strobelt.com)


12:30 - 14:00

Lunch


14:00 - 14:45

Scaling Data-Driven Decision-making Through Human-AI Interaction

Anamaria Crisan, University of Waterloo (Canada)

Abstract

Decision-making with data is primarily conducted by professionals lacking formal training in data science, statistics, or machine learning. Aiding these professionals are emerging technologies supported by machine learning (ML) and/or artificial intelligence (AI) techniques that automate many aspects of data work. However, this collaboration between humans and ML/AI technology is far from frictionless and can produce incorrect or misleading results. In this talk I present a human-centered approach for ML/AI that aims to address these limitations.

Biography

Anamaria Crisan is an Assistant Professor in the School of Computer Science at the University of Waterloo. She is also affiliated with the WaterlooHCI lab and is a member of the Waterloo Artificial Intelligence Institute. She conducts interdisciplinary research at the intersection of Human-Computer Interaction, Data Visualization, and Applied AI/ML.

Her areas of focus include:

  • Human-Centered Artificial Intelligence (AI) and Machine Learning (ML): Developing responsible, transparent, and trustworthy AI/ML systems guided by and aligned with human intents
  • Interactive Visualization Systems: Designing visualization systems that support data-driven decision-making, from insight discovery to action
  • Data Science in Healthcare, Public Health, and Biomedicine: Leveraging data science and visualization to improve outcomes in these domains

She runs the Insight Lab at the University of Waterloo.


14:45 - 15:30

User-Centered Design of Visual Analytics and AI Solutions

Jörn Kohlhammer, Fraunhofer Institute for Computer Graphics Research (Germany)

Abstract

The rapid development and advance of AI methods like LLMs have a strong influence on users’ expectations of what a data-driven, visual user interface can possibly provide. At the same time, the specific data, tasks, and users define what a visual user interface can and should reasonably provide with respect to (X)AI capabilities. The development of visual analytics solutions with (X)AI functionality requires an extension of the user-centered design approach. First of all, we need to identify the tasks that can be supported with specific machine learning methods that require an understanding of the underlying model or an explanation of the predicted results. Equally important is securing and verifying adequate data sources for such tasks. Finally, we need to build visual and interactive tools that provide a level of trustworthiness and comprehensibility of AI methods that matches the requirements of the user roles who work with the solution. Examples from applied research in healthcare and cybersecurity will show that clearly addressing each of these areas in an extended UCD process is vital to ensure the viability and usefulness of XAI solutions.

Biography

Jörn Kohlhammer is head of the Competence Center for Information Visualization and Visual Analytics and an honorary professor at TU Darmstadt. His center develops innovative visualization solutions for several industry sectors, including medical data analysis of electronic health records, decision support in the public sector, and visualization for cybersecurity. Jörn is the author of more than 80 publications in journals, books, monographs, and conference proceedings. He is a regular member of program committees for conferences such as IEEE VAST and EuroVis, and acts as a reviewer for many conferences and journals. His research interests include decision-centered information visualization based on semantics, and visual business analytics. Jörn Kohlhammer studied computer science with a minor in business administration at the Ludwig-Maximilian University in Munich, Germany. After receiving his diploma in 1999, he worked as a research scientist at the Fraunhofer Center for Research in Computer Graphics (CRCG) in Providence, RI, USA until 2003. In 2004 he joined Fraunhofer IGD in Darmstadt, Germany, where he finished his PhD on decision-centered visualization in 2005. Jörn is a senior member of IEEE and the Gesellschaft für Informatik (GI), and is an Associate Editor for IEEE TVCG.


15:30 - 16:00

Coffee


16:00 - 16:45

Q&A / Panel


18:15 - 23:00

Visualization Center C, Kungsgatan 54,
602 33 Norrköping

Poster session and Symposium dinner

Poster session with welcome drinks from 18:15.
Dinner will be served at 19:15.
Dome show at 21:15.
Mingle in the exhibitions; bar open until 23:00.

Day 3 – May 15, 2025


09:00 - 09:45

On the Importance of Visualisation in a Data Driven Society

Daniel Archambault, Newcastle University (UK)

Abstract

Machine learning and data science are receiving significant attention, and rightly so. The results that can be produced by distilling large amounts of data are amazing. Yet what society expects is human oversight at an appropriate level and trust in system results. Oversight and trust are not for machines; oversight and trust are for humans. Effective solutions thus require careful human-machine collaboration and, in turn, careful visualisation design. This talk motivates why visualisation design forms a necessary part of a data-driven society. It provides motivation for carefully designed visualisations that take into account human perceptual factors, the target audience, and the automated processes applied to the information before visualisation. It provides practical examples where all three must be carefully thought about in order to deliver effective data science.

Biography

Prof. Daniel Archambault is a Professor of Visualisation/Data Science at Newcastle University in the United Kingdom, where he co-leads the Scalable Computing Research Group, which brings together researchers in visualisation, AI, and scalable computing. His research is primarily in the areas of visualisation, network visualisation, and visualisation for data science and AI, where he has made contributions to fundamental techniques and to assessing such techniques for perceptual effectiveness.


09:45 - 10:30

Intelligence Augmentation: Bridging Human and Artificial Intelligence

Mennatallah El-Assady, ETH Zürich (Switzerland)

Abstract

Intelligence augmentation through mixed-initiative systems promises to combine AI’s efficiency with humans’ effectiveness. Central to this vision are co-adaptive visual interfaces, which facilitate seamless collaboration between humans and machines. In this talk, I will explore the importance of human-AI collaboration in decision-making and problem-solving, highlighting how tailored visual interfaces can enhance interaction with machine learning models. These interfaces promote the understanding, diagnosis, and refinement of models. I will present various workflow designs for computational linguistics analysis and discuss emerging methods for integrating diverse human feedback. The talk will conclude with insights on the challenges we face today and the promising research directions that lie ahead.

Biography

Mennatallah El-Assady is an Assistant Professor at the Department of Computer Science, ETH Zürich, where she heads the Interactive Visualization and Intelligence Augmentation (IVIA) lab. Prior to that, she was a research fellow at the ETH AI Center; before that, she was a research associate in the group for Data Analysis and Visualization at the University of Konstanz (Germany) and in the Visualization for Information Analysis lab at Ontario Tech University (Canada). She works at the intersection of data analysis, visualization, computational linguistics, and explainable artificial intelligence. Her main research interest is studying interactive human-AI collaboration interfaces for effective problem-solving and decision-making. In particular, she is interested in empowering humans by teaming them up with AI agents in co-adaptive processes. She has gained experience working in close collaboration with political science and linguistics scholars over several years, which led to the development of the LingVis.io platform. El-Assady has co-founded and co-organized several workshop series, notably Vis4DH and VISxAI.


10:30 - 11:00

Coffee


11:00 - 11:45

AI literacy: Empowering People in Human-AI interactions

Cagatay Turkay, University of Warwick (UK)

Abstract

With the ubiquity of AI systems, society and its citizens need to be better equipped with the necessary knowledge, tools and mechanisms for interacting with them. In this talk, I will discuss our ongoing research exploring how people can be better supported in understanding and acting in response to AI systems and decisions made through them. I will present our research on designing human-AI collaborative reasoning systems and making computational models understandable. I will talk about our empirical research with a UK regulatory body to understand how to empower individuals in responding to AI-assisted decisions. I will also invite the audience to critically reflect on the dominance of the individual in HCI and data visualisation research and explore how we can move into the collective and the social.

Biography

Cagatay Turkay is a Professor at the Centre for Interdisciplinary Methodologies at the University of Warwick, UK. He was formerly a Turing Fellow at the Alan Turing Institute and worked several years at City, University of London. His research investigates the interactions between data, algorithms and people, and explores the role of interactive visualisation and other interaction mediums such as natural language at this intersection. He designs techniques and algorithms that are sensitive to their users in various decision-making scenarios involving primarily high-dimensional and spatio-temporal phenomena, and develops methods to study how people work interactively with data and computed artefacts.

Link: https://warwick.ac.uk/fac/cross_fac/cim/people/cagatay-turkay


11:45 - 12:30

Awareness, Behavior, Decisions, Oh My! Detecting and Mitigating Biases in Visualization

Emily Wall, Emory University (USA)

Abstract

Recent high-profile scenarios have demonstrated that, in spite of the “data” in data-driven decision making, analysis practices nonetheless can lead to poor outcomes, given numerous junctures where bias can be introduced. Data may contain culturally embedded biases, algorithms may propagate or exacerbate those biases, and people’s decisions can be influenced by their own cognitive biases. While it is not yet possible to completely remove these varying biases from data analysis, some techniques exist to mitigate the effects by providing guidance or other forms of intervention. In this talk, I describe the development of complementary measures and mitigation strategies for addressing human biases: via perspectives on awareness, behavior, and decisions. This talk will detail recent and ongoing work in the Cognition and Visualization Lab at Emory University on metacognitive awareness of biases, observable and interruptible behavioral patterns, and resulting decisions that contribute to biased analysis processes. Interventions can thus lead to more socially responsible and conscientious data analysis practices. 

Biography

Emily Wall is an Assistant Professor in the Computer Science Department at Emory University where she directs the Cognition and Visualization Lab. Her research interests lie at the intersection of cognitive science and data visualization. Particularly, her research has focused on increasing awareness of unconscious and implicit human biases through the design and evaluation of (1) computational approaches to quantify bias from user interaction and (2) interfaces to support visual data analysis. Her research has been funded by the National Science Foundation and Emory Office of the Provost on Racial Justice and Racial Equity.


12:30 - 14:00

Lunch


14:00 - 14:45

Norms and adaptive technologies

Stefan Larsson, Lund University (Sweden)

Biography

Stefan Larsson is an Associate Professor in Technology and Social Change at Lund University, Sweden. As a lawyer and socio-legal researcher he leads a research group (AI and Society) focusing on social and normative implications of AI and adaptive technologies in both private and public domains, ranging from public sector decision-making to mammography and social robotics.


14:45 - 15:15

Q&A / Panel


15:15 - 15:30

Closing


15:30 - 16:00

Farewell coffee