Tuesday, April 29
09:15 - 10:00
t-viSNE: A Visual Inspector for the Interactive Assessment and Interpretation of t-SNE Projections
Angelos Chatzimparmpas, Assistant Professor at Utrecht University (the Netherlands)
Abstract
t-Distributed Stochastic Neighbor Embedding (t-SNE) has proven to be a popular approach for the visualization of multidimensional data, with successful applications in a wide range of domains. Despite their usefulness, t-SNE projections can be hard to interpret or even misleading, which hurts the trustworthiness of the results. Understanding the details of t-SNE itself and the reasons behind specific patterns in its output may be a daunting task, especially for non-experts in dimensionality reduction.
In this talk, I will present t-viSNE, a web-based tool (available at https://sheldon.lnu.se/t-viSNE) for the interactive visual exploration of t-SNE projections that allows analysts to examine their quality and meaning from various perspectives, such as the effects of hyperparameters, distance and neighborhood preservation, densities and costs of particular neighborhoods, and correlations between dimensions and visual patterns. With this talk, I will also summarize past research findings, discuss work in progress, and highlight future research plans.
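For readers who want to experiment before the talk, here is a minimal sketch of the kind of quality check discussed above, using only generic scikit-learn utilities (the `TSNE` estimator and `trustworthiness` score here are stand-ins, not t-viSNE's own diagnostics): it runs t-SNE under several perplexities and scores how well each projection preserves local neighborhoods.

```python
# Sweep a t-SNE hyperparameter and measure neighborhood preservation.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE, trustworthiness

X = load_digits().data  # 1797 samples, 64 dimensions

for perplexity in (5, 30, 50):
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=0).fit_transform(X)
    # Trustworthiness in [0, 1]: how many of each point's 2-D neighbors
    # are also neighbors in the original 64-D space.
    score = trustworthiness(X, emb, n_neighbors=10)
    print(f"perplexity={perplexity}: trustworthiness={score:.3f}")
```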
Biography
Angelos Chatzimparmpas is an Assistant Professor in Information and Computing Sciences at Utrecht University, the Netherlands, and a member of the VIG research group. His main research interests include the visual exploration of the inner workings and the quality of machine learning (ML) models, with a specific focus on making complex ML models better understandable and explainable, as well as establishing reliable trust in ML models and their results. In addition to developing visual analytics systems, he also focuses on model uncertainty quantification, utilizing deep learning architectures for visualization evaluation, and supporting users in detecting AI-generated images (deepfakes). Angelos has published journal articles and conference papers at top visualization and HCI venues. His doctoral thesis was awarded the EuroVis Best PhD Dissertation Award in 2024 and was recognized by the Royal Swedish Academy of Engineering Sciences with its inclusion in the IVA-100 list in 2023.

10:15 - 11:00
Ecological Visual Analytics Interfaces for Time- and Safety-critical Applications
Elmira Zohrevandi, Postdoctoral researcher at Linköping University (Sweden)
Abstract
In complex time- and safety-critical environments such as air traffic control, making decisions during unexpected situations can become challenging as the operator needs to have a clear understanding of various resolution strategies and their consequences in a limited amount of time (i.e. within a couple of minutes). In such environments, interface designers must carefully consider how to present information to operators in a way that avoids wastefully consuming their cognitive resources, thereby reducing the risk of failure.
In this talk, I will present three visual analytics interfaces designed to support operators’ real-time decision-making. These interfaces were developed by integrating the ecological interface design framework into the development of visual representations and were evolved through a series of design studies. I will demonstrate how this approach can shape novice and expert operators’ decision-making towards domain-specific functional goals while allowing them to maintain their individual problem-solving strategies.
Biography
Elmira Zohrevandi is a postdoctoral researcher in information visualization with a focus on the design and evaluation of visual analytics interfaces that strengthen human-automation collaboration.
Automation has helped human operators accomplish several tasks in limited time, and the human’s role in automated environments is therefore shifting from operational to supervisory tasks. In complex environments, supervisory tasks can become difficult to manage, as the operator needs a clear understanding of the different operational levels in the system in order to make decisions safely and efficiently. Elmira Zohrevandi’s work focuses on strengthening human analytical reasoning by visualizing the constraints and relationships between system parameters. Her research expertise includes domain problem characterization through work domain analysis, visual encoding design, evaluation study design, and simulation design.
Elmira Zohrevandi holds an M.Sc. degree in Transportation Systems Engineering and Logistics from the division of Communications and Transport Systems (KTS) and a Ph.D. degree in Visualization and Media Technology from the division of Media and Information Technology (MIT) at the Department of Science and Technology (ITN), Linköping University.

Wednesday, April 30
09:15 - 10:00
Interactive Visualization for Interpretable Machine Learning
Dennis Collaris, Postdoc at Eindhoven University of Technology (the Netherlands)
Abstract
Machine learning is a very popular technique for making automatic predictions based on data. This enables businesses to make sense of their data and make predictions about future events. But the modern machine learning models we use are typically very complex and difficult to understand: we only know what the model predicts, not why it decided this or which crucial factors led to that prediction. Currently, there is a strong demand for understanding how specific models operate and how certain decisions are made, which is particularly important in high-impact domains such as credit, employment, and housing. In these cases, the decisions made using machine learning can significantly impact the lives of real people. The field of eXplainable Artificial Intelligence (XAI) aims to help experts understand complex machine learning models. In recent years, various techniques have been proposed to open up the black box of machine learning. However, because interpretability is an inherently subjective concept, it remains challenging to define what a good explanation is. To address this, we argue we should actively involve data scientists in the process of generating explanations and leverage their expertise in the domain and machine learning. Interactive visualization provides an excellent opportunity to both involve and empower experts.
In my work, we collaborated with Achmea, a large insurance company in the Netherlands. In our discussions with their data scientists, we noticed that different teams required different perspectives to interpret their models, ranging from local explanation of single predictions to global explanation of the entire model. In this talk, we explore interactive visualization approaches for machine learning interpretation from these different (and often complementary) perspectives, ranging from local explanation of single predictions to global explanation of the entire model.
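As a rough illustration of the two perspectives (not of the systems presented in the talk), the following sketch contrasts a global view, via scikit-learn’s permutation importance, with a naive local attribution for a single prediction; the dataset and model are generic placeholders.

```python
# Global vs. local explanation on a generic scikit-learn model.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Global: which features matter for the model overall?
glob = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = np.argsort(glob.importances_mean)[::-1][:3]
print("global top features:", [data.feature_names[i] for i in top])

# Local (naive): how does the predicted probability for one instance
# change when each top feature is replaced by its dataset mean?
x = X[0:1]
base = model.predict_proba(x)[0, 1]
for i in top:
    x_mod = x.copy()
    x_mod[0, i] = X[:, i].mean()
    delta = base - model.predict_proba(x_mod)[0, 1]
    print(f"local effect of {data.feature_names[i]}: {delta:+.3f}")
```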
Biography
Dennis Collaris is a postdoctoral researcher at Eindhoven University of Technology focusing on visual analytics for interpretable machine learning, as well as a scientific programmer at Utrecht University and the founder of Xaiva, a startup offering a human oversight platform for AI to understand and explain decision-making through interactive visualisation.

10:15 - 11:00
Title TBC
Linhao Meng, PhD student at Eindhoven University of Technology (the Netherlands)
Biography
Linhao Meng is a final-year PhD student in computer science at Eindhoven University of Technology. Her research focuses on designing visualizations and creating interactive tools that enhance humans’ ability to understand data and models in machine learning tasks.

11:15 - 12:00
Preprocessing Matters: Understanding and Optimising the Effect of Data Preprocessing over Decision Fairness
Vlad González, Lecturer at Newcastle University (UK)
Abstract
Improving fairness by manipulating the preprocessing stages of classification pipelines is an active area of research, closely related to AutoML. Through genetic methods, it is possible to optimise for user-defined combinations of fairness and accuracy and for multiple definitions of fairness, providing flexibility in the fairness-accuracy trade-off. These near-optimal solutions may be presented as a Pareto front. Optimal pipelines differ across datasets, suggesting that no “universal best” pipeline exists.
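To make the optimisation target concrete, here is a toy sketch on synthetic data: a brute-force sweep over one hypothetical preprocessing parameter stands in for the genetic search, each candidate is scored on accuracy and demographic parity, and the non-dominated candidates form the Pareto front.

```python
# Toy fairness-accuracy Pareto front over candidate preprocessing choices.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
s = rng.integers(0, 2, n)                       # sensitive attribute
X = rng.normal(size=(n, 3)) + s[:, None] * 0.8  # features correlated with s
y = (X[:, 0] + X[:, 1] + rng.normal(size=n) > 1).astype(int)

def evaluate(weight):
    """Candidate preprocessing: shrink the group shift by `weight` (0..1)."""
    X_pre = X - weight * s[:, None] * 0.8
    Xtr, Xte, ytr, yte, _, ste = train_test_split(X_pre, y, s, random_state=0)
    pred = LogisticRegression().fit(Xtr, ytr).predict(Xte)
    acc = (pred == yte).mean()
    dp_gap = abs(pred[ste == 0].mean() - pred[ste == 1].mean())
    return acc, dp_gap                          # accuracy, demographic parity gap

candidates = [evaluate(w) for w in np.linspace(0, 1, 11)]
# Pareto front: keep candidates not (weakly) dominated on both criteria.
front = [c for c in candidates
         if not any(o[0] >= c[0] and o[1] <= c[1] and o != c for o in candidates)]
print(sorted(front))
```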
Biography
Dr Vlad Gonzalez-Zelaya is a Lecturer in Machine Learning and Data Science at Newcastle University, UK. His research interests are centred around Responsible AI, with a focus on algorithmic fairness and data privacy. He has developed several fairness-correcting methods for classification tasks based on data pre-processing, publishing his findings in journals such as ACM TKDD and presenting them at conferences such as ICDE, KDD, EDBT, and MDAI. His other research interests include responsible federated learning, fair resource allocation mechanisms, and combinatorial game theory.

Monday, May 5
09:15 - 10:00
Leveraging Human-Centered Machine Learning to Create More Explainable Machine Learning Models
Bahavathy Kathirgamanathan, Postdoctoral research scientist at Fraunhofer IAIS (Germany)
Abstract
Involving humans at every stage of developing a machine learning model is crucial for making AI systems more human-centric, both in model development and in generating explanations. Furthermore, the integration of human knowledge into ML models helps to improve their trustworthiness and explainability. Human expertise provides invaluable insights that can enhance AI-supported decision making. By incorporating interactive visual interfaces, generalised workflows can be developed to aid knowledge injection. We propose a methodology to augment models with domain knowledge through a generalised workflow that can be applied to various data types, such as temporal, geographic, or social data, and show its application in several case studies.
Biography
Bahavathy Kathirgamanathan is a postdoctoral research scientist at the Fraunhofer Institute for Intelligent Analysis and Information Systems (IAIS) and a scientific coordinator for Human Centered AI systems at the Lamarr Institute for Machine Learning and Artificial Intelligence. She completed her PhD in Computer Science at University College Dublin (Ireland) in 2023. Her PhD dissertation focused on multivariate time series classification of wearable sensor data for applications in sports analytics. Since completing her PhD, Bahavathy has shifted her focus towards the development of human-centric models. She is particularly interested in developing techniques that involve humans at every stage of the modeling process, from data representation to model development, and providing explanations back to the user. Her research interests include using visual analytics as a tool to aid this process, with a particular emphasis on applications involving temporal and/or spatio-temporal data.

10:15 - 11:00
Trustworthiness for healthcare applications
Cornelia Käsbohrer, PhD student at Umeå University (Sweden)
Abstract
Healthcare professionals often face overwhelming workloads, juggling high patient volumes with limited time for each case. In an effort to support faster and more accurate decision-making, artificial intelligence (AI) is increasingly being adopted for medical diagnosis and treatment recommendations. However, for AI to be truly beneficial in healthcare, it must be more than just technically accurate: it needs to be transparent, fair, and reliable. If clinicians do not trust these systems, they may be reluctant to rely on AI-assisted insights, which could, in turn, lead to missed opportunities or reinforce existing biases in patient care.
A key challenge is that many AI models operate as “black boxes,” making it difficult for medical professionals to understand and validate their predictions. To bridge this gap, we examine the role of visualization and human-in-the-loop approaches in enhancing trust. We will discuss how feature importance visualization can help clinicians interpret model decisions, how counterfactual explanations can provide actionable insights into AI predictions, and how fairness dashboards enable bias detection and mitigation. By integrating interactive AI explanations into clinical workflows, we can empower healthcare professionals to evaluate, challenge, and refine AI-driven recommendations. In this talk, we will present practical examples, discuss the challenges of designing trustworthy AI systems, and suggest ways to improve the collaboration between humans and AI in healthcare settings.
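As a concrete, if simplified, illustration of the counterfactual idea mentioned above (a naive greedy search on a generic scikit-learn model, not a clinical system): nudge one feature at a time until the model’s prediction for one record flips, then report which features had to change.

```python
# Naive greedy counterfactual search on a generic classifier.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target
model = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0].copy()                      # pick one record
target = 1 - model.predict([x])[0]   # aim for the opposite class
for _ in range(50):                  # greedy search, steps of 0.25 std-dev
    if model.predict([x])[0] == target:
        break
    scores = []
    for i in range(x.size):
        for step in (-0.25, 0.25):
            x_try = x.copy()
            x_try[i] += step
            scores.append((model.predict_proba([x_try])[0, target], i, step))
    _, i, step = max(scores)         # take the most prediction-flipping move
    x[i] += step

changed = np.abs(x - X[0]) > 1e-9
print("features changed:", list(data.feature_names[changed]))
```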
Biography
Cornelia Käsbohrer is a PhD student at Umeå University, where she is working on fairness in decision-making systems for healthcare. She explores the intersection of causal inference, fairness-aware machine learning, and interpretability. Her work includes analyzing fairness in datasets and applying structural causal models to evaluate biases. Cornelia is also interested in combining statistical fairness metrics with human-in-the-loop methodologies. In particular, she is curious about how visualization can enhance understanding and decision-making in AI systems. Prior to her PhD studies, Cornelia received a Master’s degree in Industrial Mathematics (University of Hamburg) focusing on Machine Learning and a Bachelor’s degree in Mathematics (Technical University of Darmstadt).

11:15 - 12:00
Circular Projection for Novel Designs in Text Visualization
Raphael Buchmüller, PhD Student at University of Konstanz (Germany)
Abstract
Circular Projection (cPro) is an approach for projecting high-dimensional data such as word embeddings. By utilizing radial symmetry and a user-defined origin, cPro allows for flexible and interpretable exploration of semantic relationships in text data. This talk will highlight how cPro enables focused analysis of word groupings, revealing underlying patterns and facilitating intuitive, visually compelling representations of linguistic structures.
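As a toy illustration of radial layouts around a user-defined origin (explicitly not the published cPro algorithm), one can re-express a 2-D projection of word vectors in polar coordinates relative to a chosen word, so that angle and radius carry each word’s relationship to that origin; the word list and random vectors below are placeholders for real embeddings.

```python
# Toy radial re-expression of projected word vectors around an origin word.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
words = ["king", "queen", "man", "woman", "apple", "pear"]
vectors = rng.normal(size=(len(words), 50))   # stand-in for real embeddings

coords = PCA(n_components=2).fit_transform(vectors)
origin = coords[words.index("king")]          # user-defined origin
rel = coords - origin
radius = np.linalg.norm(rel, axis=1)          # distance from the origin word
angle = np.arctan2(rel[:, 1], rel[:, 0])      # direction relative to it

for w, r, a in zip(words, radius, angle):
    print(f"{w:>6}: r={r:.2f}, angle={np.degrees(a):6.1f} deg")
```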
Biography
Raphael Buchmüller is a researcher at the University of Konstanz, where he has worked at the Data Analysis and Visualization chair of Prof. Keim since 2022. He works at the intersection of data analysis, visualization, computational linguistics, and explainable artificial intelligence, with a particular research focus on Visual Text Analytics and User-centered Representation, and further interests in the digital humanities and network analysis. Raphael holds a Master’s degree in Information Engineering from the University of Konstanz and NTNU in Trondheim.

Tuesday, May 6
09:15 - 10:00
Language model-based text visualizations for digital humanities
Maria Skeppstedt, Research engineer at Uppsala University (Sweden)
Abstract
Many visualization techniques used within digital humanities have been borrowed from other research fields that are more focused on quantitative approaches. This talk discusses our novel text visualization methods, which instead have been tailored to the needs of researchers within the humanities. The methods — Word Rain and Topic Timelines — are built on different types of language models, and aim to provide the user with an overview of large text collections, as well as with the possibility to zoom in to access content on a more detailed level. The talk will describe the theory behind the methods, as well as provide examples of how researchers interact with the visualizations to explore large text collections.
Biography
Maria Skeppstedt received her PhD in Computer and Systems Sciences in 2015. After a few years of postdoctoral research in applied natural language processing, she has spent the past six years working within research infrastructures, developing tools for searching, processing, annotating, and visualizing terms and text. As a research engineer at the Centre for Digital Humanities and Social Sciences at Uppsala University, she specializes in creating new text visualization techniques tailored to the needs of researchers within the humanities.

10:15 - 11:00
On the Relationship of Explainable Artificial Intelligence and Essential Complexity – Findings and the Application of the SDMX Business Case
Tim Barz-Cech, Researcher at the University of Potsdam (Germany)
Biography
Tim Barz-Cech is with HMS Analytical Software GmbH and the University of Potsdam. Since February 2025, he has worked as a Data Scientist for HMS Analytical Software GmbH. Before that, he worked as a scientific staff member (pre-doc) at the University of Potsdam for three years. Currently, his PhD thesis is under review.

Monday, May 19
09:15 - 10:00
Designing Explanation Mechanisms for Transformer-based Models in Industrial Applications
Elmira Zohrevandi, Postdoctoral researcher at Linköping University (Sweden)
Abstract
Designing explainable AI (XAI) solutions for process industries is a non-trivial challenge, as these industries deal with large multivariate time-series sensor data. Moreover, the integration of predictive machine learning models into operators’ workflows confronts them with the challenging task of understanding how the model operates. This calls for developing explanation mechanisms that encourage operators to engage in collaboration with the model. In this talk, I will present the design of two interactive linked- and coordinated-views dashboards for operators of time-constant processes. Both interfaces feature a time-series transformer-based model that predicts the process outcomes. This inclusion is vital, as time-series analysis provides the granularity and precision needed for accurate forecasting in time-constant processes. I will further demonstrate the versatility of both dashboards by showcasing the applicability of the designed visual encodings to the operators of two process industries: copper mining and paper-pulp production.
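As a schematic of the kind of model involved (an assumed toy architecture, not the system presented in the talk; the `TinyForecaster` name and all sizes are hypothetical), a transformer encoder can map a window of multivariate sensor readings to a one-step-ahead forecast of a process outcome.

```python
# Minimal PyTorch sketch of a transformer-based time-series forecaster.
import torch
import torch.nn as nn

class TinyForecaster(nn.Module):
    def __init__(self, n_sensors=8, d_model=32, nhead=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_sensors, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)      # predicted process outcome

    def forward(self, x):                      # x: (batch, time, n_sensors)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])             # read out the last time step

model = TinyForecaster()
window = torch.randn(16, 64, 8)                # 16 windows, 64 steps, 8 sensors
print(model(window).shape)                     # -> torch.Size([16, 1])
```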
Biography
Elmira Zohrevandi is a postdoctoral researcher in information visualization with a focus on the design and evaluation of visual analytics interfaces that strengthen human-automation collaboration.
Automation has helped human operators accomplish several tasks in limited time, and the human’s role in automated environments is therefore shifting from operational to supervisory tasks. In complex environments, supervisory tasks can become difficult to manage, as the operator needs a clear understanding of the different operational levels in the system in order to make decisions safely and efficiently. Elmira Zohrevandi’s work focuses on strengthening human analytical reasoning by visualizing the constraints and relationships between system parameters. Her research expertise includes domain problem characterization through work domain analysis, visual encoding design, evaluation study design, and simulation design.
Elmira Zohrevandi holds an M.Sc. degree in Transportation Systems Engineering and Logistics from the division of Communications and Transport Systems (KTS) and a Ph.D. degree in Visualization and Media Technology from the division of Media and Information Technology (MIT) at the Department of Science and Technology (ITN), Linköping University.

10:15 - 11:00
On Fairness and Privacy: Common Goals towards Trustworthy ML
Vlad González, Lecturer at Newcastle University (UK)
Abstract
Privacy protection for personal data and fairness in automated decisions are fundamental requirements for responsible Machine Learning. Both may be enforced through data preprocessing and share a common target: data should remain useful for a task, while becoming uninformative of the sensitive information. The intrinsic connection between privacy and fairness implies that modifications performed to guarantee one of these goals may have an effect on the other; e.g., hiding a sensitive attribute from a classification algorithm might prevent a biased decision rule from using that attribute as a criterion. This work resides at the intersection of algorithmic fairness and privacy. We show how the two goals are compatible, and may be simultaneously achieved, with a small loss in predictive performance.
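A toy sketch of this dual objective on synthetic data (the dataset and the suppression step are hypothetical): after preprocessing, the data should still support the task, while a classifier trained to recover the sensitive attribute should drop to chance level.

```python
# Check task usefulness vs. sensitive-attribute leakage before/after preprocessing.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 2000
s = rng.integers(0, 2, n)                 # sensitive attribute
X = rng.normal(size=(n, 4))
X[:, 0] += s                              # one feature leaks s
y = (X[:, 1] + X[:, 2] > 0).astype(int)   # task label, independent of s

def report(name, data):
    task = cross_val_score(LogisticRegression(), data, y, cv=5).mean()
    leak = cross_val_score(LogisticRegression(), data, s, cv=5).mean()
    print(f"{name}: task acc={task:.2f}, sensitive-attribute acc={leak:.2f}")

report("raw data        ", X)
report("leaky column out", X[:, 1:])      # preprocessing: suppress the leak
```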
Biography
Dr Vlad Gonzalez-Zelaya is a Lecturer in Machine Learning and Data Science at Newcastle University, UK. His research interests are centred around Responsible AI, with a focus on algorithmic fairness and data privacy. He has developed several fairness-correcting methods for classification tasks based on data pre-processing, publishing his findings in journals such as ACM TKDD and presenting them at conferences such as ICDE, KDD, EDBT, and MDAI. His other research interests include responsible federated learning, fair resource allocation mechanisms, and combinatorial game theory.

11:15 - 12:00
Causal inference to support trustworthiness
Cornelia Käsbohrer, PhD student at Umeå University (Sweden)
Abstract
Artificial intelligence (AI) is increasingly being used to assist medical professionals in diagnosing diseases and making treatment recommendations. However, AI models often rely on correlations rather than true causal relationships, which can lead to misleading predictions and unintended biases. To build trustworthy AI systems, it is essential to move beyond correlations and ensure that AI decisions are causally grounded — particularly in high-stakes domains like healthcare.
Causal inference provides a framework to identify and understand cause-and-effect relationships in medical decision-making. By leveraging techniques such as structural causal models (SCMs), counterfactual reasoning, and mediation analysis, we can gain deeper insights into why an AI model makes certain predictions and whether biased or spurious factors influence those predictions. In this presentation, we explore how causal visualization tools and human-in-the-loop approaches can empower medical professionals to interrogate AI decisions, simulate hypothetical interventions, and assess the robustness of AI recommendations.
We will demonstrate how causal inference can enhance transparency, fairness, and accountability in healthcare AI through real-world examples. By integrating causal reasoning into model evaluation, we can create AI systems that are not only predictive but also actionable, interpretable, and ethically sound.
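A minimal illustrative structural causal model (an assumed example, not one from the talk) shows why correlational and interventional estimates diverge under confounding: a confounder Z drives both treatment T and outcome Y, so the observed T-Y association overstates the true causal effect that the do-intervention recovers.

```python
# Tiny SCM: observed association vs. do-intervention under confounding.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def sample(do_t=None):
    z = rng.normal(size=n)                          # confounder (e.g., severity)
    t = (z + rng.normal(size=n) > 0).astype(float)  # treatment depends on z
    if do_t is not None:
        t = np.full(n, float(do_t))                 # intervention: set T by fiat
    y = 2.0 * t + 3.0 * z + rng.normal(size=n)      # true effect of T on Y is 2
    return t, y

t, y = sample()
print("observed difference:", y[t == 1].mean() - y[t == 0].mean())  # biased, > 2
_, y1 = sample(do_t=1)
_, y0 = sample(do_t=0)
print("do-intervention effect:", y1.mean() - y0.mean())             # approx. 2
```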
Biography
Cornelia Käsbohrer is a PhD student at Umeå University, where she is working on fairness in decision-making systems for healthcare. She explores the intersection of causal inference, fairness-aware machine learning, and interpretability. Her work includes analyzing fairness in datasets and applying structural causal models to evaluate biases. Cornelia is also interested in combining statistical fairness metrics with human-in-the-loop methodologies. In particular, she is curious about how visualization can enhance understanding and decision-making in AI systems. Prior to her PhD studies, Cornelia received a Master’s degree in Industrial Mathematics (University of Hamburg) focusing on Machine Learning and a Bachelor’s degree in Mathematics (Technical University of Darmstadt).

Tuesday, May 20
09:15 - 10:00
Who’s In Charge Here? Exploring the Paradox of Human-In-the-loop, Automated Data Science
Jen Rogers, Researcher and Engineer at the Idaho National Laboratory (USA)
Abstract
Automated Data Science can lower the barrier to data work, making it more efficient and accessible. However, even with current technology, we still depend on human intervention. This talk explores the complex interplay between automation and human involvement in data science from two perspectives that traditionally lie at opposite ends of the human-machine axis: Automated Data Science and Visualization. Ultimately, we leave you with the question: What should we automate?
Biography
Jen Rogers is a researcher and visualization engineer at Idaho National Lab’s Applied Visualization Lab. Her work focuses on scientific visualization, visualization for high-dimensional data analysis, and human factors in AI/ML.
She received her Ph.D. in Computing from the Scientific Computing and Imaging Institute at the University of Utah. When she is not working, she spends much of her time climbing and skiing near her home in Teton Valley.

10:15 - 11:00
Interactive Visualization for Interpretable Machine Learning
Dennis Collaris, Postdoc at Eindhoven University of Technology (the Netherlands)
Abstract
Machine learning is a very popular technique for making automatic predictions based on data. This enables businesses to make sense of their data and make predictions about future events. But the modern machine learning models we use are typically very complex and difficult to understand: we only know what the model predicts, not why it decided this or which crucial factors led to that prediction. Currently, there is a strong demand for understanding how specific models operate and how certain decisions are made, which is particularly important in high-impact domains such as credit, employment, and housing. In these cases, the decisions made using machine learning can significantly impact the lives of real people. The field of eXplainable Artificial Intelligence (XAI) aims to help experts understand complex machine learning models. In recent years, various techniques have been proposed to open up the black box of machine learning. However, because interpretability is an inherently subjective concept, it remains challenging to define what a good explanation is. To address this, we argue we should actively involve data scientists in the process of generating explanations and leverage their expertise in the domain and machine learning. Interactive visualization provides an excellent opportunity to both involve and empower experts.
In my work, we collaborated with Achmea, a large insurance company in the Netherlands. In our discussions with their data scientists, we noticed that different teams required different perspectives to interpret their models, ranging from local explanation of single predictions to global explanation of the entire model. In this talk, we explore interactive visualization approaches for machine learning interpretation from these different (and often complementary) perspectives, ranging from local explanation of single predictions to global explanation of the entire model.
Biography
Dennis Collaris is a postdoctoral researcher at Eindhoven University of Technology focusing on visual analytics for interpretable machine learning, as well as a scientific programmer at Utrecht University and the founder of Xaiva, a startup offering a human oversight platform for AI to understand and explain decision-making through interactive visualisation.

11:15 - 12:00
Title TBC
Xinhuan Shu, Lecturer at Newcastle University (UK)
Biography
Xinhuan Shu is a lecturer (assistant professor) at the School of Computing, Newcastle University. Prior to that, she was a postdoc at VisHub at the University of Edinburgh and received her PhD from the Hong Kong University of Science and Technology (HKUST). Her research aims to engage humans in communicating, interacting with, and making use of data through visualization and AI. She focuses on designing expressive visualization techniques and human-AI interfaces that facilitate a broad spectrum of data activities, such as data transformation, analysis, decision-making, and storytelling, with a strong emphasis on promoting data literacy and creativity for all.

Wednesday, May 21
09:15 - 10:00
Visual Analytics for Facilitating AI-Accelerated Scientific Discovery
Shayan Monadjemi, Visual analytics research scientist at ORNL (USA)
Abstract
The discovery of novel materials and designs has led to many exciting advances in human history. These discoveries have resulted in more efficient aircraft and more powerful computing chips, and they are also expected to lead to the reliable quantum computers of the future. The underlying scientific discovery process involves iterations of designing experiments, collecting data, and understanding the data. In the era of AI, scientists are interested in automating parts of this lengthy and expensive pipeline in order to build futuristic scientific labs where AI and humans collaborate to accelerate discovery. This talk will discuss how mixed-initiative visual analytics can be a key facilitator of AI-accelerated scientific discovery. We will discuss insights from an interview study and demonstrate how they have informed the design of a visual analytics tool targeted towards domain scientists.
Biography
Shayan Monadjemi is a Visual Analytics Research Scientist at Oak Ridge National Laboratory. He conducts research on mixed-initiative systems that guide users in data exploration and discovery. Catering to domain experts, his research aims to make data-driven and AI-guided decision making more accessible by lowering the technical skills needed to utilize intelligent systems in domains including scientific discovery, cybersecurity, advanced manufacturing, and grid resilience. Prior to joining ORNL, Shayan Monadjemi pursued his PhD at Washington University in St. Louis under the supervision of Professors Alvitta Ottley and Roman Garnett. For more information, please visit https://smonadjemi.github.io.

10:15 - 11:00
Title TBC
Igor Cherepanov, Associate researcher at Fraunhofer IGD (Germany)
Biography
Igor Cherepanov is an associate researcher at the Fraunhofer Institute for Computer Graphics Research, supervised by Jörn Kohlhammer. He received his master’s degree in computer science with a focus on machine learning from the Technical University of Darmstadt. His research interests lie at the intersection of machine learning and visual analytics in cybersecurity. His work centers on explainable AI, aiming to enhance the understanding of cybersecurity data while improving the transparency and interpretability of machine learning models.
