
This spring, Linköping University hosted a five-week ELLIIT focus period dedicated to Visualization-Empowered Human-in-the-Loop Artificial Intelligence. Among the invited international researchers was Dennis Collaris, a postdoctoral researcher at Eindhoven University of Technology, a scientific programmer at Utrecht University, and founder of the startup Xaiva.
“Being together with so many colleagues working on human-in-the-loop AI was very nice. We generated new research ideas and started several promising projects during the stay,” says Collaris.
Five weeks of research, exchange, and ideation
Dennis Collaris participated for the full duration of the focus period – including two research blocks, a one-week symposium, and a public seminar series. Throughout, he contributed actively to both project work and discussions.
“I was involved in a large literature survey on the role of human feedback in AI – we reviewed 2943 papers in just two weeks. We also built a working proof of concept for generating natural language descriptions of clusters using LLMs,” he says.
In addition, Collaris joined efforts to:
- Develop methods to measure broader desiderata of machine learning models beyond accuracy.
- Explore how users might adapt explanation granularity based on their needs.
- Present research findings during the public seminar series and interact with leading figures in XAI during the symposium week.
“All in all, it was a very packed five weeks,” Collaris says with a smile.

Visual analytics for interpretable machine learning
Collaris’ research sits at the intersection of visualization and machine learning. Through the platform explaining.ml, he shares visual analytics tools designed to help data scientists understand and explain complex models. His startup, Xaiva (link below), builds on this work by offering interactive visualizations that support human oversight in AI.
“My own research was a perfect fit with the subject of this year’s focus period. In addition, my industry perspective from the startup helped steer discussions toward solutions that I think would truly benefit real-world applications,” says Collaris.
Breaking traditional boundaries in XAI
Collaris held two public seminars during the focus period, sharing insights from his PhD and postdoctoral research. His talks introduced a series of tools and concepts for going beyond standard paradigms in explainable AI.
“In explainable AI we often distinguish between local explanations concerning individual predictions, and global explanations concerning the model as a whole. However, in my work we break free from this XAI local/global explanation paradigm, and instead combine multiple, complementary perspectives on the model,” he says.
“For instance, I talked about ExplainExplore, a tool that helps explain individual predictions, which is traditionally a local approach, but adds more context by showing how predictions and explanations change under small input perturbations, to verify their robustness,” Collaris continues.
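The general idea behind such a robustness check can be sketched in a few lines of code: compute a local explanation for one instance, nudge the input slightly, and see whether the prediction and the explanation hold up. The snippet below is only a minimal illustration of that idea, not ExplainExplore itself; the scikit-learn dataset, the random-forest model, and the simple occlusion-based importance measure are assumptions made for the example.

```python
# Minimal sketch (not ExplainExplore): test whether a local explanation
# stays stable when the input is perturbed slightly.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def local_importance(x, background):
    """Occlusion-based local explanation: for each feature, replace its
    value with the dataset mean and measure the change in the predicted
    probability of the positive class."""
    base = model.predict_proba(x.reshape(1, -1))[0, 1]
    imp = np.empty(len(x))
    for j in range(len(x)):
        x_occ = x.copy()
        x_occ[j] = background[j]
        imp[j] = base - model.predict_proba(x_occ.reshape(1, -1))[0, 1]
    return imp

background = X.mean(axis=0)
x0 = X[0]
expl0 = local_importance(x0, background)

# Perturb the instance slightly and compare predictions and explanations;
# large shifts would signal a fragile, untrustworthy explanation.
rng = np.random.default_rng(0)
for _ in range(5):
    x_pert = x0 + rng.normal(scale=0.01 * X.std(axis=0))
    expl_p = local_importance(x_pert, background)
    dp = abs(model.predict_proba(x_pert.reshape(1, -1))[0, 1]
             - model.predict_proba(x0.reshape(1, -1))[0, 1])
    de = np.linalg.norm(expl_p - expl0)
    print(f"prediction shift: {dp:.3f}, explanation shift: {de:.3f}")
```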
His second talk featured a global explanation system called StrategyAtlas, which reveals the different strategies a model might use for the same prediction. He also previewed a recent user study testing whether people actually learn from XAI explanations over time.
Collaboration and real-world relevance
Reflecting on the focus period, Collaris says he leaves Linköping with a network of new collaborators, ideas for continued research, and a sharpened view of real-world impact.
“In addition to being fully up to date with current academic work, I think it also became clear that a lot more work is needed in order to fully understand the effectiveness of AI explanations,” says Collaris.
Would he come back?
“I am grateful for the opportunity to join this year’s focus period – it was a blast, and I certainly learned a lot. I would absolutely recommend other researchers to apply!”
Xaiva
Read more about the startup Xaiva.
ELLIIT Focus Period
Read more about the focus period on Visualization-Empowered Human-in-the-Loop Artificial Intelligence.