Upcoming InDeep Masterclass:
Interpreting and Understanding LLMs and Other Deep Learning Models
You are warmly invited to our next InDeep Masterclass, Interpreting and Understanding LLMs and Other Deep Learning Models, scheduled for Thursday 4 December 2025.
This masterclass will feature a series of presentations and a hands-on tutorial focused on explainability techniques for Large Language Models (LLMs) and other deep learning architectures. Participants will gain both conceptual insight and practical experience in interpreting and understanding the inner workings of modern AI systems. We look forward to welcoming you to this engaging and informative event.
Registration is now closed.
Event details
Date: Thursday 4 December 2025
Time: 8:45 – 12:30
Location: Deloitte Amsterdam, Gustav Mahlerlaan 2970, Amsterdam
Participation is free of charge, but places are limited.
8:45 – 9:00 Walk-in
9:00 – 11:00 Talks and Discussion
– Gabriele Sarti (RUG Groningen) – Interpreting Large Language Models
– Jelle Zuidema (UvA) – Too Much Information: How do we generate comprehensible and faithful explanations when the input or output is… just a bit much?
– Antske Fokkens (VU) – Explanatory Evaluation and Robustness
11:00 – 11:30 Break
11:30 – 12:30 Hands-on Session with Gabriele Sarti
Speakers:
Gabriele Sarti (RUG Groningen) – Interpreting Large Language Models
This presentation will provide a general introduction to popular interpretability approaches for studying large language models. In particular, we will focus on attribution methods to identify the influence of context on model predictions, and on mechanistic techniques to locate and intervene on model knowledge and behaviors.
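To give readers a concrete flavour of input attribution ahead of the talk, the sketch below computes gradient-times-input saliency scores for a small causal language model. It is only an illustration, not the masterclass material, and it assumes the Hugging Face transformers and torch packages and the publicly available gpt2 checkpoint; the prompt string is made up for the example.

# Minimal, illustrative gradient-times-input attribution sketch (not the masterclass
# material). Assumes the Hugging Face `transformers` and `torch` packages and the
# public "gpt2" checkpoint; any other causal LM checkpoint would work similarly.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The InDeep masterclass takes place in"
inputs = tokenizer(text, return_tensors="pt")

# Embed the tokens ourselves (detached, as a leaf tensor) so we can take gradients
# with respect to the input embeddings.
embeds = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeds.requires_grad_(True)

outputs = model(inputs_embeds=embeds, attention_mask=inputs["attention_mask"])
next_token_logits = outputs.logits[0, -1]     # logits for the next-token prediction
predicted_id = int(next_token_logits.argmax())
next_token_logits[predicted_id].backward()    # gradient of the top logit w.r.t. the embeddings

# Gradient-times-input gives one relevance score per input token: how much each
# context token contributed to the model's top next-token prediction.
scores = (embeds.grad * embeds).sum(dim=-1).squeeze(0)
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), scores):
    print(f"{token:>15s}  {score.item():+.4f}")

More elaborate attribution methods refine this basic idea, for example by integrating gradients along a path from a baseline input or by contrasting alternative output tokens.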
Jelle Zuidema (UvA) – Too Much Information: How do we generate comprehensible and faithful explanations when the input or output is… just a bit much?
As Generative AI models are deployed in more and more situations, the need to explain their outputs becomes ever more pressing. Good explanations are both comprehensible and faithful to the true underlying causes of the output. Many classic XAI methods fail on the faithfulness criterion; the field of mechanistic interpretability is making progress, but many of its ‘interpretations’ fail on the comprehensibility criterion, especially in situations with long inputs (such as thousands of documents in a summarization or RAG pipeline) or long outputs (such as those of reasoning models). I will briefly discuss RAG attribution methods (and their limitations) and ‘reasoning traces’ (and their own faithfulness problem), and then sketch a possible, but not yet established, way out based on surrogate models.
Antske Fokkens (VU) – Explanatory Evaluation and Robustness
Gaining insight into how Generative AI models work is far from trivial. Current explanations of how a model came to a decision are either difficult to interpret or do not necessarily provide faithful information about how the decision was actually made. An alternative approach is to focus purely on model behavior. In this session, we will present challenge sets: systematic tests that are specifically designed to identify what a model can do and where it fails.
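As a toy illustration of this behavioral approach (a hypothetical example, not material from the talk), the sketch below probes a masked language model with a negation minimal pair. It assumes the Hugging Face transformers package and the public bert-base-uncased checkpoint.

# Toy, hypothetical challenge-set item illustrating behavioral testing with a minimal
# pair (not material from the talk). Assumes the Hugging Face `transformers` package
# and the public "bert-base-uncased" checkpoint.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Two near-identical inputs that should lead to different predictions: a robust model
# should rank "bird" much lower in the negated sentence.
challenge_items = [
    "A robin is a [MASK].",
    "A robin is not a [MASK].",
]

for sentence in challenge_items:
    predictions = fill(sentence, top_k=5)
    top_tokens = [p["token_str"] for p in predictions]
    print(f"{sentence!r}: top-5 fillers {top_tokens}")

A full challenge set would cover many such phenomena systematically and score the model's outputs against the expected behavior.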
October Workshop: How Computing is Changing the World

Workshop: “How Computing is Changing the World: Exploring Synergies and Challenges of AI and Quantum Technologies for Society”
In October 2025, the QISS and InDeep research groups met at the Sustainalab (Matrix One) in Amsterdam for a joint workshop exploring the synergies and challenges of AI and quantum technologies. As both fields advance rapidly, the workshop focused on their technological progress as well as their ethical, legal, and societal implications.
The morning session addressed technological perspectives. After a welcome, InDeep project leader Jelle Zuidema outlined the current state of AI, highlighting the black-box problem. Mehrnoosh Sadrzadeh (UCL) and Martha Lewis (UvA) followed with quantum-inspired approaches to modelling language and vision in AI. Ronald de Wolf (CWI & UvA, QuSoft) concluded with an overview of quantum computing, linking it to Zuidema’s AI analysis. A panel discussion with Christian Schaffner (UvA, QuSoft), chaired by Joris van Hoboken (UvA), explored common challenges, research stages, and opportunities across the two domains.

The afternoon session focused on societal themes. Evert van Nieuwenburg (Leiden University) opened with a presentation on how quantum games, such as Quantum Tic-Tac-Toe, can help people learn quantum principles. Eline de Jong (UvA) shared five lessons for embedding new technologies in society, suggesting they apply to quantum tech as well. Angela van Sprang (UvA, ILLC) presented concept-bottleneck models as tools for improved AI oversight, and Matteo Fabbri (UvA) argued for proactive standards for recommender systems and other emerging technologies. In the panel discussion, led by Zuidema, participants from both AI and quantum backgrounds reflected on responsible innovation and the risks and opportunities of uncertain technological futures.

The workshop offered an engaging exchange at the intersection of technology, ethics, and society. Participants valued the lively discussions and expressed enthusiasm for organising a follow-up event in 2026, potentially establishing a recurring series on the intersections of AI and quantum technologies.
The Quantum Impact on Societal Security (QISS) consortium is based at the University of Amsterdam and analyses the ethical, legal, and societal impact of the upcoming society-wide transition to quantum-safe cryptography. The consortium’s objective is to contribute to the creation of a Dutch ecosystem where quantum-safe cryptography can thrive, and to mobilize this ecosystem to align technological applications with ethical, legal, and social values.
This workshop was made possible with funding from the Quantum Delta Netherlands growth fund program.
InDeep at Interspeech 2025

Many InDeep researchers presented at the Interspeech 2025 conference in Rotterdam, one of the largest and most prominent conferences on speech and speech technology worldwide. With so many of us attending, we also organized a social event with drinks and snacks.
This was InDeep at Interspeech 2025:
Sunday 17 August
– Speech Science Festival, 10.00-17.30: InDeep provided a demonstration of emotion decoding and removal, one of many demonstrations for a broad audience.
– Tutorial, 15.30-18.30: InDeep members organized the tutorial Interpretability Techniques for Speech Models.
– InDeep Social, 19.00: InDeep held a social event with drinks and snacks in the Stuurboord bar at the Foodhallen.
Monday 18 August
– Special Session on Interpretability in Audio and Speech Technology, 11.00-13.00, with three papers by InDeep authors:
+ On the reliability of feature attribution methods for speech classification.
+ Word stress in self-supervised speech models: A cross-linguistic comparison.
+ What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training.
Consortium Meeting Groningen

On Friday 20 September 2024, InDeep met in the Groningen House of Connections for the next Consortium Meeting. In the morning, RUG researchers Daniel Herrmann (Philosophy of AI) and Yevgen Matusevych (Cognitive Plausibility of Modern LMs) gave illuminating talks on standards for belief representations in LLMs and on bias in visually grounded speech models.

In the afternoon, the InDeep academic partners shared highlights of their work, researchers presented their latest results in a poster session, and there were fruitful discussions on the future of interpretability methods in language, speech and music models and on future directions of the InDeep project.

Consortium Meeting Amsterdam

On Friday 8 March 2024, InDeep held a Consortium Meeting in the Amsterdam University Library. The day featured talks on recent research progress, a tutorial, and ample opportunity to discuss the strengths and weaknesses of interpretability techniques in text, translation, speech and music.
InDeep Masterclass: Explaining Foundation Models


On Thursday 2 November 2023, InDeep organized a Masterclass on Explaining Foundation Models. The event featured inspiring talks, hands-on experience with explainability techniques for Large Language Models, and the opportunity to share insights and experiences. After the Masterclass there was a meetup organized by InDeep and Amsterdam AI on Alternatives for ChatGPT: How good are open-source and in-house LLMs? To learn more about this event, you can download an overview of the presentations and slides here.