
Mulini Lab
Multimodality, Language, & Interpretability
Institute for Logic, Language and Computation (ILLC)
University of Amsterdam
About
The Multimodality, Language, and Interpretability Lab, led by Sandro Pezzelle, focuses on developing AI systems that understand and use language as humans do. Our research spans Natural Language Processing (NLP), machine learning, and cognitive science, combining ideas and methods from linguistic theory, behavioral and brain studies, and multimodal communication. Our goal is to develop systems that communicate naturally and collaborate effectively with people, guided by the long-term vision of creating language technology that truly serves humans.
Research lines
What we focus on, a little more concretely
Understanding and narration of visual events: How good are current vision-language models (VLMs) at understanding and narrating visual events, and how can we evaluate these skills? Can we improve narration abilities by leveraging human behavioral and cognitive patterns?
- Where is the multimodal goal post? On the Ability of Foundation Models to Recognize Contextually Important Moments (arXiv)
- Movie Facts and Fibs (MF2): A Benchmark for Long Movie Understanding (arXiv)
- Natural Language Generation from Visual Events: Challenges and Future Directions (arXiv)
- Not (yet) the whole story: Evaluating Visual Storytelling Requires More than Measuring Coherence, Grounding, and Repetition (Findings EMNLP 2024)
Ambiguous, underspecified, and implicit language: How do large language models (LLMs) and VLMs deal with ambiguous (~multiple interpretations), underspecified (~missing information), and implicit (~implying or presupposing a message) language? Can we boost models’ semantic and pragmatic understanding using insights from linguistics?
- They want to pretend not to understand: The Limits of Current LLMs in Interpreting Implicit Content of Political Discourse (Findings ACL 2025)
- Do Pre-Trained Language Models Detect and Understand Semantic Underspecification? Ask the DUST! (Findings ACL 2024)
- Dealing with Semantic Underspecification in Multimodal NLP (ACL 2023)
Human-inspired mechanistic interpretability: What computational LLM/VLM subgraphs (circuits) and features are causally responsible for a certain specific behavior? Do they, to some extent, mirror the cognitive and neural mechanisms observed in humans?
- Latent Planning Emerges with Scale (ICLR 2026)
- Are formal and functional linguistic mechanisms dissociated in language models? (Computational Linguistics 2025)
- How Language Models Conflate Logical Validity with Plausibility: A Representational Analysis of Content Effects (arXiv)
- Have Faith in Faithfulness: Going Beyond Circuit Overlap When Finding Model Mechanisms (CoLM 2024)
Benchmarking and evaluation: Can LLMs be used as reliable tools in real-life communicative and collaborative contexts?
- Beyond Divergent Creativity: A Human-Based Evaluation of Creativity in Large Language Models (EACL 2026)
- From Tools to Teammates: Evaluating LLMs in Multi-Session Coding Interactions (ACL 2025)
- LLMs instead of Human Judges? A Large Scale Empirical Study across 20 NLP Evaluation Tasks (ACL 2025)
Members
Meet the lab members and contact them
Michael Hanna
Core member
ELLIS PhD candidate supervised by Sandro Pezzelle and Yonatan Belinkov (Technion)

Vera Neplenbroek
Affiliated member
PhD candidate supervised by Raquel Fernández and Arianna Bisazza (Groningen)

Joris Baan
Affiliated member
ELLIS PhD candidate supervised by Raquel Fernández and Barbara Plank (Munich)

Anna Bavaresco
Affiliated member
ELLIS PhD candidate supervised by Raquel Fernández and Sien Moens (Leuven)

News
Stay up to date on the latest news from the lab
- Feb 2026. Michael Hanna will visit Tal Linzen at NYU in New York, USA.
- Jan 2026. Aditya Surikuchi presented his work on the ability of foundation models to recognize contextually important moments in football games at the Vision-and-Language Lab at Utrecht University.
- Jan 2026. New arXiv preprints are out! Check out the latest work led by Aditya Surikuchi, Leonardo Bertolazzi, and Kumiko Nakajima!
- Jan 2026. One paper from our lab, led by Michael Hanna, has been accepted at ICLR 2026 and will be presented at the conference in Rio de Janeiro, Brazil!
- Jan 2026. Two papers from our lab have been accepted at EACL 2026 and will be presented at the conference in Rabat, Morocco, in March 2026! Congratulations to Anna Bavaresco and Kumiko Nakajima for leading the efforts!
- Jan 2026. Michael Hanna visited Michael Hahn at Saarland University in Saarbrücken, Germany.
- Oct 2025. Michael Hanna was awarded a Google PhD Fellowship for his work on mechanistic interpretability. Congratulations, Michael, on this big recognition!
- Sept 2025. Sandro gave a tutorial on language-and-vision models at CLIC-it 2025 in Cagliari.
- Sept 2025. We are happy to welcome Leonardo Bertolazzi to our lab for three months. Great to have you here, Leonardo!
- Sept 2025. Our paper "Are formal and functional linguistic mechanisms dissociated in language models?", led by Michael Hanna, has been published in the journal Computational Linguistics by MIT Press!
The word “mulini” [muˈliːni] means ‘mills’ in Italian. Mills are one of the symbols of the Netherlands, and they work only when a clean, natural force moves them. Like science with ideas.
Students, Guests, & Alumni
People who currently work with us or have done so in the past
Leonardo Bertolazzi, PhD candidate from the University of Trento
May Lee, Master AI student at the University of Amsterdam
Emma Boccaletti, Master AI student at the University of Amsterdam
Samuele Punzo, Master AI student at the University of Amsterdam
Tobias Groot, Master AI student at the University of Amsterdam & TNO
Mette Andersen, MoL student at the University of Amsterdam
Tianhao Dai, Master Digital Humanities at the EPFL Lausanne
Kumiko Nakajima, MBCS student at the University of Amsterdam (alumna)
Frank Wildenburg, MoL student at the University of Amsterdam (alumnus)
Zoë Prins, Master AI student at the University of Amsterdam (alumna)
Yunchong Chang, MoL student at the University of Amsterdam (alumnus)
Contact us
Get in touch with us for collaborations, questions, and interviews