Events


On Friday, March 8th, we will have the next consortium meeting of the InDeep project. This edition will take place in the Amsterdam University Library, Singel 425, room C1.13 (Belle van Zuylenzaal), between 10.00 and 17.30. It will be a day with talks on recent progress in research, a tutorial, and plenty of opportunity to discuss the strengths and weaknesses of interpretability techniques in text, translation, speech and music. By invitation only.

Program:

Morning program
10h00 Coffee & Welcome
10h30-12h30 Tutorial on Transformer-specific Interpretability Methods, by Hosein Mohebbi, Jaap Jumelet, Michael Hanna, Jelle Zuidema & Afra Alishahi.
The tutorial starts by briefly revisiting classic attribution techniques (integrated gradients, LIME, SHAP), discussing how they can be evaluated, and explaining why they are often less useful than hoped in Transformer-based language models. It then dives deeper into methods that exploit specific design features of Transformers to quantify context mixing, and methods that test for causal influences to reveal the information flow through Transformers. In the final part, the tutorial introduces the intuition and techniques behind the mechanistic interpretability methods that have suddenly become enormously popular in NLP and ML over the last year.
12h30-13h30 Lunch break (included)

Afternoon program
13h30-17h00 Presentations, Discussion and Brainstorm on Explainable AI for High-stakes Decisions
The program starts with a presentation by Vincent Slot, Textkernel’s R&D lead, on Responsible AI and transparency, and how Textkernel is dealing with this given the changes in legislation and the ethical responsibility to ensure a safe and fair job matching process. Other presentations will be given by researchers and developers (TBC) in our network on hybrid deep learning–symbolic methods, aimed at finding the best of both worlds: leveraging the power of deep learning to successfully process unstructured data, while producing symbolic, well-structured output that allows for formal verification, human interpretability, and the generation of explanations/justifications for high-stakes decisions.
We will reserve time for discussion, both in smaller groups and in plenary sessions, to explore use cases from the various industrial partners of InDeep and the applicability of the ‘interpretability toolkit’ that has grown enormously over the last two years.
17h00- Drinks


On Thursday 2 November 2023, InDeep organized a Masterclass on Explaining Foundation Models. The event featured inspiring talks, hands-on experience with explainability techniques for Large Language Models, and the opportunity to share insights and experiences. After the Masterclass there was a Meetup, organized by InDeep and Amsterdam AI, on Alternatives for ChatGPT: How good are open-source and in-house LLMs? To learn more about this event, you can download an overview of the presentations and slides here.