Hosein Mohebbi is a PhD candidate at Tilburg University, supervised by Afra Alishahi, Willem Zuidema, and Grzegorz Chrupała. His research focuses on developing analysis methods that can faithfully elucidate the interplay and flow of information within deep neural models of language (both written and spoken). Previously, he completed his Master’s in Artificial Intelligence at Iran University of Science and Technology, where his research revolved around the interpretation of pre-trained language models and the utilization of interpretability techniques to accelerate their inference time. His research has been published in leading NLP venues, including ACL, EACL, and EMNLP. He is currently one of the organizers of BlackboxNLP, a workshop focusing on analyzing and interpreting neural networks for NLP.
Gaofei Shen started his PhD at Tilburg University, working with Grzegorz Chrupała on "Analysis and control techniques for spoken language applications." He completed his master's in Voice Technology at the University of Groningen, where his thesis focused on multilingual speech recognition for Dutch and Frisian using pre-trained wav2vec 2.0 models. His bachelor's degree was in linguistics from Reed College, where he wrote a thesis on code-switching grammar preferences of Mandarin-English bilingual speakers. He will explore some of the topics that emerged from his master's thesis, such as using intermediate-layer outputs from wav2vec 2.0 models for language identification.
Gabriele Sarti is a PhD candidate at the Center for Language and Cognition of the University of Groningen, supervised by Arianna Bisazza, Malvina Nissim and Grzegorz Chrupała. His research adopts a user-centric perspective on interpretability for generative language models, with particular attention to the machine translation task. Previously, he worked as an applied scientist intern at AWS AI Labs and as a research scientist for the Italian synthetic data startup Aindo. He obtained his MSc in Data Science at the University of Trieste in Italy, with a thesis on interpreting NLMs trained on human behavioral data in collaboration with the ILC-CNR in Pisa.
Charlotte Pouw is a PhD student at the University of Amsterdam, working with Jelle Zuidema (UvA) and Afra Alishahi (Tilburg University) on “Interpretability Methods for Text and Audio”. Charlotte received her bachelor’s degree in Linguistics at Leiden University, and her master’s degree in Human Language Technology at Vrije Universiteit Amsterdam, with a thesis supervised by InDeep’s Lisa Beinborn and Antske Fokkens. She also worked on the Network Institute project “Interpretability Metrics for Neural Models of Text Adequacy”, for which she received the prize for Best Academy Project of 2021-2022.
Jane Arleth dela Cruz started as a PhD student at Radboud University, working with Iris Hendrickx and Martha Larson on the project "Interpretable Information Extraction for Disaster Relief" together with Floodtags. Prior to joining InDeep, Jane worked as a Data Scientist at Ayala Corporation, one of the largest and most diversified business groups in the Philippines with a commitment to national development. There, she applied data analysis and machine learning to business problems aligned with the group's C-level strategy and agenda. She holds both a Master's and a Bachelor's degree in Electronics Engineering from Ateneo de Manila University. She has conducted research and published papers on disaster-resilient communication systems, and completed research internships at CERN in Switzerland and at Weathernews Inc. and the Nara Institute of Science and Technology, both in Japan.
Marcel Vélez started his PhD position at the University of Amsterdam, working with Chordify, Ashley Burgoyne, Henkjan Honing, and Jelle Zuidema on "explainable AI for measuring playability from musical audio". He obtained both his bachelor's and master's degrees in Artificial Intelligence at the University of Amsterdam. For his master's thesis, he proposed a generalization of CLMR and a novel use of U-Nets (presented at ISMIR 2022). Recently, Marcel has proposed a rhythm guitar playability metric and dataset (to be presented at ISMIR 2023). He will work on, among other things, probing music machine learning models and music representations.
Jonathan Kamp is a PhD candidate at the Computational Linguistics and Text Mining Lab (CLTL) at Vrije Universiteit Amsterdam, working with Antske Fokkens and Lisa Beinborn on interpretability in argument mining. His research focuses on evaluating interpretability methods and approaching interpretability from a user perspective. Prior to InDeep, Jonathan worked as a Data Scientist at Converz Analytics, specializing in conversational data in a business setting. He obtained his master’s degree in Linguistics from Utrecht University, with a thesis on statistical modeling at the syntax-semantics interface, combining linguistic theory and machine learning.