{"id":229,"date":"2023-07-10T13:00:33","date_gmt":"2023-07-10T13:00:33","guid":{"rendered":"https:\/\/projects.illc.uva.nl\/indeep\/?page_id=229"},"modified":"2025-12-02T21:18:12","modified_gmt":"2025-12-02T21:18:12","slug":"publications","status":"publish","type":"page","link":"https:\/\/projects.illc.uva.nl\/indeep\/publications\/","title":{"rendered":"Publications"},"content":{"rendered":"\n<p class=\"has-neve-link-hover-color-color has-text-color has-link-color has-large-font-size wp-elements-6d710ba55fd8e65f0c91f1f3bc0a2c84\"><strong>2025<\/strong><\/p>\n\n\n\n<p class=\"has-medium-font-size\"><\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2025.coling-main.135.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Basar, E., Sun, X., <strong>Hendrickx, I<\/strong>., Wit, J. de, Bosse, T., Bruijn, G. de, Bosch, J.A., Krahmer, E. How well can large language models reflect? A human evaluation of LLM-generated reflections for motivational interviewing dialogues. In <em>Proceedings of the 31st International Conference on Computational Linguistics<\/em>. 2025, pp. 
1964\u20131982.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-archive.org\/interspeech_2025\/bentum25_interspeech.html#\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Bentum, M., ten Bosch, L., Lentz, T.O.<\/strong> (2025) Word stress in self-supervised speech models: A cross-linguistic comparison. In <em>Proceedings Interspeech 2025<\/em>, 251-255.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2025.gem-1.49\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Cruz, J.A. dela<\/strong>, <strong>Hendrickx, I<\/strong>., and Larson, M. &nbsp;Improving Large Language Model Confidence Estimates using Extractive Rationales for Classification. 
In <em>Proceedings of GEM2 Workshop: Generation, Evaluation Metrics<\/em>, Vienna, Austria, 2025. Association for Computational Linguistics. 549\u2013560.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2025.findings-acl.836\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Cruz, J.A. dela<\/strong>, <strong>Hendrickx, I<\/strong>., and Larson, M. &nbsp;Evaluating Large Language Models for Confidence-based Check Set Selection. In<em> Findings of the Association for Computational Linguistics: ACL 2025<\/em>, pages 16249\u201316265, 2025. 
Association for Computational Linguistics.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-archive.org\/interspeech_2025\/deheerkloots25_interspeech.html#\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>de Heer Kloots, M., Mohebbi, H., Pouw, C., Shen, G., Zuidema, W., Bentum, M. <\/strong>(2025) What do self-supervised speech models know about Dutch? Analyzing advantages of language-specific pre-training. 
In <em>Proceedings Interspeech 2025<\/em>, 256-260.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2025.coling-main.293\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Khurana, U., Nalisnick, E., <strong>Fokkens, A.<\/strong> DefVerify: Do Hate Speech Models Reflect Their Dataset\u2019s Definition? In <em>Proceedings of the 31st International Conference on Computational Linguistics<\/em>. 2025, pp. 4341-4358.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/escholarship.org\/uc\/item\/5dr7h5tp#author\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Lian, Y., <strong>Bisazza, A.<\/strong>, and Verhoef, T. 
Simulating the Emergence of Differential Case Marking with Communicating Neural-Network Agents. <em>Proceedings of the Annual Meeting of the Cognitive Science Society<\/em>, 47, 2025, pp. 508\u2013515.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.clinjournal.org\/clinj\/article\/view\/198\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Lejeune, S. and <strong>Hendrickx<\/strong>, <strong>I<\/strong>. Location-focused translation of flooding events in news articles. 
In <em>Computational Linguistics in the Netherlands Journal,<\/em> 14, 241\u2013254, 2025.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/repository.tilburguniversity.edu\/server\/api\/core\/bitstreams\/515dc6da-cea9-4a46-a4f4-d472a5659927\/content\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Lepp, L., Shterionov, D., De Sisto, M., and<strong> Chrupa\u0142a, G.<\/strong> (2025). 
Co-Creation for Sign Language Processing and Translation Technology.<em> Information<\/em>, 16(4), 290.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/openreview.net\/forum?id=17yFbHmblo#discussion\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Madani, M.R.G., Gema, A.P., Zhao, Y., <strong>Sarti, G<\/strong>., Minervini, P., Passerini, A. Noiser: Bounded Input Perturbations for Attributing Large Language Models. In <em>Proceedings of the 2nd Conference on Language Modeling (COLM)<\/em>. 
2025.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/ismir2025program.ismir.net\/poster_77.html\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Morris, L., Newman, M., Tang, X., Singh, R., <strong>V\u00e9lez V\u00e1squez, M.<\/strong>, Leger, R., and Lee, J.H. (2025). Expanding the HAISP dataset: AI\u2019s impact on songwriting across two contests. In <em>Proceedings of the 26th International Conference of the Society for Music Information Retrieval<\/em>. Daejeon, Korea.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2025.conll-1.9\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Pouw, C.<\/strong>, <strong>Alishahi, A.<\/strong>, <strong>Zuidema, 
W<\/strong>. (2025) A Linguistically Motivated Analysis of Intonational Phrasing in Text-to-Speech Systems: Revealing Gaps in Syntactic Sensitivity. In <em>Proceedings of the 29th Conference on Computational Natural Language Learning<\/em>, pages 126\u2013140.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.cl-4.11\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Pouw, C<\/strong>., <strong>de Heer Kloots, M<\/strong>., <strong>Alishahi, A<\/strong>., <strong>Zuidema, W<\/strong>. (2025), Perception of phonological assimilation by neural speech recognition models. 
<em>Computational Linguistics<\/em>, 50(4):1557\u20131585.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/arxiv.org\/abs\/2503.03044\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Sarti, G.<\/strong>, Zouhar, V., <strong>Chrupa\u0142a<\/strong>,<strong> G.<\/strong>,&nbsp; Guerberof-Arenas, A., Nissim, M., <strong>Bisazza, A. <\/strong>2025. QE4PE: Word-level Quality Estimation for Human Post-Editing.&nbsp; In <em>Transactions of the Association for Computational Linguistics<\/em>. 
[Preprint]<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2025.emnlp-main.924\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Sarti, G., <\/strong>Zouhar, V., Nissim, M., <strong>Bisazza, A<\/strong>. Unsupervised Word-level Quality Estimation for Machine Translation Through the Lens of Annotators (Dis)agreement. In <em>Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing (EMNLP)<\/em>, pages 18320\u201318337.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/arxiv.org\/abs\/2505.16612\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Scalena, D., <strong>Sarti, G., Bisazza, A.<\/strong>, Fersini, E., Nissim, M. 
Steering Large Language Models for Machine Translation Personalization. 2025. [Preprint]<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-archive.org\/interspeech_2025\/shen25b_interspeech.html#\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Shen, G., Mohebbi, H., Bisazza, A., Alishahi, A., Chrupa\u0142a, G.<\/strong> (2025). On the reliability of feature attribution methods for speech classification. 
In <em>Proceedings Interspeech 2025<\/em>, 266-270.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/data.ru.nl\/collections\/ru\/cls\/blimp-nl_dsc_550\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Suijkerbuijk, M., Prins, Z., <strong>de Heer Kloots, M<\/strong>., <strong>Zuidema, W<\/strong>., Frank, S.L. (2025), BLiMP-NL: The Benchmark of Linguistic Minimal Pairs for Dutch. <em>Computational Linguistics<\/em>.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/ismir2025program.ismir.net\/poster_266.html\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>V\u00e9lez V\u00e1squez, M.<\/strong>, Baelemans, M., Driedger, J., <strong>Burgoyne, J.A.<\/strong> (2025). 
FretboardFlow: A dual-model approach to optimize chord voicing on the guitar fretboard. In <em>Proceedings of the 26th International Conference of the Society for Music Information Retrieval<\/em>. Daejeon, Korea.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/openreview.net\/forum?id=bmrYu2Ekdz\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">van der Wal, O., Lesci, P., Muller-Eberstein, M., Saphra, N., Schoelkopf, H., <strong>Zuidema, W<\/strong>. (2025), Polypythias: Stability and outliers across fifty language model pre-training runs. 
Proceedings ICLR&#8217;25.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2025.findings-emnlp.580\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Zhang, Y., \u00dcrker, E., Verhoef, T., Boleda, G., and <strong>Bisazza, A<\/strong>. NeLLCom-Lex: A Neural-agent Framework to Study the Interplay between Lexical Systems and Language Use. In <em>Findings of the Association for Computational Linguistics: EMNLP 2025<\/em>, pages 10929\u201310945, Suzhou, China. 
Association for Computational Linguistics.<\/p>\n\n\n\n<p class=\"has-text-color has-link-color has-large-font-size wp-elements-eeaece23e6f2c2912ea50a293eae9c03\" style=\"color:#17397d\"><strong>2024<\/strong><\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.blackboxnlp-1.21\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Amirzadeh, H., <strong>Alishahi, A., Mohebbi, H.<\/strong> How Language Models Prioritize Contextual Grammatical Cues?&nbsp; In <em>Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP<\/em>, 2024, pp. 
315\u2013336.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/link.springer.com\/article\/10.1007\/s11023-024-09695-9\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Bachmann, D., van der Wal, O., Chvojka, E., <strong>Zuidema, W. H.<\/strong>, van Maanen, L., Schulz, K. fl-IRT-ing with Psychometrics to Improve NLP Bias Measurement. <em>Minds and Machines<\/em>, 2024, <em>34<\/em>(4), Article 37, pp. 1-34.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-archive.org\/interspeech_2024\/bentum24_interspeech.html#\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Bentum, M<\/strong>.,<strong> ten Bosch, L<\/strong>., <strong>Lentz, T<\/strong>. 
The Processing of Stress in End-to-End Automatic Speech Recognition Models. In <em>Proc. Interspeech 2024<\/em>, pp. 2350-2354.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-63787-2_14\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Cruz, J.A. dela<\/strong>, <strong>Hendrickx, I<\/strong>., Larson, M. (2024). NoNE Found: Explaining the Output of Sequence-to-Sequence Models When No Named Entity Is Recognized. In: Longo, L., Lapuschkin, S., Seifert, C. (eds) <em>Explainable Artificial Intelligence. xAI 2024. Communications in Computer and Information Science<\/em>, vol 2153. 2024, pp. 
265-284.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/tacl_a_00651\/120650\/Are-Character-level-Translations-Worth-the-Wait\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Edman, L., <strong>Sarti, G.<\/strong>, Toral, A., van Noord, G., <strong>Bisazza, A<\/strong>. Are Character-level Translations Worth the Wait? Comparing ByT5 and mT5 for Machine Translation. 
In <em>Transactions of the Association for Computational Linguistics<\/em>, 2024, 12: 392\u2013410.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/arxiv.org\/abs\/2405.00208\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Ferrando, J., <strong>Sarti, G.<\/strong>, <strong>Bisazza, A.<\/strong>, Costa-jussa, M. A Primer on the Inner Workings of Transformer-based Language Models, 2024. [Preprint]<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/escholarship.org\/uc\/item\/1fp7m6nf#author\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Fresen, A., Choenni, R. M. V. 
K., Heilbron, M., <strong>Zuidema, W. H.<\/strong>, &amp; de Heer Kloots, M. L. S. Language Models That Accurately Represent Syntactic Structure Exhibit Higher Representational Similarity To Brain Activity. In <em>Proceedings of the 46th Annual Conference of the Cognitive Science Society<\/em>. 2024 (Vol. 46), pp. 675-683.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.conll-babylm.23\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Haga, A., Fukatsu, A., Oba, M., <strong>Bisazza, A<\/strong>., and Oseki, Y. BabyLM Challenge: Exploring the effect of variation sets on language model training efficiency. In <em>The 2nd BabyLM Challenge at the 28th Conference on Computational Natural Language Learning<\/em>. ACL 2024, pp. 
252\u2013261.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-archive.org\/interspeech_2024\/deheerkloots24_interspeech.html#\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Heer Kloots, M. de, <strong>Zuidema, W.<\/strong> Human-like Linguistic Biases in Neural Speech Models: Phonetic Categorization and Phonotactic Constraints in Wav2Vec2.0. <em>Interspeech<\/em>, <em>2024<\/em>, pp. 4593-4597.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.findings-acl.877\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Jumelet, J., <strong>Zuidema, W.<\/strong>, Sinclair, A. Do Language Models Exhibit Human-like Structural Priming Effects? 
In <em>Findings of the Association for Computational Linguistics: ACL 2024<\/em>. 2024, pp. 14727\u201314742.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.lrec-main.1397\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Kamp, J<\/strong>., Beinborn, L.,  <strong>Fokkens, A<\/strong>. The Role of Syntactic Span Preferences in Post-Hoc Explanation Disagreement. In <em>Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)<\/em>. 2024, pp. 
16066-16078.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.findings-naacl.296\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Langedijk, A., <strong>Mohebbi, H.<\/strong>,<strong> Sarti, G.<\/strong>, <strong>Zuidema, W.<\/strong>, Jumelet, J. DecoderLens: Layerwise Interpretation of Encoder-Decoder Transformers. 
In <em>Findings of the Association for Computational Linguistics: NAACL 2024<\/em>, 2024, pp. 4764\u20134780.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.conll-1.19.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Lian, Y., Verhoef, T., <strong>Bisazza, A<\/strong>. NeLLCom-X: A Comprehensive Neural-Agent Framework to Simulate Language Learning and Group Communication. In <em>Proceedings of the 28th Conference on Computational Natural Language Learning (CoNLL)<\/em>. 2024, pp. 
243\u2013258.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.eacl-tutorials.4\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Mohebbi, H<\/strong>., Jumelet, J., Hanna, M., <strong>Alishahi, A.<\/strong>,<strong> Zuidema, W<\/strong>. Transformer-specific Interpretability. In <em>Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics: Tutorial Abstracts<\/em>. 2024, pp. 21-26.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.cl-4.11\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Pouw, C<\/strong>., Kloots, M. D. 
H., <strong>Alishahi, A., Zuidema, W.<\/strong> Perception of phonological assimilation by neural speech recognition models. <em>Computational Linguistics<\/em>, <em>50<\/em>(4). 2024, pp. 1557-1585.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.emnlp-main.347.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Qi, J.,&nbsp;<strong>Sarti, G.<\/strong>,&nbsp;Fern\u00e1ndez, R.,&nbsp;<strong>Bisazza, A<\/strong>. Model Internals-based Answer Attribution for Trustworthy Retrieval-Augmented Generation. In&nbsp;<em>Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP)<\/em>, 2024, pp. 
6037-6053.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/openreview.net\/forum?id=XTHfNGI3zT\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Sarti, G<\/strong>., <strong>Chrupa\u0142a, G.<\/strong>, Nissim, M., <strong>Bisazza, A<\/strong>. Quantifying the Plausibility of Context Reliance in Neural Machine Translation. In <em>Proceedings of the Twelfth International Conference on Learning Representations<\/em>. 2024, pp. 1-29.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/ceur-ws.org\/Vol-3793\/paper_37.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Sarti<\/strong>, <strong>G.<\/strong>, Feldhus, N., Qi, J., Nissim, M., <strong>Bisazza, A<\/strong>. 
Democratizing Advanced Attribution Analyses of Generative Language Models with the Inseq Toolkit. In <em>Joint Proceedings of the Late-breaking Work, Demos and Doctoral Consortium of the 2nd World Conference on eXplainable Artificial Intelligence (xAI 2024)<\/em>. 2024, pp. 289-296.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.blackboxnlp-1.34\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Scalena, D., <strong>Sarti<\/strong>, <strong>G.<\/strong>, Nissim, M. Multi-property Steering of Large Language Models with Dynamic Activation Composition. In <em>Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP<\/em>, 2024, pp. 
577\u2013603.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.naacl-long.239\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Shen, G.<\/strong>, Watkins, M.,&nbsp;<strong>Alishahi, A<\/strong>.,&nbsp;<strong>Bisazza, A.<\/strong>,&nbsp;<strong>Chrupa\u0142a, G<\/strong>. Encoding of Lexical Tone in Self-supervised Models of Spoken Language. In&nbsp;<em>Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)<\/em>, 2024, pp. 
4250\u20134261.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/ismir2024program.ismir.net\/poster_119.html#paper\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>V\u00e9lez V\u00e1squez, M.A<\/strong>., <strong>Pouw, C.<\/strong>, <strong>Burgoyne, J.A<\/strong>., <strong>Zuidema, W<\/strong>. Exploring the inner mechanisms of large generative music models. In <em>Proceedings of the 25th International Conference of the Society for Music Information Retrieval. <\/em>San Francisco, CA. 2024, pp. 1-8. 
<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.1080\/0144929X.2024.2420871\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Voorveld, H., Panteli, A., Schirris, Y., Ischen, C., Kanoulas, E.,  <strong>Lentz, T. <\/strong> Examining the persuasiveness of text and voice agents: prosody aligned with information structure increases human-likeness, perceived personalisation and brand attitude. <em>Behaviour &amp; Information Technology<\/em>. 2024, pp. 1-16.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.jair.org\/index.php\/jair\/article\/view\/15195\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Wal, O. 
van der, Bachmann, D., Leidinger, A., van Maanen, L., <strong>Zuidema, W.<\/strong>, Schulz, K. Undesirable Biases in NLP: Addressing Challenges of Measurement. <em>Journal of Artificial Intelligence Research<\/em>, 2024, <em>79<\/em>, pp. 1-40.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2024.lrec-main.516\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Zhang, Y., Verhoef, T., van Noord, G., <strong>Bisazza, A<\/strong>. Endowing Neural Language Learners with Human-like Biases: A Case Study on Dependency Length Minimization. In <em>Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation<\/em>, 2024, pp. 
5819\u20135832.<\/p>\n\n\n\n<p class=\"has-neve-link-hover-color-color has-text-color has-link-color has-large-font-size wp-elements-9b8588c13bcb24e34ada4d0efc45f556\"><strong>2023<\/strong><\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/9781119678816.iehc0636\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Beeksma,<strong>&nbsp;<\/strong>M,<strong>&nbsp;Hendrickx, I.,<\/strong>&nbsp;Das, E. The International Encyclopedia of Health Communication, Chapter Text Mining. 
Wiley-Blackwell, 2023.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.1038\/d41586-023-03266-1\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Bockting, C.L., Dis, E.A.M. van, Rooij, R. van,&nbsp;<strong>Zuidema, W.<\/strong>, Bollen, J. Living guidelines for generative AI &#8211; why scientists must oversee its use. <em>Nature<\/em>. 2023; 622 (7984), 693-696.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-speech.org\/archive\/pdfs\/interspeech_2023\/tenbosch23_interspeech.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Bosch, L. ten, Bentum, M.<\/strong>, Boves, L. Phonemic competition in end-to-end ASR models. In <em>Proceedings of Interspeech 2023<\/em>, pp. 
586-590.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.5281\/zenodo.10265412 \"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Burgoyne, J. A.,<\/strong> Spijkervet, J., Baker, D.J. Measuring the Eurovision Song Contest: A living dataset for real-world MIR. In <em>Proceedings of the 24th International Society for Music Information Retrieval Conference<\/em>, 2023, pp. 817\u201323.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.blackboxnlp-1.29\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Chintam, A., Beloch, R.,&nbsp;<strong>Zuidema, W<\/strong>., Hanna, M., Wal, O. 
van der. Identifying and Adapting Transformer-Components Responsible for Gender Bias in an English Language Model. In <em>Proceedings of the 6th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP<\/em>, 2023, pp. 379\u2013394.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.findings-acl.495\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Chrupa\u0142a, G.<\/strong>&nbsp;Putting Natural in Natural Language Processing. 
In <em>Findings of the Association for Computational Linguistics: ACL 2023<\/em>, 2023, pp. 7820-7827.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/journals.sagepub.com\/doi\/10.1177\/25152459231162567\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Coretta, S., Casillas, J.V., Roessig, S.,&nbsp;<strong>Lentz, T. O.<\/strong>,&nbsp;et al. Multidimensional Signals and Analytic Flexibility: Estimating Degrees of Freedom in Human-Speech Analyses. <em>Advances in Methods and Practices in Psychological Science<\/em>. 2023; 6 (3).<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/idl.iscram.org\/files\/cruz\/2023\/2541_Cruz_etal2023.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Cruz, J.A. dela, Hendrickx, I.,<\/strong>&nbsp;Larson, M. 
Towards XAI for Information Extraction on Online Media Data for Disaster Risk Management. In <em>Proceedings of the 20th International ISCRAM Conference<\/em>, 2023, pp. 478\u2013486.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/ceur-ws.org\/Vol-3583\/paper29.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Cruz, J.A. dela, Hendrickx, I.,<\/strong>&nbsp;Larson, M.&nbsp;Understanding Fine-tuned BERT Models for Flood Location Extraction on Twitter Data.&nbsp;In&nbsp;<em>Proceedings of MediaEval\u201922: Multimedia Evaluation Workshop<\/em>, 2023.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00288-7\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Dis, E. 
A. van, Bollen, J.,&nbsp;<strong>Zuidema, W.<\/strong>, Rooij, R. van, Bockting, C. L. (2023).&nbsp;ChatGPT: five priorities for research.&nbsp;<em>Nature<\/em>,&nbsp;614 (7947), 224-226.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.findings-emnlp.237\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Hoeken, S., Ala\u00e7am, \u00d6.,&nbsp;<strong>Fokkens, A.<\/strong>, Sommerauer, P. 2023. Methodological Insights in Detecting Subtle Semantic Shifts with Contextualized and Static Language Models. In&nbsp;<em>Findings of the Association for Computational Linguistics: EMNLP 2023<\/em>, pp. 
3662-3675.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.iwcs-1.7\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Huynh, M. H.,<strong>&nbsp;Lentz, T<\/strong>., Miltenburg, E. van. Implicit causality in GPT-2: a case study. In:&nbsp;<em>Proceedings of the 15th International Conference on Computational Semantics<\/em>, 2023, pp. 67-77.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.findings-acl.554\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Jumelet, J., <strong>Zuidema, W<\/strong>. 2023. Feature Interactions Reveal Linguistic Structure in Language Models. 
In <em>Findings of the Association for Computational Linguistics: ACL 2023<\/em>, pp. 8697\u20138712.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/tacl_a_00587\/117219\/Communication-Drives-the-Emergence-of-Language\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Lian, Y.,<strong> Bisazza, A<\/strong>., Verhoef, T. 
Communication Drives the Emergence of Language Universals in Neural Agents: Evidence from the Word-order\/Case-marking Trade-off.&nbsp;<em>Transactions of the Association for Computational Linguistics<\/em>, 2023, 11: 1033\u20131047.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.emnlp-main.379\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Kamp, J.,&nbsp;&nbsp;Beinborn, L., Fokkens, A.<\/strong>&nbsp;Dynamic Top-k Estimation Consolidates Disagreement between Feature Attribution Methods.&nbsp;In&nbsp;<em>Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing<\/em>, 2023, pp.&nbsp;6190-6197.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.emnlp-main.513\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, 
(max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Mohebbi, H.<\/strong>,&nbsp;<strong>Chrupa\u0142a, G.<\/strong>,&nbsp;<strong>Zuidema, W.<\/strong>,&nbsp;<strong>Alishahi, A<\/strong>. 2023.&nbsp;Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers. In&nbsp;<em>Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing<\/em>, pp. 8249\u20138260.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.eacl-main.245\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Mohebbi, H.<\/strong>,&nbsp;<strong>Zuidema, W.<\/strong>,&nbsp;<strong>Chrupa\u0142a, G.,<\/strong>&nbsp;<strong>Alishahi, A<\/strong>. Quantifying Context Mixing in Transformers. In <em>Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics<\/em>, 2023, pp. 
3378\u20133400.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.findings-eacl.49\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Pouw, C.<\/strong>, Hollenstein, N.,&nbsp;<strong>Beinborn, L.<\/strong>&nbsp;(2023). Cross-Lingual Transfer of Cognitive Processing Complexity. In <em>Findings of the Association for Computational Linguistics: EACL 2023<\/em>, pp. 655\u2013669.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.acl-demo.40\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Sarti, G.<\/strong>, Feldhus, N., Sickert, L., <strong>Wal<\/strong>, <strong>O. 
van der<\/strong>.&nbsp;Inseq: An Interpretability Toolkit for Sequence Generation Models.&nbsp;In&nbsp;<em>Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)<\/em>, 2023, pp. 421\u2013435.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-archive.org\/interspeech_2023\/shen23_interspeech.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Shen, G<\/strong>.,&nbsp;<strong>Alishahi, A.<\/strong>,&nbsp;<strong>Bisazza, A<\/strong>.,<strong> Chrupa\u0142a, G.&nbsp;<\/strong>(2023). Wave to Syntax: Probing spoken language models for syntax. In <em>Proceedings of Interspeech 2023<\/em>, pp. 
1259-1263.&nbsp;&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.5281\/zenodo.10265390 \"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>V\u00e9lez V\u00e1squez, M.,<\/strong> Baelemans, M., Driedger, J.<strong>, Zuidema, W., Burgoyne, J.A.<\/strong>&nbsp;Quantifying the ease of playing song chords on the guitar. In&nbsp;<em>Proceedings of the 24th International Society for Music Information Retrieval Conference<\/em>, 2023, pp. 725\u201332.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2023.nodalida-1.11\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Yang, X., Chen, J., Eerden, A. 
van, Mozib Samin, A., <strong>Bisazza, A<\/strong>.&nbsp;Slaapte or Sliep? Extending Neural-Network Simulations of English Past Tense Learning to Dutch and German.&nbsp;In&nbsp;<em>Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)<\/em>, 2023, pp. 92\u2013102.&nbsp;<\/p>\n\n\n\n<p class=\"has-text-color has-link-color has-large-font-size wp-elements-cf421d80fc151bf483a94b16469b6b98\" style=\"color:#504bd6\"><strong>2022<\/strong><\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2022.ltedi-1.3.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Adams, J., Poelmans, K.,&nbsp;<strong>Hendrickx, I.<\/strong>, Larson, M. Doing not Being: Concrete Language as a Bridge from Language Technology to Ethnically Inclusive Job Ads. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, 2022, pp. 
19\u201325.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2022.lrec-1.107\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Bentum, M.<\/strong>,&nbsp;<strong>Bosch, L. ten<\/strong>, Heuvel, H. van den, Wills, S., Niet, D. van der, Dijkstra, J., and Velde, H. van de.&nbsp;A Speech Recognizer for Frisian\/Dutch Council Meetings. 
In <em>Proceedings of the Thirteenth Language Resources and Evaluation Conference<\/em>, 2022, pp. 1009\u20131015.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.1613\/jair.1.12967\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:29px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Chrupa\u0142a, G.<\/strong>&nbsp;Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques.&nbsp;<em>Journal of Artificial Intelligence Research<\/em>,&nbsp;73, 2022, pp. 673\u2013707.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2022.emnlp-main.575\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:29px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Haghighatkhah, P.,&nbsp;<strong>Fokkens, A.<\/strong>, Sommerauer, P., Speckmann, B., Verbeek, K. Better Hit the Nail on the Head than Beat around the Bush: Removing Protected Attributes with a Single Projection. In&nbsp;<em>Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing<\/em>, 2022, pp. 8395\u20138416.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2022.argmining-1.5\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Kamp, J.<\/strong>,&nbsp;<strong>Beinborn, L<\/strong>.,&nbsp;<strong>Fokkens, A.<\/strong>&nbsp;(2022).&nbsp;Perturbations and Subpopulations for Testing Robustness in Token-Based Argument Unit Recognition. 
In&nbsp;<em>Proceedings of the 9th Workshop on Argument Mining<\/em>, 2022, pp. 62\u201373.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/tijdvoortijdschriften.nl\/losse-nummers\/tekstblad-nr-6-van-2022\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Lentz, T<\/strong>.&nbsp;Waarom virtuele assistenten onge\u00efnteresseerd kunnen overkomen: De uitdaging van prosodie. <em>Tekstblad<\/em>,&nbsp;<em>28<\/em>(5\/6), 2022, p. 20.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.4000\/ijcol.965\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:26px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Miaschi, A.,&nbsp;<strong>Sarti<\/strong>,&nbsp;<strong>G.<\/strong>, Brunato, D., Dell\u2019Orletta, F., Venturi, G. 
Probing Linguistic Knowledge in Italian Neural Language Models across Language Varieties. <em>Italian Journal of Computational Linguistics<\/em>, 8(1), 2022, pp. 25\u201344.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.18653\/v1\/2022.acl-long.1\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Modarressi, A.,&nbsp;<strong>Mohebbi, H.<\/strong>, Pilehvar, M. T. AdapLeR: Speeding up Inference by Adaptive Length Reduction. 
In <em>Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)<\/em>, 2022, pp. 1\u201315.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.1162\/tacl_a_00498\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:26px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Nikolaus, M.,&nbsp;<strong>Alishahi, A., Chrupa\u0142a, G.<\/strong>&nbsp;Learning English with Peppa Pig.&nbsp;<em>Transactions of the Association for Computational Linguistics<\/em>, 10, 2022, pp. 922\u2013936.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2022.emnlp-main.532\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:23px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Sarti, G., Bisazza, A.<\/strong>,&nbsp;Guerberof Arenas, A., Toral, A. DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages. 
In <em>Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing<\/em>, 2022, pp. 7795\u20137816.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/aclanthology.org\/2022.eamt-1.46\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:23px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Sarti, G., Bisazza, A.<\/strong>&nbsp;InDeep \u00d7 NMT: Empowering Human Translators via Interpretable Neural Machine Translation. In <em>Proceedings of the 23rd Annual Conference of the European Association for Machine Translation<\/em>, 2022.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/academic.oup.com\/edited-volume\/41992\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:23px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Shanahan, D.,<strong>&nbsp;Burgoyne, J.A.,<\/strong>&nbsp;Quinn, I. (eds.). 2022. <em>The Oxford Handbook of Music and Corpus Studies<\/em>. 
New York: Oxford University Press.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/doi.org\/10.1162\/tacl_a_00504\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:25px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Sinclair, A., Jumelet, J.,&nbsp;<strong>Zuidema, W.<\/strong>, Fern\u00e1ndez, R. Structural persistence in language models: Priming as a window into abstract language representations.&nbsp;<em>Transactions of the Association for Computational Linguistics<\/em>,&nbsp;10,&nbsp;2022, 1031-1050.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.clinjournal.org\/clinj\/article\/view\/143\/151\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:25px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Vanmassenhove, E., De Sisto, M., Alhama, R. G.,&nbsp;<strong>Lentz, T. 
O<\/strong>., Engelen, J.,  Shterionov, D. (2022).&nbsp;Preface.&nbsp;<em>Computational Linguistics in the Netherlands Journal<\/em>,&nbsp;12, 3\u20135.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/ismir2022program.ismir.net\/poster_109.html\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:26px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>V\u00e9lez V\u00e1squez<\/strong>,&nbsp;<strong>M.A.,&nbsp;Burgoyne, J.A.<\/strong>&nbsp;Tailed U-Net: Multi-Scale Music Representation Learning. Proceedings ISMIR. 
2022.<\/p>\n\n\n\n<p class=\"has-text-color has-link-color has-large-font-size wp-elements-27c3af433abc0d12a77888ebafbd13db\" style=\"color:#0888a2\"><strong>2021<\/strong><\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/arxiv.org\/abs\/2107.06546\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Alishahi, A.,&nbsp;Chrupa\u0142a, G<\/strong>., Cristi\u00e0, A., Dupoux, E.,&nbsp;<strong>Higy, B.<\/strong>, Lavechin, M., R\u00e4s\u00e4nen, O., Yu, C. 
ZR-2021VG: Zero-Resource Speech Challenge, Visually-Grounded Language Modelling track, 2021 edition.&nbsp;<em>CoRR<\/em>, abs\/2107.06546.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/tacl_a_00424\/108198\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Bisazza, A.<\/strong>,&nbsp;\u00dcst\u00fcn, A., Sportel, S. On the Difficulty of Translating Free-Order Case-Marking Languages. <em>Transactions of the Association for Computational Linguistics<\/em>, 9, 2021, pp. 1233\u20131248.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/www.isca-speech.org\/archive\/pdfs\/interspeech_2021\/bosch21_interspeech.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Bosch, L. ten<\/strong>, Boves, L. 
Word Competition: An Entropy-Based Approach in the DIANA Model of Human Word Comprehension. Proceedings of Interspeech 2021, pp. 531-535.&nbsp;<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/pure.uva.nl\/ws\/files\/67945373\/SysMus21_Catafolk.pdf\" rel=\"https:\/\/pure.uva.nl\/ws\/files\/67945373\/SysMus21_Catafolk.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-2.png\" alt=\"\" class=\"wp-image-240\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-2.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-2-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-2-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Cornelissen, B.,&nbsp;<strong>Zuidema, W.<\/strong>,<strong>&nbsp;Burgoyne, J. A<\/strong>. Catafolk: Cataloguing Folk Music Datasets for Comparative Musicology. 
International Conference of Students of Systematic Musicology.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/archives.ismir.net\/ismir2021\/paper\/000016.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:26px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Cornelissen, B.,&nbsp;<strong>Zuidema, W.<\/strong>,&nbsp;<strong>Burgoyne, J.A<\/strong>. \u201cCosine Contours: A Multipurpose Representation for Melodies\u201d, in&nbsp;Proceedings of the 22nd International Society for Music Information Retrieval Conference, Online, Nov 7-12, 2021.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\" https:\/\/conferences.iftawm.org\/wp-content\/uploads\/2021\/06\/Cornelissen_Abstract.pdf\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:26px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"> Cornelissen, 
B.,&nbsp;<strong>Zuidema, W., Burgoyne, J. A<\/strong>. Musical Modes as Statistical Modes: Classifying Modi in Gregorian Chant. In <em>Proceedings of the 6th International Conference on Analytical Approaches to World Music<\/em>, 2021.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\" https:\/\/repository.ubn.ru.nl\/handle\/2066\/235508\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:26px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Hendrickx, I.H.E<\/strong>., Basar, M.E., Caro, L. de, Kunneman, F., Musi, E., Rapp, A. Towards a new generation of personalized intelligent conversational agents. In <em>Proceedings of the 29th ACM Conference on User Modeling, Adaptation and Personalization<\/em>, 2021, pp. 
373-374.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\" https:\/\/aclanthology.org\/2021.blackboxnlp-1.11\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:28px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Higy, B<\/strong>., Gelderloos, L.,&nbsp;<strong>Alishahi<\/strong>,&nbsp;<strong>A.<\/strong>,&nbsp;<strong>Chrupa\u0142a, G.<\/strong>&nbsp;Discrete Representations in Neural Models of Spoken Language. 
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, 2021.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/psyarxiv.com\/dg5mw\/\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Lentz, T.<\/strong>, Nixon, J.S., Rij, J. van.&nbsp;Temporal response modelling uncovers electrophysiological correlates of trial-by-trial error-driven learning.&nbsp;PsyArXiv, 14 Oct. 2021.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/research.vu.nl\/en\/publications\/no-nlp-task-should-be-an-island-multi-disciplinarity-for-diversit\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:27px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Reuver, M. E.,&nbsp;<strong>Fokkens, A<\/strong>., Verberne, S. 
No NLP Task Should be an Island: Multi-disciplinarity for Diversity in News Recommender Systems. Proceedings of the EACL 2021 Hackashop on News Media Content Analysis and Automated Report Generation. Toivonen, H., Boggia, M. (eds.). Association for Computational Linguistics, pp. 45\u201355.<\/p>\n\n\n<div class=\"wp-block-image is-resized\">\n<figure class=\"alignleft size-full\"><a href=\"https:\/\/link.springer.com\/article\/10.3758\/s13428-020-01449-6\"><img loading=\"lazy\" decoding=\"async\" width=\"512\" height=\"512\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png\" alt=\"\" class=\"wp-image-244\" style=\"width:25px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3.png 512w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-300x300.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/61044-3-150x150.png 150w\" sizes=\"auto, (max-width: 512px) 100vw, 512px\" \/><\/a><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Rodd, J., Decuyper, C.H., Bosker, H.R.,&nbsp;<strong>Bosch, L.F.M. ten<\/strong>.&nbsp;A tool for efficient and accurate segmentation of speech data: Announcing POnSS. Behavior Research Methods, 53, 2, (2021), pp. 744-756.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>2025 Basar, E., Sun, X., Hendrickx, I., Wit, J. de, Bosse, T., Bruijn, G. de, Bosch, J.A., Krahmer, E. How well can large language models reflect? a human evaluation of LLM-generated reflections for motivational interviewing dialogues, In Proceedings of the 31st International Conference on Computational Linguistics. 2025, pp. 1964\u20131982. 
Bentum, M., ten Bosch, L., Lentz,&hellip;&nbsp;<a href=\"https:\/\/projects.illc.uva.nl\/indeep\/publications\/\" rel=\"bookmark\">Read More &raquo;<span class=\"screen-reader-text\">Publications<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"class_list":["post-229","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/229","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/comments?post=229"}],"version-history":[{"count":121,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/229\/revisions"}],"predecessor-version":[{"id":1109,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/229\/revisions\/1109"}],"wp:attachment":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/media?parent=229"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}