{"id":489,"date":"2024-01-04T14:17:59","date_gmt":"2024-01-04T14:17:59","guid":{"rendered":"https:\/\/projects.illc.uva.nl\/indeep\/?page_id=489"},"modified":"2026-01-05T13:53:21","modified_gmt":"2026-01-05T13:53:21","slug":"news","status":"publish","type":"page","link":"https:\/\/projects.illc.uva.nl\/indeep\/news\/","title":{"rendered":"News and Newsletter"},"content":{"rendered":"\n<p class=\"has-medium-font-size\"><a href=\"https:\/\/mailchi.mp\/a592373c1a61\/signup-indeep-newsletter\" data-type=\"link\" data-id=\"https:\/\/mailchi.mp\/a592373c1a61\/signup-indeep-newsletter\">Join our <em>InDeep Newsletter<\/em><\/a> where we will periodically give you an update of current news, articles and upcoming events related to the project.<strong> <\/strong>Read our most recent newsletter <a href=\"https:\/\/mailchi.mp\/11650c44bb82\/indeep-newsletter-50714?e=%5BUNIQID%5D\">her<\/a><a href=\"https:\/\/mailchi.mp\/c8be224a87a4\/indeep-newsletter\">e<\/a>.<\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_c12f63-6d .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_c12f63-6d .kt-divider{border-top-width:3px;height:1px;border-top-color:var(--nv-secondary-accent);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_c12f63-6d\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"692\" height=\"510\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/IMG_5488-3-1.png\" alt=\"\" class=\"wp-image-1161\" style=\"width:464px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/IMG_5488-3-1.png 692w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/IMG_5488-3-1-300x221.png 300w\" sizes=\"auto, (max-width: 692px) 100vw, 692px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Our InDeep PhD candidate\u00a0<strong>Gabriele Sarti\u00a0<\/strong>successfully defended his dissertation,\u00a0<em>From Insights to Impact: Actionable Interpretability for Neural Machine Translation<\/em>, on 11 December 2025 at the\u00a0University of Groningen. Gabriele is the\u00a0first PhD candidate within InDeep to complete and defend his doctorate, and he did so with outstanding expertise and enthusiasm, earning the distinction of\u00a0<strong>cum laude<\/strong>. We warmly congratulate <strong>Dr. Gabriele Sarti <\/strong>and wish him every success in his future career. <br><span style=\"caret-color: rgb(0, 0, 0); font-family: var(--bodyfontfamily);\">You can read his dissertation <\/span><a style=\"caret-color: rgb(0, 0, 0); font-family: var(--bodyfontfamily);\" href=\"https:\/\/research.rug.nl\/en\/publications\/from-insights-to-impact-actionable-interpretability-for-neural-ma\/\">here<\/a><span style=\"caret-color: rgb(0, 0, 0); font-family: var(--bodyfontfamily);\">.<\/span><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"764\" height=\"238\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/IMG_4374_2-1.png\" alt=\"\" class=\"wp-image-1160\" style=\"width:1018px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/IMG_4374_2-1.png 764w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/IMG_4374_2-1-300x93.png 300w\" sizes=\"auto, (max-width: 764px) 100vw, 764px\" \/><\/figure><\/div>\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_dc9751-57 .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_dc9751-57 .kt-divider{border-top-width:3px;height:1px;border-top-color:var(--nv-secondary-accent);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_dc9751-57\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n\n<p class=\"has-text-align-center has-large-font-size\"><strong>InDeep\u2019s Fourth Year<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"697\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/2fd12eed-694f-1110-11c7-c1e43553400d-1024x697.jpg\" alt=\"\" class=\"wp-image-1130\" style=\"width:280px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/2fd12eed-694f-1110-11c7-c1e43553400d-1024x697.jpg 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/2fd12eed-694f-1110-11c7-c1e43553400d-300x204.jpg 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/2fd12eed-694f-1110-11c7-c1e43553400d-768x523.jpg 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/2fd12eed-694f-1110-11c7-c1e43553400d-1536x1045.jpg 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/2fd12eed-694f-1110-11c7-c1e43553400d-2048x1394.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">In September 2025 Indeep turned 4! The fourth year has been fantastically productive. Unfortunately, however, reaching our 4th birthday also means that the two first InDeep PhD students \u2013&nbsp;<strong>Gabriele Sarti<\/strong>&nbsp;and&nbsp;<strong>Hosein Mohebbi<\/strong>&nbsp;\u2013 have reached the end of their contracts. Gabriele will defend his&nbsp;<strong>PhD thesis<\/strong>&nbsp;on December 11th, 2025 at the Academiegebouw in Groningen. Hosein\u2019s defense still needs to be scheduled, but is coming up as well.<\/p>\n\n\n\n<p class=\"has-medium-font-size\">One of the highlights of the fourth year has been the tutorial on&nbsp;<strong>Interpretability Techniques for Speech Models<\/strong>&nbsp;that a large team of InDeep researchers prepared and delivered at Interspeech\u201925 in Rotterdam.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"360\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/b13c105a-b316-ebd0-5572-fc8f6377eff7-1024x360.jpg\" alt=\"\" class=\"wp-image-1131\" style=\"width:798px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/b13c105a-b316-ebd0-5572-fc8f6377eff7-1024x360.jpg 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/b13c105a-b316-ebd0-5572-fc8f6377eff7-300x106.jpg 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/b13c105a-b316-ebd0-5572-fc8f6377eff7-768x270.jpg 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/b13c105a-b316-ebd0-5572-fc8f6377eff7-1536x540.jpg 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/b13c105a-b316-ebd0-5572-fc8f6377eff7-2048x721.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">The materials for this tutorial are still available&nbsp;<a href=\"https:\/\/interpretingdl.github.io\/speech-interpretability-tutorial\/index.html\">here<\/a>; they cover probing for phone identity, stress patterns, syllable type and many other aspects of linguistic structure; feature attribution and representational similarity analysis as practical tools to understand better how, e.g., speech recognition systems arrive at their output; and value zeroing and interchange interventions as examples of causal interventions that go beyond just correlational evidence.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"564\" height=\"654\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/64d83ccb-a018-5e32-6454-162b4922cf93.png\" alt=\"\" class=\"wp-image-1133\" style=\"width:305px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/64d83ccb-a018-5e32-6454-162b4922cf93.png 564w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/64d83ccb-a018-5e32-6454-162b4922cf93-259x300.png 259w\" sizes=\"auto, (max-width: 564px) 100vw, 564px\" \/><\/figure><\/div>\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"600\" height=\"400\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/5b427f2c-55e1-c31f-2f10-8841cf8de8ed.png\" alt=\"\" class=\"wp-image-1134\" style=\"width:297px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/5b427f2c-55e1-c31f-2f10-8841cf8de8ed.png 600w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2026\/01\/5b427f2c-55e1-c31f-2f10-8841cf8de8ed-300x200.png 300w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Another highlight of the fourth year is the successful completion and publication of the QE4PE project:&nbsp;<strong>Quality Estimation for Post Editing.<\/strong>&nbsp;This project addresses a major issue for the practical use of machine translation technology, that in 2025 has become very useful but is still far from error free: where do you focus human effort to check and improve translation output? The results from the study are nuanced: even the currently best currently available automatic tools for highlighting words that the translation technology is unsure about, do not necessarily help human translators to become more productive. For obtaining these results, the project developed datasets and tools that we will build on in future projects, and involved 42 professional post-editors across two translation direction. It was published in the field\u2019s prime venue,&nbsp;<a href=\"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/TACL.a.46\/133799\/QE4PE-Word-level-Quality-Estimation-for-Human-Post\">Transactions of the ACL<\/a>.&nbsp;Slator&nbsp;wrote a&nbsp;<a href=\"https:\/\/slator.com\/does-word-level-quality-estimation-really-improve-ai-translation-post-editing\/\" target=\"_blank\" rel=\"noreferrer noopener\">piece<\/a>&nbsp;on the QE4PE findings. &nbsp;<\/p>\n\n\n\n<p class=\"has-medium-font-size\"><strong>Explainability and&nbsp;Interpretability&nbsp;Worldwide<\/strong> <strong>2025<\/strong><\/p>\n\n\n\n<p class=\"has-medium-font-size\">While the InDeep project steadily progresses, developments in AI wordwide have continued at breakneck speed. That includes developments in&nbsp;<strong>Mechanistic Interpretability&nbsp;<\/strong>(MechInterp), that has created much excitement, and all the major frontier labs in (generative) AI now have dedicated MechInterp teams, including at Anthropic, at OpenAI and at Google DeepMind. The mechinterp community has published hundreds of papers in the past year. One result from Anthropic\u2019s team has garnered considerable interest: in&nbsp;<a href=\"https:\/\/transformer-circuits.pub\/2025\/attribution-graphs\/biology.html#dives-poems\">experiments with the Claude Haiku 3.5<\/a>&nbsp;the team showed the model does nontrivial planning ahead when asked to generate poetry that rhymes. The finding involved a popular interpretability techniques, Sparse Autoencoders, and causal interventions. Although it is not of immediate practical use, it has started playing a big role in the scientific understanding of and debates about how far the latest generation of LLMs moves beyond \u201cmere\u201d next word prediction.<br>Another success-story of the interpretability field is a project from colleagues at the University of California (Berkeley and San Francisco) where they used the internal states of a state-of-the-art Speech Model to help patients with brain damage to speak again. The paper on the project, \u201c<strong>A streaming brain-to-voice neuroprosthesis to restore naturalistic communication<\/strong>\u201d was published in&nbsp;<a href=\"https:\/\/www.nature.com\/articles\/s41593-025-01905-6\">Nature&nbsp;<\/a>in March 2025. The paper crowns a series of interpretability papers from the Berkeley group of Amunchipalli, with PhD student Cheol Jun Cho, where they discover using diagnostic probing techniques that the internal states of big Speech Models have learned to represent the movements of the mouth, tongue and vocal tract (\u201carticulatory trajectories\u201d), just from being exposed to hundreds of thousands of hours of spoken language. There is a video showing the neuroprothesis in action with a real patient on&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=MGSoKGGbbXk\">YouTube<\/a>. The work of Cho and Amunchipalli was also featured in InDeep\u2019s&nbsp;<em>Interpretability Techniques for Speech Models<\/em>&nbsp;tutorial discussed above.<\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_e01134-7a .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_e01134-7a .kt-divider{border-top-width:3px;height:1px;border-top-color:var(--nv-secondary-accent);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_e01134-7a\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n\n<p><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"936\" height=\"477\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2025\/12\/together2.2.png\" alt=\"\" class=\"wp-image-1125\" style=\"width:518px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2025\/12\/together2.2.png 936w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2025\/12\/together2.2-300x153.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2025\/12\/together2.2-768x391.png 768w\" sizes=\"auto, (max-width: 936px) 100vw, 936px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">InDeep Principal Investigator&nbsp;<strong>Iris Henrickx&nbsp;<\/strong>presented with master student Daan Brugmans a new speech-based Demo at the&nbsp;<em>Speech Science Festival<\/em>&nbsp;\u2013&nbsp;<em>Interspeech2025<\/em>,&nbsp;August 17, Rotterdam. Our demo had the form of a game where the user talks with the system in Dutch or English and the system tried to guess the emotions of the user on the basis of the speech signal.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter is-resized\"><img decoding=\"async\" src=\"https:\/\/mcusercontent.com\/d66edf9219ab4eed6e95083a4\/images\/977abbf3-ea9d-f2ad-c6b6-5630fbf47ba0.jpg\" alt=\"\" style=\"width:356px;height:auto\"\/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">The demo showcased what the state of the art is in speech technology, and which type of information is still very hard for current systems to make interpretable for users.&nbsp;Many researchers from all over the world took part in the Speech Science Festival and the event attracted a few hundred visitors, with over 50 participants to our InDeep Demo.&nbsp;<\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_2ed589-02 .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_2ed589-02 .kt-divider{border-top-width:3px;height:1px;border-top-color:var(--nv-secondary-accent);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_2ed589-02\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft is-resized has-custom-border\"><img decoding=\"async\" src=\"https:\/\/mcusercontent.com\/d66edf9219ab4eed6e95083a4\/images\/abc0e911-441d-1b62-7d83-ecce203df262.jpg\" alt=\"\" style=\"border-width:3px;border-radius:18px;width:461px;height:auto\"\/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Our own PhD student&nbsp;<strong>Marcel V\u00e9lez V\u00e1squez<\/strong>&nbsp;co-organised the AI Song Contest 2025 at the Melkweg in Amsterdam, with support from&nbsp;<strong>John Ashley Burgoyne<\/strong>&nbsp;collecting and verifying jury and public voting results. After an electrifying award show featuring performances from all ten finalists, GENEALOGY emerged as this year\u2019s winner, claiming both the public vote and the overall award.&nbsp;Visit&nbsp;<a href=\"https:\/\/us22.mailchimp.com\/mctx\/clicks?url=http%3A%2F%2Faisongcontest.com%2F&amp;xid=2f63da9aad&amp;uid=212120574&amp;iid=18a29b7e01&amp;pool=cts&amp;v=2&amp;c=1764751875&amp;h=d89e14fc0642e4b8ec3b5f33f8de1fdc349c28d6a2e98ae2d276d6a46db93a8e\">aisongcontest.com<\/a>&nbsp;to watch the full awards show and to learn more about all of the contest entries.<\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_219f93-0a .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_219f93-0a .kt-divider{border-top-width:3px;height:1px;border-top-color:var(--nv-secondary-accent);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_219f93-0a\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n\n<p class=\"has-medium-font-size\">Our InDeep member&nbsp;<strong>Afra Alishahi<\/strong>&nbsp;has been appointed Professor of Computational Linguistics within the Department of Cognitive Science and Artificial Intelligence of Tilburg School of Humanities and Digital Sciences. Congratulations Afra!&nbsp;Read more&nbsp;<a href=\"https:\/\/www.tilburguniversity.edu\/about\/schools\/tshd\/news\/afra-alishahi-appointed-professor-computational-linguistics\">here<\/a>.&nbsp;&nbsp;<\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_79b73e-36 .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_79b73e-36 .kt-divider{border-top-width:3px;height:1px;border-top-color:var(--nv-secondary-accent);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_79b73e-36\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n\n<p class=\"has-medium-font-size\">\u200bExciting news! InDeep has made a video series that explains and explores how to open the Black Box of Large Language Models. We hope to make more video\u2019s as there\u2019s plenty more in the world of interpretability and circuits beyond this. Go&nbsp;to our <a href=\"https:\/\/projects.illc.uva.nl\/indeep\/indeep-video-series\/\">video series page<\/a> to view them.<\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"389\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/11\/Blackboxtitle2-1024x389.png\" alt=\"\" class=\"wp-image-701\" style=\"width:628px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/11\/Blackboxtitle2-1024x389.png 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/11\/Blackboxtitle2-300x114.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/11\/Blackboxtitle2-768x292.png 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/11\/Blackboxtitle2-1536x584.png 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/11\/Blackboxtitle2-2048x778.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_d80ebb-b9 .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_d80ebb-b9 .kt-divider{border-top-width:3px;height:1px;border-top-color:rgba(47, 90, 174, 0.61);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_d80ebb-b9\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n\n<p class=\"has-medium-font-size\">The paper \u201c<a href=\"https:\/\/www.nature.com\/articles\/d41586-023-00288-7\" target=\"_blank\" rel=\"noreferrer noopener\">ChatGPT: Five priorities for research<\/a>\u201d from InDeep member&nbsp;<strong>Willem Zuidema<\/strong>&nbsp;and colleagues, published in <em>Nature<\/em>, was listed as #15 in the most cited papers in all of AI in 2023, in&nbsp;<a href=\"https:\/\/www.zeta-alpha.com\/post\/analyzing-the-homerun-year-for-llms-the-top-100-most-cited-ai-papers-in-2023-with-all-medals-for-o\">an analysis by Zeta-Alpha<\/a>.<\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_d1cc5a-b1 .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_d1cc5a-b1 .kt-divider{border-top-width:3px;height:1px;border-top-color:rgba(47, 90, 174, 0.61);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_d1cc5a-b1\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized has-custom-border\"><img loading=\"lazy\" decoding=\"async\" width=\"528\" height=\"404\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/02\/VIDI2.png\" alt=\"\" class=\"has-border-color has-nv-dark-bg-border-color wp-image-600\" style=\"border-width:3px;border-radius:18px;width:134px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/02\/VIDI2.png 528w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/02\/VIDI2-300x230.png 300w\" sizes=\"auto, (max-width: 528px) 100vw, 528px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">InDeep principal investigator <strong>Arianna Bisazza<\/strong> was recently awarded a Vidi, a prestigious personal grant funded by NWO. The project is starting in February 2024 and aims to improve language modeling for low-resource languages, taking inspiration from child language acquisition insights. <\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_f3652d-71 .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_f3652d-71 .kt-divider{border-top-width:3px;height:1px;border-top-color:rgba(47, 90, 174, 0.61);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_f3652d-71\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignright size-full is-resized has-custom-border\"><img loading=\"lazy\" decoding=\"async\" width=\"418\" height=\"394\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/02\/Scherm\u00adafbeelding-2024-02-02-om-15.35.34.png\" alt=\"\" class=\"wp-image-590\" style=\"border-width:3px;border-radius:18px;width:133px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/02\/Scherm\u00adafbeelding-2024-02-02-om-15.35.34.png 418w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/02\/Scherm\u00adafbeelding-2024-02-02-om-15.35.34-300x283.png 300w\" sizes=\"auto, (max-width: 418px) 100vw, 418px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\"><strong>Arianna Bisazza<\/strong>, together with Jirui Qi and Raquel Fernandez, was awarded an <strong>Outstanding Paper Award<\/strong> at 2023 EMNLP and a <strong>Best Data Award<\/strong> at the GenBench Workshop 2023 for the paper&nbsp;<a href=\"https:\/\/aclanthology.org\/2023.emnlp-main.658\" target=\"_blank\" rel=\"noreferrer noopener\">Cross-Lingual Consistency of Factual Knowledge in Multilingual Language Models<\/a><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-large is-resized has-custom-border\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"892\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/GA-mWyEW4AA7rQL_2-1024x892.jpg\" alt=\"\" class=\"wp-image-495\" style=\"border-width:3px;border-radius:18px;width:261px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/GA-mWyEW4AA7rQL_2-1024x892.jpg 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/GA-mWyEW4AA7rQL_2-300x261.jpg 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/GA-mWyEW4AA7rQL_2-768x669.jpg 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/GA-mWyEW4AA7rQL_2-1536x1338.jpg 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/GA-mWyEW4AA7rQL_2-2048x1784.jpg 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">And, also at the EMNLP conference, InDeep PhD candidate <strong>Hossein Mohebbi<\/strong> received together with his supervisors&nbsp;Grzegorz Chrupa\u0142a, Jelle Zuidema, and Afra Alishahi the&nbsp;<strong>Outstanding Paper Award<\/strong>&nbsp;for <em>Homophone Disambiguation Reveals Patterns of Context Mixing in Speech Transformers<\/em>.&nbsp;<br>Read the award winning paper&nbsp;<a href=\"https:\/\/aclanthology.org\/2023.emnlp-main.513\/\">here<\/a>.&nbsp;<\/p>\n\n\n\n<p><\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_4dac31-2a .kt-block-spacer{height:79px;}.wp-block-kadence-spacer.kt-block-spacer-489_4dac31-2a .kt-divider{border-top-width:3px;height:1px;border-top-color:rgba(47, 90, 174, 0.61);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_4dac31-2a\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"alignleft size-large is-resized has-custom-border\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"568\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-12-om-10.14.16-1024x568.png\" alt=\"\" class=\"wp-image-514\" style=\"border-width:3px;border-radius:18px;width:323px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-12-om-10.14.16-1024x568.png 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-12-om-10.14.16-300x166.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-12-om-10.14.16-768x426.png 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-12-om-10.14.16.png 1352w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">Following the launch and media storm on <em>ChatGPT<\/em> , several InDeep members worked hard to inform the general public about developments in AI and NLP. <\/p>\n\n\n\n<p class=\"has-medium-font-size\">InDeep members&nbsp;<strong>Arianna Bisazza<\/strong>&nbsp;and&nbsp;<strong>Gabriele Sarti<\/strong>&nbsp;were among the presenters in the event &#8220;An Evening with ChatGPT&#8221; by the GroNLP group, an open event about risks and opportunities of new language technologies. You can watch the presentations of&nbsp;<a href=\"https:\/\/youtu.be\/PgpmbXHMEsI?si=BeICPj9ThA58QM-S&amp;t=2982\">Arianna Bisazza<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/youtu.be\/PgpmbXHMEsI?si=G3u-yGiRQze2p7yg&amp;t=1884\">Gabriele Sarti<\/a>, or go the&nbsp;<a href=\"https:\/\/youtu.be\/PgpmbXHMEsI?si=BeICPj9ThA58QM-S&amp;t=2982\">full event<\/a> and see the <a href=\"https:\/\/gronlp.github.io\/chatgptslides.pdf\">slides<\/a>.<\/p>\n\n\n<div class=\"wp-block-image is-style-rounded\">\n<figure class=\"alignright size-large is-resized has-custom-border\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"532\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-19-om-17.12.29-1024x532.png\" alt=\"\" class=\"wp-image-565\" style=\"border-width:3px;border-radius:18px;width:316px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-19-om-17.12.29-1024x532.png 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-19-om-17.12.29-300x156.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-19-om-17.12.29-768x399.png 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-19-om-17.12.29-1536x798.png 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/Scherm\u00adafbeelding-2024-01-19-om-17.12.29-2048x1064.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n<p class=\"has-medium-font-size\">And project leader&nbsp;<strong>Jelle Zuidema<\/strong>&nbsp;appeared in various Dutch popular media to comment on developments, including the tv program <a href=\"https:\/\/nos.nl\/nieuwsuur\/video\/2456689-chatgpt-is-een-rage-hoe-werkt-het-en-wat-is-de-toekomst-ervan\">Nieuwsuur<\/a>, and national newspapers NRC and Volkskrant, and in \u2018An Afternoon with ChatGPT\u2019 at the University of Amsterdam, modelled after the Groningen initiative.<\/p>\n\n\n<style>.wp-block-kadence-spacer.kt-block-spacer-489_577298-1d .kt-block-spacer{height:60px;}.wp-block-kadence-spacer.kt-block-spacer-489_577298-1d .kt-divider{border-top-width:3px;height:1px;border-top-color:rgba(47, 90, 174, 0.63);width:80%;border-top-style:solid;}<\/style>\n<div class=\"wp-block-kadence-spacer aligncenter kt-block-spacer-489_577298-1d\"><div class=\"kt-block-spacer kt-block-spacer-halign-center\"><hr class=\"kt-divider\"\/><\/div><\/div>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"671\" height=\"487\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/ComputerSayNo-Sticker.png\" alt=\"\" class=\"wp-image-558\" style=\"width:138px;height:auto\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/ComputerSayNo-Sticker.png 671w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2024\/01\/ComputerSayNo-Sticker-300x218.png 300w\" sizes=\"auto, (max-width: 671px) 100vw, 671px\" \/><\/figure><\/div>\n\n\n<p class=\"has-text-align-left has-medium-font-size\"><em>Computer says no, but why?<\/em> NWO published in February 2022 a feature on the InDeep project <a href=\"https:\/\/www.nwo.nl\/en\/cases\/computer-says-no-why\">in their magazine<\/a>, with quotes from Jelle Zuidema, Ashley Burgoyne and Jurjen Wagemakers (Floodtags).&nbsp;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Join our InDeep Newsletter where we will periodically give you an update of current news, articles and upcoming events related to the project. Read our most recent newsletter here. Our InDeep PhD candidate\u00a0Gabriele Sarti\u00a0successfully defended his dissertation,\u00a0From Insights to Impact: Actionable Interpretability for Neural Machine Translation, on 11 December 2025 at the\u00a0University of Groningen. Gabriele&hellip;&nbsp;<a href=\"https:\/\/projects.illc.uva.nl\/indeep\/news\/\" rel=\"bookmark\">Read More &raquo;<span class=\"screen-reader-text\">News and Newsletter<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","footnotes":""},"class_list":["post-489","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/489","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/comments?post=489"}],"version-history":[{"count":63,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/489\/revisions"}],"predecessor-version":[{"id":1165,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/489\/revisions\/1165"}],"wp:attachment":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/media?parent=489"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}