{"id":12,"date":"2023-06-15T14:53:06","date_gmt":"2023-06-15T14:53:06","guid":{"rendered":"https:\/\/projects.illc.uva.nl\/indeep\/?page_id=12"},"modified":"2023-07-21T11:21:16","modified_gmt":"2023-07-21T11:21:16","slug":"indeep-interpreting-deep-learning-models-for-text-and-sound","status":"publish","type":"page","link":"https:\/\/projects.illc.uva.nl\/indeep\/","title":{"rendered":"InDeep                                                                               Interpreting Deep Learning Models                                          for Text and Sound"},"content":{"rendered":"\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"461\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-tekst-alleen2-1-1024x461.png\" alt=\"\" class=\"wp-image-316\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-tekst-alleen2-1-1024x461.png 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-tekst-alleen2-1-300x135.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-tekst-alleen2-1-768x346.png 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-tekst-alleen2-1-1536x692.png 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-tekst-alleen2-1.png 1966w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"168\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-3-1024x168.png\" alt=\"\" class=\"wp-image-318\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-3-1024x168.png 1024w, 
https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-3-300x49.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-3-768x126.png 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-3-1536x252.png 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/Hoofdpagina-Tekst4-3-2048x336.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p class=\"has-medium-font-size\">In the InDeep project, pioneering researchers in the domain of interpretability of deep learning models of text, language, speech and music are brought together. They collaborate with companies and not-for-profit institutions working with language, speech and music technology to develop applications that help assess the usefulness of alternative interpretability techniques on a range of different tasks. In \u201cjustification\u201d tasks, we look at how interpretability techniques help give users meaningful feedback. Examples include legal and medical document text mining and audio search. In \u201caugmentation\u201d tasks, we look at how these techniques facilitate the use of domain knowledge and models from outside deep learning to make the models perform even better. Examples include machine translation, music recommendation and writing feedback. In \u201cinteraction\u201d tasks, we allow users to influence the functioning of their automated systems, both by providing interpretable information on how the system operates and by letting human-produced output find its way into the internal states of the learning algorithm. 
Examples include adapting speech recognition to non-standard accents and dialects, interactive music generation, and machine-assisted translation.<\/p>\n\n\n\n<p class=\"has-medium-font-size\"><em>Activities<\/em><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li class=\"has-medium-font-size\">Fundamental research on interpretability methods in NLP, speech and music processing<\/li>\n\n\n\n<li class=\"has-medium-font-size\">Applied research on interpretability, in close collaboration with the partners<\/li>\n\n\n\n<li class=\"has-medium-font-size\">A public outreach program, involving citizen science projects, lectures, concerts, debates, demos and nights in the museum<\/li>\n\n\n\n<li class=\"has-medium-font-size\">An industrial outreach program, involving master classes on deep learning and interpretability in NLP, speech and music processing<\/li>\n\n\n\n<li class=\"has-medium-font-size\">Software packages and online demos<\/li>\n<\/ul>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-unis-bijelkaar-1-1024x683.png\" alt=\"\" class=\"wp-image-192\" width=\"768\" height=\"512\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-unis-bijelkaar-1-1024x683.png 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-unis-bijelkaar-1-300x200.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-unis-bijelkaar-1-768x512.png 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-unis-bijelkaar-1-1536x1024.png 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-unis-bijelkaar-1-2048x1365.png 2048w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-unis-bijelkaar-1-930x620.png 930w\" sizes=\"auto, (max-width: 768px) 100vw, 768px\" 
\/><\/figure><\/div>\n\n\n<p class=\"has-text-align-center has-medium-font-size\"><strong>Our partners<\/strong><\/p>\n\n\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"544\" src=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-bijelkaar_xl6-1024x544.png\" alt=\"\" class=\"wp-image-358\" srcset=\"https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-bijelkaar_xl6-1024x544.png 1024w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-bijelkaar_xl6-300x159.png 300w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-bijelkaar_xl6-768x408.png 768w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-bijelkaar_xl6-1536x816.png 1536w, https:\/\/projects.illc.uva.nl\/indeep\/wp-content\/uploads\/2023\/07\/logos-bijelkaar_xl6-2048x1088.png 2048w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the InDeep project pioneering researchers in the domain of interpretability of deep learning models of text, language, speech and music are brought together. 
They collaborate with companies and not-for-profit institutions working with language, speech and music technology to develop applications that help assess the usefulness of alternative interpretability techniques on a range of different&hellip;&nbsp;<a href=\"https:\/\/projects.illc.uva.nl\/indeep\/\" rel=\"bookmark\">Read More &raquo;<span class=\"screen-reader-text\">InDeep                                                                               Interpreting Deep Learning Models                                          for Text and Sound<\/span><\/a><\/p>\n","protected":false},"author":2,"featured_media":29,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"neve_meta_sidebar":"default","neve_meta_container":"default","neve_meta_enable_content_width":"","neve_meta_content_width":70,"neve_meta_title_alignment":"left","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"off","neve_meta_disable_footer":"","neve_meta_disable_title":"on","footnotes":""},"class_list":["post-12","page","type-page","status-publish","has-post-thumbnail","hentry"],"_links":{"self":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/12","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/comments?post=12"}],"version-history":[{"count":64,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/12\/revisions"}],"predecessor-version":[{"id":359,"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/pages\/12\/revisions\/359"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/projects.illc.uva.nl\/in
deep\/wp-json\/wp\/v2\/media\/29"}],"wp:attachment":[{"href":"https:\/\/projects.illc.uva.nl\/indeep\/wp-json\/wp\/v2\/media?parent=12"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}