The InDeep project brings together pioneering researchers in the interpretability of deep learning models of text, language, speech and music. They collaborate with companies and not-for-profit institutions working with language, speech and music technology to develop applications that help assess the usefulness of alternative interpretability techniques on a range of different tasks.

In “justification” tasks, we look at how interpretability techniques can give users meaningful feedback; examples include legal and medical document text mining and audio search. In “augmentation” tasks, we look at how these techniques facilitate the use of domain knowledge and of models from outside deep learning to further improve model performance; examples include machine translation, music recommendation and writing feedback. In “interaction” tasks, we let users influence the functioning of their automated systems, both by providing interpretable information on how the system operates and by letting human-produced output find its way into the internal states of the learning algorithm; examples include adapting speech recognition to non-standard accents and dialects, interactive music generation, and machine-assisted translation.

Activities

  • Fundamental research on interpretability methods in NLP, speech and music processing
  • Applied research on interpretability, in close collaboration with our partners
  • A public outreach program, involving citizen science projects, lectures, concerts, debates, demos and nights in the museum
  • An industrial outreach program, involving master classes on deep learning and interpretability in NLP, speech and music processing
  • Software packages and online demos

Our partners