Lab Meetings

25 September 2018

No lab meeting. CLS: Ekaterina Shutova, ILLC

2 October 2018

Lab meeting 16:00-17:30 in the PostDoc Meeting Room F2.02. CLS starts at 12:00.

  • Paper (see email, please don’t distribute): Zuidema, French, Alhama et al. (draft). Five ways in which computational models can help advance Artificial Grammar Learning research.
    Abstract. Computational modelling in cognitive science can sometimes appear as an inward-looking field — a domain separated from experimental research, where obscure technical details dominate, hungry for data but seldom giving something back. In this paper, we argue that this is not at all how things need to be. Computational techniques, even simple ones that are straightforward to use, can greatly facilitate designing, implementing and analyzing experiments, and generally help lift research to a new level. We focus on the domain of artificial grammar learning, and give five concrete examples in this domain for: (1) Formalizing and clarifying theories, (2) Generating stimuli, (3) Visualization, (4) Model selection, and (5) Automatically generating alternative hypotheses.

9 October 2018 (?)

Lab meeting 16:00-17:30 in the PostDoc Meeting Room F2.02

  • Presentation 1: Jaap Jumelet. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items
    Abstract. In this paper, we attempt to link the inner workings of a neural language model to linguistic theory, focusing on a complex phenomenon well discussed in formal linguistics: (negative) polarity items. We briefly discuss the leading hypotheses about the licensing contexts that allow negative polarity items and evaluate to what extent a neural language model has the ability to correctly process a subset of such constructions. We show that the model finds a relation between the licensing context and the negative polarity item and appears to be aware of the scope of this context, which we extract from a parse tree of the sentence. With this research, we hope to pave the way for other studies linking formal linguistics to deep learning.
  • Presentation 2: Mario Giulianelli, Jack Harding & Florian Mohnert. Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information
    Abstract. How do neural language models keep track of number agreement between subject and verb? We show that ‘diagnostic classifiers’, trained to predict number from the internal states of a language model, provide a detailed understanding of how, when, and where this information is represented. Moreover, they give us insight into when and where number information is corrupted in cases where the language model ends up making agreement errors. To demonstrate the causal role played by the representations we find, we then use agreement information to influence the course of the LSTM during the processing of difficult sentences. Results from such an intervention reveal a large increase in the language model’s accuracy. Together, these results show that diagnostic classifiers give us an unrivalled detailed look into the representation of linguistic information in neural models, and demonstrate that this knowledge can be used to improve their performance.

16 October 2018

Note: CLS (Mehrnoosh Sadrzadeh, Queen Mary University of London, UK) at 12:00.

23 October 2018

Lab meeting 16:00-17:30 in the PostDoc Meeting Room F2.02

Agenda TBD

30 October 2018

No lab meeting. CLS: Andreas Vlachos, University of Cambridge, UK

?? November 2018

No lab meeting. CLS: Vlad Niculae, Instituto de Telecomunicações, Lisbon, Portugal

27 November 2018

No lab meeting. CLS: Antoine Bordes, Facebook AI Research

11 December 2018

No lab meeting. CLS: Lisa Beinborn, ILLC

Past meetings

11 September 2018

  • Coordinates: 16:00-17:30 in the PostDoc Meeting Room F2.02
  • Paper: Hale, Dyer, Kuncoro & Brennan (2018). Finding Syntax in Human Encephalography with Beam Search
    Abstract. Recurrent neural network grammars (RNNGs) are generative models of (tree,string) pairs that rely on neural networks to evaluate derivational choices. Parsing with them using beam search yields a variety of incremental complexity metrics such as word surprisal and parser action count. When used as regressors against human electrophysiological responses to naturalistic text, they derive two amplitude effects: an early peak and a P600-like later peak. By contrast, a non-syntactic neural language model yields no reliable effects. Model comparisons attribute the early peak to syntactic composition within the RNNG. This pattern of results recommends the RNNG+beam search combination as a mechanistic model of the syntactic processing that occurs during normal human language comprehension.