Speaker: Raffaella Bernardi (University of Trento)
Date and Time: Thursday, February 6th 2025, 16:30-18:00
Venue: on location, to be announced
Title: The interplay between language and reasoning
Abstract. Large Language Models (LLMs), and ChatGPT in particular, have recently captured the attention of the research community and the media. Now that these models have reached high language proficiency, attention has been shifting toward their reasoning capabilities. ChatGPT has been shown to carry out simple deductive reasoning steps when provided with a series of facts from which it is asked to draw inferences. In this talk, I will argue for the need for models whose language generation is driven by an implicit reasoning process and a communication goal. To support this claim, I will present two papers recently produced within my group: one evaluates LLMs’ formal reasoning skills, and the other focuses on LLMs’ information-seeking strategies; to this end, we take syllogisms and the 20-Questions game as test beds. Both tasks have been used extensively in the cognitive sciences to study human reasoning, so they provide a variety of experiments with which to inspect the interplay between language and reasoning in LLMs.
Leonardo Bertolazzi, Albert Gatt, Raffaella Bernardi: A Systematic Analysis of Large Language Models as Soft Reasoners: The Case of Syllogistic Inferences, EMNLP 2024
Davide Mazzaccara, Alberto Testoni, Raffaella Bernardi: Learning to Ask Informative Questions: Enhancing LLMs with Preference Optimization and Expected Information Gain, EMNLP 2024