Date: October 31, 2013
Place: Roeterseiland complex, building H, room 0.03 (REC-H 0.03 Symposiumzaal)
11.00 COFFEE and OPENING
11.15 - 12.00 Michael Franke (Amsterdam): On the emergence of compositionality and sequentiality in signaling games
12.00 - 12.45 Paul Vogt (Tilburg): Theoretical limitations of cross-situational word learning
12.45 - 13.45 LUNCH
13.45 - 14.30 Michael Spranger (Paris): Evolutionary Semantics
14.30 - 15.15 Simon Pauw (Amsterdam, Paris): Grounding quantifiers in spatial attention: Evolutionary language experiments with robots
The workshop concludes with a public SMART lecture in a different building:
16.00 Luc Steels (Brussels, Paris, Barcelona): Fluid Construction Grammar
Place: Building P (Euclides), room 2.27 (Plantage Muidergracht 24, Amsterdam)
A decisive step in the evolution of language was the transition from a holophrastic language to a compositional one. A compositional language has structured linguistic expressions that are built up from simpler, individually meaningful parts; the meaning of a complex expression is related in a systematic way to the meanings of the parts it comprises. Much earlier work has suggested mechanisms by which structured and compositional languages can emerge in populations of language-using agents. The question to be addressed in this talk is slightly more concrete: how "smart" do language-using agents have to be in order to acquire behavior that is indicative of rudimentary forms of compositionality and sequentiality? Game-theoretic modeling suggests that not much sophistication is needed in principle, but the emergence of compositionality and basic grammatical structure is not a sure outcome either: it requires a fine mixture of stimulus generalization and expectations of regularity for proto-compositionality to evolve.
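The signaling games the abstract refers to can be illustrated with a minimal sketch: a two-state, two-signal Lewis signaling game in which sender and receiver learn by simple urn-style (Roth-Erev) reinforcement. This is only an illustrative toy, not the model from the talk; the states, signals, and weight scheme below are invented for the example.

```python
import random

# Toy Lewis signaling game with Roth-Erev reinforcement (illustrative only).
# Two world states, two signals; sender and receiver each keep weights that
# are bumped after every communicative success.

STATES = [0, 1]
SIGNALS = ["a", "b"]

sender = {s: {sig: 1.0 for sig in SIGNALS} for s in STATES}    # state -> signal weights
receiver = {sig: {s: 1.0 for s in STATES} for sig in SIGNALS}  # signal -> state weights

def choose(weights):
    """Sample a key with probability proportional to its weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for key, w in weights.items():
        r -= w
        if r <= 0:
            return key
    return key

random.seed(0)
for _ in range(10_000):
    state = random.choice(STATES)
    signal = choose(sender[state])
    guess = choose(receiver[signal])
    if guess == state:                 # communicative success
        sender[state][signal] += 1.0   # reinforce the sender's choice
        receiver[signal][guess] += 1.0 # reinforce the receiver's choice

# After many rounds, each state typically maps to a distinct signal,
# i.e. the population has converged on a signaling system.
for s in STATES:
    best = max(sender[s], key=sender[s].get)
    print(f"state {s} -> signal {best}")
```

In this simple setting, reinforcement learning is known to converge on a separating "signaling system"; the compositionality question in the talk concerns what additional structure agents need on top of such bare success-driven learning.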
Abstract: In order to learn language, one needs to learn the mapping between form and meaning. An increasingly popular learning mechanism that is studied experimentally and computationally is cross-situational learning, where the learner acquires the correct mapping by analysing the co-variance across different situations in which a word occurs. In this talk, I will review a number of computational, mathematical and experimental studies on cross-situational learning that I have conducted over the years. I will not only focus on the strengths of cross-situational learning, but especially on the limitations of this mechanism.
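The core of cross-situational learning described above can be sketched in a few lines: track, for each word, the set of candidate meanings it has co-occurred with, and intersect these sets across situations. The words and meanings below are invented for illustration, and real models are statistical rather than purely intersective.

```python
# Toy cross-situational word learner (illustrative sketch, not a model from
# the talk): each situation pairs a set of heard words with a set of
# candidate meanings, and the learner intersects candidates across situations.

situations = [
    ({"gavagai", "blicket"}, {"rabbit", "tree"}),
    ({"gavagai", "dax"},     {"rabbit", "sun"}),
    ({"blicket", "dax"},     {"tree", "sun"}),
]

hypotheses = {}  # word -> set of still-possible meanings
for words, meanings in situations:
    for w in words:
        if w not in hypotheses:
            hypotheses[w] = set(meanings)
        else:
            hypotheses[w] &= meanings  # keep only meanings seen every time

for w, ms in sorted(hypotheses.items()):
    print(w, "->", ms)  # gavagai -> {'rabbit'}, blicket -> {'tree'}, dax -> {'sun'}
```

Pure intersection of this kind is also where the limitations discussed in the talk bite: noise, homonymy, or a single mislabelled situation can empty a word's candidate set entirely, which is why probabilistic variants are usually studied instead.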
Abstract: Natural language is evidence for the ingenious ways humans conceptualize reality. English, for instance, provides various ways of talking about objects using spatial relations such as front, back, and up. Importantly, English allows these spatial relations to be used in many different conceptualization strategies: for instance, spatial relations can be used in group-based reference (adjectives) or to denote regions (prepositional use). Another type of evidence for the complexity of natural language semantics comes from the analysis of cross-cultural variation, which shows that languages other than English have found radically different ways of conceptualizing reality. For example, Tzeltal speakers exclusively use absolute spatial relations such as uphill/downhill for talking about objects. These findings point to semantics as an evolutionary system in its own right, with wide-ranging impact on the evolution of language as a whole.
This talk will introduce a computational system that makes it possible to model the complex conceptualizations underlying natural language and that enables robots to automatically conceptualize the world. In the second part, we show how the system can be used to investigate the evolution of rich, open-ended semantics similar to that found in natural language. The talk primarily uses examples from investigations into spatial language, but other areas of language are also discussed.
This talk builds on the idea that human language can be seen as the result of an evolutionary process. The evolutionary paths that language follows are strongly influenced by the cognitive capabilities and limitations of humans. I will illustrate this link between cognition and language evolution for the quantifiers many and few through a number of robotic multi-agent experiments.
The talk discusses the research on language evolution that I have conducted with my students over the past decade. After a general introduction to our methodology and achievements, illustrated with video clips of language game experiments on real robots, I will focus on one spin-off of this research: a new computational formalism for the parsing, production and learning of construction-based grammars, known as Fluid Construction Grammar (FCG). FCG tries to combine the best ideas developed in computational linguistics over the past decades. The result is a new synthesis that allows integration of the constructional perspective, as well as great flexibility and robustness. The presentation is illustrated with a live demonstration of the FCG system in action.