GRK 2340

Graduiertenkolleg "Computational Cognition"


Sophie Lehfeldt


Tel. +49 (0)541 969-3372
Room 50/106

Institute of Cognitive Science,
Wachsbleiche 27,
49090 Osnabrück, Germany


Gordon Pipa
Jutta L. Mueller

Self-organised grammar learning with a plastic recurrent network

Preverbal children have a remarkable ability to detect and learn repeating temporal patterns, and thus the precursors of the grammatical structures of natural languages, from the acoustic signal. However, the neural mechanisms underlying children's grammar learning remain largely unknown. A current working hypothesis assumes that children learn successfully in an automatic, associative fashion because cognitive control is only weakly expressed at this age. To examine this hypothesis in more detail, it is valuable to ask whether state-of-the-art computational models of associative learning, such as spike-timing-dependent plasticity (STDP), can reproduce experimental findings on infant grammar learning when applied to recurrent neural network models of cortex.
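As an illustration of the associative learning rule mentioned above, the following is a minimal sketch of a pairwise, additive STDP window. The amplitudes and time constants are illustrative assumptions, not the project's actual parameters; the project may also use a different plasticity variant.

```python
import math

# Illustrative STDP parameters (assumed values, not from the project)
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants in ms

def stdp_dw(delta_t):
    """Weight change for spike-time difference delta_t = t_post - t_pre (ms).

    Pre-before-post (delta_t > 0) potentiates the synapse; post-before-pre
    (delta_t < 0) depresses it. The effect decays exponentially with the
    spike-time difference.
    """
    if delta_t > 0:
        return A_PLUS * math.exp(-delta_t / TAU_PLUS)
    return -A_MINUS * math.exp(delta_t / TAU_MINUS)

# A pre spike 5 ms before a post spike strengthens the synapse;
# the reverse order weakens it.
dw_causal = stdp_dw(5.0)
dw_acausal = stdp_dw(-5.0)
```

Because the weight change depends only on relative spike timing, a recurrent network equipped with such a rule can associate stimuli with their temporal context without any explicit supervision signal.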

A major goal of this PhD project is thus to train a recurrent neural network to learn grammatical structures, as found in natural language, in a self-organised fashion. The network will be trained with artificial grammar stimuli ranging from symbolic input sequences to subsymbolic representations of spoken grammatical samples in the form of spatio-temporal spike patterns that incorporate basic neural coding schemes of acoustic stimuli in the brain. To learn grammars successfully, the recurrent network will perform several computations: (i) learning the identity of individual linguistic elements, (ii) learning the typical structural composition of grammatical sequences by integrating stimulus identities with their temporal occurrences, and (iii) detecting ungrammatical samples by eliciting a deviant or mismatch response to the rule-violating element. A further, more sophisticated computation would be (iv) generalisation of trained networks to unknown samples of learned grammatical structures.

Taken together, this PhD project will promote a deeper understanding of infant grammar learning and its underlying mechanisms at the neural level. Specifically, performing linguistic operations in a neurobiologically motivated modelling substrate provides a potential link between neural mechanisms in the human brain and linguistic computations.
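The symbolic end of the stimulus range described above can be sketched as follows. The specific grammar (an A-B-B repetition pattern) and the syllable vocabulary are hypothetical choices for illustration only; the project's actual artificial grammars may differ.

```python
import random

# Hypothetical syllable vocabularies for the two word categories
A_WORDS = ["ga", "li", "ni"]
B_WORDS = ["ti", "na", "gi"]

def grammatical_sample(rng):
    """A grammatical sequence under the (assumed) ABB rule: a b b."""
    a = rng.choice(A_WORDS)
    b = rng.choice(B_WORDS)
    return [a, b, b]

def violating_sample(rng):
    """An ABA sequence: the final element violates the learned ABB rule,
    so a trained network should elicit a mismatch response to it."""
    a = rng.choice(A_WORDS)
    b = rng.choice(B_WORDS)
    return [a, b, a]

rng = random.Random(0)
train = [grammatical_sample(rng) for _ in range(5)]   # training stimuli
probe = violating_sample(rng)                         # test stimulus
```

A generalisation test, computation (iv), would present grammatical sequences built from syllables withheld during training, so that only the abstract ABB structure, not the surface forms, is familiar to the network.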
