Neuro-Computational Models of Natural Language

We build computational models of sentence understanding and evaluate their fit against neural signals collected from people performing a relatively natural task, like listening to a story.
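
The core of this approach is a regression of model-derived, word-by-word predictors (such as surprisal) against the neural signal. The sketch below illustrates that step in Python with simulated data only: the surprisal values, the word-length covariate, and the "neural" signal are all invented for illustration, and a real analysis would use predictors from an actual language model and recorded EEG/MEG/fMRI responses.

```python
import numpy as np

# Hypothetical example: word-by-word surprisal values standing in for a
# language model's predictions, plus a simulated neural signal with one
# sample per word (e.g., an ERP amplitude or a BOLD estimate).
rng = np.random.default_rng(0)
n_words = 200
surprisal = rng.gamma(shape=2.0, scale=1.5, size=n_words)  # model predictor
word_length = rng.integers(2, 12, size=n_words)            # nuisance covariate
neural = 0.8 * surprisal + 0.1 * word_length + rng.normal(0, 1.0, n_words)

# Design matrix: intercept, nuisance covariate, and the model predictor.
X = np.column_stack([np.ones(n_words), word_length, surprisal])

# Ordinary least squares fit of the neural signal on the predictors.
beta, *_ = np.linalg.lstsq(X, neural, rcond=None)

# Compare fit with and without the surprisal predictor: a reliable
# improvement suggests the model's predictions track the neural signal.
X0 = X[:, :2]
beta0, *_ = np.linalg.lstsq(X0, neural, rcond=None)
rss_full = np.sum((neural - X @ beta) ** 2)
rss_base = np.sum((neural - X0 @ beta0) ** 2)
print(f"surprisal coefficient: {beta[2]:.3f}")
print(f"residual sum of squares: {rss_base:.1f} -> {rss_full:.1f}")
```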

Funding: NSF #IIS-1607251, 2017–2019

Jonathan R. Brennan
Associate Professor of Linguistics & Psychology

Research interests: neurolinguistics, semantics, and syntax.

Publications

Domain-specific training typically makes NLP systems work better. We show that this extends to cognitive modeling as well by relating …

The grammar, or syntax, of human language is typically understood in terms of abstract hierarchical structures. However, theories of …

Recurrent neural network grammars (RNNGs) are generative models of (tree, string) pairs that rely on neural networks to evaluate …

On some level, human sentence comprehension must involve both memory retrieval and structural composition. This study differentiates …