Despite quite detailed knowledge about the representations and processes underpinning language in the brain, two research traditions have prevailed in the field: (1) language research is typically modality specific, with a dissociation between the production and comprehension of language in terms of research strategies, paradigms and models; (2) the integration of linguistic function with brain function is based mainly on a correlational one-to-one mapping between psycholinguistic theory and the brain responses elicited by the production or comprehension of language.

The objective of the current project is to explore the nature of linguistic representations and processes from an integrated perspective, both in terms of language modalities and in terms of brain-language mappings. Specifically, the aims are (1) to offer a systematic comparison of the neurophysiological responses to both the perception and the production of words, in order to highlight the spatiotemporal similarities and differences between the two language modalities; and (2) to spell out those differences and similarities in terms of neural coding theories, with the objective of obtaining a causal, mechanistic account of the brain-language integration underlying our capacity to speak and hear words.

To this end, seven experiments (studies) are proposed, utilising spatiotemporally sensitive techniques (three EEG, two MEG and two TMS experiments). Brain indexes elicited by the production (object naming) and perception (passive listening with semantic catch trials) of words will be compared within the same participants, for the same stimuli and for the same experimental manipulations (contrasts). Predictions and theory regarding the temporal progression and spatial organization of the brain activity linked to different linguistic processing components in the production and perception of words (e.g., lexico-semantic vs. acoustic-articulatory processing) will be guided by applying well-known neural coding schemes of cortical organization and communication (e.g., hierarchical convergence coding, hierarchical population coding, non-hierarchical assembly coding) to psycholinguistic theories of word retrieval (e.g., discrete serial dynamics, cascaded and interactive dynamics, parallel distributed dynamics).
This mechanistic brain-language integration serves to generate unambiguous, theory-contrasting predictions concerning the neural differences and similarities in space and time between word production and comprehension. In this manner, the data collected in this project can further elucidate the nature and mechanisms that sustain language production and language perception, respectively, in the human brain, as well as offer a first, novel view of the integration and cortical interaction between hearing and speaking words. Furthermore, beyond the specific objectives and theoretical advancements of the current project, such an integrated brain-language map across modalities will be an important and essential basis for beginning to study the neurophysiology of language during natural communication (i.e., hearing and speaking simultaneously); a complex but necessary and exciting challenge for future research on the neurobiology of language.
LPL: Elin Runnqvist, Sophie Dufour, Amandine Michelas
CNRS (not LPL): Mireille Bonnard (INS: CNRS & AMU), Jean-Michel Badier (INS: CNRS & AMU)
External: Friedemann Pulvermuller (FU Berlin, Germany), Robert Hartsuiker (Ghent University, Belgium)