DIC Seminar: "Linguistic theory and deep language models" by Yair Lakretz

    Seminar held as part of the PhD program in cognitive computer science (doctorat en informatique cognitive), in collaboration with the CRIA research centre

     

    TITLE: Linguistic theory and deep language models

     

    Yair LAKRETZ

    Thursday, April 2, 2026, at 10:30 a.m.

    Room PK-5115 (it is possible to attend virtually by registering here)

     

    ABSTRACT

    Linguistic theory suggests that human language is organized by a latent hierarchical structure, which allows us to convey complex meanings and to seamlessly interpret sentences we have never encountered before. A central debate in the field is whether the brain encodes this abstract structure independently from the specific semantic content of words. While probing the human brain remains a challenge, modern neural AI models provide new means to test these theories by allowing for more granular analysis of how syntactic rules and meanings are represented within their architectures. In this talk, I will discuss evidence suggesting that these models can learn to decouple structure from content, aligning with the functional modularity proposed in linguistic theory. I will describe various methods for studying the underlying neural representations and mechanisms in these models, ranging from behavioral tasks to internal weight analysis. Finally, I will show how neuroscientific experiments can be designed to test specific predictions derived from these models, exploring how the human brain acquires and processes the structures of language.
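
    To give a flavour of the "behavioral tasks" mentioned above, here is a minimal sketch (not material from the talk) of a targeted agreement test in the spirit of Lakretz et al. (2021): the model checkpoint ("gpt2"), the example sentence, and the helper function are illustrative assumptions, not the speaker's actual setup.

    # Minimal sketch: a behavioral agreement probe on a causal language model.
    # We compare the probability the model assigns to a grammatical vs. an
    # ungrammatical verb after a nested (centre-embedded) dependency.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "gpt2"  # assumption: any causal LM would do
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    model.eval()

    # The main verb must agree with "keys" (plural), not with the
    # intervening singular noun "cabinet".
    prefix = "The keys that the man near the cabinet holds"
    good, bad = " are", " is"  # plural verb is grammatical here

    def continuation_logprob(prefix: str, continuation: str) -> float:
        """Sum of token log-probabilities of `continuation` given `prefix`."""
        prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
        cont_ids = tokenizer(continuation, return_tensors="pt").input_ids
        input_ids = torch.cat([prefix_ids, cont_ids], dim=1)
        with torch.no_grad():
            log_probs = torch.log_softmax(model(input_ids).logits, dim=-1)
        offset = prefix_ids.shape[1]
        total = 0.0
        for i in range(cont_ids.shape[1]):
            # logits at position offset+i-1 predict the token at offset+i
            total += log_probs[0, offset + i - 1, cont_ids[0, i]].item()
        return total

    print("log P(are):", continuation_logprob(prefix, good))
    print("log P(is): ", continuation_logprob(prefix, bad))
    # A model that tracks the hierarchical structure, rather than the
    # nearest noun, should assign a higher probability to "are".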

     

    BIOGRAPHY

    Yair LAKRETZ is a CNRS Research Scientist at the Laboratoire de Sciences Cognitives et Psycholinguistique (LSCP) in Paris, where he heads the Neuro-Linguae-AI team. His research focuses on the neural mechanisms underlying language processing, particularly how the human brain and artificial intelligence systems encode and compute complex linguistic structures. He investigates these questions by combining the analysis of neural models (AI) with empirical studies on humans, utilizing neuroimaging techniques including fMRI, E/MEG, and intracranial recordings.

     

    REFERENCES

    Lakretz, Y., Hupkes, D., Vergallito, A., Marelli, M., Baroni, M., & Dehaene, S. (2021). Mechanisms for handling nested dependencies in neural-network language models and humans. Cognition, 213, 104699.

    Evanson, L., Lakretz, Y., & King, J. R. (2023). Language acquisition: Do children and language models follow similar learning stages? In Findings of the Association for Computational Linguistics: ACL 2023 (pp. 12205-12218).

    Lakretz, Y., Desbordes, T., Hupkes, D., & Dehaene, S. (2022). Can transformers process recursive nested constructions, like humans? In Proceedings of the 29th International Conference on Computational Linguistics (pp. 3226-3232).

    Diego Simon, P. J., d'Ascoli, S., Chemla, E., Lakretz, Y., & King, J. R. (2024). A polar coordinate system represents syntax in large language models. Advances in Neural Information Processing Systems, 37, 105375-105396.

    Rambaud, V., Mascarenhas, S., & Lakretz, Y. (2025). MapFormer: Self-Supervised Learning of Cognitive Maps with Input-Dependent Positional Embeddings. arXiv preprint arXiv:2511.19279.


    Date / time

    Thursday, April 2, 2026
    10:30 a.m.

    Location

    UQAM - Pavillon Président-Kennedy (PK)
    PK-5115 and online
    201, avenue du Président-Kennedy
    Montréal (QC)

    Price

    Free

    Information

    Visit the website
