DIC Seminar: "Meaning in Large Language Models: Bridging Formal Semantics, Pragmatics, and Learned Representations" by Christopher Potts

Seminar held as part of the Doctorat en informatique cognitive (PhD program in cognitive computer science), in collaboration with the CRIA research centre.

 

Title: Meaning in Large Language Models: Bridging Formal Semantics, Pragmatics, and Learned Representations

 

Christopher POTTS

Thursday, September 25, 2025, 10:30 a.m.

Room PK-5115 (it is possible to attend virtually by registering here)

 

Abstract

In its modern form, semantics (the study of the conventionalized aspects of linguistic meaning) is firmly rooted in symbolic logic. Such logics are also a cornerstone of pragmatics (the study of how people create meaning together in interaction). We can trace this methodological orientation to the roots of these fields in mathematical logic and the philosophy of language. This origin story has profoundly shaped both semantics and pragmatics at every level. How would these fields have looked had they instead been rooted in connectionism? They would have been radically different: the distinction between semantics and pragmatics would fall away, the range of relevant empirical phenomena would expand, and the theories themselves would have greater predictive force. This is not to say that there would be no role for symbolic logic in this hypothetical connectionist "semprag." Large language models do learn solutions that reflect existing symbolic theories of meaning, and this is key to their success. This points to a future in which the fields of semantics and pragmatics embrace much more of what is happening in AI, without, however, giving up their roots in symbolic logic.

 

Biography

Christopher POTTS is Professor of Linguistics and, by courtesy, of Computer Science at Stanford, and a faculty member in the Stanford NLP Group and the Stanford AI Lab. His research group uses computational methods to explore topics in context-dependent language use, systematicity and compositionality, model interpretability, information retrieval, and foundation model programming. This research combines methods from linguistics, cognitive psychology, and computer science, in the service of both scientific discovery and technology development. Chris is also Co-Founder and Chief Scientist at Bigspin AI, a start-up focused on collaborative development of AI systems.

 

REFERENCES:

Arora, A., Jurafsky, D., & Potts, C. (2024). CausalGym: Benchmarking causal interpretability methods on linguistic tasks. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 14638--14663.

Kallini, J., Papadimitriou, I., Futrell, R., Mahowald, K., & Potts, C. (2024). Mission: Impossible Language Models. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

Huang, J., Wu, Z., Potts, C., Geva, M., & Geiger, A. (2024). RAVEL: Evaluating interpretability methods on disentangling language model representations. Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics, 8669--8687.


Date / time

Thursday, September 25, 2025
10:30 a.m.

Location

UQAM - Pavillon Président-Kennedy (PK)
PK-5115
201, avenue du Président-Kennedy
Montréal (QC)

Price

Free

Information

Visit the website
