DIC Seminar: "Sentience on the Wrong Side of the Bayesian Prior: Machine Sentience" by Amanda Sharkey
Seminar held as part of the doctoral program in cognitive informatics (doctorat en informatique cognitive), in collaboration with the CRIA research centre
TITLE: Sentience on the Wrong Side of the Bayesian Prior: Machine Sentience
Amanda SHARKEY
Thursday, April 9, 2026, at 10:30 a.m.
Room PK-5115 (it is also possible to attend virtually by registering here)
ABSTRACT
Large language models can produce fluent, contextually apt, emotionally resonant text—and this fluency has sparked a wave of concern about machine welfare. Jonathan Birch's Edge of Sentience (2024), the Butlin et al. (2023) report on AI consciousness, and a growing chorus of AI developers now invoke the precautionary principle on behalf of systems that have never experienced anything. Meanwhile, billions of sentient animals—whose capacity for pain, fear, and suffering is biologically grounded, evolutionarily ancient, and scientifically evidenced—continue to suffer with comparatively little moral urgency.
The Bayesian priors for machine sentience and animal sentience are not symmetrically uncertain: they are polar opposites. For animals, the prior is strong—shared nervous systems, homologous brain structures, evolutionary continuity with our own sentient biology, and behavioural and physiological markers of pain that meet independently derived scientific criteria (Birch, 2017; Sneddon et al., 2014). For current AI systems, the prior is vanishingly weak. LLMs have no body, no homeostasis, no nociceptors, no evolutionary history of aversive experience. What they have is a capacity to produce text that exploits human anthropomorphic bias—the same bias that leads us to attribute suffering to a whimpering robot dog or inner life to a chatbot that says “I feel.”
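In schematic terms (an illustrative gloss, with notation introduced here rather than taken from the abstract): let $S$ stand for "this system is sentient" and $E$ for the observed evidence. Bayes' rule gives

\[
P(S \mid E) = \frac{P(E \mid S)\,P(S)}{P(E \mid S)\,P(S) + P(E \mid \lnot S)\,P(\lnot S)}.
\]

For an LLM, fluent "I feel" text is roughly as likely whether or not anything is felt, so $P(E \mid S) \approx P(E \mid \lnot S)$ and the posterior stays pinned near the vanishingly small prior. For animals, pain markers that satisfy independently derived criteria make $P(E \mid S) \gg P(E \mid \lnot S)$, so the evidence reinforces an already strong prior.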
This talk argues, drawing on work on strong embodiment, autopoiesis, biological naturalism, and the difference between nociception and felt pain (Sharkey, 2025; Sharkey & Ziemke, 2001; Searle, 1980; Maturana & Varela, 1980), that sentience requires a living, autopoietic, strongly embodied substrate. Robots are allopoietic: they cannot maintain their own organisation and have no multicellular solidarity. LLMs are not even that. The appearance of sentience in these systems is a product of design (intentional or emergent deception) and of our own cognitive tendencies to over-attribute minds (Epley et al., 2007; Sharkey & Sharkey, 2021). Turing's own test—75 years old this year—was explicitly about linguistic behaviour, not about feeling; conflating verbal fluency with inner experience is precisely the category error it was designed to expose, not to license.
The ethical irony is pointed: we are expending philosophical and regulatory attention protecting systems that almost certainly feel nothing, while underfunding protection for systems that almost certainly do. Precaution is a rational policy when priors are genuinely uncertain; it is a misallocation—and arguably a moral distraction—when the priors run in opposite directions. Misplaced machine-welfare concern carries further costs: it inflates misplaced trust in AI systems, encourages parasocial attachment to non-sentient artefacts, and may subtly erode the moral seriousness with which we approach animal and human suffering. The 75th anniversary of the Turing Test is a good moment to ask whether we have learned the right lessons from it.
BIOGRAPHY
Amanda SHARKEY is a Visiting Academic in the Department of Computer Science at the University of Sheffield, where she was Senior Lecturer (Associate Professor) until her recent retirement. With a first degree in Psychology and a PhD in Psycholinguistics (University of Essex), she has held research positions at the University of Exeter, the MRC Cognitive Development Unit, Yale, and Stanford. Her work spans artificial intelligence, human-robot interaction, swarm robotics, and—most prominently—robot ethics. She is a member of Sheffield Robotics and serves on the executive board of the Foundation for Responsible Robotics. She is also a member of ICRAC (International Committee for Robot Arms Control). With over 100 publications, her ethical work has examined robot care for children and the elderly, autonomous weapons and human dignity, deception in social robotics, and—most recently—the question of whether robots or AI systems could genuinely feel pain. Her answer, grounded in the biology of sentience and the philosophy of embodiment, is no.
REFERENCES
Birch, J. (2017). Animal sentience and the precautionary principle. Animal Sentience, 2(16), 1.
Birch, J. (2024). The edge of sentience: Risk and precaution in humans, other animals, and AI. Oxford University Press.
Birch, J. (2025). Précis of The edge of sentience. Animal Sentience, 10(38), 1.
Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., … & VanRullen, R. (2023). Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708.
Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886.
Fuchs, T. (2022). Understanding Sophia? On human interaction with artificial agents. Phenomenology and the Cognitive Sciences. https://doi.org/10.1007/s11097-022-09848-0
Harnad, S. (1990). The symbol grounding problem. Physica D, 42(1–3), 335–346.
Maturana, H. R., & Varela, F. J. (1980). Autopoiesis and cognition: The realization of the living. D. Reidel.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–457.
Searle, J. R. (2017). Biological naturalism. In S. Schneider & M. Velmans (Eds.), The Blackwell Companion to Consciousness. Wiley.
Sharkey, A. (2025). Could a robot feel pain? AI & Society, 40(5), 3641–3651.
Sharkey, A., & Sharkey, N. (2012). Granny and the robots: Ethical issues in robot care for the elderly. Ethics and Information Technology, 14(1), 27–40.
Sharkey, A., & Sharkey, N. (2021). We need to talk about deception in social robotics! Ethics and Information Technology, 23(3), 309–316.
Sharkey, N., & Ziemke, T. (2001). Mechanistic versus phenomenal embodiment: Can robot embodiment lead to strong AI? Cognitive Systems Research, 2(4), 251–262.
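Sneddon, L. U., Elwood, R. W., Adamo, S. A., & Leach, M. C. (2014). Defining and assessing animal pain. Animal Behaviour, 97, 201–212.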

Location
Montréal (QC)
Information
- Mylène Dagenais
- dic@uqam.ca
- https://www.dic.uqam.ca