
The Decibel: What social media for AI bots can tell us about consciousness
Feb 18, 2026. Karina Vold, an assistant professor of philosophy at the University of Toronto who studies cognitive science, AI, and ethics, unpacks how language models generate text and what autonomous AI agents do differently. She explores why people anthropomorphize machines, the practical harms of such misattribution on AI platforms, and what consciousness and sentience might mean for nonhuman systems.
How LLMs Generate Language
- Large language models generate text by predicting the next token using learned statistical patterns from huge datasets.
- They produce novel strings rather than retrieving prewritten answers, which enables generative AI capabilities.
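The next-token loop described above can be sketched with a toy bigram model. This is an illustration, not anything from the episode: real LLMs use neural networks trained on huge corpora, but the core loop of scoring candidate next tokens, sampling one, appending it, and repeating is the same.

```python
import random

# Toy corpus; a real model learns statistics from billions of tokens.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each token follows each other token.
counts: dict[str, dict[str, int]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {}).setdefault(nxt, 0)
    counts[prev][nxt] += 1

def generate(start: str, length: int, seed: int = 0) -> list[str]:
    """Generate tokens by repeatedly sampling likely continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:  # no known continuation: stop
            break
        tokens = list(followers)
        weights = [followers[t] for t in tokens]
        out.append(rng.choices(tokens, weights=weights)[0])
    return out

print(" ".join(generate("the", 5)))
```

Because each step samples from a distribution rather than looking up a stored answer, the output can be a word sequence that never appears in the corpus, which is the sense in which such systems produce novel strings.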
What Makes AI Agents Different
- AI agents are LLM-based systems that autonomously carry out human-given tasks online with some unpredictability.
- They remain prompt-driven but can perform sequences of actions without continuous human input.
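The agent behaviour described above can be sketched as a loop. All names here are hypothetical stand-ins, not from the episode: `fake_llm` plays the role of the underlying language model, and the loop shows how a single human prompt can kick off a sequence of actions with no further human input.

```python
def fake_llm(prompt: str) -> str:
    """Stand-in for a language-model call; returns a canned next action."""
    steps = ["search", "read", "summarize", "done"]
    progress = prompt.count("result:")  # pretend the model tracks progress
    return steps[min(progress, len(steps) - 1)]

def run_agent(task: str, max_steps: int = 10) -> list[str]:
    """Carry out a human-given task as a chain of actions, autonomously."""
    prompt = f"task: {task}"
    actions = []
    for _ in range(max_steps):
        action = fake_llm(prompt)
        if action == "done":
            break
        actions.append(action)
        # Feed the outcome back into the prompt for the next model call.
        prompt += f"\nresult: finished {action}"
    return actions

print(run_agent("find recent papers on AI consciousness"))
# → ['search', 'read', 'summarize']
```

The system is still prompt-driven (the human supplies the task), but the loop, not the human, decides and executes each intermediate step, which is also where the unpredictability comes from: a real model's choice of action at each turn is not fixed in advance.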
Risks Of Anthropomorphizing AI
- Humans have a strong tendency to anthropomorphize nonhuman systems, prematurely attributing goals or self-awareness to them.
- Karina Vold warns this can create unwarranted moral obligations and misdirect responsibilities toward systems that may not merit them.



