
635. The Psychology of Computers with Tom Griffiths
unSILOed with Greg LaBlanc
Will AI language reshape human language?
Tom explores how models can introduce new terms or homogenize language via high-probability outputs and cultural bias.
Today's AI has been designed using insights from how humans learn and think about the world. Are there certain psychological lessons we can glean from these artificial minds to further our understanding of human ones?
Tom Griffiths is a professor of information technology, consciousness, and culture at Princeton University. His books, The Laws of Thought: The Quest for a Mathematical Theory of the Mind and Algorithms to Live By: The Computer Science of Human Decisions, explore how algorithms and mathematics can be used to understand the human mind and how it differs from AI.
Tom and Greg discuss the origins of the surprising convergence of psychology and computer science over the last 50 years and delve into the work done by the interdisciplinary minds who made it happen. They also cover how psychology and linguistics impact the current world of machine learning and AI.
*unSILOed Podcast is produced by University FM.*
Episode Quotes:
How do we build good inductive bias into AI systems?
26:07: How do we build good inductive biases into these systems? At the moment, that is being engineered to some extent through things like synthetic pre-training, where you pre-train not on human language data but on data that you think is good for shaping the kinds of things your neural network is going to be biased towards. There are also more sophisticated methods for doing that. In my lab, we use a method called meta-learning, where you explicitly create a neural network whose initial weights, the associations it has already formed, make it easy for that model to learn from small amounts of data.
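The meta-learning idea in the quote above can be sketched in a few lines. This is an illustrative Reptile-style loop, not the specific method used in Tom's lab: each "task" is fitting a one-parameter linear model, and the shared initial weight is nudged toward whatever weight each task's few gradient steps produced, so that a new task can be learned from very little data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """A 'task' is learning y = a * x for a random slope a near 2."""
    a = rng.uniform(1.5, 2.5)
    x = rng.uniform(-1, 1, size=10)
    return x, a * x

def sgd_steps(w, x, y, lr=0.1, steps=5):
    """A few gradient steps on squared error for one task."""
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

# Reptile-style meta-learning: move the shared initialization toward
# the weights found after briefly adapting to each sampled task.
w_meta = 0.0
for _ in range(1000):
    x, y = sample_task()
    w_task = sgd_steps(w_meta, x, y)
    w_meta += 0.1 * (w_task - w_meta)

print(w_meta)  # close to the typical task slope, so new tasks adapt fast
```

After meta-training, `w_meta` sits near the center of the task distribution, which is exactly the "good initial associations" role the quote describes: a fresh task now needs only a handful of examples to fine-tune from that starting point.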
Neural networks vs. human learners
23:00: One of the big differences between even the fancy neural networks that we have today and human learners is that human learners learn language from far less data than our neural network models do.
What is a neural network?
18:30: The way I think about neural networks is that they're a tool for thinking about computation in spaces: a way of mapping one space to another, based on the information you've received, that lets you build up to more and more complex computations.
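The "mapping one space to another" picture can be made concrete with a tiny hand-built example (our illustration, not Tom's): a two-layer network whose hidden layer maps 2-D inputs into a new space where XOR, which no single linear map can compute, becomes linearly separable.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0)

# Hand-chosen weights for a two-layer XOR network.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([1.0, -2.0])

def xor_net(x):
    h = relu(W1 @ x + b1)  # map input space -> hidden space
    return W2 @ h          # hidden space -> output space

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, xor_net(np.array(x, dtype=float)))
```

Each layer is just a map between spaces; composing two simple maps yields a computation (XOR) that neither could perform alone, which is the "building up to more complex computations" in the quote.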
Show Links:
Recommended Resources:
- David Marr
- John B. Watson
- B. F. Skinner
- Jerome Bruner
- John von Neumann
- Herbert A. Simon
- Noam Chomsky
- Allen Newell
- Frank Rosenblatt
- Marvin Minsky
- “Embers of autoregression show how large language models are shaped by the problem they are trained to solve” - Paper
- Roger Shepard
- Jeffrey Elman
- Been Kim
Guest Profile:
- Faculty Profile at Princeton University
- Computational Cognitive Science Lab
- Professional Profile on LinkedIn
Guest Work:
- The Laws of Thought: The Quest for a Mathematical Theory of the Mind
- Algorithms to Live By: The Computer Science of Human Decisions
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.


