
CrowdScience: Could AI present CrowdScience?
Mar 27, 2026
The programme probes whether AI could take over presenting by cloning voices and simulating interviews. Experts explain how large language models are trained and how AI performance is measured. Researchers demonstrate why synthetic speech still sounds slightly off and why many languages lag behind. The BBC's experiments with AI, and its ethical rules about using AI for content, are also discussed.
AI Snips
LLM Progress Measured By Task Time
- Large language models (LLMs) predict the next word from massive unlabeled text and are shaped into chatbots using reinforcement learning with human feedback.
- Alex Hern notes that the length of tasks LLMs can complete at 50% reliability doubles roughly every seven months, moving quickly from multi-hour tasks toward full workdays.
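The doubling claim above implies simple exponential growth in task horizon. A minimal sketch, assuming illustrative numbers (a 1-hour starting horizon is hypothetical, not from the episode):

```python
def task_horizon_hours(start_hours: float, months_elapsed: float,
                       doubling_months: float = 7.0) -> float:
    """Project the task horizon: start * 2^(months / doubling period)."""
    return start_hours * 2 ** (months_elapsed / doubling_months)

# From a 1-hour horizon, an 8-hour workday (three doublings) takes
# 3 * 7 = 21 months under this trend.
print(task_horizon_hours(1.0, 21))  # 8.0
```

This only extrapolates the stated trend; it says nothing about whether the trend holds.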
Jobs Break Down Into Automatable Tasks
- Whether AI replaces jobs depends on which specific tasks within jobs can be automated and how employers or society respond.
- Alex Hern stresses choices matter: employers may speed output expectations, reduce headcount, or grant leisure depending on policy and power dynamics.
Cloned Voices Still Lack Natural Prosody
- Voice cloning is possible from minutes of audio but still sounds slightly robotic because models lack explicit control over prosody and question intonation.
- Caroline Steel heard her cloned voice reading a BBC article and noticed pitch and delivery differences.
