unSILOed with Greg LaBlanc

293. Stop Torturing Data feat. Gary Smith

The Dangers of Large Language Models

The danger of large language models is not that they're going to take over the world, but that we're going to trust them too much. The fundamental problem remains that these algorithms don't know what words mean. They have no way of internally telling whether something is true or false because they literally do not know what any words mean. You trust a large language model to decide whether you should be given a job, whether you should be approved for a mortgage, how many years you should be sent to prison. Those kinds of decisions, when it has no knowledge of what words mean, it's just looking for statistical patterns. It's treacherous.

