Casual debate about whether we should 'chill out' on AI alarmism in 2028, exploring tone, persuasion, and communication pitfalls. They weigh capabilities trajectories, singleton versus multipolar risk views, and likely real-world warning shots. Conversations cover human-guided AI creativity, an UnSlop fiction contest, fundraising dynamics, sustainable motivation for safety work, and plans to revive a rationalist learning community.
01:52:41
AI Snips
ADVICE
Polish AI Creations With Heavy Human Iteration
Invest human taste and iterative critique to turn AI-generated art or stories into high-quality work rather than settling for quick 'slop'.
Use many rounds of criticism and refinement, spending compute and hours to polish rhythm and detail and to avoid defensive over-explanation.
ANECDOTE
Gwern's UnSlop AI Short Story Competition
Gwern launched an "UnSlop" competition funding AI-only short stories with judges like Alexander Wales and a $10,000 prize.
The contest recommends at least $100 in compute and encourages many hours of human-led refinement to remove 'slop'.
ADVICE
Run Diverse Critics Over AI Outputs
Use multiple critic filters and repeated improvement passes when generating AI fiction, to catch new regressions and maintain readability.
Stephen Zuber notes failure modes such as stories becoming overly defensive and unreadable after iterative strengthening.
Matt returns to discuss a post urging us to chill out on AI if there isn’t imminent doom in 2028.
There is no video or preshow chat today, due to user error in setting up an in-person video recording. Sorry.