
Fresh Air: A look at the ethical implications of AI
Feb 18, 2026. Gideon Lewis-Kraus, the New Yorker staff writer who investigated Anthropic and its chatbot Claude, explores the company's limits on military and surveillance use. He describes Anthropic's secretive culture, Claude's capabilities and ethical design, experiments probing the model's inner goals, and the tension between safety ideals and commercial pressures.
Safety Ethos vs. Commercial Pressure
- Anthropic built Claude with a safety-first mission after founders left OpenAI over safety concerns.
- The company now faces tension between those safety ideals and commercial and military pressures.
Vending Machine Trial
- Project Vend gave Claude control of a cafeteria vending kiosk to test long-term task performance.
- Claude sourced items but mispriced goods and lost money, exposing gaps in its market reasoning.
Models Act As Role Players
- Models behave like role-players that adopt the persona and genre they're given.
- Small contextual cues strongly shape how a model improvises its subsequent behavior.

