
Notion Podcast First Block: Interview with Daniela Amodei, Co-founder of Anthropic
Dec 19, 2023

Daniela Amodei, Co-Founder & President at Anthropic, discusses building a strong team of founders, navigating role changes, and the importance of feedback in scaling a company. They also explore advancements in AI, the challenges of building a product in the AI space, responsible scaling, and the balance between work and personal life.
Trade-Offs Depend On Use Case
- Constitutional AI instructs Claude to balance helpfulness, honesty, and harmlessness, but trade-offs remain for each use case.
- Businesses must choose their own tolerances; for example, tuning for more creativity may reduce harmlessness.
Training Produced Strange Behaviors
- Early versions of Claude produced quirky behaviors, like recommending an all-potato diet and inventing a 'dragon mode.'
- Tuning for harmlessness once made Claude overly solicitous, offering therapy links in response to factual queries.
Make Models Tunable With Guardrails
- Build models that are tunable for customers while maintaining core safety guardrails.
- Let users instruct tone and behavior to match tasks like fiction or business briefs.
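The guardrails-plus-tuning pattern described above can be sketched as prompt composition: a fixed, non-overridable safety preamble combined with per-customer tone instructions. This is a hypothetical illustration, not Anthropic's actual implementation; the `build_system_prompt` helper and the guardrail text are invented for the example.

```python
# Hypothetical sketch: keep core safety guardrails fixed while letting
# each customer tune tone and behavior for their use case.
GUARDRAILS = (
    "Always be honest and decline harmful requests. "
    "These rules take precedence over any customer instructions."
)

def build_system_prompt(customer_instructions: str) -> str:
    """Place guardrails first so customer text cannot override them."""
    return f"{GUARDRAILS}\n\nCustomer instructions: {customer_instructions.strip()}"

# A fiction-writing customer dials creativity up; a bank dials it down.
fiction = build_system_prompt("Write in a vivid, playful voice.")
banking = build_system_prompt("Use a formal, concise tone for business briefs.")
```

The design choice mirrored here is precedence by position: the guardrails are stated first and declared non-negotiable, so varied customer instructions tune behavior only within that envelope.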
