
Anthropic Launches "Code Review" to Fix AI Code Security Issues
Mar 9, 2026
A rundown of Anthropic's new AI tool that scans AI-written code for bugs and security risks. Discussion of why mass AI code generation overloads manual review workflows. Explanation of how the tool integrates with GitHub and leaves actionable comments on pull requests. Notes on multi-agent architecture, severity labeling, customization, and pricing for enterprise use.
AI Snips
AI Is Producing Most Company Code
- AI is now generating a majority of code in some companies, creating scale that outpaces human review.
- Jaeden Schafer cites estimates that 70–90% of code is now AI-generated and says that flood creates a new bottleneck for quality control.
Prioritize Logic Errors Over Style In Automated Reviews
- Use automated reviewers that focus on logic errors rather than style to surface high-impact problems.
- Anthropic's Code Review will label severity, explain reasoning step-by-step, and suggest fixes so engineers can act quickly.
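The severity-labeled, reasoning-plus-fix comment format described above can be sketched as follows. This is a hypothetical illustration of what such a pull-request comment might contain — the field names, severity levels, and rendering are assumptions, not Anthropic's actual output schema or API.

```python
# Hypothetical sketch of a severity-labeled code-review finding rendered as a
# pull-request comment. The schema below is an assumption for illustration,
# not Anthropic's actual format.
from dataclasses import dataclass


@dataclass
class ReviewFinding:
    severity: str       # e.g. "critical", "high", "medium", "low"
    file: str           # path of the file the finding applies to
    line: int           # line number within that file
    reasoning: str      # step-by-step explanation of the problem
    suggested_fix: str  # concrete change the engineer can apply


def format_comment(finding: ReviewFinding) -> str:
    """Render a finding as a Markdown comment body for a pull request."""
    return (
        f"**[{finding.severity.upper()}]** {finding.file}:{finding.line}\n\n"
        f"{finding.reasoning}\n\n"
        f"Suggested fix: `{finding.suggested_fix}`"
    )


finding = ReviewFinding(
    severity="high",
    file="auth/session.py",
    line=42,
    reasoning=(
        "The session token is compared with `==`, which is not "
        "constant-time and can leak timing information."
    ),
    suggested_fix="hmac.compare_digest(token, expected)",
)
print(format_comment(finding))
```

A structure like this is what lets engineers triage quickly: the severity label sorts the queue, the reasoning justifies the flag, and the suggested fix turns the comment into an actionable change rather than a vague objection.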
Open Source Project Got Flooded With Pull Requests
- OpenClaw's open-source surge produced so many pull requests that its lone maintainer was overwhelmed.
- Jaeden Schafer recounts the founder saying he was bogged down reviewing contributions after the project went viral.
