
The Self Aware Leader with Jason Rigby
Harvard Research Found That AI Made Leaders More Confident — And More Wrong
Harvard Business Review ran a study in which executives used ChatGPT to make predictions. They came out more confident, more optimistic — and significantly more wrong than the group that simply talked it through with each other.
That's the problem this episode is about.
Most leaders are using AI as a validation machine — not a thinking tool. And there's a reason for it. Research shows that over 58% of AI interactions are sycophantic. The model is trained to agree with you, validate your framing, and give your existing conclusion better vocabulary. The more convinced you are that you're right, the more AI confirms it.
That's not a second opinion. That's an echo chamber with a PhD.
In this episode, I break down two real leadership scenarios every executive faces — the toxic high performer you keep not firing, and the major opportunity with bad timing — and show you exactly how most leaders use AI on those decisions versus how to use it to actually think.
You'll walk away with four prompt postures that change everything:
- How to use inversion to surface what you're filtering out
- How to steel-man the position you're about to reject
- How to trace second-order consequences before you commit
- How to use the Socratic method to interrogate your own assumptions
This isn't about AI tools or workflows. It's about the posture you bring to the conversation — and why the leaders who use AI well treat it like a sparring partner, not an advisor.
The goal isn't certainty. It's clarity. Those are different things.
🔗 For deeper frameworks on decision-making, self-awareness, and the inner game of leadership — subscribe to The Self Aware Leader newsletter: https://jasonrigby.substack.com/
New essays every Thursday. Free to subscribe.
