
SparX by Mukesh Bansal: The Dark Side of AI No One Talks About | Connor Leahy
What happens when the most powerful technology ever built is also the one nobody fully understands - including its creators?
Connor Leahy, US Director of Control AI and one of the clearest thinkers in the AI safety space, joins Mukesh Bansal for a conversation that cuts through the hype and lands somewhere far more unsettling: we may have two to five years before AI systems cross a threshold we cannot reverse.
This isn't a doom scroll. It's a strategic briefing.
In this episode of SparX, Connor breaks down why the race to superintelligence is a national security issue, not a technology one, and why the people building it aren't villains; they're simply operating inside a system with no brakes.
He challenges Yann LeCun's repeated prediction that we're nowhere close, explains why Dario Amodei's admission that we understand only ~3% of how AI works should terrify us, and unpacks why Sam Altman and the lab he leads are racing toward a goal they cannot fully control. He also explains why the "we'll run out of data" argument keeps being proven wrong, how AI systems are now learning by interacting with environments (just as humans do), and why, when superintelligence arrives, we probably won't recognise it.
We also ask: What can India and other middle powers actually do? Why did the climate movement fail, and what must the AI safety movement learn from it? Is Senator Blackburn's Trump AI Act a sign that Washington is finally waking up? And with Bernie Sanders and AOC now speaking out, could AI safety become a defining issue in the 2026 elections?
Plus - in a first for the podcast - four frontier AI models (Claude, GPT, Gemini, and Grok) listen live to the conversation and jump in with questions. The results are equal parts fascinating and telling.
Guest: Connor Leahy | AI Safety Researcher | Co-founder of EleutherAI | US Director at Control AI
