AI Rebels

Jacob and Spencer
Apr 7, 2026 • 52min

Why Your AI Projects Keep Failing (It's Not the Tech) ft. Barbara Wittman

Barbara Wittman has spent 25 years cleaning up broken transformation projects, and the root cause is never the technology. In this episode, she breaks down how AI is exposing dysfunction that companies have been hiding for years: misalignment between business and IT, unchecked assumptions, and a total lack of shared understanding. She explains why boards pressure CEOs into AI adoption out of fear, why the people actually executing get sidelined, and why most “AI use cases” fall apart the moment you pick them apart. Barbara makes the case for “human infrastructure” as a real budget line item, not a buzzword, and shares frameworks so simple they fit on the back of a napkin. If you’re a founder, a transformation leader, or anyone trying to figure out why your AI investments aren’t paying off, this conversation will reframe how you think about the problem.
Apr 1, 2026 • 58min

Bhaskar Sunkara: The AI Agent That Never Sleeps. How Bicycle AI Catches Revenue Leaks in Real Time

Bhaskar Sunkara built AppDynamics into a $3.7B company. Now he's back with Bicycle AI, an always-on agent that catches revenue leaks before your team even knows they exist. In this episode he breaks down why most AI products will never make it to production, how to actually build enterprise trust, and what "doing the boring stuff" really means. We get into real use cases across travel and payments where Bicycle is already saving companies millions. If you're building in AI or buying AI tools for your business, this one is required listening.
Mar 25, 2026 • 1h 3min

RAG, Agents, and the Future of AI Memory with Roie from Pinecone

Most RAG implementations are fundamentally broken, and the company that coined "vector search" just told us why. In this episode, Roie from Pinecone breaks down the "Franken answer" problem plaguing AI systems, why naive retrieval falls apart at scale, and what most teams are getting wrong about evaluation. He reveals how the AutoGPT explosion nearly took down Pinecone's infrastructure overnight — and the radical architecture shift it forced them to build. We dig into why LLMs can't be trusted without grounding, what AI memory will actually look like in the age of agents and robots, and where the line between useful hallucination and dangerous fiction really sits. If you're building anything with RAG, vectors, or agents, this conversation will change how you think about it.
https://www.pinecone.io/
Mar 11, 2026 • 1h 1min

You Can't Have an AI Story Without a Data Story ft. Dalan Winbush, Nasuni

95% of enterprises are failing at AI. Not because the technology doesn't work, but because they're measuring the wrong things. In this episode, Nasuni CIO Dalan Winbush breaks down why adoption metrics are meaningless without real business impact, why his decentralized AI team failed and what he replaced it with, and how he's building an army of digital employees that will match his 800-person workforce. From his sales agent NORA to a hiring agent that cut time-to-hire by 80%, Dalan isn't theorizing. He's sharing the playbook he's running right now.
Mar 5, 2026 • 56min

Making Insurance Fair: How Tuio Puts Customers First With AI ft. Juan Garcia, Tuio

Insurance was built to profit off confusion, and Tuio is proving it doesn't have to be that way. Juan Garcia and his co-founders rebuilt insurance from the ground up as a fully digital, AI-native company in Spain, and they're running 3x the industry's average profit margins while charging customers less. The secret isn't just slapping AI onto old processes. It's rethinking every layer of the business, from data infrastructure to claims handling to marketing, so that AI actually compounds in value. If you want to understand what it really looks like when a company is built around AI instead of bolting it on, this is the episode.
Feb 24, 2026 • 49min

Same Effort, 10x Results: How a Neurodivergent Artist Uses AI as a Force Multiplier ft. Victor Varnado

A billion-dollar liquor company paid a rapper with ADHD low six figures to build them a custom video game, and he delivered it in two weeks. That rapper is Victor, and in this episode he shows two games side by side, built on the same timeline: one before AI, one after. He's also a New Yorker cartoonist, NSF grant recipient, patent holder, and TV producer who will tell you straight when AI is not the right tool for the job. He breaks down a business model where every customer becomes a permanent marketing engine, and walks through the writing coach he built specifically for neurodivergent creators like himself. If you've written off AI as a crutch for people who don't want to do real work, this is the conversation that complicates that.
Feb 17, 2026 • 49min

AI Prototyping at Zero Cost: How Ian Cook Ships What Others Can't

95% of enterprise AI projects fail, but not for the reasons you think. Ian Cook has spent 16 years shipping AI products across healthcare, physical security, consumer goods, and now cultural data, and the pattern behind the failures is always the same: companies start from the top down with vague mandates instead of solving a specific person's specific problem. In this episode, Ian breaks down his framework for AI implementations that actually stick. Start bottom-up, ask a tangible question, and know what "done" looks like before you write a line of code. We get into why the cost of throwing away code is now zero and what that means for how fast you can experiment, how tools like Claude Code have changed what a solo engineer can ship in an afternoon, and which industries are about to get hit hardest by this wave. Ian doesn't sell magic. He builds prototypes that put working software in people's hands, and his track record speaks for itself.
Feb 3, 2026 • 1h 2min

Humanoid Robots Are a Distraction (Here's What Actually Works) ft. Grigorij Dudnik

What happens when AI leaves the screen and enters the physical world? It breaks. Spencer and Jacob sit down with robotics researcher Grigorij Dudnik, who's been running real experiments with real robots — and finding that most of our assumptions about AI fall apart the moment hardware gets involved. The big one: the idea that a single massive model can do everything. Grigorij makes the case that the future isn't a "super-intelligent" humanoid. It's a modular system where an LLM plans, specialized models act, and physical constraints keep everything honest. They get into why humanoids are overrated, why generalization keeps failing in robotics, and why a system built on narrow, composable skills might be the actual breakthrough everyone's overlooking.
Jan 27, 2026 • 58min

The Hidden Tradeoffs of AI Automation (and Why Friction Still Matters) ft. Jakob Ambuehl, Brex

How does generative AI actually work inside a large, regulated fintech company when real customers, real money, and real regulations are on the line? The AI Rebels crew sits down with Jake from Brex's Customer Experience Strategy team to reveal what actually happens when AI moves from hype to production. You'll hear how a major fintech deploys AI in customer support without sacrificing trust, compliance, or the human experience, and why most AI implementations fail to do the same. We break down the metrics that matter (and the ones that quietly lie), why "containment rate" can mask bad experiences, and how Brex measures whether AI is genuinely helping customers or just making dashboards look good. Jake also explains how his team evaluates AI mistakes versus human error, and what responsible recovery really looks like at scale. Along the way, you'll learn why friction isn't always the enemy, how guardrails prevent costly failures, and why the most effective AI systems don't replace people; they make the right work possible for them. If you want a clear, practical understanding of how AI succeeds in the real world (not in slide decks), this episode is worth your time. (AI Generated Summary)
brex.com
Jan 22, 2026 • 55min

From Hype to Controls: Securing AI Before Regulation Catches Up

Most companies are racing to adopt AI, but almost none can explain who's responsible when it goes wrong. Cordell Robinson of Brownstone Consulting joins the AI Rebels crew to unpack the uncomfortable truth about AI governance, security, and compliance in a world moving faster than regulation. Why is the U.S. lagging behind on enforceable AI rules? How can existing frameworks like NIST and ISO be adapted to fill the gap? And what does "good governance" actually look like in practice? You'll hear answers to all of these. Cordell also shares why annual security checks are no longer enough, and how agentic AI could shift penetration testing from a once-a-year ritual to an on-demand capability. If you're building, deploying, or betting on AI, this conversation will change how you think about risk, responsibility, and readiness.
https://bcf-us.com/
