

The CTO Show with Mehmet Gonullu
Mehmet Gonullu
Broadcasting from Dubai, The CTO Show with Mehmet explores the latest trends in technology, startups, and venture funding. Host Mehmet Gonullu leads insightful discussions with thought leaders, innovators, and entrepreneurs from diverse industries. From emerging technologies to startup investment strategies, the show provides a balanced view on navigating the evolving landscape of business and tech, helping listeners understand technology's profound impact on our world.
mehmet@yassiventures.com
Episodes

Jan 26, 2026 • 38min
#567 Engineering Creativity: Peadar Coyle on Scaling AI Audio Infrastructure
In this episode of The CTO Show with Mehmet, Mehmet sits down with Peadar Coyle, Co-Founder and CTO of AudioStack, to explore how AI is transforming audio production from a creative craft into scalable infrastructure.

Peadar shares how AudioStack built production-grade AI systems for media and brands worldwide, why audio is becoming a systems problem, and how founders and CTOs can balance speed, quality, and creativity in the age of generative AI.

From programmatic advertising in the UAE to shipping daily in fast-moving startups, this conversation dives deep into the technical, strategic, and cultural realities of building AI-powered platforms.

⸻

👤 About the Guest: Peadar Coyle

Peadar Coyle is the Co-Founder and CTO of AudioStack, an AI-native audio production platform serving global media and entertainment companies.

With a background in data engineering, open-source development, and philosophy, Peadar brings a rare blend of technical depth and human-centered thinking to AI systems design. He is passionate about building reliable, ethical, and scalable infrastructure for creative industries.

https://www.linkedin.com/in/peadarcoyle/

⸻

🔑 Key Takeaways
• Why audio production is shifting from “creative workflows” to “AI infrastructure”
• How AI accelerates creativity instead of replacing it
• The importance of shipping small, fast, and safely
• Why observability and human-in-the-loop systems still matter
• How to scale generative AI without losing trust
• What founders get wrong about “AI prototypes vs real products”
• How to build strong engineering culture in fast-changing environments
• Why the last 10% of AI products is still the hardest

⸻

🎯 What You’ll Learn in This Episode
• How AudioStack automated large-scale localized audio campaigns
• How to balance customer demands with technical quality
• How CTOs should rethink productivity with AI agents
• What “production-ready AI” really means
• How AI is changing product, engineering, and leadership roles
• Why creativity remains a human advantage
• How to prepare teams for continuous technological change

⸻

⏱️ Episode Highlights & Timestamps
00:00 – Introduction & Peadar’s background
02:00 – Why AudioStack was founded
03:30 – Audio as infrastructure vs creativity
05:00 – How AI accelerates creative iteration
07:00 – UAE use case: Programmatic localized ads
09:00 – Orchestration, latency, and reliability challenges
11:00 – Observability and human-in-the-loop AI
14:00 – Evaluating AI systems in production
16:00 – Ethics, copyright, and trust in generative audio
18:30 – Shipping fast: Engineering culture at AudioStack
20:30 – Balancing customer needs with technical debt
23:00 – Building culture in the AI era
26:00 – How CTO roles are changing
28:00 – Product + Engineering convergence
30:00 – What makes great audio in the future
32:00 – Advice for founders in creative AI
35:00 – Final thoughts and recommendations

⸻

📚 Resources Mentioned
• AudioStack Platform: https://www.audiostack.ai
• Claude Code & AI Agents
• AI Evaluation & Observability Tools
• ISO/IEC 42001 (AI Management Systems)
• SOC 2 Compliance Standards

Jan 23, 2026 • 42min
#566 Scaling Trust in Logistics: David Soileau on AI, Operations, and Leadership
In this episode of The CTO Show with Mehmet, Mehmet sits down with David Soileau, Co-Founder and CRO of Gophr, to explore how modern software, AI, and disciplined leadership are transforming industrial logistics.

David shares his journey from the Marine Corps to building a nationwide on-demand delivery platform. He explains how Gophr pivoted during COVID and natural disasters, rebuilt its business model around accountability, and scaled with minimal overhead.

The conversation dives deep into operational excellence, trust in B2B platforms, AI-powered logistics, and what it really takes to survive in a low-margin, high-pressure industry.

⸻

👤 About the Guest: David Soileau

David Soileau is the Co-Founder and Chief Revenue Officer of Gophr, an on-demand logistics platform serving industrial, pharmaceutical, and enterprise customers across the United States.

Before entrepreneurship, David spent 12 years in the U.S. Marine Corps and worked in industrial operations. His background in discipline, execution, and mission-driven leadership has shaped Gophr’s culture and growth strategy.

Today, he leads revenue, partnerships, and expansion efforts while helping enterprises modernize their delivery infrastructure.

⸻

🎯 Key Takeaways
• Why accountability and visibility are the foundation of trust in logistics
• How Gophr successfully pivoted during COVID and hurricanes
• The role of AI in vehicle selection, documentation, and compliance
• How to scale a logistics company with only five full-time staff
• Why low-margin industries demand technology-first thinking
• Lessons from military leadership applied to startup execution
• How to balance automation with human oversight

⸻

📚 What You’ll Learn
By listening to this episode, you’ll learn:
• How to design logistics platforms that enterprise buyers actually trust
• Why real-time tracking and digital documentation matter more than features
• How AI can reduce operational errors in physical infrastructure businesses
• How founders can grow under pressure without burning cash
• What operational excellence looks like in practice
• How to build resilience into your business model

⸻

⏱️ Episode Highlights & Timestamps
00:00 – Introduction and David’s background
02:10 – From marketplace to industrial logistics platform
04:30 – The hidden costs of unreliable delivery
07:20 – Building accountability through tracking and visibility
10:15 – Operational metrics that matter in logistics
13:40 – Scaling discipline and execution
16:30 – AI-powered features at Gophr
18:50 – Human-in-the-loop vs full automation
22:00 – Risk management during crises
24:40 – Margins and running lean in logistics
27:10 – Military leadership in startups
30:45 – Goal-setting and execution frameworks
34:20 – Common founder mistakes in operations-heavy businesses
36:50 – Gophr’s growth vision
39:00 – Final advice for entrepreneurs

⸻

🔗 Resources Mentioned
• Gophr Website: https://gophrapp.com/
• David Soileau on LinkedIn: https://www.linkedin.com/in/davidtheguy/

Jan 19, 2026 • 59min
#565 Startups Are Chains, Not Ropes: Lessons from 70+ Investments with Andrew Ackerman
In this episode, Mehmet sits down with Andrew Ackerman, two-time founder, early-stage investor with 70+ investments, accelerator leader, entrepreneurship professor, and author of The Entrepreneur’s Odyssey.

Andrew shares hard-earned insights from running accelerator programs, investing across decades, and coaching founders at their most fragile moments. The conversation dives deep into why startups fail, what truly separates winning founders, how coachability beats ego, and why storytelling is more powerful than advice.

They also explore how AI is reshaping entrepreneurship, why the bar for founders keeps rising, and why building faster is no longer a competitive advantage on its own.

⸻

👤 About the Guest

Andrew Ackerman is a seasoned entrepreneur, investor, educator, and author.
• Two-time startup founder
• Investor in 70+ early-stage companies
• Former accelerator leader (DreamIt)
• Entrepreneurship professor
• Author of The Entrepreneur’s Odyssey: A Novel Approach to Startup Success

Andrew has spent decades working at the intersection of founders, investors, and large enterprises, giving him a rare inside view of what actually makes startups succeed or fail.

http://www.linkedin.com/in/andrewbackerman

⸻

🧠 Key Takeaways
• Startups fail due to broken links, not a single bad idea
• Coachability matters more than confidence or experience
• The best founders hold strong opinions loosely
• Storytelling drives action better than direct advice
• AI lowers the cost of building, but raises the bar for funding
• First-mover advantage is weak without a real moat
• Empathy is the hidden superpower behind great founders, salespeople, and storytellers

⸻

🎓 What You’ll Learn
• Why startups should be viewed as chains, not ropes
• How accelerators compress the learning curve for investors and founders
• How to spot coachable founders early
• Why experimentation beats gut instinct
• How to test ideas cheaply before building
• Why many founders hide in their comfort zone instead of doing the hard work
• How AI changes the “why now” question for startups

⸻

⏱️ Episode Highlights & Timestamps
00:00 – Introduction and Andrew’s background
03:00 – Why running an accelerator changes how you see startups
07:00 – Angel investing vs accelerator investing
10:00 – Startups as chains, not ropes
12:30 – Why startups fail in different ways
14:30 – The one trait that separates great founders
18:00 – Coachability, ego, and founder decision-making
22:00 – Can entrepreneurship really be taught?
25:00 – The “looking for money under the streetlight” founder trap
28:00 – Why storytelling beats direct advice
32:00 – SeatGeek origin story and early validation lessons
36:00 – Empathy as a core founder skill
40:00 – AI, hype, and what’s actually changing for startups
45:00 – Why the investor bar keeps rising
50:00 – Final advice for founders and investors

⸻

📚 Resources Mentioned
• The Entrepreneur’s Odyssey by Andrew Ackerman: https://www.amazon.com/Entrepreneurs-Odyssey-Approach-Startup-Success/dp/1032883545/ref=tmm_pap_swatch_0
• Andrew’s website: https://www.andrewbackerman.com/

Jan 16, 2026 • 54min
#564 The Accessibility Advantage: Max Ivey on Why Inclusive Design Is a Competitive Edge
In this episode of The CTO Show with Mehmet, I’m joined by Max Ivey, known as The Blind Blogger and a leading voice in digital accessibility.

Max shares his remarkable journey from growing up in a family-run carnival business to becoming an accessibility advisor helping companies rethink how they design products, websites, and AI tools. We go deep into why accessibility is not a legal checkbox but a business, UX, and growth advantage, and why most modern AI tools are still failing real users.

This conversation is a masterclass for founders, product leaders, designers, and executives who want to build inclusive, scalable, and future-proof products.

⸻

👤 About the Guest

Max Ivey is an accessibility expert, entrepreneur, speaker, and host of The Accessibility Advantage podcast. Blind since birth, Max brings decades of lived experience navigating technology, entrepreneurship, and digital products without sight.

He advises startups and enterprises on building truly accessible and usable products, helping them move beyond fear-driven compliance toward inclusive design that benefits all users.

⸻

🧠 Key Takeaways
• Accessibility improves UX for everyone, not just people with disabilities
• WCAG compliance alone does not guarantee usability
• Many AI tools are unintentionally scaling inaccessibility
• Inclusive design builds brand loyalty, trust, and advocacy
• Small companies can outcompete big players by embracing accessibility early
• Designing with a keyboard-first mindset changes everything

⸻

📚 What You’ll Learn in This Episode
• Why accessibility should be treated as a competitive advantage
• How blind and disabled users actually navigate digital products
• The hidden accessibility debt in AI-generated content
• Practical principles for accessible and inclusive product design
• The real business case behind accessibility, beyond legal risk
• How founders can avoid common UX mistakes that cost revenue

⸻

⏱️ Episode Highlights & Timestamps
00:00 – Introduction and welcome
02:10 – Max Ivey’s journey from carnival business to accessibility advocate
06:40 – Why Max chose to be open about his disability online
10:30 – Teaching himself HTML to get online
14:50 – How early tech limitations shaped Max’s mindset
18:30 – Why many AI tools are still inaccessible
23:10 – The danger of scaling inaccessible AI content
27:40 – Why WCAG compliance is not enough
31:20 – Keyboard-first navigation and real-world usability
36:10 – Minimalist design and why complexity breaks accessibility
41:30 – Accessibility, trust, and customer loyalty
45:20 – The $21 trillion accessibility market opportunity
49:40 – Accessibility as a growth and branding strategy
54:10 – Perseverance vs stubbornness in entrepreneurship
58:30 – Advice for founders facing adversity
01:03:10 – Where to find and connect with Max
01:05:00 – Final thoughts and closing

⸻

🔗 Resources Mentioned
• Max Ivey’s website: theaccessibilityadvantage.com
• Max Ivey on LinkedIn: https://www.linkedin.com/in/maxwellivey/
• The Accessibility Advantage podcast: https://podcasts.apple.com/us/podcast/the-accessibility-advantage/id1740242884?uo=4

Jan 12, 2026 • 40min
#563 From Geology to AI: Ahmad Saleem on Building the Google for Podcasts
In this episode of The CTO Show with Mehmet, I’m joined by Ahmad Saleem, Founder and CEO of Podyssey.

Ahmad’s journey is anything but linear. From working as a geologist in mining and natural resources to earning a PhD in economics, moving into private equity, and eventually founding an AI startup, his path reflects deep curiosity, resilience, and systems thinking.

We dive into why podcast discovery is fundamentally broken, how AI and natural language processing can unlock the real value hidden inside long-form audio, and what it takes to build and scale a product in an uncertain, fast-moving market.

This conversation blends founder storytelling, product strategy, and honest reflections on failure, team building, and the future of content discovery.

⸻

👤 About the Guest

Ahmad Saleem is the Founder and CEO of Podyssey, an AI-powered search engine designed to help people discover specific insights within podcasts rather than just episodes.

With a background spanning geology, economics, private equity, and natural language processing, Ahmad has been involved in nearly 20 ventures across his career. His work today focuses on applying AI to large-scale content discovery problems, particularly in long-form audio and multilingual environments.

https://www.linkedin.com/in/ahmad-saleem-ansari/

⸻

🧠 Key Takeaways
• Why podcast discovery is harder than ever despite the explosion of content
• How AI and NLP enable searching inside conversations, not just titles
• The difference between finding podcasts and finding relevant moments
• Why categories and genres no longer work for modern podcast discovery
• Lessons learned from nearly 20 ventures and how failure reshapes founders
• How to build lean, distributed teams that move fast without sacrificing clarity
• Where AI agents fit and do not fit in long-form content consumption

⸻

🎯 What You’ll Learn
• How Podyssey is rethinking podcast discovery at a global scale
• Why transcripts alone are not enough to understand context
• How founders should think about feature creep after product-market fit
• What changes when you build AI products in a remote, asynchronous world
• How experience with failure changes decision-making and leadership

⸻

⏱ Episode Highlights & Timestamps
• 00:00 Introduction and Ahmad’s background
• 02:00 From geology and mining to economics and startups
• 05:00 Why podcast discovery is broken
• 09:00 AI, accents, transcription, and context challenges
• 12:00 From episodes to snippets: rethinking podcast consumption
• 15:00 Podyssey’s business model and monetization paths
• 17:00 Avoiding feature overload after product-market fit
• 20:00 Building lean, remote AI teams
• 25:00 AI agents and the future of long-form content
• 27:00 Failure, resilience, and restarting as a founder
• 33:00 The global future of podcasting and multilingual discovery

⸻

🔗 Resources Mentioned
• Podyssey platform: https://www.podyssey.com/

Jan 9, 2026 • 48min
#562 Agentic AI Is Not an Intern: Craig McLuckie on Control, Context, and Enterprise Reality
Agentic AI is moving faster than enterprise readiness. Boards are pushing adoption. Teams are deploying agents at speed. But security, control, and operational discipline are lagging behind.

In this episode, Mehmet sits down with Craig McLuckie, the co-creator of Kubernetes and founder of Stacklok, to unpack why most agentic AI initiatives break after the demo and what enterprises must do differently to make them durable, secure, and production-ready.

From MCP and context engineering to eval-driven development and why AI agents should never be treated like interns, this conversation goes deep into the realities CTOs, VPs of Engineering, and security leaders are facing right now.

This is not a hype conversation. It’s an operator’s reality check for 2026.

⸻

👤 About the Guest

Craig McLuckie is a foundational figure in modern cloud infrastructure. He is the co-creator of Kubernetes, founder of the Cloud Native Computing Foundation, and former VMware executive behind the Tanzu portfolio.

Today, Craig is the founder and CEO of Stacklok, where he is focused on helping enterprises securely connect agentic AI systems to real-world infrastructure through open, controlled, and auditable platforms.

https://www.linkedin.com/in/craigmcluckie/

⸻

🧠 Key Takeaways
• Why agentic AI represents a true epoch shift, not just another tooling cycle
• The real difference between demos, POCs, and production AI systems
• Why MCP is powerful but dangerous without proper control layers
• How context engineering is becoming more important than writing code
• Why eval-driven development replaces test-driven development in AI systems
• How enterprises should think about permissions, scope, and agent autonomy
• Why most AI failures are workflow problems, not model problems
• What 2026 realistically looks like for agentic AI adoption in the enterprise

⸻

🎯 What You’ll Learn
• How to operationalize agentic AI without exposing your infrastructure
• Why treating AI agents like humans is a security mistake
• How to design guardrails without slowing teams down
• Where CTOs should focus investment to move from hype to ROI
• How leadership metrics and engineering evaluation must evolve in the AI era

⸻

⏱ Episode Highlights & Timestamps
• 00:00 – Introduction and Craig’s journey from Google to Kubernetes
• 03:10 – Why agentic AI feels like a historic inflection point
• 06:05 – MCP explained and where enterprises get it wrong
• 10:45 – The security risks nobody is talking about
• 14:20 – Why AI agents should never be treated like interns
• 18:30 – The danger of permission sprawl and tool pollution
• 23:10 – Why most AI initiatives fail after the demo
• 28:40 – Eval-driven development vs traditional software thinking
• 34:15 – Context engineering as the new leverage point
• 38:50 – How engineering leadership and metrics must change
• 43:30 – What realistic agent adoption looks like in 2026
• 46:20 – Open source, ToolHive, and building durable AI platforms

⸻

🔗 Resources Mentioned
• Stacklok: http://stacklok.com/
• ToolHive (Open Source MCP Platform): https://stacklok.com/toolhive/

Jan 5, 2026 • 50min
#561 Fall in Love With the Problem, Not the Product: Ghazenfer Mansoor on Why Startups Fail
In this episode, Mehmet sits down with Ghazenfer Mansoor, Founder and CEO of Technology Rivers, to unpack why so many software products fail quietly and what actually separates ideas that ship and scale from those that die early.

Drawing on two decades of experience and over 60 shipped applications, Ghazenfer shares hard-earned lessons on customer discovery, feature bloat, technical debt, AI with real ROI, and building system-powered businesses that scale sustainably, especially in regulated industries like healthcare.

This is a practical, no-fluff conversation for founders, CTOs, and operators building real products in a noisy AI-driven world.

⸻

👤 About the Guest

Ghazenfer Mansoor is the Founder and CEO of Technology Rivers, a custom software development company with deep expertise in healthcare, HIPAA-compliant systems, and AI-driven operational automation.

He began his career as an early startup engineer, entered mobile development in its earliest days, and has since helped build and scale dozens of products. Ghazenfer is also the author of the upcoming book Beyond the Download, focused on building mobile apps people actually love and use.

https://www.linkedin.com/in/gmansoor/

⸻

🧠 Key Takeaways
• Why most startups fail by building solutions before validating problems
• How feature bloat quietly destroys velocity, quality, and scalability
• The hidden cost of technical debt and why postponing it always backfires
• Why AI tools fail without clean data and mapped workflows
• How regulated industries can innovate without breaking compliance
• The shift from people-powered growth to system-powered growth
• Why founders should think like acquirers from day one

⸻

🎯 What You’ll Learn
• How to identify the real problem worth solving before writing code
• How to prioritize features without killing your product roadmap
• Where AI delivers real ROI versus where it’s just pitch-deck noise
• How to design internal systems that create defensibility and valuation
• Why compliance and innovation are not opposites
• How to build products that users return to, not just download

⸻

⏱️ Episode Highlights & Timestamps
• 00:02 Ghazenfer’s journey from early mobile engineering to healthcare software
• 05:10 Why most startup ideas fail before reaching scale
• 08:00 Feature race vs focus and why more features hurt products
• 10:15 Technical debt explained in simple, practical terms
• 14:00 AI in practice vs AI in pitch decks
• 17:30 Why workflows matter more than tools
• 19:45 Innovating in healthcare without breaking HIPAA
• 23:00 RAG, hallucinations, and building safe AI systems
• 26:45 Beyond the Download and building retention-first products
• 35:30 Moving from people power to system power growth
• 41:00 Thinking like an acquirer from day one
• 46:00 Final advice on AI, innovation, and staying relevant

⸻

📚 Resources Mentioned
• Technology Rivers: https://technologyrivers.com/
• Beyond the Download by Ghazenfer Mansoor: https://technologyrivers.com/l/beyond-the-download/
• HIPAA compliance principles
• Retrieval-Augmented Generation (RAG) architectures
• AI tools including Claude, ChatGPT, and Gemini

Jan 2, 2026 • 50min
#560 Why DevOps Alone Is No Longer Enough: Michael Ferranti on FeatureOps and Reliability
In this episode of The CTO Show with Mehmet, Mehmet sits down with Michael Ferranti, a seasoned tech executive and product leader at Unleash, to explore why DevOps alone can no longer meet the reliability, speed, and risk demands of modern software systems.

From real-world outages at Google and Cloudflare to the rise of AI-driven delivery, this conversation introduces FeatureOps as the missing control plane that allows teams to move faster without breaking production.

⸻

👤 About the Guest

Michael Ferranti is a tech executive with over a decade of experience across DevOps tooling, infrastructure software, open source, and enterprise platforms. He has played key roles in scaling developer-focused technologies and advises organizations on balancing innovation, reliability, and governance at scale. Today, he focuses on FeatureOps as a foundational capability for modern engineering teams.

⸻

🧠 Key Takeaways
• DevOps optimizes deployment, but FeatureOps governs runtime behavior
• Many large-scale outages are caused by “big bang” releases without kill switches
• Feature flags are not just for UI experiments, they are safety mechanisms
• FeatureOps enables faster shipping and lower risk at the same time
• AI-driven engineering increases the need for runtime control, not reduces it

⸻

🎯 What You’ll Learn
• Why DevOps alone breaks down at scale
• How FeatureOps differs from traditional feature flagging
• Lessons from Google and Cloudflare outages
• When open source helps and when it complicates GTM
• How AI changes release management and reliability decisions
• Why human-in-the-loop control still matters in autonomous systems

⸻

⏱️ Episode Highlights & Timestamps
• 00:02 – Michael’s journey from early cloud evangelism to FeatureOps
• 04:00 – Scaling Portworx and why technology alone is not enough
• 07:30 – Open source as a GTM strategy, myths and realities
• 15:00 – Kubernetes, scale assumptions, and overengineering traps
• 21:30 – What FeatureOps actually is and why it matters
• 24:30 – Google outage case study and the cost of big bang releases
• 27:30 – Cloudflare, kill switches, and runtime control
• 31:00 – FeatureOps vs DevOps explained clearly
• 35:00 – AI in release decisions and risk management
• 43:00 – Human-in-the-loop engineering and future architectures

⸻

🔗 Resources Mentioned
• Unleash Feature Management Platform: https://www.getunleash.io/
• Google SRE Handbook
• DORA Reports on High-Performing Engineering Teams
• ThoughtWorks Feature Management Practices

⸻

🔗 Connect with the Guest
• Michael Ferranti on LinkedIn: https://www.linkedin.com/in/ferrantim/

Dec 29, 2025 • 46min
#559 AI Without the Black Box: Nat Natarajan on Building Trust at Global Scale
In this episode, Mehmet Gonullu sits down with Nat Natarajan, Chief Operating Officer and Chief Product Officer at Globalization Partners, to explore what it really takes to deploy AI in highly regulated environments.

From labor laws and compliance across dozens of countries to human-in-the-loop AI systems, Nat shares how Globalization Partners built explainable, trustworthy AI that enterprises can actually rely on. This is a grounded, operator-level conversation on moving beyond AI hype toward real productivity and trust.

⸻

👤 About the Guest

Nat Natarajan is the Chief Operating Officer and Chief Product Officer at Globalization Partners, a pioneer in global employment solutions. He previously held senior leadership roles at companies including TurboTax (acquired by Intuit), PayPal, RingCentral, Ancestry.com, and Travelocity. Nat brings decades of experience at the intersection of technology, regulation, and large-scale enterprise systems.

https://www.linkedin.com/in/natrajeshnatarajan/

⸻

🧠 Key Takeaways
• Why black-box AI fails in regulated industries
• How human-in-the-loop design builds trust and adoption
• The role of proprietary, vetted data in enterprise AI
• Where general-purpose LLMs fall short for compliance-heavy use cases
• Why AI should augment humans, not replace them
• How CHROs and boards are rethinking AI as a “digital workforce”

⸻

🎯 What You’ll Learn
• How to design AI systems that can explain their decisions
• When to keep humans in the loop and when automation works best
• How enterprises can deploy AI responsibly without slowing innovation
• What makes AI adoption succeed inside large, global organizations
• Why regulated complexity is an advantage, not a blocker, for AI

⸻

⏱️ Episode Highlights & Timestamps
• 00:00 – Introduction and Nat’s background
• 02:00 – Why regulated environments are ideal for AI, not hostile to it
• 05:00 – Lessons from TurboTax and encoding legal reasoning into systems
• 08:00 – Designing AI that avoids the black-box problem
• 12:00 – Human-in-the-loop systems and guardrails
• 16:00 – Why proprietary data beats generic models
• 19:00 – Enterprise vs startup AI adoption dynamics
• 23:00 – AI as a collaborator inside HR teams
• 27:00 – Explainability, trust, and employee-facing AI
• 32:00 – The CHRO’s role in an AI-powered workforce
• 36:00 – From hype to real productivity with agentic AI
• 40:00 – Final thoughts and advice for leaders adopting AI

⸻

📚 Resources Mentioned
• Globalization Partners: https://www.globalization-partners.com/
• GIA: http://www.g-p.com/gia
• Prediction Machines (Updated & Expanded Edition) – referenced by Mehmet

Dec 25, 2025 • 41min
#558 AI Is Easy to Build, Hard to Deploy: Data, Evaluation, and ROI with Bryan Wood
AI models are becoming commoditized, but deploying AI systems that deliver real ROI remains hard. In this episode, Mehmet sits down with Bryan Wood, Principal Architect at Snorkel AI, to unpack why data-centric AI, evaluation, and domain expertise are now the true differentiators.

Bryan shares lessons from working with frontier AI labs and highly regulated enterprises, explains why most AI projects stall before production, and breaks down what it actually takes to deploy AI safely and at scale.

⸻

👤 About the Guest

Bryan Wood is a Principal Architect at Snorkel AI, where he works closely with frontier AI labs and enterprises to design high-quality, AI-ready datasets and evaluation frameworks.

He brings over 20 years of experience in financial services, with a unique background spanning banking, engineering, and fine art. Bryan specializes in data-centric AI, programmatic labeling, AI evaluation, and deploying AI systems in high-compliance environments.

https://www.linkedin.com/in/bryanmwood/

⸻

🧠 Key Takeaways
• Why AI success is less about models and more about data and evaluation
• How enterprises misunderstand ROI and why most projects stall before production
• The difference between benchmark performance and real-world trust
• Why evaluation must be bespoke, not off-the-shelf
• How frontier labs approach data as true R&D
• Why partnering beats building AI entirely in-house today
• What’s realistic (and unrealistic) about autonomous agents in the near term

⸻

🎯 What You’ll Learn
• How to move from AI experimentation to production deployment
• How to design data that reflects real enterprise workflows
• How to identify where AI systems actually fail, and why
• Why regulated industries are proving grounds, not laggards
• How startups can overcome data and talent constraints
• Where AI is heading beyond today’s LLM plateau

⸻

⏱️ Episode Highlights & Timestamps
00:00 – Introduction & Bryan’s background
02:30 – Why data is now the real AI bottleneck
05:00 – Models are commoditized. So what actually matters?
07:45 – Why AI evaluation is harder than building AI
11:30 – Enterprise misconceptions about AI readiness
15:10 – Hallucinations, RAG failures, and finding the real problem
18:40 – Why most AI projects fail to show ROI
22:30 – Partnering vs building AI in-house
26:00 – AI in regulated industries: myth vs reality
30:10 – Startups, cold start problems, and data moats
33:40 – Scaling data operations with small teams
36:00 – What’s next: agents, data complexity, and AI timelines
39:00 – Final thoughts and where AI is really heading

⸻

📌 Resources Mentioned
• Snorkel AI – Data-centric AI and programmatic labeling: https://snorkel.ai/
• Enterprise AI evaluation frameworks
• Frontier AI lab research practices
• MIT studies on AI ROI and enterprise adoption


