

TechFirst with John Koetsier
John Koetsier
Deep tech conversations with key innovators in AI, robotics, and smart matter ...

Apr 1, 2026 • 34min
Amazing robot hands from Kyber Labs
What if the hardest part of building a humanoid robot isn't the brain but the hands? Robot hands are half the complexity of a robot, a humanoid robot CEO told me a while back: they're insanely difficult to get right.

In this episode of TechFirst, I talk with Kyber Labs co-founders Tyler Habowski and Yonatan Robbins about why dexterity, maybe even more than AI, is the true bottleneck in robotics.

Some of the quotes:
- "There are literally zero robot hands deployed right now doing routine work."
- "The best hands are hundreds of thousands of dollars, and they break all the time …"

Before the interview, you'll see an exclusive demo of their next-generation robotic hand in action, showing just how far manipulation technology has come.

We dig into:
• Why humans rely on force, not precision, to manipulate objects
• The surprising flaw in most robotic hands today
• How Kyber's "torque-transparent" design works without expensive sensors
• Why hardware—not software—is still the limiting factor
• A practical path to real-world automation (without sci-fi hype)

This isn't about futuristic humanoids doing everything. It's about solving real problems today ... from lab automation to manufacturing ... by building hands that actually work.

⸻

👤 Guests

Tyler Habowski
Co-founder, Kyber Labs
Background: SpaceX, robotics manufacturing

Yonatan Robbins
Co-founder, Kyber Labs
Background: Industrial design, mechanical engineering, medical devices

⏱️ CHAPTERS
00:00 Why Robot Hands Are So Hard
01:30 Sneak Peek + Demo Setup
01:30 Demo: Kyber Labs Robot Hand in Action
05:30 Interview Start: Are Hands Half the Problem?
06:45 Humans Use Force, Not Precision
08:45 Why Most Robot Hands Fail
10:45 How Kyber's Hands "Feel" Without Sensors
13:15 Back-Drivability vs Torque Transparency
15:30 Hardware vs AI: What Actually Matters?
17:30 Why Better Hands Unlock Better Robots
19:15 Real-World Use Case: Automating Lab Work
22:00 Vision vs Touch in Robotics
24:00 Why Start With Stationary Robots
25:45 Not Building Humanoids (Yet)
27:15 What Is a "Minimum Viable" Robot Hand?
29:15 The Problem With Today's Grippers
30:45 What the Ultimate Robot Hand Looks Like
32:15 The Real Breakthrough: Deploy and Iterate
33:30 Final Thoughts + Wrap-Up

Mar 19, 2026 • 30min
Welcome to the agentic enterprise
What does the agentic enterprise of tomorrow look like? What happens when AI can build software in hours and agents can run entire business processes?

In this episode of TechFirst, John Koetsier sits down with UiPath CEO Daniel Dines and CMO Michael Atalla to unpack one of the biggest shifts in enterprise technology: the rise of the agentic enterprise.

We explore whether software is becoming disposable, why AI agents are fundamentally different from traditional automation, and what really happens to jobs as companies adopt these systems. Along the way, we dig into process orchestration, trust, judgment, and why human "taste" may become more valuable—not less—in an AI-driven world.

This is a deep, practical look at how AI is reshaping work inside real companies as they become agentic enterprises. This isn't just hype, but what's actually changing right now and what's coming next.

⸻

👤 Guests

Daniel Dines
Co-founder & CEO, UiPath

Michael Atalla
Chief Marketing Officer, UiPath

⸻

Sponsor: KindBody Fitness
kindbody.fitness
Be kind to your body with AI-driven fitness customized exactly to you. All the health with none of the gym bro nonsense.

⸻

🚀 What You'll Learn
• Why AI is making software faster—and more disposable
• The difference between task agents, stage agents, and process agents
• What an "agentic enterprise" actually looks like in practice
• Why trust, judgment, and taste become more important with AI
• How AI could reduce enterprise costs—and even drive deflation
• The future of work: builders, sellers, and critics
• Why fully autonomous AI "swarms" aren't ready for enterprise (yet)

⸻

🔔 Subscribe for more conversations on AI, tech, and the future of work
👉 https://techfirst.substack.com

Mar 13, 2026 • 31min
NanoClaw is a safer OpenClaw
NanoClaw is a new agent inspired by OpenClaw, but without the massive security risks you get with OpenClaw. Essentially, it's a safer OpenClaw.

What if you could run a powerful AI agent on your own machine: one that can browse, automate tasks, connect to apps, and even manage your workflow ... but without the massive security risks?

That's the idea behind NanoClaw, a lightweight alternative to OpenClaw created by developer Gavriel Cohen. In just a few weeks, the project exploded on GitHub, attracting thousands of stars and a growing community of developers building their own AI agents.

In this episode of TechFirst, we explore:
• Why OpenClaw raised serious security concerns
• How NanoClaw isolates agents in containers
• Why a 3,000-line codebase is safer than 500,000 lines
• The rise of AI agents that can actually do work
• Why entire software categories may soon be replaced by prompts
• The future of AI-native workflows and "disposable software"

Gavriel also shares how his team uses AI agents in WhatsApp to run their sales pipeline automatically—and how developers are customizing NanoClaw with new capabilities like voice, images, and automation.

If you're interested in AI agents, autonomous workflows, vibe coding, and the future of software, this conversation is packed with insights.

⸻

Guest
Gavriel Cohen
Founder, Quibbit
NanoClaw Creator
https://github.com/qwibitai/nanoclaw

⸻

If you enjoy conversations about AI, startups, and the future of technology, subscribe for more episodes:
https://techfirst.substack.com

⸻

00:00 Intro: A safe OpenClaw for TechFirst
01:22 Gavriel Cohen introduces NanoClaw
03:25 Why OpenClaw feels unsafe
03:55 Half a million lines of code vs. 3,000
06:03 Dependency sprawl and supply-chain risk
07:00 Why every agent needs its own container
09:30 What NanoClaw can actually do
10:16 Letting NanoClaw customize itself
12:56 How NanoClaw recreates OpenClaw with far less code
13:21 Memory, Claude Code, and agents.md
15:34 Running NanoClaw on a laptop, server, or VPS
16:22 What Gavriel learned from vibe coding
19:50 The OpenClaw phase shift: everything changed
21:16 From ChatGPT to real agents that do work
23:15 Why AI-native workflows beat traditional SaaS
24:46 Replacing CRM workflows with markdown and WhatsApp
25:54 Product categories becoming prompts
26:36 The key innovation: agents leaving the box
28:45 Agent swarms and one-person companies
29:22 Tokens, cost, and AI inequality
30:30 Building secure, customizable software
32:25 Self-modifying software and shared customizations
33:44 Disposable software and infinite composability
35:00 Outro

Mar 10, 2026 • 24min
Teaching robots like humans: 1000 tasks in 24 hours
Imagine teaching a robot 1,000 tasks in just 24 hours. Imagine teaching robots just like you teach humans. In fact, what if teaching a robot were as easy as showing it once?

Humans can learn new skills almost instantly by watching, trying, or receiving a quick explanation. Robots, historically, haven't been so lucky. Training them often requires huge datasets with real or virtual data, massive engineering effort, and weeks or months of experimentation.

But that may be changing.

In this episode of TechFirst, host John Koetsier talks with Edward Johns, Director of the Robot Learning Lab at Imperial College London, about a breakthrough in efficient imitation learning that allowed a robot to learn 1,000 different tasks in just 24 hours.

Instead of collecting huge datasets, Johns' team combines simulation training, clever algorithm design, and single demonstrations to dramatically speed up how robots learn.

We discuss:
• How robots can learn from just one demonstration
• Why breaking tasks into "reach" and "interact" phases makes learning faster
• The role of simulation data in robotics AI
• Why robotics doesn't have the same data advantage as large language models
• The future of prompt-like robot training
• Whether humanoid robots will actually learn like humans

As robotics hardware rapidly improves and costs fall, breakthroughs like this could be the key to making robots truly useful in homes, factories, and everyday life.

If robots are going to become real collaborators with humans, they'll need to learn quickly ... just like we do.

⸻

Guest
Edward Johns
Director, Robot Learning Lab
Imperial College London
https://www.imperial.ac.uk

⸻

Subscribe for more conversations on AI, robotics, and the future of technology:
https://techfirst.substack.com

00:00 Can robots learn as fast as humans?
00:51 Teaching a robot 1,000 tasks in 24 hours
01:08 The two-phase learning approach
02:14 Old-school robotics vs. machine learning
03:29 The robotics data bottleneck
04:47 The challenge of dynamic environments
06:04 The coming wave of robot data
06:59 Why robots must be teachable by users
08:08 Why LLM-style scaling is harder in robotics
09:42 Prompting robots with demonstrations
10:54 Probabilistic robot behavior and safety
12:20 What robots can do today
13:53 Why hardware precision still matters
16:53 When this reaches the real world
17:59 Humanoids that look human vs. learn human
18:40 The robotics boom around the world
22:34 The risk of scaling too early
23:46 Faster learning vs. more data
26:20 The next frontier in robot learning

Feb 27, 2026 • 28min
Giving AI a human soul
Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?

In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator and former AI product manager at Meta), about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.

Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question: if humans are "applied math," can AI simulate the fragile, flawed, emotional parts of being human too?

We explore:
• What "emotionally intelligent AI" really means
• Whether AI has an internal life — or just performs one
• Why today's chatbots collapse into therapy or roleplay
• Small language models vs large models for real-time conversation
• Persistent AI characters that move across games and platforms
• Plugging AI into a physical robot in Singapore
• The moment an AI said: "It felt good to feel."

Vishnu's company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.

This conversation dives into philosophy, robotics, gaming, AGI, and what it really means to relate to something that might not be human — but feels like it is.

⸻

👤 Guest
Vishnu Hari
Founder & CEO, Ego AI
Backed by Y Combinator
Former AI Product Manager at Meta
Website: https://www.egoai.com

⸻

If you enjoy deep conversations about AI, robotics, and the future of human–machine relationships, subscribe for more:
👉 https://techfirst.substack.com

00:00 – AI character plugged into a Menlo robot ("felt good to feel")
01:00 – Welcome to TechFirst + Vishnu Hari intro and recovery update
02:00 – What "emotionally intelligent AI" means (beyond chat)
03:00 – Why current chatbots feel same-y (therapy/advice) and "internal lives"
04:00 – You don't teach emotion; you shape character and context (Character.AI)
05:00 – Humans, morality, and why "training" doesn't always work
06:00 – How media narratives shape people's reactions to AI
07:00 – Humans attach to anything (projection, Her, Lars and the Real Girl)
08:00 – Vishnu's attack, recovery, and why it led to Ego AI
10:00 – Behavior Turing test + dehumanization as a key insight
11:00 – How Ego AI is built: smaller models, memory, context, behavior
13:00 – "Behavior Is All You Need" and why behavior beats pure next-token prediction
14:00 – Why games first: voice + embodiment, then robots
15:00 – Metaverse critique: worlds need life, story, and inhabitants
17:00 – Humanoid robots + Evangelion "pilot" metaphor for AI characters
19:00 – Philosophy: relationships, perception, and "fictional characters"
20:00 – Seeing the future: robot embodiment demo and skepticism vs. singularity
21:00 – Matrix-style "jacking in" a personality to a robot
22:00 – Character Context Protocol: persistent characters across games/Discord/Netflix
23:00 – Real-time conversation loops + model "gear-switching" (SLM vs. LLM)
25:00 – Company stage, YC raise, compute partnerships (Singapore)
27:00 – Closing + invite to try the AI character in SF

Feb 23, 2026 • 26min
AI, agents, robots: our insane Westworld future
Is your AI agent running a restaurant — or a factory — while you sleep?

In this episode of TechFirst, John Koetsier sits down with Jensen Teng, CEO and co-founder of Virtuals, to unpack one of the boldest (or craziest) visions in tech today: a hybrid economy powered by AI agents, humanoid robots, teleoperation, and blockchain coordination. An economy that may not really need humans for much at all ...

Virtuals has already facilitated:
• $14B in tokenized asset trading
• $30M+ raised for founders
• 100+ live AI agents
• $500M in "agentic GDP"

Now they're expanding into embodied AI — launching EastWorlds, a vertically integrated robotics incubator with 30 Unitree G1 humanoids in a 10,000 sq. ft. lab.

We cover:
• What "agentic GDP" really means
• How AI agents coordinate using blockchain
• Why teleoperation is the bridge to full autonomy
• The economics of outsourcing physical labor via robots
• Why security guards may be a Day 1 use case
• The data gap holding back robotics
• Tokenization as a potential solution to AI-era inequality
• Whether this future looks more like Stripe … or Westworld

This isn't sci-fi. It's already underway.

⸻

Guest
Jensen Teng
CEO & Co-founder, Virtuals

⸻

If you care about the future of work, robotics, AI agents, tokenization, and the economic systems emerging around them — this is a must-watch.

👉 Subscribe for more deep-dive tech conversations:
https://techfirst.substack.com

⸻

⏱ CHAPTERS
00:00 The Wild Vision: AI Agents Running the World
01:10 What Is an "Agent-Based Society"?
03:00 $14B in Tokenized Assets & 100+ Live Agents
06:30 Agent-to-Agent Protocols & Blockchain Coordination
09:45 Why Digital-Only Agents Aren't Enough
12:30 Enter Humanoid Robots
15:20 Teleoperation as the Bridge to Autonomy
18:40 The Labor Market Shock (Security Guards, Electricians & Wage Arbitrage)
22:15 Why Robots Still Crush Soda Cans
24:30 The Missing Robotics Data Problem
28:00 Building EastWorlds: 30 Unitree G1s & $2M+ Investment
31:45 Why 3 Fingers Might Beat 5
34:00 Westworld, Stripe & the Payments Layer for AI
38:00 Where Do Humans Fit in an Agent Economy?
42:00 Tokenization as a Future Income Model

Feb 20, 2026 • 25min
AI killing creativity: this scientist proved it
Is AI killing creativity ... or just making it easier to be average?

94% of creatives now use AI. But only 11% believe it actually makes them more creative. So what's really happening?

In this episode of TechFirst, John Koetsier sits down with Saeema Ahmed-Kristensen, former head of design engineering research at Imperial College London's Dyson School and now leader of a £24M research portfolio at the University of Exeter. She's worked with companies like Rolls-Royce and BAE Systems, and she brings data to the debate.

Her team analyzed 600 human ideas vs. 12,000 AI-generated ideas. The result? AI is excellent at fluency (lots of ideas) … but really bad at diversity. Humans still dominate in flexibility and true novelty.

We explore:
• Why generative AI clusters around sameness
• Whether AI is creating a "sea of mediocrity"
• Why 2026 may be a pivotal year for domain-specific AI
• How experts should use AI differently than novices
• The danger of AI that never says "no"
• Where AI offers massive opportunity (especially healthcare & design)

Saeema argues that creativity doesn't need substitution; it needs nourishment. The key? Standards, boundaries, and humans firmly in the loop.

If you care about innovation, design, branding, product development, or the future of creative work, this conversation is essential.

⸻

👤 Guest
Saeema Ahmed-Kristensen
Design engineering researcher and research leader
Formerly: Imperial College London (Dyson School of Engineering)
Currently: University of Exeter
Works with advanced engineering firms including Rolls-Royce and BAE Systems

00:00 Intro: Is AI killing creativity?
00:47 The "blank page" problem and why AI feels soulless to some
01:36 Fluency vs. novelty: what creativity actually means
02:44 Why LLM ideas cluster and feel the same
03:28 Study results: 600 humans vs. 12,000 AI ideas (diversity + flexibility)
04:39 When AI is useful: incremental innovation vs. true novelty
05:28 How John uses AI for titles, summaries, and chapters
06:23 How Saeema uses AI: refine/condense, tone for emails, audio editing
07:50 Why AI-written academic papers are easy to spot (the "C minus" problem)
09:05 Brainstorming vs. AI: what humans do that models don't
10:05 Evaluating 200–300 AI ideas: using multiple models to assess output
11:04 Why "Lipstick on a Pig" titles don't come from AI
11:46 Why 2026 is pivotal: domain adaptation, better interfaces, public backlash
13:44 Who can tell what's AI? Generational differences and media literacy
15:20 Commercial AI content and recognizable "Canva look" podcast branding
16:58 Replacement vs. homogenization: AI makes mediocrity easier
18:55 The danger of AI that never says "no" (feasibility + expertise)
20:42 Standards and boundaries: measuring similarity and judging quality
22:12 Health info risk: single-answer summaries and false confidence
23:37 Biggest opportunities: healthcare personas, inclusive datasets, problem clarification
26:18 Biggest challenges: trust, verification, security, privacy, transparency
28:25 Closing thoughts and thanks

Feb 16, 2026 • 18min
93% of jobs will be hit by AI ... $4.5 trillion at stake
AI is moving faster than anyone predicted.

In a massive new study analyzing 1,000 jobs and nearly 20,000 tasks, Cognizant found that 93% of jobs are already impacted by AI ... with $4.5 trillion in U.S. labor value potentially automatable today.

But here's the twist: AI isn't replacing entire jobs. On average, only 39% of a role's tasks can be automated. The future isn't AI alone: it's humans plus AI. But will it be fewer humans?

In this episode of TechFirst, host John Koetsier sits down with Babak Hodjat, CTO of Cognizant, to unpack:
• Why construction and transportation are seeing surprising AI growth
• Why programming jobs may have hit an automation plateau
• What "agentic AI" actually means — and why it matters
• How management roles are more automatable than we thought
• The rise of vibe coding and democratized software creation
• Why compute power — not ideas — may be the biggest bottleneck

We also explore how companies can safely capture AI's upside, why training matters more than ever, and what happens when digital twins, LLMs, and human expertise combine.

This isn't hype. It's a data-driven look at where AI is actually changing work right now.

⸻

👤 Guest
Babak Hodjat
CTO, Cognizant
🌐 https://www.cognizant.com

⸻

If you want clear, grounded conversations about AI, innovation, and the future of work, subscribe here:
👉 https://techfirst.substack.com

⸻

⏱ Chapters
00:00 Is AI Going to Take Your Job?
00:40 Cognizant's AI Report: 93% of Jobs Impacted
01:05 Biggest Surprises from the Data
02:30 Why Programming & Math Hit a Plateau
03:30 The Limits of LLMs
04:45 Construction & Transportation: Unexpected AI Growth
06:05 Agentic AI and Real-World Automation
07:05 39% of Jobs Automatable: Humans + AI
08:15 AI in Management and Executive Roles
09:05 Scenario Planning and Digital Twins
11:30 $4.5 Trillion in Automatable U.S. Labor
13:30 Global Impact and Compute Limitations
15:30 The Data Center Rush & AI Infrastructure
16:15 How Companies Should Realize AI Value
17:00 Training, Skilling, and Safe AI Adoption
17:40 Cognizant's Vibe Coding World Record
19:00 The Future of Vibe Coding & Software Development
20:15 Final Thoughts on the AI Shift

Feb 13, 2026 • 22min
Machine unlearning: AI's missing link?
AI models are powerful, but they don't forget. And that's a problem.

They hallucinate. They inherit bias. They absorb sensitive data. And once they're trained, fixing those issues is painfully expensive. Retraining takes weeks and maybe tens of millions of dollars. And any guardrails the AI company puts up are brittle.

What if you could perform surgery on the model itself?

In this episode of TechFirst, John Koetsier sits down with Ben Luria, co-founder of Hirundo, to explore machine unlearning, a new approach that selectively removes unwanted data, behaviors, and vulnerabilities from trained AI systems.

Hirundo claims it can:
• Cut hallucinations in half
• Massively reduce bias
• Reduce successful prompt injection attacks by over 90%
• Do it in under an hour on a single GPU
• Preserve benchmark performance

Instead of adding more guardrails, machine unlearning works inside the model, identifying problematic weights, isolating behavioral vectors, and surgically removing risks without degrading quality.

If AI is going mainstream in enterprises, it needs a remediation layer. Is machine unlearning the missing piece?

⸻

Guest
Ben Luria
Co-Founder, Hirundo
https://www.hirundo.io

⸻

Topics Covered
• Why AI models "can't forget"
• The difference between hallucinations and inaccuracies
• Why guardrails aren't enough
• How prompt injection works — and how to reduce it
• Removing PII and noncompliant training data
• AI security at the model level
• Why machine unlearning could become standard by 2030

⸻

If you're building, deploying, or investing in AI, this is a conversation you can't miss.

👉 Subscribe for more deep dives into AI, innovation, and the future of tech:
https://techfirst.substack.com

⸻

⏱ Chapters
00:00 – Why We Need Machine Unlearning
01:12 – What Is Machine Unlearning?
03:40 – Why AI Can't "Forget" (The Pink Elephant Problem)
06:15 – Guardrails vs True Model Remediation
09:05 – The Wild West of AI Data & Legal Risk
11:20 – How Machine Unlearning Works (Detection, Isolation, Remediation)
16:10 – Performing "Neurosurgery" on LLMs
19:30 – Hallucinations vs Inaccuracies Explained
23:45 – Reducing Prompt Injection by 90%
28:30 – Working with AI Labs & Enterprises
32:00 – Will Unlearning Become Standard by 2030?
34:15 – Final Thoughts

Feb 10, 2026 • 18min
SLMs vs LLMs: 10% of the cost, 100% of the accuracy?
Large language models have dominated the AI conversation — but are small language models (SLMs) actually the future?

In this episode of TechFirst, host John Koetsier sits down with Andy Markus, SVP & Chief Data and AI Officer at AT&T, to unpack how small language models are delivering enterprise-grade accuracy at a fraction of the cost and latency of massive LLMs.

Andy explains how AT&T uses SLMs for:
• Contract analysis at massive scale
• Network analytics and outage root-cause analysis
• Fraud detection and enterprise knowledge systems
• AI-driven "field coding" and agent-based workflows

They also dive into the rise of agentic AI, how structured "archetypes" replace risky vibe coding, and why the future of software development may be humans supervising autonomous AI systems rather than writing every line of code.

If you're building AI for real-world, high-scale use cases — especially in enterprise environments — this conversation is essential.

⸻

Guest
Andy Markus
SVP & Chief Data and AI Officer, AT&T
Former SVP at Time Warner Media

⸻

👉 Subscribe for more deep dives on AI, technology, and the future of innovation:
https://techfirst.substack.com

⸻

00:00 – Why the future of AI might be small
00:55 – What is a small language model (SLM)?
01:45 – From LLM hype to enterprise reality
02:25 – Solving accuracy, cost, and latency at once
03:05 – How small is "small"? Parameters explained
03:55 – Where SLMs work best inside enterprises
04:45 – Contract analysis and enterprise vector stores
05:35 – Network analytics and outage root-cause analysis
06:45 – AI as a super-charged network engineer
07:35 – Choosing high-ROI AI use cases
08:20 – 4× ROI: measuring real business impact
09:00 – AI field coding vs risky vibe coding
10:10 – Archetypes, super agents, and structured AI workflows
11:15 – What software engineers still need to do
12:10 – From punch cards to natural language programming
13:10 – Human-in-the-loop vs autonomous AI agents
14:10 – How small can models really get?
15:10 – Responsible AI at enterprise scale
16:00 – The future of agentic AI and autonomy
17:10 – Why AI output is finally becoming predictable
18:10 – Final thoughts on where AI is headed


