An Hour of Innovation: AI, Product, Tech and Career Growth

Vit Lyoshin
Jan 27, 2026 • 45min

AI Video Analysis: How AI Is Changing Mental Health Care Between Doctor Visits | Loren Larsen

Patients often hide how they’re really doing, but when AI listens between visits, the truth finally comes out, reshaping mental health care with empathy and precision.

In this episode of An Hour of Innovation podcast, host Vit Lyoshin sits down with Loren Larsen, founder and CEO of Videra Health, to explore how AI in healthcare is transforming behavioral health by capturing what patients actually say and feel outside the clinic, using human-in-the-loop AI to support better care decisions.

They discuss why the most dangerous moments in mental health care often happen between doctor visits, how AI-based check-ins can surface real patient narratives, and why ethical, well-tested AI matters more than ever. The conversation breaks down the limits of score-based assessments, the risks of poorly built AI, and how technology can extend, not replace, clinical judgment. It’s a practical look at mental health technology that’s already being used in real clinical settings.

Loren Larsen is a longtime builder at the intersection of AI, video, and human decision-making. Before founding Videra Health, he served as CTO of HireVue, deploying video AI at massive scale. In this episode, his experience matters because he’s navigated bias, ethics, and real-world deployment, offering a grounded perspective on what responsible healthcare AI should look like today.

Takeaways

* The most dangerous moment in a mental health patient’s life is right after leaving inpatient care.
* AI check-ins between visits restore visibility into patient wellbeing when clinicians cannot scale human outreach.
* Patients often share more honestly with AI than with therapists because they feel less judged and less pressure to perform.
* Mental health scores without narrative (like the PHQ-9) miss the “why” behind patient distress.
* AI should augment clinical judgment, not replace therapists, especially during high-risk treatment moments.
* Generative AI is not ready to safely conduct therapy, particularly in crises.
* Model drift can occur from unexpected factors, such as medications or cosmetic procedures, not just bad data.
* Poorly built healthcare AI can look legitimate, making it hard for buyers to distinguish safe tools from risky ones.
* Ethical healthcare AI requires clear consent, transparency, and human oversight, not just technical accuracy.
* The biggest challenge in AI healthcare adoption is balancing speed, safety, and trust in a fast-moving market.

Timestamps

00:00 Introduction
01:35 Videra Health Origin Story
03:02 AI Patient Check-Ins Between Doctor Visits
05:33 Why Human Judgment Still Matters in AI Care
08:49 Gaps in Mental Health Patient Care
12:07 AI vs Human Care in Mental Health
13:23 Testing & Validating Healthcare AI Systems
17:16 Edge Cases, Bias, and AI Model Failure
19:29 Ethical AI in Healthcare
23:33 Why Healthcare AI Adoption Is Hard
25:43 Common Myths About AI in Healthcare
30:02 Lessons from Building Video AI at Scale
34:54 Early Warning Signs in AI Systems
38:31 Advice for First-Time Video AI Builders
42:05 Innovation Q&A

Connect with Loren

* Website: https://www.viderahealth.com/
* LinkedIn: https://www.linkedin.com/in/loren-larsen/

This Episode Is Supported By

* Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH
* Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe
* Monkey Digital: Unbeatable SEO. Outrank your competitors - https://www.monkeydigital.org?ref=110260

For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

Connect with Vit

* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/
Jan 17, 2026 • 45min

AI Isn’t the Problem! Why AI Adoption Fails at Work (95% Get Zero ROI) | Jay Kiew

Most teams adopt AI expecting a breakthrough, but end up frustrated, disappointed, and wondering what went wrong when productivity doesn’t improve.

In this episode of An Hour of Innovation podcast, Vit Lyoshin sits down with Jay Kiew, a globally recognized expert in organizational change and transformation, to unpack why so many AI initiatives fail to deliver value, even when the technology itself is powerful and widely available.

They explore why AI alone does not create productivity or innovation, and why research shows that nearly 95% of companies see little to no ROI from their AI initiatives. Jay explains how broken processes, weak critical thinking, and low change readiness quietly sabotage even the best AI tools. Instead of chasing the next technology, this episode reframes AI adoption as a human and organizational challenge, one that requires mindset shifts before tools can deliver results.

Jay Kiew is a change strategist and transformation leader who works with organizations navigating complex change at scale. He is known for helping leaders move beyond tool-driven thinking toward building adaptive, change-ready cultures. In this episode, Jay’s perspective matters because it challenges the assumption that AI failures are technical problems and shows why leadership, process discipline, and learning capability are the real differentiators.

Takeaways

* AI does not create productivity by itself; it only amplifies the quality of existing processes and decision-making.
* Most AI initiatives fail not because of weak models, but because teams cannot clearly explain how their work actually gets done.
* Research showing that 95% of companies see no AI ROI reflects organizational readiness gaps, not a lack of AI capability.
* Poorly defined workflows become painfully visible the moment AI is introduced into a team.
* Leaders often deploy AI as a solution before agreeing on what problem they are trying to solve.
* Organizations that struggle with change management tend to struggle the most with AI adoption.
* AI agents fail when humans cannot articulate rules, context, and success criteria for the work.
* Critical thinking is becoming more valuable than technical AI skills as automation increases.
* Change fluency, the ability to adapt continuously, is emerging as a core career skill for the next decade.
* Teams that succeed with AI focus less on tools and more on learning, feedback loops, and behavior change.

Timestamps

00:00 Introduction
01:48 Why Leaders Misunderstand AI
03:22 How AI Reveals Organizational Dysfunction
05:58 SOPs and Critical Thinking for AI Success
08:41 AI Adoption and ROI Reality
13:19 Learning and Integration Matter More Than Tools
16:11 What AI Agents Really Are
18:03 How AI Agents Change Roles
22:42 Training Teams for AI Adoption
23:59 Why Teaching AI Tools Is Hard
25:49 Learning on the Job with AI
28:01 Essential Skills for the AI Era
29:03 Design Thinking and Influence
32:16 Why Human Perception Matters
33:17 Change Fluency as a Future Skill
34:13 AI’s Real Impact on Productivity
36:19 Asking Better Questions with AI
37:55 Practical AI Use at Work
39:38 Innovation Q&A

Connect with Jay

* Website: https://www.changefluency.com/
* LinkedIn: https://www.linkedin.com/in/jaykiew-change-fluency/
* Instagram: https://www.instagram.com/changefluency
* Book: https://www.amazon.com/Change-Fluency-Principles-Uncertainty-Innovation/dp/1774586991

Sponsors

* Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH
* Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe
* MeetGeek: Record, transcribe, summarize, and share insights from every meeting - https://get.meetgeek.ai/yjteozr4m6ln

Connect with Vit

* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Podcast: https://www.anhourofinnovation.com/
Jan 10, 2026 • 41min

Can AI Steal Your Book? The Alarming Plagiarism Problem! | US Publishing Expert

What if your book could be copied, republished, and sold under someone else’s name, and you’d barely know it happened?

In this episode of An Hour of Innovation podcast, host Vit Lyoshin speaks with Julie Trelstad, a longtime publishing leader and one of the most thoughtful voices on copyright, metadata, and digital trust. Julie brings a rare insider’s view into how books are discovered, distributed, and increasingly misused in an AI-driven world.

They explore a growing fear among writers, creators, and publishers: how AI is quietly reshaping plagiarism, authorship, and trust in the publishing ecosystem. They examine how AI-generated content is blurring the line between original work and imitation, why traditional copyright protections struggle in a machine-readable world, and how fake or derivative books can appear online within days. The episode breaks down the real risks authors face today, not hypothetical futures, and what structural changes may be required to protect creative work. It’s a practical, sober look at AI plagiarism.

Julie Trelstad is a publishing executive and strategist known for her work at the intersection of technology and intellectual property. She has spent decades helping publishers, authors, and platforms navigate the identification, protection, and trust of content at scale. In this episode, her perspective matters because she explains not just that AI plagiarism is happening, but why the system makes it so hard to detect and stop, and what could actually help.

Takeaways

* AI can clone and resell a book in days, and most platforms struggle to reliably prove that the theft occurred.
* AI-generated plagiarism often looks legitimate enough to fool retailers, reviewers, and buyers.
* Authors lose sales and reputation when fake AI versions of their books appear at lower prices.
* Traditional copyright law exists, but it was never designed for machine-scale copying and AI training.
* There has been no machine-readable way for AI systems to recognize who owns content, until now.
* Content fingerprinting can detect similarity across languages and paraphrased AI rewrites.
* Time-stamped content registries can establish legal proof of who published first.
* Most books already inside AI models were scraped without the author’s consent or compensation.
* AI lawsuits focus less on training itself and more on the use of pirated content.
* Authors could earn micro-payments when AI systems use specific paragraphs or ideas from their work.

Timestamps

00:00 Introduction
01:37 Why AI Plagiarism Is So Hard to Detect
03:25 Amlet.ai and the Fight for Content Ownership
05:32 How Copyright Worked Before Generative AI
08:09 The Origin Story Behind Amlet.ai
12:22 Building Machine-Readable Infrastructure for Copyright
14:24 How Publishing Is Changing in the AI Era
17:34 How Authors Can Protect Their Work with Amlet.ai
20:38 Tools Publishers Use to Detect and Enforce Rights
21:38 How Authors Can Monetize Content Through AI
24:27 The Reality of AI Scraping and Plagiarism Today
27:00 Publisher Rights, Digital Security, and Enforcement
29:08 Evolving the Business Model for AI Licensing
35:34 The Future of Digital Ownership and AI Rights
38:37 Innovation Q&A

Support This Podcast

* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Julie

* Website: https://paperbacksandpixels.com/
* LinkedIn: https://www.linkedin.com/in/julietrelstad/
* Amlet AI: https://amlet.ai/

Connect with Vit

* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/
Dec 20, 2025 • 46min

Functional Precision Medicine: How Cancer Drugs Are Tested Before Treatment | Jim Foote

Cancer care still forces patients and doctors to guess! Learn how functional precision medicine is replacing that uncertainty by testing cancer drugs before treatment even begins.

In this episode of An Hour of Innovation podcast, host Vit Lyoshin speaks with Jim Foote, co-founder and CEO of First Ascent Biomedical, an innovator who is challenging one of the most uncomfortable truths in modern medicine: many cancer treatments are chosen without knowing if they will actually work. First Ascent Biomedical is a company focused on transforming personalized cancer treatment through functional precision medicine and data-driven decision support.

In this conversation, they explore how functional precision medicine differs from traditional precision medicine and why testing drugs on patients’ live tumor cells changes everything. Jim explains how AI, robotics, and large-scale drug testing help doctors move from trial-and-error to a true test-and-treat approach. The discussion also covers the risks of ineffective or harmful treatments, the economic cost of cancer care, and what must change for this model to become part of standard oncology practice.

Jim Foote is a former technology executive turned healthcare innovator whose work is deeply shaped by personal loss and firsthand experience with cancer care. He is best known for advancing functional precision medicine by combining genomics, live-cell drug testing, and AI-driven analysis to guide treatment decisions. His perspective matters because it connects real clinical outcomes with the technology needed to give doctors and patients clearer, faster, and more humane options.

Takeaways

* Cancer treatment still relies heavily on trial-and-error, even with modern medical technology.
* Two biologically different patients often receive the same cancer treatment based on population averages.
* Precision medicine based on DNA and RNA sequencing still cannot confirm if a drug will work before it’s given.
* Functional precision medicine tests drugs directly on a patient’s live tumor cells before treatment begins.
* Some FDA-approved cancer drugs can be completely ineffective or even make a patient’s cancer worse.
* Testing drugs outside the body can prevent patients from being exposed to harmful or useless treatments.
* AI and robotics enable hundreds of drug tests to be completed in days instead of weeks or months.
* In a published study, 83% of refractory cancer patients did better when treatment was guided by this approach.
* Knowing which drugs won’t work is just as important as knowing which ones will.
* Personalized, test-and-treat cancer care has the potential to improve outcomes while reducing overall healthcare costs.

Timestamps

00:00 Introduction
02:46 The Core Problem in Modern Cancer Care
04:16 Functional Precision Medicine Explained
06:42 How AI, Robotics, and Data Are Changing Cancer Treatment
10:01 How Cancer Drugs Are Tested Before Treatment
13:20 Personalized, Patient-Centric Cancer Care
18:22 Cost, Access, and the Economics of Cancer Treatment
22:19 The Future of Cancer Care and Patient Empowerment
25:21 Real Patient Outcomes and Success Stories
26:50 Why Functional Precision Medicine Is the Future
31:18 Predicting, Detecting, and Preventing Cancer Earlier
34:27 Where to Learn More About Functional Precision Medicine
36:12 Transforming Healthcare Beyond Trial-and-Error
37:27 Regulations, FDA Pathways, and Scaling Innovation
40:09 Why Cancer Is Affecting Younger Patients
41:17 Innovation Q&A

Support This Podcast

* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Jim

* Website: https://firstascentbiomedical.com/
* LinkedIn: https://www.linkedin.com/in/jim-foote/
* TEDx Talk: https://www.youtube.com/watch?v=CqLCgNxUhVc

Connect with Vit

* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com
* Podcast: https://www.anhourofinnovation.com/
Dec 13, 2025 • 46min

The Future of Music Education: AI Tutors, Human Mentors, and Creativity

Music education is quietly undergoing a massive shift, and most people haven’t noticed yet. AI tutors are no longer just tools; they’re starting to shape how musicians learn, practice, and improve. But here’s the real question: where does human creativity and mentorship still matter in an AI-driven world?

In this episode of An Hour of Innovation podcast, host Vit Lyoshin sits down with John von Seggern, a longtime musician, educator, and founder of Futureproof Music School, to unpack what’s actually changing, and what isn’t, in the future of music education. John has spent over a decade designing online music education programs and now works at the intersection of AI, creativity, and human mentorship.

In this conversation, they explore how AI is personalizing music education in ways traditional schools struggle to scale. John explains how AI tutors can analyze music, guide students through complex production workflows, and surface the one or two things that matter most at each stage of learning. They also dig into why AI still falls short in mastery, taste, and creative judgment, and why human mentors remain essential. They discuss the hybrid model of AI tutors and human teachers, the future of music production learning, and what this shift means for creators trying to stay relevant in a fast-changing industry.

John von Seggern is a musician, producer, educator, and music technologist who has worked with film composers and contributed sound design to Pixar’s WALL·E. He previously helped lead and design one of the world’s most respected electronic music programs before founding Futureproof Music School, where he’s building AI-powered, personalized music education systems. His work matters because it goes beyond hype, offering a practical, grounded view of how AI can support creativity without replacing the human elements that make music meaningful.

Takeaways

* AI tutors are most effective when they surface only one or two actionable fixes, not long reports that overwhelm learners.
* Music education improves dramatically when AI can analyze your actual work (like mixes), not just answer theoretical questions.
* The biggest limitation of AI in music is that elite, professional knowledge is often undocumented, so models can’t learn it.
* Human mentors remain essential at advanced levels because taste, judgment, and creative intuition can’t be automated.
* Personalized learning paths outperform one-size-fits-all programs, especially in creative and technical fields like music production.
* Generative AI tools are fun, but most professionals prefer AI that assists the process, not tools that generate finished music.
* AI acts best as an intelligence amplifier, helping creators move faster rather than replacing their role.
* The future of music education isn’t AI-only, but a hybrid model where AI accelerates learning, and humans guide mastery.

Timestamps

00:00 Introduction
03:02 How AI Is Transforming Music Education
07:50 Why AI + Human Mentorship Works Better Than Music Schools
11:43 Why Music Education Curricula Must Evolve Faster
15:04 How AI Personalizes Music Learning for Every Student
19:38 Building an AI-Powered Education Business
24:22 What Students Really Say About AI Music Education
26:20 Electronic Music vs Learning Traditional Instruments
27:58 The Future of AI in Music and Creative Industries
30:28 Why Artists Still Matter in AI-Generated Art
32:21 Who Owns Music Created With AI?
36:50 How Creators Can Survive and Thrive Using AI
42:24 Innovation Q&A

Support This Podcast

* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with John

* Website: https://futureproofmusicschool.com/
* LinkedIn: https://www.linkedin.com/in/johnvon/

Connect with Vit

* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/
Dec 6, 2025 • 57min

RAG, LLMs & the Hidden Costs of AI: What Companies Must Fix Before It’s Too Late

Most companies have no idea how risky and expensive their AI systems truly are until a single mistake turns into millions in unexpected costs.

In this episode of An Hour of Innovation podcast, host Vit Lyoshin explores the truth about AI safety, enterprise-scale LLMs, and the unseen risks that organizations must fix before it’s too late. Vit is joined by Dorian Selz, co-founder and CEO of Squirro, an enterprise AI company trusted by global banks, central banks, and highly regulated industries. His experience gives him a rare inside look at the operational, financial, and security challenges that most companies overlook.

They dive into the hidden costs of AI, why RAG has become essential for accuracy and cost-efficiency, and how a single architectural mistake can lead to a $4 million monthly LLM bill. They discuss why enterprises underestimate AI risk, how guardrails and observability protect data, and why regulated environments demand extreme trust and auditability. Dorian explains the gap between perceived vs. actual AI safety, how insurance companies will shape future AI governance, and why vibe coding creates dangerous long-term technical debt. Whether you’re deploying AI in an enterprise or building products on top of LLMs, this conversation covers what to fix before it’s too late.

Dorian Selz is a veteran entrepreneur known for building secure, compliant, and enterprise-grade AI systems used in finance, healthcare, and other regulated sectors. He specializes in AI safety, RAG architecture, knowledge retrieval, and auditability at scale, capabilities that are increasingly critical as AI enters mission-critical operations. His work sits at the intersection of innovation and regulation, making him one of the most important voices in enterprise AI today.

Takeaways

* Most enterprises dramatically overestimate their AI security readiness.
* A single architectural mistake with LLMs can create a $4M-per-month operational cost.
* RAG is essential because enterprises only need to expose relevant snippets, not entire documents, to an LLM.
* Trust in regulated industries takes years to build and can be lost instantly.
* Real AI safety requires end-to-end observability, not just disclaimers or “verify before use” warnings.
* Insurance companies will soon force AI safety by refusing coverage without documented guardrails.
* AI liability remains unresolved: Should the model provider, the user, or the enterprise be responsible?
* Vibe coding creates massive future technical debt because AI-generated code is often unreadable or unmaintainable.

Timestamps

00:00 Introduction to Enterprise AI Risks
02:23 Why AI Needs Guardrails for Safety
05:26 AI Challenges in Regulated Industries
11:57 AI Safety: Perception vs. Real Security
15:29 Risk Management & Insurance in AI
21:35 AI Liability: Who’s Actually Responsible?
25:08 Should AI Have Its Own Regulatory Agency?
32:44 How RAG (Retrieval-Augmented Generation) Works
40:02 Future Security Threats in AI Systems
42:32 The Hidden Dangers of Vibe Coding
48:34 Startup Strategy for Regulated AI Markets
50:38 Innovation Q&A

Support This Podcast

* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Connect with Dorian

* Website: https://squirro.com/
* LinkedIn: https://www.linkedin.com/in/dselz/
* X: https://x.com/dselz

Connect with Vit

* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/
* Podcast: https://www.anhourofinnovation.com/
Nov 28, 2025 • 36min

The Future of AI Assistants! Why Your Data Will Soon Talk Back to You | Mustafa Parekh

AI is becoming a business partner, not just a tool, and soon, your data will literally talk back to you.

In this episode of An Hour of Innovation podcast, host Vit Lyoshin sits down with Mustafa Parekh, the founder of Lazy Admin, to explore how personalized AI is transforming the way companies understand and use their data. Mustafa breaks down how Lazy Admin turns complex Salesforce and CRM information into natural-language insights, visualizations, and strategic recommendations, all in seconds.

They talk about the rise of AI assistants, the future of enterprise AI, how AI can learn your internal business language, the challenges of building secure “zero-data-exfiltration” systems, and why the next era of innovation isn’t just about solving problems, it’s about creating better, more human-centered ways of working. Together, they dive into AI ethics, government regulation, AGI risks, job displacement, product development mindsets, and why founders should build Minimum Lovable Products instead of just MVPs.

Mustafa Parekh is a tech entrepreneur, Salesforce consultant, and the creator of Lazy Admin, an AI-powered data insights platform redefining how businesses access reporting and analytics. He is known for pioneering privacy-first architecture in enterprise AI, automating CRM workflows without exposing sensitive data, and helping companies make smarter decisions using real-time insights. His background spans full-stack development, global consulting work, and building impactful SaaS tools across industries.

Support This Podcast

* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Takeaways

* AI is evolving from a generic tool into a personalized business partner that understands company context.
* AI that learns your internal acronyms, vocabulary, and business lingo delivers dramatically better results.
* Privacy-first architecture like Zero-Data-Exfiltration is becoming essential for enterprise AI adoption.
* Companies waste hundreds of hours on reporting that AI can now generate in seconds.
* The best products aren’t just viable, they’re lovable.
* AI’s biggest impact will come when it merges with robotics and neuroscience, not just software.
* Government regulation may slow down certain AI advancements due to unemployment and economic pressure.
* Open-source AI offers deeper integration, while proprietary models support faster innovation.
* Rapid prototyping and minimizing development time are critical for early-stage founders.
* Marketing, not development, becomes the real challenge after launching a startup.
* Choosing the right customer segment and understanding their pain points is essential for SaaS success.
* The future of business AI lies in human-centered design, technology that enhances people rather than replaces them.

Timestamps

00:00 Introduction
02:53 How Lazy Admin Was Born
08:13 Validating the AI Product Idea
11:02 How Lazy Admin Works
13:02 User Experience & Onboarding
17:18 AI Trends: The Start of the “AI Age”
20:37 The Reality of AI Ethics
23:11 Open Source vs Proprietary AI
24:36 Will AI Replace Jobs?
26:31 Startup Lessons & Founder Mistakes
31:50 Client Success Stories
33:42 Innovation Q&A Round

Connect with Mustafa

* Website: https://lazyadmin.httpeak.com/
* LinkedIn: https://www.linkedin.com/in/mustafaparekh/

Connect with Vit

* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin
* Website: https://vitlyoshin.com/contact/

Vit’s Projects

* Podcast: https://www.anhourofinnovation.com/
* AI Booking Assistant: https://appforgelab.com/
Nov 21, 2025 • 1h 6min

AI Glasses That Have Something All Others are Missing | Bobak Tavangar

AI glasses are evolving faster than anyone expected, but only one company is building them to amplify human agency instead of monetizing your attention.

In this episode of An Hour of Innovation podcast, host Vit Lyoshin explores the future of wearable AI with a guest who is reshaping the entire computing landscape: Bobak Tavangar, Co-Founder & CEO of Brilliant Labs. They dive deep into why the future of AI must be wearable, open-source, and private by design, and how the Brilliant Labs team created the first AI glasses built to empower people rather than extract their data.

They discuss the emergence of AI memory, the challenges of building long-lasting hardware, why battery life matters more than most people think, the philosophical risks of “outsourcing our thinking” to AI, and why Big Tech’s approach to wearable AI may be leading us in the wrong direction. Bobak also unpacks how open-source hardware can restore human agency, reconnect people, and potentially re-architect the Internet around the individual.

Bobak Tavangar is a former Program Lead at Apple, a serial founder in computer vision and graph search, and now CEO of Brilliant Labs. He’s a design-first innovator who blends engineering with philosophy, an open-source advocate pushing for transparent, trustworthy AI, and a creator inspired by the Baha’i principle of oneness, building technology that strengthens human connection rather than weakens it.

Support This Podcast

* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Takeaways

* AI glasses can amplify human agency, not replace it, when built with the right philosophy.
* Brilliant Labs designed the first wearable AI platform that is open-source.
* Privacy is central: the device never stores photos or audio, only encrypted embeddings.
* True innovation in hardware requires painstaking component selection and constant iteration.
* The future of computing must align more naturally with human biology than smartphones do.
* AI should be a thought partner, not a substitute for human thinking.
* Overreliance on AI can lead to cognitive atrophy, according to emerging research.
* Open-source systems are essential for trust, transparency, and user control.
* AI memory has the potential to revolutionize learning, recall, accessibility, and life organization.
* Building AI glasses requires deep integration with factories, not just a software mindset.
* Wearable AI may eventually reduce our reliance on smartphones, but the market will decide, not the company.
* Future AI devices should foster connection and human well-being, not distraction or ad monetization.

Timestamps

00:00 Introduction
03:13 Why He Left Apple: The Case for Open-Source AI Glasses
06:00 Why the Next Big Tech Shift Is AI Hardware
09:06 How Brilliant Labs Built Halo: From Idea to Prototype
11:31 What AI Glasses Can Do Today: Memory, Recall, Real-Time Assistance
14:32 AI Memory Explained: How Glasses Learn From Your Life
17:11 The Hardest Problems in AI Hardware: Battery, Sensors, Design
23:59 Meta vs Open-Source: Competing Visions for AI Glasses
30:53 The Future of Wearable AI: Use Cases, Apps, and Developer Tools
35:08 Privacy by Design: Why Brilliant Labs Stores Zero Images or Audio
40:05 Will AI Make Us Smarter or Weaker? The Human Agency Debate
46:56 What Life With AI Glasses Could Look Like in 5–10 Years
50:56 Will Wearable AI Replace Phones? Early Signals for the Future
54:31 Hard Lessons Learned Building Real AI Hardware
01:00:01 Innovation Q&A Round

Connect with Bobak

* Website: https://brilliant.xyz/
* LinkedIn: https://www.linkedin.com/in/bobak-tavangar-29445012/
* X: https://x.com/btavangar

Connect with Vit

* Substack: https://substack.com/@vitlyoshin
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin

Vit’s Projects

* Podcast: https://www.anhourofinnovation.com/
* AI Booking Assistant: https://appforgelab.com/
Nov 14, 2025 • 1h 4min

Why AI Fails in Most Companies! And How to Fix It | Tullio Siragusa

Why do most AI initiatives fail, even at the world’s biggest companies?

In this episode of An Hour of Innovation podcast, host Vit Lyoshin sits down with Tullio Siragusa, a business strategist, author, and creator of the EmpathIQ Framework™, to break down the human barriers that undermine AI adoption long before the technology ever hits production.

Vit and Tullio explore why AI fails in most organizations, how outdated command-and-control cultures choke innovation, and why empathy, emotional intelligence, and decentralized decision-making are the real prerequisites for a successful AI transformation. They discuss Tullio’s EmpathIQ model for building AI-ready organizations, the future relationship between human intelligence and artificial intelligence, and the surprising ways companies can triple productivity without hiring by redesigning how people collaborate.

Tullio Siragusa brings over 30 years of experience across telecom, ad tech, and software engineering, and has helped organizations worldwide transform through human-centered leadership. He’s the founder of Inventrica Advisory and a speaker and strategist specializing in organizational design, culture transformation, emotional intelligence, and AI readiness. His EmpathIQ Framework™ has guided companies toward building empowered, autonomous, and highly productive teams capable of thriving in the age of AI.

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Takeaways
* AI fails in most companies because of culture, not technology.
* Outdated command-and-control structures suffocate the speed and autonomy AI requires.
* Over 70% of AI projects fail due to human and cultural barriers, not technical ones.
* Only 21% of employees are engaged, a massive hidden productivity leak.
* Empowered, decentralized teams dramatically increase innovation and output.
* The EmpathIQ Framework™ can triple a company’s capacity without adding headcount.
* Empathy is a strategic advantage, not a soft skill, and it boosts revenue and performance.
* AI amplifies whatever culture it enters, making organizational design a critical success factor.
* Emotional intelligence will become the biggest competitive edge in the AI era.
* Customers buy based on emotional needs first, not just transactions; empathy wins in sales.
* Fixing culture first is essential before rolling out any meaningful AI transformation.
* AI agents can mimic empathy, but they can’t replace human curiosity, wisdom, or intuition.
* Leaders who ignore emotional intelligence risk building companies that sound cold, clinical, and interchangeable.

Timestamps
00:00 Introduction
05:33 Why AI Fails: The Human Challenge Behind Adoption
07:30 Organizational Design: The Bottleneck in AI Success
10:45 Employee Engagement Crisis: The 21% Problem
13:26 Empathy as a Core Business Strategy
16:25 Measuring AI Success Beyond Technology
24:48 EmpathIQ Framework Overview
26:35 Force Field Analysis Explained
28:27 Collaborative OKRs for Cross-Team Alignment
31:16 Neuroscience-Based Leadership Coaching
33:58 Self-Management & Decentralized Organizations
37:49 Empathy in Action: Elevating Transactions
48:07 Emotional Intelligence as a Competitive Edge
58:20 Integrating Acquisitions with Empathy & Decentralization

Connect with Tullio
* Website: https://tulliosiragusa.com/
* LinkedIn: https://www.linkedin.com/in/tulliosiragusa/
* X: https://x.com/tulliosiragusa
* Other: https://linktr.ee/tulliosiragusa

Connect with Vit
* Website: https://vitlyoshin.com/contact/
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin

Vit’s Projects
* Podcast: https://www.anhourofinnovation.com/
* AI Booking Assistant: https://appforgelab.com/
Nov 7, 2025 • 49min

How AI Is Reducing Healthcare Costs and Helping Doctors Focus on Patients | Zach Evans

AI isn’t replacing doctors; it’s helping them save lives, cut costs, and bring humanity back to healthcare.

In this episode of An Hour of Innovation podcast, host Vit Lyoshin sits down with Zach Evans, Chief Technology Officer at Xsolis, a leading AI and data analytics company transforming the way hospitals and insurance providers work together.

Zach and Vit dive into how artificial intelligence is removing friction in healthcare, reducing administrative waste, and improving collaboration between hospitals, payers, and clinicians. They explore how predictive analytics and generative AI are being used to accelerate decisions, prevent costly billing errors, and free up doctors to focus on patient care. Zach also shares how his team built Dragonfly, Xsolis’s AI-powered platform that streamlines clinical workflows, enhances cybersecurity, and helps hospitals save millions every year.

As CTO, Zach Evans leads the engineering and product strategy behind Xsolis’s data-driven solutions. With nearly a decade of experience in healthcare technology and digital transformation, he’s helped scale the company from a small startup to a national leader serving hospitals across the US. Zach is passionate about building human-centered AI systems that empower clinicians, improve patient outcomes, and redefine how healthcare organizations operate.

Support This Podcast
* To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

Takeaways
* AI isn’t replacing doctors, it’s helping them make faster, better decisions.
* US hospitals spend up to 25% of their revenue on administrative tasks.
* Dragonfly, Xsolis’s AI platform, uses data to reduce friction between hospitals and insurance companies.
* Predictive analytics can determine patient status (inpatient vs. observation) with 99% accuracy.
* Generative AI now drafts clinicians’ initial patient reviews, saving hours of manual work.
* Keeping a “human in the loop” ensures AI supports, not replaces, healthcare professionals.
* Hospitals can resolve claim decisions within hours instead of weeks or months.
* Agentic AI is being developed to automate repetitive tasks like medical forms and data entry.
* Healthcare data is among the most valuable information on the black market, making cybersecurity critical.
* Moving from reactive to proactive security helps prevent attacks before they happen.
* AI is helping hospitals save millions by cutting denied claims and reducing administrative waste.
* The next wave of healthcare innovation is ambient AI, enabling doctors to talk to patients instead of screens.
* Every dollar saved on admin costs can be reinvested into patient care and clinical improvements.

Timestamps
00:00 Introduction
03:01 Understanding Healthcare Friction and How AI Solves It
05:21 AI-Driven Reimbursement: Streamlining Hospital and Insurance Payments
10:55 Cybersecurity in Healthcare: Protecting Patient Data with AI
17:12 Generative AI in Healthcare: New Innovations Changing Medicine
23:34 Dragonfly by Xsolis: An AI Platform for Healthcare Efficiency
26:36 Optimizing Hospital Workflows with Predictive Analytics and AI
28:19 AI for Length-of-Stay Management: Improving Patient Flow
32:33 Future of Healthcare Technology: From Automation to Intelligence
36:54 Data Symmetry in Healthcare: Aligning Hospitals and Insurers
37:48 Leadership and Innovation: Scaling a Healthcare Tech Team
42:49 AI’s Real Impact on Healthcare Professionals and Clinicians
45:34 Restoring Human Connection: How AI Improves Patient–Doctor Relationships

Connect with Zach
* Website: https://www.xsolis.com/
* LinkedIn: https://www.linkedin.com/in/zachevans/
* X: https://x.com/ZachEvans
* Other: https://zachevans.io/

Connect with Vit
* LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
* X: https://x.com/vitlyoshin

Vit’s Projects
* Podcast: https://www.anhourofinnovation.com/
* AI Assistant to build apps: https://appforgelab.com/
