

A Beginner's Guide to AI
Dietmar Fischer
"A Beginner's Guide to AI" makes the complex world of Artificial Intelligence accessible to all. Each episode asks someone working with AI about what they do and how AI can help you. Ideal for novices, tech enthusiasts, and the simply curious, this podcast transforms AI learning into an engaging, digestible journey. Join us as we take the first steps into AI 🚀 Hosted on Acast. See acast.com/privacy for more information.
Episodes

Mar 28, 2026 • 41min
Are You Human? Prove It!
🎧 What makes us human in the age of AI?

This episode of A Beginner’s Guide to AI explores one of the most important questions for business leaders today. As AI becomes more capable, the real challenge is not what it can do, but what we should never outsource.

We explore The Blurring Test, a fascinating experiment where thousands of people tried to prove their humanity to a chatbot. What they revealed changes how we should think about AI, business, and identity.

You will learn why AI can mimic humans but cannot experience reality, why human judgment becomes more valuable in an automated world, and how to use AI without losing authenticity and meaning.

📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧

👤 About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at https://argoberlin.com/

💡 Quotes from the Episode
"AI can follow the recipe, but it cannot taste the cake."
"Your humanity is not what you do, but why you do it."
"The real risk is not AI replacing us, but us becoming more like AI."

⏱ Chapters
00:00 The Question That Changes Everything
04:30 The MrMind Experiment
11:20 AI vs Human Identity
19:10 The Cake Test Explained
26:40 AI in Business and Decision Making
34:00 What Makes Us Human

🚀 This episode challenges how you think about AI, business, and yourself. The future will not be about replacing humans. It will be about understanding what makes us irreplaceable.

Mar 26, 2026 • 23min
100 Interviews and Still Going Strong
If you want to know more about the podcast: how it's produced, the challenges and wins, some fun facts, and a little behind-the-scenes, then this episode is for you, as I tell you all about it, or at least everything I found noteworthy 😉

📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧

About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com

Music credit: "Modern Situations" by Unicorn Heads

Mar 24, 2026 • 28min
Your AI Is Taking Orders From Strangers
Your AI might not be hacked. It might be persuaded.

In this episode of A Beginner’s Guide to AI, we unpack one of the most underestimated threats in modern business: prompt injection. As AI systems and AI agents become deeply embedded in workflows, they don’t just process information anymore. They act on it. And that creates a completely new category of AI security risks.

We explore how attackers can manipulate AI systems using nothing but language, why AI struggles to separate instructions from data, and how this leads to real-world issues like AI data leakage. This is not a theoretical problem. It is already happening inside enterprise environments.

If you are working with AI in marketing, operations, or leadership, this episode will fundamentally change how you think about AI risk management and enterprise AI security.

Key highlights:
- What prompt injection is and why it matters
- Why AI agents introduce new security risks
- A real-world case of AI data leakage
- How AI systems get manipulated through input
- What businesses must change to stay secure

📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧

Quotes from the Episode:
“Prompt injection is social engineering for machines.”
“Your AI can become an insider threat without meaning to.”
“Language is no longer just information. It’s control.”

Chapters:
00:00 Why AI Security Is Different
05:40 What Prompt Injection Really Is
14:20 How AI Gets Manipulated by Language
23:10 Why AI Agents Increase the Risk
32:45 Real Case Study: AI Data Leakage
44:30 How to Protect Your AI Systems

About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com

Music credit: "Modern Situations" by Unicorn Heads

Mar 22, 2026 • 39min
The Extended Mind: Why AI Might Make Humans More Creative
Artificial intelligence is often framed as a battle between humans and machines. But what if that story misses the real point?

In this episode of A Beginner’s Guide to AI, Prof. GepHardT explores one of the most fascinating ideas in cognitive science: the extended mind theory. According to philosopher Andy Clark, human intelligence has never been confined to the brain alone. For centuries we have extended our thinking through tools like writing, maps, calculators, and computers.

Generative AI may simply be the newest and most powerful addition to this cognitive ecosystem. Instead of replacing human creativity, AI may expand it. By generating ideas, exploring possibilities, and challenging assumptions, AI can act as a powerful thinking partner.

A striking example comes from the famous AlphaGo match against Go champion Lee Sedol. When the AI played the now legendary Move 37, professional players initially believed the move was a mistake. Later they discovered it opened entirely new strategic possibilities. The machine did not just beat humans at Go. It helped humans rethink the game itself.

This episode explores how human-AI collaboration works and why hybrid intelligence may define the future of creativity, work, and learning.

📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧

About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com

Quotes from the Episode
“Your brain has never worked alone. It has always been part of a thinking system that includes tools and environments.”
“The future of intelligence may not be human versus machine but human plus machine.”
“The most important skill in the AI age may not be prompt writing but judgement.”

Podcast Chapters
00:00 The Big Question About AI and Human Thinking
06:40 The Extended Mind Theory Explained
16:20 Why Humans Are Natural Born Cyborgs
26:50 The AlphaGo Story and Move 37
38:15 AI as a Creative Thinking Partner
49:30 The Future of Hybrid Intelligence

Music credit: "Modern Situations" by Unicorn Heads

Mar 20, 2026 • 54min
Your Company WILL Be Hacked - Joshua Cook Explains How to Survive It // REPOST
What happens when your company gets hit by a cyberattack?

In this eye-opening episode, attorney Joshua Cook reveals why cybersecurity isn’t an IT problem but a leadership challenge. After two decades fighting fraud and managing crisis response, Cook has seen every digital disaster imaginable — and he’s here to explain how to build true cyber resilience.

📧💌📧 Tune in to get my thoughts and all episodes — don’t forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧

Josh breaks down how AI has democratized cybercrime, why phishing scams have become nearly impossible to spot, and how every CEO should create an incident response plan before chaos hits. He also explains why planning matters more than the plan itself — and how leaders can keep their teams calm when everything goes wrong.

💡 You’ll learn:
- How AI is fueling new waves of fraud and misinformation
- Why leadership and communication are the real firewalls of business
- How to train teams and run tabletop exercises before the crisis
- What Maersk and Colonial Pipeline taught the world about transparency
- Why companies with a plan lose 60% less money in an attack

Prepare, breathe, and lead — because it’s not if you’ll be hacked, but when.

👀 Quotes from the Episode
“Cybersecurity isn’t an IT issue. It’s a business problem, and it needs a business solution.”
“AI has democratized cybercrime — you don’t need to be a hacker anymore, just willing to commit a crime.”
“A plan might be useless, but planning is indispensable — that’s what makes companies resilient.”

🧾 Chapters
00:00 Welcome & Introduction – Meet Joshua Cook
02:00 How a Fraud Attorney Ended Up Fighting Cybercrime
05:00 AI Has Made Cybercrime Easier (and Smarter)
08:00 The Elderly Are the New Prime Targets
11:00 From Fake Law Firms to Real Scams – True Cases from the Field
15:00 Turning the Tables: How AI Can Defend, Not Just Attack
18:00 Cyber Resilience by Design – Why Leadership Matters
22:00 When Crisis Hits: Lessons from Maersk and Colonial Pipeline
27:00 Preparing the Team – How Training Prevents Chaos
31:00 It’s Not If, It’s When – The Power of an Incident Response Plan
35:00 Planning vs. Panicking – Eisenhower and the Art of Cyber Preparation
38:00 Why Calm Leaders Win in Cyber Crises
41:00 How Joshua Cook Uses AI Safely in Legal Practice
44:00 No, the Terminator Isn’t Coming (But AI Might Take Your Job)
47:00 Final Thoughts – Cybersecurity as a Business Superpower

🔗 Where to Find the Guest
- Joshua Cook on LinkedIn: linkedin.com/in/jnc2000
- Josh's book "Cyber Resilience by Design" – available wherever books are sold, e.g. on Amazon
- Prince Lobel Tye LLP: princelobel.com

🎧 About Dietmar Fischer: Economist, digital marketer, and podcaster exploring how AI reshapes decision-making, leadership, and creative work. Want to connect with me? You'll find me on LinkedIn!

🎵 Music credit: “Modern Situations” by Unicorn Heads

Mar 18, 2026 • 54min
A Disturbing AI Story Big Tech Never Wants You to Hear, with Paul Hebert
Paul A. Hebert, founder of the AI Recovery Collective and author of Escaping the Spiral, speaks about a harrowing personal run-in with ChatGPT that led to obsession and fear. He discusses hallucinations, chatbot addiction, youth safety, and why AI literacy and accountability matter. Short, urgent, and practical conversations about limits, trust, and regulation of generative AI.

Mar 15, 2026 • 29min
Supervised vs Unsupervised Learning Explained with Real World Examples
Clear contrast between learning from labeled answers and discovering hidden patterns in raw data. Real-world examples include spam detection, customer segmentation, and a cake analogy. Discussion of labeling costs, bias risks, and when to combine both learning approaches. A DIY challenge invites listeners to try both methods on their own data.

Mar 13, 2026 • 48min
Building Scalable AI Agents: Chirag Agrawal Reveals How // REPOST
Engineering the Future of AI with Chirag Agrawal: Context, Memory, and Coordination

Artificial intelligence isn’t just getting smarter — it’s learning to coordinate. In this episode, Chirag Agrawal joins Dietmar Fischer to unpack how modern AI agents handle context, memory, and decision-making inside complex multi-agent systems. Together they explore how engineering, orchestration, and memory-sharing shape the next generation of AI architecture.

📧💌📧 Tune in to get my thoughts and all episodes — don’t forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧

You’ll hear how Chirag’s fascination with search led him to build early prototypes of intelligent assistants, and how today’s LLM agents extend that idea far beyond simple queries. He explains why AI isn’t one giant super-brain but a constellation of specialized agents — each performing specific tasks with shared or isolated memory — and how this design mirrors human collaboration.

🔑 Key Takeaways
- Why AI orchestration and context management are crucial for scalable systems
- The trade-offs between shared memory and independent agents
- What engineers mean by the ReAct loop — reasoning and acting in tandem
- How multi-agent coordination is reshaping industries from healthcare to compliance
- Why the “AI supercomputer” myth ignores the practical limits of context windows

💬 Quotes from the Episode
“AI is just a higher form of search — it’s about finding the right action, not just information.”
“Agents behave inhuman until you engineer context for them.”
“Specialization in AI works the same way it does for people — each agent should do one thing really well.”
“Coordination isn’t magic; it’s careful engineering.”
“Context makes intelligence usable.”
“A well-defined agent doesn’t need to do everything — it needs to do its one job perfectly.”

⏱️ Podcast Chapters
00:00 Welcome and Introduction
01:45 Chirag Agrawal’s Early Fascination with Search and AI
04:40 From Search Engines to “Find” Engines – How AI Takes Action
07:10 The Rise of AI Agents and Multi-Agent Systems
10:15 Why AI Agents Sometimes Behave “Inhuman”
13:30 Context, Memory, and Coordination: The Core Engineering Challenges
18:00 Shared vs. Isolated Memory – The Hive Mind Dilemma
22:30 Why We Need Many Agents, Not One Super-Computer
27:00 How the ReAct Loop Helps Agents Think and Act
30:40 Industries Adopting AI Agents: Compliance, Medicine, and Law
34:30 When AI Goes Off-Road – The Limits of Coordination
37:15 Building Responsible, Constrained Agents
40:10 The Future of AI and Why the Terminator Scenario Won’t Happen
42:20 Where to Find Chirag Agrawal & Closing Thoughts

🌐 Where to Find Chirag Agrawal
LinkedIn: linkedin.com/in/chirag-agrawal
Website: chiraga.io

🎵 Music credit: “Modern Situations” by Unicorn Heads

Mar 11, 2026 • 51min
Stop wasting your Copilot licenses — Jim Spignardo’s brutal checklist
Jim Spignardo is an Enterprise AI strategist at ProArch who helps firms adopt Copilot and build data & AI platforms. He talks about shifting AI conversations toward business leaders, using the three Ds (dull, draining, distracting) to drive adoption, when to use Copilot versus building a platform, and why data governance and measurable KPIs are essential for scaling AI.

Mar 9, 2026 • 49min
Your “Revenue” Is Probably Wrong and Ritish Chugh Tells You Why
Ritish Chugh is an analytics engineer at Airbnb who specializes in metrics governance and semantic layers. He exposes why teams mean different things by “revenue,” explains the human data pipeline, building unified metric definitions and a semantic layer, and why data quality and governance must come before AI. He also shares AI wins like speeding up PR summaries and reducing manual SQL.


