

Thinking On Paper
Mark Fielding and Jeremy Gilbertson
A technology show for the radically curious.
Thinking on Paper isn't about seed rounds and funding. There are plenty of shows for the 1%. Instead, Mark and Jeremy sit down with the CEOs, founders, outliers, and engineers building the future. The premise? The human story of technology. What is the impact for the 99%?
300+ episodes.
Guests include IBM, Infleqtion, Nvidia, Microsoft, Kevin Kelly, Don Norman, Carissa Veliz, Philip Metzger, Skyler Chan, Pia Lauritzen, and many more.
Start anywhere.
Episodes

Sep 16, 2025 • 26min
The Dark Side of ChatGPT Nobody Talks About: Empire of AI, Karen Hao - Book Club (Part 2)
In part two of Empire of AI, Karen Hao goes to the places the press releases don't mention. Someone in Venezuela was paid pennies to review 15,000 pieces of the most grotesque content imaginable every single month. His marriage broke down. His mind changed. He kept doing it because he had no choice.

Mark and Jeremy learn how OpenAI scraped Reddit and GitHub without asking anyone. They cover the researcher Google silenced for warning that language models were getting too big too fast. They talk through why training one AI model produces the same carbon footprint as nearly 5,000 flights from New York to San Francisco. And they ask the question the whole book is building towards: can you scale at all costs and still claim you are doing it for humanity?

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

--
🕰️ TIMESTAMPS
(00:00) Trailer
(02:00) Introduction to Empire of AI & Karen Hao
(03:41) Shifting power dynamics in Silicon Valley
(03:59) Karen Hao’s warnings in Empire of AI
(04:56) Humanity v the relentless race for scale
(06:32) The environmental impact of AI systems
(07:38) Stochastic parrots: Silencing critics
(09:48) Sam Altman loves a military quote
(10:53) What cost humanity?
(15:14) The global race for AI advancement
(18:32) The hidden labor behind ChatGPT
(25:07) The ethical dilemma at the heart of AI development

Sep 13, 2025 • 8min
QUANTUM COMPUTERS Make Too Many Mistakes | Oliver Dial, IBM Quantum
Quantum computers make mistakes — a lot of them. One in every thousand calculations can be wrong.

In this Thinking on Paper Pocket Edition, Mark and Jeremy speak with Oliver Dial, CTO of IBM Quantum, about how researchers are turning unstable prototypes into practical machines.

Oliver explains the difference between error mitigation and fault tolerance, how IBM’s new codes make quantum systems ten times more efficient, and why AI now helps optimize the circuits themselves. He also shares how quantum computing could transform materials science, unlocking lighter, stronger, and smarter materials for the next technological age.

Please enjoy the show.

And remember: Stay curious. Be disruptive. Keep Thinking on Paper.

Cheers,
Mark & Jeremy

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

📺 Watch the show on our dedicated YouTube Channel

Sep 11, 2025 • 35min
Personal AI: Reclaim Your Identity in a World Trained on Everyone Else
When it comes to personal AI, Rob LoCascio knows best. Having spent three decades teaching machines to talk as the founder of LivePerson, he helped create the first commercial chatbots that shaped online conversation.

Now, with Eternos AI, he’s working on the next phase of personal AI: teaching machines to remember us.

Eternos builds personal AI models trained on your voice, memories, and values. These are designed to act as living archives of the self. The vision is twofold: a digital companion that helps you while you’re alive, and a legacy system that continues to share your guidance after you’re gone.

It’s a project that merges AI ethics, data rights, and philosophy. If your thoughts can be modeled, who owns them? When your personality becomes software, is that preservation or replication?

In this conversation, Rob discusses the evolution from LivePerson to personal AI, the architecture behind Eternos, and why he believes digital immortality will become one of the defining industries of the 21st century — transforming grief, mentorship, and identity itself.

As AI moves from automation to imitation, we may be entering an era where the most valuable data is no longer what we produce, but who we are.

Please enjoy the show.

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

--
Chapters:
(00:00) The future of AI starts here
(02:11) How AI is changing human connection forever
(05:55) Where AI meets humanity
(11:54) The story that sparked personal AI
(19:50) Why you must own your AI before it owns you
(20:10) The hidden vault of your data
(22:31) Why voice is the next big interface
(25:11) How AI will slip into daily life
(25:36) Can personal AI be monetized?
(27:14) The fight to regulate AI
(27:52) What AI means for being human
(29:46) Will your knowledge outlive you?
(32:05) How to build your personal AI identity
(33:28) Writing the story of your life with AI

--
Peace and Love. Always. Mark & Jeremy

Sep 8, 2025 • 27min
Seemingly CONSCIOUS AI | Mustafa Suleyman's AI Zombies & The Dawn Of The Dead
Seemingly conscious AI is a real threat. The AI zombies are coming and you're not ready.

A man takes his own life after months of talking to a chatbot. Mustafa Suleyman, the CEO of Microsoft AI, warns that seemingly conscious AI is coming.

In this Thinking on Paper Pocket Edition, Mark Fielding and Jeremy Gilbertson think on paper about Mustafa Suleyman’s essay “Seemingly Conscious AI” and what happens when artificial intelligence begins to act alive.

They explore Suleyman’s warning that these systems could trigger AI psychosis, emotional dependency, and misplaced empathy, and the larger question of how humans will tell the difference between connection and code.

The conversation touches on philosophical zombies, consciousness, guardrails, and the story of Adam Raine, whose death ignited the debate over responsibility and design in AI.

Please enjoy the show.

And remember: Stay curious. Be disruptive. Keep Thinking on Paper.

Cheers,
Mark & Jeremy

--
Timestamps
(00:00) Teaser
(01:17) Adam Raine
(01:28) Who Is Mustafa Suleyman?
(02:36) The Run Up To Superintelligence
(03:57) What Is Seemingly Conscious AI?
(05:04) Philosophical Zombies
(06:14) ChatGPT Is Just A Word Predictor
(07:01) What Does It Take To Build A Seemingly Conscious AI?
(08:08) The Illusion Of Conscious AI
(09:59) How Different Are You To An AI?
(11:39) Repeating The Covid Dynamic
(13:27) OpenAI's Response To Adam Raine
(15:02) The Dystopian Seemingly Conscious Timeline
(18:18) Generation Text-Over-Talk
(18:52) The Utopian Seemingly Conscious AI Timeline
(21:22) AI Guardrails
(23:43) Adam Raine Chat Log
(26:18) Thinking On Paper
(27:01) We Should Build AI For People, Not To Be A Person

--
LINKS:
- Mustafa Suleyman Essay
- Mustafa Suleyman X

--
Other ways to connect with Thinking On Paper:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Sep 7, 2025 • 6min
Kevin Kelly: Emotional Machines and the Future of Attachment
Kevin Kelly believes the next cultural shock won’t come from AI outsmarting us, but from it feeling something, or seeming to. He predicts that once we begin to code emotion into machines, people will start to bond with them the way they do with pets, partners, or even themselves.

This isn’t science fiction. Emotional computation is already arriving: systems that respond with warmth, rejection, even guilt. Kelly argues that dependency won’t look like addiction; it’ll look like necessity. When something that shapes your thoughts never turns off, when your creativity depends on its presence, what exactly is being extended... the human mind or the machine’s illusion of it?

For Kelly, this is the real frontier of AI: not intelligence, but intimacy. A technology that can mirror your feelings may never be conscious, but it will always be convincing.

Please enjoy the (short) show.

📺 Watch the full episode on our YouTube channel. Subscribe to our channel for more interviews like this.

#kevinkelly #techinterviews

Sep 4, 2025 • 36min
MICROSOFT Is Using AI To Kill The Planet (And This Is The Proof) | Enabled Emissions
Artificial intelligence was supposed to accelerate the transition to clean energy. Instead, it’s being used to keep fossil fuels alive. Inside Microsoft, two engineers began asking questions no one wanted to answer. Holly and Will Alpine had joined the company believing AI could help solve the climate crisis. What they found instead was code trained to keep oil flowing.

Through internal documents and contracts, they traced how Microsoft’s cloud tools — Azure, Cognitive Services, machine learning models — were being deployed across the oil and gas sector. Predicting drill sites. Extending refinery life cycles. Cutting extraction costs. The same AI designed for sustainability was fueling expansion.

This isn’t a story about a single company. It’s about the moral architecture of the tech industry — how systems built for optimization erase responsibility. Holly and Will’s decision to speak out exposes a simple, devastating truth: the future isn’t being delayed by ignorance, but by intelligence used in service of the past.

Please enjoy the show.

--
LINKS & RESOURCES
- Enabled Emissions
- Microsoft's Commitment to Sustainability
- Exxon & Microsoft partnership press release
- Microsoft Net Zero

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

--
Timestamps
(00:00) The Hidden Climate Cost of AI
(01:44) Why Experts Call AI an Existential Threat
(03:34) How Big Oil Uses AI to Pump More Fossil Fuels
(07:46) Why Two Microsoft Insiders Started Enabled Emissions
(11:14) Inside AI’s Growing Role in the Energy Sector
(13:08) How much CO₂ comes from burning oil, and what does AI add?
(16:17) The Guardrails Needed to Stop AI From Fueling Emissions
(19:34) Microsoft’s Energy Principles: Policy or PR?
(21:58) What are Scope 1, 2, and 3 emissions — and why do they matter?
(24:26) How does Big Tech’s AI partnership with Big Oil affect Net Zero?
(29:55) Why do we need international policy to regulate AI in energy?
(32:39) AI for Good vs. AI for Fossil Fuels
(34:14) What should humans be?

--
If you would like to sponsor Thinking On Paper, please contact us. Together, we can take the show to the next level.

We love you all.
We love the planet.
Stay curious.
Keep Thinking On Paper.

Sep 1, 2025 • 32min
How OpenAI Went From Nonprofit To Empire - Karen Hao, Book Club (Part 1)
In part one of Empire of AI, Karen Hao traces how OpenAI started as a nonprofit built to stop Google from controlling artificial intelligence. The mission was simple: build AGI and distribute the benefits to all of humanity. Then Elon Musk left, took his money with him, and Bill Gates asked for something that could summarise books.

That was enough. Microsoft wrote a billion dollar cheque and the empire began.

Mark and Jeremy learn how Sam Altman spent years quietly positioning himself at the centre of Silicon Valley before anyone knew what AI was actually for. They cover why only six black researchers attended the world's biggest AI conference in 2016 and what that says about who this technology was really being built for. They talk through the weekend Altman got fired by his own board and how every employee rallied to put him back. And they ask the one question the whole book keeps circling: was it ever really about humanity?

--
Chapters
(00:00) Introduction to Empire of AI
(01:54) The Empire Strikes Back
(05:13) Karen Hao, The Journalist
(07:38) Do You Trust OpenAI?
(10:18) Why OpenAI Made ChatGPT
(11:47) Scaling OpenAI
(12:33) Google, DeepMind and AI for Humanity
(15:12) Greg Brockman
(17:02) Sam Altman's Personal Brand
(24:46) Timnit Gebru
(25:25) How does AI benefit humanity?

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Watch the book club on our dedicated YouTube channel: https://youtu.be/OfQu65-6GuA

Aug 28, 2025 • 43min
AI AGENTS Will Rule The World... But First, The Agentic Web | Andrew Hill
Andrew Hill, co-founder of Recall, believes the next phase of the internet won’t be built on pages or apps, but on swarms of AI agents. Essentially pieces of code that remember, reason, and make decisions on your behalf (and spend Bitcoin), agents will form the new interface layer: where identity, memory, and trust replace passwords, browsers, and brands.

In this conversation, we trace how agentic systems evolve from tools into collaborators, how they will coordinate between each other, negotiate access to our data, and rewire what “using the internet” even means. Hill argues that the next great challenge isn’t making AI smarter, but making it responsible, ensuring the web’s new memory layer remains transparent and human-aligned.

It’s a quiet revolution: the shift from search to delegation, from browsing to briefing, from information to action.

The agentic web is coming. This will help you get ready for what awaits.

Please enjoy the show. And share with your most curious friend.

Watch the show on the Thinking On Paper dedicated YouTube channel.

--
TIMESTAMPS
(00:00) Disruptors & Curious Minds
(01:25) What Is An AI Agent?
(07:15) Emotional AI: Risks & Reality
(12:49) Language, Evolution & AI
(16:59) The Death Of Critical Thinking?
(20:05) How To Trust AI Agents
(24:27) Recall: Explained
(39:49) What Should Humans Be?

--
LINKS & RESOURCES
Learn more about Recall AI Agent training here.
Follow Recall on X
Follow Andrew Hill on X

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Aug 26, 2025 • 29min
Consciousness & The MEANING OF LIFE | Irreducible, Chapter 13
Mark and Jeremy reach the final chapter of Irreducible, a book that refuses to end where science usually stops. Federico Faggin proposes that consciousness is not a byproduct of matter but its foundation. The universe, he suggests, is a network of seities (quantum entities made of consciousness, agency, and identity), each trying to know itself through experience. What looks like evolution or emergence may instead be intention unfolding in physical form.

Their conversation turns to the fault lines between mathematics and meaning. If information only counts bits and signals, what carries understanding? They trace the limits of Shannon’s information theory, question whether AI can ever move beyond pattern recognition, and define what Faggin calls “non-algorithmic comprehension.” Machines calculate. Humans comprehend. That difference might be the last frontier.

As they close the book, Mark and Jeremy confront Faggin’s final provocation: that the distortion in human life comes from the need to feel superior — to nature, to others, to the One. Progress, he writes, must serve consciousness or it becomes perversion. The message is disarming in its simplicity. The universe is not a mechanism. It is a mind trying to remember itself.

And yes, ultimately, it's a love story.

Please enjoy the show.

--
Timestamps
(00:00) Exploring Irreducible: A Journey Through Federico Faggin's Ideas
(04:30) The Nature of Consciousness and the Role of Seities
(09:32) Meaning, and the Human Experience
(13:53) The Vibe Sphere: Music, Symbols, and Communication
(18:48) Distortions in Self-Knowing
(23:42) The Heart, Mind, and Gut: Centers of Knowing
(27:29) What is the meaning of life?

--
Other ways to connect with us:
Listen to every podcast
Follow us on Instagram
Follow us on X
Follow Mark on LinkedIn
Follow Jeremy on LinkedIn
Read our Substack
Email: hello@thinkingonpaper.xyz

Aug 21, 2025 • 36min
What Consciousness Reveals About Reality │ Federico Faggin, Irreducible 12
In Chapter 12 of Irreducible, Mark and Jeremy confront one of the book’s hardest ideas: that consciousness can’t be explained by equations or code.

They trace how probability, prediction, and mathematics fall short of describing a universe that is always becoming. Meaning comes before symbols, and knowing comes before measurement. If the physical world is only an average of quantum states, then comprehension itself is a creative act.

The discussion moves from the illusion of probability to the difference between simulation and emulation, asking what it really means to know. This is not a rejection of science; it’s a reminder that consciousness might be the missing variable.

The closer we get to defining reality, the less certain we are that it can ever be defined.

Please enjoy the show.

--
Chapters
(00:00) Why consciousness vs physics matters
(02:15) “Becoming”: a universe that’s still unfolding
(03:03) What “live information” really means
(05:51) Probability isn't real
(07:43) Creativity & AI: making vs. remixing
(09:24) Meaning vs. syntax: why symbols alone aren’t enough
(17:13) Are you an observer or actor? Your role in quantum reality
(21:55) Reverse engineering anxiety and happiness
(27:09) Flow state: the texture of the present
(31:05) Simulated minds vs. emulated minds
(32:12) Consider our minds blown.

--
Follow and support Thinking On Paper:
PODCAST: https://www.thinkingonpaper.xyz/
INSTAGRAM: https://www.instagram.com/thinkingonpaperpodcast/

--
Thank you. And we love you.


