

Voices of VR
Kent Bye
Designing for Virtual Reality. Oral history podcast featuring the pioneering artists, storytellers, and technologists driving the resurgence of virtual & augmented reality. Learn about the patterns of immersive storytelling, experiential design, ethical frameworks, & the ultimate potential of XR.
Episodes
Mentioned books

May 12, 2026 • 1h 20min
#1718: Primer on “The Transformation Economy” with Joe Pine: When Experiences Fulfill Aspirations, Meaning, & Flourishing
On February 3rd, 2026, Joe Pine released The Transformation Economy, which is a follow-up to The Experience Economy co-written with James Gilmore and published in 1999. They identified a key pattern of how economic offerings have evolved beyond commodities, goods, and services, and moved into experiences as well as transformations.
Their prescient predictions about these underlying patterns in the late '90s took many years of convincing businesses of their merits. But after a few decades, their core ideas of The Experience Economy have taken root, and now it is much easier to see how consumers have shown that they are willing to pay for memorable experiences.
Now Pine is back at it again with The Transformation Economy with ideas that have been there from the very beginning, but he told me that the world wasn't ready yet, and he wasn't ready either. About 5-6 years ago, Pine started to hear from designers at World Experience Organization events talking about the transformative intent behind their experiences. This was the catalyst indicating to him that it was time to finally write this book, and he started researching the topics of aspiration, positive psychology, human flourishing, and the dynamics of transformation.
I had a chance to interview Pine about The Transformation Economy, and in my write-up below I provide an overview of some of his biggest ideas, some of my personal reactions, how they relate to the XR industry, and finally some of my disagreements on where value comes from. Despite some of my philosophical disagreements with Pine, I still see a lot of value in the frameworks laid out in his book. He describes a roadmap towards a future where the core values driving a critical mass of businesses have evolved to focus on helping their customers fulfill their deepest aspirations, find meaning and purpose, and promote human flourishing.
Progression of Economic Value
Pine & Gilmore first theorized about a hierarchy of economic value in a 1997 article titled: "Beyond Goods and Services: Staging Experiences and Guiding Transformations." They originally called it "The Economic Pyramid," and described it by saying, "The inexorable march of competitive forces drives the advancement of economic offerings over time: commodities are extracted from the environment to make goods, then delivered as services, which are scripted to stage experiences, which then guide those persons or enterprises in a transformation."
"The Progression of Economic Value" figure from page 3 of Pine's The Transformation Economy (2026).
Within their "Welcome to the Experience Economy" article in a 1998 issue of Harvard Business Review and in their 1999 book The Experience Economy, they started calling it "The Progression of Economic Value" as shown in the figure above. In The Transformation Economy on page x, Pine describes each of the five distinct economic genres as well as their associated verb / function:
Extract Commodities (fungible stuff)
Make Goods (tangible things)
Deliver Services (intangible activities)
Stage Experiences (memorable events)
Guide Transformations (effectual outcomes)
There is an inevitable gravity towards commodification, and the antidote is customization. This insight first came to Pine in 1994 after he wrote a book in 1993 titled Mass Customization: The New Frontier in Business Competition that explored how Mass Production was moving into Mass Customization. When customization is applied to a service, then it yields an experience. When customization is applied to an experience, then it has the potential to yield a transformation that could be life-changing. Here's how Pine & Gilmore described this progression to transformations in their original 1997 article,
"The way out of the commoditization trap in which so many service companies find themselves is to move up an echelon of value and stage an experience. But experiences are not the utmost in economic offerings. Just as customizing a good automatically turns it into a service, so customizing an experience turns it into something distinct. If you design an experience so in tune with what an individual needs at an exact juncture in time, you cannot help but change that individual — guiding him to (and through) a life-transforming experience. Transformations are a fifth economic offering, whose value far exceeds that of any other."
Pine also says in The Transformation Economy that "Eliminating human contact is a surefire way to commoditize yourself." Technology has an inclination to move more and more towards automation and creating "frictionless experiences," but I see the value of human intuition, emotion, relationality, community, and meaning being a differentiating factor in the transformation economy. I suspect that it will be really beneficial to deliberately embrace the friction and tension that come from interacting with other humans, as explored in the piece called Deep Soup. I see the movement towards the transformation economy as a bit of an argument against automating too many things with AI because people will be craving authentic human contact.
Key Concepts and My Personal Experience of The Transformation Economy
The Transformation Economy book is written with the intention to become a transformational experience within itself. There are many pointed questions throughout the book that helped shape my overall framing through the lens of my business.
My first reading of the book was focusing on trying to understand the origin, development, and evolution of Pine's provocative ideas to explore within my interview with him. My ongoing second reading of the book has catalyzed me to reconceive some fundamental notions around my identity, as well as the story of why I do what I do with The Voices of VR Podcast.
So much of my work has been driven by a fundamental impulse to bring about change in the world. My motivation to cover the frontiers of emerging technology with XR, AI, immersive storytelling, and experiential design has been because I've seen the transformative power of embodied and immersive experiences to potentially bring about some meaningful changes in the world.
I'm also very much drawn to philosophical frameworks like Process Philosophy that provide some key metaphysical foundations leading to a paradigm shift around the underlying nature of experience and reality itself. Here's a graphic from Andrew Davis' upcoming Whitehead's Universe book that lays out some of the scaffolding of this paradigm shift from substance metaphysics to process-relational metaphysics.
Davis, Andrew M. (Forthcoming in 2026). Whitehead’s Universe: A Prismatic Introduction. Orbis Books.
One of the key concepts that really stuck with me from Pine's The Transformation Economy was at the beginning of the third chapter that says, "All transformation is identity change." Pine cites Suzy Ross' definition of identity as "all the ways you can complete the statement ‘I am . . .’ " He says "From / To" statements are also key where you might say, "I was X, now I am Y."
I really resonate with these definitions of identity since they're very flexible and practical. Once I became aware of these "I am ..." statements, I started to hear them all the time. I found myself naturally making and reflecting upon identity statements, which provide clues to the changes that I aspire to. As an example, I've often found myself saying something to the effect of "I'm more a knowledge artist than a viable business person." So in essence, my aspirational, identity-transformation statement is "I am a terrible business person, but I aspire to become a thriving independent scholar and transformational change agent."
Reading through The Transformation Economy has been really inspiring since it's the first business book I've ever read where I can really see myself in its frameworks. Pine has been giving me language to articulate the possible futures that I'd love to live into, yet the business models around the transformation economy are still nascent, uncertain, not very well specified, and rapidly developing.
Each business will have a unique blend of commodities, goods, services, experiences, and/or transformations that they'll be offering, and so it is unlikely that there will be a universal formula that works across all contexts. I'm still meditating on this statement where Pine claims that your business is what you charge for. He says on page 22,
"A business ultimately defines itself by what it charges for. If you charge for undifferentiated stuff, you’re in the commodities business. If you charge for tangible things, you are in the goods business. If you charge for the activities your people do, you are in the services business. So, economically, you are in the experience business if and only if you charge for the time customers spend with you."
Pine says that experiences are inherently ephemeral, and sometimes the only thing you keep from it is the memory, which can fade over time. He contrasts this with his definition of transformations, which he shares on page 10 as, "Transformations are effectual outcomes that change individuals in a lasting way. Where experiences are memorable, transformations are effectual."
This implies that the business offering of transformations actually has more of an ongoing time commitment. Businesses in the transformation economy will be helping "aspirants" (Pine's preferred term for customers in the transformation economy) achieve their aspirations of transforming from one state into another state over longer periods of time.
Aspirants will need to invest time, be patient with results, make progress, but also deal with periodic regressions. I've been reckoning with how I am what I charge for, and I can't help but think about the logistical difficulty in trying to escape the real-time accounting of how we've conceived of value delivered

May 10, 2026 • 1h 28min
#1716: “Human Spatial Computing” is a Human-Rights-Centered Textbook for XR Design
The Human Spatial Computing book was published by Oxford University Press on February 5, 2026, and I had a chance to interview the co-authors Reginé Gilbert and Doug North Cook a few weeks after it launched. They alternate as the lead author from chapter to chapter, and the book provides a comprehensive overview of designing for XR through a variety of different lenses. The entire book is grounded in human rights and ethics, with a recurring focus on how to design experiences that are inclusive and accessible to as diverse of an audience as possible.
There's a helpful recap of the history of human computer interaction that goes all the way back to the desire to recreate reality in Leonardo da Vinci's paintings and the imaginative worldbuilding of science fiction writers creating new realities. Other topics covered include insights from universal design principles, industrial design affordances, architecture, neuroscience, and ethics. Here's a list of the chapters of the book, which we briefly recap over the course of this interview.
Why Should We Care about Ethics?
The Story of Human–Computer Interaction
What Connects Us All
Universal Design for Spatial Computing
Merging Human Creativity with Technology
The Body
Affordances of Immersive Technology and the Future of Computing
Spatial Computing and the Brain
Where Do We Go From Here?
There are also a lot of questions and activities at the end of each chapter, which makes this Human Spatial Computing book a compelling textbook option for folks teaching XR design.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality

May 10, 2026 • 1h 8min
#1715: “BurnerSphere” Combines Immersive Documentary, Social VR, and Digital Twin of Burning Man
BurnerSphere is part immersive documentary, part social VR platform, and part digital twin of Burning Man. It's a standalone VR experience that launched in early alpha for both Quest and Steam on July 22, 2025. It's an evolution of the original Burning Man on AltSpace that I covered back in episodes #940, #960, & #1192, and now they have their own standalone social VR platform with a digital twin of Burning Man that creates a spatial context for a ton of immersive documentary content shot in 360-degree video, stereoscopic 180-degree video, gaussian splats, 3D-modeled recreations, 3D photos, and 2D photos and videos. It's a vast archive with a completely free taster, but you can also pay camp dues to become a member and get access to all of the footage as well as special events.
I interviewed the cofounders of Big Rock Creative (BRCvr) Athena Demos and Doug Jacobson back in November 2025 to get the latest updates on what's happening with their hybrid immersive documentary archive and nascent social VR platform.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality

May 5, 2026 • 45min
#1714: Lincoln Center for Performing Arts Immersive Programming Overview with Jordana Leigh
The Lincoln Center for Performing Arts has been staging a variety of different types of immersive experiences as a part of their interdisciplinary programming, and I had a chance to catch up with lead immersive programmer Jordana Leigh at Venice Immersive in order to get an overview of what they've been showing, the XR experiences they've commissioned, how audiences connect with each other around the unique transportive affordances of the experiences presented there, and generally how they're using XR to bring new and diverse communities together in New York City. We also talked about their Lincoln Center Collider Fellowship for XR artists to advance their artistic practice through a range of either open-ended R&D or time and space for innovative experimentation. Leigh was scheduled to present at the IDFA DocLab R&D Summit, but had some travel delays. Hopefully this conversation helps to explain the many ways that the Lincoln Center for Performing Arts is totally in alignment with some of the broader themes of de-isolating and revitalizing civic society that are covered extensively in this report.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality

May 1, 2026 • 43min
#1713: CIIIC’s €200 Million in Public Funding: The Creative Industries Immersive Impact Coalition
The CIIIC is the Creative Industries Immersive Impact Coalition based out of the Netherlands, which will be spending about €200 Million in public funding over the next five years. It is a really exciting development in Europe that is promoting the development of Immersive Experiences (which they abbreviate IX). They will be cultivating knowledge and methods of experiential design, developing immersive talent and human capital, cultivating immersive ecosystems and facilities, catalyzing innovation via various projects, and creating an overall synergy across all of their efforts.
For a comprehensive recap of CIIIC and what they're doing, be sure to check out the CIIIC section starting on page 62 of the extensive 121-page IDFA DocLab Think Tank Report that I wrote, which was recently published on April 21, 2026. I provide a bit more context on this report in the intro and outro of this episode, which is an oral history interview with CIIIC Program Director Heleen Rouw at UnitedXR in December. This conversation forms the basis for that section, but also has some additional updates on their various efforts including:
Artistic & Design Research for Immersive Experiences (ADRIE) (5 projects)
Phase I of Innovation Impact Challenge: IX in Urban Development (17 projects)
Phase II Innovation Impact Challenge: IX in Urban Development (10 projects)
The "Shared Realities" consortium is part of the initial ADRIE cohort, which includes a collaboration between IDFA DocLab, Amsterdam University of Applied Sciences, MIT Open Documentary Lab, PHI, ARTIS Planetarium, and a number of XR studios based in the Netherlands including POPKRAFT, Polymorf, Studio Biarritz, WeMakeVR, ALLLESSS (Ali Eslami), Ado Ato Pictures (Tamara Shogaolu), and Cassette (Nu:Reality). Be sure to check out episode #1697 to hear more about how the Shared Realities initiative will be facilitating experiential designers and artists collaborating with researchers to see if immersive art can help to revitalize civic society.
This interview with Rouw provides an overview of the CIIIC, how they're defining "immersive" to be much broader than any single technology, and why they think immersive will be the next big wave of innovation that can help promote public interest values.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality

Mar 13, 2026 • 1h 4min
#1712: Preview of SXSW XR Experience 2026 with Blake Kammerdiener
I interviewed SXSW XR Experience 2026 curator Blake Kammerdiener about this year's selection, and how immersive artists are using Generative AI in a series of different projects. Below is the selection (ordered from longest to shortest). This year's program runs daily from 11am to 6pm CDT, Sunday, March 15 through Tuesday, March 17, 2026.
XR Experience Competition
Escape The Internet (Part 1) (50 min)
Inter(mediate) Spaces (45 min)
Winterover (45 min)
Fabula Rasa: Dead Man Talking (30 min)
Frustrain: Trainman (30 min)
The Forgotten War (30 min)
Watsonville (30 min)
Fillos do Vento: A Rapa (28 min)
Crafting Crimes: The Mona Lisa Heist (20 min)
Love Bird (20 min)
The Baby Factory is Closed (20 min)
Lionia Is Leaving (18 min)
Body Proxy (15 min)
Cycle (15 min)
The Great Dictator: A participatory AI installation about power, rhetoric, and memory (15 min)
XR Experience Spotlight
The Clouds Are Two Thousand Meters Up (62 min)
The Great Orator (50 min)
Lesbian Simulator (40 min)
A Long Goodbye (35 min)
Dark Rooms (35 min)
Lacuna (34 min)
The Dollhouse (24 min)
Reality Looks Back (21 min)
Insider Outsider (12 min)
loss·y (10 min)
Lost Love Hotline (10 min)
Out of Nowhere (10 min)
Spectacular: The Art of Jonathan Yeo in Augmented Reality (10 min)
Ascended Intelligence (9 min)
MIT Open Documentary Lab’s AR and Public Space Artist Collective
Layers of Place: Austin [90 min total]
ORYZA: Healing Ground (15 min)
The Founders Pillars (15 min)
Open Access Memorial (15 min)
Paper Boat (15 min)
Humble Monuments (15 min)
Moving Memory (15 min)
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality

Mar 13, 2026 • 52min
#1711: Mission Responsible 3: Discussion on AI Ethics with 6 Winners of Polys Ombudsperson of the Year
This is the panel discussion of Mission Responsible 3 featuring the winners of the Polys Ombudsperson of the Year, including: Kent Bye (2020), Avi Bar-Zeev (2021), Brittan Heller (2022), Micaela Mantegna (2023), Ingrid Kopp (2024), and Nonny de la Pena (2025). Introduced by Renard T. Jenkins. The big topic this year was AI, but there was lots to say about XR as well.
Here are some links that I mentioned in the introduction that were referenced within the show:
"Freedom of Expression in Next-Generation Computing" by Brittan Heller
XR Guild's Principles
US sanctioning individual ICC judges for decisions they don't like.
The Polys 6th Annual Immersive Awards takes place next weekend on Sunday, March 22, 2026 at SVA Theatre in New York City.
This is a listener-supported podcast through the Voices of VR Patreon.
Music: Fatality

Feb 14, 2026 • 54min
#1710: When Integration Becomes Subordination: Big Tech Parallels in Carney’s Davos Speech & Untethering from the AI Big Brother
Canada’s Prime Minister Mark Carney gave a rousing speech at the World Economic Forum on January 20, 2026 about the rupture of the rules-based order of the globalized economy, and he emphasized the need to build new coalitions to sustain the pressure coming from the United States' emerging authoritarianism. Carney said, “Great powers have begun using economic integration as weapons, tariffs as leverage, financial infrastructure as coercion, supply chains as vulnerabilities to be exploited. You cannot live within the lie of mutual benefit through integration, when integration becomes the source of your subordination.”
Just as globalized economic integrations are being weaponized by the United States, Big Tech's integrations woven throughout our lives will continue to become the source of our own subordination, especially as surveillance capitalism heads towards its logical conclusion of an all-pervasive AI Big Brother, perhaps eventually explicitly tied into authoritarian governments.
The AI Big Brother has already started within the context of private companies, but under the outdated Third-Party doctrine of the Fourth Amendment, any data given to a third party has "no legitimate 'expectation of privacy'." From UNITED STATES v. MILLER (1976): "The Fourth Amendment does not prohibit the obtaining of information revealed to a third party and conveyed by him to Government authorities." So the US government can request almost any data shared with a third party without a warrant, and given Big Tech's cozy relationship to a democratically-backsliding US government, who knows what kinds of backroom deals are being made to automate data sharing.
We're already in an era where almost all data given to a third party is not considered to be private, and you can start to see some early indications for how this can go wrong in Taylor Lorenz's interview with 404 Media's Joe Cox about ICE's surveillance technologies. It seems likely that we are entering into the very early phases of Orwell's worst nightmare of a 1984 surveillance state powered by Big Tech's AI.
In this op-ed podcast episode, I connect some dots between Carney’s Davos speech about the hegemonic forces in the geopolitical sphere and the parallels with Big Tech's push towards "contextually-aware AI," which is just an always-on AI that is surveillance capitalism on steroids. Carney's speech provides a lot of insights for how Canada is navigating this new reality where the rules-based order on the international stage seems to be dissolving. One of his deepest insights is to simply name the truth, and to describe precisely what is happening. He refers to a powerful story from Vaclav Havel's The Power of the Powerless where shopkeepers eventually "took their [propaganda] signs down" during communist rule after they were no longer willing to live within a lie.
Carney says: "The system's power comes not from its truth, but from everyone's willingness to perform as if it were true, and its fragility comes from the same source. When even one person stops performing, when the greengrocer removes his sign, the illusion begins to crack. Friends, it is time for companies and countries to take their signs down."
Taking down metaphoric signs breaks the spell of the collective performative ritual that sustains the power of an authoritarian regime. Taking a sign down is also the embodiment of the first lesson of Timothy Snyder's On Tyranny, which is "Do Not Obey in Advance." This lesson is certainly easier said than done, and I've been surprised how pervasive and powerful the chilling effects that push us to remain silent can be. I find myself self-censoring, going dark on social media, and just generally not speaking the full truth as I see it. So this episode is a step in that direction of trying to name things as I see them, but also drawing the parallels between these broader political contexts and how they're collapsing into the technological contexts.
As a society, one sign we've been holding up is that we've collectively been willing to mortgage our privacy by giving our data to Big Tech because it allows us to get access to software and services for free. But as the line between Big Tech and authoritarian governments continues to blur, I expect to see more people start "taking down their signs" of tolerating surveillance capitalism by tapering down or cutting off their relationship completely.
I'm already seeing some signs of this resistance to Big Tech starting to happen with the resurgence of dumb phones to counter smartphone addiction, quitting social media to reduce the algorithmic filter bubbles that curate our realities, and implementing a digital detox to unplug from the Internet in favor of more embodied, immersive, and experiential entertainment. We're starving for authenticity as social media networks are flooded with AI slop because it makes numbers go up, yet it is a profoundly dehumanizing experience that feels like the logical extreme of novelty-optimized AI dopamine machines leading us to an Idiocracy dystopian future.
With the democratic-backsliding in the US, the Trump Administration has been following the "seven basic tactics in the pursuit of power" as detailed by The Authoritarian Playbook (2024) as they politicize independent institutions, spread disinformation, pursue the unitary executive theory at the expense of checks and balances, quash criticism and dissent, scapegoat vulnerable and marginalized communities, work to corrupt elections, and stoke violence with their Operation Metro Surge.
I'm seeing the abandonment of due process, and I've lost all faith in the enforcement of the rule of law as the Department of Justice has been weaponized. This abandonment of the rules-based order of the rule of law has a profoundly destabilizing psychological impact, and other countries have also been reckoning with it. In response, the Prime Minister of Canada Mark Carney has called for new coalitions of the middle powers given that the United States has chosen to abandon rules-based order in favor of coercive negotiating techniques. The US is leveraging their asymmetry of power to turn all relationships into a transaction that can be won or lost. Canada is unwilling to bend the knee to these authoritarian ways, and is making the call to arms for all middle powers to unite in order to resist the power of these hegemonic forces. There is a real strength in collective resistance, and so Canada is taking a hybrid approach towards coalition building. Their approach is primarily led by collaborating with countries that have shared values, but they also recognize the need for more pragmatic, ad-hoc, "variable geometry" coalitions based upon mutual benefit or interest.
Just as countries are thinking about how to maintain their sovereignty, we are all entering into a new era that has moved beyond a rules-based order. So people around the world are also thinking about how they can maintain their own sovereignty in the context of Big Tech's push towards an all-pervasive, AI surveillance machine.
One recent example of Big Tech's surveillance aspirations comes from an internal Meta memo shared with the New York Times arguing that the political chaos in the world right now makes it the perfect time to push out controversial tech that would normally get a lot of blowback. They're considering launching facial recognition features for their RayBan-Meta AI glasses as they callously characterize this moment as a "dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.” This type of realpolitik moral reasoning follows the logic of surveillance capitalism, which completely ignores the broader potential cultural and legal impact of their technologies in favor of their short-term gain. I've previously written about how Meta's dream of contextually-aware AI is a dystopian privacy nightmare within the Proceedings of Stanford's Existing Law and Extended Reality Symposium.
The always-on and persistent sensing from face-mounted cameras embedded into glasses is the next frontier for Meta, but this persistent capturing from wearable technology across all contextual domains will start to change the legal definition of our "reasonable expectation of privacy." This is because part of the legal test laid out by Harlan's concurring opinion in KATZ v UNITED STATES (1967) is what "society is prepared to recognize as reasonable." In other words, whatever the culture accepts as the boundary between public and private contexts becomes a part of the legal test for what the government considers to be protected by the Fourth Amendment. So an always-on AI surveillance from wearable face cameras will inevitably change these legal definitions and weaken everyone's Fourth Amendment protections.
Even if all of the raw data remained on these devices, inferences made from those devices would not be protected if they're shared with a third party. Imagine that a noisy raw camera feed from Meta's AI glasses is processed and yields some incorrect inferences from computer vision algorithms or hallucinations from a large language model; these incorrect inferences could end up in the hands of a government and be used as evidence against you in a court of law. The film Coded Bias does a great job of elaborating how marginalized communities have been harmed by biased algorithms that have been integrated into automated decision-making in the context of policing, housing, employment, etc.
Carney's roadmap has many lessons that we can also apply to our own encounters with this new reality. He named the truth of this situation and is taking Canada's metaphoric sign down, signaling that they are no longer willing to live within a lie. Canada is untethering itself from its relationship to the United States as the US takes an authoritarian turn into dem

Feb 5, 2026 • 1h 35min
#1709: Ian Hamilton on Getting Fired from UploadVR & Concerns on AI Authorship in News
On Wednesday, January 28, 2026, Ian Hamilton announced on Bluesky that "I've been fired from UploadVR." He was the editor in chief at UploadVR, and he wrote a Substack post titled "Ian is Typing" on January 30th detailing how his co-workers were pushing to do a test of a "clearly disclosed AI author for UploadVR," and that he had three specific concerns: that it be brief, that readers have the ability to turn off and hide all AI-authored posts, and that human freelancers have the right of first refusal. Hamilton claims to have tried to raise these concerns in the context of Slack, but that the experiment was going to proceed regardless. He writes, "Unable to shift the direction of my colleagues and out of options to affect what was coming, I stepped out of Slack and sent a final email to them on Wednesday morning with a number of my contacts in the industry copied, raising some of these concerns. Not long after, I was called by my boss and fired."
I spoke with Hamilton last Friday, after his Substack post, in order to get more context on what led to his departure. Hamilton claims that UploadVR Editor & Developer David Heaney and UploadVR's Operations Manager Kyle Riesenbeck were behind the push to test this clearly disclosed AI author on UploadVR, and that ultimately the proposed test was a business decision made by Riesenbeck. It was a decision that Hamilton ultimately disagreed with, and he cites it as the primary factor behind the behavior that ultimately led to his firing. (UPDATE Feb 5, 2026: It is worth noting here that UploadVR has yet to run this AI bot author test, but that the proposed test was the catalyst for Hamilton’s behavior).
The specific reasons and circumstances around Hamilton's firing are publicly disputed by Heaney, who reacted on Twitter after Hamilton's Substack post went live by saying, "It is indeed only one side of the story. And an incomplete telling of it, with key omissions and wording choices that serve to paint a misleading picture." In another post Heaney says, "I can't get into it more at this point for obvious reasons, but don't believe everything you read, especially a single side of a complex story." During our interview, I asked Hamilton for his reaction to Heaney's claims that he's being misleading, and he did provide more context on what led up to his firing. Ultimately, it does sound like the proposed AI bot author test was the primary catalyst for Hamilton, and that this disagreement may have led to other behaviors and reactions that could also be reasonably cited for why he was fired. UploadVR may have a differing opinion as to what happened, but no one from UploadVR has made public comments beyond what Heaney has said on Twitter. I have extended invitations to both Riesenbeck and Heaney to come onto the podcast for a broader discussion about AI, but nothing has been confirmed as of the time of publication.
My Personal Take on AI: Technically, Philosophically, Legally, and Culturally
Public discourse around AI has split into a binary of Pro-AI vs Anti-AI, and while my personal views cannot be easily collapsed into one side or the other, I'd usually take the Anti-AI side of a debate if given the opportunity. I do think some form of AI is here to stay, and will be around for a long time, but right now there is a lot of hype and deluded thinking on the topic. I see AI as a technology that consolidates wealth and power, and so a primary question worth asking is “Whose power and wealth is being consolidated?” Karen Hao's Empire of AI elaborates on how past patterns of colonialism are playing out again within the context of data and the field of AI, as well as how scaling with more compute power has been the primary mode of innovation in AI; Gary Marcus has been pushing back against this "Scale is All You Need" theory for many years now.
Technically speaking, I'm more of a skeptic in the short-term around LLMs, along the lines of the Stochastic Parrots critique that is elaborated upon by Emily M. Bender and Alex Hanna in The AI Con, but also Yann LeCun's call for more sensory grounding, as well as Gary Marcus' calls for more neurosymbolic cognitive architectures. AI has always been a marketing term, as elaborated in Dr. Jonnie Penn's Ph.D. thesis on "Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century." My perspective on AI has been informed by 122 unpublished interviews with AI researchers, many of whom also cite how the empirical results often outpace the theoretical results (i.e. there are often benchmark improvements without full knowledge of the theoretical foundations behind them, resulting in plateaus rather than monotonic progress). I've also spoken to over 100 XR artists, storytellers, and engineers about AI on the Voices of VR podcast over the past decade. When the context is bounded, and the data are gathered while being in right relationship, then there can be some real utility. But there are also many gaps and ways that LLMs cause harm to marginalized communities. See the film Coded Bias for more details on that front.
Philosophically speaking, Process Philosophy has had a big influence on me, so check out my conversation with Whitehead scholar Matt Segall on AI. Timnit Gebru and Émile P. Torres' paper on the TESCREAL bundle has also been a key influence that deconstructs the influence of philosophies like Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism on AI research. I don't think AI is conscious, but I lean towards Whitehead's panexperientialism, which sees experience as going all the way down. This perspective also helps to differentiate humans from machines by looking at things like emotions, meaning, value, intention, context, and relationships, all of which can easily get collapsed if only looking through the lens of “intelligence.” I'm curious about Data Science as Neoplatonism ideas, and Michael Levin’s work on ingressing minds (influenced by Platonic forms and Whitehead's eternal objects) and his general calls for SUTI: the Search for Unconventional Terrestrial Intelligence. I also love Timothy E. Eastman’s Logoi Framework as elaborated in his book Untying the Gordian Knot: Process, Reality, Context. He highlights the triadic nature of reality as input-output-context, with the logic of actualizations being Boolean logic and the logic of potential being non-Boolean logic, which is something that Hans Primas elaborates on in Knowledge and Time. So AI needs to account for the pluralism of non-Boolean realities, but it often collapses them into a singular formal system that collapses situated knowledges.
Also see James Bradley’s “Beyond Hermeneutics: Peirce’s Semiology as a Trinitarian Metaphysics of Communication,” which elaborates on Charles Sanders Peirce’s semiotics as a triadic system that includes a sign, object, and interpretant, whereas LLMs take a nominalist, dyadic approach that collapses the deeper meaning or interpretation (see computational linguist Bender’s elaboration of this argument in The AI Con). Also see Michèle Friend’s Pluralism in Mathematics: A New Position in Philosophy of Mathematics, as it applies Gödel's Incompleteness to the foundations of mathematics itself and points out the limits of Boolean logic and the need for an overall paraconsistent logic. AI researcher Ben Goertzel wrote a paper on "Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation." Here's a talk I gave with some of my preliminary thoughts on AI. I also have a lot more thoughts and resources in my write-up from when I argued against AI in a Socratic debate at AWE 2025. Also check out this recent philosophical talk that digs into some of the philosophical foundations of my experiential design framework and Whitehead's panexperientialism.
Legally speaking, I generally advocate for a relational approach as well as open source, decentralized approaches, but I also see a need for some legal checks and balances around privacy. I elaborate on these in a paper titled "Privacy Pitfalls of Contextually-Aware AI: Sensemaking Frameworks for Context and XR Data Qualities" that was written for Stanford's Cyber Policy Center's "Existing Law and Extended Reality" Symposium. But there is no sign of any new comprehensive federal privacy law in the US, which is where these major Big Tech companies are located. So the privacy implications of contextually-aware AI remain extremely fraught, especially with the trend of democratic backsliding in the US and beyond.
Culturally speaking, I find the forced integrations of AI into many layers of UX / UI to be largely non-consensual, leaving me with the feeling that AI is being shoved down my throat when I didn't ask for it; I usually avoid using it whenever I can. I don't want AI to write for me, because writing is the process of thinking for me, and I'd rather think for myself (see the “thinking as craft” argument from Hanna in The AI Con). I find the experience of AI slop videos, photos, and text to be profoundly dehumanizing, and it makes me want to retreat from any social media space where AI slop is flooding the feeds. I hate the experience of having to question the provenance and legitimacy of everything I see and hear, and the AI-driven misinformation campaigns are a blight on democracy. I really resonate with the view that AI is the Aesthetics of Fascism, considering the extent to which authoritarian leaders are using AI slop to push their democratic backsliding agendas.
So my perspectives on AI don't fit neatly into a single category, but I do resonate with some of the Anti-AI, Neo-Luddite sentiment. I'd point to Emily M. Bender and Alex Hanna's The AI Con, Karen Hao's Empire of AI, Shoshana Zuboff's Age of Surveillance Capitalism,...

Dec 7, 2025 • 1h 41min
#1708: How Process Philosophy Centers Experience. A Prismatic Tour of “Whitehead’s Universe” by Andrew M. Davis
In this engaging discussion, Andrew M. Davis, a process philosopher and author, dives into the world of Alfred North Whitehead’s process philosophy. He emphasizes how human experience shapes reality, exploring concepts like prehension and the integration of mind and matter. Davis also highlights the importance of creativity in understanding our universe and the value-laden nature of existence. Plus, he connects Whitehead’s ideas to art and education, envisioning a re-enchanted cosmos through process and participatory co-creation.


