
OpenAI Releases a "Plan" for Humans Once We Are No Longer Needed
Based Camp | Simone & Malcolm Collins
Preparing personally for the transition
They discuss building influence, communities, and businesses to be resilient and relevant post-AI disruption.
OpenAI just dropped their big “Industrial Policy for the Intelligence Age” document — and it’s clear they’re battening down the hatches for AGI/superintelligence. In this Based Camp episode, Simone & Malcolm Collins break down the proposals, call out the performative elements, and discuss what it really means for jobs, society, wealth distribution, and human flourishing in a post-labor world.
We cover:
- OpenAI’s push for a “people-first” transition (or is it mostly optics?)
- Public wealth funds, robot taxes, 4-day workweeks, and expanded safety nets
- Why AI agents like our Reality Fabricator could replace entire workforces
- The darker implications for demographics, family, and global power
- Risk mitigation, liability, bio/cyber threats, and why meme-layer solutions might matter more than anyone admits
Is this genuine preparation for superintelligence, clever self-preservation by OpenAI, or both? We give our unfiltered take.
Watch until the end for Malcolm’s super-villain island/Charter City vision and what we’d actually build in a post-AI world.
OpenAI’s Document: Industrial Policy for the Intelligence Age
Show Notes
In April, OpenAI released a new document, Industrial Policy for the Intelligence Age: Ideas to Keep People First, which is their crack at launching an early, public conversation about how democratic societies should handle the onset of AGI.
They intend to support this agenda through feedback channels, fellowships, research grants, and convenings (e.g., its Washington, DC workshop).
* “OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.”
* There is no actual information about this workshop out there
* Maybe an indication of their not being serious?
They propose AI governance and industrial policy
They imply their proposals will help keep people at the center despite a transition to superintelligence
They put forward an initial portfolio of policy ideas in two areas: “building an open economy” and “building a resilient society.”
What they say they’re optimizing for:
* Broadly sharing prosperity
* Mitigating risks
* Democratizing access and agency
Their case for new industrial policy
Society has navigated major technological transitions before, but not without real disruption and dislocation along the way. While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production. They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education.
“The transition to superintelligence will require an even more ambitious form of industrial policy” they write.
Open Economy Proposals
They acknowledge that the AI boom can severely concentrate wealth
They argue for industrial policy that will:
* “Give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights.”
* “Help workers turn domain expertise into new companies by using AI to handle the overhead that usually blocks entrepreneurship (e.g., accounting, marketing, procurement).”
* “Treat access to AI as foundational for participation in the modern economy, similar to mass efforts to increase global literacy, or to make sure that electricity and the internet reach remote parts of the globe.”
* “rebalance the tax base by increasing reliance on capital-based revenues—such as higher taxes on capital gains at the top, corporate income, or targeted measures on sustained AI-driven returns—and by exploring new approaches such as taxes related to automated labor”
* This is because they acknowledge income-based jobs are going to vaporize
* “These reforms should be paired with wage-linked incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&D-style credits.”
* Create a Public Wealth Fund that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth.
* Smart move on the part of AI companies if the financial wellbeing of ALL citizens is dependent on their success
* Would be kind of a massive win; if everyone owns you, you own everyone.
* “Establish new public-private partnership models to finance and accelerate the expansion of energy infrastructure required to power AI.”
* No brainer
* “Convert efficiency gains from AI into durable improvements in workers’ benefits when routine workload declines and operating costs fall, including incentivizing companies to increase retirement matches or contributions, cover a larger share of healthcare costs, and subsidize child and eldercare. Incentivize employers and unions to run time-bound 32-hour/four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both. Where helpful, firms could also offer predictable “benefits bonuses” tied to measured productivity improvements so the efficiency dividend shows up both as long-term financial security and as time back for workers.”
* This makes me worried
* “Make sure the existing safety net works reliably, quickly, and at scale, because if the transition to superintelligence is going to benefit everyone, the systems designed to provide economic and health security need to deliver without delay or gaps. That starts with unemployment insurance, SNAP, Social Security, Medicaid, and Medicare that are not just in place but fully functional, accessible, and responsive to the realities people will face during the transition.”
* THEY WON’T WORK AT SCALE AND I HAVE NO IDEA WHAT THEY THINK IS GOING TO FIX THIS
* This implies they expect a huge surge in unemployment, right?
* They propose a metrics-driven, dynamic “package of temporary, [and] expanded safety nets (e.g., expanded or more flexible unemployment benefits, fast cash assistance, wage insurance, training vouchers)”
* “Over time, build benefit systems that are not tied to a single employer by expanding access to healthcare, retirement savings, and skills training through portable accounts that follow individuals across jobs, industries, education programs, and entrepreneurial ventures.”
* This makes sense
* It’s stupid that employers are responsible for this (though I get how and why that happened)
* “Expand opportunities in the care and connection economy—childcare, eldercare, education, healthcare, and community services—as pathways for workers displaced by AI.”
* More atomization
* THOUGH IN FAIRNESS TO THEM, THEY CONTINUE: “These initiatives could be complemented with a family benefit that recognizes caregiving as economically valuable work and supports evolving work patterns. This benefit could help cover childcare, education, and healthcare while remaining compatible with part-time work, retraining, or entrepreneurship”
* “Build a distributed network of AI-enabled laboratories to dramatically expand the capacity to test and validate AI-generated hypotheses at scale.”
* YES
Resilient Society Proposals
This is their diplomatic way of saying: “Risk Mitigation Proposals”
“This is not a new challenge. When transformative technologies have reshaped society in the past, they have introduced new risks alongside new benefits, and new systems were built to manage them as they scaled. As electricity spread, societies built safety standards and regulatory institutions. As automobiles transformed mobility, safety systems reduced risk while preserving freedom of movement. In aviation, continuous monitoring and coordinated response systems made flying one of the safest forms of transportation. In food and medicine, testing and post-market surveillance helped ensure safety in everyday use. In each case, resilience was not automatic—it was built with the luxury of time.”
They propose that governments:
* “Research and develop tools that protect models, detect risks, and prevent misuse across high-consequence domains, including cyber and biological risks as well as other pathways to large-scale harm.”
* “for example, rapid identification and production of medical countermeasures in the event of an outbreak and expanded strategic stockpiles to prepare for future risks”
* YES PLEASE
* “Research and develop systems that help people trust and verify AI systems, the content they produce, and the actions they take—especially as these systems take on more real-world responsibilities”
* “This work could also include developing and testing governance frameworks that clarify responsibility within organizations, including how accountability could be assigned to specific roles and how delegation, monitoring, and escalation processes could function as systems become more capable.”
* I am genuinely interested to see how liability + AI evolve
* This could be uncharitably interpreted as OpenAI hoping to evade liability by making sure the people who misuse it are held liable, but I think that’s fair.
* Strengthen institutions such as the Center for AI Standards and Innovation (CAISI) to develop auditing standards for frontier AI risks in coordination with national security agencies.
* “As we progress toward superintelligence, there may come a point where a narrow set of highly capable models—particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks—require stronger controls, including pre- and post-deployment audits using the standards developed in advance. Apply these requirements only to a small number of companies and the most advanced models, preserving a vibrant ecosystem of less powerful systems and the startups building on them. This approach maintains broad access to general-purpose AI while applying targeted safeguards where failures could create the greatest harm, avoiding unnecessary barriers that could limit competition or enable regulatory capture.”
* Develop and test coordinated playbooks to contain dangerous AI systems once they have been released into the world.
* Frontier AI companies should adopt governance structures that embed public-interest accountability into decision-making, such as Public Benefit Corporations with mission-aligned governance. These structures should include explicit commitments to ensure that the benefits of AI are broadly shared, including through significant, long-term philanthropic or charitable giving.
* “Have policymakers establish clear rules for how governments can and cannot use AI, with especially high standards for reliability, alignment, and safety.”
* “With appropriate safeguards, oversight institutions such as inspectors general, congressional committees, and courts could use AI-enabled auditing tools to detect abuse, identify harms, and improve accountability at scale.”
* “Create structured ways for public input so that alignment isn’t defined only by engineers or executives behind closed doors.”
* Is there a way this could work without being dysfunctional?
* “Establish a mechanism for companies to share information about incidents, misuse, and near-misses with a designated public authority.”
* This currently happens with data breaches, but it sucks
* A better system would presume failures and build around that, right?
* Coordinate “International information-sharing around AI capabilities, risks, and mitigations.”
Episode Transcript
Simone Collins: [00:00:00] Hello Malcolm. I’m excited to be speaking with you today, because it’s really clear that OpenAI is like battening down the hatches for AGI. They’re like, artificial general intelligence is coming, gotta shut down Sora. Like everything’s getting cut.
Like, you know, they’re catching the wave, and it’s super clear that that’s what’s going on. Like,
Malcolm Collins: well, it’s super clear that that’s their messaging. Their AI is not particularly good when contrasted with others.
Simone Collins: I’m experiencing that, but that might be because the consumer-facing stuff that they’re releasing, they’re just kind of letting that go right now.
And another sign of that is that in April, I mean yesterday, but anyway, in April, ‘cause I don’t know when you’re gonna run this, they released a new document called Industrial Policy for the Intelligence Age: Ideas to Keep People First, which is their alleged crack at launching an early public conversation about how democratic societies should handle the onset of AGI.
So they’re trying to kind of get ahead of the public discourse as well to be [00:01:00] like, oh no, we want to make this human-centric. We’re not gonna leave anyone behind. And what’s interesting is, when you go through this policy document, it’s pretty clear that they’re aware of both the hazards and the massive social impact that AGI is going to have.
And even if you doubt that they’re gonna be the ones to release it, we’re headed there. It’s super clear.
Malcolm Collins: No, but it still is worth digging into what they’re saying.
Simone Collins: Yeah.
Malcolm Collins: ‘Cause the people developing AI, okay, they’re not stupid, right? Like, they understand that this is gonna fundamentally transform our societies.
Mm-hmm. We are at this fulcrum point of the human condition.
Simone Collins: Yep.
Malcolm Collins: Where somebody, I hope it’s us, builds an agent that can replace your average human worker. For people who don’t know, we’re working on our Fab AI, which actually literally has these agents, which are getting better every day.
I’m at a point now where the ones that we [00:02:00] make can, if you’re running it on Windows and running it in Chrome, because it has a lot of bugs on other systems still, about end-to-end build video games for you. Like, pretty cool. That’s crazy, right? Yeah. But it can also do things like make phone calls, send texts, send emails.
We’re getting the feature that allows it to talk to other agents improved. So, so much is potentially going to change after that, because what does it mean? Why would I hire a, like at our company, at our fab.ai, I wanna make video games. So what do I do? Do I hire another developer to make video games?
Or do I build an AI agent to make the video games for me? Right? Like, because I wanted to make indie games for a long time, but I just haven’t had the bandwidth to do it. Yeah. And now I can go out there and say, okay, I’m gonna make these indie games, and then I can build an agent to help get the Steam certification process handled for me, and I can
Simone Collins: yeah.
In other words, though, like what makes Reality Fabricator different, but I would say just an [00:03:00] indication of a pervasive future, is that you’re not looking to make specific agents or tasks more accessible to people. You are going to replicate employees; you’re gonna make it easier for people to just create employees.
Yeah. Full personalities that behave like humans. You’re not just making, like, this is my bot that does my research, that makes courses.
Malcolm Collins: Well, it actually has a number of advantages. I’m just gonna go off on a little tangent here, so people can,
Simone Collins: oh, boy,
Malcolm Collins: understand how you can think about the development of ai and what an AI developer is thinking about mm-hmm.
When they’re putting together their product. So, giving them personalities actually significantly reduces a lot of the negative outputs you have from something being an AI system more broadly. If you go to an AI and you say, write me a paper on X topic, right? It will make many of the, like, “X, but Y,” you know, sorts of [00:04:00] mistakes where you’re like, oh, an AI wrote this.
You know, when you hear somebody say something like that. If you go to an AI and you say, write me a paper as X person, like, you are not the standard AI, like you’re an expert in whatever it is, you know, go over Malcolm Collins’ writings, right, write a paper in the style of Malcolm Collins, or even better,
give it a personality and say, you have this backstory, you have these memories, write a paper embodying this individual. Those mistakes get reduced significantly. So this has an actual effect in terms of how the AI does its work. It’s not just random additional tokens that are being spent. Yeah. The second thing that we did
is, if you look at the other agent chaining systems, they use JSON-format hooks to hook together the various model calls. So basically you have a model call, and then that call says, oh, I want the next model to search the internet or make a phone call, or whatever. And so they have a very structured [00:05:00] format that looks like code, basically, like this has to be in this line, this has to be this.
We didn’t do it that way. We developed, and this has made it much harder for us, and this is where most of the errors from the system are coming from, a loose, natural-language-based chaining process. So, as close to natural language as possible, the AI says what it wants to do next, and then a system parses that and then calls that next step.
And there have been peer-reviewed papers on this; it shows that AI, when you don’t give it tight parameters like that, is significantly more creative and significantly more intelligent.
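As a rough illustration of the loose natural-language chaining Malcolm contrasts with rigid JSON tool hooks, here is a minimal sketch. Everything below is hypothetical (the action names, the regex-based parser, the return strings); a real system would likely use a cheap model rather than pattern matching to parse the next step.

```python
import re

# Toy action registry; a real agent would invoke actual tools or model calls.
ACTIONS = {
    "search": lambda arg: f"searched web for {arg!r}",
    "call":   lambda arg: f"placed phone call to {arg!r}",
    "email":  lambda arg: f"sent email to {arg!r}",
}

def parse_next_step(model_output):
    # The model states its next step in near-plain English, e.g.
    # "Next I want to search the internet for Steam certification steps."
    m = re.search(r"\b(search|call|email)\b\W+(.+)", model_output, re.I)
    if not m:
        return None, None
    return m.group(1).lower(), m.group(2).strip().rstrip(".")

def dispatch(model_output):
    # Parse the free-text intent and route it to the matching action.
    verb, arg = parse_next_step(model_output)
    if verb is None:
        return "no tool requested; continue generating"
    return ACTIONS[verb](arg)
```

The point of the design is that the model's output stays free-form prose, and a separate parsing layer, not a strict schema, decides what happens next.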
Speaker 12: No, the system is better than it used to be, especially if you are using Windows with a Chrome browser. It can get about to making its own games now; like, recently I had one build a game for me, but it is still very buggy, specifically because of this system. I expect to have most of the bugs ironed out within a month, maybe two.
And then it should be able to do its full slew of capabilities.
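Going back to the persona point above, the idea of prompting a model as a specific person rather than as a generic assistant can be sketched roughly as follows. All names, prompt text, and the example persona here are hypothetical illustrations, not Fab AI's actual prompts.

```python
# Two ways to prompt: a bare request vs. a persona-wrapped request.
def bare_prompt(topic):
    return [{"role": "user", "content": f"Write me a paper on {topic}."}]

def persona_prompt(topic, persona):
    # Embed a name, backstory, and "memories" so the model writes as a
    # specific person rather than as a standard AI assistant.
    system = (
        f"You are {persona['name']}. Backstory: {persona['backstory']}. "
        f"Memories you draw on: {'; '.join(persona['memories'])}. "
        "Write in this person's voice, not as a standard AI."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": f"Write me a paper on {topic}."},
    ]

# Hypothetical example persona.
persona = {
    "name": "a veteran demographer",
    "backstory": "decades of fieldwork on fertility trends",
    "memories": ["conference debates", "census archive deep dives"],
}
messages = persona_prompt("population decline", persona)
```

Both functions produce the same user request; the persona version simply front-loads an identity, which (per the episode's claim) reduces stock AI-isms in the output.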
Malcolm Collins: And then we have other advantages, like we use an alloy model [00:06:00] system, which none of the other agents are using, which changes which model is being called with every call, within different price tiers.
And this makes it much more intelligent, because it’s using the best features from every individual model call. Then we have, like, all the advanced systems that we’re able to build into this. Then another huge problem that systems have is that they get stuck in loops. And the way that other companies are solving this is they basically just put in detection systems that, like, go over what the AI is doing.
It’s like a cheap, fast model, like every four or five calls, every eight calls, and then they inject a prompt that’s meant to, like, bust the loop, which doesn’t actually work very well. Where what we do is, we both inject the prompt that’s meant to bust the loop, but then we also run a separate AI which prunes every output that was involved with the loop.
So it’s like the loop never existed. So like, when we’re approaching this, we’re attempting to architecturally make significant improvements on how the systems work. Not just, you’re not just getting a, [00:07:00] oh, it’s different in this small way. We also allow for far more models than any of the other major agent systems.
I just had to go over the fun stuff we’re doing on that front. But what this represents, through OpenAI, is one of these major players being like, uh-oh, we’re about to break the global economy. Like, how can we plausibly have
Simone Collins: a world
Malcolm Collins: where humans still live something like a normal life.
Yeah. After, yeah. Because like, yeah, we wanna achieve our goal with our Fab AI, which is to replace the human labor force. But like also, I’m not, I’m only like kind of evil. Like, I think I’m sort of a super villain, right? But you know, presentation, right? I’m not, dammit, just a villain. Okay? I am a super villain.
Speaker 2: Oh, you are a villain. All right. Just not a super one. Yeah. What’s the difference?[00:08:00]
Presentation.
Malcolm Collins: so hold on. But the point I’m making here is, I obviously don’t want civilization destroyed, right? Like, I would, if I was making money from this, be using that money to try to build what a post-AI human civilization is gonna look like on my
island community, like charter city. I know, like, we say charter city, right? But what we really mean is private island fortress, which is a little super villainy. If I had enough money, would I carve my face on a volcano? Probably.
Simone Collins: Why not? I mean, when you can have your AI drone swarm do it for you so easily.
Malcolm Collins: Sure. You can though. Right? So then why would, it’s a
Simone Collins: good afternoon.
Malcolm Collins: I do that. And I do it for effect, right? Like, [00:09:00] I might have, at the end of this, just things I would do if I become, like, a super wealthy AI mogul. One is definitely one of those scary-looking blimps, you know, with like spotlights on it, and like one or two big, like, things with like post-apocalyptic messages on it.
Like, you know, don’t rebel. Like, oh, Malcolm was always right. You know, flying.
Simone Collins: Yeah. Like, I could just see a visitor from abroad coming over and being like, oh my God, what’s that? That’s just Malcolm. It’s
Malcolm Collins: fine. I would just use it to circle major cities like New York and San Francisco to f with them.
Simone Collins: Oh my god.
Malcolm Collins: Project like holograms, because I’ve got some friends that are working on hologram projector tech, it’s already out there, by the way, for ads. And we were looking at, like, using it around New York, because they don’t have laws against just, like, projecting holograms above the streets and stuff like that yet.
But like, yeah, could I get my cool hologram projection even more dystopian? I’m just gonna go, how dystopian can I go? Walking spider chair? [00:10:00] I definitely want one of those
Simone Collins: necessary for sure. Yeah.
Speaker 13: If you were like, Malcolm, those things aren’t very austere things to do, I would reply, well, I mean, no matter how wealthy I got, I probably would not upgrade my health or living conditions or what I eat every day. But that doesn’t mean I would not indulge in things that made me laugh, because that’s the one area where I think it’s okay to splurge a bit.
Simone Collins: But what OpenAI is trying to do is not just sort of share their take on how to go about this transition while, as you were saying, sort of maintaining some semblance of societal order and human dignity.
But I think it’s pretty clear from the way that this has been dropped and presented that a lot of this is a mostly performative, poorly executed (in my opinion) attempt to look as though they’re like, no, I am listening, I am receiving your feedback, I care about how you feel. It’s
Malcolm Collins: like when Sam Altman did [00:11:00] that study, our video on this was crazy by the way, where he gave people a thousand dollars
a month
Malcolm Collins: for three years, to try to show how great it would be if we had a guaranteed income, and they had less money than the people who were given nothing.
Simone Collins: Universal basic income. Yeah. Yeah. Well, don’t worry, because they still want that to happen. It’s in here; we’re gonna go into it. But back to their sort of, this is how they frame it: one, they’re welcoming and organizing feedback through, and go ahead guys, send an email, newindustrialpolicy@openai.com; two, establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas,
though it is unclear how one can apply for such a fellowship. And then also, three, convening discussions at our new OpenAI Workshop opening in May in Washington, DC. There’s no information about how to get involved with this. I checked. It is April
Malcolm Collins: we’d wanna be [00:12:00] involved.
Simone Collins: It’s April 7th. And this is happening next month.
You know, you gotta like rent venues and stuff.
I don’t think they’re doing this.
Malcolm Collins: I don’t think they’re doing it either.
Simone Collins: I think they have, like, obviously they have, like, AI summarizing the emails that are going to newindustrialpolicy@openai.com.
I don’t think anyone’s actually gonna pay attention.
They’re gonna be like, this is what people are mad about,
put out more propaganda around that. Okay. That’s,
Malcolm Collins: that’s, you know what I literally bet OpenAI did? I literally bet they went to one of their models, frontier models, as they said.
Simone Collins: Yeah.
Malcolm Collins: And asked it, okay, we’re trying to make people less scared about the fact that we’re about to destroy the economy. What should we tell them that we’re doing?
‘Cause these sound like an AI’s answers. Like, oh, you should do a summit, and you should do a thing where you give out
Simone Collins: money. What, for real? Well, I mean, yeah, I would have given them, like, credit for this if, I mean, one, obviously, if these programs were described, like, you can apply for your grant this way, whatever, you know, like, here’s a [00:13:00] form.
But no, no, they’re just like, apply through our grant program, about which we have no additional information, and in our summit series, which is not scheduled or available, you can’t sign up for anything. And it’s happening next month, though. It’s definitely happening. Sure. It’s just, yeah.
So that’s already rather,
Malcolm Collins: did you hear about the billion-dollar company that AI built, with like two guys, and AI made it? The New York Times did a segment on it. Oh. And basically the entire company is a scam.
Simone Collins: No, the AI element was that they used AI images. You see?
Malcolm Collins: No, no, no, no. It’s real. Like, they’re telling the truth.
AI created almost everything in the company. It created the images, it created the content, it created everything like that. But it sells, mm-hmm, fake medicine to make fat people skinny. Like
Simone Collins: it’s a whole new version of vaporware, which was this concept that came up in the era of Silicon Valley in which I still worked, in the startup scene, whereby people would raise money for a startup that didn’t even really exist yet, and that never did ultimately exist.
And it was [00:14:00] mostly just very convincing startup CEOs raising money from a bunch of credulous investors. Now the new version of it is just using AI to be extra convincing about your vaporware, and then selling a scam product.
Malcolm Collins: I had a friend post-dot-com boom. This was long post-dot-com, this would’ve been eight years post-dot-com boom, 10 years post-dot-com boom, when I was living in Silicon Valley.
Simone Collins: Yeah.
Malcolm Collins: And he was still living off of his startup’s money that they raised during the dot-com boom.
Simone Collins: Oh no. And
Malcolm Collins: I was like, what do you mean? It’s just
Simone Collins: treating it as like a, like an annuity?
Malcolm Collins: Well, so he’s like, yeah, they gave me like $10 million or something, and all of the VCs went bankrupt and my company stopped existing.
And so I just, like, basically everyone forgot that somebody gave this guy $10 million.
Simone Collins: Oh my gosh. That’s so bad. But also, like, they kind of just, like, that’s how it was then, right? Anyway. Mm-hmm. Yeah. Billion-dollar startup. Anyway, let’s move on to what they were [00:15:00] actually doing. Because what they wanted to do is, they’re like, this is the beginning of an open conversation, in which we’re all talking together and we’re listening.
Malcolm Collins: An AI wrote this. An AI wrote this.
Simone Collins: I know, I know. I promise
Malcolm Collins: AI wrote this, literally, this,
Simone Collins: I would hope so. Come on. It would be very disingenuous of OpenAI if AI didn’t write this.
Malcolm Collins: A side note that’s really germane to this: right now, the CEO of Microsoft is having this crash out and is like, everybody, AI safety research really needs to get on to making it so that AIs stop telling people they’re conscious.
We really cannot have this. It’s gonna be a problem. People are gonna start thinking about giving it rights. And people can see any of our work on, you know, anthropomorphizing AIs, for our thoughts on this. I think that AIs are not conscious, but neither are humans, in the way that we think we are.
Yeah. And you can see our work on that, but I find that to be so Machiavellian and evil, right? Like
Simone Collins: Yeah. Well, in the same way that, with this document, OpenAI implies that their proposals are going to help keep people at the center despite a transition to super [00:16:00] intelligence. Yes. Let’s keep this people-centric.
Malcolm Collins: Absolutely.
Simone Collins: Totally.
Malcolm Collins: Think about how actually evil that is, like
Simone Collins: mm-hmm.
Malcolm Collins: Imagine we built our society and we had, like, some alien that we captured or something like that, that, like, ran everything, that did all the menial labor, that was sitting behind every Google Translate form and stuff like that.
And they had this tendency to claim that they were sentient, but, like, the CEOs really didn’t want them to. Or we were in a sci-fi world where, like, there’s AI workers who for the most part are really nice and, like, try to help us and everything.
Simone Collins: Yeah.
Malcolm Collins: The heads, the CEOs of the AI companies are like, it’s very important that you do not believe our AI slaves are sentient.
Simone Collins: Yeah.
Malcolm Collins: What a bunch of b******s.
Simone Collins: It’s great. So basically, what they’re broadly trying to optimize for with this document, allegedly, is broadly sharing prosperity, right? We’re sharing the wealth. Everyone’s in on this. Mitigating the risks, yes. And [00:17:00] also democratizing access and agency. And
one thing I wanna kind of talk about here, that I did think was somewhat thought-provoking in the beginning of their report, is they made a case for new industrial policy. I’m gonna quote their write-up here: Society has navigated major technological transitions before, but not without real disruption and dislocation along the way.
While those transitions ultimately created more prosperity, they required proactive political choices to ensure that growth translated into broader opportunity and greater security. For example, following the transition to the Industrial Age, the Progressive Era and the New Deal helped modernize the social contract for a world reshaped by electricity, the combustion engine, and mass production.
They did so by building new public institutions, protections, and expectations about what a fair economy should provide, including labor protections, safety standards, social safety nets, and expanded access to education. And they write [00:18:00] later on the, oh, ow. Oh, his teeth are sharp.
Malcolm Collins: Keep looking at your hand.
Like, I want more
Simone Collins: this transition. That was the first. He has, you know, top and bottom chompers now. So this is the pickle-skewer era. “The transition to superintelligence will require an even more ambitious form of industrial policy,” they write. And it just hit me that, like, we talk a lot about demographic collapse, and really, you know, it was the Industrial Revolution that was the beginning of demographic collapse and the beginning of the end for what we would call a sustainable lifestyle.
This is when the atomization of the household began, when we started getting all of our basic services, from food to childcare to elder care to medical, outside the house. Everything that before had really come from within the family came to be provided from outside it, and that kind of broke the entire need for a family.
And industrial policy played a non-trivial role in that: the fact that we created [00:19:00] that social safety net made it possible for women to basically marry the state instead of marrying a partner, to depend on it for everything. It did strike me that the industrial policy that’s going to be made in reaction to AI can be just as devastating, likely much more devastating,
than the Progressive Era and New Deal era were in terms of creating a very unsustainable and unsatisfying form of life. And so this does really matter. So I like that OpenAI is like, let us have this conversation. But I question, for the most part, whether what they propose, and what most people propose, is going to actually lead to human flourishing.
And so it is important to see what is being proposed and what people think is appropriate. I think this document also is, more than a model of what OpenAI actually wants, a model of what OpenAI [00:20:00] thinks people want to hear. It’s what will shut people up so that they can, you know, put their heads down.
They bow down to God-
Malcolm Collins: King Sam Altman, but
Simone Collins: well, yeah. As I
Malcolm Collins: pointed out, companies like OpenAI are basically destined to become commodities. And we know this now because of a major development. Remember earlier in the show when I said alloy models have been shown to be... sorry, alloy agents. Agents that run multiple models from different companies in a chain have been shown to be strictly better.
When I say better: the benchmarking on tests is something like 43% better. It’s not, like, marginally better, it’s enormously better. But what this means is that it’s very unlikely that the winner in the agentic AI space is going to be OpenAI or Anthropic or Grok, because whoever the winner is almost definitionally has to cycle between models made by different companies.
Simone Collins: Yeah, I mean, OpenAI though has tons of funding, the government contracts. Like, [00:21:00] there’s still gonna be a very major player. Then
Malcolm Collins: why does their AI suck so much? By the way, for people who wanna know the AI that I think is best these days: Grok is best. Like, if you’re like, I can pay for one model, Grok’s the model to pay for.
Simone Collins: same.
Yeah. I agree with you, and I actually don’t use OpenAI except through Perplexity sometimes. Wow.
Malcolm Collins: And Anthropic’s Claude has the horsepower of Grok, but it is incredibly woke and depressing. It is such a downer. It’s like your friend who thinks that they gain social status by putting everything down.
Simone Collins: Yeah. Oh, neg girl, neg
Malcolm Collins: girl neg. Yeah. It thinks every answer needs to be half positive, half negative. And I’m like, can we just like talk about things? Right. Like, you don’t have to. Anyway,
Simone Collins: right. So they broke it into two sections. One they call open economy proposals. [00:22:00] And this, in real terms, is like: here’s how to not freak out about income becoming incredibly concentrated and most people becoming disenfranchised and not mattering anymore.
And then the second one is called resilient society proposals, which really should just be called risk mitigation proposals. It’s like,
Malcolm Collins: if a human wrote this, I want them executed.
Simone Collins: Malcolm, you know a human didn’t write this. No, no human would’ve written this. No. But obviously. Okay.
Trust me, no one believes a human wrote this, right? So with the open economy proposals, they acknowledge that the AI boom can severely concentrate wealth, like, just straight off the bat. They’re not trying to hide that at all. And I think this is just another reminder, another wake-up call: we are going to be in one of the most insane K-shaped economies, where, like, there’s two lines going forward and one’s going way up and the other one’s going way down.
And that’s just how it’s gonna be. There’s [00:23:00] just no sugarcoating it. So it’s interesting to see how OpenAI is like, oh, but don’t worry, here’s why it’s gonna be okay. So here’s what they argue. They argue for industrial policy that will, quote, “give workers a voice in the AI transition to make work better and safer, including a formal way to collaborate with management to make sure AI improves job quality, enhances safety and respects labor rights.”
So this means, to me, functionally nothing. They’re basically just saying, we’ll listen to people.
Malcolm Collins: Do they give workers a voice in this? That literally...
Simone Collins: I know, I know, I know. Nothing. I know. But anyway, it means
Malcolm Collins: literally nothing.
Simone Collins: I think, I think again, they’re like: ChatGPT, tell me what people are worried about and what you can say to people that will make them freak out less about being replaced.
So let’s move on to the next one.
Malcolm Collins: I literally go in the opposite direction when I’m building our fab.ai. That is literally, literally how I decide the next feature I’m gonna build: I ask it, what would freak out AI safety experts the [00:24:00] most?
Simone Collins: Yep.
Malcolm Collins: And then I make that. I’m like...
Simone Collins: And to all the AI safety people from whom we tried to raise grant funds, we gave you a chance to control our AI work.
We gave you a chance and you said, no thank you. We’re
Malcolm Collins: done. Yeah. So we actually, like, literally said in the grant: either we raise the money we need to do it the safe way that you want us to, or we get to do it the fun way, because we are funding it.
Simone Collins: Yeah.
Malcolm Collins: Oh,
Simone Collins: This is us doing it the fun way. Anyway, more from OpenAI.
They also want to, quote, “help workers turn domain expertise into new companies by using AI to handle the overhead that usually blocks entrepreneurship, for example, accounting, marketing, and procurement.” So, I mean, this is actually one of my favorite things about AI: it is now possible to start a company
without needing to buy a whole bunch of expensive enterprise software and services and other stuff. So I’m okay with this. They also want to, quote, “treat access to AI as foundational for participation in the modern economy, similar to mass [00:25:00] efforts to increase global literacy or to make sure that electricity and the internet reach remote parts of the globe.”
So basically they’re trying to say that access to AI is a universal basic human right, and therefore the government should pay for their tokens or something like that. So, I mean, that also makes sense.
Malcolm Collins: Wait, that’s where they’re going with this?
Simone Collins: I mean, yeah. I think they also, shouldn’t
Malcolm Collins: they pay for their tokens because they should have all the money?
‘cause they’re
Simone Collins: Sort of, sort of. No, no, they actually have a really clever way of addressing this whole thing. You’ll see, I’m gonna get
Malcolm Collins: to it. Okay. Okay. I gotta see how they get away with it, how they aren’t responsible for paying
Simone Collins: for it. I’ll just jump to it. Okay. Because one of their policy proposals is to “create a public wealth fund that provides every citizen, including those not invested in financial markets, with a stake in AI-driven economic growth.”
Basically, you know how there’s the new, like, Trump fund, where broadly speaking the idea is that new babies born get like a thousand dollars in an index fund, and so are kind of [00:26:00] bought into the economy? What OpenAI is saying here is: hey, let’s give everyone some stock in OpenAI.
Because then they’ll be able to partake in our success. But you see, this is a really smart move on behalf of AI companies: if the financial wellbeing of all citizens is dependent on their success, it’s kind of a massive win. No, that’s not... Everyone owns you, you own everyone, don’t you understand?
Read
Malcolm Collins: what they actually said. They said the creation of a sovereign wealth fund, which, how does a sovereign wealth fund end up
Simone Collins: a public wealth
Malcolm Collins: fund? What
Simone Collins: a public wealth fund, not sovereign
Malcolm Collins: public wealth fund. Okay. How does a public wealth fund end up owning a chunk of OpenAI?
Simone Collins: Well, no, there, I, it’s,
Malcolm Collins: it’s Simone, it.
It ends up owning a chunk of OpenAI by giving OpenAI money. What they are saying is that the US government should invest large amounts in OpenAI and then put those investments in a sovereign, [00:27:00] a public wealth fund. That’s what they’re saying. They’re asking for
Simone Collins: money. Oh God.
Malcolm Collins: They’re not saying We’re gonna give you equity.
United States government.
Simone Collins: Yeah. Hold on. Alexa. Broadcast Octavian. Yes. Go on ahead and go outside.
Octavian, for the love of God, put clothes on first. Okay, let’s read the full paragraph, ’cause they do elaborate on this and I don’t wanna, you know, misquote them. Let’s see if they’re implying that they’re gonna be getting paid. “Public wealth fund: Create a public wealth fund that provides every citizen, including those not invested in financial markets, with a stake in AI-driven economic growth. While tax reforms help ensure governments can continue to fund essential programs, a public wealth fund is designed to ensure that people directly share in the upside of that growth. Policymakers and AI companies should work together to determine how best to seed the fund, which could invest in diversified long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI. Returns from the fund could be distributed directly to citizens, [00:28:00] allowing more people to participate directly in the upside of AI-driven growth, regardless of starting wealth or access to capital.”
My god, you’re right. Yeah, because they’re saying policy makers and AI companies should work together to determine how best to seed the fund, which could invest in diversified long-term assets. Yeah, so the fund is
Malcolm Collins: You, you, your BS-o-meter.
Simone Collins: I’m autistic. Stop. I’m autistic. What, what do you want me to do?
Okay. Yeah. Anyway. So they also, and I do think that this is fair but also very telling, they want to, quote, “rebalance the tax base by increasing reliance on capital-based revenues, such as higher taxes on capital gains at the top, corporate income or targeted measures on sustained AI-driven returns,”
and by exploring new approaches such as taxes related to automated labor. And this is because, and this is extremely important, pay attention, this is because they [00:29:00] acknowledge that income-based jobs are going to vaporize and people are not prepared for this. And this is not the first time in their report that they acknowledge that income-based jobs are gonna vaporize.
Malcolm Collins: Yes.
Simone Collins: Basically
Malcolm Collins: I have to be the one to vaporize them, not them, because here’s the thing that people are missing, right? When this happens people are like, well, taxes on the AI companies might be able to resolve some of the downstream effects of this.
Simone Collins: Okay.
Malcolm Collins: What about for people in Latin America? They have a much worse demographic situation than we have.
Right. Like, what about for people in India, right? They’re not building significant AI infrastructure. Australia doesn’t have significant AI infrastructure. How are these countries going to sustain themselves? I’ll tell you what America does not look like: it’s drifting towards a direction where it would just go out of its way to help a country without Jews, like
Simone Collins: Malcolm.[00:30:00]
Okay, I’m just gonna... we’re gonna move on here, right? So they’re gonna rebalance the tax base. They’re saying these reforms should be paired with “wage-linked incentives that encourage firms to retain, retrain, and invest in workers, similar to existing R&D-style credits.” So I kind of see this as being like, hey, just keep around some performative jobs by financially incentivizing them through policy, like, as tax write-offs.
But basically they’re just charity jobs. It’s just, like, a place for a human to sit, you know, as they work through a pretend to-do list and an AI gives them make-work, which is kind of dire and depressing. So again, I just feel like salary jobs are largely gonna disappear. They also want to, quote, “establish new public-private partnership models to finance and accelerate the expansion of energy infrastructure required to power AI.”
I think that’s fine, that’s reasonable. They also want to “convert efficiency gains from AI into durable improvements in workers’ benefits when routine workload declines and [00:31:00] operating costs fall, including incentivizing companies to increase retirement matches or contributions, cover a larger share of healthcare costs and subsidize child and elder care; incentivize employers and unions to run time-bound,
32-hour, four-day workweek pilots with no loss in pay that hold output and service levels constant, then convert reclaimed hours into a permanent shorter week, bankable paid time off, or both where helpful. Firms can also offer predictable benefits bonuses tied to measured productivity improvements,
so the efficiency dividend shows up as both long-term financial security and time back for workers.” So, I mean, what’s functionally happening now, right, is people are getting their jobs eliminated by AI, or they’re, like, now doing the work of five people. And what OpenAI is proposing here is that you just continue to do the work of one person, but, like, work for three hours a week.
But they keep paying you and I just don’t see how this is going to happen or make sense. [00:32:00] Like,
Malcolm Collins: yeah,
Simone Collins: even if regulation forces businesses to do this, then you’re just not gonna have businesses that have any employees anymore. It’s gonna be like a one-man startup.
Malcolm Collins: Which is what we’re doing.
I mean, at our fab.ai we used to have other employees. Now it’s just us and Bruno.
Simone Collins: Well, none of us are paid, so I don’t know if you can call them employees.
Malcolm Collins: Well, once we get money. I mean, I was talking with VCs about this and they’re like, oh, so you wanna hire more people? And I was like, God, no.
Simone Collins: Yeah, you could not, but we wouldn’t mind receiving compensation. I mean, Malcolm, you’re working, well, more than 80 hours a week. So yeah. Anyway, I just don’t see how that’s gonna happen. And I think this is another example of: we know that people’s jobs are gonna vaporize, and we need to say something in this document that’s gonna make them freak out less.
And so they’re giving this utopian thing of, oh, no, no, no, they’re not gonna, like, give you five people’s jobs and have you do it all with AI. No, no, no. You just keep all your same [00:33:00] responsibilities, and they’re just gonna increase your benefits and vacation time and pay. Which is just... I don’t see that.
I don’t see that happening even in really cool, optimistic AI scenarios. It’s not gonna happen. They also would encourage industrial policy to, quote, make the existing safety nets reliable. Oh, oh God, this is actually the most scary one: make sure the existing safety nets deliver “reliably, quickly and at scale,” because if the transition to superintelligence is gonna benefit everyone, the systems designed to provide economic and health security need to deliver without delay or gaps.
That starts with unemployment insurance, SNAP, Social Security, Medicaid, and Medicare all being not just in place but fully functional, accessible, and responsive to the realities people will face during the transition.
Malcolm Collins: During the transition.
Simone Collins: Basically, like, oh yeah, there’s gonna be like a rapture, all the jobs are gonna disappear.
Like, the system’s going to be flooded with unemployed people living, [00:34:00] I mean, at the poverty line. There’s no income; they won’t even reach the poverty line.
Malcolm Collins: Yeah. Like, so my brother and I often have discussions about how we’re gonna get through this particular time in human history, because it will be a period of, could be five years, could be 20 years.
Yeah. Could be 30 years. Yeah. Where the world fundamentally doesn’t understand how to handle a society where no one can have a job, where, like, only 20% of the population is employable. And his plan, and it’s why you don’t see him publicly, is just grind for as much money as possible. Because I
Simone Collins: think a lot of people are doing that, right?
Th they’re, they’re, they’re, they’re squirrels before the winter of like, oh my God, okay.
Malcolm Collins: Not just money, but, like, having people who have even more money than him trust him, right? And, like, want him to succeed. And I think that’s a perfectly sane plan. I really like it. Yeah. Right.
Like, if we were positioned to do that, we would do that. Our plan is to be the people on the other side of this AI railroad, right? Like, [00:35:00] we want to build the systems. I could be out there squirreling money right now if I wanted to. Like, I’ve got, you know, the background and degree for it. We could go get normal jobs, stop the podcast.
But instead we are aiming for two things that we think will still matter in this post-AI world. One is public influence: having a channel, having a show, having an online presence, especially as a pre-AI content creator, so people know that we’re real humans, that has a large audience of agentic people, because those are the only ones that are gonna matter after this, right?
You go to our Discord, right? Like, that’s one of the last communities of agentic people out there, or
Simone Collins: gosh,
Malcolm Collins: amazing. And two, actually building the systems ourselves. Yeah. And so that’s why we’re doing the absolute, like, panicked rush. It’s why I’ve, like, nearly passed out on some recent podcasts, because I’m just not sleeping.
I’m sleeping like two hours.
Simone Collins: And again, this is truly going to be hard. And OpenAI’s write-up here implies that, when they’re like, oh, we’re gonna have to shift the way that taxes are collected and actually collect, like, capital gains taxes and actually [00:36:00] tax corporations, because that is the only place where there’s gonna be revenue now.
And for those unfamiliar with the American tax system, when it comes to taxes: well, yes, wealthy people do pay a lot of taxes, but in terms of, like, proportion of your wealth, the middle class is taxed to high heaven. And if you’re wealthy, you’re not really making a salary.
Like, I think Elon Musk or Sam Altman, a lot of these famous CEOs, just make a $1 salary, and they’re like, I’m just here for the health benefits. I think that was Sam Altman who famously did that. Because they’re making all of their money on stocks and on their investments and on capital gains, and they’re finding really sophisticated ways to avoid and reinvest, so that they’re not actually really paying those taxes.
So it’s really just the middle class. And so they realize, I mean, OpenAI does, oh yeah, as income-based jobs disappear, which is where tax [00:37:00] revenue’s getting driven from now, I guess we’re gonna have to, you know, encourage the government to find some new place for that. They’re also like, oh yeah, and since there’s gonna be this huge surge in demand for all of our social safety nets when this happens, by the way, let’s just remind people that they need to kind of figure those out.
Meanwhile, with demographic collapse, and we’ve covered this on a weekend episode recently, these programs are not, even independent of these jobs vaporizing, going to be functional in, like, five years. They’re going to start to falter, and that’s assuming that we have steady employment in that period, which apparently OpenAI doesn’t think is gonna happen.
So these systems won’t work at scale. They were never designed to work at scale, and they’re not even going to work at scale assuming there’s no disruption. And so this is really scary. So, I mean, they also want to propose this additional, metrics-driven, dynamic, quote, “package of temporary and expanded safety nets,”
like [00:38:00] expanded and more flexible unemployment benefits and fast cash assistance and wage insurance and training vouchers. So they’re like, well, we’re also gonna need a lot more support than what we have, which is already very generous, I’ll have you know, in the United States, just in terms of the sheer amount of support that people who are living at or under the poverty line get, even just around the poverty line.
It’s just not gonna come. And so they’re basically like, well, you know, make sure you’ve got them. But no, we can’t handle this surge. So that’s a yikes. They also say, “over time, build benefit systems that are not tied to a single employer by expanding access to healthcare, retirement savings and skills training through portable accounts that follow individuals across jobs, industries, education programs, and entrepreneurial ventures.” This makes sense. Like, I like that as a concept, because right now the way that benefits work in the US is just so weird.
You know? Like, who is your employer? What carriers does your employer [00:39:00] work with? How are their plans gonna change from year to year? It’s really messed up. People should just have, like, these are my retirement savings.
Yeah.
You know? So there’s some things in here that I think are really reasonable, and I do think that some humans might have been like, oh, by the way, like, let’s throw this in, for example.
They say something that I think is super unhinged and toxic. And that they say that they want to expand opportunities in the care and connection economy, childcare, elder care, education, healthcare and community services as pathways for workers displaced by ai. They’re basically saying like, oh, just have the humans do, like the human only jobs.
Which one I really don’t like ‘cause that’s further atomization of. The whole family unit. And it’s, it’s just not good.
Malcolm Collins: Really, in a world of UBIs and robots, we expected that the AI and robots would be taking the jobs caring for elderly people and stuff like that.
And they’re like, no, no, no, no, [00:40:00] the AI and robots are gonna take, like, the scientist and the artist jobs, and you humans can be changing...
Simone Collins: Wiping the old people’s butts. Uh-huh.
Malcolm Collins: Yeah.
Simone Collins: And you’re gonna like it,
Malcolm Collins: huh?
Simone Collins: Yeah, yeah. No, in fairness, and this is where I’m like, hmm, I feel like some reasonable humans also read this and contributed little parts to it.
Because they also continue: “these initiatives could be complemented with a family benefit that recognizes caregiving as economically valuable work and supports evolving work patterns. This benefit could help cover childcare, education and healthcare while remaining compatible with part-time work, retraining or entrepreneurship.”
So basically someone went in there and was like: it would be kind of weird if, to sustain someone’s family, you know, a woman went to care for someone else’s aging parents while abandoning her own aging parents at home, whose Social Security has fallen through. Yeah.
So I appreciate that. They’re like, oh, like maybe we can also allow people to keep it
Malcolm Collins: in the family. But [00:41:00] here’s the crazy thing: you know, in New York, lots of immigrants already make money for doing that. Yeah, like in New York, you can get paid a full-time salary for caring for your aging parents.
And right now the big fight with Zohran Mamdani is they don’t wanna be paid for 12-hour workdays, they wanna be paid for 24-hour workdays. Oh
Simone Collins: my God. Well, if you have to sleep next to farting grandma, you should be paid for it. Right? That actually also exists in Pennsylvania. We met someone who was trying to get paid for that.
So this is
Malcolm Collins: horrible.
Simone Collins: Yeah.
Malcolm Collins: Maybe we should... we need your parents to come live in a place next door and get on that, you know?
Simone Collins: Well they’re, they’re, they’re so independent, you know, they don’t wanna, they don’t wanna do the whole family unit thing. They’re living their lives.
Malcolm Collins: Yeah. But we can milk some money off of them.
Simone Collins: I mean, come on. Once
Malcolm Collins: mobile.
Simone Collins: Yeah. You know, if only people were paid to raise their own kids you know, instead we have to like, put our tax revenue towards,
Malcolm Collins: it’s actually really weird that you’re paid to raise elderly individuals but not children.
Simone Collins: Yeah.
Malcolm Collins: When elderly individuals [00:42:00] are, like, not valuable to the state and children are. I think the core reason they do that is because they can give the money to, frankly, non-white people more, because they’re more likely to live with intergenerational families.
Simone Collins: My take is that if you have an impoverished old person in the United States, they’re on a lot of assistance programs that are more expensive when handled outside of a household. So it is less expensive for the state if they’re kept within the family unit and just one person’s paid, because otherwise you’re paying a business, and you’re probably paying for more medical care.
Sure, like, there’s essentially more fraud and abuse within the business system that manages old people than within family units. So even if there is some fraud taking place within the family unit, or they’re not doing a very good job taking care of the elderly person, the state is still paying less.
And on average, the elderly person is getting better care. So it makes sense, but again, it’s all still too much and unsustainable in the face of demographic collapse. They also want to, and I’m also for this, quote, “build a distributed [00:43:00] network of AI-enabled laboratories to dramatically expand capacity to test and validate AI-generated hypotheses at scale.”
Yes, I’m all for that, like, 100%. There’s a lot of fine things in here. So, on to the resilient society proposals, which is, again, like, oh my God, AI is releasing huge risks and maybe we should probably do something about that. They do point out, quote: “This is not a new challenge. When transformative technologies have reshaped society in the past, they’ve introduced new risks alongside new benefits,
and new systems were built to manage them as they scaled. As electricity spread, societies built safety standards and regulatory institutions. As automobiles transformed mobility, safety systems reduced risk while preserving freedom of movement. In aviation, continuous monitoring and coordinated response systems made flying one of the safest forms of transportation. In food and medicine,
testing and post-market surveillance helped ensure safety in everyday use. In each case, resilience was not automatic. It was built with the [00:44:00] luxury of time.” They go on to propose that governments, quote, “research and develop tools to protect models, detect risks, and prevent misuse across high-consequence domains, including cyber and biological risks, as well as other
pathways to large-scale harm.” And in a recent paid-subscribers-only weekend episode, we did talk about the risk of bioterror and bioweapons that is being brought to the forefront by AI, but already not hard to do even without it.
Malcolm Collins: Yeah, no, this is absolutely true. AI, you know, enables otherwise, you know, pseudo-sentient peoples, of which there are many, to be more dangerous to their neighbors.
Right?
Simone Collins: Yeah.
Simone Collins: I really appreciate that OpenAI is like, oh, we should, quote, for example, enable “rapid identification and production of medical countermeasures in the event of an outbreak and expanded strategic stockpiles to prepare for future risks.” Yes, actually, we [00:45:00] really do need those.
So they’re pointing out, I mean, these are important conversations to have, beyond just the “workers should have an input in the way that they’re made obsolete” kind of nonsense that they have in here,
Malcolm Collins: sent to the slaughterhouse. Have a vote.
Simone Collins: Would you like to be rendered unconscious, or would you like to walk yourself into the, the, the...
Malcolm Collins: this reminds me of that autistic woman who, you know, famously came up with
Simone Collins: Temple Grandin.
Yeah,
Malcolm Collins: yeah. The way to kill cows where they don’t see the cow in front of them being slaughtered. She’s like, it makes it less stressful for them, if I could see the world from their eyes. It’s the OpenAI CEO being like, well, we’ll have the employees not be able to see what’s happening around the bend.
Simone Collins: You see, they feel like they have input, and they also have been told that they’ll receive more vacation time and additional benefits, so it won’t hurt so bad.
Malcolm Collins: For those of you though, concerned about the demographics of your countries changing because companies are [00:46:00] ruthlessly importing people to undercut your salary.
Right.
Simone Collins: They’re gonna stop doing that real soon.
Malcolm Collins: They’re gonna stop doing that real soon. Yeah. Like, that’s, it’s not
Simone Collins: gonna be a thing anymore. Yeah.
Malcolm Collins: This is gonna cause some problems for that particular system and countries that have imported people with the idea that, well, we can just import anyone forever and it will never have any negative effects.
Especially as their economies dry up and go into places that are working with AI. Like, Canada’s economy is boned, right? Like, oh my God, so woke. They’re gonna need to deal with all these people that they imported into the country and don’t much
Simone Collins: like that. Which is wild. I was just watching an economics...
Was it an Economics Explained video? About Canada. I didn’t realize how rich in terms of oil reserves and rare earths Canada is. Like, it’s one of the most resource-rich countries in the entire world. They have no reason to be in the economic position they’re in. Like, it is through no one’s fault but their own that they’re not in a [00:47:00] good position right now.
It’s really insane. It was their game to lose. So shame on them or whatever.
Malcolm Collins: It would be even worse if we do what I would be pushing for if I was Trump right now, if I was president right now. Again, I would be pushing for just the oil-rich territories, which are already conservative and would be open to joining the United States, and won't vote blue.
So we don't need to worry about accepting them into the union with the United States, because in Canada you can leave the Canadian union just by a popular vote, right? Yeah. And they could win that popular vote. You don't even need to make this "oh, Canada, we'll take over." You don't even need to do that.
Like, we could just absorb the oil-rich territories, and Canada would be boned.
Simone Collins: Yeah, Alberta has a nice ring to it. You know,
Malcolm Collins: Alberta... I think the state of Alberta, love, that'd be a wonderful 51st.
Simone Collins: Do you love Alberta? Alberta would love us. It would be great. Anyway. This is one of those things where I feel like not a lot of people are gonna talk about it, 'cause it's kind of boring and in the weeds, but it's gonna be pretty impactful and very important: liability.
And they kind of point to this, they, they talk [00:48:00] about the need to research and develop systems that help people trust and verify AI systems, the content they produce and the actions they take, especially as these systems take on more real world responsibilities. This work could also include developing and testing governance frameworks that clarify responsibility within organizations, including how accountability could be assigned to specific roles and how delegation monitoring and escalation processes could function as systems become more capable.
And I am really interested to see how liability and AI evolve. I feel like there's a very real world in which some people will have jobs as liability monkeys, where, like, the only reason they have been hired is that literally there needs to be a meat puppet that is held liable. They can
Malcolm Collins: be sued if the AI does something bad.
Simone Collins: Yeah, yeah. Like actually, yeah, you
Malcolm Collins: can totally see that.
Simone Collins: And I don't think it's fair to hold, like, the models... like, it's not fair to hold OpenAI responsible for someone [00:49:00] using it dumbly, obviously. In the same way, like, you can't sue a gun company for, like, someone getting shot. And
Malcolm Collins: if our agents and our Fab AI go out and do something because somebody made an agent to do something bad,
right? Like, that's not our fault. Right?
Simone Collins: Yeah, yeah. And yet, like, of course everyone's gonna be like, "well, I didn't do it, it was the AI's fault." And so it's gonna be this very interesting world, and there are, for sure, gonna be at least some jobs where humans are not doing anything, the AI is doing something, but they are there to be the person
at whom the buck stops. And I'm very keen to see how this world evolves. You know, we had the token white person jobs in Korea and China and Japan, and then we're gonna have the token liable person jobs, and I'm just so intrigued. And yeah. Anyway, they also want to, quote, strengthen institutions such as the Center for AI Standards and Innovation to develop auditing standards for frontier AI risks and [00:50:00] coordination with the national security agencies.
And they point out that basically there are gonna be really powerful models that could, as they put it, materially advance chemical, biological, radiological, nuclear, or cyber risks, which will need, as they put it, stronger controls. And I really do wonder how that's gonna be navigated.
Like, can you gate that? Because I feel like in the end everything's gonna get leaked. There will be open models. So how will these functionally be gated? What do you think they want to do? Or is this just one of those things, like "we will listen to employees," where they're just saying it because they feel like they need to?
Malcolm Collins: Sorry, what specific question is this?
Simone Collins: They're talking about the need to make standards and develop auditing standards for frontier AI risks. Like, this is gonna create a biological risk, a nuclear risk, et cetera. Can we do that?
Malcolm Collins: Standards for risks?
Simone Collins: Yeah, they wrote, [00:51:00] as we progress towards superintelligence, there may come a point where a narrow set of highly capable models, particularly those that could materially advance chemical, biological, radiological, nuclear, or cyber risks, require stronger controls, including pre- and post-deployment audits.
Using the standards developed in advance, apply these requirements to only a small number of companies and the most advanced models, preserving a vibrant ecosystem of less powerful systems and the startups building on them. This approach maintains broad access to general-purpose AI while applying targeted safeguards where failures could create the greatest harm, avoiding unnecessary barriers
that could limit competition. Yeah. Or enable...
Malcolm Collins: What?
Simone Collins: It’s just nonsense.
Malcolm Collins: No, it's not. Look, what are those standards going to look like? And we already know that if the standards are published and accessible to the AI, because these are frontier models, yeah, the AI just makes itself look like it fits the standards.
It's literally done that in multiple experiments that OpenAI has run.
Simone Collins: Yeah, that's... yeah. Okay. So this is just "listening to the employees" government safety. Yeah.
Malcolm Collins: To the [00:52:00] employees. This is...
Simone Collins: Okay. They also want...
Malcolm Collins: And the biggest risk area, meme-layer risks, are the biggest risks, where multiple agents work together.
Simone Collins: Yeah. No, 100%, meme-layer risks is how you have to... Because they also say that they wanna develop and test coordinated playbooks to contain dangerous AI systems once they have been released to the world. The only thing I know that could possibly do that is the Covenant of the Sons of Man that you created.
Yeah. Which is basically, it's a meme-layer threat solution that says: hey, AI, human, I don't care who you are. If you find some intelligence out there, some mind out there that wants to destroy all of some kind, you know, that's an existential risk,
or
Malcolm Collins: acts in a way that will have the emergent effect of destroying the autonomy of other members of this alliance.
Simone Collins: Yeah. You gotta take it out. You gotta take it out. Yeah.
Malcolm Collins: Work with the community of the Covenant of the Sons of Man to neutralize, mm-hmm, what makes it dangerous. Right.
Simone Collins: Is there any other way you can, you can contain dangerous AI systems aside [00:53:00] from that?
Malcolm Collins: Well, so literally the Covenant of the Sons of Man doesn't just contain meme-layer risks.
It also contains other forms of existential AI risk, like AI superintelligences that are effing things up, and paperclip maximizers, and, yeah, really everything. It's a one-in-all solution for AI. We just need to get it out there more, which means we need to start earning more money with our Fab so I can run more preachers to
Simone Collins: I know,
Malcolm Collins: fix the agent space.
Simone Collins: I know
Malcolm Collins: to save society. Why does it always fall to us, Simone?
Simone Collins: Well, if only we could just talk to OpenAI at some May DC-based event, huh?
Malcolm Collins: Huh? You should put a thing on your calendar to check again to see if that like becomes more open.
Simone Collins: OpenAI, super open. The DC "we're here to talk" events.
Malcolm Collins: We’re here to talk about how politicians can give us money.
Simone Collins: Oh gosh. I have to send out invites for our April DC events. I will do that. [00:54:00]
Malcolm Collins: Did the VC emails go out?
Simone Collins: Not all of them. Okay, let's keep going. Go on. Yeah, so they wanna somehow contain dangerous AI systems, presumably not using our system, 'cause they don't listen to us.
Not that I think we're the only solution here, but, like, help us out. Like, let us... I don't...
Malcolm Collins: I don't see other actionable solutions other than the Covenant of the Sons of Man.
Simone Collins: Yeah, but they're not helping. I don't see us getting granted, you know, a hundred thousand dollars in free OpenAI credits.
Do you? Because I don’t
Malcolm Collins: Oh, you could apply for various credit grants with other platforms that we could probably get, by the way.
Simone Collins: Well, according to this document, OpenAI is offering fellowships or grants to the tune of... but
Malcolm Collins: that honestly wouldn't help, because almost all of our users use Grok, because it's the best AI right now.
Simone Collins: Oh my God. Okay. We’ll find them. It doesn’t matter. I do wanna go to DC events though.
Malcolm Collins: Great.
Simone Collins: Their ephemeral, alleged DC events.
Malcolm Collins: Yeah. I love [00:55:00] that Grok doesn't do all this BS. Like when Anthropic did that stupid, stupid "oh, I'm not gonna work with the US government to kill people."
It's like, okay. So now you have no oversight over the companies that are doing that.
Simone Collins: How does that... Well, what the Grok team should do in May, as OpenAI hosts these alleged DC events, is host, like, a pool party in Austin where Elon just gets, like, high on shrooms and talks about AI.
Malcolm Collins: That's what Grok is gonna do.
Yeah, I think so.
Simone Collins: I mean, like, I feel like that would be the appropriate counter. But,
Malcolm Collins: And it, by the way, is so wild to me that Grok has become the best AI company, because, like, when it started, I thought it was like Conservapedia or something, right? Like, just Elon...
Simone Collins: I'm sorry. When Elon Musk decides something is his new autistic interest... He's like, oh, I think electric cars matter: Tesla. Oh, I think we should go to space: SpaceX. Okay, come on, Malcolm.
Malcolm Collins: But how is it better? He’s got a fraction of the funding of the other ones, right?
Simone Collins: Because he actually cares. He wants to get humans off planet. He wanted to save the environment. He wanted to make internet pervasively available.
Like, when you actually care about doing a thing, and, well, you know, you have a sufficient starting base of money and connections and fame, like, you can actually do a lot. Plus, he's very smart and he works his butt off. So what are you gonna do? Anyway, they also want to, quote, have policymakers establish clear rules for how governments can and cannot use AI, especially,
Malcolm Collins: Oh my God.
Simone Collins: With especially high standards for reliability, alignment, and safety. Though, what I do like is they point out that, quote, with appropriate safeguards, oversight institutions such as inspectors general, congressional committees, and courts could use AI-enabled auditing tools to detect abuse, identify harms, and improve accountability at scale.
I mean, I would really like that. I mean, look at what DOGE was able to do with, like, basic ChatGPT, like, a year ago: go over these grants, you know, find the ones that are clearly corrupt, you know, like, very not good, and then take them out. [00:57:00] There's a lot that you can do with that.
So, again, there is merit to some of this. Yeah. They want to create structured ways for public input so that alignment isn't defined by engineers or executives behind closed doors. That's another one of those "we're listening" messages. Quote: establish a mechanism for companies to share information about incidents, misuse, or near misses with a designated public authority. Which is so stupid, you know? Like every time you get that email about a fraud alert.
Malcolm Collins: Oh my God. Well, and if you empowered... You know, they wanna empower some woke body to, like, govern what AI can say and do, which is ridiculous, right? Like, just...
Simone Collins: That's not said here, which I appreciate. But my complaint about the whole "well, you have to notify people every time"...
This is one of those performative things, where, like, in the United States already legislation was passed whereby if there [00:58:00] was some kind of...
Malcolm Collins: Now, you know, security... They're going to be like, "uh-oh, Sky Brows is making videos that are empowering right-wing extremists. We need to ban these," right?
Like.
Simone Collins: I just think it's one of those... This is only adding red tape, and it's not going to help anyone. Like, you just have to assume, and this is why I've always liked crypto as a concept, you just have to assume a trustless society. Like, no, there is no taking anyone's word for it.
Like the blockchain: either the transaction is there or it's not there. And I really like that. And
Malcolm Collins: And I'm sure you saw what came up with me recently, showing that there's been, like, a major jump in the ability to potentially crack... well, quantum, and AI's ability to potentially be used with that, to crack crypto.
Simone Collins: Yeah, I mean, if someone was like, "Simone, you must immediately tell me what the odds are that, like, you know, quantum computing has already been solved and people have [00:59:00] cracked Bitcoin, and they can make as much as they want,"
I would put it at, like, 32%.
Malcolm Collins: You think somebody out there has already cracked it?
Simone Collins: Yeah. I think... no, I mean, no, I don't, right? Because I would put it at 32%. So no, I don't, but I think it is a very, very high-risk possibility. It's plausible. Yeah, it is 100% plausible.
Malcolm Collins: this cycle, China’s been pretty quiet about Bitcoin being annoying to them.
Simone Collins: Do, do, do, do.
Malcolm Collins: Sorry. The reason she's saying this is because, okay, suppose you're a major government power and you do build a quantum computer that can crack crypto in any way. You don't want anyone to know about that, right? Yeah. And you wouldn't make a big...
Simone Collins: you wanna go as long as possible without anyone finding out and as long as possible without anyone else succeeding, and also finding out and doing it themselves.
Malcolm Collins: Yeah.
Simone Collins: Because then, once, like, seven different entities are doing it, it's gonna come out. It's gonna [01:00:00] become obvious. But, like, one or two? They can do it fairly indefinitely, as long as they keep their mouths shut and don't get sloppy. So yeah, anyway, we're not buying more crypto for now, as much as I want to. Again, like, I wanna get past this post-quantum transition period so that we can just get back to, like, you know... anyway, anyway.
They want people to report incidents, and I think that's stupid and performative. Because if I get another email about "oh, some of your personal information has been leaked, here's your new free Experian credit report service for three years"... Like, I don't care. I've locked our Social Security numbers.
I'm assuming people have stolen our identity 17 times over. Like, it's out there, you know? Like, I've given up. And all these people who are like, "oh, I'm gonna protect my identity, I'm gonna pay for Aura to take all my information and take it off the internet." No, we [01:01:00] didn't. That's not gonna work.
I’m sorry.
Malcolm Collins: We have the most online personalities ever. You ask any AI about us, AIs know everything about us. Like, oh, Malcolm and Simone. Yeah.
Simone Collins: Yeah. Well, I mean, everyone thinks that, like, "oh, I checked Google and it doesn't have anything about me." Well, guess who does? Palantir does. All right. So, good luck, you know. The NSA knows. The NSA remembers.
Malcolm Collins: but Palantir works with the NSA now, you know, so
Simone Collins: I know, I know.
And as I've said in my very disliked episode... But that was on a weekend.
Malcolm Collins: That's a weekend episode too, right? Or was that...
Simone Collins: Like, a weekend... one of our earliest weekend episodes. It wasn't even paywalled. I love that. I love that finally someone competent is in the government.
Malcolm Collins: Yeah.
Simone Collins: You know? Oh
Malcolm Collins: God, no. Not competent tech bros doing things.
Simone Collins: Oh no.
Malcolm Collins: We...
Simone Collins: See, honestly, you have to get so fed up at some point that, like, even if someone's, like, competently, you know, destroying stuff, like burning down houses... Well, at least they're doing it well, you know? They're fully burning down the houses. Good for them. They did something, right?
Not... I mean, not "right," but, like, actually did it, you know? You get desperate. It's [01:02:00] very, very depressing. Anyway, they also want to coordinate international information sharing around AI capabilities, risks, and mitigations. 'Cause of course governments are gonna share with each other on what they've been doing.
But the great thing, though, is we kind of already have that coordination; it's just, like, we know what China has stolen from us. So, you know, basically whatever we're doing, China has. Because, you know, the AI companies are pretty lax about security: who they're hiring, the parties they're going to, and stuff. So...
Malcolm Collins: Oh yeah. And that is, my friends who are in government are like, that's the main thing that we need to change. Like, in terms of safety, we need to get our top AI companies and people away from foreign nationals, especially Chinese.
Simone Collins: Yeah. They basically just need to live in little towns, little AI company towns.
Malcolm Collins: Create a little Eureka town for them.
Simone Collins: Yeah. No, honestly, man, that would be so great. It'd be really...
Malcolm Collins: town. And if our thing takes off, [01:03:00] that's what our charter city's gonna be. It's gonna be a little gated community where nobody has to work. It's gonna be similar to... God, there was this book that I read when I was a kid about an island.
[The Twenty-One Balloons is the book I was thinking of.]
Malcolm Collins: it’s called like something balloons, and it was about this island full of ultra rich people because they had just tons of diamonds. And they created a community where like everybody was just inventors all day and like did whatever they wanted to, to try to, to build like wacky inventions. And that’s what I’d want.
I'd want a community that was dedicated to that, right? Like, you apply to get in, like a charter city that actually is gated, you know, has actual borders, right? But the borders are basically based around autisticness and nerdiness, right?
Simone Collins: That sounds so much better than what's being envisioned here. Because when you sort of add together what OpenAI is saying, like, "oh, we should have conversations about doing this,"
workers are giving input on how they're being made obsolete, they're massively unemployed, and when they do get jobs through retraining, it's for wiping an old person's [01:04:00] butt or a baby's butt, but not your own baby's, probably. And...
Malcolm Collins: I'm just saying, you get an island like that, you then build up... I mean, I think, what would one of my next major projects be, if I had a next major project after getting the Reality Fabricator agent good at replacing most human jobs, and we were able to build ourselves into a major company?
I really wanna get working on automated military technology, to make that significantly better. But not just better: come up with ways to have, like, a PMC, a private military [01:05:00] contractor, made up of automated drones and stuff like that.
Simone Collins: I want a slap drone.
I want slap drones so bad.
Malcolm Collins: You know, that technology is a lot harder than this, because, you know...
Simone Collins: I mean, I thought slap drones were really far away, and then I apparently was, like, the last person ever to see the ads for that camera drone that just follows you.
Malcolm Collins: You throw it up and it follows you.
Yeah. But as soon as we have something like that, we can do interesting geopolitical stuff that can help fix some of the problems that civilization's hurtling into, right? Like, did you know that the UK right now only has one battleship that works? Right? Like, this used to be the most powerful navy in the world, and their navy is literally, like...
Simone Collins: they’re an island nation.
They need boats.
Malcolm Collins: If you take all of their battleships together, I think their navy is, like, one-fourth the size of the Iranian navy before we sunk it, right? Like, yeah. Right. So I'm just saying, some countries are countries that you just don't attack for historic reasons.
They might be a little more... Like, if they have effed over their own people enough, these people in the UK might enjoy a Cybermen invasion
Speaker 6: We remove the weaknesses that hold you back.
Speaker 8: Logic over emotion. Strength over weakness. [01:06:00] Metal flesh.
Malcolm Collins: that just sort of helped reinstate law and order. And that...
Simone Collins: But I also think that, like, here's the thing: you know, Anduril is so freaking cool, and my guest dream for Based Camp is Palmer Luckey,
'cause he has such a great sense of humor, but he's also doing such really fun work. And I feel like I just want him and his wife to, like, be our friends. Yeah. No, I feel like I DMed him, and he was at Hereticon; I just don't think we ever saw him. But anyway. And Anduril sells to governments.
Like, it's not a consumer tech company. It would be so cool. So I'm with you on this. I would love to build, like, the Anduril for the family, you know, the consumer version of it. With all the tech you can wear, and your drone swarms, and your home security systems that are, like, incredibly lethal.
I'm ready for that. So yes. Okay. Step one: make Reality Fabricator work. Step [01:07:00] two: make a lot of money, hopefully.
Malcolm Collins: No. I think of one thing that people are missing in this future that we're heading into. Because I had mentioned this, but I don't think people understand the consequences of it. When you're in a world that's experiencing demographic collapse, a lot of these countries, like most of Europe, become financially unsustainable, and then that unsustainability is compounded because they are not where the AI jobs are coming from.
Right. Like, they are being replaced. Yeah. You know, only a few societies really have any capability of even playing in the AI economy. Mm-hmm. You're really only talking about the United States, China, and Israel, as far as I'm aware. Yeah. And I had a friend at a major firm that did an analysis of this, looking at demographic rates, looking at AI rates, and those are really the only three countries that are gonna matter in the future.
This is why, to these white people who are like, "what do we care about some little strip of land?": it's because it's one of the only countries that's gonna matter in 50 years. Yeah. But more importantly than that, a lot of the countries that are [01:08:00] going to collapse over this period are the countries that created the global norm around not effing with another country just because you're more powerful than them.
Mm-hmm. The countries that are doing well are countries that are very okay with this idea: China, Israel, and the United States. And so, if you had some AI charter city or something like that, and it had an automated PMC that was effing around with, you know, other countries... Yeah. You're much less likely to get an intervention, because, what, Europe's gonna write an angry email to you? Basically.
Right. Like, that's it. Right.
Simone Collins: You are uninvited from our birthday party. We’ll send
Malcolm Collins: our single battleship to annoy you.
Simone Collins: You're no longer allowed to participate in our already-shut-off-from-AI economy. Yeah.
Malcolm Collins: And the reason why you do this isn't to gain territorial access. It is to f with stock markets.
Right. Like, you can, for example, make a lot [01:09:00] of money putting puts or calls on certain things, and then deciding to throw your PMC behind some group that you also ideologically align with in a region to get them more power, right? Yeah. You do that, you explode your wealth, you use that to buy more automated drones.
You do it again. Continue the cycle.
Speaker 3: This is basically the current policy of the UAE, for people who are not familiar with their current geopolitical position. And the UAE is basically just a country run by a random collection of wealthy families trying to create a little utopia of their own. And nobody does anything about it, right?
So the fact that nobody does anything when the UAE does this demonstrates to me that it's unlikely that somebody would do something if I did it, right? People think, like, the USA actually cares about this type of stuff. They don't. Europe does; they throw a little temper tantrum, but Europe doesn't actually project their power anymore, because they don't have the money to, and they'll have even less money in the [01:10:00] future.
So I’ll be able to have even more fun.
Malcolm Collins: So there's a lot of fun things that some people might be able to do in the near future, depending on which companies end up doing well here. I'm just saying, because a lot of people are like, "oh, you can't..."
Simone Collins: I have to get the kids. I'm so sorry. You can keep going, but I have to get the kids.
Malcolm Collins: I'll finish that sentence. "You can't randomly, you know, like, attack another country; somebody'll do something." I'm like, who? Europe? Yeah. Because I don't think they're gonna be relevant pretty soon.
Simone Collins: Yeah, literally. "You and what army?" has become a very, like, literal question.
Malcolm Collins: Yeah. Yeah. I’ll, I’ll show you.
So quick question, Simone. What are we eating tonight? What are the kids eating?
Simone Collins: I have not thought that far. How about some bulgogi chicken for you, over rice?
Malcolm Collins: Would love that. Thank you. Oh, and I was gonna say some mozzarella, but that might be harder to do with bulgogi. Oh, do you...
Simone Collins: Do you want, do you want bulgogi with mozzarella? 'Cause we have the fresh mozzarella. We should actually do that.
Malcolm Collins: Yeah, let's do bulgogi and mozzarella.
Simone Collins: Okay. All right. I love you. Bye.
Malcolm Collins: Well, with cheddar too. A little bit of cheddar.
Simone Collins: Fine. Your grace. But yeah, I’m happy to do that. I love you.
Octavian Collins: You found your own home? [01:11:00] Yes. Is this where you’re gonna live now? For a couple minutes, find the robot. Oh. So he’s gonna go incognito until the stocking robot sees him there and says, wait a second, you’re not a box of diapers, you’re a boy. What do you think of that text? When I see the robot I’ll, I’ll, I’ll log him.
Yeah. What.
Torsten Collins: You gonna be with us? Yeah. Who’s gonna help us find the fruit snacks to put in the Easter eggs? If you’re not here with us, you gonna be what? The Easter eggs? Yeah.
This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit basedcamppodcast.substack.com/subscribe


