
Dean Ball on open models and government control
Interconnects
Practical usability and tooling gaps for open models
They highlight rough launches, runtime and tooling challenges, and how closed labs hold an advantage in polished deployments.
Watching history unfold between Anthropic and the Department of War (DoW), it has been obvious to me that this could be a major turning point in perspectives on open models, but one that'll take years to be obvious. As AI becomes more powerful, existing power structures will grapple with their roles relative to the companies building it. Some in open models frame this as "not your weights, not your brain," but it points to a much bigger problem when governments realize this.
If AI is the most powerful technology, why would any global entity let a single U.S. company (or government) control their relationship to it?
I got Dean W. Ball of the great Hyperdimensional newsletter onto the SAIL Media weekly Substack live to discuss this. In the end, we agree that the recent actions by the DoW, especially the designation of Anthropic as a supply chain risk (which Dean and I both vehemently disagree with), point to open models being the 5-10 year stable equilibrium for power centers.
The points of this discussion are:
* Why do open models avoid some of the power struggles we've seen play out over the last week?
* How do we bridge short-term headwinds for open models toward long-term strength?
* The general balance of capabilities between open and closed models.
Personally, I feel the need to build open models more than ever and am happy to see more constituencies wake up to it. What I don't know is how to fund and organize that. Commoditizing one's complements is a valid strategy, but it starts to break down when AI models cost closer to a trillion dollars than a hundred million. With open models being very hard to monetize, there's a bumpy road ahead for figuring out who builds these models in the face of real business growth elsewhere in the AI stack.
Enjoy and please share any feedback you have on this tricky topic!
Listen on Apple Podcasts, Spotify, and wherever you get your podcasts. For other Interconnects interviews, go here.
Chapters
* 00:00 Intro: is the Anthropic supply chain risk good or bad for open models?
* 04:03 Funding open models and the widening frontier gap
* 12:33 Sovereign AI and global demand for alternatives
* 20:55 Open model ecosystem: Qwen, usability, and short-term outlook
* 28:20 Government power, nationalization risk, and financializing compute
Transcript
00:00:00 Nathan Lambert: Okay. We are live and people will start joining. I'm very happy to catch up with Dean. I think as we were setting this up, the news has been breaking that the official supply chain risk designation was filed. This is not a live reaction to that. If we get any really, really interesting news, we'll talk about it. I think one of the undercurrents I've felt this week, where everything happened, is gonna touch on open models, but there's not an obvious angle. I'll frame this to Dean to start, which is-- Like, there's two sides to open models. One is the kind of cliche, "not your weights, not your brain," where somebody could take it away if it's not an open model, which people are boosting like, "Oh, Anthropic's gonna take away their intelligence." But the other side is people worried about open models existing that the Department of War can just take and use for any purpose that it wants. And I feel like both of these are a little cliche. And the core question is: is this type of event, where more control and more multi-party interest is coming toward AI, gonna be good or bad for the open weight model ecosystem?
00:01:12 Dean Ball: My guess is that in the long run, this is probably profoundly good for open weight AI. And the whole reason I got in-- So, I became interested in frontier AI governance. I did something totally different with my time before; I wrote about and studied different kinds of policy. And the reason I got into this was because it immediately occurred to me that the government was gonna... I was like, okay, let's assume we're building superintelligence soon or whatever, very advanced AI that seems really important and powerful. That's gonna be something that I depend on for my day-to-day life. I'm gonna need it for all kinds of things. It's gonna profoundly implicate my freedom of expression as an American and my exercise of my liberty and all that. And yet it's also gonna profoundly implicate national security. And so the government's gonna have its hands all over it, and they also might not like me using it, because I might use it, and others might use it, to challenge the status quo in various ways, to challenge the existing power structures which the government is a part of. So we have a political problem on our hands here, in my view.
00:02:36 Dean Ball: It immediately occurred to me that we're gonna have this huge problem: this is gonna be a conflict, because this is something that's gonna enormously implicate American speech and liberty, and also it's gonna have legitimate national security issues, and also the government's gonna want it for bad power-seeking reasons. And so that's always a part of the picture. And my view was that this is just a fight that's gonna play out over the coming decades, and I wanna be a part of this fight. But number two, in that fight, you have to have an insurance policy, and open weight is the insurance policy. Open weight is the way we can always say yes, we can build the open ecosystem. We can do that. And so I think in the fullness of time, this is gonna be beneficial, but the problem is there are a lot of coordination and economic problems that have to be solved here. It's not just a matter of hoping that Google and Meta or whomever else, or the Chinese companies, out of the goodness of their hearts continue to open-source things. That's not scalable. There has to be a reason to do it. So what are the institutional dynamics of open weight gonna look like in the long term? I don't really know, but it feels deeply under-theorized.
00:04:03 Nathan Lambert: I think it's hard to fund, is the thing. I mean, we saw Qwen had their turmoil this week, which is timely, and I'm not that surprised, because the stakes for these companies are so high, and they're all trying to make sure their companies win. And people will say, "Oh, Meta should commoditize their complements and release open models." But no one's ever commoditized their complements with something that costs a trillion dollars to make. That's a line item. Like, is Apple gonna commoditize...? Apple commoditizing their complement would mean they spend just as much as all the other tech companies are on CapEx, spend hundreds of billions of dollars, and they're choosing not to. And I agree that long term it should be better, but if we never bridge that gap, does it actually materialize? Like, the crank is being turned on these models getting better and better. GPT 5.4 released today, excited to try it.
00:05:02 Nathan Lambert: But like, where does it go? Like, what I'm working on is totally falling behind the frontier. We're the foundation of research, but it's like I see it already slipping.
00:05:13 Dean Ball: So I kinda think, yeah, I mean, look, I think it's gonna get bad in the short term, it's gonna be bleak, right? There's just no doubt about that in my view. Because we're in this period-- Like, I think the pace of frontier progress is gonna continue. My own view, just 'cause I peer in and use the open weight Chinese models on a fairly regular basis, is that I kinda just feel as though the gap has widened between the US frontier and the open frontier. It's so sad that the US frontier and the open frontier are increasingly distinct things. But I do feel as though that probably is true. And that's probably gonna continue, because in the early stages of a new technology, you would expect the vertically integrated players to be the ones who do the best. And over time, the modular players can win, and part of that is 'cause eventually you do get to good enough, right? Like, eventually-- I think most people think the iPhone is good enough now. There was a time when every year the iPhone upgrade was like, "Oh my God, this is so much better." Intelligence is maybe different, but maybe not for a lot of things.
00:06:37 Nathan Lambert: Well, like, there's no iPhone that you can buy from anyone else. Nothing you can buy from anyone but Apple is nearly as good. That's the concern. It's like, is it gonna be Anthropic where, yeah, it stopped getting better, but you can't rebuild it? Like, you can't make the open source version.
00:06:51 Nathan Lambert: I also had a later question, which is that the weights are so much less of a concern for me. So like, somebody dropping a two-trillion-parameter model that's open weights and way better than anything else that somebody has built and released in the open-- it almost doesn't matter if you don't understand the harness and the tools and the setup you need to make it into a Claude-like system. Like, you need what, eighty nodes of H100s that cost a hundred thousand dollars a day to run, and expertise, to make it a system. The shifting away from weights is also happening; I don't think it's happening in this open versus closed ecosystem at the surface level of the discussion. So that's why I'm just like, I don't know if it's gonna exist. The thing that I could see happening is that open weights models are niche, and they help these Claude-like models, but there's not an alternative in that universe. So it's like, is the government capable of actually making this alternative exist? I don't know. Like, I don't know if you can Manhattan Project this, and I wouldn't advocate for it.
00:07:53 Dean Ball: I actually think about it from the opposite perspective, because think about what happens if the government follows through on what they've threatened with Anthropic, which is to make it so that basically any military contractor cannot have any commercial relations with Anthropic-- which means NVIDIA can't sell GPUs to them for anything, Amazon can't sell cloud services to them. Amazon and NVIDIA also can't be invested in them, by the way, if you take "any commercial relations" at face value. Now, that's not a power the government actually has, but nonetheless, if this harassment campaign continues, I think what it probably does... You know, I spend a lot of time in international policy, talking to foreign governments and civil society in foreign countries, and they already have major trust issues with respect to the US closed source models, because they think the US government is gonna come in and disable the models. Like, the American president will get mad at Brazil, say, and in addition to putting on tariffs or sanctions, the US president will say, "Yeah, we're also gonna turn off all your public services that are dependent upon American closed source models." Right? So people view that as this profound threat, and people are legitimately scared of that in other countries.
00:10:00 Dean Ball: I think this turns that fear up another meaningful degree, and probably not incorrectly, by the way, probably rightfully so. And so I kinda look at this and I think, well, now a lot of American companies might also have that concern, and so you certainly have a demand side of people who are gonna be like, "I get this. It is a risk to use anything where I have a commercial relationship. 'Cause once I have a commercial relationship, the government can regulate that. Can I find some way of getting out of it?" I think there's gonna be demand for that. Whether or not that demand produces supply will depend on... It might just not be possible, that's true. But I think you've never had a more favorable demand picture, and I suspect that on the margin, this probably will favor open in the longer run.
00:10:44 Nathan Lambert: Yeah. So there's a few ways that I think about this. I have this thing, the ATOM Project, and all this other stuff I do, and it's like, how do I meaningfully advocate for this? Like, I work at AI2, and AI2 has budgets on the order of a hundred million dollars and can train decent models. But if I wanted to redo an AI2, my method for getting that type of money is mostly gonna be befriending a billionaire. And it seems like a philanthropy dice roll is the way to get it in the near term. But then maybe it really is some long slog of a multi-industry consortium that takes a couple years to get off the ground, and slowly, Google, or Netflix and all these five-hundred-billion-dollar smaller companies, are gonna give millions of dollars to have somebody else do it, because they can't get the billion dollars themselves, but they know they need it to exist.
00:11:31 Dean Ball: And sovereign wealth funds. Right. Sovereign wealth funds everywhere can do that, right? There's trillions of dollars in sovereign wealth. There's pension funds, public employee pension funds. A lot of people can chip into this, and it's possible. Yann LeCun thinks this is the inevitable outcome. He thinks that the future is gonna be that some sort of global consortium gets together and builds this, because no one country is gonna be able to own it, because it's gonna be too important. I've always kinda doubted that, and I've always thought that that outcome is probably a bad outcome for the world, honestly.
00:12:06 Nathan Lambert: That's a bad outcome for how good the AI is.
00:12:09 Dean Ball: That's correct. It's a socialist outcome, you know? It's not communism, but it is democratic socialism, and I'm not a democratic socialist, so I'm not a super big fan of that. But at the same time, I have to be honest that I kinda think that this probably does increase the odds of that precise outcome coming to bear.
00:12:33 Nathan Lambert: I think something that comes sooner is that a lot of these super wealthy countries are gonna realize they can do some sort of sovereign AI and make some sort of noise, particularly starting with open models. There's the Institute for Foundation Models, which is based in the UAE university system. Like, that's--
00:12:53 Dean Ball: That's very UAE-coded, yeah.
00:12:55 Nathan Lambert: They've been playing that for years, and they can keep doing this. Their models are gonna be pretty good, and I think there's gonna be more people that do this. There's the Swiss initiative in the EU, which is on one hand doing a good job, and on the other hand plagued by the most obvious European limitations of talent cycling and consortium life. I think these things are gonna become more of a thing in the next year, but I don't know exactly how they impact the... They don't impact the frontier of AI, but maybe they're just part of how the geopolitics and power of AI evolve. And I for some reason feel like open models need to be the thing that they do, because if they have a closed model that's not as good, it doesn't really give them any sort of power. But I don't have a good enough world view for what that actually does, if there are more EU models, if India actually gets their act together and trains a solid model. I don't know what that does, but I feel like it's probably gonna happen.
00:13:54 Dean Ball: Yeah. I mean, it's really super interesting, 'cause I think the other thing is that it will be inherently... I mean, it will be a Linux compared to a macOS, you know? It will not be as good of an experience for people. But then it becomes strange. Like, I don't think macOS is as appealing of a thing if it's viewed as owned by the US government, right? And in fact, part of the reason I think that Apple is able to make its case quite credibly to consumers and businesses is that they have resisted US government pressure to turn things over before. People might remember, about a decade ago, there was this shooter in San Bernardino, California, and the FBI tried to force Apple to release iPhone data, and Apple said, "No, we're not gonna expose this information." Now, I think the FBI eventually just hacked it anyway, but that's a separate issue. It's a matter of principle here.
00:15:01 Dean Ball: So yeah, I think it's an interesting question: do we expect the gap between the open frontier and the American closed frontier to widen in the near future, especially just because of how much compute they're gonna have?
00:15:30 Nathan Lambert: A hundred percent. And data and talent. Like, a hundred percent. It's happening.
00:15:34 Dean Ball: Data, talent. And it's compounding, right? I mean, this has always been my view. And how much, I'm not sure, but I think it could be quite significant, because these things are compounding benefits. And so if you expect them to just continue compounding, then all of a sudden it gets pretty bleak pretty quickly, would be my fear.
00:16:00 Nathan Lambert: One of the... I mean, what's your take on this? Why has it not compounded so much faster? Like, I feel like these three companies are spending, I don't know, 10X what the Chinese labs are spending, and you only get a little bit better model. I believed so wholeheartedly that Claude and ChatGPT and all these models are much better, and I expect them to get better by an increasing margin, but it's still confusing why they're not already more ahead.
00:16:29 Dean Ball: I go back and forth on this. Sometimes I think they are that far ahead, and it's just difficult for it to show up in benchmarks, for the obvious reason that benchmarks get chased. And with the coding agents and with certain use cases, I do just feel like, wow, the American frontier is just way ahead, profoundly ahead of the Chinese frontier there. But there's a lot of other things where you do kinda saturate how good you can be. I suspect that a very large fraction of AI usage is essentially glorified Google search. Even though I don't think AI is glorified Google search, I suspect that a lot of what people use it for is that, at the consumer level. And it isn't obvious to me how much better you can get at things like that. But my guess would be that over the next five years, the American labs really take off, in part because of compute, data, and internal deployments for recursive self-improvement style stuff. And also, it's amazing how we talk about that as just a normal thing now.
00:18:05 Nathan Lambert: I think there will be a ceiling on it. Like, they're gonna get a ton of improvement-- The gains are insane. Personally, at my job, I've been mostly a research manager, just chasing s**t down to get a model out the door. But now I can take on hard engineering tasks, because I'm like, "Okay, might as well do this at the same time." Like, going from zero to a hundred software engineers at anyone's fingertips is worth a lot in terms of exploration. But the next step, from a hundred to ten thousand, is the kind of thing people can mess up. But that's a huge gain.
00:18:37 Dean Ball: I kind of agree. I think there'll be a sigmoid there too. But then the other thing that will happen-- what I sort of wonder is, will the AI companies, the current model vendors, eventually become more like true infrastructure companies, where what they actually do is have models that design their own chips, models that design their own data centers, and models that design their own successors? And so it's this hugely vertically integrated thing, and what you're really getting access to is not just the model itself, but this highly optimized hardware and physical world infrastructure. And again, that's kind of already the case, but does that become even more the case? And then that's truly insurmountable for any open player. That's definitionally insurmountable for an open player, and that becomes scary too. But again, this is why I've always felt so good about the position of the US closed source labs. This is why I've always been pretty bullish on them and have my concerns about open.
00:20:07 Dean Ball: But to the extent the US government makes it impossible to trust closed source models, you do provide an advantage to open there. You're giving it a shot in the arm. If you like open source, you should hope that the supply chain risk designation against Anthropic is quite broad.
00:20:09 Nathan Lambert: It's a rough thing to hope for.
00:20:09 Dean Ball: I mean, you shouldn't actually hope for it, but I just mean, like, if the only thing you care about in the world is open source, then--
00:20:17 Nathan Lambert: I would say that anyone who only cares about open source probably is not thinking through any of these principles. It just gets really bad if you only have-- Like, AI is not gonna be a meaningful lift to the economy, nor sustainable, if everything is open. Like, if models are truly commoditized, things look kind of rough out there.
00:20:36 Dean Ball: I think a world where models get commoditized is a really bleak world too, actually. And yeah, this is why I'm very worried about what the US government is doing. But I think that it helps on the margin, though. It probably helps on the margin in terms of waking people up. That still is my view.
00:20:55 Nathan Lambert: I am a little surprised by the Qwen stuff, but I think there's-- At some point, I knew there was gonna be a year where a lot of the open model efforts just died, because they're just too expensive and too similar. But at the same time, having a lot of efforts that are somewhat similar but exploring a lot of the minor permutations in modeling space, to figure out what works for people who use open models, is actually quite good. I'm very bearish on the Reflection-style approach, which is: build a lab, build an incredible model, drop it, make bank selling it on-prem. Because on-prem as a business model is not that distinct from having a closed model. You could sell a closed model on-prem with the right IP controls. The person who actually wins open does it by trying a whole bunch of tiny different things, understanding what is actually a meaningful differentiator in private data, in certain deployments and whatever, and then really iterating on that with a community. And that's why I was like, Qwen is the closest to doing this by being so close to the community, and it's so distinct from what a lot of the other labs are betting on.
00:22:05 Nathan Lambert: But I see the pressure going away and kind of reducing diversity onto standards, because standards also make inference more efficient. Using open models is really rough. Some of the best open models have had really rough launches. I think GPT-OSS had a horrible launch in terms of usability and is now one of the most popular models of all time. Qwen 3.5-- it's like, researchers I work with are like, "Oh, let's see if we can do some basic RL baselines on it," and all the software stack is kinda broken. It takes a few weeks to get it going. And this is 'cause all the models change differently, and closed labs just have such an advantage there, 'cause they should conceivably ship things on day one that work. I mean, don't talk about Claude's runtime, but that's fine.
00:22:42 Dean Ball: And don't talk about the GPT-5 auto router either. But yeah, no, totally. I think that's right.
00:22:53 Dean Ball: I think in the fullness of time-- I'm bullish on open source in the long run, fairly bearish in the next five years. The next five years are gonna matter quite a bit. And there is a lot of cope in both the open source world and also... I don't really hear it so much in the open source world; I think the open source world is actually more honest about this. But where the cope is so bad is in global civil society discourse. Like, I was in India for the AI Impact Summit recently, and they are just smoking the copium, being like, "We are gonna do everything on subfrontier open source models, and we're just gonna diffuse those, and that's all we're gonna need in our economy." And I just think, if you're India, that's really not the bet you wanna make. I understand these are resource-constrained countries. They have a lot of acute constraints that they face, but nonetheless, I think that's probably not a good bet.
00:24:05 Nathan Lambert: Well, it's a question of whether those long-tail models will work the way manufacturing has worked, where Apple has put hundreds of billions of dollars into the manufacturing ecosystem in China to get absolutely fine margins and scale. These things are gonna be used so much that that fine margin is actually gonna matter a lot, and it is not cheap to get that fine margin. You can't just YOLO a DeepSeek V3, spend five million dollars in compute, and be done. It's still gonna be expensive for a long time.
00:24:34 Dean Ball: Yeah, it requires-- I think with the Chinese approach, in the long run, if China's gonna continue its strategy and they want to be competitive with the American frontier, they're gonna have to fully socialize that, I think. I don't think DeepSeek alone is gonna be able to do this, and I don't think even Alibaba alone is gonna be able to do this. I think they're going to need some sort of collective effort, especially because of the American export controls. They're gonna have to centralize compute. They're gonna have to centralize all these things-- talent and data and all that.
00:25:17 Nathan Lambert: I don't see it happening. Like, maybe someone gets officially AGI-pilled-- and I don't know that much about China. But from the things I know about China, it seems like that would be a big lift, and it would take a lot of time to actually do it. All the companies would have to give up their biggest... All the cloud companies are tech companies making a lot of money. They would be like, "We have to give up what?"
00:25:42 Dean Ball: No, it would be a tough sell. Obviously, if the Chinese government decides they want to do it, they absolutely will. But in total, it will be a tough sell. My experience, having had diplomatic engagements of many sorts with the Chinese government-- and a lot of Chinese tech policy is actually not directly set by the government; it's more set by academia and civil society adjacent to government. I've had a lot of conversations with folks like that, and it's largely not a very AGI-pilled crew. I think AGI-pilledness probably has a rough correlation with GDP per capita, and I think China is about where you would expect based on their GDP per capita-- maybe a little bit ahead, but not by much. But if they ever do get AGI-pilled, that's the kind of thing they could consider. Even then, that's still a pretty extraordinary outcome, because the Chinese government would have to be willing to make these things and then give them away. And I kinda just don't think they will.
00:27:11 Nathan Lambert: Yeah. I mean, all the politics of control, given how powerful everybody thinks AI is, are pointing to very value-destructive actions economically in order to achieve the end state that people determine to be right. It's like supporting open source to the extent that you can, to avoid situations like Anthropic being labeled a supply chain risk and interactions like that totally decimating a runway of AI productivity. Like, if the companies are really gonna commit to open source for other things, then they're gonna lose money. And I see this in-- China's economy would be taking a gigantic hit doing this. And that's kind of a common theme of what we're talking about: the interface of AI with the economy is gonna make the next few years really weird.
00:28:06 Dean Ball: I hope so.
00:28:09 Nathan Lambert: I think things are gonna be weird, but I haven't spent a ton of time thinking about how that interacts with political institutions. I've thought about socially weird a lot, but I haven't thought about power weird a lot.
00:28:20 Dean Ball: Oh, power weird is what I worry about all the time. What I worry about the most is-- I think it's plausible that what we're seeing... I've always had this concern. I have this dual problem-- maybe I'm talking out of both sides of my mouth; maybe that's the critique, and it's a fair critique. But I routinely complain about how people in government aren't really... They pretend to take AI seriously, but they don't take it that seriously. And they don't really own the implications of near-term advanced AI and all that. I think we basically have transformative AI right now, but they don't own that, because it's annoying, it's difficult, it's conceptually challenging.
00:29:08 Dean Ball: But the flip side of that is, if people do start to take it very seriously, there's the risk that they get scared and lash out, doing things that are rash, in a rush. And that actually creates very, very bad outcomes, much worse than you otherwise might have gotten. I think that's a very fair risk, and I think it's possible that you might see things like that happen within the U.S. I don't think this particular incident with Anthropic is quite an example of that. But it's possible that you do see that in the coming years, and that is in and of itself a pretty scary outcome, because if the U.S. government decides that they want to nationalize the frontier labs, I think it could be one of the most tyrannical things we ever see happen in this country.
00:30:16 Nathan Lambert: Yeah. It's like, I don't know how to reply to this. These are serious times, and I see so many... It feels like such a Sisyphean task to make more open models exist, but all the broader trends seem to point to that being a more stable equilibrium in a lot of ways. Like, good enough open models keeping up with what we all feel happening in closed model land.
00:30:50 Nathan Lambert: So I don't know. I stay motivated, but I feel increasingly lost in terms of achieving it.
00:30:56 Dean Ball: I don't think you should be. Look, I suspect the US government will not actually do it, and the best thing about America is our general sort of-- I don't wanna say incompetence, but the general chaos of American institutions and the decentralized confusingness of it all. It can often be quite frustrating, and it can sometimes be a detriment, but it can also be really great, because we tend to not execute and follow through on our very worst ideas. And so I don't think we're going to do that. It doesn't feel very American to do it. I worry about it because I worry about these rash reactions, and that's why I fight as hard as I do on things like this, despite a not insignificant cost to me to do it, politically speaking. But that's totally worth it, because I care about this. I think it will probably be fine. But yeah, I do agree. It's a major risk. It's a major risk, and it's a weird world to think about, I'll tell you that much.
00:32:16 Nathan Lambert: Yeah. I don't have a lot more to add. I'm sure we'll continue this discussion. I think it warrants the space, 'cause it's one of the longer-term things, but it's not in the news cycle whatsoever, at least the open model angle. There's just so many layers. People have to talk. So send feedback, people listening. I'll even send this out as a podcast as well, and just-- what do people think? How do we get to the places we want to get to?
00:32:46 Dean Ball: Well, one thing I'm particularly interested in is one of the items in the Trump administration action plan, which I worked on, for those who don't have that context: this idea of financializing compute-- creating a financial market, basically a commodities market for compute, so that you can buy it in a really robust way. In the same way that you can buy electricity futures and electricity on the spot market, wholesale, and things like this-- could you do something like that for compute? That could really profoundly change the dynamics and the economics of AI production. It's not gonna turn them over; it doesn't flip them on their head, but it changes them quite meaningfully. And I'm very excited by that prospect.
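As a purely illustrative aside on the electricity-market analogy Dean draws: wholesale electricity spot markets typically clear by stacking bids into a demand curve, offers into a supply curve, and finding a uniform clearing price. A hypothetical compute spot market could work the same way. The sketch below is not from the action plan; every price and quantity is a made-up assumption, and "GPU-hour" is just a stand-in unit.

```python
# Toy uniform-price clearing auction for compute-hours, modeled loosely on
# how wholesale electricity spot markets clear. Entirely hypothetical.

def clear_market(bids, offers):
    """bids/offers: lists of (price_per_gpu_hour, quantity_gpu_hours).
    Returns (clearing_price, traded_quantity) under uniform pricing."""
    bids = sorted(bids, key=lambda b: -b[0])     # buyers, highest price first
    offers = sorted(offers, key=lambda o: o[0])  # sellers, lowest price first
    traded, price = 0, None
    bi = oi = 0
    bid_left = offer_left = 0
    while True:
        if bid_left == 0:                  # advance to next bid if exhausted
            if bi >= len(bids):
                break
            bid_price, bid_left = bids[bi]
            bi += 1
        if offer_left == 0:                # advance to next offer if exhausted
            if oi >= len(offers):
                break
            offer_price, offer_left = offers[oi]
            oi += 1
        if bid_price < offer_price:
            break                          # no more mutually beneficial trades
        qty = min(bid_left, offer_left)    # match the overlapping quantity
        traded += qty
        bid_left -= qty
        offer_left -= qty
        price = (bid_price + offer_price) / 2  # midpoint of the marginal pair

    return price, traded

# Made-up demand and supply curves in $/GPU-hour.
bids = [(4.00, 100), (3.00, 200), (2.00, 500)]
offers = [(1.50, 150), (2.50, 200), (3.50, 400)]
price, qty = clear_market(bids, offers)
print(price, qty)  # 2.75 300
```

A real market design would add futures contracts, delivery locations (which data center, which interconnect), and creditworthiness rules, which is exactly the kind of robustness the financialization idea is after.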
00:33:48 Dean Ball: And that's the kind of thing I would be increasingly doing if this sort of interference of government into the frontier continues. What I suspect I'll do is start developing some of those ideas which I developed earlier. I'm only one person. If those things start to seem relevant again, I totally will. Because anything to make it easier to produce AI for people that don't have trillions of dollars will be extremely important.
00:34:38 Nathan Lambert: Yeah. I think that... I don't know. I'm happy to leave it there.
00:34:43 Dean Ball: Cool.
00:34:45 Nathan Lambert: I can let you get on your trip. It's good to catch up. I'm early in the process of potentially coming to DC in a few months, so I will let you know if I do.
00:34:52 Dean Ball: Oh, please do. It'd be great to see you. We can record an episode of my podcast live.
00:34:58 Nathan Lambert: Sounds good. Okay. Thanks everybody for listening.
00:35:03 Dean Ball: Talk to y'all later. Bye.


