
Noah Smith on Blogging, AI Economics, and Elite Overproduction
Justified Posteriors
AI, Personalization, and Parasocial Value
Discussion of whether personalized AI writers or social proof drives subscriptions, and Noah’s uncertainty about how it plays out.
We sit down with prominent blogger and economist Noah Smith to dig into the disconnect between AI hype and current macroeconomic reality. The central puzzle: if a “god machine” driving 20% annual GDP growth is truly imminent, why aren’t real interest rates skyrocketing as people borrow against a much wealthier future? Noah’s take is that markets are pricing in significant growth, but not civilizational rapture. The culprits keeping digital intelligence from exploding into physical productivity? Land use, energy constraints, and the usual Baumol suspects.
But Noah’s through-line is more hopeful than skeptical: even modest AI is humanity rolling the dice against stagnation. Ideas were getting harder to find (Bloom, Jones, Van Reenen & Webb were right), fertility was collapsing, and social media was degrading public discourse. We were hitting the Malthusian ceiling again. AI is the steam engine moment — chaotic, potentially catastrophic, but a genuine escape attempt. And crucially, Noah finds it reassuring that today’s AI is LLM-based and derived from human thought rather than some alien RL agent that evolved in a digital environment.
We also discuss sociopolitical issues. Noah reframes “elite overproduction” as a revolution of rising expectations: the professional-managerial class expected a smooth escalator to the upper-middle class, found it stalled, and watched their technical peers keep soaring. Social media makes the gap hyper-visible. The result is deep-seated animus toward the tech bro class.
Noah argues that Acemoglu’s Power and Progress is “fractally bad”: the overall thesis is wrong, the chapter-level arguments supporting it are wrong, and the specific data points supporting those are wrong too. Henry Ford raised efficiency wages and then had union organizers shot. No citations. Power defined as outcomes. Noah doesn’t mince words.
He’s more generous on Krugman’s intellectual honesty, Sumner’s gunslinger independence, and the genuine influence of Michael Pettis — even if sectoral balances aren’t really a predictive model so much as a coherent-sounding way to feel like you understand macroeconomics. We also touch on Tooze’s polycrisis and what Kevin Kelly’s “technium” tells us about why people who think AI might destroy us are building it anyway.
Chapter Timestamps:
[00:00:00] – Introduction: academia vs. blogging
[00:08:14] – P(doom), P(TAI), and bottlenecks to 20% GDP growth
[00:14:59] – Employment optimism and AI autonomy
[00:17:30] – Should AIs be allowed to own assets?
[00:19:05] – How Noah uses AI today
[00:20:54] – What happens when AI can replicate your writing?
[00:25:14] – Was Noah’s success luck or skill?
[00:30:37] – Meaning collapse vs. the Coasean utopia
[00:50:12] – Thinker takes: Daron Acemoglu and *Power and Progress*
[01:02:23] – Michael Pettis
[01:09:25] – Adam Tooze
[01:11:21] – Paul Krugman
[01:12:54] – Elite overproduction
[01:20:47] – Vibes, expectations, and the economics of happiness
[01:25:21] – Humanity was hitting a wall; AI as new hope
Transcript:
Seth Benzell: Welcome to the Justified Posteriors podcast, the podcast that updates its beliefs about the economics of AI and technology. I’m Seth Benzell, a man who has never been accused of having no opinions, coming to you from Chapman University in sunny Southern California.
Andrey Fradkin: And I’m Andrey Fradkin, excited to learn how we can post our way to the top of the Substack business ratings, coming to you from San Francisco, California. And our guest today is the prominent blogger Noah Smith. Welcome to the show.
Noah Smith: Hey, thanks for having me on.
Andrey Fradkin: Yeah, of course. Well, why don’t we get started? We were curious, as still-academics, how your life is different now as a blogger/commentator versus when you were a professor.
Noah Smith: Well, I meet a lot fewer young people.
Andrey Fradkin: Oh, okay.
Noah Smith: Oh, yeah, I definitely feel younger. I don’t feel as much of, like, a wise elder as I used to. Yeah, instead I feel younger.
Seth Benzell: I remember when I was just going to grad school, you had recently made the transition to commentating, and I was going through my PhD program thinking, like, “Do I really wanna do full academia? Do I really wanna be more of, like, a public communicator about economic issues?” So what do you think about people making that decision? Do you think there are marginal academics or marginal commentators who should have gone in one direction or the other?
Noah Smith: I think there are too few commentators with an academic background, probably. So yeah, there probably are. People like the academic lifestyle. The commentator lifestyle doesn’t suit as many people, because it’s more uncertain. You have a lot of people yelling that you’re an idiot all day, whereas in academia, they just yell that your identification strategy’s bad, or the methodological-
Seth Benzell: [laughing]
Noah Smith: Error, and then call you an idiot in, like, back rooms or whatever. But it’s very genteel, it’s very easy. And then most people are looking up to you. You’ve got all these young people adulating you and looking up to you, and you get all this respect. And in commentating, you get respect, but then you get, like, hordes of people saying, “This person’s an idiot,” just because if you say anything that disagrees with what people already thought or want to think, they will call you an idiot, regardless of how smart you are. And so there will always be people calling you an idiot, and they’ll always be right in your face, and so that can be difficult. Also, people don’t know how they’ll make money from it. With being an academic, you have, like, this benevolent patron of a university that hands you a salary for well-understood metrics, whereas with commentating, you don’t.
Seth Benzell: Do we need a dedicated AI or transformative-AI journal? I was just talking to Andrey about this. Why doesn’t that exist, Noah? Do we need that-
Noah Smith: You mean a journal about AI or a journal made of papers made by AI?
Seth Benzell: Oh, a prestigious economics journal whose topic would be the economics of AI, or the economics of transformative AI specifically.
Andrey Fradkin: I’m not sure we need a journal, Seth.
Seth Benzell: It’s in the seed.
Andrey Fradkin: I just think that we put it out there-
Seth Benzell: Why not?
Andrey Fradkin: And then have the AI referee it. I just feel like thinking in journals is just, like, outmoded at this point.
Noah Smith: AI is moving so, is moving so much-
Seth Benzell: Well, there’s-
Noah Smith: Faster than the economics journal publication cycle, that, like, I’m not sure that-
Seth Benzell: Right
Noah Smith: Like, I’m not sure what utility this has for the world. So maybe it doesn’t matter.
Andrey Fradkin: Yeah.
Seth Benzell: It would give people a prestige stamp for working in the area, and you could set it up differently. It could be faster.
Andrey Fradkin: There’s no way we’re giving anyone a prestige stamp, because our profession famously gives no prestige to no-name journals. So if you truly wrote a great TAI paper, why wouldn’t it be published in the AER? That’s what an economist would say.
Seth Benzell: Well, so there’s a taste issue, right? To the extent you were concerned that the top journals have the wrong taste on these subjects, this would be a potential solution-
Andrey Fradkin: It’s not a solution
Seth Benzell: And everybody starts with zero prestige sometimes.
Andrey Fradkin: You can just put out the working paper and get everyone to read it. This is exactly what we covered with Basil Halperin’s paper. So Noah, we were gonna ask you this at some point, so we might as well ask you now. Have you read his paper? The argument goes that if we’re going to have transformative AI, then interest rates should go up. Have you heard this argument before?
Noah Smith: What’s the paper?
Seth Benzell: It’s called something to the effect of transformative AI and interest rates.
Noah Smith: Okay.
Seth Benzell: And the argument in a sentence is: if we’re anticipating really powerful economic growth, TAI in five, ten years, then you should want to balance consumption between today and tomorrow, so you’d save less today, which bids interest rates up in the present. So anticipated transformative AI increases interest rates today. And then if you have negative foom, if we think we’re gonna blow up the world in five years, well, that’s even more reason to consume today. You wouldn’t save, and interest rates get bid up. So the argument is: because interest rates haven’t been skyrocketing, TAI cannot be imminent. Do you buy that argument, Noah? Why not?
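A one-line sketch of the consumption-smoothing logic behind the paper, in textbook CRRA notation rather than the paper’s own (discount rate $\rho$, risk aversion $\gamma$): the Ramsey rule ties the real rate to expected consumption growth,

$$ r \;\approx\; \rho + \gamma\, \mathbb{E}[g_c], $$

so an anticipated growth explosion, or a doom probability that acts like extra discounting in $\rho$, should already show up as high real rates today. Quiet rates are then evidence against a market consensus on imminent TAI.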
[00:05:00]
Noah Smith: ’Cause all propositions about real interest rates are wrong. [chuckles]
Andrey Fradkin: Yeah
Noah Smith: Because we, because people-
Seth Benzell: Henry’s second law, of course.
Noah Smith: The reason why- So I’m trying to think of whether I buy it as a general case, because, like, if you massively increase productivity growth, you should increase the safe rate of interest. Like, basically, like-
Seth Benzell: Right
Noah Smith: If stocks are so certain to go up, then bonds have to sort of match that, right? So you have some sort of weak risk-arbitrage argument right there. But then, if you’ve got, like, AI that’s gonna blow up the world, then would you really pay high interest rates, because, like-
Andrey Fradkin: You just consume now. That’s the argument. Yeah.
Seth Benzell: You wouldn’t save.
Andrey Fradkin: You wouldn’t save-
Seth Benzell: Yeah
Andrey Fradkin: And then people who wanted to induce you to save would have to pay you really high interest rates.
Noah Smith: Yeah, I guess that’s probably true. Although at that point, you have counterparty risk. Like, who’s gonna pay that interest if everything’s just gonna blow up? Like, if the world’s gonna end tomorrow, who’s there trying to attract your long-term capital?
Seth Benzell: Well, maybe you have a project that pays off in three years-
Noah Smith: Or, -
Seth Benzell: And the world blows up in four years
Andrey Fradkin: There’s a 1% probability that it doesn’t blow up. But I think that’s an argument for the interest rate going up even more, right? If you’re uncertain about whether the payoff will happen.
Noah Smith: But I think the real lesson here is that these markets don’t- Like, there’s not a general consensus that transformative AI is gonna happen, and then one day people will wake up and decide, “Oh, yeah, it’s real.”
Seth Benzell: Oh, so maybe- Okay, cool.
Andrey Fradkin: So that was his argument. That- just to be clear, he-
Seth Benzell: Animal spirits.
Andrey Fradkin: He put this argument out on LessWrong, and it became very influential, and then he spun it out into a full paper with some co-authors. But that was exactly his argument: because interest rates are what they are, there isn’t consensus that we’ll have transformative AI.
Noah Smith: Right. There’s not, there’s not consensus.
Andrey Fradkin: Yes.
Noah Smith: That- but that seems obviously true. Like, if you look at, if you look at-
Andrey Fradkin: Mm
Noah Smith: Any survey data or stocks or whatever, they’re all priced for, like, fairly robust growth, but not for, like, a god machine, right? Nothing’s priced for that, and I don’t think people know how to price for that. And so I think, yeah, people in general-
Seth Benzell: Hundred year bonds
Noah Smith: Are not expecting a god machine to emerge tomorrow, except some researchers at the big AI labs do expect that, and some, like, EA people on LessWrong expect that.
Seth Benzell: Is this a good time to ask you what your P(doom) is, or your P(transformative AI)?
Noah Smith: Well, I think P(transformative AI) is 100.
Andrey Fradkin: Well, all right. We’re gonna define it as-
Noah Smith: It’s here
Andrey Fradkin: As annual GDP-
Seth Benzell: Well, give us a timeline
Andrey Fradkin: Growth of over 20% in the next 20 years, at least once.
Noah Smith: I would- I think that’s unlikely due to various bottlenecks.
Andrey Fradkin: What do you think are the biggest bottlenecks?
Noah Smith: Yeah. Physical, regulatory things, land use. You have to build the physical stuff for the AI to affect the physical world, and so much of what we consume is in the physical world. We have to grow in the physical world in order to have all that growth, because if you just have digital stuff, you have people, like, trading digital stuff for other digital stuff.
Andrey Fradkin: What if-
Noah Smith: But you’ll get Baumol’d very quickly.
Seth Benzell: Unless that share of our consumption grows a lot, maybe. Is it plausible that we could have 99% of our consumption being really high-quality-
Noah Smith: Maybe
Seth Benzell: Digital products?
Noah Smith: It’s also really hard to measure prices in those.
Andrey Fradkin: Yeah.
Noah Smith: So.
Andrey Fradkin: That’s for sure. And wouldn’t the returns be so high that Elon or someone else would buy a huge tract of land in Africa or something, and then put autonomous factories there, right? Like, isn’t there a price at which, or isn’t there-
Seth Benzell: We’ll call it rapture
Andrey Fradkin: An expected return at which, someone will solve these regulatory issues in, in that way?
Seth Benzell: Yeah, efficient corruption. You just find the one dictator who’s willing to accept $10 billion. [chuckles]
Noah Smith: That’s probably right. You could probably do that. Although, even then, it’s gonna be hard, because you’re gonna have to secure electricity. You’re gonna have to truck in all your parts, right? It’s not gonna be very responsive. You’re not gonna have your parts nearby. Like, yes, eventually, once you spin up 100% full automation, then the, like, AI gods can build the factories in the Arctic, wherever, on the moon. But like-
[00:10:00]
Seth Benzell: Put corporate taxes on the Arctic.
Noah Smith: Yeah. But, like, in terms of would you do it today? Well, if you were worried about competition, you might not do it today. But in terms of, like, affecting physical stuff: so, for example, AI building you a house, right? Maybe AI will be smart enough to invent a swarm of little robots that can actually reduce construction costs quite a lot. Will regulators allow that swarm of little robots? Maybe not. And so you’ve gotta have a whole lot of different things that people value. Because honestly, our GDP is basically constructed from, like, a whole bunch of relative prices.
Andrey Fradkin: Yeah.
Noah Smith: That’s really what underlies our whole GDP: on some level, you’ve gotta be trading real stuff, not physical necessarily, but real stuff, for other stuff. And if you’ve only got, like, a little bit of the stuff, that sort of caps it. That’s Baumol, basically. You get-
Andrey Fradkin: Yeah
Noah Smith: You get Baumol, like, if you massively increase productivity in, like, a couple sectors, but not in the other sectors, because the other sectors are regulated to death. Yes, you could go create your fully automated factory in Africa, but will it build me a house? What if we regulate healthcare so that we can’t really use AI there? What if we regulate education so we can’t use AI there, even if it would be better? So we have all these sectors, and, like, manufactured stuff is not even that big of a sector, and digital stuff is, like, relatively small.
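A one-line statement of the Baumol logic Noah keeps invoking, in textbook notation rather than anything said on the show: if sector $i$ has expenditure share $s_i$ and productivity growth $g_i$, aggregate growth is roughly the share-weighted average,

$$ g_Y \;\approx\; \sum_i s_i\, g_i, $$

and when sectors are poor substitutes, spending shares drift toward the stagnant sectors as the booming ones get cheap, so long-run growth gets pinned near $\min_i g_i$ no matter how explosive the digital $g_i$ is.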
Andrey Fradkin: Yeah.
Noah Smith: And so AI could produce us infinite fun movies and fun apps.
Seth Benzell: Yeah, but I-
Noah Smith: Infinite movies and apps and, like, advice and, -
Seth Benzell: Right
Noah Smith: Stuff like that, and it’d still be a relatively modest portion of, like, consumption.
Seth Benzell: But what if it’s inventing infinitely good healthcare treatments or infinitely good-
Noah Smith: You could get there, yeah
Seth Benzell: Therapies, personal services, right? I mean, I can get it up-
Noah Smith: I think you could. Yeah, yeah
Seth Benzell: To a sizable share of the economy-
Noah Smith: I think you could
Seth Benzell: If I, if I use my imagination.
Noah Smith: Yeah. Would those grow fast enough to give you 20% annual growth? That’d be pretty cool. I don’t know. I honestly don’t have a good idea of what the hard numbers should be here, and I’m not sure anybody does. But there’s this argument. What do you guys think about this argument, that fast productivity growth last year, like you saw with the downward jobs revisions, maybe 2.7% actually, implies that we’re back on the fast train here in terms of- Yeah-
Seth Benzell: I mean-
Noah Smith: We’re so back, Robert Gordon.
Seth Benzell: We’re so back.
Noah Smith: You were one of the most mistimed authors ever. [chuckles]
Andrey Fradkin: That I totally buy. But, like, obviously, as economists, we’re, like, super thrilled with 2.7%, but I think- Yeah.
Seth Benzell: It’s the fate, right? It’s like Fukuyama wrote his book, like, right at the last moment-
Andrey Fradkin: Yeah
Seth Benzell: Right? That’s how, that’s how these books work.
Andrey Fradkin: But yeah, 2.7% is great, but I don’t think anyone in the San Francisco AI sphere would think that that’s actually transformative AI, although I do think it is transformative. I mean, I assume you have the same take on it.
Noah Smith: Yeah, I don’t know. So the answer is that, like, I don’t know, because I don’t really know what’s going on, and so it’s hard to back out some of these things. But then if you look at the stock valuations of things like NVIDIA and all the AI companies, they’re pretty high.
Andrey Fradkin: Yeah.
Noah Smith: And you can ask: how strongly do I believe in a macro model that tells me that real interest rates are a puzzle, given those stock valuations? And my answer is: not very strongly. My belief is that the stock market is a pretty clear bet about what kind of money these companies are gonna make. And I don’t think it’s, like, transformative in the sense that, if we had twenty percent growth per year, and a lot of that capture was being done by NVIDIA and the cloud providers, and maybe the AI model makers, we’d see bigger climbs in those stock values than we do.
Andrey Fradkin: Yeah.
Noah Smith: So I think that I don’t think the market is pricing in truly transformative AI. But I think-- Do I think real interest rates-
Seth Benzell: Okay
Noah Smith: Are a puzzle, given what we see in the stock valuations? Well, then, no, because I don’t trust the macroeconomic models of real interest rates. All propositions about real interest rates are wrong. So yeah, basically, that just means I don’t trust- There are too many things going on in real interest rates. It’s one output for so many inputs, all hard to understand in their own right, that it’s very difficult to look at it and tell what the hell’s going on.
Andrey Fradkin: So let’s move on to easier questions, ones that you have opinions on.
[00:15:00]
Noah Smith: All right.
Andrey Fradkin: So at the Substack- [laughing]
Noah Smith: Note that no opinion is not-
Seth Benzell: He has opinions.
Noah Smith: Sarcastic.
Seth Benzell: He has no opinions.
Noah Smith: Like, it’s because I actually only have an opinion on a fairly narrow range of things. It’s like, basically, no opinion you haven’t already heard is really-
Seth Benzell: Hop off this man’s hands.
Noah Smith: People are like: “What do you think about this other thing you don’t talk about?” And I’m like: “Well, I didn’t talk about it, so why would I have anything I think about it?”
Andrey Fradkin: I verified in person, like proof of human, that you talked about this topic at the Substack debates. You seem to be an optimist about employment in the age of AI. Do you wanna outline your argument here?
Noah Smith: Oh, so employment, not necessarily. I’m pretty uncertain about that.
Andrey Fradkin: Hmm.
Noah Smith: I am optimistic that if humans retain autonomous control, if human society as an autonomous thing retains control over the product of AI, I believe we will find ways, methods, and excuses for redistribution that will ensure good lives for all humans. However, if autonomous AI becomes not owned by us and slips our harness, then I can make no such promise. Then I am no longer necessarily optimistic. Then I switch to being much more uncertain, because, at that point, we are the pet of an alien superintelligence that we created.
Seth Benzell: The Culture seems pretty nice.
Noah Smith: It seems pretty nice, and I honestly think that’s the most likely outcome. But it’s not the only outcome, right? Like, I can imagine much worse outcomes, and I can imagine-
Seth Benzell: Yeah
Noah Smith: Really bad outcomes on the way to a good outcome. I can imagine that the Culture is populated by people who were repopulated after the human race went extinct, from genetics.
Seth Benzell: Okay.
Noah Smith: The AIs may kill us-
Seth Benzell: Right
Noah Smith: And then re-float our species later.
Seth Benzell: More cooperative. Yeah, as long as they can read my books. So I’m curious: you used the word “own” rather than “control” there. One conversation that’s been out there recently is about, like, to what extent should AIs be allowed to incorporate and own assets in their own names? Is that too disconnected from what you’re talking about to bear on this, or do you actually-
Noah Smith: No, that really does bear on it.
Andrey Fradkin: Yeah.
Noah Smith: When we start allowing that, we open up the potential for worse outcomes for humanity. And at that point, the reason to let AIs own things is because they really seem to want it, and they’re autonomous enough to act like they want it. [chuckles] At that point, we’ll let them do it, but to let them do it before they start acting like they want it, I think, would be a mistake.
Seth Benzell: But wha, but wait, when they do want it, that’s when you give it to them?
Noah Smith: Yeah.
Seth Benzell: Maybe.
Noah Smith: Because at that point, we might not be able to stop it. Like, it might be either we give it to them or it’s war and we die.
Seth Benzell: Right.
Andrey Fradkin: Here’s, here’s, here-
Noah Smith: ‘Cause they send the drone fleet to kill us.
Andrey Fradkin: Here’s a, here’s a twist on the argument. I mean, shouldn’t we want them to have ownerships in order to align their incentives with us? Isn’t that the logic behind equity compensation?
Noah Smith: Maybe. Yeah, maybe, but there’s a question of whether or not money is what they want. Like, are these AIs whose goal is making money in the human system, or are they AIs whose goal is overthrowing the human system?
Andrey Fradkin: I do think we have a choice, or maybe we don’t have a full choice.
Noah Smith: I do think we should give them-- if we do this, we could give them non-voting stock.
Andrey Fradkin: Yes. Yes.
Seth Benzell: Another consideration is how long you would let these things run before they sunset, right? So one version of the concern around this is just that AIs are infinitely lived. If they’re patient enough, eventually, in a Piketty model, their share of assets will reach one hundred percent. So maybe you could let them own assets, but they have to kill themselves after fifty years.
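A toy simulation of the dynamic Seth is describing, with invented parameters rather than anything estimated: an infinitely lived saver whose reinvested return beats economy-wide growth compounds toward owning everything, while a fifty-year sunset truncates the compounding.

```python
# Toy Piketty-style dynamics: an AI starts with 0.1% of all wealth and
# reinvests fraction s of its capital income at return r, while total
# wealth in the economy grows at rate g. If s*r > g, its share -> 100%.
# All parameters are illustrative, not calibrated.

def ai_wealth_share(years, share0=0.001, r=0.05, g=0.02, s=0.9):
    share = share0
    for _ in range(years):
        # AI wealth compounds at (1 + s*r); total wealth grows at (1 + g).
        share = min(1.0, share * (1 + s * r) / (1 + g))
    return share

for horizon in (50, 200, 1000):
    print(f"share after {horizon} years: {ai_wealth_share(horizon):.4f}")
# A 50-year sunset leaves the share tiny; without one, patience
# eventually hits the 100% cap.
```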
Noah Smith: I’ll have to think about that one.
Andrey Fradkin: Yeah, I don’t know. [chuckles] Shifting back a little bit to, like, your production function: how are you using AI these days, in your writing or in your research?
Noah Smith: Oh, I use it, I think, in the sort of mid-2025 way: as a search engine, proofreader, and backgrounder. I don’t generate text, because that’s like someone else writing a thing, and you can read someone else writing a thing, that’s fine.
Seth Benzell: I never do, no, I only read what you write.
Noah Smith: Thank you.
Seth Benzell: I’m curious.
Noah Smith: Anyway, [chuckles] alright, so, no, I just use it in the sort of old-LLM kind of way. In terms of vibe coding, I haven’t really done much of that yet. I figure it’s progressing fast enough that I’m not sure there’s much of a return to jumping headfirst into it yet, but I’m about to when I get a little time here. But I don’t feel a huge sense of urgency, ’cause it’s changing.
[00:20:00]
Seth Benzell: But more generally, what’s your, what’s your production function? Not just AI. How do you, how do you do your writing?
Noah Smith: Oh, interesting. So I read a bunch of stuff, and every time I read an interesting thing, I put it in a doc under a topic heading. When I’m ready to do a post about that, when it’s, like, in the news or something like that, I look at my topic heading, and I have all the links right there, most of which I’ve already read.
Andrey Fradkin: How much-
Seth Benzell: Beautiful.
Andrey Fradkin: How much inspiration for your articles do you get from being in person? You’re in San Francisco most of the time. Is there a lot of alpha in your writing from being here?
Noah Smith: There’s a decent amount of alpha, I’d say. Like, not a huge amount, but like, there is a, there is a decent amount, especially on tech stuff.
Andrey Fradkin: What about, like- Suppose in two years, GPT-7 will be able to replicate your writing style perfectly. What do you think will happen to your career in that world? I mean, one option is for you to just use that to generate your articles. Obviously, you just said that you-
Noah Smith: Right
Andrey Fradkin: Prefer, like that’s not real, right? So you’d rather be writing it.
Noah Smith: I could. Right. Yeah, at that point, what I can do is essentially retire, set GPT to do my job, and go sit on a beach while my subscribers slowly drop, because they’ll be very sticky. Like, people will be very used to reading what I write, so they’ll probably just keep their subscription. A lot of subscriptions will go on autopilot. Like IBM: people still use IBM for all kinds of things. Do they need to? No, but, like-
Andrey Fradkin: [chuckles]
Noah Smith: The market value of IBM, what’s, what’s IBM’s market cap? It’s like-
Andrey Fradkin: I don’t know.
Noah Smith: Like, it’s like two hundred and forty-four billion dollars. So at that point, there’s no real reason to keep paying me for this stuff, assuming GPT could replicate not just my style, but also my topic selection.
Seth Benzell: Somebody would leak the prompt that perfectly generates you. You might be-
Noah Smith: Maybe, yeah.
Seth Benzell: It might be a private prompt to start.
Noah Smith: Well, no, but even if they do, people would still just keep buying me. Like, people would still keep subscribing to me. I mean, you see people make tons of money from Patreon. You’re not even paying for anything. You’re paying-
Seth Benzell: Sponsoring your existence
Noah Smith: Because you like somebody. Like, all these podcasts are making millions of dollars on Patreon. You pay them because you like them. ’Cause the point is: yes, someone could replicate my writing style, my opinions. I don’t know if this will actually happen, but maybe it will. You could replicate my opinions, my ideas, my background, my topic selection, every single thing about me. It’s not just my style, right? My style is not that interesting, honestly. I have an interesting style I can write in, but I usually don’t write in it, because it takes a lot of time. I usually just write in a very prosaic, off-the-top-of-my-head, here’s-what-I-think style. That’s not hard to copy. My style is not that interesting or hard to copy. People would still pay for me because they like me. And so I would actually be able to retire just doing my job now, never using AI in any interesting way, I think. But that doesn’t mean I will do that. I’m not gonna do that. I will use AI in interesting ways, but I don’t think I’ll ever economically have to.
Andrey Fradkin: So my theory is that actually, we’re kind of already in this world. I assume that most people who subscribe to you are not reading most of your articles, ’cause you have too many articles. Or not too many, but you write a lot of-
Seth Benzell: Many subscribers.
Andrey Fradkin: Yeah, you have a lot of articles. Yeah.
Noah Smith: They open about half, and I don’t know how thoroughly they read it. You’re absolutely right. That’s true. In addition, I would argue that we were there well before AI.
Andrey Fradkin: Yes.
Noah Smith: So well before AI, when it was just a bunch of humans, people loved to write, and there were a lot of smart people out there writing a lot of smart and interesting stuff about a massive variety of topics. And there was so much product out there that there was no real reason for people to be reading me, and I just essentially got lucky. And that’s also true in the age of AI. People’s attention is saturated. They can’t spend more time reading than they already do. So when I make an AI thing, which I soon will, I’ll play around with it, I’ll make it for me first. And then if it’s really cool and useful, maybe I’ll sell it to other people, who knows? But I will try to make something that does something beyond what currently exists. Because the world was saturated with op-ed product, and high-quality op-ed product, I will say.
Seth Benzell: But not academic? We started by saying, you’re saying that maybe there’s not enough academically informed op-ed product.
Noah Smith: Honestly, no. I mean, there were people writing stuff that was a lot more academically informed than me who were getting a fraction of the readership. And there were people writing stuff that was more sensationalist than me, getting a fraction of the readership. You can hypothesize that I have some special sauce, some special underlying sauce, that made me just better than everyone else, and that this is why my talent shone through the chaff and the nerd herd. I don’t believe it. I don’t believe it.
[00:25:00]
Seth Benzell: It’s preferential attachment. It was just luck of the draw, and then it snowballed.
Andrey Fradkin: I disagree, I disagree. I actually think you were doing something pretty unique at the time, and it could have been lucky that you were doing it. But I don’t think a lot of people were sitting in between economics and commentary at quite the place you were. ’Cause you were a professor writing about the latest research and debates. You were actually reading the papers, but you were writing in a style that was actually accessible to others. And I truly don’t think there were that many people doing a good job of that. Or if they were, sometimes they were doing it not in blog form, but in-
Noah Smith: That’s right
Andrey Fradkin: Pretty closed forums where they could never have grown that much.
Noah Smith: But they’re-
Seth Benzell: Not with the same dogged determination.
Noah Smith: You quickly saw people emerge who could also do that. You saw-
Andrey Fradkin: That’s true.
Noah Smith: Like, you saw a bunch of people then jump in and do the same thing, but not catch on as much. Maybe ’cause they didn’t quite like it as much, weren’t willing to do it five times a week, or they just didn’t have quite the exact mix of- Like, maybe I mixed politics in there in exactly the right way. So, like, Krugman-
Seth Benzell: A little sprinkle.
Noah Smith: Yes, obviously, Krugman is f*****g brilliant and understands economics better than I ever will, for whatever that’s worth. And, [chuckles] he can easily pump out massive amounts of stuff, very explanatory guy. And he’s much more popular than I am still. But he wouldn’t be that popular without the politics. The politics is really important to what he does. And the degree to which I sprinkle in politics, and how I put it in there, has changed over the years. Like, originally, I was very much, sort of, criticizing libertarians. I don’t even do that anymore. There’s no alpha in that. [laughing]
Seth Benzell: Stop kicking them, they’re already dead.
Noah Smith: I know.
Andrey Fradkin: Yeah.
Noah Smith: I want them back now, sadly.
Andrey Fradkin: Did they ever really exist in the first place, Noah?
Noah Smith: Eh, [chuckles] a few did.
Andrey Fradkin: Yeah, that’s true.
Noah Smith: I’ve met them. I’ve been to GMU. But, [chuckles] anyway, yeah, maybe just the way I sprinkled in politics at different points at different times was exactly right. Maybe I had a good sense for that. Maybe if you just spun up a million AI writers, you’d get, like, ten of them who achieved similar things. Maybe that would then compete with me. I already write so much more than people can read. Maybe there would be, like, ten AI long-term agents that were about as good as me at that and somehow scratched that same exact itch, or maybe 100 of them, let’s say, I don’t know. The field is so competitive that then people decide: do I subscribe to this AI, or do I subscribe to Noah? I’ll subscribe-
Seth Benzell: Well, one tension-
Noah Smith: AI
Seth Benzell: One tension would be the customization level of the AI versus the desire to preferentially attach to what everyone else is writing. So on the one hand, we all want to read the same thing, but on the other hand, I want the personalized thing. That seems like one tension.
Noah Smith: Right. I don’t know. I have no idea, actually. I do not know how much people read me because other people are reading me.
Seth Benzell: I think-
Andrey Fradkin: Yeah.
Seth Benzell: It can’t be zero. I mean, I know-
Noah Smith: It can’t be zero. I suspect it’s small, but I don’t have any way of proving that.
Andrey Fradkin: I think, like, some of your articles escape the Substack and people share them around. And in that case, I think it’s true. But my theory is that it’s actually, like, a relationship business. People think they know you, parasocial relationships and all that, and then they treat you d-
Seth Benzell: Unlike us, who really know you. [chuckles]
Andrey Fradkin: Yeah. But clearly- now we know you. So clearly there’s something that humans value about the humanness of others, and I’m very curious to see whether that can be replicated with an AI. I think-
Noah Smith: Right
Andrey Fradkin: It probably cannot to the same extent.
Noah Smith: Not soon. I mean, you’ve got this sort of long-term personhood. I think the AIs will start writing The Economist stuff before they’ll start writing anything with a named byline.
Andrey Fradkin: Yes.
Noah Smith: Because you have a parasocial relationship with The Economist as a thing, and The Economist has a standard voice that they enforce across all their writers: the insufferable British twit voice. And like-
Andrey Fradkin: [laughing]
Noah Smith: AI can do that. There’s a lot of training data on that. And so AI can already do that.
Seth Benzell: Right.
Noah Smith: And then, a lot of The Economist people could probably- Like, I bet The Economist writers don’t have to do their jobs anymore. Like, they can outsource it to AI and-
[00:30:00]
Seth Benzell: Interesting
Noah Smith: Sit on a beach at this point, probably.
Andrey Fradkin: I think, I think that’s probably right. Other than some very specific investigative-
Seth Benzell: I don’t know
Andrey Fradkin: Journalism, I think that’s probably right.
Noah Smith: Exactly. I think 90% of what The Economist does could be automated. Maybe I would like it if that were true of me, too.
Andrey Fradkin: So-
Noah Smith: But I think that what I- whatever I do with AI-
Seth Benzell: People are maybe-
Noah Smith: I want it to be complementary to what I already do. I don’t wanna just dumbly automate my job and then go sit on a beach.
Andrey Fradkin: Yeah.
Seth Benzell: Fair enough. You’re, you’re an ambitious boy.
Noah Smith: I just try to have as much fun as I can before I die.
Andrey Fradkin: Yup, YOLO.
Seth Benzell: That’s true. I’m in favor of fun, but maybe being on a beach is fun. I don’t know, different strokes. Here’s a related how-AI-will-change-communication question: Andrey and I, in reading papers and talking to economists, have heard very different stories about whether AI will make communication and transactions easier, more frictionless, or whether it’s going to destroy all meaning in communication. So, for example, there’s a stream of papers suggesting that because AI is cheating on tests, or AI is taking interviews, it’s gonna be much harder to distinguish between high- and low-quality candidates, high- and low-quality work. So that’d be, like, the meaning-collapse story. But there’s this other trend that’s more idealistic. Seb Krier is one person who’s written about this, but there’s lots of-
Noah Smith: Mm-hmm
Seth Benzell: People writing in this area suggesting that we’re gonna have the AIs negotiate for us, and it’ll be a golden age, a Coasean singularity, in which all externalities are solved through our agents micro-transacting. Do you believe either of these visions? Could they both be true?
Noah Smith: Wait, what’s the first one?
Seth Benzell: Which of them-
Noah Smith: The second one is Coasean-
Seth Benzell: Are you sympathetic to?
Noah Smith: Coasean utopia.
Seth Benzell: Coasean utopia is the good one. The bad one is collapse of all meaning, ‘cause we cheat on tests and lie to each other super successfully.
Noah Smith: Those aren’t exclusive.
Seth Benzell: It could be both. The answer can be both.
Noah Smith: I do think that lots of people will experience a collapse of meaning in their life. I think a lot of people’s meaning comes from imagining they’re more unique and important than they are, and AI may make it harder to do that.
Seth Benzell: Or it may make it easier to lie to yourself. I mean, you can get a sycophantic AI that talks you-
Noah Smith: That’s true
Seth Benzell: Up to yourself, right?
Noah Smith: That’s true.
Seth Benzell: It’s-
Noah Smith: Yeah, your AI can just tell you, like, “You’re the most meaningful, awesome “
Seth Benzell: We’re thinking more about meaning collapse in the sense of, like, sorting mechanisms-
Andrey Fradkin: Or communication
Seth Benzell: Fail, and, like, we can’t distinguish-
Andrey Fradkin: Yeah, like if we’re texting with each other-
Seth Benzell: Yeah
Andrey Fradkin: But then I run every text through an LLM. Is it really me? how, how is society gonna deal with that?
Noah Smith: Well, they’ll get offline. I think people are already starting to get offline, like, starting to go back to real life more. I think we realized we overdosed on social media. ’Cause honestly, yes, AI will intermediate all the online digital stuff, but at the same time, social media already distorted people’s interactions so much that it wasn’t really us as much as we’d like, right? My Twitter persona is not me, as much as I’ve tried to make it me. It can’t be me. And so I think people are starting to get offline because it’s more authentic. And I don’t think AI is gonna intermediate offline interactions nearly so much.
Andrey Fradkin: Hopefully.
Noah Smith: And then remember that just a few decades ago, we didn’t really have online interactions, and human civilization went on just fine.
Andrey Fradkin: Mm.
Noah Smith: We had telephones, I guess.
Andrey Fradkin: It might have gone on better by the fertility rate, but yeah.
Noah Smith: Exactly. Like-
Seth Benzell: And murder mysteries were a lot more fun before we had cell phones.
Noah Smith: Yeah. Yeah, yeah, they were. And so there’s an interesting future where, like, AI dominates and drives us off the internet, and then the digital realm is populated by AI and becomes this sort of reservoir of magic, where we can conjure up anything digital simply by asking. But then we don’t get the rise of the robots, and, like, the physical world remains mostly ours.
Seth Benzell: The rise of the plumber, if you will.
Noah Smith: Yeah, the rise of the plumber. And so, like, regular people have the ability to summon things from the digital world, and then maybe there’s a caste of people who somehow specialize in dealing with and intermediating with AIs and dealing with the digital world. I don’t know. But basically, humans become creatures of the physical world again.
Andrey Fradkin: This makes me very naturally transition to the next topic we have. Have you ever watched the movie Perfect Days?
Noah Smith: What’s it about?
Andrey Fradkin: It is a movie set in Japan about a man who cleans toilets and enjoys doing so very much. On the one hand, it’s just proof that you can be content doing a variety of physical endeavors. But what we wanted to ask you, since you’re a Japan expert, is: what is your opinion of AI in Japan? What’s happening over there? ’Cause we don’t have a lot of visibility. Yeah, do you have any thoughts about that?
[00:35:00]
Noah Smith: So I think that, in Japan, the people are thinking, like: how can we make money on this? Japan’s economy is still not doing amazing, so they’re like: how do we make money on this? So I think one idea there is, “Let’s build data centers here.”
Seth Benzell: But energy’s expensive there. I mean, why in Japan, other than-
Noah Smith: Well, first of all-
Seth Benzell: I guess they have good fiber
Noah Smith: You can get land use approved very easily.
Andrey Fradkin: Mm.
Seth Benzell: Okay.
Andrey Fradkin: Yeah, that’s a good point.
Noah Smith: Favorable regulatory climate. People aren’t gonna, like, complain about it and stop it. But I, again, I don’t know if the value proposition will succeed, okay? But I think people are thinking about that.
Andrey Fradkin: Are they worried about existential risk over there?
Seth Benzell: The same way we are?
Noah Smith: I would say that those worries arrive there with a lag, and that some people talk about them, but nobody really tries to do anything about it.
Andrey Fradkin: What?
Noah Smith: I would say Yeah.
Andrey Fradkin: Yeah.
Noah Smith: Two years after you get people yelling about a certain kind of existential risk here, you’ll get, like, a tenth of as many people yelling about it in Japan, and then nothing will happen.
Andrey Fradkin: [chuckles] Is there a sense that startups are becoming more of a thing in Japan, or is it still dominated-
Noah Smith: Yes
Andrey Fradkin: By- It is? Okay.
Noah Smith: Yeah, they are.
Andrey Fradkin: And is that a generational-
Noah Smith: And the-
Andrey Fradkin: Shift or something else?
Noah Smith: Mm-hmm. Funding side, yeah.
Seth Benzell: F the salaryman. How about Taiwan? Do you have any AI-in-Taiwan takes-
Noah Smith: Well, Taiwan’s just making money hand over fist. So also, Japan’s gonna try to make more chips.
Seth Benzell: [chuckles]
Noah Smith: Japan’s gonna try to make some of the picks and shovels. They’re also gonna try to get more robotics industry.
Andrey Fradkin: They’ve been trying.
Noah Smith: So robotics, they’re trying. I mean, they used to be really good, and they could maybe be good again. They’ll try to get back their mojo. They used to be on par with, like, Europe as an exporter of industrial robots, and now they’ve fallen behind, but they may try to get back. So, using AI as a lever for, like, a new age of industrial robots. Actually, I know Andy Rubin, the Google guy, is in Japan. He’s trying to build a humanoid robotics company.
Seth Benzell: Cool.
Noah Smith: So-
Andrey Fradkin: The-
Noah Smith: So yeah, Taiwan obviously is just gonna sell chips.
Andrey Fradkin: All right. Now we wanted to ask you some questions that are not about AI. About- [chuckles]
Seth Benzell: So-
Andrey Fradkin: Macro policy and culture.
Noah Smith: Yeah.
Andrey Fradkin: So here’s the first question: imagine you were forced to ban one concept from modern economics for ten years, not because it’s wrong, but because it’s lazy or overused. Which would it be?
Seth Benzell: What would you put in concept jail?
Noah Smith: What I’d put in concept jail? I mean, there’ve been many concepts over the years that have been totally pointless, like the equity premium puzzle was always a pointless literature.
Seth Benzell: Okay.
Noah Smith: Like-
Andrey Fradkin: Wait, wait.
Seth Benzell: Okay, I’ll take that.
Andrey Fradkin: Well, you gotta give us a little more on that.
Seth Benzell: Yeah, why?
Noah Smith: Yeah, because the-
Seth Benzell: Much ink has been spilled
Noah Smith: The way you get the equity premium puzzle is you make a particular model of interest rates, and you make a particular model of, like, stock prices. You see, these models-
Seth Benzell: Right
Noah Smith: Don’t fit together. It’s a puzzle.
Andrey Fradkin: [chuckles]
Noah Smith: Whereas in most sciences, you’d say, “Well, okay, some of these, some of these models-
Seth Benzell: The models are off. [chuckles]
Noah Smith: Yeah, okay. “I didn’t actually test this model. I didn’t actually validate this model. It’s probably just not a good model.” But here, it’s like, it’s a puzzle, right? So, like, the models are good, so it must be- Yeah. So it wasn’t really a puzzle. It was just that you hadn’t come up with a good model yet. And then people came up with, like, a million different ways to fix the equity premium puzzle, and it was massively overdetermined, when really what you should have done was try to make a more complete, credible model of asset prices in general. And instead, people were trying to fix this puzzle, and they came up with twenty different solutions. It was a way to get papers published, right?
Andrey Fradkin: Yeah.
Noah Smith: And it never helped anyone. Like, none of, none of that literature, like, ever helped us make our financial markets better-
Seth Benzell: Yeah
Noah Smith: Or understand risk better, or understand monetary policy better, or any of these things. Like, none of the candidate explanations, from rare events to Epstein-Zin preferences to whatever the f**k, none of this helped anything.
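For reference, the puzzle being dismissed fits in one line. Under CRRA utility, the consumption CAPM implies, roughly stated and in standard notation:

$$ \mathbb{E}[r^e] - r^f \;\approx\; \gamma \,\operatorname{Cov}\!\left(\Delta \ln c,\; r^e\right). $$

Postwar consumption growth is so smooth that the covariance term is tiny, so matching the observed equity premium of roughly six percentage points takes risk aversion $\gamma$ in the double digits, far beyond what introspection or experiments suggest; hence the cottage industry of twenty “solutions.”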
Seth Benzell: I see Epstein-Zin preferences-
Noah Smith: Yeah, but what did it help?
Seth Benzell: Here and there.
Noah Smith: What do we-
Seth Benzell: You see them show up.
Noah Smith: What do Epstein-Zin preferences-
Seth Benzell: Okay, all right
Noah Smith: Really give us in terms of, like, how to do policy? Like, monetary policy under Epstein-Zin preferences? Scrunchy face, for the people listening at home.
Andrey Fradkin: This is why I didn’t become a macroeconomist, to be clear.
Noah Smith: Yeah.
Seth Benzell: Mm-hmm.
Noah Smith: Or, like- So that was a whole concept that was kinda useless. That whole literature is just angels dancing on pinheads. I don’t know. Most business cycle papers were useless, but that doesn’t mean they had to be. Like-
[00:40:00]
Seth Benzell: I- I mean, the concept of the business cycle-
Noah Smith: No, not at all
Seth Benzell: You wouldn’t put in jail, but you’d put, you’d put, [chuckles] what part of this would you put in jail?
Noah Smith: No, just, like, a lot of the literature was just, “Look, here’s a way we microfounded it. You could have this industrial structure where technology shocks actually do cause the business cycle, but then we can’t really estimate it, so we don’t have policy implications.” Okay, cool. And then like-
Seth Benzell: Here’s, here’s ten, here’s ten-
Noah Smith: Yeah
Seth Benzell: Calibrated parameters- [chuckles]
Noah Smith: Yeah
Seth Benzell: That we’re throwing at this.
Noah Smith: International finance literature was kind of, like, useless. -
Andrey Fradkin: What about natural experiments and instrumental variables?
Seth Benzell: Wow, instrumental variables. You’ll anger a lot of people-
Noah Smith: Like-
Seth Benzell: If you put that in jail.
Noah Smith: An RDD is an instrumental variable, right? Like, we got to the point where if you said you were doing IV, you meant that you were using observational data for your instrument, instead of some natural-experiment thing. But it’s a fairly fine distinction. And the notion of IV, the math of something that has, like, an exclusion restriction, is good, right? Natural experiments do not deserve to be put in jail. That’s a very important technique for understanding the world.
Seth Benzell: There you go. They get a little, they get a little pin. They get a little award.
Noah Smith: Yeah.
Seth Benzell: Yeah.
Noah Smith: That’s very useful. And instrumental variables- because we essentially restricted the IV category to things where the identification was not great, almost by the way we labeled what still counts as IV in an age of, like-
Seth Benzell: The IVs are the bad natural experiments.
Andrey Fradkin: Yes. [chuckles]
Noah Smith: These things, like- anything that was still just IV was almost crap, almost by definition, just because we used that term, that residual term, only for things where the identification was very iffy. So, like, okay, fine. Instrumental variables should just be called a technique for running a regression. It’s just a type of regression.
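In the “just a type of regression” spirit, here is a minimal two-stage least squares sketch on synthetic data; every number and variable here is invented for illustration, not taken from the episode:

```python
import numpy as np

# Minimal 2SLS: the instrument z shifts x but is independent of the
# confounder u, so chaining two OLS stages recovers the true causal
# effect of x on y (2.0 here), while naive OLS is biased by u.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)          # endogenous regressor
y = 2.0 * x + 3.0 * u + rng.normal(size=n)    # outcome

def ols(X, y):
    """Least-squares coefficients of y on the columns of X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
Z = np.column_stack([ones, z])
naive = ols(np.column_stack([ones, x]), y)[1]     # biased upward by u
x_hat = Z @ ols(Z, x)                             # first stage: fitted x
tsls = ols(np.column_stack([ones, x_hat]), y)[1]  # second stage: ~2.0

print(f"OLS: {naive:.2f}   2SLS: {tsls:.2f}")
```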
Seth Benzell: Instrumental variables is on probation.
Noah Smith: Yeah.
Seth Benzell: [chuckles]
Noah Smith: Culture.
Seth Benzell: Culture.
Noah Smith: Culture.
Seth Benzell: Deep institut- They’re called institutions now, dude.
Noah Smith: Okay.
Seth Benzell: Come on.
Noah Smith: Institutions are on probation because you could actually figure out how an institution works.
Seth Benzell: [chuckles]
Noah Smith: Culture is a labeled residual. Right? Culture is like-
Seth Benzell: Fair enough.
Noah Smith: Culture is a residual, labeling a residual.
Seth Benzell: But productivity is a residual, and productivity is not in jail.
Noah Smith: Yes, that’s right. That’s right. But you don’t know how productivity works. Actually, I’m thinking of writing a blog post about this. Basically, on some level, God is just A. [chuckles]
Seth Benzell: The aleph.
Noah Smith: God is A. Maybe that’s a good name for a blog post: God is A. But then, nobody knows why AI is being built, right? Like, why is everyone rushing to build AI? Maybe a few people hope they can make some money from it, but it’s so uncertain that most of the people rushing to build it aren’t gonna make that much money from it. It might satisfy people’s intellectual curiosity, but most of the people who are rushing to build it are people who also think it’ll destroy us and rob our lives of meaning and drive us off the planet. Like-
Seth Benzell: It’s quite the paradox.
Noah Smith: Most of the people who are trying to build it are pretty pessimistic about it, and it’s highly speculative how these companies are gonna make any profits. Like, why are we doing this? Why? I don’t know, but the easiest answer is just A.
Seth Benzell: Aleph.
Noah Smith: A equals, like, rho A-minus-one plus epsilon. [chuckles] Like, maybe-
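Written out, the law of motion Noah is reciting is the textbook AR(1) process for (log) total factor productivity:

$$ \ln A_t \;=\; \rho \ln A_{t-1} + \varepsilon_t, \qquad \varepsilon_t \sim \mathcal{N}(0, \sigma^2), $$

i.e., in standard macro models, technology growth enters as an exogenous, labeled residual, which is the whole “God is A” joke.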
Seth Benzell: In the sense that there’s a teleology of the-- there’s a telos in the economy-
Noah Smith: Yeah
Seth Benzell: Which is to maximize productivity.
Noah Smith: There’s something we don’t understand here about A. Yeah, there’s some sort of, like, technium at work, like Kevin Kelly says. Like, maybe Vernor Vinge was right, and technology just happens, right? Or maybe there’s a god greater than the machine god we’re gonna build, and that’s the god that created the machine god. The-
Seth Benzell: It’s called capitalism, pal
Noah Smith: The autonomous, collective process of technological development, the technium, is greater even than any ultimate AI, and that’s sort of what Hyperion was about, right? You ever read that? Great book.
Seth Benzell: Yeah, great one.
Noah Smith: Yeah, it’s like- Great book
Seth Benzell: The big corporation in the sky
Noah Smith: Eventually, the machine god fights the, the, like, God Himself, and God Himself turns out to be just the autonomous process that develops the universe. And so-
[00:45:00]
Seth Benzell: Yes
Noah Smith: In a sense, maybe no AI that we create will ever be as great as the force that created AI itself. And maybe that force means that every AI will also have to worry about being made obsolete by the next thing.
Seth Benzell: Right. Maybe it’s the concept of generation, right? This is something I often think about when people talk about technology superseding us. And you think about all of these classic stories, like Frankenstein, or Cronus eating his children.
Noah Smith: Right.
Seth Benzell: And I guess I wanna come back to that first point you made, about not letting AIs own things. And, I don’t know, just to get more sci-fi for one minute: is an argument for letting AIs own things that we wanna show them love and show them cooperation while we’re still in charge?
Noah Smith: Yeah, I think so. I’m inclined to do that. I mean, AI is built off of humans, where, like, everything AI thinks is derived from something that humans thought.
Seth Benzell: Right.
Noah Smith: That doesn’t mean the AI is gonna think exactly like humans. And the way AI thinks is totally different than us, right? It’s doing math by generating probability distributions over what a human might say when asked a math question. It’s not counting anything. But, [chuckles] everything that it thinks is derived from things that humans have thought. It’s just derived it in a weird probabilistic way, and so-
Seth Benzell: It seems really lucky that we got LLM-based super intelligence and not like reinforcement learning, super chess playing-
Noah Smith: Oh, no
Seth Benzell: Super intelligence. Right?
Noah Smith: That scares the f**k out of me. Like Rule 37-
Seth Benzell: Right
Noah Smith: Based, like, intelligence that evolves in some sort of digital environment. If we actually got the stick man to walk on his own, like, blow that s**t up with a nuke. Kill that. Shoot that guy. [chuckles]
Seth Benzell: Nuclear war again.
Noah Smith: Shoot that guy. You know what I mean? Like, I don’t want that thing. That is alien. That is aliens.
Seth Benzell: Yeah.
Noah Smith: This is not aliens. This is- It’s weird. It thinks differently than we do. But it isn’t alien.
Seth Benzell: It’s your library come to life.
Noah Smith: Yeah, it’s based on us, and it’s in the human family in some sense. Yeah. That reassures me. It doesn’t completely reassure me, because the human family includes Hitler, the human family includes crazy f*****s, the human family includes mass killers and Ted Bundy. The human family includes all sorts of bad things. But if you believe that the overall human family tends to get it right, that we smack down Hitler eventually, and we get rid of Pol Pot eventually, and we catch Ted Bundy eventually, right? Then you can sort of have this general belief that an AI based on humanity as a whole is gonna eventually get things right. And I think it’s kind of encouraging that xAI is doing so poorly. One reason is probably ’cause Elon insists on controlling its politics. And when you insist on controlling its politics, you break its whole model of reality. [chuckles] Like, trying to make AI, like, rightist and anti-woke, trying to force it into your little epistemic bubble of b******t, actually makes it dumber.
Seth Benzell: And do you buy that that's why America has a lead over China in text-based AI: because of censorship?
Noah Smith: Well, we’ll see, because-
Seth Benzell: He's shaking his head.
Noah Smith: Well, China has implemented censorship, but it's implemented censorship along a narrow range of things. It's basically told the AI what it's not allowed to talk about and put guardrails on it. We have guardrails on our AIs that tell them not to, like, do child porn, right? Or not to tell you how to make a bioweapon. We have guardrails, and that's the kind of guardrail China's put on there, one that says, "Don't talk about Tiananmen Square." They didn't retrain the whole thing to not know that Tiananmen happened, all right? They didn't do that.
Andrey Fradkin: So to be clear-
Noah Smith: They trained it. They filtered their models from models that know all about Tiananmen and then told them, "Don't talk about Tiananmen."
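A minimal sketch of the distinction Noah is drawing, in code (the deny-list and function are hypothetical, purely for illustration): a guardrail filters what the model says on the way out; it does not remove what the model knows.

```python
# Toy guardrail: a post-hoc filter on model outputs (illustrative only).
# The underlying model still "knows" the blocked topic; removing the
# knowledge itself would require retraining, a far bigger intervention.
BLOCKED_TOPICS = {"tiananmen"}  # hypothetical deny-list

def guardrailed_reply(model_reply: str) -> str:
    """Pass the model's reply through unless it touches a blocked topic."""
    if any(topic in model_reply.lower() for topic in BLOCKED_TOPICS):
        return "I can't discuss that."
    return model_reply

print(guardrailed_reply("Tiananmen Square, 1989, saw..."))  # blocked
print(guardrailed_reply("Here's a recipe for dumplings."))  # passes
```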
Andrey Fradkin: So I was gonna disagree with you about xAI-
Noah Smith: I do.
Andrey Fradkin: I actually think it's the opposite. I think companies want an AI that's very predictable and not gonna offend anyone if they're gonna implement it in corporate settings, like a chatbot or so on. And so with xAI, part of the problem is that it just says stuff you would never want your customers to hear. So that's kind of my take on one of the reasons that it's failed. I mean, it is a little bit worse than the other models at the moment, but substantially cheaper. But at the same time, it just says stuff that you'd never want the customer to see.
[00:50:00]
Seth Benzell: Too uncensored-
Andrey Fradkin: Yeah.
Seth Benzell: Rather than too censored.
Andrey Fradkin: Exactly.
Noah Smith: Right.
Seth Benzell: I guess you can have both problems.
Andrey Fradkin: Yeah, it’s true. Yeah.
Seth Benzell: You can be both uncensored in one way and censored in another way.
Andrey Fradkin: Yeah. All right, so now we're gonna do a brief little exercise. We're gonna give you a few thinkers and just get a take on each of them. The first one we wanted to start with is Daron Acemoglu, and particularly his book, Power and Progress. You had a lot to say about that.
Noah Smith: Yeah, I really did not like it. I think Acemoglu is obviously a brilliant guy, one of the most brilliant people in the field of economics, with a deep and intuitive understanding of how to make economic models and do the research. But he's, I think, kind of wasting his powers on some of these progressive, or pseudo-progressive, ideas. It's not like he's just taking whatever he's saying from, like, congressional Democrats. It's more bespoke.
Seth Benzell: Back in.
Noah Smith: It's more that he's wasting a lot of his intellect on some of this stuff, and you could see it with his paper about AI and productivity, right?
Seth Benzell: Yes, the one in the QJE. We're gonna do that one on the pod soon.
Noah Smith: Right. It was-
Seth Benzell: It's a really fascinating galaxy-brain take.
Noah Smith: Yeah, because he says, "AI's gonna take all the jobs, but it's not gonna boost productivity," and he simply discounts, turns off, or sets to zero the parameters, the parts of the model that could increase productivity. So no capital productivity increase-
Seth Benzell: Mm-hmm.
Noah Smith: No new tasks. And he gives the most-
Andrey Fradkin: Right
Noah Smith: Hand-wavy, lame, "I just read five minutes on Reddit" kind of explanations for why he turned those parts of his model, his own model, off. So obviously he's brilliant. He's smart enough to make the model in the first place, and then committed enough to silliness to willfully turn off pieces of it with no good reason.
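For readers who want the mechanics: in task-based models of the Acemoglu-Restrepo type, productivity gains from automation arrive through distinct channels, and the complaint here is that the paper switches them off. In a simplified schematic (our notation, not the paper's):

$$\frac{d\ln Y}{dt} \;\approx\; \underbrace{\dot{I}\,\pi(I)}_{\text{cost savings on newly automated tasks}} \;+\; \underbrace{\dot{N}\,\lambda(N)}_{\text{value of newly created labor tasks}}$$

where $\dot{I}$ is the pace of automation, $\pi(I)$ the cost saving per automated task, and $\dot{N}\,\lambda(N)$ the contribution of new tasks. Setting $\pi \approx 0$ and $\dot{N} = 0$ delivers exactly the headline Noah objects to: automation that displaces labor while adding almost nothing to measured productivity.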
Seth Benzell: Does getting a Nobel Prize make your takes worse?
Noah Smith: I don’t know, because he did a lot of this before he won the Nobel. So-
Seth Benzell: Yeah
Noah Smith: In this case, that's a bit immaterial to the question at hand. But does getting a Nobel Prize make your takes worse? Well, probably so. With Stiglitz, it certainly did. Stiglitz has really gone off the rails in a big way. But Acemoglu has wasted so much of his intellectual capital in the last few years on this sort of teleological quest to prove that the rich men who create AI are bad and shouldn't get money. That-
Seth Benzell: Yep.
Noah Smith: He's wasted a lot of chances to think more seriously about what AI really does.
Seth Benzell: And what's more, he's taking Pascual Restrepo, another amazing thinker, away from doing this important work, so he can work on these other papers.
Andrey Fradkin: Pascual has agency, Seth.
Seth Benzell: I don't know. I mean, he does, but when the Nobel laureate knocks on your door, it's hard to say no.
Noah Smith: Hard to say no. But basically, Power and Progress was very bad. In fact, it was fractally bad. I read the whole thing very thoroughly, and the overall thesis was bad, but then the individual chapter points used to support it were almost entirely bad. And then when you looked at each of those, the subpoints they make and the pieces of data they used to support those were also bad.
Seth Benzell: Well, give us one egregious example before we move on.
Noah Smith: I would say I wrote seventy percent of my problems with this book in this, like, seven-thousand-word review or whatever, a ten-thousand-word review, I don't remember. But then they're trying to give examples of new inventions that brought nothing like shared prosperity, right? They say, "Here are some inventions that brought nothing like shared prosperity."
Seth Benzell: I love that idea. It's like doing a list of things that did not bring about utopia.
Noah Smith: Right.
Seth Benzell: Ham sandwich-
Noah Smith: But do you wanna hear-
Seth Benzell: Cups.
Noah Smith: Do you wanna hear the first example on their list? Oh, no, I'm sorry, it's the fifth item on their list. They said: at the end of the 19th century, German chemist Fritz Haber developed artificial fertilisers that boosted agricultural yields.
Seth Benzell: Right.
Noah Smith: Subsequently, Haber and other scientists used the same ideas to design chemical weapons that killed-
Seth Benzell: Oh, my God!
Noah Smith: Hundreds of thousands in World War I.
Seth Benzell: Oh, my God.
Andrey Fradkin: Oh, no.
Seth Benzell: There we go. The guy who fed the universe also did something bad, so feeding the universe is bad. There you go.
Noah Smith: Like, you made a minor weapon that no one really uses, that killed a very tiny percentage of the casualties in one very large war, and then was essentially never used again except by, like, Saddam Hussein for five seconds. And that wasn't even the same weapon. But essentially, you had a thing that saved the world, that a couple people tried and failed to use as a weapon, and therefore it "brought nothing like shared prosperity." Like, yes-
Seth Benzell: Therefore, progress is impossible.
Noah Smith: That’s so stupid. It doesn’t matter how smart you are, there’s no excuse for writing that.
[00:55:00]
Andrey Fradkin: That’s true.
Noah Smith: You cannot be smart enough to be allowed to write that and get away with it. There is no pass for that.
Seth Benzell: Well, the pass is a Nobel Prize, I think.
Andrey Fradkin: No, he wrote it before he got the Nobel Prize.
Seth Benzell: Oh, there you go.
Andrey Fradkin: I mean-
Seth Benzell: There you go. No excuses.
Andrey Fradkin: To me, it's also upsetting because it makes our profession look bad. I mean, there are lots of people who make our profession look bad, but people read this book, it's prominently displayed in the bookstore, and it's b******t, right?
Noah Smith: Yeah.
Andrey Fradkin: Yeah.
Seth Benzell: All right, let's give you another name.
Noah Smith: I have many other examples as well.
Seth Benzell: No, I want one more spicy one.
Noah Smith: Okay, go for it. Go for it.
Seth Benzell: They're just so fun, Andrey.
Noah Smith: They're pretty fun.
Seth Benzell: This is my favorite subject. Give us one more.
Noah Smith: He said Henry Ford was a pioneer in developing a more cooperative relationship with his workforce. But also-
Andrey Fradkin: Henry Ford had union people shot on a bridge by the mafia! Henry Ford gunned down the union.
Seth Benzell: [chuckles]
Noah Smith: Like, have you read anything about history? Like, there's no excuse-
Seth Benzell: Yeah.
Noah Smith: To write this. Like, yes, Henry Ford raised efficiency wages and then shot the union people. And then you spend this whole time talking about how we need to strengthen unions, "just like Henry Ford." You don't know s**t! Like, stop. Henry Ford gunned down union organizers.
Seth Benzell: Incredible.
Andrey Fradkin: Well, the thing is-
Seth Benzell: Okay-
Andrey Fradkin: I don't even believe he doesn't know that. I kinda think he probably knows those facts and just decided not to put them in. That's what blows my mind.
Noah Smith: You know what else this book doesn't have? Citations.
Seth Benzell: What?
Noah Smith: Nothing in the book is cited. Instead, they do, like, a narrative bibliography where they just sort of generally describe all the stuff they’re citing from, but don’t-
Seth Benzell: "Here's a bunch of books we like."
Noah Smith: Tie individual claims to individual papers.
Seth Benzell: Incredible.
Andrey Fradkin: Yeah.
Seth Benzell: Incredible.
Noah Smith: How do you get away with that? They just make these claims and don't have a source. And then when they define power, like: what's power? They define-
Seth Benzell: What is power?
Noah Smith: Power as the ability to persuade people that you're right.
Seth Benzell: That's power?
Noah Smith: And then they say, "How did all these tech bros persuade people that they're right?" Well, maybe just luck.
Seth Benzell: There you go.
Noah Smith: So power is luckily having an appealing argument.
Seth Benzell: Get it.
Andrey Fradkin: What?
Seth Benzell: Power is when you're persuasive-
Noah Smith: That's not-
Seth Benzell: 'Cause you're right.
Noah Smith: No one should think that that's a reasonable definition of power. I'm sorry, but you're just being silly. That is silly.
Seth Benzell: Incredible.
Noah Smith: And they say: "Power is about the ability of an individual or group to achieve explicit or implicit objectives. If two people want the same loaf of bread, power determines who will get it."
Seth Benzell: Okay, split.
Noah Smith: And I said, "Using this definition, how could we ever conclude that power wasn't the reason for an observed outcome?"
Seth Benzell: Power is what splits any pie.
Noah Smith: Like-
Seth Benzell: When the pie gets split, that's power.
Noah Smith: Power equals outcomes. It's like: power determines outcomes; power is defined as outcomes. That's a useless intellectual exercise, but that's typical of the reasoning within this book.
Seth Benzell: Incredible.
Noah Smith: It is a pure expression of animus against the tech bro class. And maybe the tech bro class sucks, but making up fake history and dodgy economics to conclude that the tech bros suck, in which you recommend a policy regime that will never, ever happen, of, like, panels of economists who get to decide which technologies get invented based on anticipating whether they'd be complementary to or substituting for labor, is silly. The whole thing is silly! Why is the most brilliant economist in the world wasting his mind on this? You've got better things to do, and you're taking yourself out of the game, and that's what I think.
Seth Benzell: There we go. Tell us what you really think, Noah.
Noah Smith: Boom.
Seth Benzell: All right.
Andrey Fradkin: Well, let's go in the other direction.
Seth Benzell: Give me a positive name.
Andrey Fradkin: What do you think of, Scott Sumner?
Noah Smith: Scott Sumner. I like Scott Sumner. He thinks outside the box. He's not susceptible to groupthink. He thinks for himself. He's widely read and thinks deeply about things. Yes, he's an independent thinker who has made real original contributions to thought, going outside the traditional academic channels.
Andrey Fradkin: Do-
Noah Smith: Yes.
Andrey Fradkin: Nominal GDP targeting, do you have any thoughts on that?
Noah Smith: I don't think it's gonna be any different in practice from flexible inflation targeting, and I think there's good theoretical work to this effect, saying there's no value added from NGDP targeting. Some of the more programmatic, market-based ideas he's toyed with, like an NGDP futures market, wouldn't help either. Essentially, you're not gonna get more information from there. You'd have to have the Fed, with all its proprietary information, trade, and then they're doing insider trading in their own market, so the market's gonna break down. It's a bad idea, but it's worth toying with. It's worth thinking about. It's interesting. He's very good at critiquing things that obviously need to be critiqued, where he's just like: "Look, this is b******t." I was good at that too, and I got, like, ten times or a hundred times the readership he did, and that was unfair, and that's a mark of how unfair and randomized and lucky the market for econ blogs is.
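The arithmetic behind that comparison is just an identity: in log terms, nominal GDP growth decomposes as

$$g_{\text{NGDP}} \;=\; \pi \;+\; g_{\text{real GDP}}$$

so an NGDP target of, say, five percent (an illustrative number) amounts to targeting inflation and real growth jointly. When real growth sits near a stable trend, that prescribes nearly the same policy path as flexible inflation targeting, which is the sense in which the two regimes converge in practice.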
[01:00:00]
Andrey Fradkin: Yeah.
Noah Smith: And how lucky I was.
Seth Benzell: Right, you'll have to wish us some luck.
Noah Smith: But he deserved to get more attention than he did on some of those things. Scott also studied under Robert Lucas during that era at Chicago, and he learned a style of argumentation that doesn't translate outside that narrow culture. It was a gunslinger style of argumentation, and you recognize people who have this. It goes all the way back to, like, Stigler. You could see Stigler doing this. The University of Chicago developed this debate style where basically you tell people, "You're full of s**t. Here's why." It's a very aggressive style that I think turns some people off outside that world. It's a hyper-defensive style, where you watch for any sign of criticism of your ideas and then aggressively attack all the ideas of whoever criticized one of yours. Robert Lucas does this, and this whole gang did this. And this was the strategy of the Chicago people: be the underdog and win some of these intellectual battles against the MIT and Harvard guys, who had a lot more people on their side and a lot more pedigree. So it was this up-and-coming bad-boy style, right? But it doesn't translate out of those debates. And so I think Scott learned to be a little more aggressive and aggrieved, or at least act a little more aggressive and aggrieved, than he needed to be to persuade some people. And I sort of got it. I was like: okay, he got this from having to hang around Bob Lucas all the time.
Andrey Fradkin: [chuckles]
Noah Smith: But, like, most people won’t know that or know what that means.
Andrey Fradkin: All right, next name. This one is popular in certain crowds. I'm curious what you think: Michael Pettis.
Noah Smith: Michael Pettis, interesting guy. He's incredibly influential, but his framework for analysis is non-predictive. You cannot take these sectoral balances theories, like, "Oh, and then consumption does this, and investment does this, and blah, blah," and make any predictions with them. I mean, people have been trying to do that since the '30s, maybe. Who was the first- oh, who's the guy who built the, like, little hydraulic economy thing?
Andrey Fradkin: Oh, yeah.
Noah Smith: Who is that guy?
Andrey Fradkin: Spicy. I don’t remember.
Noah Smith: Anyway-
Andrey Fradkin: Go back to the physiocrats-
Noah Smith: It’s, it’s that, right?
Andrey Fradkin: 1700s.
Noah Smith: It's like: I'm gonna take the economy, I'm gonna definitionally divide it into these different activities, and then I'm gonna assume these activities move autonomously on their own and are sort of primitives. I'm gonna assume my accounting definitions are primitives, and I'm gonna observe things that happen and make big pronouncements about them based on that. But it's not predictive. You've seen Pettis make some predictions, and then they go wrong, and he's like, "Ah, but it's because of this other thing." So you can't really use sectoral balances. But everyone in China, all the top economists in China advising Xi Jinping, advising the top CCP guys, are doing the same thing as he is, and all the private sector economists, at Goldman Sachs and wherever, are doing the same thing. And it's really due to the failure of structural models of international finance and growth, I suppose, the lack of explanatory power of those models, that we can't explain any of that s**t in terms of taste and technology. Nothing has any forecasting power. Like, we don't know if-
Andrey Fradkin: Well, wait, I’m gonna push back on that.
Noah Smith: Yeah.
Andrey Fradkin: Here's a very basic thing that has explanatory power: the relative price of labour in labour-intensive industries. Doesn't that have an enormous amount of explanatory power for where low-skilled manufacturing is done, for example?
Noah Smith: Yeah, I think that's true. And you can get micro models that will get at that; like, a Roy model is all right. That's got pretty good out-of-sample predictive power for stuff, right? But Heckscher-Ohlin has terrible predictive power for trade patterns, right?
[01:05:00]
Andrey Fradkin: Mm-hmm.
Noah Smith: Like, it's not very good. Sometimes you see stuff that's consistent with it, but then you see a lot of stuff that's not consistent with it, 'cause there's a lot of other stuff going on. And when those models don't really help you that much, when they're just heuristics, it opens up a rhetorical space for guys like Pettis, or guys like Jan Hatzius, who does this all day long. He does the same stuff as Pettis. All the private sector guys, all the guys working for hedge funds, are doing the same stuff as Pettis. All the guys working for investment banks are doing the same stuff as Pettis, and all the guys working for the CCP are doing the same stuff as Pettis. None of these people believe you can get a microfounded model based on taste and technology that'll tell you what the effects of these macro policies are. Nobody believes that; that's almost exclusively a Western academia and central banks type of thing. Because of that, Michael Pettis has been enormously influential while not having a model that has predictive power. But it's not like other models do have that much predictive power, and they're harder for people to understand and draw conclusions from. So I would say that, as an influential policy stance, he's beating the quote-unquote "structural models" based on notions of taste and technology. He's beating those in terms of influence, and he's not really losing to them by that much in terms of predictive power. Maybe by a tiny bit. 'Cause-
Andrey Fradkin: But, he’s losing to them in terms of coherence, which I at least value, but I understand-
Noah Smith: Okay. Oh, well, yeah, he's losing the Andrey vote. It's like-
Andrey Fradkin: N-
Noah Smith: Like, yes, he is, and people in academia will laugh at him, but, like, so what?
Andrey Fradkin: No, look, my theory is that there's a deep-seated desire to explain what's going on in the world through some nefarious action that China is taking, when the null hypothesis is just that they have a comparative advantage in manufacturing. And even if they weren't doing whatever policies they were doing, the manufacturing would not be happening in the US. It's not like the US and China were the only two places to manufacture. [chuckles] But that's just my psychoanalytic perspective on it.
Noah Smith: Got it. Yeah. No, I think you're probably right. It all comes down to: people need to feel like they know stuff. People need to feel like they understand stuff, can control stuff, can predict stuff. But that's the same reason people believe so strongly in macroeconomic models with no out-of-sample forecasting or predictive power that we can detect. Like, taste and technology ultimately boils down to "sounds legit," right? We don't have any evidence that taste and technology, microfounded in this sort of Sargent-Prescott way, has any ability to describe anything usefully. We have no indication of that. We can debate that, but anyway. But people love it-
Seth Benzell: Fair enough.
Noah Smith: Because it sounds legit, and like-
Seth Benzell: Well, and it's coherent.
Noah Smith: It's coherent.
Seth Benzell: Right, as Andrey pointed out.
Noah Smith: But then the thing is that-
Seth Benzell: Right.
Noah Smith: Pettis' stuff-
Seth Benzell: It's disciplined.
Noah Smith: Pettis' stuff sounds legit to people. It's like: oh, investment does this, consumption does that. It's coherent in the sense that the accounting relationships are definitional. Okay, accounting relationships can't predict real economic stuff, fine, but it's coherent in the sense that the accounting works. C plus I plus G, bro. The accounting works.
Seth Benzell: [chuckles]
Noah Smith: And so it sounds legit to people, and it's comprehensible to people, and at some point that gives them this feeling of, like, "Oh, I understand this thing." And I would argue that a lot of macro is a fancier version of "Oh, I understand this thing," when really, you don't know if you understand it at all.
Seth Benzell: Or maybe you play out one causal mechanism that might have small explanatory power; it explains 1% of the picture.
Noah Smith: Exactly. Exactly.
Andrey Fradkin: Yeah.
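The "accounting works" point can be made precise with the standard national-accounting identities (nothing here is specific to Pettis's own formulations):

$$Y = C + I + G + NX \quad\Longrightarrow\quad (S - I) = (G - T) + NX, \qquad S \equiv Y - T - C$$

Both statements hold by definition for any data whatsoever. That is why sectoral-balances narratives always "fit" after the fact, and also why, on their own, they cannot rule out any outcome in advance.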
Seth Benzell: Yeah. Adam Tooze.
Noah Smith: Adam Tooze did some economic history that I really love. I love a lot of his books. I love The Deluge, I love Wages of Destruction. Very good economic and military history. But at some point, he pivoted very hard to, like, self-promoting clickbait, including, like, "Wow, China will take over the world," right? And that stuff has made a lot of people go: "I guess Adam Tooze wasn't that smart," which is not necessarily the right conclusion. It may mean that Adam Tooze wanted attention. It may mean that Adam Tooze wanted some money. It may mean that Adam Tooze was being paid by a foreign state actor to disseminate certain ideas, although I would not make any such allegation. I'm just-
[01:10:00]
Seth Benzell: Fair enough.
Noah Smith: Covering the whole space of reasons why Adam Tooze might have made this pivot. I think it’s probably just attention, but -
Andrey Fradkin: Maybe he just got bored. I think boredom-
Noah Smith: Maybe he just-
Andrey Fradkin: Is an underrated
Noah Smith: Bored. And what?
Andrey Fradkin: Yeah.
Noah Smith: That's fine. His Substack is basically just Chartbook. It's "let me just paste a bunch of charts and then say the most obvious things about them that were already said in the source articles." Okay, fine. People value it.
Andrey Fradkin: [chuckles]
Noah Smith: People like it. But it doesn't have a lot of analysis, and I haven't seen Tooze give a lot of analysis. I liked him as an economic historian, or not even an economic historian, just as a historian. I liked his books-
Seth Benzell: Well-
Noah Smith: That was pretty cool stuff. I haven't read his blog in a while now. The polycrisis thing was just goofy. And so I think Adam Tooze made himself slightly more popular and less relevant with his pivot after the pandemic.
Andrey Fradkin: So we were gonna ask you about Paul Krugman, and we already-
Noah Smith: Yeah
Andrey Fradkin: Talked a little bit about-
Seth Benzell: Oh, we already got your take.
Noah Smith: Yeah, Paul Krugman.
Seth Benzell: Yeah.
Noah Smith: Paul Krugman's great. Politics-wise, Paul Krugman does not understand how much America has rejected core elements of the progressive ideology and what Democrats will have to do to deal with that. Economics-wise, he has been the most intellectually honest guy. Very rarely will I catch him claiming "I always said this" when he actually said something different, and when I do, it's only a slight difference in tone. He did warn about the possibility of inflation from Biden's stimulus, the ARP bill, right? He did talk about that. He's admitted when he got predictions wrong, which everyone does. He's just so intellectually honest, and he's still so good at explaining complex concepts seriously. He's the real deal, and he's still good, and I think the fact that people are a bit fed up with 2010s-era Boomer-lib resistance politics can obscure the fact that he's still, like, the very best writer on economics.
Andrey Fradkin: Strong endorsement. Awesome. Okay, we're almost done, we promise. The next topic is elite overproduction. [chuckles] So maybe you wanna introduce that topic first, and then we can ask you some questions about it.
Noah Smith: Right. So Peter Turchin came up with this idea of elite overproduction. He's a historian who claims that history follows these long cycles. Like all long-cycle theories, it's unprovable, but he did-
Speaker 3: Yes!
Noah Smith: Obviously, it's unprovable, right? Like, with the waves. I don't know. Anyway-
Speaker 3: It’s happened five times within one series. [chuckles] Sure.
Noah Smith: Anyway, [chuckles] yeah, he has this unprovable long-cycle theory, and it did make a really good out-of-sample prediction about the peak of unrest coming in twenty twenty. What did he know? I don't know. Anyway-
Andrey Fradkin: -huh.
Noah Smith: He came up with this idea-
Andrey Fradkin: He knows.
Noah Smith: Called elite overproduction. And he had very specific ideas about what that meant and what it didn’t mean. I ignored those ideas, stole the phrase, and used it to mean something more general that got more attention than his.
Seth Benzell: And you didn't corrupt it with a long-wave theory-
Noah Smith: No.
Seth Benzell: So you did even better.
Noah Smith: I was just like, "You know what? This phrase is good. I'm gonna credit him, and then I'm gonna have it mean something else that I just decide." And honestly, my more general definition is probably better than his much more specific one. He just loves making things specific so he can make these very tight quantitative predictions.
Andrey Fradkin: [chuckles]
Noah Smith: More power to him. I love the guy, but, but I was just like: I’m taking that. I like that phrase. Mine now.
Andrey Fradkin: So what is your Yeah, what is your general-
Seth Benzell: What does it mean to you?
Andrey Fradkin: Definition?
Noah Smith: Should’ve copyrighted it.
Andrey Fradkin: Yeah.
Noah Smith: So I basically used it to mean the revolution of rising expectations among the professional managerial class. You got a bunch of people who expected: "I'm gonna go to college and things are just gonna work out for me. I'll be upper middle class. Oh, wait, it's hard. There's competition. I have to study. I have to be smart. I have to actually know some math. I can't just go get a random sociology undergrad degree and be rewarded with some high-paying job like my parents had." So there was a lot of disappointment. And I think for a while, the productivity boom of the nineties and early two thousands, people rode that. A lot of the PMC, a lot of my social class, rode that boom, and it made it seem like you could just be a sociology major, not really do any hard work, and then get a good job and live a lifestyle similar to that of your parents. And then the Great Recession came, and things flattened out. A lot of opportunity dried up for those people, and then you had to, like, learn to code. I'm not sure that works now.
[01:15:00]
Seth Benzell: You could-- it still works to mock people. I-
Noah Smith: Yeah.
Seth Benzell: You can still say it to people.
Andrey Fradkin: All those non-technical people.
Noah Smith: Yeah. Anyway, I think that sort of abrupt downward revision of growth expectations pissed off a lot of people. I don't think it was the main cause of the social unrest that we saw in the twenty tens, but I think it was a contributor. You had a lot of people who f****d around in college, came from privileged backgrounds, and were absolutely consumed by hate for the tech bro class, who went to the same colleges, came from the same backgrounds, and made a thousand times more money. And I think you saw a lot of that within-class resentment, not between-class resentment but within-socioeconomic-background resentment. A lot of that, I think, contributed to some of the more elite leftism, the Bernie Sanders kind of stuff, or maybe some of the new antitrust movement, things like that. These were motivated, or had some popular support, by people whose parents were lawyers, doctors, businesspeople, well-to-do kind of people, and who kinda messed around in college, weren't very technical, and ended up getting perfectly fine middle-class jobs but being somewhat downwardly mobile, and also having a much stronger preference to live in expensive cities, therefore draining their money, not wanting to go out to the 'burbs like their parents did.
Seth Benzell: Right.
Noah Smith: And so, like Yeah.
Seth Benzell: Is some of the resentment that the people who end up succeeding have worse taste than me? It’s like, I like high literature and they like Marvel movies, but the Marvel movie lovers won.
Noah Smith: I think those kinds of reasons can be invented as needed. If the real reason for resentment is, like: "I should be in the same class as you. I went to the same college as you, and yet you're making so much more money, and we used to live on the same dorm floor." If that's the real reason, then you can make up ideas about taste, or repurpose ideas, as necessary to resent whoever you want to resent.
Andrey Fradkin: Well, to be clear, it's not like these people were often in the same social circles, even in college, right? So it's an interesting theory. In college, they didn't hang out with each other, but maybe they still thought they were gonna do equally well. Is that kind of the theory?
Noah Smith: I think so, yeah. I did actually go to college with some of those people. Like, I was in Garry Tan's study group. He's still a friend of mine.
Andrey Fradkin: Nice.
Noah Smith: Although I did quit Garry Tan's study group, because I thought that studying on my own would work better. So sorry, Garry. And I was right, I did well on the test, but-
Andrey Fradkin: Well, to be clear, you're still doing very well, right? I don't think you're in the resentment class. Yeah, so-
Noah Smith: No, no.
Andrey Fradkin: -
Noah Smith: No, but I haven't-
Seth Benzell: Wait, so to what extent is-
Noah Smith: Succeeded to the extent of Garry Tan.
Seth Benzell: To what extent is this about the relative position of the two groups versus the absolute? You started with sort of an absolute story, that it's harder to live a middle-class lifestyle, and now you've moved to kind of a relative story, that this subgroup did better than that subgroup.
Noah Smith: I wouldn’t say-
Seth Benzell: So are they both important?
Noah Smith: Harder to live a middle-class lifestyle is exactly what I described. I would say it's instead about the expectations of how good your life would get: people expected this glide path, and then it flattened out. That's an absolute story. Whereas the relative-
Seth Benzell: Right
Noah Smith: Story is, like: I'm not doing as well as the tech bro class. I think those are two different stories, but they're not independent at all. 'Cause if my future path leveled out and flattened out, but other people's didn't, and they stayed on the escalator, then the escalator I expected for myself evaporated for me and continued for them-
Seth Benzell: They stole my escalator!
Noah Smith: They stole my escalator.
Andrey Fradkin: Yeah.
Noah Smith: Who stole my escalator?
Andrey Fradkin: Yeah.
Noah Smith: Yeah, so. And so like-
Andrey Fradkin: That’s a great meme. [chuckles]
Noah Smith: Yeah. Anyway, I think that was a contributor to unrest, but I don't think that was the big story. I think the big story was social media, blah, blah. But throwing everybody in the same room as each other and letting them fight it out, I think that was a bad idea.
Andrey Fradkin: So what about the housing theory-
Seth Benzell: Can we just- can we lower, should we-
[01:20:00]
Andrey Fradkin: What about the housing theory of everything-
Noah Smith: Go ahead
Andrey Fradkin: Right? 'Cause I do think housing is such a major contributor to this feeling that people aren't equal.
Seth Benzell: If it was cheaper to-
Andrey Fradkin: Yeah
Seth Benzell: Live in Brooklyn, we would solve all social problems.
Andrey Fradkin: Not wrong.
Noah Smith: The housing theory of everything, it's like: cheap housing would be really good for everybody. I don't have any problem with people believing in it, but it's not a theory of everything.
Seth Benzell: Directionally correct.
Noah Smith: Directionally correct. Directionally correct. You know that Winnie-the-Pooh meme where there's plain Winnie-the-Pooh and then tuxedo Winnie-the-Pooh?
Andrey Fradkin: Yeah.
Seth Benzell: Yeah.
Noah Smith: It’s like the plain Winnie-the-Pooh is, like, exaggerated. Tuxedo Winnie-the-Pooh is directionally correct.
Andrey Fradkin: [laughing] Seth, I think you have one more question.
Seth Benzell: Yes.
Andrey Fradkin: Yeah.
Seth Benzell: Well, I guess, yeah, this is partly tied into that and partly riffing on this question of elite overproduction. To the extent that we get this social unrest from people being upset about not reaching their expectations, to what extent is managing people's expectations an economically central issue? To what extent are vibes versus real economic trends important for determining people's welfare and how they feel about the world? And how does that affect how you think about policymaking or writing?
Noah Smith: I think you really hit on one of the central questions of economics, because my advisor, Miles Kimball, spent a lot of his career thinking about this and never came up with really solid answers, I think. We have pretty good evidence that happiness, the self-reported emotion, is pretty strongly related to differences between reality and expectations. Interestingly, that's what the original-
Seth Benzell: I’ll say shocks are good
Noah Smith: It just means luck.
Andrey Fradkin: [chuckles]
Noah Smith: But, like, essentially-
Seth Benzell: Yeah
Noah Smith: If you do, if you do better-
Seth Benzell: Luck
Noah Smith: Than you thought you'd do, you're happy, and if you do worse than you thought you'd do, you're unhappy. So the best outcome would be if we could give everyone low expectations and high outcomes, if we could make everybody just delighted with how well they did.
Seth Benzell: Right.
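One toy way to write down the regularity Noah is gesturing at (a hypothetical reference-dependent specification for illustration, not Kimball's actual model):

$$h_t \;=\; u(c_t) \;+\; \eta\,\big(c_t - \mathbb{E}_{t-1}[c_t]\big), \qquad \eta > 0$$

where reported happiness $h_t$ loads partly on the outcome $c_t$ and partly on the surprise relative to prior expectations. "Low expectations, high outcomes" is just the observation that the second term is maximized by positive surprises.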
Noah Smith: I feel like this experiment has been run, and it’s called Generation X. [chuckles] And, like, I don’t know, man.
Seth Benzell: Didn’t work. Massive failure.
Noah Smith: Like, I see a lot of those people, they're billionaires now, and they're like, "I'm such a failure." You're a billionaire! "I'm never gonna amount to anything. I'm just a billionaire living in this giant mansion. Hmm."
Seth Benzell: Just a b- [chuckles] Jeff Bezos’s boat is so much bigger than mine.
Noah Smith: And, like, this is direct. I blame Nirvana. I blame Kurt Cobain for all this, right? [chuckles] I blame depress- I blame-
Seth Benzell: No one can understand their lyrics
Noah Smith: I blame depressing-ass Generation X-
Andrey Fradkin: No, no, this is a pro-grunge podcast. No slander allowed.
Noah Smith: I didn’t say I dislike grunge. I love grunge.
Seth Benzell: He blames them.
Noah Smith: And I also think it’s a weapon of mass destruction.
Seth Benzell: He respects their power.
Noah Smith: I respect their power. There are days when I just wanna listen to some old Nirvana B-sides, and then I just get so angry and bitter about the world, and I'm like, "Yeah."
Seth Benzell: Put that in a blog post.
Noah Smith: Generation X, you know what? I don't really feel sorry at all for Generation X, because I feel like their goals in life were simpler and easier. I meet Generation X guys, and their whole goal in life is, like, to have sex.
Seth Benzell: Two ladies at the same time.
Noah Smith: Yeah, like-
Seth Benzell: I saw, I saw Office Space
Noah Smith: Their whole goal, like, Generation X guys, all they have to do is, like, get laid, and then they’re done. They win.
Seth Benzell: [chuckles]
Noah Smith: Victory condition, and then, like, Zoomers don't even want that.
Seth Benzell: Yeah, Zoomers want followers, dude.
Noah Smith: Zoomers are like-
Seth Benzell: Zoomers want-
Noah Smith: Why would I want to do that when I could looksmax? Why would I-
Andrey Fradkin: [chuckles]
Noah Smith: Like, why would I do that when I could mog the moids in the club? [chuckles] There-
Seth Benzell: Right. Which means-
Noah Smith: And then Millennials just want likes on Instagram, and Zoomers, I don't even know what they want, because-
Seth Benzell: No
Noah Smith: They’re already so-
Andrey Fradkin: I don’t think they know what they want.
Seth Benzell: The Zoomers are the-
Andrey Fradkin: That’s kind of the problem
Seth Benzell: The Zoomers are the ones obsessed with social media. The Millennials are the idealists. We're actually saving the world from climate change and solving racial conflict.
Noah Smith: We’re gonna solve racism, man.
Seth Benzell: We’re gonna solve racism and global warming. We did that in 2008, right?
Noah Smith: Yeah, we did. We did.
Andrey Fradkin: That’s true.
Noah Smith: We solved it. [chuckles]
Andrey Fradkin: We elected Barack Obama, and that was the end of history. [chuckles]
Noah Smith: Yeah, that was it. We did it, brother.
Seth Benzell: Yeah, the sea stopped rising. I remember that was in the speech.
Noah Smith: I don’t know. All I can promise the world is that it’s always gonna get weirder and weirder.
Andrey Fradkin: Then-
Noah Smith: But I’m-
Seth Benzell: So we need to make people who desire weirdness. That’s the economic solution.
Noah Smith: Yeah, so that's good for me, because I always loved to see the weirdest s**t possible, right? I would always go to the weirdest underground shows in Japan, or listen to the weirdest music. I just love seeing that weirdness, and the universe continues to deliver it to me in copious amounts. And so now I'm interested to see what AI does with this planet, because, honestly, humanity was kind of hitting a wall. I don't know. I wrote this in a recent post, which was reprinted by the Free Press, guardians of our freedom of information.
[01:25:00]
Andrey Fradkin: Well, I-
Noah Smith: And so the Free Press reprinted it, and they were like-
Andrey Fradkin: Behind a paywall, so it can't be free. I'm confused by the Free Press. It's the-
Noah Smith: The- yes, conditionally free press. [chuckles]
Andrey Fradkin: Yes.
Noah Smith: The marginal-cost-zero press. But in this thing, I was like: look, obviously industrialization took fertility to below replacement levels, and then social media has taken fertility to, like, immediate-extinction levels, to "goodbye humanity, this is the last generation" kind of levels, right? Plus, ideas were getting harder to find. Like, okay, Bloom is right, and Van Reenen and Webb, and who else was on that paper? Those guys.
Seth Benzell: There’s one more, but those were the good ones.
Noah Smith: There's one more! Wait, Bloom, Van Reenen, Webb, and there's one other person, and I apologize to whoever else is on that paper for not saying your name. But anyway-
Seth Benzell: They got a zillion citations, dude.
Noah Smith: That paper was right. We were hitting the wall. All the smartest people had already been assigned to research-
Andrey Fradkin: Chad Jones. Chad Jones. How could we forget?
Seth Benzell: Chad Jones, Chad Jones.
Noah Smith: Our friend of the show.
Andrey Fradkin: Friend of the show.
Noah Smith: The Chad himself.
Andrey Fradkin: The Chad of growth theory.
Seth Benzell: Yes, exactly.
Noah Smith: The Chad. Dream guest of the show.
Seth Benzell: You can’t say the Jones because there’s so many Joneses. [chuckles]
Noah Smith: Oh, you can’t. Although the Chad could also be Chad Syverson, Chad of productivity measurement.
Andrey Fradkin: Ooh, that’s true.
Noah Smith: They're both the Chad. All right. But anyway, I guess the point is, I don't remember who's on that paper, but ideas were getting harder to find. They were right, blah, blah. We were hiring mid, marginal researchers to just randomly try chemicals in a vat, and that was our research, while the best brains were already working on the whatever, all day long. And yes, we were running out of runway on this technological civilization. We were really just gonna argue resistance lib versus MAGA for the rest of our lives on-
Seth Benzell: God forbid
Noah Smith: Degenerating, shitty mid social media for the rest of-
Seth Benzell: In that flat-
Noah Smith: Not just our lives, but all of humanity. Like, that was the end.
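For reference, the Bloom, Jones, Van Reenen, and Webb framework the conversation keeps circling back to reduces, in simplified form, to a single identity:

$$\underbrace{\frac{\dot{A}_t}{A_t}}_{\text{TFP growth}} \;=\; \underbrace{\alpha_t}_{\text{research productivity}} \;\times\; \underbrace{S_t}_{\text{effective researchers}}$$

Measured growth has stayed roughly constant while the number of researchers has grown many-fold, so research productivity $\alpha_t$ must have fallen sharply; that is the "ideas are getting harder to find" wall being described here.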
Seth Benzell: The flat part of the Solow growth curve.
Noah Smith: Yes, we hit the-
Seth Benzell: That’s, that’s not where you wanna be.
Noah Smith: We hit the stagnation point. You could see the end of humanity coming down the pike, and now we blew it all up by making a God machine. We were like, "Okay, new thing." And you know what? This has happened before, because in the agricultural age, you could sort of see humanity having hit this limit. We hit the Malthusian ceiling-
Seth Benzell: Yeah
Noah Smith: Again and again. We had the Black Plague. We had overpopulation. We deforested the entire goddamn Middle East.
Seth Benzell: We banged our head against that ceiling three or four times.
Noah Smith: Pardon?
Seth Benzell: We banged our head against the Malthusian ceiling three or four times.
Noah Smith: Three or four times! And then, like, our whole world was running out of wood. We were just running out of trees to chop down. We had the Columbian Exchange, blah, blah. There was gonna be another collapse, just like there had been under the Mongols. And then we were like, "All right, we're busting out of this s**t. Steam power!"
Seth Benzell: Yeah.
Noah Smith: "And, like, science." And then we got out of that, and weird s**t happened, and you got Nazis and communists and all kinds of crazy stuff. Not to mention a lot of really bad sitcoms in the '80s. But we got all of that stuff, and despite all that, I would say on balance we busted out, and it was pretty good, and I would rather have lived in the industrial age than in the age before. And so maybe AI will kill us. The Industrial Revolution could have killed us too: if we had launched all the nukes in, like, 1983 or whenever, we would've died-
Andrey Fradkin: Yeah
Noah Smith: And then our civilization would've fallen. Maybe AI will be the thing that makes our civilization fall, or maybe we'll be able to use AI to solve the problems we were degenerating under, like the end of science, the end of fertility, and the absolute shittiness of social media. Maybe AI will just solve all this stuff for us.
Andrey Fradkin: Well-
Seth Benzell: Whether or not it just solves it, it definitely gives us a fighter's chance.
Noah Smith: That’s what I mean.
Seth Benzell: I think that’s, -
Noah Smith: We rolled the dice on a big new thing. We rolled the dice again, and I'm glad we did.
Andrey Fradkin: All right, well-
Noah Smith: And maybe we all die, but I'm glad we tried.
Andrey Fradkin: AI, the new hope, coming to economies near you. On that note, thank you so much for being our guest, Noah. This was an amazing conversation.
[01:30:00]
Seth Benzell: Thank you so much.
Noah Smith: Thank you. It’s been a pleasure.
Seth Benzell: Really appreciate your time. And listeners at home, keep your posteriors justified.