Astral Codex Ten Podcast

Jeremiah
Apr 4, 2021 • 15min

Ambidexterity And Cognitive Closure

https://astralcodexten.substack.com/p/ambidexterity-and-cognitive-closure

Back in a more superstitious time, people believed left-handers were in league with the Devil. Now, in this age of Science, we realize that was unfair. Yes, left-handers are statistically more likely to be in league with the Devil. But so are right-handers! It's only the ambidextrous who are truly pure!

At least this is the conclusion I take from Lyle & Grillo (2020), Why Are Consistently-Handed Individuals More Authoritarian: The Role Of Need For Cognitive Closure. It discusses studies finding that consistently-handed people (ie people who are not ambidextrous) are more likely to support authoritarian governments, demonstrate prejudice against "immigrants, homosexuals, Muslims, Mexicans, atheists, and liberals", and support violations of the Geneva Conventions in hypothetical scenarios. The authors link this to a construct called "need for cognitive closure", ie being very sure you are right and unwilling to consider alternate perspectives. They argue that something about the interaction of brain hemispheres regulates cognitive closure, and that ambidextrous people, with their weak hemispheric dominance, get less of it. They study 235 undergraduates and find results that generally confirm this hypothesis: their ambidextrous subjects support less authoritarian and racist beliefs, and this is partly…
Apr 3, 2021 • 35min

[Classic] The Parable Of The Talents

https://slatestarcodex.com/2015/01/31/the-parable-of-the-talents/

[Content note: scrupulosity and self-esteem triggers, IQ, brief discussion of weight and dieting. Not good for growth mindset.]

I. I sometimes blog about research into IQ and human intelligence. I think most readers of this blog already know IQ is 50% to 80% heritable, and that it's so important for intellectual pursuits that eminent scientists in some fields have average IQs around 150 to 160. Since IQ this high only appears in 1/10,000 people or so, it beggars coincidence to believe this represents anything but a very strong filter for IQ (or something correlated with it) in reaching that level. If you saw a group of dozens of people who were 7'0 tall on average, you'd assume it was a basketball team or some other group selected for height, not a bunch of botanists who were all very tall by coincidence.

A lot of people find this pretty depressing. Some worry that taking it seriously might damage the "growth mindset" people need to fully actualize their potential. This is important and I want to discuss it eventually, but not now. What I want to discuss now is people who feel personally depressed. For example, a comment from last week: I'm sorry to leave a self-absorbed comment, but reading this really upset me and I just need to get this off my chest…How is a person supposed to stay sane in a culture that prizes intelligence above everything else – especially if, as Scott suggests, Human Intelligence Really Is the Key to the Future – when they themselves are not particularly intelligent and, apparently, have no potential to ever become intelligent? Right now I basically feel like pond scum.
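The "1/10,000" figure can be sanity-checked with the standard normal model of IQ (mean 100, SD 15 — an assumption of the conventional scale, not a figure quoted from the post):

```python
import math

def iq_tail(iq, mean=100.0, sd=15.0):
    """P(IQ >= iq) under a normal distribution with the conventional IQ scaling."""
    z = (iq - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# Midpoint of the 150-160 range quoted for eminent scientists
p = iq_tail(155)
print(f"P(IQ >= 155) = {p:.2e}, i.e. about 1 in {round(1 / p):,}")
```

This comes out to roughly 1 in 8,000, in the same ballpark as the post's "1/10,000 people or so".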
Apr 1, 2021 • 18min

Oh, The Places You'll Go When Trying To Figure Out The Right Dose Of Escitalopram

https://astralcodexten.substack.com/p/oh-the-places-youll-go-when-trying

I. What is the right dose of Lexapro (escitalopram)? The official FDA package insert recommends a usual dose of 10 mg, and a maximum safe dose of 20 mg. It says studies fail to show 20 mg works any better than 10, but you can use 20 if you really want to. But Jakubovski et al's Dose-Response Relationship Of Selective Serotonin Reuptake Inhibitors tries to figure out which doses of which antidepressants are equivalent to each other, and comes up with the following suggestion (ignore the graph, read the caption): 16.7 mg Lexapro equals 20 mg of paroxetine (Paxil) or fluoxetine (Prozac). But the maximum approved doses of those medications are 60 mg and 80 mg, respectively. If we convert these to mg imipramine equivalents like the study above uses, Prozac maxes out at 400, Paxil at 300, and Lexapro at 120. So Lexapro has a very low maximum dose compared to other similar antidepressants.

Why? Because Lexapro (escitalopram) is a derivative of the older drug Celexa (citalopram). Sometime around 2011, the FDA freaked out that high doses of citalopram might cause a deadly heart condition called torsade de pointes, and lowered the maximum dose to prevent this. Since then it's been pretty conclusively shown that the FDA was mostly wrong about this and kind of bungled the whole process. But they forgot to ever unbungle it, so citalopram still has a lower maximum dose than every other antidepressant. When escitalopram was invented, it inherited its parent chemical's unusually-low maximum dose, and remains at that level today [edit: I got the timing messed up, see here]
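The imipramine-equivalent arithmetic in the excerpt can be reconstructed as follows. The per-mg conversion factors below are back-calculated from the numbers quoted (20 mg fluoxetine or paroxetine ≈ 16.7 mg escitalopram, Prozac max → 400, Paxil → 300, Lexapro → 120); they are an assumption for illustration, not figures quoted directly from Jakubovski et al:

```python
# mg-imipramine per mg of drug, implied by the excerpt's numbers (assumed)
factors = {
    "fluoxetine (Prozac)": 5.0,
    "paroxetine (Paxil)": 5.0,
    "escitalopram (Lexapro)": 6.0,  # ~ 5 * (20 / 16.7)
}
max_doses_mg = {
    "fluoxetine (Prozac)": 80,
    "paroxetine (Paxil)": 60,
    "escitalopram (Lexapro)": 20,
}

for drug, dose in max_doses_mg.items():
    equiv = dose * factors[drug]
    print(f"{drug}: {dose} mg max -> {equiv:.0f} imipramine-equivalents")
```

Running this reproduces the 400 / 300 / 120 comparison, making it easy to see just how low Lexapro's ceiling sits relative to its peers.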
Mar 26, 2021 • 15min

Toward A Bayesian Theory Of Willpower

https://astralcodexten.substack.com/p/towards-a-bayesian-theory-of-willpower

I. What is willpower? Five years ago, I reviewed Baumeister and Tierney's book on the subject. They tentatively concluded it's a way of rationing brain glucose. But their key results have failed to replicate, and people who know more about glucose physiology say it makes no theoretical sense. Robert Kurzban, one of the most on-point critics of the glucose theory, gives his own model of willpower: it's a way of minimizing opportunity costs. But how come my brain is convinced that playing Civilization for ten hours has no opportunity cost, but spending five seconds putting away dishes has such immense opportunity costs that it will probably leave me permanently destitute? I can't find any correlation between the subjective phenomenon of willpower or effort-needingness and real opportunity costs at all.
Mar 26, 2021 • 13min

More Antifragile, Diversity Libertarianism, And Corporate Censorship

https://astralcodexten.substack.com/p/more-antifragile-diversity-libertarianism

In yesterday's review of Antifragile, I tried to stick to something close to Taleb's own words. But here's how I eventually found myself understanding an important kind of antifragility. I feel bad about this, because Taleb hates bell curves and tells people to stop using them as examples, but sorry, this is what I've got.

Suppose that Distribution 1 represents nuclear plants. It has low variance, so all the plants are pretty similar. Plant A is slightly older and less fancy than Plant B, but it still works about the same. Now we move to Distribution 2. It has high variance. Plant B is the best nuclear plant in the world. It uses revolutionary new technology to squeeze extra power out of each gram of uranium, its staff are carefully-trained experts, and it's won Power Plant Magazine's Reactor Of The Year award five times in a row. Plant A suffers a meltdown after two days, killing everybody. If you live in a region with lots of nuclear plants, you'd prefer they be on the first distribution, the low-variance one. Having some great nuclear plants is nice, but having any terrible ones means catastrophe. Much better for all nuclear plants to be mediocre.
Mar 25, 2021 • 35min

Book Review: Antifragile

https://astralcodexten.substack.com/p/book-review-antifragile

Nassim Taleb summarizes the thesis of Antifragile as: Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty [and antifragility is what gains from it]. The glass on the table is short volatility. The glass is fragile: the less you disrupt it, the better it does. A rock is "robust" - neither fragile nor antifragile - it will do about equally well whether you disrupt it or not. What about antifragile? Taleb's first (and cutest) example is the Hydra, which grows more and more heads the more a hero tries to harm it. What else is like this?

Buying options is antifragile. Suppose oil is currently worth $10, and you pay $1 for an option to buy it at $10 next year. If there's a small amount of variance (oil can go up or down 20%), it's kind of a wash. Worst-case scenario, oil goes down 20% to $8, you don't buy it, and you've lost $1 buying the option. Best-case scenario, oil goes up 20% to $12, you exercise your option to buy for $10, you sell it for $12, and you've made a $1 profit - $2 from selling the oil, minus $1 from buying the option. Overall you expect to break even. But if there's large uncertainty - the price of oil can go up or down 1000% - then it's a great deal. Worst-case scenario, oil goes down to negative $90 and you don't buy it, so you still just lost $1. Best case scenario, oil goes up to $110, you exercise your option to buy for $10, and you make $99 ($100 profit minus $1 for the option). So the oil option is antifragile - the more the price varies, the better it will do. The more chaotic things get, the more uncertain and unpredictable the world is, the more oil options start looking like a good deal.
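The payoff arithmetic in the oil example can be sketched in a few lines (a minimal model of a call option's net profit, using the excerpt's own numbers; `option_payoff` is a name chosen here for illustration):

```python
def option_payoff(spot_at_expiry, strike=10.0, premium=1.0):
    """Net profit from a call option: exercise only if the spot beats the strike."""
    return max(spot_at_expiry - strike, 0.0) - premium

# Low variance: oil moves +/-20% around $10
print(option_payoff(8))    # down 20%: just lose the $1 premium -> -1.0
print(option_payoff(12))   # up 20%: $2 gain minus $1 premium -> 1.0

# High variance: oil moves +/-1000%
print(option_payoff(-90))  # crash: downside is still capped at the premium -> -1.0
print(option_payoff(110))  # spike: $100 gain minus $1 premium -> 99.0
```

The asymmetry is the whole point: the downside is capped at the premium no matter how wild the swing, while the upside grows with the swing - which is exactly what "gains from volatility" means here.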
Mar 24, 2021 • 7min

Adding My Data Point To The Discussion Of Substack Advances

https://astralcodexten.substack.com/p/adding-my-data-point-to-the-discussion

[warning: boring inside baseball post] From The Hypothesis: Here's Why Substack's Scam Worked So Well. It summarizes a common Twitter argument that Substack is doing something sinister by offering some writers big advances. The sinister thing differs depending on who's making the argument - in this case, it's making people think they could organically make lots of money on Substack (because they see other writers doing the same) when really the big money comes from Substack paying a pre-selected group money directly. Other people have said it's Substack exercising editorial policy to attract a certain type of person to their site, usually coupled with the theory that the people they choose are problematic. I'm one of the writers Substack paid, which gives me some extra information on how this went down. Here's a stylized interpretation of the email conversation that got it started:

SUBSTACK: You should join our new blogging thing!
ME: No.
SUBSTACK: It's really good!
Mar 20, 2021 • 49min

Book Review: The New Sultan

Explore Erdogan's rise from democracy to dictatorship, his educational background, the soft coup in Turkey, the rise of the AK Party, and the erosion of Turkish democracy.
Mar 18, 2021 • 17min

Sleep Is The Mate Of Death

https://astralcodexten.substack.com/p/sleep-is-the-mate-of-death

Melancholic depressive patients report that they feel worst in the morning, just after waking up, get better as the day goes on, and feel least affected in the evening just before bed. Continue the trend, and you might wonder how depressed people would feel after spending 24 or 36 or 48 hours awake. Some scientists made them stay awake to check, and the answer is: they feel great! About 70% of cases of treatment-resistant depression go away completely if the patient stays awake long enough. This would be a great depression cure, except that the depression comes back as soon as they go to sleep. There's a lot of great work going on to figure out how to make cure-by-sleep-deprivation last longer - see the Chronotherapeutics Manual for more details. But forget the practical side of this for now. It looks like sleep is somehow renewing these people's depressions. As if depression is caused by some injury during sleep, heals part of the way during an average day (or all the way during an extra-long day of sleep deprivation) and then the same injury gets re-inflicted during sleep the next night.
Mar 15, 2021 • 16min

Mantic Monday: Mantic Matt Y

https://astralcodexten.substack.com/p/mantic-monday-mantic-matt-y

The current interest in forecasting grew out of Iraq-War-era exasperation with the pundit class. Pundits were constantly saying stuff, like "Saddam definitely has WMDs, trust me, I'm an expert", then getting proven wrong, then continuing to get treated as authorities and thought leaders. Occasionally they would apologize, but they'd be back to telling us what we Had To Believe the next week. You don't want a rule that if a pundit ever gets anything wrong, we stop trusting them forever. Warren Buffett gets some things wrong, Zeynep Tufekci gets some things wrong, even Nostradamus would have gotten some things wrong if he'd said anything clearly enough to pin down what he meant. The best we can hope for is people with a good win-loss record. But how do you measure win-loss record? Lots of people worked on this (especially Philip Tetlock) and we ended up with the kind of probabilistic predictions a lot of people use now. But not pundits. We never did get the world where pundits, bloggers, and other commentators post predictions clearly in a way where they can check up on them later.
