Astral Codex Ten Podcast
Jeremiah
The official audio version of Astral Codex Ten, with an archive of posts from Slate Star Codex. It's just me reading Scott Alexander's blog posts.
Episodes
Feb 23, 2022 • 31min
Links For February
https://astralcodexten.substack.com/p/links-for-february?utm_source=url [Remember, I haven't independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can't guarantee I will have caught them all by the time you read this.] 1: The newest studies don't find evidence that extracurriculars like chess, second languages, playing an instrument, etc can improve in-school learning. 2: Did you know: Spanish people consider it good luck to eat twelve grapes at midnight on New Year's, one at each chime of the clock tower in Madrid. This has caused enough choking deaths that doctors started a petition to make the clock tower chime more slowly. 3: At long last, scientists have discovered a millipede that really does have (more than) a thousand legs, Eumillipes persephone, which lives tens of meters underground in Australia and in your nightmares. Recent progress in this area inspired me to Fermi-estimate a millipede version of Moore's Law, which suggests we should be up to megapedes by 2140 and gigapedes by 2300.
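The millipede Moore's Law can be back-of-enveloped. This is a tongue-in-cheek sketch, not from the post: the 1,306-leg record count and the implied doubling time are assumptions fitted to the "megapedes by 2140" claim.

```python
import math

legs_2021 = 1306                  # Eumillipes persephone's record leg count
target_mega, target_giga = 1e6, 1e9

# Pick the doubling time so that a million legs arrives around 2140:
doublings_to_mega = math.log2(target_mega / legs_2021)   # ~9.6 doublings
doubling_years = (2140 - 2021) / doublings_to_mega       # ~12.4 years each

# Extrapolate the same growth rate out to a billion legs:
year_giga = 2021 + doubling_years * math.log2(target_giga / legs_2021)
print(round(year_giga))   # ~2264, roughly consistent with "gigapedes by 2300"
```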
Feb 22, 2022 • 18min
Play Money And Reputation Systems
https://astralcodexten.substack.com/p/play-money-and-reputation-systems?utm_source=url For now, US-based prediction markets can't use real money without clearing near-impossible regulatory hurdles. So smaller and more innovative projects will have to stick with some kind of play money or reputation-based system. I used to be really skeptical here, but Metaculus and Manifold have softened my stance. So let's look closer at how and whether these kinds of systems work. Any play money or reputation system has to confront two big design decisions: Should you reward absolute accuracy, relative accuracy, or some combination of both? Should your scoring be zero-sum, positive-sum, or negative-sum? Relative Vs. Absolute Accuracy As far as I know, nobody suggests rewarding only absolute accuracy; the debate is between relative accuracy and some combination of both. Why? If you rewarded only absolute accuracy, it would be trivially easy to make money predicting 99.999% on "will the sun rise tomorrow" style questions.
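The sun-rise problem falls out of any purely absolute scoring rule. A minimal illustration using the standard Brier score (my example, not the post's):

```python
def brier_score(forecast, outcome):
    """Standard Brier score: lower is better; outcome is 1 if the event
    happened, else 0. A purely 'absolute accuracy' reward."""
    return (forecast - outcome) ** 2

# "Will the sun rise tomorrow?" -- predict 0.99999 and collect a
# near-perfect score with zero forecasting skill:
easy = brier_score(0.99999, 1)

# A genuinely hard 60/40 question scores far worse, even when the
# forecaster was well-calibrated and the likelier outcome happened:
hard = brier_score(0.6, 1)

print(easy, hard)  # ~1e-10 vs 0.16
```

Rewarding relative accuracy (scoring each forecast against the crowd's forecast on the same question) removes the free points, since everyone predicts 99.999% on sure things.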
Feb 19, 2022 • 2min
Austin Meetup Next Sunday
https://astralcodexten.substack.com/p/austin-meetup-next-sunday?utm_source=url I'll be in Austin on Sunday, 2/27, and the meetup group there has kindly agreed to host me and anyone else who wants to show up. We'll be at RichesArt (an art gallery with an outdoor space) at 2511 E 6th St Unit A from noon to 3. The organizer is sbarta@gmail.com; you can contact him if you have any questions. As per usual procedure, everyone is invited. Please feel free to come even if you feel awkward about it, even if you're not "the typical ACX reader", even if you're worried people won't like you, etc. You may (but don't have to) RSVP here.
Feb 18, 2022 • 12min
The Gods Only Have Power Because We Believe In Them
https://astralcodexten.substack.com/p/the-gods-only-have-power-because?utm_source=url [with apologies to Terry Pratchett and TVTropes] "Is it true," asked the student, "that the gods only have power because we believe in them?" "Yes," said the sage. "Then why not appear openly? How many more people would believe in the Thunderer if, upon first gaining enough worshipers to cast lightning at all, he struck all of the worst criminals and tyrants?" "Because," said the sage, "the gods only gain power through belief, not knowledge. You know there are trees and clouds; are they thereby gods? Just as lightning requires close proximity of positive and negative charge, so divinity requires close proximity of belief and doubt. The closer your probability estimate of a god's existence is to 50%, the more power they gain from you. Complete atheism and complete piety alike are useless to them."
Feb 18, 2022 • 1h 16min
Book Review: Sadly, Porn
https://astralcodexten.substack.com/p/book-review-sadly-porn I. Freshman English class says all books need a conflict. Man vs. Man, Man vs. Self, whatever. The conflict in Sadly, Porn is Author vs. Reader. The author - the pseudonymous "Edward Teach, MD" - is a spectacular writer. Your exact assessment of his skill will depend on where you draw the line between writing ability and other virtues - but where he's good, he's amazing. Nobody else takes you for quite the same kind of ride. He's also impressively erudite, drawing on the Greek and Latin classics, the Bible, psychoanalytic literature, and all of modern movies and pop culture. Sometimes you read the scholars of two hundred years ago and think "they just don't make those kinds of guys anymore". They do, and Teach is one of them. If you read his old blog, The Last Psychiatrist, you have even more reasons to appreciate him. His expertise in decoding scientific studies and in psychopharmacology helped me a lot as a med student and resident. His political and social commentary was delightfully vicious, but also seemed genuinely aimed at helping his readers become better people. My point is: the author is a multitalented person who I both respect and want to respect. This sets up the conflict.
Feb 15, 2022 • 21min
Mantic Monday: Ukraine Cube Manifold
https://astralcodexten.substack.com/p/mantic-monday-ukraine-cube-manifold?r=fm577 Ukraine Thanks to Clay Graubard for doing my work for me: These run from about 48% to 60%, but I think the differences are justified by the slightly different wordings of the question and definitions of "invasion". You see a big jump last Friday when the US government increased the urgency of their own warnings. I ignored this on Friday because I couldn't figure out what their evidence was, but it looks like the smart money updated a lot on it. A few smaller markets that Clay didn't include: Manifold is only at 36% despite several dozen traders. I think they're just wrong - but I'm not going to use any more of my limited supply of play money to correct it, thus fully explaining the wrongness. Futuur is at 47%, but also thinks there's an 18% chance Russia invades Lithuania, so I'm going to count this as not really mature. Insight Prediction, a very new site I've never seen before, claims to have $93,000 invested and a probability of 22%, which is utterly bizarre; I'm too suspicious and confused to invest, and maybe everyone else is too. (PredictIt, Polymarket, and Kalshi all avoid this question. I think PredictIt has a regulatory agreement that limits them to politics. Polymarket and Kalshi might just not be interested, or they might be too PR-sensitive to want to look like they're speculating on wars where thousands of people could die.) What happens afterwards? Clay beats me again: For context:
Feb 13, 2022 • 24min
Highlights From The Comments On Motivated Reasoning And Reinforcement Learning
https://astralcodexten.substack.com/p/highlights-from-the-comments-on-motivated I. Comments From People Who Actually Know What They're Talking About Gabriel writes: The brain trains on magnitude and acts on sign. That is to say, there are two different kinds of "module" that are relevant to this problem as you described, but they're not RL and other; they're both other. The learning parts are not precisely speaking reinforcement learning, at least not by the algorithm you described. They're learning the whole map of value, like a topographic map. Then the acting parts find themselves on the map and figure out which way leads upward toward better outcomes. More precisely then: The brain learns to predict value and acts on the gradient of predicted value. The learning parts are trying to find both opportunities and threats, but not unimportant mundane static facts. This is why, for example, people are very good at remembering and obsessing over intensely negative events that happened to them -- which they would not be able to do in the RL model the post describes! We're also OK at remembering intensely positive events that happened to us. But ordinary observations of no particular value mostly make no lasting impression. You could test this by a series of 3 experiments, in each of which you have a screen flash several random emoji on screen, and each time a specific emoji is shown to the subject, you either (A) penalize the subject such as with a shock, or (B) reward the subject such as with sweet liquid when they're thirsty, or (C) give the subject a stimulus that has no significant magnitude, whether positive or negative, such as changing the pitch of a quiet ongoing buzz that they were not told was relevant. I'd expect subjects in both conditions A and B to reliably identify the key emoji, whereas I'd expect quite a few subjects in condition C to miss it. 
By learning associations with a degree of value, whether positive or negative, it's possible to then act on the gradient in pursuit of whatever available option has highest value. This works reliably and means we can not only avoid hungry lions and seek nice ripe bananas, but we also do
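Gabriel's "train on magnitude, act on sign" idea can be sketched in code. This is my own toy construction, not anything from the comment: a delta-rule learner that builds a signed value map (so intensely negative events are remembered just as strongly as positive ones), paired with an actor that simply takes the highest-valued available option.

```python
def learn_value_map(experiences, lr=0.5):
    """experiences: iterable of (stimulus, reward) pairs.
    Returns a dict mapping stimulus -> predicted value (signed)."""
    values = {}
    for stimulus, reward in experiences:
        old = values.get(stimulus, 0.0)
        values[stimulus] = old + lr * (reward - old)  # simple delta-rule update
    return values

def act(values, options):
    """Act on the gradient of predicted value: pick the best available option."""
    return max(options, key=lambda s: values.get(s, 0.0))

# Threats and opportunities both leave strong traces; neutral static doesn't:
v = learn_value_map([("lion", -10), ("banana", 3), ("static", 0)] * 4)
print(v)  # lion strongly negative, banana positive, static near zero
print(act(v, ["lion", "static", "banana"]))  # "banana"
print(act(v, ["lion", "static"]))            # "static" -- avoids the threat
```

Note the contrast with reward-only reinforcement: because the map stores magnitude with sign, the agent can still choose sensibly among all-bad options.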
Feb 11, 2022 • 1h 28min
ACX Grants ++: The Second Half
https://astralcodexten.substack.com/p/acx-grants-the-second-half This is the closing part of ACX Grants. Projects that I couldn't fully fund myself were invited to submit a brief description so I could at least give them free advertising here. You can look them over and decide if any seem worth donating your money, time, or some other resource to. I've removed obvious trolls, a few for-profit businesses without charitable value who tried to sneak in under the radar, and a few that violated my sensibilities for one or another reason. I have not removed projects just because they're terrible, useless, or definitely won't work. My listing here isn't necessarily an endorsement; caveat lector. Still, some of them are good projects and deserve more attention than I was able to give them. Many applicants said they'd hang around the comments section here, so if you have any questions, ask! (bolded titles are my summaries and some of them might not be accurate or endorsed by the applicant) You can find the first 66 of these here.
Feb 10, 2022 • 51min
So You Want To Run A Microgrants Program
https://astralcodexten.substack.com/p/so-you-want-to-run-a-microgrants I. Medical training is a wild ride. You do four years of undergrad in some bio subject, ace your MCATs, think you're pretty hot stuff. Then you do your med school preclinicals, study umpteen hours a day, ace your shelf exams, and it seems like you're pretty much there. Then you start your clinical rotations, get a real patient in front of you, and you realize - oh god, I know absolutely nothing about medicine. This is also how I felt about running a grants program. I support effective altruism, a vast worldwide movement focused on trying to pick good charities. Sometimes I go to their conferences, where they give lectures about how to pick good charities. Or I read their online forum, where people write posts about how to pick good charities. I've been to effective altruist meetups, where we all come together and talk about good charity picking. So I felt like, maybe, I don't know, I probably knew some stuff about how to pick good charities. And then I solicited grant proposals, and I got stuff like this: A. $60K to run simulations checking if some chemicals were promising antibiotics. B. $60K for a professor to study the factors influencing cross-cultural gender norms. C. $50K to put climate-related measures on the ballot in a bunch of states. D. $30K to research a solution for African Swine Fever and pitch it to Uganda. E. $40K to replicate psych studies and improve incentives in social science. Which of these is the most important?
Feb 9, 2022 • 17min
Heuristics That Almost Always Work
https://astralcodexten.substack.com/p/heuristics-that-almost-always-work The Security Guard He works in a very boring building. It basically never gets robbed. He sits in his security guard booth doing the crossword. Every so often, there's a noise, and he checks to see if it's robbers, or just the wind. It's the wind. It is always the wind. It's never robbers. Nobody wants to rob the Pillow Mart in Topeka, Ohio. If a building on average gets robbed once every decade or two, he might go his entire career without ever encountering a real robber. At some point, he develops a useful heuristic: if he hears a noise, he might as well ignore it and keep on crossing words: it's just the wind, bro. This heuristic is right 99.9% of the time, which is pretty good as heuristics go. It saves him a lot of trouble.
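The guard's base-rate trap is easy to simulate. A quick sketch (my numbers are assumptions, not the post's): a heuristic that always answers "it's the wind" scores near-perfect accuracy while catching zero robbers.

```python
import random

random.seed(0)
DAYS = 365 * 20             # one noise per day over a twenty-year career
P_ROBBER = 1 / (365 * 15)   # roughly one robbery attempt every fifteen years

events = ["robber" if random.random() < P_ROBBER else "wind"
          for _ in range(DAYS)]

# The guard's heuristic: ignore every noise, it's always the wind.
guesses = ["wind"] * len(events)

accuracy = sum(g == e for g, e in zip(guesses, events)) / len(events)
robbers_caught = sum(g == "robber" == e for g, e in zip(guesses, events))

print(f"accuracy: {accuracy:.4f}")       # ~0.999
print(f"robbers caught: {robbers_caught}")  # 0
```

The point the post builds on: the heuristic's accuracy measures almost nothing, because the rare cases it misses are the only cases that matter.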