

The Nonlinear Library
The Nonlinear Fund
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
Episodes
Mentioned books

Mar 7, 2024 • 27min
EA - Research summary: farmed cricket welfare by abrahamrowe
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Research summary: farmed cricket welfare, published by abrahamrowe on March 7, 2024 on The Effective Altruism Forum.
This post is a short summary of Farmed Cricket (Acheta domesticus, Gryllus assimilis, and Gryllodes sigillatus; Orthoptera) Welfare Considerations: Recommendations for Improving Global Practice, a peer-reviewed, open access publication on cricket welfare in the Journal of Insects as Food and Feed under a CC BY 4.0 license. The paper and supplemental information can be accessed
here. The original paper was written by Elizabeth Rowe, Karen Robles López, Kristin Robinson, Kaitlin Baudier, and Meghan Barrett; the research conducted in the paper was funded by Rethink Priorities as part of our research agenda on understanding the welfare of insects on farms.
This post was written by Abraham Rowe (no relation to Elizabeth Rowe) and reviewed for accuracy by Meghan Barrett. All information is derived from the Elizabeth Rowe et al. (2024) publication, and some text from the original publication is directly adapted for this summary.
Summary
As of 2020, around 370 to 420 billion crickets and grasshoppers were farmed annually for food and feed, though today the number may be much higher.
Rowe et al. (2024) is the first publication to consider species-specific welfare concerns for several species of crickets on industrialized insect farms.
The authors identify 15 current and 5 future welfare concerns, and make recommendations for reducing the harms from these concerns. These concerns include:
Stocking density
High stocking densities can increase the rates of aggression, cannibalism, and behavioral repression among individuals on cricket farms.
Disease
Diseases are relatively common on cricket farms. Common diseases, such as Acheta domesticus densovirus, can cause up to 100% cricket mortality.
Slaughter
Common slaughter methods for crickets on farms include freezing in air, blanching/boiling, and convection baking. Little is known about the relative welfare costs of these methods, or about the best ways for a producer to implement a given method.
Future concerns that haven't yet been realized on farms include:
Novel feed substrates
Farmers have explored potentially giving crickets novel feeds, including food waste. This might be nutritionally inadequate or introduce diseases or other issues onto farms.
Selective breeding and genetic modification
In vertebrate animals, selective breeding has caused a large number of welfare issues. The same might be expected to become true for crickets.
Background information
Cricket farming
Insect farming, including of crickets, has been presented as a more sustainable approach to meet the protein demand of a growing human population. While wild-caught orthopterans (crickets and grasshoppers) are a traditional protein source around the world, modern cricket farming aims to industrialize the rearing and slaughter of crickets as a food source. As of 2020, 370-420 billion orthopterans were slaughtered or sold live, with crickets being the most common.
Welfare framework
The Five Domains model of welfare, which has been promoted for invertebrates, evaluates animal welfare by looking at the nutrition, environment, physical health, behavior, and mental states of the animals being evaluated. The authors use this model for evaluating cricket farming and potential improvements that could be made on farms for animal welfare.
Cricket biology
Three of the most common species of crickets farmed belong to the Gryllinae subfamily: Acheta domesticus, Gryllus assimilis, and Gryllodes sigillatus. All three species live between 80 and 120 days from hatching to natural death, with a 10-21 day incubation period. Crickets are hemimetabolous insects: they hatch from an egg, molting through a series of nymph stages called instars, before going through a terminal ...

Mar 7, 2024 • 5min
EA - Invest in ACX Grants projects! by Saul Munn
This is: Invest in ACX Grants projects!, published by Saul Munn on March 7, 2024 on The Effective Altruism Forum.
TLDR
So, you think you're an effective altruist? Okay, show us what you got - invest in charitable projects, then see how you do over the coming year. If you pick winners, you get (charitable) prizes; otherwise, you lose your (charitable) dollars. Also, you get to fund impactful projects. Win-win.
Click here to see the projects and to start investing!
What's ACX/ACX Grants?
Astral Codex Ten (ACX) is a blog written by Scott Alexander on topics like effective altruism, reasoning, science, psychiatry, medicine, ethics, genetics, AI, economics, and politics. ACX Grants is a grants program in which Scott Alexander helps fund charitable and scientific projects - see the 2022 cohort here and his retrospective on ACX Grants 2022 here.
What do you mean by "invest in ACX Grants projects"?
In ACX Grants 2024, some of the applications were given direct grants and the rest were given the option to participate in an impact market.
Impact markets are an alternative to grants or donations as a way to fund charitable projects. A collection of philanthropies announces that they'll be giving out prizes for the completion of successful, effectively altruistic projects that solve important problems the philanthropies care about. Project creators then strike out to build projects that solve those problems.
If they need money to get started, investors can buy a "stake" in the project's possible future prize winnings, called an "impact certificate." (You can read more about how impact markets generally work here, and a canonical explanation of impact certificates on the EA Forum here.)
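Mechanically, the arrangement above can be sketched as a tiny pro-rata payout model. This is my own toy illustration with made-up names and numbers, not the actual market's settlement rules:

```python
# Toy model of an impact market settlement (illustrative only - the real
# market's rules may differ): investors buy fractions of a project's impact
# certificate, and any prize later awarded is split pro-rata by ownership,
# with the unsold remainder staying with the project's creator.
def settle_prize(prize: float, stakes: dict[str, float]) -> dict[str, float]:
    sold = sum(stakes.values())
    assert 0 <= sold <= 1, "investors cannot own more than 100% of the certificate"
    payouts = {investor: prize * fraction for investor, fraction in stakes.items()}
    payouts["creator"] = prize * (1 - sold)
    return payouts

# An investor who bought 25% of a certificate that wins a $10,000 prize
# receives $2,500 of charitable funding to regrant.
payouts = settle_prize(10_000, {"alice": 0.25, "bob": 0.10})
```

The point of the sketch is only the incentive structure: the investor's charitable return scales with how impactful the funders later judge the project to be.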
Four philanthropic funders have expressed interest in giving prizes to successful projects in this round:
ACX Grants 2025
The Long Term Future Fund
The EA Infrastructure Fund
The Survival and Flourishing Fund
So, after a year, the above philanthropies will review the projects in the impact market to see which ones have had the highest impact.
Okay, but why would I want to buy impact certificates? Why not just donate directly to the project?
Giving direct donations is great! But purchasing impact certificates can also have some advantages over direct donations:
Better feedback
Direct donations can have pretty bad feedback loops about what sorts of things end up actually being effective or successful. After a year, the philanthropies listed above will review the projects to see which ones are impactful - and award prizes to the ones they find most impactful. You get to see how much impact per dollar your investments returned, giving you grounded feedback.
Improving your modeling of grant-makers
Purchasing impact certificates forces you to put yourself in the shoes of a grant-maker - you look through a bunch of projects that might be impactful and, with your donation budget, select the ones you expect to have the most impact. It also pushes you to model the philanthropies themselves, with concrete feedback on how well you did.
What sorts of things do they care about? Why? What are their primary theories of change? How will the project sitting in front of you relevantly improve the world in a way they actually care about?
Make that charitable moolah
If you invest in projects that end up being really impactful, then you'll get a share of the charitable prize funding that projects win! All of this remains charitable funding, so you'll be able to donate it to whatever cause you think is most impactful. For example, if you invest $100 into a project that wins a prize worth 2x its original valuation, you can then choose to donate $200 to a charity or project of your choice!
Who's giving out the prizes at the end?
Four philanthropic funders have expressed interest in giving prizes[1] to successful projects that align with their interests:
AC...

Mar 7, 2024 • 1min
EA - Animal Charity Evaluators is doing a live AMA now! by Animal Charity Evaluators
This is: Animal Charity Evaluators is doing a live AMA now!, published by Animal Charity Evaluators on March 7, 2024 on The Effective Altruism Forum.
Starting now! Animal Charity Evaluators is holding an AMA on our Movement Grants!
The AMA is your chance to ask our team about what projects we're likely to fund, the application process, how to make a good application, and anything else about the program. Applications close March 17, 11:59 PM PT.
Our team members answering questions are:
Eleanor McAree, Movement Grants Manager
Elisabeth Ormandy, Programs Director
Holly Baines, Communications Manager
How to participate? Go to the FAST Forum (make sure you have an account) and ask a question. We look forward to hearing from you!
Movement Grants is ACE's strategic grantmaking program dedicated to building and strengthening the animal advocacy movement. For a limited time, you can DOUBLE your donation to ACE's Movement Grants! By donating to this program, you are investing in the expansion of a broader advocacy movement and a brighter future for animal welfare.
Thank you!
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 6, 2024 • 48min
LW - On Claude 3.0 by Zvi
This is: On Claude 3.0, published by Zvi on March 6, 2024 on LessWrong.
Claude 3.0
Claude 3.0 is here. It is too early to know for certain how capable it is, but Claude 3.0's largest version is in a similar class to GPT-4 and Gemini Advanced. It could plausibly now be the best model for many practical uses, with praise especially coming in on coding and creative writing.
Anthropic has decided to name its three different size models Opus, Sonnet and Haiku, with Opus only available if you pay. Can we just use Large, Medium and Small?
Cost varies quite a lot by size; note this is a log scale on the x-axis, whereas the y-axis isn't labeled.
This post goes over the benchmarks, statistics and system card, along with everything else people have been reacting to. That includes a discussion about signs of self-awareness (yes, we are doing this again) and also raising the question of whether Anthropic is pushing the capabilities frontier and to what extent they had previously said they would not do that.
Benchmarks and Stats
Anthropic says Claude 3 sets a new standard on common evaluation benchmarks. That is impressive, as I doubt Anthropic is looking to game benchmarks. One might almost say too impressive, given their commitment to not push the race ahead faster?
That's quite the score on HumanEval, GSM8K, GPQA and MATH. As always, the list of scores here is doubtless somewhat cherry-picked. Also, per this footnote, the GPT-4T model performs somewhat better than listed above:
But, still, damn that's good.
Speed is not too bad even for Opus in my quick early test, although not as fast as Gemini. They claim Sonnet is mostly twice as fast as Claude 2.1 while being smarter, and that Haiku will be super fast.
I like the shift to these kinds of practical concerns being front and center in product announcements. The more we focus on mundane utility, the better.
Similarly, the next topic is refusals, where they claim a big improvement.
I'd have liked to see Gemini or GPT-4 on all these charts as well - it seems easy enough to test other models via API or chat window and report back. This is on Wildchat non-toxic:
Whereas here (from the system card) they show consistent results in the other direction.
An incorrect refusal rate of 25% is stupidly high. In practice, I never saw anything that high for any model, so I assume this was a data set designed to test limits. Getting it down by over half is a big deal, assuming that this is a reasonable judgment on what is a correct versus incorrect refusal.
There was no similar chart for incorrect failures to refuse. Presumably Anthropic was not willing to let this get actively worse.
Karina Nguyen (Anthropic): On behavioral design of Claude 3.
That was the most joyful section to write! We shared a bit more on interesting challenges with refusals and truthfulness.
The issue with refusals is that there is this inherent tradeoff between helpfulness and harmlessness. More helpful and responsive models might also exhibit harmful behaviors, while models focused too much on harmlessness may withhold information unnecessarily, even in harmless situations. Claude 2.1 was over-refusing, but we made good progress on Claude 3 model family on this.
We evaluate models on 2 public benchmarks: (1) Wildchat, (2) XSTest. The refusal rate dropped 2x on Wildchat non-toxic, and on XSTest from 35.1% with Claude 2.1 to just 9%.
The difference between factual accuracy and honesty is that we expect models to know when they don't know answers to the factual questions. We shared a bit about the internal eval that we built. If a model cannot achieve perfect performance, however, ideal "honest" behavior is to answer all the questions it knows the answer to correctly, and to answer all the questions it doesn't know the answer to with an "I don't know (IDK) / Unsure" response...

Mar 6, 2024 • 56sec
LW - Vote on Anthropic Topics to Discuss by Ben Pace
This is: Vote on Anthropic Topics to Discuss, published by Ben Pace on March 6, 2024 on LessWrong.
What important questions would you want to see discussed and debated here about Anthropic? Suggest and vote below.
(This is the third such poll, see the first and second linked.)
How to use the poll
Reacts: Click on the agree/disagree reacts to help people see how much disagreement there is on the topic.
Karma: Upvote positions that you'd like to read discussion about.
New Poll Option: Add new positions for people to take sides on. Please add the agree/disagree reacts to new poll options you make.
The goal is to show people where a lot of interest and disagreement lies. This can be used to find discussion and dialogue topics in the future.

Mar 6, 2024 • 3min
EA - Resources on US policy careers by Andy Masley
This is: Resources on US policy careers, published by Andy Masley on March 6, 2024 on The Effective Altruism Forum.
As the Director of EA DC, I often speak with people interested in pursuing impactful careers in US policy. Here, I want to share some of the most helpful resources I've come across for people interested in government and policy careers:
First, the website Go Government from the Partnership for Public Service, which includes many helpful resources on working for the US federal government, including this new Federal Internship Finder (a large database of internship opportunities with government agencies).
Second, emergingtechpolicy.org: This new website offers excellent advice and resources for people interested in US government and policy careers, especially for those focusing on emerging tech issues like AI or bio. Sign up here for content updates and policy opportunities.
The emergingtechpolicy.org website includes many helpful guides for students and professionals, including:
In-depth guides to working in Congress, think tanks, and specific AI policy-relevant federal agencies (e.g. DOC, DHS, State)
Lists of resources, think tanks, and fellowships by policy area (e.g. AI, biosecurity, cyber, nuclear security)
Advice for undergraduates interested in US policy
Graduate school advice (e.g. policy master's, law school)
Policy internships (e.g. Congressional internships, semester in DC programs)
Policy fellowships (incl. a database of 50+ programs)
Testing your fit for policy careers
Career profiles of policy practitioners in AI and biosecurity policy
Third, the 80,000 Hours guides on policy careers, such as:
Policy and political skills profile (part of their new series of professional skills profiles)
AI governance and coordination career review
Biorisk research, strategy, and policy career review
Policy careers focused on other pressing global issues
80,000 Hours Job Board filter for US policy
I hope you'll find these resources helpful! And if you want to chat with me about EA DC or get connected to EAs working in US policy, feel free to reach out here. You can find all EA DC's public resources at this link.
Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org

Mar 6, 2024 • 6min
LW - Using axis lines for good or evil by dynomight
This is: Using axis lines for good or evil, published by dynomight on March 6, 2024 on LessWrong.
Say you want to plot some data. You could just plot it by itself:
Or you could put lines on the left and bottom:
Or you could put lines everywhere:
Or you could be weird:
Which is right? Many people treat this as an aesthetic choice. But I'd like to suggest an unambiguous rule.
Principles
First, try to accept that all axis lines are optional. I promise that readers will recognize a plot even without lines around it.
So consider these plots:
Which is better? I claim this depends on what you're plotting. To answer, mentally picture these arrows:
Now, ask yourself, are the lengths of these arrows meaningful? When you draw that horizontal line, you invite people to compare those lengths.
You use the same principle to decide if you should draw a y-axis line. Ask yourself if people should be comparing the lengths:
Years vs. GDP
Suppose your data is how the GDP of some country changed over time, so the x-axis is years and the y-axis is GDP.
You could draw either axis or not. So which of these four plots is acceptable?
Got your answers? Here's a key:
Why?
GDP is an absolute quantity. If GDP doubles, then that means something. So readers should be thinking about the distance between the curve and the x-axis.
But 1980 is arbitrary. When comparing 2020 to 2000, all that matters is that they're 20 years apart. No one cares that "2020 is twice as far from 1980 as 2000" because time did not start in 1980.
Years vs. GDP again
Say you have years and GDP again, except all the GDP numbers are much larger - instead of varying between 0 and $3T, they vary between $50T and $53T.
What to do? In principle you could stretch the y-axis all the way down to zero.
But that doesn't seem like a good idea - you can barely see anything.
Sometimes you need to start the y-axis at $50T. That's fine. (As long as you're not using a bar chart.) But then, the right answer changes.
The difference is that $50T isn't a meaningful baseline. You don't want people comparing things like (GDP in 1980 - $50T) vs. (GDP in 2000 - $50T) because that ratio doesn't mean anything.
Years vs temperature
What if the y-axis were temperature? Should you draw a line along the x-axis at zero?
If the temperature is in Kelvin, then probably yes.
If the temperature is in Fahrenheit, then no. No one cares about the difference between the current temperature and the freezing point of some brine that Daniel Fahrenheit may or may not have made.
If the temperature is in Celsius, then maybe. Do it if the difference from the freezing point of water is important.
Of course, if the freezing point of water is critical and you're using Fahrenheit, then draw a line at 32°F. Zero and one are the most common useful baselines, but use whatever is meaningful.
(Rant about philosophical meaning of "0" and "1" and identity elements in mathematical rings redacted at strenuous insistence of test reader.)
Homeowners vs. cannabis
Sometimes you should put lines at the ends of axes, too. Say the x-axis is the fraction of homeowners in different counties, and the y-axis is support for legal cannabis:
Should you draw axis lines? Well, comparisons to 0% are meaningful along both axes. So it's probably good to add these lines:
But comparisons to 100% are also meaningful. So in this case, you probably want a full box around the plot.
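The rule in the sections above can be written down mechanically. Here is a small sketch (my own encoding of the post's heuristic, not the author's code): each side of the plot box gets a line only if the baseline or maximum it marks is a meaningful reference point.

```python
# Encode the rule: draw an axis line only when the value it marks is a
# meaningful comparison point. The bottom line marks y's baseline, the left
# line marks x's baseline; top/right mark meaningful maxima (e.g. 100%).
def axis_lines(x_baseline_meaningful: bool, y_baseline_meaningful: bool,
               x_max_meaningful: bool = False,
               y_max_meaningful: bool = False) -> set[str]:
    lines = set()
    if y_baseline_meaningful:
        lines.add("bottom")  # invite comparisons against y's baseline
    if x_baseline_meaningful:
        lines.add("left")    # invite comparisons against x's baseline
    if y_max_meaningful:
        lines.add("top")
    if x_max_meaningful:
        lines.add("right")
    return lines

# Years vs. GDP: GDP's zero is meaningful, 1980 is arbitrary -> bottom line only.
gdp_plot = axis_lines(x_baseline_meaningful=False, y_baseline_meaningful=True)
# Homeownership % vs. support %: 0% and 100% matter on both axes -> full box.
box_plot = axis_lines(True, True, x_max_meaningful=True, y_max_meaningful=True)
```

In a plotting library this maps directly onto toggling the visibility of each side of the plot frame.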
Lines can also be used for evil
Lots of people hate the Myers-Briggs personality test - suggesting that you should use a test created by academic psychologists, like the Big Five, instead. I've long held this was misguided, and that if you take the Myers-Briggs scores (without discretizing them into categories) they're almost equivalent to the Big Five without neuroticism, or a "Big Four".
So I was excited to see some recent research that tests t...

Mar 6, 2024 • 17min
EA - Supervolcanoes tail risk has been exaggerated? by Vasco Grilo
This is: Supervolcanoes tail risk has been exaggerated?, published by Vasco Grilo on March 6, 2024 on The Effective Altruism Forum.
This is a linkpost for the peer-reviewed article "Severe Global Cooling After Volcanic Super-Eruptions? The Answer Hinges on Unknown Aerosol Size" (McGraw 2024). Below are its abstract, my notes, my estimation of a near-term annual extinction risk from supervolcanoes of 3.38*10^-14, and a brief discussion of it. At the end, I have a table comparing my extinction risk estimates with Toby Ord's existential risk guesses given in The Precipice.
Abstract
Here is the abstract from McGraw 2024 (emphasis mine):
Volcanic super-eruptions have been theorized to cause severe global cooling, with the 74 kya Toba eruption purported to have driven humanity to near-extinction. However, this eruption left little physical evidence of its severity and models diverge greatly on the magnitude of post-eruption cooling. A key factor controlling the super-eruption climate response is the size of volcanic sulfate aerosol, a quantity that left no physical record and is poorly constrained by models.
Here we show that this knowledge gap severely limits confidence in model-based estimates of super-volcanic cooling, and accounts for much of the disagreement among prior studies. By simulating super-eruptions over a range of aerosol sizes, we obtain global mean responses varying from extreme cooling all the way to the previously unexplored scenario of widespread warming. We also use an interactive aerosol model to evaluate the scaling between injected sulfur mass and aerosol size.
Combining our model results with the available paleoclimate constraints applicable to large eruptions, we estimate that global volcanic cooling is unlikely to exceed 1.5°C no matter how massive the stratospheric injection. Super-eruptions, we conclude, may be incapable of altering global temperatures substantially more than the largest Common Era eruptions.
This lack of exceptional cooling could explain why no single super-eruption event has resulted in firm evidence of widespread catastrophe for humans or ecosystems.
My notes
I have no expertise in volcanology, but I found McGraw 2024 to be quite rigorous. In particular, they are able to use their model to replicate the more pessimistic results of past studies by tweaking just 2 input parameters (highlighted by me below):
"We next evaluate if the assessed aerosol size spread is the likely cause of disagreement among past studies with interactive aerosol models. For this task, we interpolated the peak surface temperature responses from our ModelE simulations to the injected mass and peak global mean aerosol size from several recent interactive aerosol model simulations of large eruptions (Fig. 7, left panel). Accounting for these two values alone (left panel), our model experiments are able to reproduce remarkably similar peak temperature responses as the original studies found". By "remarkably similar", they are referring to a coefficient of determination (R^2) of 0.87 (see Fig. 7).
"By comparison, if only the injected masses of the prior studies are used, the peak surface temperature responses cannot be reproduced". By this, they are referring to an R^2 ranging from -1.82 to -0.04[1] (see Fig. 7).
They agree with past studies on the injected mass, but not on the aerosol size[2]. Fig. 3a (see below) illustrates the importance of the peak mean aerosol size. The greater the size, the weaker the cooling. I think this is explained as follows:
Primarily, smaller particles reflect more sunlight per mass due to having greater cross-sectional area per mass[3].
Secondarily, larger particles have less time to reflect sunlight due to falling down faster[4].
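A quick back-of-the-envelope check of the first point (my own illustration, not a calculation from the paper): for a spherical particle, cross-sectional area scales with r^2 while mass scales with r^3, so area per unit mass scales as 1/r.

```python
import math

# For a sphere of radius r and density rho, cross-sectional area per unit
# mass is (pi r^2) / (rho * 4/3 pi r^3) = 3 / (4 rho r), i.e. proportional
# to 1/r. The density value is an arbitrary placeholder; only the scaling
# with radius matters here.
def cross_section_per_mass(radius_m: float, density_kg_m3: float = 1800.0) -> float:
    area = math.pi * radius_m**2
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m**3
    return area / mass  # m^2 of intercepted sunlight per kg of aerosol

small = cross_section_per_mass(0.25e-6)  # 0.25 micron particle
large = cross_section_per_mass(0.50e-6)  # 0.50 micron particle
```

Halving the radius doubles the area per kilogram, which is why, for a fixed injected sulfur mass, smaller aerosol implies stronger cooling.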
According to Fig. 2 (see below), aerosol size increases with injected mass, which makes intuitive sen...

Mar 6, 2024 • 25min
AF - We Inspected Every Head In GPT-2 Small using SAEs So You Don't Have To by robertzk
This is: We Inspected Every Head In GPT-2 Small using SAEs So You Don't Have To, published by robertzk on March 6, 2024 on The AI Alignment Forum.
This is an interim report that we are currently building on. We hope this update will be useful to related research occurring in parallel. Produced as part of the ML Alignment & Theory Scholars Program - Winter 2023-24 Cohort.
Executive Summary
In a previous post we trained attention SAEs on every layer of GPT-2 Small, and we found that a majority of features are interpretable in all layers. We've since leveraged our SAEs as a tool to explore individual attention heads through the lens of SAE features.
Using our SAEs, we inspect the roles of every attention head in GPT-2 small, discovering a wide range of previously unidentified behaviors. We manually examined every one of the 144 attention heads and provide brief descriptions in this spreadsheet. We note that this is a rough heuristic to get a sense of the most salient effects of a head and likely does not capture their role completely.
We observe that features become more abstract up to layer 9 and then less so after that. We determined this by interpreting and conceptually grouping the top 10 features attributed to each of the 144 heads.
Working from the bottom layers to the top, 39 of the 144 heads exhibited surprising feature groupings not seen in any previous head.
We provide feature dashboards for each attention head.
To validate that our technique captures legitimate phenomena rather than spurious behaviors, we verified that our interpretations are consistent with previously studied heads in GPT-2 small. These include induction heads, previous token heads, successor heads and duplicate token heads. We note that our annotator mostly did not know a priori which heads had previously been studied.
To demonstrate that our SAEs can enable novel interpretability insights, we leverage our SAEs to develop a deeper understanding of why there are two induction heads in Layer 5. We show that one does standard induction and the other does "long prefix" induction.
We use our technique to investigate the prevalence of attention head polysemanticity. We think that the vast majority of heads (>90%) are performing multiple tasks, but also narrow down a set of 14 candidate heads that are plausibly monosemantic.
Introduction
In previous work, we trained and open sourced a set of attention SAEs on all 12 layers of GPT-2 Small. We found that random SAE features in each layer were highly interpretable, and highlighted a set of interesting feature families. We've since leveraged our SAEs as a tool to interpret the roles of attention heads. The key idea of the technique is that, while our SAEs are trained to reconstruct the entire layer, the contribution of each specific head can be inferred.
This allows us to find the top 10 features most salient to a given head, and to note whenever a pattern suggests a role for that head. We then used this to manually inspect the role of every head in GPT-2 small, and we spend the rest of this post exploring various implications of our findings and the technique.
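Concretely, the inference step can be sketched as follows. This is a hypothetical reconstruction with made-up shapes and random stand-in weights, not the authors' code: because the layer output is a sum of per-head writes, and the SAE encoder is linear up to its nonlinearity, each feature's pre-activation decomposes into per-head terms.

```python
import numpy as np

# Made-up dimensions in the spirit of GPT-2 small; weights are random
# stand-ins, not a trained model or SAE.
rng = np.random.default_rng(0)
n_heads, d_head, d_model, d_sae = 12, 64, 768, 4096

z = rng.normal(size=(n_heads, d_head))             # per-head attention outputs at one position
W_O = rng.normal(size=(n_heads, d_head, d_model))  # per-head slices of the output projection
W_enc = rng.normal(size=(d_model, d_sae))          # SAE encoder (bias omitted - it is not
                                                   # attributable to any single head)

per_head_write = np.einsum("hd,hdm->hm", z, W_O)   # each head's write to the residual stream
layer_out = per_head_write.sum(axis=0)

# Linearity: per-head contributions to each feature's pre-activation sum to
# the full pre-activation, so features can be attributed head by head.
head_attrib = per_head_write @ W_enc               # shape (n_heads, d_sae)

# The "top 10 features most salient to a given head", e.g. head 5:
top10 = np.argsort(-head_attrib[5])[:10]
```

In practice one would aggregate such attributions over many tokens before interpreting a head, but the decomposition itself is just this linearity argument.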
In the spirit of An Overview of Early Vision in InceptionV1, we start with a high-level, guided tour of the different behaviors implemented by heads across every layer, building better intuitions for what attention heads learn in a real language model.
To validate that the technique is teaching something real about the roles of these heads, we confirm that our interpretations match previously studied heads. We note that our annotator mostly did not know a priori which heads had previously been studied. We find:
Induction heads (5.1, 5.5, 6.9, 7.2, 7.10)
Previous token heads (4.11)
Copy suppression head (10.7)
Duplicate token heads (3.0)
Successor head (9.1)
In addition to building intuition about wh...

Mar 5, 2024 • 13min
LW - My Clients, The Liars by ymeskhout
This is: My Clients, The Liars, published by ymeskhout on March 5, 2024 on LessWrong.
It's not just that my clients lie to me a lot, which will only hurt them - it's that they're really, really bad at it.
My job as a public defender puts me in a weird place. I am my clients' zealous advocate, but I'm not their marionette. I don't just roll into court to parrot whatever my clients tell me - I make sure I'm not re-shoveling bullshit. So for my sake and theirs, I do my homework. I corroborate. I investigate.
A significant portion of my job ironically mirrors that of a police detective. Every case I get requires me to deploy a microscope and retrace the cops' steps to see if they fucked up somehow (spoiler: they haven't). Sometimes I go beyond what the cops did to collect my own evidence and track down my own witnesses.
All this puts some of my clients of the guilty persuasion in a bind. Sure, they don't want me sitting on my ass doing nothing for their case, but they also can't have me snooping around on my own too much. . . because who knows what I might find? So they take steps to surreptitiously install guardrails around my scrutiny, hoping I won't notice.
You might wonder why any chicanery from my clients is warranted. After all, am I not professionally obligated to strictly maintain client confidentiality? It's true, a client can show me where they buried their dozen murder victims and I wouldn't be allowed to tell a soul, even if an innocent person is sitting in prison for their crimes. Part of my clients' clammed-up demeanors rests on a deluded notion that I won't fight as hard for their cases unless I am infatuated by their innocence.
Perhaps they don't realize that representing the guilty is the overwhelmingly banal reality of my job.[1] More importantly, it's myopic to forget that judges, prosecutors, and jurors want to see proof, not just emphatic assurances on the matter.
But clients still lie to me - exclusively to their own detriment
Marcel was not allowed to possess a firearm. And yet mysteriously, when the police arrested him - the details are way too complicated to explain, even by my standards - in his sister's vehicle, they found a pistol under the passenger seat.
"The gun is not mine. I don't even like guns. I'm actually scared of guns." He told me this through the jail plexiglass as I flipped through his remarkable résumé of gun-related crimes. Marcel spent our entire first meeting proselytizing his innocence to me. Over the next half hour he went on a genealogy world tour, swearing up and down on the lives of various immediate and extended members of his family that he never ever ever touched guns.
I was confused why he perseverated so much, but I just nodded along as part of my standard early precarious effort to build rapport with a new (and likely volatile) client. What he was telling me wasn't completely implausible - sometimes people are indeed caught with contraband that isn't theirs - but there was nothing I could do with his information at that early stage.
Maybe he thought if he could win me over as a convert, I'd then ask for the case to be dismissed on the "he says it's not his" precedent.
Weeks later, I got the first batch of discovery. I perused the photographs that documented the meticulous search of his sister's car. I saw the pistol glistening beneath the camera flash, nestled among some CDs and a layer of Cheetos crumbs. And on the pistol itself, a sight to behold: to this day the clearest, most legible, most unobstructed fingerprints I have ever seen in my legal life. If you looked closely enough, the whorls spelled out his name and Social Security number.
Public defenders are entitled to ask the court for money to pay for private investigators, digital forensic specialists, fingerprint examiners, or whatever else is needed to ensure a def...


