The Nonlinear Library

The Nonlinear Fund
Nov 21, 2023 • 10min

LW - Why not electric trains and excavators? by bhauth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Why not electric trains and excavators?, published by bhauth on November 21, 2023 on LessWrong.

Many countries are supporting electric cars for environmental and independence reasons. But perhaps there are some targets for electrification with better economics than those, cost-effective without any government incentives. For example, trains and hydraulic excavators.

trains

In some countries, most trains are powered by overhead electric lines. In America, most trains are powered by diesel engines. Why? The competent estimates I've seen for the ROI of electrifying US rail lines have it being worthwhile. This isn't a new thing. Here's a paper from 40 years ago estimating ~19% ROI. Arguments that the economics are bad in America because of geographic differences are wrong.

Why, then, hasn't that happened? Yes, US high-speed rail programs have not gone well, but unlike new high-speed rail lines, building electric lines over existing rail doesn't require purchasing a lot of land. One major reason is that the Association of American Railroads has lobbied against electrification programs. Apart from private lobbying, they've put out some reports saying "it doesn't make sense for America because American rail networks are special" (wrong), "we should wait for hydrogen fuel cell trains instead" (ultra-super-wrong), and various other bad arguments.

Why would they do that? Some hypotheses:

1. Construction of overhead electric lines would be much more expensive in America than in other countries, making those ROI estimates inaccurate.
2. The pay of rail executives depends on short-term profits, so they're against long-term investments.
3. Manufacturing of electric trains would have more competition from overseas companies, and there's cross-ownership between rail operators and manufacturers.
4. Change would require work, and might give upstart companies a chance to displace larger companies, so it's opposed in general.

My understanding is that (2) and (4) are the dominant factors. Those aren't specific to rail; they're properties of US business management, so I think rail electrification is a good example of wider problems in US companies. Management is evaluated on shorter timescales than good investments provide returns on, so US companies eventually end up using outdated equipment and processes, and lose out to foreign firms. See also:

- GE under Jack Welch.
- Private equity now having better long-term returns in the US.
- US steel companies being outcompeted by foreign steel firms, and then eg ArcelorMittal taking over steel plants in the US.
- US shipyards failing to modernize, until they produce no commercial ships and Burke-class destroyers cost 2x as much to make as the Sejong-class equivalents from Korea.

When you look at the internal evaluations of proposed projects at large companies, it's fairly common for 15% ROI to be the minimum value for serious consideration. That is, of course, higher than the cost of borrowing. The usual explanation has been that a substantial buffer is needed to account for inaccurate estimations, but that doesn't make sense to me, for 2 reasons:

1. The required ROI doesn't increase linearly with low-risk interest rates or the cost of capital.
2. Some ROI estimates are known to be more accurate than others. The spread between required ROI and interest rates doesn't increase proportionately with estimate inaccuracy.
I have a different theory: the reason you see requirements for 15%+ ROI so often is because executives are often at their position for around 6 years, and they want most of the investment to have been returned by the time they're looking for a promotion or new job. What's really important isn't the true ROI estimated as best it can be, but rather the ROI in practice over the first few years. Fans of independent games have repeatedly seen some beloved game company g...
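A quick way to see why the tenure theory fits the 15% figure: at a constant, undiscounted rate of return, the payback period is roughly the reciprocal of the ROI. Here is a minimal sketch of that arithmetic; the payback formula is the standard simple one, not anything from the post, and the two ROI values are the ones quoted above.

```python
# Sketch (illustrative, not from the post): simple payback arithmetic.
# At a constant undiscounted rate of return, payback takes ~1/ROI years.

def payback_years(roi: float) -> float:
    """Years until cumulative simple returns repay the initial investment."""
    return 1.0 / roi

for roi in (0.15, 0.19):
    print(f"ROI {roi:.0%}: ~{payback_years(roi):.1f} years to payback")

# ROI 15%: ~6.7 years -- about the ~6-year executive tenure cited above.
# ROI 19%: ~5.3 years -- the electrification estimate from the 40-year-old paper.
```

On this reading, a 15% hurdle rate is less a risk buffer than a requirement that the investment pay for itself within roughly one executive tenure.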
Nov 21, 2023 • 5min

LW - Navigating emotions in an uncertain & confusing world by Akash

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Navigating emotions in an uncertain & confusing world, published by Akash on November 21, 2023 on LessWrong.

The last few days have been confusing, chaotic, and stressful. We're still trying to figure out what happened with Sam Altman and OpenAI and what the aftermath will look like. I have personally noticed my emotions fluctuating more. I have various feelings about the community, about the current state of the world, about the increasingly strong pressures to view the world in terms of factions, about the current state of AIS discourse, and about the current state of the AI safety community.

Between now and AGI, there will likely be other periods of high stress, confusion, or uncertainty. I figured it might be a good idea for me to write down some thoughts that I have found helpful or grounding. If you have noticed feelings of your own, or any strategies that have helped you, I encourage you to share them in the comments.

Frames I find helpful & grounding

1. On whether my actions matter. In some worlds, my actions will not matter. Maybe I am too late to meaningfully affect things. Maybe this is true of my friends, allies, and community as well. In the extreme case, at some point we will pass a "point of no return": the point where my actions and those of my community no longer have any meaningful effect on the world. I can accept this uncertainty, and I can choose to focus on the worlds where my actions still matter.

2. On not having clear end-to-end impact stories. There are not many things that make a meaningful difference, but there are a few. I know of at least one that I was meaningfully part of, and I know of a few others that my friends & allies were part of. Sometimes, these things will not be clear in advance. (Ex: I wrote the initial draft of a sentence that ended up becoming the CAIS statement, but at the time, I did not realize that was going to be a big deal. It felt like an interesting side project, and I certainly didn't have a clear end-to-end impact story for it.) Of course, it is valuable to strive for projects that have ex-ante end-to-end impact stories, and it is dangerous to adopt a "well, IDK why this is good, but hopefully it will work out" mentality.

3. On friendship. I am lucky to have found friends and allies who are trying to make the world a better place. In the set of all possible lives, I have found myself in one where I am regularly in contact with people who are fighting to make the world better and safer. I can strive to absorb some of Alice's relentless drive to solve problems, Bob's ability to speak with integrity and build coalitions, Carol's deep understanding of technical issues, etc.

4. Gratitude to the community. The AI safety community has provided me a lot: knowledge, motivation, thinking skills, friendships, and concrete opportunities to make the world better. I would not be here without the community. When I reflect on this, I feel viscerally grateful to the community.

5. Criticism of the community. The AI safety community has made mistakes and undoubtedly continues to make important mistakes. I can feel grateful for certain parts of the community while speaking out against others. There is no law that says that the "community" must be fully good or fully bad - and indeed, it is neither.

6. On identifying with the EA or AIS community.
I do not have to identify with a community or all parts of it. I can find specific people and projects that I choose to contribute to. I can be aware of how the community impacts me, both positively and negatively. I can try to extract its lessons and best practices while being aware of its dangers. I can be grateful for the fact that I have become a more precise communicator, I have new ways of monitoring my uncertainty, and I speak & think more probabilistically. This can coincide with concerns I...
Nov 21, 2023 • 14min

LW - For Civilization and Against Niceness by Gabriel Alfour

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: For Civilization and Against Niceness, published by Gabriel Alfour on November 21, 2023 on LessWrong.

Scott Alexander wrote a great essay, called "In Favor of Niceness, Community and Civilization". Scott is a great writer, and conveys what I love about civilization in a beautiful way. Unfortunately, the essay conflates two behaviors, though to be fair, those two behaviors often go hand in hand:

- Being uncivil, as in: breaking the norms of civilization.
- Being mean, as in: being not-nice, unpleasant to be around.

The following paragraph embodies this conflation quite well:

"Liberalism does not conquer by fire and sword. Liberalism conquers by communities of people who agree to play by the rules, slowly growing until eventually an equilibrium is disturbed. Its battle cry is not 'Death to the unbelievers!' but 'If you're nice, you can join our cuddle pile!'"

I love civilization! Democracies let me politically coordinate with people internationally, socially liberal systems grant me the freedom to be as weird as I want in private, and economically liberal systems let me try many exotic kinds of positive-sum trades with people! None of this would be possible without civilization. I agree, civilization is great. But I don't want to join your cuddle pile!

Civilization is often about being not nice

As Scott Alexander says, civilization is about "agreeing to play by the rules." But this is not about niceness. On the contrary, playing by the rules often requires being not nice. [1]

While we want companies to abide by strong regulations, and not cause negative externalities (like pollution), we also do not want them to be nice to each other. This is the core of antitrust law, which aims to minimize anti-competitive practices. More concretely, the goal of companies is to capture value (make profits), while the goal of free markets is for companies to create value for consumers. The way those two incentives are aligned is through competition. When companies must compete, they have to keep improving relative to other companies to keep their profits, which increases the share of the value enjoyed by consumers.

In other words: we want companies to compete as fiercely as possible, thereby driving quality up and pushing prices down. As Adam Smith wrote: "It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest." This is a feature of economic liberalism.

Similarly, in a court of law, while we want all lawyers present to strictly adhere to their local equivalent of the Model Rules of Professional Conduct, we don't want the defense attorney and the prosecutor to be nice to each other. When younger, I could not understand attorneys who defended people they knew were criminals. Weren't these attorneys making society strictly worse? My confusion went deeper when I learnt that they had an ethical obligation to defend people who they knew were criminals. But it makes sense: the attorney doesn't issue the final sentence, the judge does. And the judge doesn't know if the person is innocent or not, or, when they're guilty, how guilty they are. To solve this, judiciary systems go through something close to an Adversarial Collaboration. Both sides need to bring forward as much evidence for their case as possible.
Only then can the judge make the best decision with as much information as possible. When the defense attorney makes their case, they are not changing the sentence, they are giving more information to the judge, who then decides on the sentence. If you think about it, it is obvious: it is better for the judge to have more information. And to get there, you need people to optimize for both sides of the story, not focus on the one we already believe to be correct. This is why prosecutors and defense attorneys sh...
Nov 21, 2023 • 7min

EA - High Impact Medicine - Impact Survey Results and Marginal Funding by High Impact Medicine

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: High Impact Medicine - Impact Survey Results and Marginal Funding, published by High Impact Medicine on November 21, 2023 on The Effective Altruism Forum.

Introduction

As part of the Marginal Funding Week, we want to give a brief update on High Impact Medicine, describing which projects marginal funding is likely to be spent on. High Impact Medicine (Hi-Med) is a non-profit organisation dedicated to inspiring and empowering medical students and doctors to make impact-driven decisions in their careers and giving.

Theory of Change

This is an overview of our activities, our current definition of positive impact, our target audience and the outcomes we monitor. The main assumptions behind our theory of change are:

- Target group-specific interventions can improve altruistic behaviour change beyond broad outreach: interventions customised to professional groups account for background-specific needs, abilities, and goals.
- Professional peers can be potent facilitators of altruistic behaviour: role models are an important trigger for altruistic behaviour change. Change is more likely when someone is "like me", i.e. belongs to a relevant peer group.
- Medical doctors are a well-suited target group for altruistic impact considerations: they are often strongly altruistically motivated, exceptionally skilled, and scientifically minded, and they often have significant career capital and high incomes.

Proof of concept: Past interventions and their validation

We conducted various programmes, interacting with >500 medical doctors and students over the past two years. The full 2023 Impact Survey Executive Summary can be found here. The evaluation of our inaugural introductory fellowship cohort has been published in an academic peer-reviewed journal. Bioethicist Benjamin Krohmal recently ran an elective course for medical students at Georgetown University School of Medicine in the US, "Beneficence & Beyond: How to do the most good with your medical career", that was inspired and informed by our introductory fellowship. Our monitoring and evaluation team is currently helping to assess the results, and we are in conversations with other universities to run similar programmes.

What we learned

- There is substantial interest in the medical community in learning more about doing the most good. We also got preliminary confirmation that the medical background of the High Impact Medicine team meant that we were able to form genuine and meaningful connections with our members, which in turn increased the tractability of our efforts.
- It's likely a mix of interventions that matters: all individuals for whom Hi-Med has facilitated career changes have participated in both the introductory fellowship and 1:1 conversations. 1:1 conversations seemed to be particularly important in influencing them to make these career and giving decisions.
- We have seen the most positive impactful changes in individuals with high scores in altruistic motivation & career capital: this was an observation from our most successful case studies.
- Time investments to attain giving pledges can be extremely low: charismatic individuals can get someone to strongly consider a donation pledge in a single 1:1.
- Impact attribution is challenging: individuals engage in multiple interventions, complicating evaluations.
- Reliance on volunteers is unsustainable: operationally, our rapid community growth and reliance on contractors / volunteers strained our organisational capacity.

Looking forward

Based on our evaluation of past and current programmes, we plan to iterate in the following ways: select for and attract more promising individuals (e.g. by building external credibility) and provide them with timely and individualised support (e.g. more 1:1 calls, a career fellowship cohort starting every other month, biosecurity career change ...
Nov 21, 2023 • 1min

LW - Vote on worthwhile OpenAI topics to discuss by Ben Pace

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Vote on worthwhile OpenAI topics to discuss, published by Ben Pace on November 21, 2023 on LessWrong.

I (Ben) recently made a poll for voting on interesting disagreements to be discussed on LessWrong. It generated a lot of good topic suggestions and data about what questions folks cared about and disagreed on. So, Jacob and I figured we'd try applying the same format to help people orient to the current OpenAI situation. What important questions would you want to see discussed and debated here in the coming days? Suggest and vote below.

How to use the poll

- Reacts: Click on the agree/disagree reacts to help people see how much disagreement there is on the topic.
- Karma: Upvote positions that you'd like to read discussion about.
- New Poll Option: Add new positions for people to take sides on. Please add the agree/disagree reacts to new poll options you make.

The goal is to show people where a lot of interest and disagreement lies. This can be used to find discussion and dialogue topics in the future.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 20, 2023 • 2min

EA - Animal Advocacy Strategy Forum 2023 Summary by Neil Dullaghan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Animal Advocacy Strategy Forum 2023 Summary, published by Neil Dullaghan on November 20, 2023 on The Effective Altruism Forum.

Introduction

In July 2023, the Animal Advocacy Strategy Forum[1] was held over three days with the purpose of bringing together key decision-makers in the animal advocacy community to connect, coordinate, and strategize. At the end of the forum, 35/44 participants filled out a survey similar to last year's Forum survey (Duffy 2023) that sought to better understand the future needs of effective animal advocacy groups and the perceptions of animal advocates about the most important areas to focus on in the future. The attendees represented approximately 27 key groups in the animal advocacy space. 23/35 survey participants were in senior leadership positions at their organization (C-level, founder, and various "Executive" and "Director" roles). Our report discusses the results of that survey and the workshops of the forum itself. Click here for the report on the Rethink Priorities website.

Acknowledgments

This report was written by Neil Dullaghan. Thanks to Daniela R. Waldhorn for their guidance, to Kieran Greig and Laura Duffy for their helpful feedback, and to Adam Papineau for copy-editing. The post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to our newsletter. You can explore our completed public work here.

[1] Formerly known as the Effective Animal Advocacy Coordination Forum

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 20, 2023 • 8min

EA - The EA Animal Welfare Fund (Once Again) Has Significant Room For More Funding by kierangreig

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The EA Animal Welfare Fund (Once Again) Has Significant Room For More Funding, published by kierangreig on November 20, 2023 on The Effective Altruism Forum.

Just as ~2 years ago, the EA Animal Welfare Fund has significant room for more funding. This could be a pretty important point that informs end-of-year giving for a number of donors who are looking to make donations within the animal sector. Briefly, here's why the Animal Welfare Fund has some pretty significant room for more funding at this point:

- Right now, there's ~$1M in the Animal Welfare Fund, and we now have 50 grants, summing to ~$4.5M, under evaluation.
- Between mid-last year and mid-this year, the EA AWF received ~350 applications, of which ~150 were desk rejects and ~200 were graded by fund managers. Of these ~200, ~60 received funding, and ~30 received the grant amount they applied for or more. Assuming that the general shape of the pipeline remains similar, that could imply we now have more grants than we can fund - potentially even if we were to have an influx of several hundred thousand dollars.
- In general, the AWF is navigating a difficult period funding-wise: last year, we had ~$7M to allocate, whereas our projection for this year - extrapolating from donations received so far - is only ~$5M.
- We also have plans for significant growth next year through some internal expansion plans in the works (e.g., possibly adding further fund managers, hopefully at least one who is full-time, and doing more active grantmaking).
- Also, a lot of our grantees have grown, so they'll have more room for funding. As a lot of the groups we give to are relatively small, they can grow at such a rate that they'd often be looking to absorb twice as much funding in the next year. If we zoom in on just, say, Fórum Nacional de Proteção e Defesa Animal - a promising Brazilian group - in 2021 we granted them $30k and this year $80k, which comfortably corresponds to greater than 100% growth in grant amount over a two-year period. Generally, it seems that having more money in the fund encourages some good organizations to request more funding for some quality projects.
- Relatedly, some of the areas we grant in just tend to be pretty high growth, growing at comfortably >20% year on year. For instance, years ago there was basically very little that could be granted to invertebrate welfare, but this year we made several hundred thousand dollars in grants within that area.

So next year, we think that we could fairly comfortably and productively absorb and grant out in the realm of $6M-$10M (that's a ~20%-100% increase on this year) without any significant decreases to the quality of our grants. Note too that in previous years we have been able to make such jumps in grantmaking volume: in 2020 we granted out ~$2M in total, in 2021 we more than doubled that to ~$4.5M, and in 2022 we went up to $7M, before now likely decreasing to ~$5M this year. We think we're again on track to handle 2020-2022 levels of either absolute growth or percentage growth in grants for next year, which will put us in that $6M-$10M range.

So one way to look at this is that we now have ~$1M in the fund but next year we could do something like at least ~$7.5M in grants. In that sense, we have several million dollars in room for more funding.
It could be worth thinking about how much we'll likely raise for grants for next year too, though. This year, we typically raised ~$100k per month. Historically, we have seen about a ~2x-8x increase on that monthly total for the months of December and January (some end-of-year donations come in on the books in January). Another way to look at this, then, is that based on current trends and their year-to-year growth, we would now be looking at raising something like ~$1.7M (~$100k...
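For concreteness, the projection arithmetic described above can be reproduced with a short sketch. The parenthetical above is truncated in the source, so the exact calculation is not shown there; the 3.5x value below is my assumption, chosen within the quoted 2x-8x range because it reproduces the ~$1.7M figure.

```python
# Sketch (illustrative, not from the post): the fundraising projection above.
# Baseline ~$100k/month, with December and January at a holiday multiplier
# somewhere in the quoted 2x-8x range.

BASELINE = 100_000  # approximate monthly donations, per the post

def projected_annual_raise(holiday_multiplier: float) -> float:
    """Ten ordinary months plus Dec and Jan at a multiple of baseline."""
    return 10 * BASELINE + 2 * holiday_multiplier * BASELINE

for m in (2.0, 3.5, 8.0):
    print(f"{m}x holiday multiplier -> ~${projected_annual_raise(m) / 1e6:.1f}M")

# 2x -> ~$1.4M and 8x -> ~$2.6M bound the range;
# a ~3.5x multiplier reproduces the post's ~$1.7M estimate.
```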
Nov 20, 2023 • 15min

EA - CEA is fundraising, and funding constrained by Ben West

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is fundraising, and funding constrained, published by Ben West on November 20, 2023 on The Effective Altruism Forum.

Tl;dr

The Centre for Effective Altruism (CEA) has an expected funding gap of $3.6m in 2024. Some example things we think are worth doing but are unlikely to have funding for by default:

- Funding a Community Building Grant in Boston
- Funding travel grants for EAG(x) attendees

Note that these are illustrative of our current cost-effectiveness bar (as opposed to a binding commitment that the next dollar we receive will go to one of these things). In collaboration with EA Funds we have produced models where users can plug in their own parameters to determine the relative value of a donation to CEA versus EA Funds.

Intro

The role of an interim executive is weird: whereas permanent CEOs like to come in with a bold new vision (ideally one which blames all the organization's problems on their predecessor), interim CEOs are stuck staying the course. Fortunately for me, I mostly liked the course CEA was on when I came in. The past few years seem to have proven the value of the EA community: my own origin cause area of animal welfare has been substantially transformed (e.g. as recounted by Jakub here), and even as AI safety has entered the global main stage, many of the people doing research, engineering, and other related work have interacted with CEA's projects.

Of course, this is not to say that CEA's work is a slam dunk. In collaboration with Caleb and Linch at EA Funds, I have included below some estimates of whether marginal donations to CEA are more impactful than those to EA Funds, and a reasonable confidence interval very comfortably includes the possibility that you should donate elsewhere. We are fortunate to count the Open Philanthropy Project (and in particular Open Phil's GCR Capacity Building program) among the people who believe we are a good use of funding, but they (reasonably) prefer not to fund all of our budget, leaving us with a substantial number of projects which we believe would produce value if we could fund starting or scaling them. This post outlines where we expect marginal donations to go and the value we expect to come from those donations.

You can donate to CEA here. If you are interested in donating and have further questions, feel free to email me (ben.west@centreforeffectivealtruism.org). I will also try to answer questions in the comments.

The basic case for CEA

Community building is sometimes motivated by the following: suppose you spent a year telling everyone you know about EA and getting them excited. Probably you could get at least one person excited. Then this means that you will have doubled your lifetime impact, as both you and this other person will go on to do good things. That's a pretty good ROI for one year of work! This story is overly simplistic, but it is roughly my motivation for working on (and donating to) community building: it's a leveraged way to do good in the world.
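The "doubled your lifetime impact" arithmetic can be made concrete with a toy model. A minimal sketch follows; the 40-year career length and one-recruit-per-year rate are illustrative assumptions, not figures from the post.

```python
# Sketch (illustrative, not from the post): the "double your lifetime impact"
# arithmetic. Career length and recruit count are assumed values.

def total_impact_years(career_years: int, outreach_years: int, recruits: int) -> int:
    """Impact-years from your own direct work plus recruits' full careers."""
    return (career_years - outreach_years) + recruits * career_years

baseline = total_impact_years(40, 0, 0)   # 40 impact-years, no outreach
leveraged = total_impact_years(40, 1, 1)  # 39 + 40 = 79 impact-years
print(f"leverage: {leveraged / baseline:.2f}x")  # ~1.98x, i.e. roughly doubled
```

As the post notes, the real picture is messier (recruits may have counterfactually found EA anyway, or have different career lengths), but the sketch shows where the "roughly doubled" intuition comes from.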
And it does seem to be the case that many people whose work seems impactful attribute some of their impact to CEA:

- The Open Philanthropy longtermist survey in 2020 identified CEA among the top tier of important influences on people's journey towards work improving the long-term future, with about half of CEA's demonstrated value coming through events (EA Global and EAGx conferences) and half through our other programs.
- The 80,000 Hours user survey in 2022 identified CEA as the EA-related resource which has influenced the most people's career plans (in addition to 80k itself), with 64% citing the EA Forum as influential and 44% citing EAG.

This selection of impact stories illustrates some of the ways we've helped people increase their impact by providing high-quality discussion spaces to consider their ideas, values and options for and about maki...
Nov 20, 2023 • 2min

EA - Open Philanthropy's newest focus area: Global Public Health Policy by JamesSnowden

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Open Philanthropy's newest focus area: Global Public Health Policy, published by JamesSnowden on November 20, 2023 on The Effective Altruism Forum.

We're pleased to announce that we've added a new cause area to our Global Health and Wellbeing portfolio: Global Public Health Policy. The program will be overseen by Santosh Harish, Chris Smith, and James Snowden. Santosh will lead the majority of grantmaking for the program.

We believe that some of the most important global health problems can be addressed cost-effectively by working with governments to improve policy. Policies like air quality regulations, tobacco and alcohol taxes, and the elimination of leaded gasoline have saved and improved millions of lives. These policies typically improve public health by addressing risk factors to alleviate the burden of non-communicable disease, which comprises a growing share of the health burden but receives relatively few resources. Policy interventions affect entire populations and are often cost-effective for governments to implement. We think philanthropy can have an outsized impact by helping governments design, implement, and enforce more effective public health policies.

We've already made some grants for related work:

- Grants in our South Asian air quality program (which is now part of our Global Public Health Policy program)
- Several grants aimed at reducing lead exposure and excessive alcohol consumption
- Funding for the Centre for Pesticide Suicide Prevention, to support work aimed at reducing deaths from the deliberate ingestion of pesticides

The chart below shows how little funding goes to address our current global public health policy focus areas relative to their estimated burden. [Chart: funding vs. estimated burden for current focus areas. Sources: Institute for Health Metrics and Evaluation; Mew et al. 2017; Open Philanthropy estimates]

These four topics are our current focus, but in the future we may explore other large health burdens addressable through public health policy, such as tobacco, asbestos, and exposure to other pollutants. We believe our grants to date have already resulted in meaningful impact, and we're very excited for the potential of this new area. For more details, see the area page. And if you'd like to get in touch with us for any reason, please comment here or email info@openphilanthropy.org.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 20, 2023 • 3min

LW - Agent Boundaries Aren't Markov Blankets. [no longer endorsed] by abramdemski

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Agent Boundaries Aren't Markov Blankets. [no longer endorsed], published by abramdemski on November 20, 2023 on LessWrong.

Edit: no longer endorsed; see John's comment.

Friston has famously invoked the idea of Markov Blankets for representing agent boundaries, in arguments related to the Free Energy Principle / Active Inference. The Emperor's New Markov Blankets by Jelle Bruineberg competently critiques the way Friston tries to use Markov blankets. But some other unrelated theories also try to apply Markov blankets to represent agent boundaries. There is a simple reason why such approaches are doomed. This argument is due to Sam Eisenstat.

Consider the data-type of a Markov blanket. You start with a probabilistic graphical model (usually, a causal DAG), which represents the world. A "Markov blanket" is a set of nodes in this graph, which probabilistically insulates one part of the graph (which we might call the part "inside" the blanket) from another part ("outside" the blanket).[1] ("Probabilistically insulates" means that the inside and outside are conditionally independent, given the Markov blanket.)

So the obvious problem with this picture of an agent boundary is that it only works if the agent takes a deterministic path through space-time. We can easily draw a Markov blanket around an "agent" who just sits still, or who moves with a predictable direction and speed. But if an agent's direction and speed are ever sensitive to external stimuli (which is a property common to almost everything we might want to call an 'agent'!), we cannot draw a Markov blanket such that (a) only the agent is inside, and (b) everything inside is the agent.

It would be a mathematical error to say "you don't know where to draw the Markov blanket, because you don't know which way the agent chooses to go" - a Markov blanket represents a probabilistic fact about the model without any knowledge you possess about values of specific variables, so it doesn't matter if you actually do know which way the agent chooses to go.[2]

The only way to get around this (while still using Markov blankets) would be to construct your probabilistic graphical model so that one specific node represents each observer-moment of the agent, no matter where the agent physically goes.[3] In other words, start with a high-level model of reality which already contains things like agents, rather than a low-level purely physical model of reality. But then you don't need Markov blankets to help you point out the agents. You've already got something which amounts to a node labeled "you".

I don't think it is impossible to specify a mathematical model of agent boundaries which does what you want here, but Markov blankets ain't it.

[1] Although it's arbitrary which part we call inside vs outside.

[2] Drawing Markov blankets wouldn't even make sense in a model that's been updated with complete info about the world's state; if you know the values of the variables, then everything is trivially probabilistically independent of everything else anyway, since known information won't change your mind about known information. So any subset would be a Markov blanket.

[3] Or you could have a more detailed model, such as one node per neuron; that would also work fine.
But the problem remains the same; you can only draw such a model if you already understand your agent as a coherent object, in which case you don't need Markov blankets to help you draw a boundary around it. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
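For readers who want the data type spelled out: in a Bayesian network, the Markov blanket of a node is its parents, its children, and its children's other parents, and it is computed from the graph structure alone - which is exactly why it cannot bend to follow an agent whose position depends on the values variables happen to take. A minimal sketch follows; the toy DAG is illustrative, not from the post.

```python
# Sketch (illustrative, not from the post): the standard Markov blanket of a
# node in a DAG: its parents, its children, and its children's other parents.
# Conditioning on this set makes the node independent of the rest of the graph.

def markov_blanket(node: str, parents: dict[str, set[str]]) -> set[str]:
    """Markov blanket of `node` in a DAG given as a child -> parents mapping."""
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = set().union(*(parents[c] for c in children))
    blanket = parents.get(node, set()) | children | co_parents
    blanket.discard(node)  # the node itself is not part of its blanket
    return blanket

# Toy world: a stimulus influences the agent, and the agent's action
# also depends on the terrain.
dag = {
    "agent": {"stimulus"},
    "action": {"agent", "terrain"},
}
print(markov_blanket("agent", dag))  # {'stimulus', 'action', 'terrain'}
```

Note that the blanket is a fixed node set determined by the edges alone, with no reference to what values the variables take - which is the rigidity the post's argument turns on.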
