The Nonlinear Library

The Nonlinear Fund
Mar 9, 2024 • 33min

EA - Talking to Congress: Can constituents contacting their legislator influence policy? by Tristan Williams

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Talking to Congress: Can constituents contacting their legislator influence policy?, published by Tristan Williams on March 9, 2024 on The Effective Altruism Forum. Summary and Key Takeaways The basic case: Contacting your legislator is low-hanging fruit: it can be done in a relatively short amount of time, is relatively easy, and could have an impact. Each communication is not guaranteed to be influential, but when done right it has the potential to encourage a legislator to take actions which could be quite influential. Why do we believe that constituent communication is useful? At the state level, we've seen two studies that randomly assigned some legislators to receive communication[1], finding a 12% and a 20% increased chance of the legislator voting in the desired direction. At the federal level, one survey of staffers[2] indicated that for the majority of offices (70%), fewer than 50 personalized messages were enough to get an undecided member to take the requested action. Anecdotal accounts, both in the literature and in our conversations, indicated that, despite disagreement on how much impact communication has, the possibility certainly exists for it to affect what a legislator thinks. What is the best way to conduct one of these campaigns? Some factors are important to be aware of. Communication is best sent on issues legislators are undecided about, and to legislators with smaller constituencies. See How to Best Execute the Communication for more. Personalized communication goes the furthest. Many advocacy groups use form email templates where you merely add your name to a pre-generated message and hit send. These might be net negative; staffers have made clear time and again that personal messages, written by the constituent, are best. In-person meetings are best, but letters, emails, and calls are likely nearly as effective, while social media posts and messages have a more uncertain effect. The way you frame your concern matters. You'll have to decide whether you want to make a very specific ask to support a given bill, or to make a more general case for concern with an issue, perhaps telling a personal story to support your position. The best messages will make use of both frames. Know your legislators. Different legislators will have their own agendas and issues of focus[3], so being familiar with your legislator's work is important. Introduction This is part of a project for AI Safety Camp undertaken to answer one chief question: can constituents contacting their legislator influence policy?[4] In answering this question, we're primarily speaking to two groups. First, to organizers within the broader policy/advocacy space trying to decide how best to work with Congress and whether facilitating constituent communication could be a worthwhile part of that. Second, to individuals who are concerned about current risks and would like to take a further step (however small) in reducing them. We hope to provide below a synthesis of our findings, so that each of these groups can make a more informed decision as to whether it's worth their time. All in all, the below is the result of 10 discussions with current and former congressional staff, ~50 hours of collective research, and conversations with many organizations in the AI policy space. 
From our research and conversations with staffers, we've found little that directly measures the effectiveness of the method, but general agreement that it's likely impactful in certain circumstances, and much guidance on how it can best be executed. From our conversations with those in AI policy, we've found that facilitating constituent communication isn't currently a focus for groups in the AI Safety ecosystem, but that the majority of those we've talked to are neutral to positive on bringing this in...
Mar 9, 2024 • 3min

EA - This is why people are reluctant to write on the EA Forum by Stan Pinsent

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: This is why people are reluctant to write on the EA Forum, published by Stan Pinsent on March 9, 2024 on The Effective Altruism Forum. Four days ago I posted a question, Why are you reluctant to write on the EA Forum?, with a link to a Google Form. I received 20 responses. This post is in four parts: Summary of reasons people are reluctant to write on the EA Forum Suggestions for making it easier Positive feedback for the EA Forum Responses in full Summary of reasons people are reluctant to write on the EA Forum The form received 20 responses over four days. All replies included a reason for being reluctant or unable to write on the EA Forum. Only a minority of replies included a concrete suggestion for improvement. I have attempted to tally how many times each reason appeared across the 20 responses[2]. Suggestions for making it easier to contribute I give all concrete suggestions for helping people be less reluctant to contribute to the forum, in the chronological order in which they were received: More discourse on increasing participation: "more posts like these which are aimed at trying to get more people contributing" Give everyone equal Karma power: "If the amount of upvotes and downvotes you got didn't influence your voting power (and was made less prominent), we would have less groupthink and (pertaining to your question) I would be reading and writing on the EA-forum often and happily, instead of seldom and begrudgingly." Provide extra incentives for posting: "Perhaps small cash or other incentives given each month for best posts in certain categories, or do competitions, or some such measure? That added boost of incentive and the chance that the hours spent on a post may be reimbursed somehow." "Discussions that are less tied to specific identities and less time-consuming to process - more Polis like discussions that allow participants to maintain anonymity, while also being able to understand the shape of arguments." Lower the stakes for commenting: "I'm not sure if comment section can include "I've read x% of the article before this comment"?" Positive feedback for the EA Forum The question invited criticism of the Forum, but it did nevertheless garner some positive feedback. For an internet forum it's pretty good. But it's still an internet forum. Not many good discussions happen on the internet. Forum team do a great job :) Responses in full All responses can be found here. You can judge for yourself here whether I correctly classified the responses. I considered lumping "too time-consuming" and "lack of time" together, but decided against this because the former seems to imply "bar is very high", while the latter is merely a statement on how busy the respondent's life is. The form collected two responses: Why are you reluctant to write on the EA Forum? What would make it easier? Is there anything else you would like to share? Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 9, 2024 • 58sec

EA - NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute by Phib

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NIST staffers revolt against expected appointment of 'effective altruist' AI researcher to US AI Safety Institute, published by Phib on March 9, 2024 on The Effective Altruism Forum. "The appointment of Christiano, which was said to come directly from Secretary of Commerce Gina Raimondo (NIST is an agency under the US Department of Commerce), has sparked outrage among NIST employees who fear that Christiano's association with EA and longtermism could compromise the institute's objectivity and integrity." "The AISI was established in November 2023 to "support the responsibilities assigned to the Department of Commerce" under the AI Executive Order. Earlier today, US Senate Majority Leader Chuck Schumer (D-NY) announced that the NIST will receive up to $10 million to establish the US AI Safety Institute." Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 9, 2024 • 24min

LW - Closeness To the Issue (Part 5 of "The Sense Of Physical Necessity") by LoganStrohl

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Closeness To the Issue (Part 5 of "The Sense Of Physical Necessity"), published by LoganStrohl on March 9, 2024 on LessWrong. This is the fifth post in a sequence that demonstrates a complete naturalist study, specifically a study of query hugging (sort of), as described in The Nuts and Bolts of Naturalism. This one continues my demo of phases one and two: Locating Fulcrum Experiences and Getting Your Eyes On. For context on this sequence, see the intro post. Reminder that this is meant as reference material. Before throwing in the towel, I thought I might as well try talking things through with ChatGPT. Why? I think it was just something I was doing a lot at the time. "Not sure how to proceed? Throw GPT4 at it for a while and see what happens." I didn't expect anything in particular from the activity. As it turns out, I can probably count on one hand the number of work-related discussions that have been as useful to me as this one - which probably says as much about my social deficits as it does about GPT4's abilities. It talked me around, and I un-gave-up. The most important part was using GPT as a Focusing partner. I did a (mostly) canonical version of Gendlin's Focusing, and came to understand that I was struggling with the balancing act between authenticity and legibility. I decided to almost completely abandon legibility. The relevant section of chat log: (Note: Often, ChatGPT suddenly turns into a mini encyclopedia entry. I fondly imagine an excited nerdy 12-year-old trying to get full marks on an essay at these moments. Despite considering them extraneous to me personally, since it's giving me background on terms I have chosen myself, I've left these parts in as potentially valuable context for onlookers, but I've italicized them so you can skip them if you'd prefer.) Logan I'm going to try a little bit of "focusing" a la Eugene Gendlin, to get in touch with whatever it is I want to work out. Please be a focusing partner. I feel a forward pressure, and another pressure holding that back. It's primarily in my chest and solar plexus. The forward pressure reminds me of stamping feet, raring to go, frustration, almost desperation. It has a couple of layers, I think, as though something is trying to hide. The "desperation" is from a lower layer than the "raring to go". I get a felt shift when I think "I feel afraid". I feel afraid that I am done, it is over, I can no longer do or make or think. I feel afraid that I will never make progress again. The forward pressure in my chest responded a lot to that last sentence. There was a release, and I exhaled tension. ChatGPT Thank you for sharing your experience in real-time. It sounds like a potent mix of emotions and sensations you're navigating through. The concept of "focusing" as developed by Eugene Gendlin involves tapping into our bodily sense of particular issues or feelings, the "felt sense", and trying to articulate or resonate with that feeling until there's a shift or clarification. It's an internal process, but sharing it, especially with a partner, can provide reflection and sometimes help articulate or pinpoint what's happening within. The forward pressure you describe, with its layers of "raring to go" and "desperation," seems to be a complex feeling, perhaps with multiple facets or sources. 
And your identification of the underlying fear - that you may not progress or contribute again - seems to be a significant realization. It's commendable that you're confronting these feelings head-on and seeking clarity. How are you feeling now after the release you mentioned? Are there more sensations or feelings that you want to explore further or share? Logan Yes, I want to feel into the "holding back" pressure. The "holding back" feels more like it's on the outside of my body. Like a wall, or a hand on my chest. I...
Mar 9, 2024 • 2min

EA - CEA is hiring a Head of Communications by Ben West

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: CEA is hiring a Head of Communications, published by Ben West on March 9, 2024 on The Effective Altruism Forum. Applications will be evaluated on a rolling basis. All applications must be submitted by Friday, March 22nd, 2024, 11:59 pm GMT. CEA is hiring a head of communications. While a successful candidate would ideally have a strong communications background, we're open to applications from generalists with strong foundational skills who can build a team with additional expertise. This is a senior leadership position reporting to the CEO. The remit of the role is broad, including developing and executing communications strategies for both CEA and effective altruism more broadly. We anticipate that this individual will become the foremost leader for strategic communications related to EA and will have a significant impact in shaping the field's strategy. This will include collaborating with senior leaders at other organizations doing EA-related work. Both EA and CEA are at important inflection points. Public awareness of EA has grown significantly over the past 2 years, during which time EA has had both major success and significant controversies. To match this growth in awareness, we're looking to increase our capacity to inform public narratives about and contribute to a more accurate understanding of EA ideas and impact. The stakes are high: Success could result in significantly higher engagement with EA ideas, leading to career changes, donations, new projects, and increased traction in a range of fields. Failure could result in long-lasting damage to the brand, the ideas, and the people who have historically associated with them. We're looking for a leader who can design and execute a communications strategy for EA. This person will be a strategic partner with and member of CEA's leadership team to help us shape both the substance and messaging of EA. You'll be able to build from the foundation set by our existing team, building on the work of our outgoing head of communications to further grow and expand the team, which currently includes one full-time staff member and support from an external agency. CEA has a new CEO, who is in the process of developing a new organizational strategy and views strengthening our communications function as a key priority. You should expect significant organizational support - e.g. attention from senior leadership and the allocation of necessary financial resources. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 8, 2024 • 12min

EA - Australians are concerned about AI risks and expect strong government action by Alexander Saeri

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Australians are concerned about AI risks and expect strong government action, published by Alexander Saeri on March 8, 2024 on The Effective Altruism Forum. Key insights A representative online Survey Assessing Risks from AI (SARA) of 1,141 Australians in Jan-Feb 2024 investigated public perceptions of AI risks and support for AI governance actions. Australians are most concerned about AI risks where AI acts unsafely (e.g., acting in conflict with human values, failure of critical infrastructure), is misused (e.g., cyber attacks, biological weapons), or displaces the jobs of humans; they are least concerned about AI-assisted surveillance, or bias and discrimination in AI decision-making. Australians judge "preventing dangerous and catastrophic outcomes from AI" the #1 priority for the Australian Government in AI; 9 in 10 Australians support creating a new regulatory body for AI. To meet public expectations, the Australian Government must urgently increase its capacity to govern increasingly capable AI and address diverse risks from AI, including catastrophic risks. Findings Australians are concerned about diverse risks from AI When asked about a diverse set of 14 possible negative outcomes from AI, Australians were most concerned about AI systems acting in ways that are not safe, not trustworthy, and not aligned with human values. Other high-priority risks include AI replacing human jobs, enabling cyber attacks, operating lethal autonomous weapons, and malfunctioning within critical infrastructure. Australians are skeptical of the promise of artificial intelligence: 4 in 10 support the development of AI, 3 in 10 oppose it, and opinions are divided about whether AI will be a net good (4 in 10) or harm (4 in 10). Australians support regulatory and non-regulatory action to address risks from AI When asked to choose the top 3 AI priorities for the Australian Government, the #1 selected priority was preventing dangerous and catastrophic outcomes from AI. Other actions prioritised by at least 1 in 4 Australians included (1) requiring audits of AI models to make sure they are safe before being released, (2) making sure that AI companies are liable for harms, (3) preventing AI from causing human extinction, (4) reducing job losses from AI, and (5) making sure that people know when content is produced using AI. Almost all (9 in 10) Australians think that AI should be regulated by a national government body, similar to how the Therapeutic Goods Administration acts as a national regulator for drugs and medical devices. 8 in 10 Australians think that Australia should lead the international development and governance of AI. Australians take catastrophic and extinction risks from AI seriously Australians consider the prevention of dangerous and catastrophic outcomes from AI the #1 priority for the Australian Government. In addition, a clear majority (8 in 10) of Australians agree with AI experts, technology leaders, and world political leaders that preventing the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war[1]. Artificial Intelligence was judged as the third most likely cause of human extinction, after nuclear war and climate change. AI was judged as more likely than a pandemic or an asteroid impact. 
About 1 in 3 Australians think it's at least 'moderately likely' AI will cause human extinction in the next 50 years. Implications and actions supported by the research Findings from SARA show that Australians are concerned about diverse risks from AI, especially catastrophic risks, and expect the Australian Government to address these through strong governance action. Australians' ambivalence about AI and expectation of strong governance action to address risks is a consistent theme of public opinion rese...
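A note on the precision of these headline figures, assuming simple random sampling (which an online panel survey only approximates): the 95% margin of error on a reported proportion p from n respondents is about 1.96 * sqrt(p * (1 - p) / n). A minimal worked example at the survey's stated sample size:

MoE = 1.96 * sqrt(0.5 * (1 - 0.5) / 1141)
    = 1.96 * sqrt(0.000219)
    ≈ 0.029

So proportions like "1 in 3" or "9 in 10" above carry an uncertainty of roughly plus or minus 3 percentage points under these assumptions.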
Mar 8, 2024 • 5min

LW - Lies and disrespect from the EA Infrastructure Fund by Igor Ivanov

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Lies and disrespect from the EA Infrastructure Fund, published by Igor Ivanov on March 8, 2024 on LessWrong. TLDR I applied for funding from the EA Infrastructure Fund. They promised to give me their decision in a month, but they only tentatively approved my application after 3 months. A month later they explained that they hadn't actually approved my application, and that I still had to wait for their final decision. It's been almost 5 months since I applied, and I'm still waiting. During this time, my manager failed to meet 4 of his own deadlines and didn't answer me for many weeks, while expressing little accountability or empathy and failing to communicate properly. Believing these false promises, I paused new projects, sustained financial and reputational damage, and saw my mental health suffer. I told my manager about this, but he didn't express empathy, and he again failed to meet his own deadline. During this time I sent 35 emails to 3 fund managers but was unable to resolve the issues. The story I'm a psychotherapist, and I focus on helping EAs. At some point I realized that some of them could benefit from therapy but can't afford it, so I decided to apply for funding from the EA Infrastructure Fund to offer my services pro bono. It's a small and straightforward grant. The timeline of my application October 20 I applied for funding, and outlined that the hard deadline for my application was November 24. In the automatic response they told me that they expected to give me a decision before this deadline. As psychotherapists can't suddenly stop working with their clients, I paused taking new clients in advance, in case I got the funding. November 24 Contrary to the promise, I didn't get any response. I sent emails asking for an update. December 5 My manager Caleb Parikh apologised for the delay and promised to give me an update within the following few days, but he broke his promise and didn't send an update in time. December 18 Caleb sent another email, promising to tell me the decision within a week and apologising for the delay. He broke his promise again and didn't send me anything that week. I sent him several emails asking for an update. January 30 Caleb sent me an email stating that the Fund was interested in making a grant on slightly different terms than I had initially applied for. He also noted that this needed to be reviewed by their legal team. He didn't mention his failed promises, or what I should do next. I interpreted it as "Yes", and was relieved. I stopped taking new clients completely, and made promises to other people based on this information. I sent Caleb several emails asking what I should do next, but he didn't answer any of them. I reached out to another manager, asking for help contacting him, and Caleb answered me only when this manager asked him to do so. At this point my financial situation became worse, since I had expected to rely on the fund's money after they said they were interested in funding me. I started feeling frustrated, and I started thinking that I might not be able to fulfill the promises I had made to other people in expectation of money for the project. February 23 Caleb wrote that he still had to consult with lawyers to understand whether they could fund my project or not. In this email he also promised to give me an update within 2 weeks. 
I was confused: why had he initially said that they were interested in funding me, while making me wait more than a month for the final decision? After that I told him about the financial and mental health issues caused by this situation. He answered, but didn't acknowledge his shortcomings or express empathy. I also tried reaching out to other fund managers for help, but they didn't help me. March 8 Caleb broke his promise once again and didn't send me an update on my application. To solve this situatio...
Mar 8, 2024 • 57sec

EA - How do you address temporary slump in work motivation? by Vaipan

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How do you address temporary slump in work motivation?, published by Vaipan on March 8, 2024 on The Effective Altruism Forum. I suppose this is familiar: you have some defined tasks for the week, they are inherently interesting (and reasonably impactful!), you have the right level of competence to achieve them (although these tasks are somewhat learning-by-doing, since it's a start-up kind of task), and you have a good working environment (silence and food, for me). And yet you feel that slump: you have opened the document and you have booked your Focusmate, and it should turn into that deep flow state. But it doesn't. You feel bland, neutral, and have nothing to report to your Focusmate partner because you haven't been able to write a damn word. But it's not a permanent thing--it's an 'it's been a few days' thing. Any resources? Thanks! Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 8, 2024 • 4min

EA - The Insect Institute is Hiring by Dustin Crummett

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Insect Institute is Hiring, published by Dustin Crummett on March 8, 2024 on The Effective Altruism Forum. The Insect Institute is hiring for a vital, exciting, foundational role: a full-time Program Coordinator or Program Officer (depending on the qualifications of the successful candidate). This is a high-responsibility position where you will have the opportunity to drive real impact for our mission. As our second full-time employee, you will be tasked with helping to carry out the Insect Institute's interventions, including through engagement with policymakers, regulators, NGOs, and potentially media. Suitably qualified candidates may also be asked to contribute to research and report writing. As one of only a few people worldwide working in an extremely important cause area, you will have the potential for enormous counterfactual impact. Salary: $73,630-$87,694 USD pre-tax Location: Fully remote Application Deadline: April 1st, end of day in the EST time zone The full job description and application is available here. If you know someone else who might be a good fit, a referral form is available here. We offer a $500 bonus for referring the successful candidate. Questions about the role can be directed to info@insectinstitute.org. More Information: Key Responsibilities Implementing the Insect Institute's interventions. This might include, but not necessarily be limited to, activities like: Working with legislators on, e.g., environmental issues related to the adoption of insects as food and feed Outreach to regulators in US executive agencies or UK ministries on, e.g., food safety issues related to insect farming Outreach to and collaboration on projects with other NGOs, such as environmental, public health, or animal welfare organizations Drafting press releases and conducting outreach to journalists Especially for more senior levels, taking initiative to, e.g., identify ways to improve on current interventions, or to identify opportunities for new interventions If hired at a more senior level, potentially managing others, especially as the Insect Institute expands in the future For candidates with suitable skills, potentially some degree of research and report writing Requirements: Strong written and oral communication skills Ability to credibly and persuasively represent the Insect Institute's positions to other stakeholders We do not require starting familiarity with relevant academic domains (e.g., environmental science, public health, animal welfare, entomology) or with the state of the insects as food and feed industry. However, the candidate should possess the ability to gain familiarity as needed, and to proactively stay abreast of developments Adaptability, flexibility, and willingness to proactively do what is necessary to give the Insect Institute's projects the greatest chance of success Preferred: If you do not meet all of the below criteria, please still consider applying. Please also take an expansive interpretation of the below criteria (e.g., if you are not sure whether your work experience is relevant, err on the side of assuming it might be). Relevant work experience (such as, e.g., work in policy, advocacy, or alternative proteins). 
Relevant backgrounds might include, but are not limited to: Outreach to legislators or relevant government agencies (such as the USDA or FDA in the US, or Defra or the FSA in the UK), especially if on relevant issues (environmental sustainability, food safety, etc.) Work within such government agencies, especially if on relevant issues Work in an NGO, such as one focused on the environment, alternative proteins, food safety, or animal welfare, doing work similar to that mentioned in the "key responsibilities" above Experience managing others, especially in working on relevant issues Expertise in a relevant...
Mar 8, 2024 • 8min

EA - Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research by David Kristoffersson

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Announcing Convergence Analysis: An Institute for AI Scenario & Governance Research, published by David Kristoffersson on March 8, 2024 on The Effective Altruism Forum. Cross-posted on LessWrong. Executive Summary We're excited to introduce Convergence Analysis - a research non-profit & think-tank with the mission of designing a safe and flourishing future for humanity in a world with transformative AI. In the past year, we've brought together an interdisciplinary team of 10 academics and professionals, spanning expertise in technical AI alignment, ethics, AI governance, hardware, computer science, philosophy, and mathematics. Together, we're launching three initiatives focused on conducting Scenario Research, Governance Recommendations Research, and AI Awareness. Our programs embody three key elements of our Theory of Change and reflect what we see as essential components of reducing AI risk: (1) understanding the problem, (2) describing concretely what people can do, and (3) disseminating information widely and precisely. In some more detail, they do the following: Scenario Research: Explore and define potential AI scenarios - the landscape of relevant pathways that the future of AI development might take. Governance Recommendations Research: Provide concrete, detailed analyses for specific AI governance proposals that lack comprehensive research. AI Awareness: Inform the general public and policymakers by disseminating important research via books, podcasts, and more. In the next three months, you can expect to see the following outputs: Convergence's Theory of Change: A report detailing an outcome-based, high-level strategic plan on how to mitigate existential risk from TAI. Research Agendas for our Scenario Research and Governance Recommendations initiatives. 2024 State of the AI Regulatory Landscape: A review summarizing governmental regulations for AI safety in 2024. Evaluating A US AI Chip Registration Policy: A research paper evaluating the global context, implementation, feasibility, and negative externalities of a potential U.S. AI chip registry. A series of articles on AI scenarios highlighting results from our ongoing research. All Thinks Considered: A podcast series exploring the topics of critical thinking, fostering open dialogue, and interviewing AI thought leaders. Learn more on our new website. History Convergence originally emerged as a research collaboration in existential risk strategy between David Kristoffersson and Justin Shovelain from 2017 to 2021, engaging a diverse group of collaborators. Throughout this period, they worked steadily on building a body of foundational research on reducing existential risk, publishing some findings on the EA Forum and LessWrong, and advising individuals and groups such as Lionheart Ventures. Through 2021 to 2023, we laid the foundation for a research institution and built a larger team. We are now launching Convergence as a strong team of 10 researchers and professionals with a revamped research and impact vision. Timelines to advanced AI have shortened, and our society urgently needs clarity on the paths ahead and on the right courses of action to take. Programs Scenario Research There are large uncertainties about the future of AI and its impacts on society. 
Potential scenarios range from flourishing post-work futures to existential catastrophes such as the total collapse of societal structures. Currently, there's a serious dearth of research to understand these scenarios - their likelihood, causes, and societal outcomes. Scenario planning is an analytical tool used by policymakers, strategists, and academics to explore and prepare for the landscape of possible outcomes in domains defined by uncertainty. Such research typically defines specific parameters that are likely to cause certain scenarios, and id...
