The Nonlinear Library

The Nonlinear Fund
Nov 3, 2023 • 20min

LW - 8 examples informing my pessimism on uploading without reverse engineering by Steven Byrnes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: 8 examples informing my pessimism on uploading without reverse engineering, published by Steven Byrnes on November 3, 2023 on LessWrong. (If you've already read everything I've written, you'll find this post pretty redundant. See especially my old posts Building brain-inspired AGI is infinitely easier than understanding the brain, and Randal Koene on brain understanding before whole brain emulation, and Connectomics seems great from an AI x-risk perspective. But I'm writing it anyway mainly in response to this post from yesterday.)

1. Background / Context

1.1 What does uploading (a.k.a. Whole Brain Emulation (WBE)) look like with and without reverse-engineering?

There's a view that I seem to associate with Davidad and Robin Hanson, along with a couple other people I've talked to privately. (But I could be misunderstanding them and don't want to put words in their mouths.) The view says: if we want to do WBE, we do not need to reverse-engineer the brain.

For an example of what "reverse-engineering the brain" looks like, I can speak from abundant experience: I often spend all day puzzling over random questions like: Why are there oxytocin receptors in certain mouse auditory cortex neurons? Like, presumably Evolution put those receptors there for a reason - I don't think that's the kind of thing that appears randomly, or as an incidental side-effect of something else. (Although that's always a hypothesis worth considering!) Well, what is that reason? I.e., what are those receptors doing to help the mouse survive, thrive, etc., and how are they doing it? …And once I have a working hypothesis about that question, I can move on to hundreds or even thousands more "why and how" questions of that sort. I seem to find the activity of answering these questions much more straightforward and tractable (and fun!) than do most other people - you can decide for yourself whether I'm unusually good at it, or deluded.

For an example of what uploading without reverse-engineering would look like, I think it's the idea that we can figure out the input-output relation of each neuron, and we can measure how neurons are connected to each other, and then at the end of the day we can simulate a human brain doing whatever human brains do.

Here's Robin Hanson arguing for the non-reverse-engineering perspective in Age of Em: The brain does not just happen to transform input signals into state changes and output signals; this transformation is the primary function of the brain, both to us and to the evolutionary processes that designed brains. The brain is designed to make this signal processing robust and efficient. Because of this, we expect the physical variables (technically, "degrees of freedom") within the brain that encode signals and signal-relevant states, which transform these signals and states, and which transmit them elsewhere, to be overall rather physically isolated and disconnected from the other far more numerous unrelated physical degrees of freedom and processes in the brain. That is, changes in other aspects of the brain only rarely influence key brain parts that encode mental states and signals. We have seen this disconnection in ears and eyes, and it has allowed us to create useful artificial ears and eyes, which allow the once-deaf to hear and the once-blind to see. We expect the same to apply to artificial brains more generally.
In addition, it appears that most brain signals are of the form of neuron spikes, which are especially identifiable and disconnected from other physical variables. If technical and intellectual progress continues as it has for the last few centuries, then within a millennium at the most we will understand in great detail how individual brain cells encode, transform, and transmit signals. This understanding should allow us to directly read rele...
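To make the "no reverse-engineering" picture above concrete, here is a minimal toy sketch of what "measure each neuron's input-output rule plus the connectivity, then just run it forward" could look like computationally. Nothing here comes from the post or from Hanson; the neuron model, weights, and constants are illustrative assumptions, and a real emulation would need far richer per-neuron rules and actual measured data.

```python
# Toy sketch (not from the post): "uploading without reverse-engineering" as
# (1) a measured input-output rule per neuron and (2) a measured connectivity
# matrix, simulated forward with no model of what any neuron is *for*.
# All numbers and names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 1000                                    # neurons in the toy "scan"
W = rng.normal(0, 1 / np.sqrt(N), (N, N))   # stand-in for measured synaptic weights
threshold = 1.0                             # per-neuron firing threshold
leak = 0.9                                  # membrane leak per time step

def step(v, spikes):
    """One time step of a leaky integrate-and-fire rule applied to every neuron."""
    v = leak * v + W @ spikes               # integrate weighted input spikes
    new_spikes = (v >= threshold).astype(float)
    v = np.where(new_spikes > 0, 0.0, v)    # reset neurons that fired
    return v, new_spikes

v = np.zeros(N)
spikes = (rng.random(N) < 0.05).astype(float)  # arbitrary initial activity
for t in range(100):
    v, spikes = step(v, spikes)
print("mean firing rate at t=100:", spikes.mean())
```

As I read the excerpt, the disagreement is less about whether a simulation of this general shape can be run, and more about whether a per-neuron input-output rule at this level of description captures everything that matters (the oxytocin-receptor example above), or whether getting the rule right in practice requires understanding what the components are for.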
Nov 3, 2023 • 33min

LW - Integrity in AI Governance and Advocacy by habryka

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Integrity in AI Governance and Advocacy, published by habryka on November 3, 2023 on LessWrong. Ok, so we both had some feelings about the recent Conjecture post on "lots of people in AI Alignment are lying", and the associated marketing campaign and stuff. I would appreciate some context in which I can think through that, and also to share info we have in the space that might help us figure out what's going on. I expect this will pretty quickly cause us to end up on some broader questions about how to do advocacy, how much the current social network around AI Alignment should coordinate as a group, how to balance advocacy with research, etc.

Feelings about Conjecture post: Lots of good points about people not stating their full beliefs messing with the epistemic environment and making it costlier for others to be honest. The lying and cowardice frames feel off to me. I personally used to have a very similar rant to Conjecture. Since moving to DC, I'm more sympathetic to governance people. We could try to tease out why. The post exemplifies a long-term gripe I have with Conjecture's approach to discourse & advocacy, which I've found pretty lacking in cooperativeness and openness. (Note: I worked there for ~half a year.)

Questions on my mind: How open should people motivated by existential risk be? (My shoulder model of several people says "take a portfolio approach!" - OK, then what allocation?) How advocacy-y should people be? I want researchers to not have to tweet their beliefs 24/7 so they can actually get work done. How do you think about this, Oli? How sympathetic to be about governance people not being open about key motivations and affiliations?

I personally used to have a very similar rant to Conjecture. I'm now more sympathetic to governance people. We could try to tease out why. This direction seems most interesting to me! My current feelings in the space are that I am quite sympathetic to some comms-concerns that people in government have and quite unsympathetic to some other stuff, and I would also like to clarify for myself where the lines here are. Curious whether you have any key set of observations or experiences you had that made you more sympathetic.

Observations: I've heard secondhand of at least one instance where a person brought up x risk, then their Congressional office took them less seriously. Other staffers have told me talking about x risk wouldn't play well (without citing specific evidence, but I take their opinions seriously). (This didn't update me a ton though. My model already included "most people will think this is weird and take you less seriously". The question is, "Do you make it likelier for people to do good things later, all things considered, by improving their beliefs, shifting the Overton window, or convincing 1/10 people, etc.?") I've also personally found it tricky to talk about takeover & existential risks, just because these ideas take a long time to explain, and there are many inferential steps between there and the policies I'm recommending. So, I'm often tempted to mention my x risk motivations only briefly, then focus on whatever's inferentially closest and still true. (Classically, this would be "misuse risks, especially from foreign adversaries and terrorists" and "bioweapon and cyberoffensive capabilities coming in the next few years".)
Separate point which we might want to discuss later: A thing I'm confused about is: Should I talk about inferentially close things that make them likeliest to embrace the policies I'm putting on their desk? Or, should I just bite the bullet of being confusing and start many meetings with "I'm deeply concerned about humanity going extinct in the next decade because of advancing AI which might try to take over the world. It's a lot to explain but the scientists are on my side. Please ...
Nov 3, 2023 • 1min

EA - Still no strong evidence that LLMs increase bioterrorism risk by freedomandutility

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Still no strong evidence that LLMs increase bioterrorism risk, published by freedomandutility on November 3, 2023 on The Effective Altruism Forum. https://www.lesswrong.com/posts/ztXsmnSdrejpfmvn7/propaganda-or-science-a-look-at-open-source-ai-and Linkpost from LessWrong. The claims from the piece which I most agree with are: Academic research does not show strong evidence that existing LLMs increase bioterrorism risk. Policy papers are making overly confident claims about LLMs and bioterrorism risk, and are citing papers that do not support claims of this confidence. I'd like to see better-designed experiments aimed at generating high-quality evidence to work out whether or not future, frontier models increase bioterrorism risks, as part of evals conducted by groups like the UK and US AI Safety Institute. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 3, 2023 • 20min

AF - Thoughts on open source AI by Sam Marks

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Thoughts on open source AI, published by Sam Marks on November 3, 2023 on The AI Alignment Forum.

Epistemic status: I only ~50% endorse this, which is below my typical bar for posting something. I'm more bullish on "these are arguments which should be in the water supply and discussed" than "these arguments are actually correct." I'm not an expert in this, I've only thought about it for ~15 hours, and I didn't run this post by any relevant experts before posting. Thanks to Max Nadeau and Eric Neyman for helpful discussion.

Right now there's a significant amount of public debate about open source AI. People concerned about AI safety generally argue that open sourcing powerful AI systems is too dangerous to be allowed; the classic example here is "You shouldn't be allowed to open source an AI system which can produce step-by-step instructions for engineering novel pathogens." On the other hand, open source proponents argue that open source models haven't yet caused significant harm, and that trying to close access to AI will result in concentration of power in the hands of a few AI labs.

I think many AI safety-concerned folks who haven't thought about this that much tend to vaguely think something like "open sourcing powerful AI systems seems dangerous and should probably be banned." Taken literally, I think this plan is a bit naive: when we're colonizing Mars in 2100 with the help of our aligned superintelligence, will releasing the weights of GPT-5 really be a catastrophic risk? I think a better plan looks something like "You can't open source a system until you've determined and disclosed the sorts of threat models your system will enable, and society has implemented measures to become robust to these threat models. Once any necessary measures have been implemented, you are free to open-source."

I'll go into more detail later, but as an intuition pump imagine that: the best open source model is always 2 years behind the best proprietary model (call it GPT-SoTA) [1]; GPT-SoTA is widely deployed throughout the economy and deployed to monitor for and prevent certain attack vectors, and the best open source model isn't smart enough to cause any significant harm without GPT-SoTA catching it. In this hypothetical world, so long as we can trust GPT-SoTA, we are safe from harms caused by open source models. In other words, so long as the best open source models lag sufficiently behind the best proprietary models and we're smart about how we use our best proprietary models, open sourcing models isn't the thing that kills us.

In the rest of this post I will:
Motivate this plan by analogy to responsible disclosure in cryptography
Go into more detail on this plan
Discuss how this relates to my understanding of the current plan as implied by responsible scaling policies (RSPs)
Discuss some key uncertainties
Give some higher-level thoughts on the discourse surrounding open source AI

An analogy to responsible disclosure in cryptography

[I'm not an expert in this area and this section might get some details wrong. Thanks to Boaz Barak for pointing out this analogy (but all errors are my own). See this footnote [2] for a discussion of alternative analogies you could make to biosecurity disclosure norms, and whether they're more apt to risk from open source AI.]

Suppose you discover a vulnerability in some widely-used cryptographic scheme.
Suppose further that you're a good person who doesn't want anyone to get hacked. What should you do? If you publicly release your exploit, then lots of people will get hacked (by less benevolent hackers who've read your description of the exploit). On the other hand, if white-hat hackers always keep the vulnerabilities they discover secret, then the vulnerabilities will never get patched until a black-hat hacker finds the vulnerability and explo...
Nov 3, 2023 • 23min

EA - Rethink Priorities' Cross-Cause Cost-Effectiveness Model: Introduction and Overview by Derek Shiller

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Rethink Priorities' Cross-Cause Cost-Effectiveness Model: Introduction and Overview, published by Derek Shiller on November 3, 2023 on The Effective Altruism Forum. This post is a part of Rethink Priorities' Worldview Investigations Team's CURVE Sequence: "Causes and Uncertainty: Rethinking Value in Expectation." The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else. This post presents a software tool we're developing to better understand risk and effectiveness.

Executive Summary

The cross-cause cost-effectiveness model (CCM) is a software tool under development by Rethink Priorities to produce cost-effectiveness evaluations in different cause areas. The CCM enables evaluations of interventions in global health and development, animal welfare, and existential risk mitigation. The CCM also includes functionality for evaluating research projects aimed at improving existing interventions or discovering more effective alternatives. The CCM follows a Monte Carlo approach to assessing probabilities. The CCM accepts user-supplied distributions as parameter values. Our primary goal with the CCM is to clarify how parameter choices translate into uncertainty about possible results.

The limitations of the CCM make it an inadequate tool for definitive comparisons. The model is optimized for certain easily quantifiable effective projects and cannot assess many relevant causes. Probability distributions are a questionable way of representing deep uncertainty. The model may not adequately handle possible interdependence between parameters.

Building and using the CCM has confirmed some of our expectations. It has also surprised us in other ways. Given parameter choices that are plausible to us, existential risk mitigation projects dominate others in expected value in the long term, but the results are too high variance to approximate through Monte Carlo simulations without drawing billions of samples. The expected value of existential risk mitigation in the long run is mostly determined by the tail-end possible values for a handful of deeply uncertain parameters. The most promising animal welfare interventions have a much higher expected value than the leading global health and development interventions with a somewhat higher level of uncertainty. Even with relatively straightforward short-term interventions and research projects, much of the expected value of projects results from the unlikely combination of tail-end parameter values.

We plan to host an online walkthrough and Q&A of the model with the Rethink Priorities Worldview Investigations Team on Giving Tuesday, November 28, 2023, at 9 am PT / noon ET / 5 pm BT / 6 pm CET. If you would like to attend this event, please sign up here.

Overview

Rethink Priorities' cross-cause cost-effectiveness model (CCM) is a software tool we are developing for evaluating the relative effectiveness of projects across three general domains: global health and development, animal welfare, and the mitigation of existential risks. You can play with our initial version at ccm.rethinkpriorities.org and provide us feedback in this post or via this form.
The model produces effectiveness estimates, understood in terms of the effect on the sum of welfare across individuals, for interventions and research projects within these domains. Results are generated by computations on the values of user-supplied parameters. Because of the many controversies and uncertainties around these parameters, it follows a Monte Carlo approach to accommodating our uncertainty: users don't supply precise values but instead ...
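To illustrate the Monte Carlo approach described above, here is a minimal sketch of the general pattern: sample user-supplied parameter distributions and propagate them to a cost-effectiveness result. This is not the CCM's actual code, parameters, or units; every distribution and name below is a made-up placeholder.

```python
# Minimal sketch of a Monte Carlo cost-effectiveness estimate driven by
# parameter distributions rather than point values (illustrative only; not
# the CCM's actual parameters, units, or code).
import numpy as np

rng = np.random.default_rng(42)
samples = 100_000

# Hypothetical parameters, each supplied as a distribution.
cost_per_treatment = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=samples)  # dollars
effect_per_treatment = rng.normal(loc=0.02, scale=0.01, size=samples)          # welfare units
prob_intervention_works = rng.beta(a=8, b=2, size=samples)                     # chance of any effect

welfare_per_dollar = prob_intervention_works * effect_per_treatment / cost_per_treatment

print("mean cost-effectiveness:", welfare_per_dollar.mean())
print("5th-95th percentile:", np.percentile(welfare_per_dollar, [5, 95]))
```

One thing this style of model makes visible, and which the summary above notes for the CCM, is that with heavy-tailed or deeply uncertain parameters the mean of the output can sit far above its median, because a small share of tail-end samples carries much of the expected value.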
Nov 3, 2023 • 5min

EA - Promoting Effective Giving this Giving Season: For groups, networks and individuals by GraceAdams

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Promoting Effective Giving this Giving Season: For groups, networks and individuals, published by GraceAdams on November 3, 2023 on The Effective Altruism Forum. Effective giving is a core part of effective altruism as a project - and we'd love to see EA groups, networks of people in the EA community, and individuals helping to promote it this Giving Season. There's a lot less funding available to many effective charities than there was last year, and that means that we're making less progress than we might have otherwise on pressing global problems. Increasing the funds raised for effective charities by promoting effective giving remains one of the best ways for many of us to prevent deaths and suffering now and into the future. Below, I've listed some actions for both groups/networks and individuals to take. If you have any other ideas for promoting effective giving, we are all ears and would love to figure out how to support you! Feel free to share ideas with others in the comments!

Actions for groups or networks:

Ask us to host a talk for your group, workplace, social club, etc. We're excited about giving talks about effective giving to groups of more than 20 people online or in-person (in locations that are feasible for us). We can also connect you with other organisations or speakers! We have a particularly good new talk that's been really well received by several consulting and tech companies! Fill out this form to let us know you're interested.

Host your own Giving Game, Everyday Philanthropist or fundraising event. GWWC will sponsor donations for each participant in a Giving Game and has training and materials to help you run this smoothly! We also have a long list of ideas for fundraising events. Additionally, we think Everyday Philanthropist events could be a really great way to engage both new and existing givers. Here's a brief explanation of how they work (from our event guide): Invite your attendees to help in making a real-world donation decision. One or more donors will play the role of a philanthropist and the attendees will help the donor decide on where they will donate. Ideally the donors will provide a document with what their intentions are (e.g. "most improve the lives of farmed animals") and some suggested charities to help guide the discussion. This works best if either the donor or event organising team provide good summary information on each of the charities. This makes for a great end-of-year event, helps to showcase real people who make effective giving a part of their lives, and offers an opportunity for those without an income to also be involved in effective giving.
Giving Game materials and sponsorship request
Fundraising event ideas
How to run an Everyday Philanthropist event

Start a fundraising page for your group. You can request to set up a GWWC fundraising page for up to 3 of our supported charities. Why not set a target and encourage your group to ask friends and family to donate?
Create a fundraising page with GWWC

Host a pledge panel in the new year. Hearing from people about their experiences taking a pledge with GWWC can be a great way to answer questions that people might have about the pledge, or help someone feel that it's more achievable and rewarding than they previously thought.
Pledge panel event guide

Actions for individuals:

Contribute a post to the EA Forum about your giving during Giving Season. Share your experience with giving and more during a themed week on the EA Forum. Your thinking could influence others to donate more, or differently - and we'd love to see a variety of opinions out there!
Themed weeks you might want to contribute to

Vote and discuss as part of the EA Forum's Donation Election. EA Forum users will have the opportunity to vote on which charities will receive a portion of the Donation Election Fu...
Nov 3, 2023 • 11min

LW - One Day Sooner by Screwtape

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: One Day Sooner, published by Screwtape on November 3, 2023 on LessWrong. There is a particular skill I would like to share, which I wish I had learned when I was younger. I picked it up through working closely with a previous boss (a CTO who had founded a company and raised it up to hundreds of employees and multi-million-dollar deals), but it wasn't until I read The Story of VaccinateCA that I noticed it was a distinct skill and put into words how it worked. The Sazen for this skill is "One Day Sooner." I would like to give a warning before explaining further, however: This skill can be hazardous to use. It is not the kind of thing "Rationalist Dark Art" describes because it does not involve deception, and I think it's unlikely to damage much besides the user. It's the kind of thing I'd be tempted to label a dark art, however. Incautious use can make the user's life unbalanced in ways that are mostly predictable from the phrase "actively horrible work/life balance."

It works something like this: when you're planning a project or giving a time estimate, you look at that time estimate and ask what it would take to do this one day sooner, and then you answer honestly and creatively.

What does it look like? I used to work directly under the CTO of a medium-sized software company. My team was frequently called upon to create software proofs of concept or sales demos. The timelines were sometimes what I will euphemistically call aggressive. Consider a hypothetical scene; it's Thursday and you have just found out that a sales demo is on Tuesday which could use some custom development. Giving a quick estimate, you'd say this needs about a week of work and will be ready next Wednesday. What would it take to do this one day sooner? Well, obviously you can work through the weekend. That gets you two more days. Given a couple of late evenings, getting enough total hours in is easy. That's not the only thing though. There's some resources from Marketing that would be good to have, you emailed them and they said they could meet with you on Monday. You want this faster though, so you walk over to their office and lean in, pointing out this is a direct assignment from the CTO so could we please have the meeting today instead. What else? Oh, there's a bunch of specification writing and robust test writing you'd usually do. Some of that you still do, since it would be a disaster if you built the wrong thing so you need to be sure you're on the right track, but some of it you skip. The software just needs to work for this one demo, on a machine you control, operated by someone following a script that you wrote, so you can skip a lot of reliability testing and input validation.

I appreciate The Story of VaccinateCA, a description of an organization whose goal was helping people get the Covid-19 vaccination. I think it is worth reading in full, but I will pull out one particular quote here.

We had an internal culture of counting the passage of time from Day 0, the day (in California) we started working on the project. We made the first calls and published our first vaccine availability on Day 1. I instituted this little meme mostly to keep up the perception of urgency among everyone. We repeated a mantra: Every day matters. Every dose matters.
Where other orgs would say, 'Yeah I think we can have a meeting about that this coming Monday,' I would say, 'It is Day 4. On what day do you expect this to ship?' and if told you would have your first meeting on Day 8, would ask, 'Is there a reason that meeting could not be on Day 4 so that this could ship no later than Day 5?' This is One Day Sooner. I have worked in environments that had this norm, and environments that did not have it. I have asked questions analogous to "Is there a reason that meeting could not be on Day 4" and received answer...
Nov 3, 2023 • 1min

LW - The other side of the tidal wave by KatjaGrace

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The other side of the tidal wave, published by KatjaGrace on November 3, 2023 on LessWrong. I guess there's maybe a 10-20% chance of AI causing human extinction in the coming decades, but I feel more distressed about it than even that suggests - I think because in the case where it doesn't cause human extinction, I find it hard to imagine life not going kind of off the rails. So many things I like about the world seem likely to be over or badly disrupted with superhuman AI (writing, explaining things to people, friendships where you can be of any use to one another, taking pride in skills, thinking, learning, figuring out how to achieve things, making things, easy tracking of what is and isn't conscious), and I don't trust that the replacements will be actually good, or good for us, or that anything will be reversible. Even if we don't die, it still feels like everything is coming to an end. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 3, 2023 • 1min

EA - SBF found guilty on all counts by Fermi-Dirac Distribution

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: SBF found guilty on all counts, published by Fermi-Dirac Distribution on November 3, 2023 on The Effective Altruism Forum. Sam Bankman-Fried has been found guilty of all seven charges in his recent trial. The jury deliberated for three and a half hours. Here are the counts, listed by CNN:
Count one: Wire fraud on customers of FTX
Count two: Conspiracy to commit wire fraud on customers of FTX
Count three: Wire fraud on Alameda Research lenders
Count four: Conspiracy to commit wire fraud on lenders to Alameda Research
Count five: Conspiracy to commit securities fraud on investors in FTX
Count six: Conspiracy to commit commodities fraud on customers of FTX
Count seven: Conspiracy to commit money laundering
There are still a few other charges against him that will be addressed in a March 2024 trial. He (and I think also his convicted co-conspirators Caroline Ellison, Gary Wang, Ryan Salame and Nishad Singh) will be sentenced next March. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Nov 2, 2023 • 9min

EA - How Long Do Policy Changes Matter? New Paper by zdgroff

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: How Long Do Policy Changes Matter? New Paper, published by zdgroff on November 2, 2023 on The Effective Altruism Forum. A key question for many interventions' impact is how long the intervention changes some output counterfactually, or how long the intervention washes out. This is often the case for work to change policy: the cost-effectiveness of efforts to pass animal welfare ballot initiatives, nuclear non-proliferation policy, climate policy, and voting reform, for example, will depend on (a) whether those policies get repealed and (b) whether they would pass anyway. Often there is an explicit assumption, e.g., that passing a policy is equivalent to speeding up when it would have gone into place anyway by X years. [1] [2] As people routinely note when making these assumptions, it is very unclear what assumption would be appropriate. In a new paper (my economics "job market paper"), I address this question, focusing on U.S. referendums but with some data on other policymaking processes:

Policy choices sometimes appear stubbornly persistent, even when they become politically unpopular or economically damaging. This paper offers the first systematic empirical evidence of how persistent policy choices are, defined as whether an electorate's or legislature's decisions affect whether a policy is in place decades later. I create a new dataset that tracks the historical record of more than 800 state policies that were the subjects of close referendums in U.S. states since 1900. In a regression discontinuity design, I estimate that passing a referendum increases the chance a policy is operative 20, 40, or even 100 years later by over 40 percentage points. I collect additional data on U.S. Congressional legislation and international referendums and use existing data on state legislation to document similar policy persistence for a range of institutional environments, cultures, and topics. I develop a theoretical model to distinguish between possible causes of persistence and present evidence that persistence arises because policies' salience declines in the aftermath of referendums. The results indicate that many policies are persistently in place - or not - for reasons unrelated to the electorate's current preferences.

Below I'll pull out some key takeaways that I think are relevant to the EA community and in some cases did not make it into the paper.

Overview of Results and Methods

My strategy in the paper involves comparing how many policies that barely passed or barely failed in U.S. state-level referendums are in place over time. I collect data on all referendums whose vote outcome is within 2.5 percentage points of the threshold for passage (typically 50%) since 1900 in a subset of U.S. states. I then do what's called a regression discontinuity design, which allows me to estimate the effect of passing a referendum on whether it is in place later on. The headline result from the paper is below. Many referendums that barely fail eventually pass in the first few years or decades afterward, and then this levels off. At 100 years later, just under 80% of the barely passed ones are in place compared to just under 40% of the barely failed ones.
Importantly, the hazard rate - the rate at which this effect declines over time - is much lower in the later years, meaning that if you were to extrapolate this out beyond 100 years, the effect at 200 years would be expected to be significantly more than 40% * 40%. Something relevant to EAs that I don't focus on in the paper is how to think about the effect of campaigning for a policy given that I focus on the effect of passing one conditional on its being proposed. It turns out there's a method (Cellini et al. 2010) for backing this out if we assume that the effect of passing a referendum on whether the policy is in place lat...
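As a rough worked example of the extrapolation point above (my own arithmetic and toy numbers, not figures from the paper): a roughly 40-percentage-point effect remaining at 100 years corresponds to an average wash-out hazard of about 0.9% per year over the first century; if that hazard simply repeated over the second century, the effect at 200 years would be about 0.4 * 0.4 = 16 percentage points, so a hazard that falls over time implies something noticeably larger than that.

```python
# Toy extrapolation of the persistence effect (illustrative numbers only,
# not estimates from the paper).
effect_100y = 0.40                        # ~40-point effect remaining at 100 years

# Constant-hazard (geometric) extrapolation: the effect decays by the same
# factor over the second century as over the first.
naive_effect_200y = effect_100y * effect_100y            # 0.16

# Declining hazard: suppose the wash-out hazard falls from ~0.9%/yr (which
# compounds to the observed factor-of-0.4 decay over the first century, since
# 0.991**100 ~= 0.40) to an assumed ~0.3%/yr in the second century.
second_century_survival = (1 - 0.003) ** 100              # ~0.74
declining_effect_200y = effect_100y * second_century_survival

print(round(naive_effect_200y, 2))        # 0.16
print(round(declining_effect_200y, 2))    # ~0.30
```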
