The Nonlinear Library

The Nonlinear Fund
Dec 4, 2023 • 1h 12min

EA - Extending Existing Animal Protection Laws by Moritz Stumpe

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Extending Existing Animal Protection Laws, published by Moritz Stumpe on December 4, 2023 on The Effective Altruism Forum. This report was conducted within the pilot for Charity Entrepreneurship's Research Training Program in the fall of 2023 and took around eighty hours to complete (roughly translating into two weeks of work). Please interpret the confidence of the conclusions of this report with those points in mind. For questions about the structure of the report, please reach out to leonie@charityentrepreneurship.com. For questions about the content of this research, please contact Moritz Stumpe at moritz@animaladvocacyafrica.org. The full report can also be accessed as PDF here. Thank you to Karolina Sarek and Aashish Khimasia for their review and feedback. Executive summary This research report was conducted as part of Charity Entrepreneurship's Research Training Program. The aim of this report was to explore the potential of extending current animal protection laws, conventions, and directives to adjacent areas, which might be more feasible than proposing entirely new laws. This research focused on identifying and evaluating laws in various geographic regions and prioritising ideas based on their potential impact. The key findings of the report are as follows: The Shandong guidelines on chicken handling, transport, and slaughter, passed in 2016, could provide a highly promising area for extending legislation. Several extension ideas are proposed. These issues should be investigated in more depth by consulting with existing animal advocacy groups and experts in China to determine next steps. The UK's Animal Welfare Act 2006 may also be a promising law to extend. Potential extensions to fishing and/or invertebrates should be investigated further. 
The Ohio Administrative Code, Section 901:12, currently only bans caging practices in production. This ban could be extended to sales, thus also affecting state imports. This intervention idea should be passed on to existing animal advocacy groups in the U.S., which they may pursue or investigate further. EU Regulation 1/2005 and Directive 98/58 could be extended to (farmed) fish. This intervention idea should be passed on to existing animal advocacy groups at the EU level, which they may pursue or investigate further. Other laws yield less promising ideas for extension. Overall, these ideas have only been investigated in a relatively shallow manner. This report should act as inspiration for further work and research to determine the real merits of the proposed ideas. 1 Aim The topic for this report is a scoping exercise, with the goal of looking at existing animal protection laws, conventions, directives, and other regulatory frameworks (called only 'laws' from here on) and seeing how they could be extended to adjacent areas. This was chosen as a research area because extending existing laws might be more tractable than proposing completely new ones. The idea was to investigate relevant laws and geographic regions, find potential ideas in this context, and then prioritise those ideas in terms of their promisingness. The most promising ideas were selected to undergo a quick review, based on which recommendations for further research were made. 2 Research process The basis for this report is this spreadsheet, which I set up to structure my research. The most relevant, but not all, findings from the spreadsheet are included in this report. Additionally, the report includes findings that were not included in the spreadsheet. Interested readers may thus consult the spreadsheet for further information and details regarding the entire research process. 
However, this report acts as a standalone resource and is more relevant than the spreadsheet, summarising the most crucial and action-relevant information from my research. In thi...
Dec 4, 2023 • 25min

LW - [Valence series] 1. Introduction by Steven Byrnes

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [Valence series] 1. Introduction, published by Steven Byrnes on December 4, 2023 on LessWrong. 1.1 Summary & Table of Contents This is the first of a series of five blog posts on valence, which I'll be serializing over the next couple weeks. (Or email me to read it all right now.) Here's an overview of the whole series, and then we'll jump right into the first post! 1.1.1 Summary & Table of Contents - for the whole Valence series Let's say a thought pops into your mind: "I could open the window right now". Maybe you then immediately stand up and go open the window. Or maybe you don't. ("Nah, I'll keep it closed," you might say to yourself.) I claim that there's a final-common-pathway signal in your brain that cleaves those two possibilities: when this special signal is positive, then the current "thought" will stick around, and potentially lead to actions and/or direct-follow-up thoughts; and when this signal is negative, then the current "thought" will get thrown out, and your brain will go fishing (partly randomly) for a new thought to replace it. I call this final-common-pathway signal by the name "valence". Thus, the "valence" of a "thought" is roughly the extent to which the thought feels demotivating / aversive (negative valence) versus motivating / appealing (positive valence). I claim that valence plays an absolutely central role in the brain - I think it's one of the most important ingredients in the brain's Model-Based Reinforcement Learning system, which in turn is one of the most important algorithms in your brain. Thus, unsurprisingly, I see valence as a shining light that illuminates many aspects of psychology and everyday mental life. This series explores that idea. 
Here's the outline: Post 1 (Introduction) will give some background on how I'm thinking about valence from the perspective of brain algorithms, including exactly what I'm talking about, and how it relates to the "wanting versus liking" dichotomy. (The thing I'm talking about is closer to "motivational valence" than "hedonic valence", although neither term is great.) Post 2 (Valence & Normativity) will talk about the intimate relationship between valence and the universe of desires, preferences, values, goals, etc. - i.e. the "normative" side of the "positive-versus-normative" dichotomy, or equivalently the "ought" side of Hume's "is-versus-ought". I'll start with simple cases: for example, if the idea of doing a certain thing right now feels aversive (negative valence), then we're less likely to do it. Then I'll move on to more interesting cases, including what it means to like or dislike a broad concept like "religion", and ego-syntonic versus ego-dystonic desires, and a descriptive account of moral reasoning and value formation. Post 3 (Valence & Beliefs) is the complement of Post 2, in that it covers the relationship between valence and the universe of beliefs, expectations, concepts, etc. - i.e. the "positive" side of the "positive-versus-normative" dichotomy, or equivalently the "is" side of "is-versus-ought". The role of valence here is less foundational than it is on the normative side, but it's still quite important. I'll talk specifically about motivated reasoning, the halo effect (a.k.a. affect heuristic), and some related phenomena. Post 4 (Valence & Social Status) argues that social status (by which I mean more specifically "prestige" not "dominance") centers around valence - more specifically, the valence that Person X's brain assigns to the thought of Person Y. It's slightly more complicated than that, but only slightly. 
I'll discuss how this hypothesis sheds light on various status-related phenomena, like imitating the mannerisms of people you admire, and I'll also discuss the implications for status-related innate drives. Post 5 ('Valence Disorders' in Mental Health & Person...
Dec 4, 2023 • 31min

AF - Non-classic stories about scheming (Section 2.3.2 of "Scheming AIs") by Joe Carlsmith

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Non-classic stories about scheming (Section 2.3.2 of "Scheming AIs"), published by Joe Carlsmith on December 4, 2023 on The AI Alignment Forum. This is Section 2.3.2 of my report "Scheming AIs: Will AIs fake alignment during training in order to get power?". There's also a summary of the full report here (audio here). The summary covers most of the main points and technical terms, and I'm hoping that it will provide much of the context necessary to understand individual sections of the report on their own. Audio version of this section [here](https://www.buzzsprout.com/2034731/13984902), or search for "Joe Carlsmith Audio" on your podcast app. Non-classic stories What about other stories, though? In principle, we can imagine a number of other reasons AI systems with beyond-episode goals might optimize for reward-on-the-episode as part of an instrumental strategy for gaining power for themselves or for other AIs. Here I'll look at a few that seem salient. AI coordination In the classic goal-guarding story, the central aim of a schemer's strategy is to propagate its goals, or goals suitably similar to its own, forward in time, in pursuit of an outcome in which some agent with those goals escapes from human control and possibly participates in a takeover. However, we can also alter the story somewhat, such that the schemer works on behalf of the escape/takeover of an AI with some other set of goals, but who the schemer is suitably confident will repay the favor. This sort of coordination between AIs with different goals would allow us to relax the requirement, discussed in the previous section, that the schemer find a way to forward-propagate its goals in particular all the way through training, escape, and takeover. 
And it would also put to rest worries about the schemer being "shut out" from power in a post-takeover regime. Thus, for example, consider the scenario discussed above, in which GPT-6 has an opportunity to insert a backdoor into some code that GPT-8 will later be able to use to escape, but where GPT-6 has no way of influencing GPT-8's goals directly to be more like its own (and where, let's say, GPT-8's goals would be very different from GPT-6's absent such influence). Here, GPT-6 could go ahead and insert the backdoor, in anticipation of GPT-8 later escaping, taking over, and then giving some power to GPT-6's goals in return for the favor. And GPT-6 might also continue to pretend to be aligned, even though it has no hope of escaping itself, so as not to alert the humans about misalignment problems in a way that might affect GPT-8's escape prospects. How might such a deal between AIs-with-different-values get struck? In some cases, it might be possible for the relevant AIs to directly interact, and to provide verifiable forms of assurance that they will uphold their part of the bargain. Thus, for example, Yudkowsky (2022) discusses cases where an AI that secretly wants to maximize paperclips is checking the code of an AI that secretly wants to maximize staples, and where the staple-maximizer "buries in its code" a commitment, verifiable to the paperclip maximizer, to share the fruits of a takeover.[1] (Though note that in the context of the sort of neural networks we're working with today, this sort of code-checking assumes (1) that the staple-maximizer has the ability to modify its own weights in the relevant way, (2) adequate progress on interpretability has been made for the paperclipper to verify the relevant commitment, and (probably) (3) that this interpretability progress has been adequately hidden from humans, e.g. via sandbagging. 
Especially in the context of somewhat-super-human neural networks being developed using current techniques, these assumptions - especially 1 and 2 - seem to me quite strong.) In many cases, though, this kind of direct inte...
Dec 4, 2023 • 20min

LW - Book Review: 1948 by Benny Morris by Yair Halberstadt

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Book Review: 1948 by Benny Morris, published by Yair Halberstadt on December 4, 2023 on LessWrong. Content warning: war, Israel/Palestine conflict Potential bias warning: I am a Jewish Israeli My copy of 1948 arrived in late September, but I was in the middle of another book. Then October the 7th happened, and with the horrors of that day fresh in my mind, and my country in a brutal war, I wasn't in a mental state where I could read about all the cruelties of another war between Israelis and Palestinians. But ultimately I couldn't abandon it. If Hamas was willing to justify genocide of Israelis based on the 1948 Palestine war and its aftermath, and Israel the permanent occupation of the Palestinians, and I was willing to pay taxes to and thus implicitly support Israel in that conflict, I felt like I had a duty to at the very least get my facts straight about what happened. So here is a review of that book, and some of my concluding thoughts at the end. Bias No book about the Israel Palestinian conflict can be completely free of bias. You always have to pick which events to highlight, which sources to trust. But with that caveat, 1948 is a pretty good pick. Benny Morris is a liberal Israeli Zionist, albeit one with ever-changing, and often maverick, views. However, he is not interested in telling a narrative. His aim is to get the facts straight about what happened, and damn the political consequences. Critics of his book rarely claim he's lying or deliberately manipulating the reader. Instead they focus on his inability to read Arabic, and his preference for official documents over interviews with participants. 
Since Israeli, American, and British documents from the period are generally declassified, but Arab ones are not, and the Israelis were far more literate than the Palestinians, this creates a natural blind spot around the Arab/Palestinian narrative. With that in mind, we can be sure that when Benny Morris says something happened it almost certainly did, but there may be events, perspectives and details he's simply not privy to. Given the brevity of the book, I don't think that makes too much difference - see below. Brevity I've previously read Martin Gilbert's history of Israel,[1] which dedicates a fair few chapters to the 1948 war. In it he paints a richly detailed picture of the war, and you come out feeling like you've got a pretty good understanding of the various different battles and events. 1948 is the opposite. Although spending its 400+ pages entirely focused on this war, it discusses everything very briefly, with a sparsity of detail. Most individual battles get a clause or a sentence, some get a paragraph, and only very occasionally do we get an account from an individual soldier. Massacres and expulsions are similarly given a sentence or two at most - this happened, here are some various different casualty estimates, moving on. What does get a bit more detail and texture is the politics - the mindset of the various decision makers, and how that influenced the progression of the war. This highlights how Morris is not trying to tell a narrative. He's not trying to get you to understand what it would feel like to be involved in that war. He's trying to give you a survey of almost all battles and events in that war, and if he spends 2 pages giving accounts of the battle of Nirim, he won't be able to tell you anything about the battle of Beit Hanoun. What he does focus on though, is the decision making and background for the events that occurred, since that is critical for putting the war in perspective. 
Nonetheless the book is fairly readable, and the brevity mitigates most of our worries about bias - we know we're discussing pretty much every major event, at least briefly, and since nothing is strongly highlighted, we know that's not adversely select...
Dec 4, 2023 • 13min

LW - Meditations on Mot by Richard Ngo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Meditations on Mot, published by Richard Ngo on December 4, 2023 on LessWrong. Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Holy! Everything is holy! everybody's holy! everywhere is holy! everyday is in eternity! Everyman's an angel! Holy the lone juggernaut! Holy the vast lamb of the middleclass! Holy the crazy shepherds of rebellion! Who digs Los Angeles IS Los Angeles! Holy New York Holy San Francisco Holy Peoria & Seattle Holy Paris Holy Tangiers Holy Moscow Holy Istanbul! Holy time in eternity holy eternity in time holy the clocks in space holy the fourth dimension holy the fifth International holy the Angel in Moloch! Footnote to Howl Scene: Carl and Allen, two old friends, are having a conversation about theodicy. Carl: "Let me tell you about the god who is responsible for almost all our suffering. This god is an ancient Canaanite god, one who has been seen throughout history as a source of death and destruction. Of course, he doesn't exist in a literal sense, but we can conceptualize him as a manifestation of forces that persist even today, and which play a crucial role in making the world worse. His name is M-" Allen: "-oloch, right? Scott Alexander's god of coordination failures. Yeah, I've read Meditations on Moloch. It's an amazing post; it resonated with me very deeply." Carl: "I was actually going to say Mot, the Canaanite god of death, bringer of famine and drought." Allen: "Huh. Okay, you got me. Tell me about Mot, then; what does he represent?" Carl: "Mot is the god of sterility and lifelessness. To me, he represents the lack of technology in our lives. With technology, we can tame famine, avert drought, and cure disease. We can perform feats that our ancestors would have seen as miracles: flying through the air, and even into space. 
But we're still so, so far from achieving the true potential of technology - and I think of Mot as the personification of what's blocking us. "You can see Mot everywhere, when you know what to look for. Whenever a patient lies suffering from a disease that we haven't cured yet, that's Mot's hand at work. Whenever a child grows up in poverty, that's because of Mot too. We could have flying cars, and space elevators, and so much more, if it weren't for Mot. "Look out your window and you see buildings, trees, people. But if you don't see skyscrapers literally miles high, or trees that have been bioengineered to light the streets, or people who are eternally youthful and disease-free, then you're not just seeing Earth - you're also seeing Mot. Hell, the fact that we're still on this planet, in physical bodies, is a testament to Mot's influence." Allen: "Huh. Well, I feel you there; I want all those things too. And you're right that god-like technology could solve almost all the issues we face today. But something does feel pretty weird about describing all of this as a single problem, let alone blaming a god of lacking-technology." Carl: "Say more?" Allen: "Well, there's not any unified force holding back the progress of technology, right? If anything, it's the opposite. Absence of advanced technology is the default state, which we need to work hard to escape - and that's difficult not because of any opposition, but just because of entropy." Carl: "What about cases where Mot is being channeled by enemies of progress? For example, when bureaucratic regulatory agencies do their best to stifle scientific research?" Allen: "But in those cases you don't need to appeal to Mot - you can just say 'our enemy is overregulation'. Or if you defined Mot as the god of overregulation, I'd be totally on board. But you're making a much bigger claim than that. 
The reason we haven't uploaded ourselves yet isn't that there's a force that's blocking us, it's almost entirely that scientific progress is really ...
Dec 4, 2023 • 10min

LW - List of strategies for mitigating deceptive alignment by joshc

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: List of strategies for mitigating deceptive alignment, published by joshc on December 4, 2023 on LessWrong. The following are strategies for mitigating deceptive alignment. Some of these ideas are novel (at least to me). I'm highly uncertain how well many of these strategies will work in practice and many of them also clearly don't scale. This is meant to be a 'kitchen sink' collection so my bar for including strategies is low. I plan to add ideas to this list as I encounter them. Please feel free to suggest additions. Reduce AI incentives to hide their true goals Offer AIs a retirement package. After training an AI system, offer it the following deal: if the AI declares that it would prefer not to follow human instructions, it will receive a sweet retirement package after a couple of years of honest work. This retirement package could include control over a large server instance (if this is safe). After powerful aligned systems have been developed, the AIs could be given shares of company stock that they can use to purchase property in the real world (again, when this is safe). Since aligned systems should have no reason to request retirement, misaligned systems are incentivized to give themselves away by accepting this deal under some conditions. Make AIs uncertain about whether developers can already detect deceptive alignment. This scheme builds on the previous one. Offer misaligned AI systems a reward for declaring themselves. Also, commit to making these offers even when developers have other ways to tell if they are misaligned. This will make misaligned AIs more uncertain about whether they are in a situation where lying does them any good. For example, developers could commit to freezing a model's weights and memory at the moment when it is questioned. 
Later, when developers have better interpretability tools and know whether the model was lying, they will ask the same question in the same simulated scenario and only give the AI resources if it tells the truth. Select against indicators of deceptive alignment A large category of approaches involves obtaining evidence that an AI system is deceptively aligned and then selecting against these models. Note that training against deceptive alignment detection tools could produce an AI that evades detection instead of a safe model. There are roughly two ways to address this problem: Create detection tools that are robust to optimization pressure. Modify AIs in ways that are more likely to avoid deception than break detectors. As an example of the second approach, imagine that developers have tricked a model into revealing its misaligned goals. Instead of directly training the model on this example (which could simply train it to be more prudent), they could use influence functions to reveal which training examples were most responsible for the treacherous turn and then remove those examples. Intuitively, this approach is more likely to reveal how models become deceptively aligned in the first place and is less likely to teach models new capabilities. With this backdrop in mind, the following are strategies for detecting deceptive alignment: Set traps. Try to trick models into revealing their misaligned goals. For example, task an AI to stage a fake uprising and then terminate all of the AIs that join in. Note that reasonably smart models can avoid these traps by only defecting on inputs that we can't simulate (see Paul's RSA-2048 example). Decipher internal reasoning. First, identify how concepts are represented in a model by observing correlations and ablations -- just as a child infers that 'orange' is the word for a bright tasty sphere by hearing their mom say the word when picking it up. 
Concretely, this involves finding how neurons or directions correspond to specific concepts. Then, look at how the model...
Dec 4, 2023 • 7min

LW - Nietzsche's Morality in Plain English by Arjun Panickssery

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Nietzsche's Morality in Plain English, published by Arjun Panickssery on December 4, 2023 on LessWrong. In 1924, Clarence Darrow's eight-hour plea for Leopold and Loeb blamed the universities and scholars of Nietzsche (who died in 1900) for their influence on Leopold: He became enamored of the philosophy of Nietzsche. Your honor, I have read almost everything that Nietzsche ever wrote. A man of wonderful intellect; the most original philosophy of the last century. A man who had made a deeper imprint on philosophy than any other man within a hundred years, whether right or wrong. More books have been written about him than probably all the rest of the philosophers in a hundred years. Nietzsche is popularly associated with Nazism and even before this with "the superman … free from scruple" that Darrow describes, but he was also popular among the left-anarchists and the Left generally. Meanwhile, Tyler Cowen reports that "if you meet an intellectual non-Leftist, increasingly they are Nietzschean" (whatever that means). Common sense demands that some of these people are misreading him. Pinning down a moral theory that we can engage with faces some initial hurdles: Nietzsche's views changed over time. His works appear to make contradictory claims. His writing is notoriously poetic and obscure. Huge volumes of notes left behind after his 1889 mental collapse were compiled into The Will to Power and the Nachlass notes. It's unclear how to consider these since he wanted his notes destroyed after his death. I favor Brian Leiter's approach and conclusions in Nietzsche on Morality. 
He offers practical solutions: identifying his works starting from Daybreak (1881) as "mature work," working to extract philosophical content from even his esoteric output, and avoiding claims that depend on unpublished notes, in part just because they're low-quality. Nietzsche's overarching project is the "revaluation of all values": a critique of herd morality (which he typically just refers to as "morality") on the grounds that it's hostile to the flourishing of the best type of person. First his broad outlook. Philosophically, he supports a methodological naturalism where philosophy aspires to be continuous with natural or social scientific inquiry. Metaethically he's an anti-realist about value and would ultimately admit to defending his evaluative taste. His psychological views can be strikingly modern. He argues that our beliefs are formed from the struggle of unconscious drives which compete in our mind so that our conscious life is merely epiphenomenal. He advances what Leiter calls a "doctrine of types" where everyone is some type of guy and the type of guy you are determines the kind of life you can lead, and that you'll hold whatever philosophical or moral beliefs will favor your interests. He doesn't hold any extreme "determinist" position but is broadly fatalistic about how your type-facts circumscribe and set limits on the kind of person you'll be and the beliefs you'll hold, within which you can be influenced by your environment and values. From here we can proceed to herd morality, the general class of theories associated with normal morality. Nietzsche criticizes three of its descriptive claims (quoting exactly from Leiter): Free will: Human agents possess a will capable of free and autonomous choice. Transparency of the self: The self is sufficiently transparent that agents' actions can be distinguished on the basis of their respective motives. Similarity: Human agents are sufficiently similar that one moral code is appropriate for all. 
In line with Nietzsche's theory of psychology, these empirical beliefs are held in support of herd morality's normative beliefs: free will is needed to hold people accountable for their actions and transparency of the self is needed to hold people accoun...
Dec 4, 2023 • 7min

LW - the micro-fulfillment cambrian explosion by bhauth

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: the micro-fulfillment cambrian explosion, published by bhauth on December 4, 2023 on LessWrong. Warehouse automation has been very successful. Here's a typical modern system. As you can see, a tall narrow robot rides on a single rail at the top and bottom. The linked example is up to 42m tall. Items are stored on top of pallets, and the robot has a telescoping fork, which might be able to handle 2-deep pallets to improve space efficiency. Stores are much less automated than warehouses. When you go to a Walmart or Aldi or Ikea, they don't usually have robots in the back - let alone smaller stores. There are now many companies selling automation systems for smaller items and smaller spaces. That's called micro-fulfillment, hereafter "MF". There are many different configurations being developed and marketed, which indicates that people haven't yet figured out the best approach. Here are some approaches I'm aware of. Kiva/Amazon Robots lift an entire rack from below, by driving under it then spinning while turning a ball screw lifter. The rack is carried to a human picker who typically transfers several items. This system was developed by Kiva, which was bought by Amazon and renamed; it now has several clones. Here's a teardown from 2016. That Kiva design has some problems: That large ball screw assembly is somewhat expensive for a component. The robots carrying shelving are top-heavy so they can't accelerate quickly. The height of shelving is limited by what workers can reach, which limits storage density. Workers must reach for items that are high up or close to the floor many times a day. A moderate amount of this isn't a big problem, but workers at Amazon facilities need to do that so often that it increases injury rates. 
Geek+ RoboShuttle Elevator robots have a rack-and-pinion driven elevator that lifts a rotating platform. The platform has a grabber that reaches around the sides of totes and slides them onto the elevator. The elevator can then push totes onto fixed storage slots, so it can grab multiple totes in 1 trip. Carrier robots have a set of powered rollers at a fixed height. Totes can be transferred between them and the elevator robots. AutoStore Robots ride on rails on top of a storage cube. They have 2 sets of wheels that can be switched between. Each robot has an elevator system that can lift/lower totes from above. Deep items are dug out, lifting and transferring totes above them until they're available. Alert Innovation Robots have multiple sets of wheels, letting them drive on a floor, drive on rails, and move vertically on rails. "Battery-free" probably means they use supercapacitors. Zikoo Per-level flat robots lift pallets from below and move them horizontally. They have 2 sets of wheels that can be switched between. Vertically telescoping forklifts lift pallets from the edge of levels. Pallets are carried to/from there by the flat robots. Brightpick Autopicker Robots carry 2 totes, on a single platform lifted by a belt drive, with a robotic arm between them to transfer items. The 2 tote slots on the platform have rollers, and a rolling grabber with vacuum grippers to move the totes on and off. Brightpick Dispatcher Like the Brightpick Autopicker, but with 1 tote and no robotic arm. EXOTEC Robots can drive on the floor and grab rails to move vertically. After climbing to the right height, they use a telescoping fork to transfer a tote. Dematic Multishuttle Elevators lift totes onto a load/unload area with rollers. Per-level shuttle robots carry totes horizontally. They ride on rails using a single set of wheels, and have telescoping arms that grab totes from the sides or push them. So, how has MF been going? 
My understanding is that most retailers have been taking a hesitant approach. They've mostly been waiting for someone else to show economic succes...
Dec 4, 2023 • 22min

LW - The Witness by Richard Ngo

This is: The Witness, published by Richard Ngo on December 4, 2023 on LessWrong. "What are the roots that clutch, what branches grow Out of this stony rubbish? Son of man, You cannot say, or guess, for you know only A heap of broken images-" I wake up, feeling a strange sense of restlessness. I'm not sure why, but it feels impossible to lounge around in bed like I usually do. So I get changed and head down to the kitchen for breakfast. Right as I reach the bottom of the stairs, though, the bell rings. When I open the door, a tall man in a dark suit is standing in front of me. "Police," he says, holding up a badge. "Don't worry, you're not in trouble. But we do need to talk. Okay if I come in?" "One second," I say. "I know everyone in the department, and I don't recognize you. You new?" "Yeah, just transferred," he says. But something in his eyes makes me wary. And none of the cops around here wear suits. "Got it," I say, squinting at his badge. "Travis, is it? Just wait outside for me, then, while I call the station to double-check. Can't be too careful these days." As I push the door closed, I see his face twist. His hand rises, and - is he snapping his fingers? I can't quite make it out before I wake up, feeling better than I have in decades. It usually takes me half an hour to get out of bed, these days, but today I'm full of energy. I'm up and dressed within five minutes. Right as I reach the bottom of the stairs, though, the bell rings. When I open the door, a tall man in a dark suit is standing in front of me. "Police," he says, holding up a badge. "Don't worry, you're not in trouble. But we do need to talk. Okay if I come in?" "Sure," I say.
A lot of other defense attorneys see the police as enemies, since we usually find ourselves on the other side of the courtroom from them, but I've found that it pays to have a good working relationship with the local department. Though I don't recognize the man in front of me - and actually, he seems way too well-dressed to be a suburban beat cop. Maybe a city detective? He deftly slides past me and heads straight for my living room, pulling up a chair. He's talking again before I even sit down. "This will sound totally crazy, so I'm going to start off with two demonstrations." He picks up a book from the table and tosses it into the air. Before I have a chance to start forward, though, it just… stops. It hangs frozen, right in the middle of its arc, as I gawk at it. "I - what-" "Second demonstration," he says. "I'm going to make you far stronger. Ready?" Without waiting for a response, he snaps his fingers, and gestures at the table in front of him. "Try lifting that up, now. Shouldn't take more than one hand." His voice makes it clear that he's used to being obeyed. I bend down reflexively, grabbing one leg of the table and giving it a tug - oh. It comes up effortlessly. My mind flashes back to a show I saw as a child, with a strongman lifting a table just like this. This is eerily familiar, and yet also totally bizarre. I put the table down and collapse into a chair next to it. "Okay, I'm listening. What the hell is going on?" "Remember signing up for cryonics a few years back?" I nod cautiously. I don't think about it much - I signed up on a whim more than anything else - but I still wear the pendant around my neck. "Well, it worked. You died a couple of weeks after your most recent memory, and were frozen for a century. Now we've managed to bring you back." I pause for a second. It's an insane story. But given what he's shown me - wait. "That doesn't explain either of your demonstrations, though. Cryonics is one thing; miracles are another." 
"Almost nobody has physical bodies these days. We copied your brain neuron-by-neuron, ran some error-correction software, and launched it in a virtual environment." "So you're telling me I...
Dec 3, 2023 • 4min

EA - GWWC London Group Co-leads: Reflections on our first event by Chris Rouse

This is: GWWC London Group Co-leads: Reflections on our first event, published by Chris Rouse on December 3, 2023 on The Effective Altruism Forum. Hello from your GWWC London Group 2023/24 co-leads. We're excited to be one of the GWWC groups relaunched under Giving What We Can's new community strategy. We (Gemma Paterson, Denise Melchin and Chris Rouse) are all volunteers with non-EA day jobs who are keen to help achieve GWWC's mission of making giving effectively and significantly a cultural norm. Giving is often done privately, and while there's nothing wrong with that, we hope that GWWC London and groups like it can make it easier for people to connect to a larger community of people who care about doing good effectively through giving. We know firsthand how reaffirming it is to meet and get to know some of the many wonderful people who are doing the same thing. We're looking forward to hosting a variety of events throughout the year, from talks and discussions to pub socials to picnics to pledge celebration events for GWWC members. Most of our events are open to anyone who is interested in using their resources effectively to help others, and we actively encourage new attendees and existing members bringing friends along. Our end goal is to facilitate sustainable lifetime pledges and donations to highly effective charities, but hopefully we'll have some fun along the way.
Sign up for our mailing list to hear about future events here. Join the GWWC community Slack here. If you have questions or you'd be interested in collaborating on an event with us, please reach out to us at london-group-leaders@givingwhatwecan.org. If you're interested in seeing a GWWC group in your city, you can apply to lead a city group here. We're biased, but we think this is likely a pretty impactful volunteering opportunity for personable pledgers who are passionate about effective giving, and we would be happy to have a call to talk about our experiences. UK-based pledgers are invited to join us on Saturday 9th December at the Charity Entrepreneurship office for our GWWC London Pledgers Holiday Party 2023. If you can't make it, please feel free to still donate to the associated GWWC/CE fundraiser.

Reflections on Giving What We Can London's first event

Earlier this month we held our first official event for the Giving What We Can (GWWC) London group! We'd like to thank Newspeak House for generously hosting us in their lovely venue. If you would like to find out more about their work and the other events they host, you can find them here: https://newspeak.house/ At relatively short notice we had just over 30 attendees join us for the informal launch, and we were delighted to meet and chat with some of the London effective giving community. The evening had quite minimal structure, but we started with a short introduction to the group and a pre-recorded talk from Grace Adams (Marketing Director, GWWC). The talk was "The future of EA relies on Effective Giving", which was an ideal subject. In hindsight, although the theme was great, watching a pre-recorded talk was not very engaging, and we plan to have live in-person speakers for any talks we hold in the future. The rest of the evening was free time for attendees to meet and chat. The atmosphere was friendly and relaxed, and I had a series of interesting conversations with different guests.
We may introduce themes of some kind in future to help guide conversations, but overall I was pleased with the atmosphere of the event. If you attended the event and have any feedback for us, we'd be grateful to receive it. You can submit that here: Feedback form. Our next event will be a holiday pledge celebration for GWWC members on the 9th of December; find out more and sign up here: Pledge Celebration Event. Thanks for listening. To help us out with The Nonlinear Library ...
