The Nonlinear Library

The Nonlinear Fund
Mar 2, 2024 • 4min

LW - The Defence production act and AI policy by NathanBarnard

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Defence production act and AI policy, published by NathanBarnard on March 2, 2024 on LessWrong. Quick Summary: Gives the President wide-ranging powers to strengthen the US industrial base; has been around without changing that much since 1953; has provisions which allow firms to make voluntary agreements that would normally be illegal under antitrust law; provided the legal authority for many of the provisions in Biden's recent Executive Order on AI. The Defence Production Act The Defence Production Act (DPA) has been reauthorised (and modified) by Congress since 1950, and in 1953 its powers were very significantly reduced. I'm confident that it will continue to be passed - in a roughly similar form - for the foreseeable future. The current version was passed in 2019 under a Republican Senate and is due for reauthorisation in 2025. Since the Obama presidency, Republicans have begun trying to prevent bills proposed by Democrats from being passed by default. This is particularly easy for non-spending bills, since 60 votes are needed to break the filibuster - a method used to prevent bills from being voted on - in the Senate. However, not only are defence bills consistently bipartisan, they have consistently high degrees of support from Republicans in particular. Therefore, I'm not concerned about the DPA not being passed by a Republican Senate and a Democratic President when it's next due for reauthorisation. The DPA gives the President very wide-ranging powers, since the goal of the act is to ensure that the US industrial base is strong enough to fight and win any war the US might need to undertake. Concretely, this allows the President to instruct firms to accept contracts; incentivise expansion of the industrial base; and use a grab bag of other specific provisions aimed at making sure that the US production base is strong enough to win a war. Until 1953 the act was much more powerful and essentially allowed the President to take control of the US economy. The act now doesn't give the President authority to set wage or price ceilings, control consumer credit, or requisition property. Antitrust provisions Various AI governance proposals rely on explicit, voluntary agreements between leading AI labs. For instance, this paper proposes a scheme in which AI firms agree to pause the rollout and training of large models if a model fails an evaluation indicating that it could act dangerously. I think it's plausible that this agreement would be illegal under antitrust law. An agreement like this would be an explicit agreement amongst a small number of leading firms to limit supply. This skirts pretty close to being a criminal violation of US antitrust law. Under this law, various forms of agreements between firms to fix prices are considered illegal no matter what firms say is the justification for them (this is known as being per se illegal). Agreements to limit production are considered just as illegal. Still, it's not at all clear that such agreements would be illegal - for instance, professional standards are not considered per se illegal but instead are judged under a rule of reason, where their anticompetitive effects need to be outweighed by their procompetitive effects.
I won't comment on this further but instead refer the reader to this excellent piece that looks specifically at antitrust and AI industry self-regulation. Section 708 of the DPA gives the President authority to allow firms to make agreements that would normally be considered antitrust violations. Recently, this provision was used by the Trump administration during Covid-19. Use in the Biden executive order Some of the most AI safety-relevant elements of the recent Biden executive order on AI were authorised under the legal authority of the DPA. This includes: Requiring AI firms t...
Mar 2, 2024 • 3min

LW - If you weren't such an idiot... by kave

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: If you weren't such an idiot..., published by kave on March 2, 2024 on LessWrong. My friend Buck once told me that he often had interactions with me that felt like I was saying "If you weren't such a fucking idiot, you would obviously do…" Here's a list of such advice in that spirit. Note that if you do/don't do these things, I'm technically calling you an idiot, but I do/don't do a bunch of them too. We can be idiots together. If you weren't such a fucking idiot… You would have multiple copies of any object that would make you sad if you didn't have it. Examples: ear plugs, melatonin, eye masks, hats, sun glasses, various foods, possibly computers, etc. You would spend money on goods and services. Examples of goods: faster computer, monitor, keyboard, various tasty foods, higher quality clothing, standing desk, decorations for your room, mattress, pillow, sheets, etc. Examples of services: uber, doordash, cleaners, personal assistants, editors, house managers, laundry, etc. You would have tried many things at least one time. Examples of things to do: climbing, singing, listening to music, playing instruments, dancing, eating various types of food, writing, parties. You wouldn't do anything absurdly dangerous, like take unknown drugs or ride a bike without a helmet. You wouldn't take irreversible actions if you didn't know what the fuck you were doing. You would exercise frequently. Types of exercise to try: climbing, walking, running, soccer, football, yoga, hiking, fencing, swimming, wrestling, beat saber, etc. You would reliably sleep 6-9 hours a night. Obvious things to try: melatonin, blackout curtains, putting black tape over LEDs on electronics, experimenting with mattress, pillow, blankets, sheets, etc., and blue light blocking glasses. You would routinely look up key numbers and do numerical consistency checks during thinking. You would have a password manager. You would invest money in yourself. Recall: money can be used to buy goods and services. You would use an email's subject line to succinctly describe what you want from the person. For example, if I want to meet with my advisor, I'll send an email with the subject "Request for Advisory Meeting" or something similar. If I want someone to read a draft of something I wrote, the subject would be "Request for Feedback on ". You would have a good mentor. One way to do this is to email people that you want to be your mentor with the subject "Request for Mentorship". You would drink lots of water. You would take notes in a searchable database. You would summarize things that you read. You would have tried making your room as bright as the outdoors. You would carry batteries to recharge your phone. You would have tried using pens with multiple colors. You would read textbooks instead of popular introductions. You would put a relatively consistent dollar value on your time. I'm sure there are more things that I tell people that can be prefaced with "if you weren't such an idiot…", but that's all I got for now. A post I like by @Mark Xu (who agreed to my crossposting in full). Some more from me: You would make it easier to capture your thoughts. Examples: a pocket notebook, taking more voice notes. You wouldn't keep all your money in your current account. You would get help when you were stuck. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 2, 2024 • 5min

LW - The World in 2029 by Nathan Young

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The World in 2029, published by Nathan Young on March 2, 2024 on LessWrong. I open my eyes. It's nearly midday. I drink my morning Huel. How do I feel? My life feels pretty good. AI progress is faster than ever, but I've gotten used to the upward slope by now. There has perhaps recently been a huge recession, but I prepared for that. If not, the West feels more stable than it did in 2024. The culture wars rage on, inflamed by AI, though personally I don't pay much attention. Either Trump or Biden won the 2024 election (85%). If Biden, his term was probably steady growth and good, boring decision making (70%). If Trump, there is more chance of global instability (70%) due to pulling back from NATO (40%), lack of support for Ukraine (60%), and incompetence in handling the Middle East (30%). Under both administrations there is a moderate chance of a global recession (30%), slightly more under Trump. I intend to earn a bit more and prep for that, but I can imagine that the median person might feel worse off if they get used to the gains in between. AI progress has continued. For a couple of years it has been possible for anyone to type a prompt for a simple web app and receive an entire interactive website (60%). AI autocomplete exists in most apps (80%), and AI images and video are ubiquitous (80%). Perhaps an AI has escaped containment (45%). Some simple job roles have been fully automated (60%). For the last 5 years the sense of velocity we felt from 2023 onwards hasn't abated (80%). OpenAI has made significant progress on automating AI engineers (70%). And yet we haven't hit the singularity yet (90%); in fact, it feels only a bit closer than it did in 2024 (60%). We have blown through a number of milestones, but AIs are only capable of doing tasks that took 1-10 hours in 2024 (60%), and humans are better at working with them (70%). AI regulation has become tighter (80%). With each new jump in capabilities the public gets more concerned and requires more regulation (60%). The top labs are still in control of their models (75%), with some oversight from the government, but they are red-teamed heavily (60%), with strong anti-copyright measures in place (85%). Political deepfakes probably didn't end up being as bad as everyone feared (60%), because people are more careful with sources. Using deepfakes as scams is a big issue (60%). People in the AI safety community are a little more optimistic (60%). The world is just "a lot" (65%). People are becoming exhausted by the availability and pace of change (60%). Perhaps rapidly growing technologies focus on bundling the many new interactions and interpreting them for us (20%). There is a new culture war (80%), perhaps relating to AI (33%). Peak woke happened around 2024, peak trans panic around a similar time. Perhaps eugenics (10%) is the current culture war, or polyamory (10%), child labour (5%), or artificial wombs (10%). It is plausible that with the increase in AI this will be AI Safety, e/acc and AI ethics. If that's the case, I am already tired (80%). In the meantime, physical engineering is perhaps noticeably out of the great stagnation. Maybe we finally have self-driving cars in most Western cities (60%), drones are cheap and widely used, we are perhaps starting to see nuclear power stations (60%), and house building is on the up. Climate change is seen as a bit less of a significant problem.
Peak world carbon production has happened, and nuclear and solar are now well and truly booming. A fusion breakthrough looks likely in the next 5 years. China has maybe attacked Taiwan (25%), but probably not. Xi is likely still in charge (75%) but there has probably been a major recession (60%). The US, which is more reliant on Mexico, is less affected (60%), but Europe struggles significantly (60%). In the wider world, both Africa and Indi...
Mar 2, 2024 • 5min

EA - Running 200 miles for New Incentives by Emma Cameron

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Running 200 miles for New Incentives, published by Emma Cameron on March 2, 2024 on The Effective Altruism Forum. What is this? After a running career[1] across marathons, 50K, 50-mile, 100K, and 100-mile distance events over the past eleven years, I'm tackling the 200-mile distance at the Tahoe 200 from June 14-18 this year. It's a bit of a ridiculous, silly challenge. It's also completely wild that I have a privileged life living in a high-income country like the US that allows me to tackle such an adventure. Given all this, I have decided to fundraise for New Incentives through my training and build-up for the event. This is my PledgeIt donation page. I'm thankful to have the support of the folks at High Impact Athletes in putting my page together and thinking through my campaign. My goal is to raise $10,036 to support 650 children enrolling in New Incentives' vaccination program at a cost of $15.44 per infant[2]. I hope you can help promote my fundraising efforts or consider donating yourself! I'm posting on the Forum because it's a wild enough idea that perhaps I can bring some new folks into effective giving in the process.[3] Can I even finish? The short answer: I think so! I have attempted the 200-mile distance once before, at the Moab 240. I only made it about 120 miles before succumbing to the hotter-than-average 110°F heat in the canyons on the second day. Unfortunately, the organizers swapped out the Tailwind electrolyte solution on course at the last minute and replaced it with a non-vegan offering, which I wasn't willing to take. I ended up relying on salt pills instead, which wasn't enough. Even after losing my hearing in one ear due to electrolyte imbalance, I was determined to soldier on. Unfortunately, when I stopped being able to keep food or liquids down, my pacer[4] physically carried me a mile into the next aid station, where I passed out and woke up in a van with the race crew. Understandably, they let me know they would be pulling me from the race. So, I definitely "DNF'd" [Did Not Finish] that race, even though I took away a lot of important lessons from it. I have since completed a number of challenging 100-mile ultramarathons, including: Grindstone 100[5] in Virginia, September 2019; Kettle 100 in Wisconsin, virtual in June 2020; Black Hills 100 in South Dakota, June 2021; Superior 100 in Minnesota, September 2022; and Javelina 100[6] in Arizona, October 2023. I've otherwise completed Ironman Wisconsin, qualified for the Boston Marathon twice, backpacked around Wisconsin, biked, rock climbed, and generally feel that I am well-positioned as an endurance athlete to tackle this challenge in Lake Tahoe this June. I think I'm ready! To be clear, my only real goal is to complete the race. I would describe myself as a 'mid-pack' runner in the 200-mile distance, at best. I will be running alongside such professional legends as Courtney Dauwalter, who has notably attempted to break the course record at the Tahoe 200[7] by completing it in <48 hours. She also took the overall win[8] (for both men and women) in the Moab 240 in 2017. Regardless of how or when I finish the course, I'm excited for the journey of a lifetime this June. I love running and movement - it's one of my greatest passions. Combining it with a way to create impact for cost-effective charities is a bit of a dream, so here I go! Donate & share this link!
https://pledgeit.org/200-miles ^ Ultrasignup Results - Note that Ultrasignup isn't integrated with every race, but it covers a good number of them. Notably, for example, I completed a 100K in zero-degree temperatures in northern Wisconsin called the Frozen Otter [now Frigid Fox] in January 2018, which isn't indicated in my Ultrasignup results. ^ https://www.newincentives.org/impact The cost per infant is $15.44, while vaccinating an infant against ...
Mar 2, 2024 • 9min

LW - Increasing IQ is trivial by George3d6

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Increasing IQ is trivial, published by George3d6 on March 2, 2024 on LessWrong. TL;DR - It took me about 14 days to increase my IQ by 13 points, in a controlled experiment that involved no learning. It was a relatively pleasant process, and more people should be doing this. A common cliche in many circles is that you can't increase IQ. This is obviously false; the largest well-documented increase in IQ using nothing but training is one of 23 points. A Standard Deviation of IQ Alas, it is a myth that persists, and when pushed on it people will say something like: You can't easily increase IQ in a smart and perfectly healthy adult permanently. FINE - I'm a smart and perfectly healthy adult, and I tested my IQ with 4 different tests: FSIQ, the public MENSA test, Raven's progressive matrices, and Raven's advanced progressive matrices. Then I threw the kitchen sink at the problem and went through every intervention I could find to increase IQ over the course of 14 days (this took ~3 hours per day). This included no "learning" or memory games, nor did it include any stimulants. It was all focused on increasing cerebral vascularization and broadening my proprioception. I got a mean increase of 8.5 points in IQ (my control got 2), and if I only take into account the non-verbal components, that increase is 12.6 (3.2 for my control). In other words, I became about a 1-standard-deviation better shape rotator. I observed an increase of > 4 points on all of the tests (and, sigh, if you must know: p=0.00008 on MWU for me, 0.95 for my control). I used a control who was my age, about as smart as me, shared many of my activities and meals, and lived in the same house as me, in order to avoid confounding. Also, to account for any "motivation bias", I offered to pay my control a large amount for every point of IQ they "gained" while retaking the tests. Here is the raw data. The Flowers for Algernon The common myths around IQ and its "immutability" are best summarized here by Gwern. "Given that intelligence is so valuable, if it was easy to get more of it, we would be more intelligent" - for one, this argument confuses IQ with intelligence, but, more importantly, it ignores reality. Many things are "valuable", yet we don't have them because our evolutionary environment placed constraints on us that are no longer present in our current environment. Nor is it obvious that many of the traits we value were useful for the human species to propagate, or had an easy way of being selected for in our short evolutionary history. Here, let me try: In the mid-20th century: Your average human has about 50kg of muscle, and the most muscular functional human has about 100kg of muscle. A human with 300kg of muscle would be stronger than a grizzly bear, an obviously desirable trait, but our genetics just don't go there, and you can only take training and steroids so far. 2021: Here's a random weightlifter I found coming in at over 400kg. I don't have his DEXA, but let's say somewhere between 300 and 350kg of muscle. In the mid-19th century: Fat storage is useful; if we could store as much fat as a bear, we could do things like hibernate. Alas, the fattest humans go to about 200kg, and people try to eat a lot, so there's probably a genetic limit on how fat you can get. In the mid-20th century: Here's a guy that weighs 635kg, putting an adult polar bear to shame.
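The Mann-Whitney U comparison quoted above ("p=0.00008 on MWU for me, 0.95 for my control") can be reproduced in a few lines of Python. The sketch below is illustrative only: the score arrays are hypothetical placeholders rather than the post's raw data, and it assumes the comparison was between pre- and post-intervention test scores.

# Minimal sketch of a one-sided Mann-Whitney U test, as referenced above.
# The scores below are hypothetical illustrations, not the author's raw data.
from scipy.stats import mannwhitneyu

pre_scores = [112, 118, 115, 121, 117, 114]   # hypothetical baseline scores across tests/subtests
post_scores = [125, 127, 124, 131, 126, 129]  # hypothetical scores after the 14-day protocol

# Tests whether post-intervention scores are stochastically greater than baseline.
stat, p = mannwhitneyu(post_scores, pre_scores, alternative="greater")
print(f"U = {stat}, one-sided p = {p:.5f}")

Run once on the experimenter's scores and once on the control's, the pattern the post reports is a small p for the former and a large p for the latter.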
And fine, you say: becoming stronger and/or fatter than a bear requires tradeoffs; you won't live past 50 or so, and you will sacrifice other areas. But then let's look at other things that are genetically determined and evolutionarily selected for (heavily), but where with modern tools we can break past imposed boundaries: thymic involution, skin aging, bone and cartilage repair, eyesight. One reason why this point of view is so popular is becaus...
Mar 2, 2024 • 13min

LW - Notes on Dwarkesh Patel's Podcast with Demis Hassabis by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Notes on Dwarkesh Patel's Podcast with Demis Hassabis, published by Zvi on March 2, 2024 on LessWrong. Demis Hassabis was interviewed twice this past week. First, he was interviewed on Hard Fork. Then he had a much more interesting interview with Dwarkesh Patel. This post covers my notes from both interviews, mostly the one with Dwarkesh. Hard Fork Hard Fork was less fruitful, because they mostly asked what are for me the wrong questions and mostly got answers I presume Demis has given many times. So I only noticed two things, neither of which is ultimately surprising. They do ask about The Gemini Incident, although only about the particular issue with image generation. Demis gives the generic 'it should do what the user wants and this was dumb' answer, which I buy that he likely personally believes. When asked about p(doom) he expresses dismay about the state of discourse and says around 42:00 that 'well Geoffrey Hinton and Yann LeCun disagree so that indicates we don't know, this technology is so transformative that it is unknown. It is nonsense to put a probability on it. What I do know is it is non-zero, that risk, and it is worth debating and researching carefully… we don't want to wait until the eve of AGI happening.' He says we want to be prepared even if the risk is relatively small, without saying what would count as small. He also says he hopes in five years to give us a better answer, which is evidence against him having super short timelines. I do not think this is the right way to handle probabilities in your own head. I do think it is plausibly a smart way to handle public relations around probabilities, given how people react when you give a particular p(doom). I am of course deeply disappointed that Demis does not think he can differentiate between the arguments of Geoffrey Hinton versus Yann LeCun, and weigh the implied importance of their accomplishments and thus the implied credibility of the people. He did not get that way, or win Diplomacy championships, thinking like that. I also don't think he was being fully genuine here. Otherwise, this seemed like an inessential interview. Demis did well but was not given new challenges to handle. Dwarkesh Patel Demis Hassabis also talked to Dwarkesh Patel, which is of course self-recommending. Here you want to pay attention, and I paused to think things over and take detailed notes. Five minutes in I had already learned more interesting things than I did from the entire Hard Fork interview. Here is the transcript, which is also helpful. (1:00) Dwarkesh first asks Demis about the nature of intelligence, whether it is one broad thing or the sum of many small things. Demis says there must be some common themes and underlying mechanisms, although there are also specialized parts. I strongly agree with Demis. I do not think you can understand intelligence, of any form, without some form of the concept of G. (1:45) Dwarkesh follows up by asking why, then, doesn't lots of data in one domain generalize to other domains? Demis says often it does, such as coding improving reasoning (which also happens in humans), and he expects more such transfer. (4:00) Dwarkesh asks what insights neuroscience brings to AI. Demis points to many early AI concepts. Going forward, questions include how brains form world models or memory.
(6:00) Demis thinks scaffolding via tree search or AlphaZero-style approaches for LLMs is super promising. He notes they're working hard on search efficiency in many of their approaches so they can search further. (9:00) Dwarkesh notes that Go and Chess have clear win conditions, while real life does not, and asks what to do about this. Demis agrees this is a challenge, but says that usually 'in scientific problems' there are ways to specify goals. Suspicious dodge? (10:00) Dwarkesh notes humans are super sample efficient, Demis says it ...
Mar 1, 2024 • 1min

LW - Elon files grave charges against OpenAI by mako yass

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Elon files grave charges against OpenAI, published by mako yass on March 1, 2024 on LessWrong. (CN) - Elon Musk says in a Thursday lawsuit that Sam Altman and OpenAI have betrayed an agreement from the artificial intelligence research company's founding to develop the technology for the benefit of humanity rather than profit. In the suit filed Thursday night in San Francisco Superior Court, Musk claims OpenAI's recent relationship with tech giant Microsoft has compromised the company's original dedication to public, open-source artificial general intelligence. "OpenAI, Inc. has been transformed into a closed-source de facto subsidiary of the largest technology company in the world: Microsoft. Under its new board, it is not just developing but is actually refining an AGI to maximize profits for Microsoft, rather than for the benefit of humanity," Musk says in the suit. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Mar 1, 2024 • 7min

EA - Review of EA Global Bay Area 2024 (Global Catastrophic Risks) by frances lorenz

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Review of EA Global Bay Area 2024 (Global Catastrophic Risks), published by frances lorenz on March 1, 2024 on The Effective Altruism Forum. EA Global: Bay Area (Global Catastrophic Risks) took place February 2-4. We hosted 820 attendees, 47 of whom volunteered over the weekend to help run the event. Thank you to everyone who attended and a special thank you to our volunteers - we hope it was a valuable weekend! Photos and recorded talks You can now check out photos from the event. Recorded talks, such as the media panel on impactful GCR communication, Tessa Alexanian's talk on preventing engineered pandemics, Joe Carlsmith's discussion of scheming AIs, and more, are now available on our YouTube channel. A brief summary of attendee feedback Our post-event feedback survey received 184 responses. This is lower than our average completion rate - we're still accepting feedback responses and would love to hear from all our attendees. Each response helps us get better summary metrics and we look through each short answer. To submit your feedback, you can visit the Swapcard event page and click the Feedback Survey button. The survey link can also be found in a post-event email sent to all attendees with the subject line, "EA Global: Bay Area 2024 | Thank you for attending!" Key metrics The EA Global team uses several key metrics to estimate the impact of our events. These metrics, and the questions we use in our feedback survey to measure them, include: Likelihood to recommend (How likely is it that you would recommend EA Global to a friend or colleague with similar interests to your own? Discrete scale from 0 to 10, 0 being not at all likely and 10 being extremely likely) Number of new connections[1] (How many new connections did you make at this event?) Number of impactful connections[2] (Of those new connections, how many do you think might be impactful connections?) Number of Swapcard meetings per person (This data is pulled from Swapcard) Counterfactual use of attendee time (To what extent was this EA Global a good use of your time, compared to how you would have otherwise spent your time? A discrete scale ranging from "a waste of my time, <10% the counterfactual" to "a much better use of my time, >10x the counterfactual") The likelihood to recommend for this event was higher compared to last year's EA Global: Bay Area and our EA Global 2023 average (i.e. the average across the three EA Globals we hosted in 2023) (see Table 1). Number of new connections was slightly down compared to the 2023 average, while the number of impactful connections was slightly up. The counterfactual use of time reported by attendees was slightly higher overall than Boston 2023 (the first EA Global we used this metric at), though there was also an increase in the number of reports that the event was a worse use of attendees' time (see Figure 1).

Metric (average of all respondents) | EAG BA 2024 (GCR) | EAG BA 2023 | EAG 2023 average
Likelihood to recommend (0 - 10) | 8.78 | 8.54 | 8.70
Number of new connections | 9.05 | 11.5 | 9.72
Number of impactful connections | 4.15 | 4.8 | 4.09
Swapcard meetings per person | 6.73 | 5.26 | 5.24

Table 1. A summary of key metrics from the post-event feedback surveys for EA Global: Bay Area 2024 (GCRs), EA Global: Bay Area 2023, and the average from all three EA Globals hosted in 2023.
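For readers curious how such summary metrics fall out of raw survey responses, here is a minimal sketch of the averaging step. The response records and field names are hypothetical illustrations, not CEA's actual data pipeline or schema.

# Minimal sketch: averaging per-respondent survey answers into event-level metrics.
# The records and field names below are hypothetical, not CEA's actual schema.
from statistics import mean

responses = [
    {"ltr": 9, "new_connections": 12, "impactful_connections": 5},
    {"ltr": 8, "new_connections": 7, "impactful_connections": 3},
    {"ltr": 10, "new_connections": 8, "impactful_connections": 4},
]

for metric in ("ltr", "new_connections", "impactful_connections"):
    print(metric, round(mean(r[metric] for r in responses), 2))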
Feedback on the GCRs focus 37% of respondents rated this event more valuable than a standard EA Global, 34% rated it roughly as valuable and 9% as less valuable. 20% of respondents had not attended an EA Global event previously (Figure 2). If the event had been a regular EA Global (i.e. not focussed on GCRs), most respondents predicted they would have still attended. To be more precise, approximately 90% of respondents reported having over 50% probability of attending the event in the absence of a GCR ...
Mar 1, 2024 • 6min

EA - Forum feature updates: add buttons, see your stats, send DMs easier, and more (March '24) by tobytrem

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Forum feature updates: add buttons, see your stats, send DMs easier, and more (March '24), published by tobytrem on March 1, 2024 on The Effective Altruism Forum. It's been a while since our last feature update in October. Since then, the Forum team has done a lot. This post includes: New tools for authors See all your stats on one page Add buttons to posts Improved tables for posts Improved notifications, messaging and onboarding More comprehensive and interactive notifications Subscribe to a user's comments Improved onboarding for new users Redesign of Forum messaging Other updates and projects Job recommendations Making it easier to fork our codebase and set up new forums Giving Season A Giving portal A custom voting tool for the Donation Election An interactive banner Forum Wrapped 2023 Miscellaneous You can now hide community quick takes Clearer previews of posts, topics, and sequences A new opportunities tab on the Frontpage New tools for authors See all your stats on one page We've built a page which collects the stats for your posts into one overview. These stats include: how many people have viewed/read your posts, how much karma they've accrued, and the number of comments. You can access analytics for a particular post by clicking 'View detailed stats' from your post stats page. Add buttons to posts If you have a "call to action" in a post or comment, you can now add it as a button. Your button could be a link to an application, a survey, a calendar, or any other link you'd like to stand out. Improved tables for posts We've improved tables to make them much more readable and less prone to awkward word cut-offs such as "Wei ght". Improved notifications, messaging and onboarding More comprehensive and interactive notifications Notifications now have their own page, accessed by clicking the bell icon. On this page, you can directly reply to comments. We have also: created notifications for Reactions; moved message notifications from the notifications list to a dedicated symbol in the top bar; and made a notifications summary that shows on hover, so that you can check your notifications before clicking the bell to clear them (pictured below). We aim to make notifications informative without making them addictive or incentivising behaviour you don't endorse. If notifications bother you, you can change your settings to batch your notifications differently (or not be notified at all), or give us feedback. Subscribe to a user's comments You can now subscribe to be notified every time a user comments. We've also clarified subscriptions' functionality, with a new "Get notified" menu. Improved onboarding for new users We've changed the process a user goes through when signing up for a Forum account. This has already increased the number of users giving information about their role, signing up for the Digest, and subscribing to topics. The new process looks like this: Redesign of Forum messaging The DM page has undergone a total redesign, making it easier to start individual and group messages and navigate your message history. You can now create a new message by clicking the new conversation symbol and selecting a Forum user (you used to have to navigate to their account).
You can also create a group message by adding multiple Forum users in the search box: Other updates and projects Job recommendations You may have noticed that we've recently been exploring ways we could help Forum users hire and be hired. As part of this project, we're experimenting more with targeted job recommendations. We're selecting high-impact jobs and showing each job to a small set of users that we have reason to believe may be interested in it. For example, if the job is limited to a specific country, we use your account location to help determine if it's relevant to you. We'll continue to iterate on thi...
Mar 1, 2024 • 34min

LW - Locating My Eyes (Part 3 of "The Sense of Physical Necessity") by LoganStrohl

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Locating My Eyes (Part 3 of "The Sense of Physical Necessity"), published by LoganStrohl on March 1, 2024 on LessWrong. This is the third post in a sequence that demonstrates a complete naturalist study, specifically a study of query hugging (sort of), as described in The Nuts and Bolts of Naturalism. This one demos phases one and two: Locating Fulcrum Experiences and Getting Your Eyes On. For context on this sequence, see the intro post. If you're intimidated by the length of this post, remember that this is meant as reference material. Feel free to start with "My Goals For This Post", and consider what more you want from there. Having chosen a quest - "What's going on with distraction?" - my naturalist study began in earnest. In "Nuts and Bolts of Naturalism", the first two phases of study that I discussed after "Getting Started" were "Locating Fulcrum Experiences" and "Getting Your Eyes On". In practice, though, I often do a combination of these phases, which is what happened this time. For the sake of keeping track of where we are in the progression, I think it's best to think of me as hanging out in some blend of the early phases, which we might as well call "Locating Your Eyes". My Goals For This Post Much of the "learning" that happens in the first two phases (or "locating your eyes") could be just as well described as unlearning: a setting aside of potentially obfuscatory preconceptions. My unlearning this time was especially arduous. I was guided by a clumsy story, and had to persist through a long period of deeply uncomfortable doubt and confusion as I gradually weaned myself off of it. It took me a long time to find the right topic and to figure out a good way into it. If this were a slightly different sort of essay, I'd skip all of the messiness and jump to the part where my progress was relatively clear and linear. I would leave my fumbling clumsiness off of the page. Instead, I want to show you what actually happened. I want you to know what it is like when I am "feeling around in the dark" toward the beginning of a study. I want to show you the reality of looking for a fulcrum experience when you haven't already decided what you're looking for. Because in truth, it can be quite difficult and discouraging, even when you're pretty good at this stuff; it's important to be prepared for that. So I want you to see me struggle, to see how I wrestle with challenges. In the rest of this post, I hope to highlight the moves that allowed me to successfully progress, pointing out what I responded to in those moments, what actions I took in response, and what resulted. To summarize my account: I looped through the first two phases of naturalism a few times, studying "distraction", then "concentration", then "crucialness", before giving up in despair. Then I un-gave-up, looped through them once more with "closeness to the issue", and finally settled on the right experience to study: a sensation that I call "chest luster". To understand this account as a demonstration of naturalism, it's important to recognize that every loop was a success, even before I found the right place to focus. When studying distraction and concentration, I was not really learning to hug the query yet; but I was learning to perceive details of my experience in the preconceptual layer beneath concepts related to attention. 
Laying that foundation for direct contact was valuable, since "hug the query" is a special way of using attention. I will therefore tell you about each loop. I recommend reading through the first loop ("Distraction") even if you're skipping around, since it includes some pretty important updates to my understanding of the naturalist procedure. Distraction I realized during this study that there are a couple crucial distinctions related to fulcrum experiences that I failed to ...
