The Nonlinear Library

The Nonlinear Fund
Jan 27, 2024 • 3min

LW - The Good Balsamic Vinegar by jenn

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Good Balsamic Vinegar, published by jenn on January 27, 2024 on LessWrong. For a long time I only went to one specialty gourmet store for balsamic vinegar. Their house brand was thick and sweet and amazing on everything, from bread to salad to chicken. The gourmet store only stocked their house brand, and it had an entire dedicated shelf. As far as I knew, the house brand was not available anywhere else in town. The gourmet store was slightly out of the way, and eventually there were times when I wished I could grab balsamic vinegar at the normal grocery stores where I did most of my grocery shopping. The first time I attempted it, I was rushed for time and it was a disaster. I knew the approximate price range that I should be looking at (around $25 CAD for a ~200ml bottle), but there were a dozen vinegars that fit the bill, and they all had pretty fancy looking packaging, and I was AP'd AF. I basically picked randomly based on vibes, and I picked wrong. The vinegar was the consistency of water, sour, and not fragrant at all. The second time, I was ready. Recall that the balsamic vinegar I wanted was thick and sweet. It turns out that you can use your literacy skills and senses to ensure that the vinegar you buy is both of those things! Again, first I culled all the vinegars that seemed to be priced way too cheaply - like under $10 for a sizeable bottle. Then I started systematically picking up the remaining bottles and tipping them sideways. Most of the bottles were tinted but not opaque, so you could see the vinegar inside. Anything that moved like water I put back - those were a sizeable portion. A few bottles were truly opaque; those also went back on the shelf. For the vinegars that flowed a bit more slowly, I turned the bottle around to look at the nutrition facts.
Sweet vinegars are going to have sugar in them - no one has been brave and visionary enough to make fancy vinegars with aspartame yet. Thickness and sweetness turned out to be traits that were 100% correlated, at least in one direction: all the thick vinegars had sugar content of around 8-12g per tablespoon. I picked the cheapest bottle that met the two criteria to try. It was $2 more than the bottle I get at the gourmet store for the same volume, and slightly better tasting IMO. I am now incrementally more powerful at grocery shopping. Bonus: In fancy restaurants they sometimes give you bread and a bowl of nice vinegar and olive oil to dip it in. This is delicious, but we can do better. When the vinegar and oil are in the same bowl, the bread must travel through the layer of oil (hydrophobic) to get to the vinegar (water-based), and then back out through the oil. This results in bread pieces that have very little vinegar and too much oil on them. If you instead put the vinegar and oil in separate bowls, you can dip the bread lightly into the vinegar first and then dunk it in the oil. This results in a much better ratio of vinegar and oil on your bread. Having fresh baguette slices and bowls of nice olive oil and vinegar out at a party has never been a bad choice in my experience. It's not actually that expensive, and it's vegan by default :) Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
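The episode's shopping heuristic reduces to three checks: not suspiciously cheap, visibly thick when tipped, and sweet per the nutrition label. A minimal sketch of that filter (the `Vinegar` record and all values are hypothetical, purely to illustrate the logic):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Vinegar:
    name: str
    price_cad: float         # price for a ~200 ml bottle
    flows_slowly: bool       # observed by tipping the (non-opaque) bottle sideways
    sugar_g_per_tbsp: float  # read off the nutrition facts label


def looks_promising(v: Vinegar) -> bool:
    """The post's heuristic: not suspiciously cheap, visibly thick,
    and sweet (roughly 8-12 g of sugar per tablespoon)."""
    return v.price_cad >= 10 and v.flows_slowly and 8 <= v.sugar_g_per_tbsp <= 12


def cheapest_promising(shelf: list) -> Optional[Vinegar]:
    """Among the bottles that pass the heuristic, buy the cheapest one."""
    candidates = [v for v in shelf if looks_promising(v)]
    return min(candidates, key=lambda v: v.price_cad) if candidates else None
```

Note the order mirrors the post: cheap bottles are culled first, viscosity and sugar cut the rest, and price only breaks ties among survivors.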
Jan 27, 2024 • 7min

LW - Surgery Works Well Without The FDA by Maxwell Tabarrok

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Surgery Works Well Without The FDA, published by Maxwell Tabarrok on January 27, 2024 on LessWrong. Here is a conversation from the comments of my last post on the FDA with fellow progress blogger Alex Telford that follows a pattern common to many of my conversations about the FDA:

Alex: Most drugs that go into clinical trials (90%) are less effective or safe than existing options. If you release everything onto the market you'll get many times more drugs that are net toxic (biologically or financially) than the good drugs you'd get faster. You will almost surely do net harm.

Max: Companies don't want to release products that are worse than their competitors. Companies test lots of cars or computers or ovens which are less effective or safe than existing options, but they only release the ones that are competitive. This isn't because most consumers could tell whether their car was less efficient or their computer was less secure, and it's not because making a less efficient car or less secure computer is against the law. Pharmaceutical companies won't go and release hundreds of dud or dangerous drugs just because they can. That would ruin their brand and shut down their business. They have to sell products that people want.

Alex: Consumer products like ovens and cars aren't comparable to drugs. The former are engineered products that can be tested according to defined performance and safety standards before they are sold to the public. The characteristics of drugs are more discovered than engineered. You can't determine their performance characteristics in a lab; they can only be determined through human testing (currently).

Alex claims that without the FDA, pharmaceutical companies would release lots of bunk drugs. I respond that we don't see this behavior in other markets.
Car companies or computer manufacturers could release cheaply made, low quality products for high prices, and consumers might have a tough time noticing the difference for a while. But they don't do this; they try to release high quality products at competitive prices. Alex responds, fairly, that car or computer markets aren't comparable to drug markets. Pharmaceuticals have stickier information problems. They are difficult for consumers to evaluate and, as Alex points out, usually require human testing. This is usually where the conversation ends. I think that consumer product markets are informative for what free-market pharmaceuticals would look like; Alex (and lots of other reasonable people) don't, and it is difficult to convince each other otherwise. But there's a much better non-FDA counterfactual for pharmaceutical markets than consumer tech: surgery. The FDA does not have jurisdiction over surgical practice, and there is no other similar legal requirement for safety or efficacy testing of new surgical procedures. The FDA does regulate medical devices like the da Vinci surgical robot, but once they are approved surgeons can use them in new ways without consulting the FDA or any other government authority. In addition to this lack of regulation, surgery is beset with even thornier information problems than pharmaceuticals. Evaluating the quality of surgery as a customer is difficult. You're literally unconscious as they provide the service, and retrospective observation of quality is usually not possible for a layman. Assessing quality is difficult even for a regulator, however. So much of surgery hinges on the skill of a particular surgeon, and varies within surgeons from day to day, or before and after lunch. Running an RCT on a surgical technique is therefore difficult. Standardizing treatment as much as in pharmaceutical trials is basically impossible. It also isn't clear what a surgical placebo should be. Do you just put them under anesthetic for a few hours?
Or do you cut people open and s...
Jan 26, 2024 • 2min

EA - Recruiting for Survey on the Psychology of EA by Kyle Fiore Law

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Recruiting for Survey on the Psychology of EA, published by Kyle Fiore Law on January 26, 2024 on The Effective Altruism Forum. I am actively recruiting effective altruists to participate in an online survey mapping their psychological profiles. The survey should take no more than 90 minutes to complete and anyone who identifies as being in alignment with EA can participate. If you have the time, my team and I would greatly appreciate your participation! The survey pays $15 and the link can be found below.

Survey Link: https://albany.az1.qualtrics.com/jfe/form/SV_8v31IDPQNq4sKBU

The Research Team:
Kyle Fiore Law (Project Leader; PhD Candidate in Social Psychology; University at Albany, SUNY): https://www.kyleflaw.com/
Brendan O'Connor (Associate Professor of Psychology; University at Albany, SUNY)
Abigail Marsh (Professor of Psychology and Interdisciplinary Neuroscience; Georgetown University)
Liane Young (Professor of Psychology and Neuroscience; Boston College)
Stylianos Syropoulos (Postdoctoral Researcher; Boston College)
Paige Amormino (Graduate Student; Georgetown University)
Gordon Kraft-Todd (Postdoctoral Researcher; Boston College)

Warmly, Kyle :)

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jan 26, 2024 • 6min

LW - Making every researcher seek grants is a broken model by jasoncrawford

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Making every researcher seek grants is a broken model, published by jasoncrawford on January 26, 2024 on LessWrong. When Galileo wanted to study the heavens through his telescope, he got money from those legendary patrons of the Renaissance, the Medici. To win their favor, when he discovered the moons of Jupiter, he named them the Medicean Stars. Other scientists and inventors offered flashy gifts, such as Cornelis Drebbel's perpetuum mobile (a sort of astronomical clock) given to King James, who made Drebbel court engineer in return. The other way to do research in those days was to be independently wealthy: the Victorian model of the gentleman scientist. Eventually we decided that requiring researchers to seek wealthy patrons or have independent means was not the best way to do science. Today, researchers, in their role as "principal investigators" (PIs), apply to science funders for grants. In the US, the NIH spends nearly $48B annually, and the NSF over $11B, mainly to give such grants. Compared to the Renaissance, it is a rational, objective, democratic system. However, I have come to believe that this principal investigator model is deeply broken and needs to be replaced. That was the thought at the top of my mind coming out of a working group on "Accelerating Science" hosted by the Santa Fe Institute a few months ago. (The thoughts in this essay were inspired by many of the participants, but I take responsibility for any opinions expressed here. My thinking on this was also influenced by a talk given by James Phillips at a previous metascience conference. My own talk at the workshop was written up here earlier.) What should we do instead of the PI model? Funding should go in a single block to a relatively large research organization of, say, hundreds of scientists. 
This is how some of the most effective, transformative labs in the world have been organized, from Bell Labs to the MRC Laboratory of Molecular Biology. It has been referred to as the "block funding" model. Here's why I think this model works:

Specialization

A principal investigator has to play multiple roles. They have to do science (researcher), recruit and manage grad students or research assistants (manager), maintain a lab budget (administrator), and write grants (fundraiser). These are different roles, and not everyone has the skill or inclination to do them all. The university model adds teaching, a fifth role. The block organization allows for specialization: researchers can focus on research, managers can manage, and one leader can fundraise for the whole org. This allows each person to do what they are best at and enjoy, and it frees researchers from spending 30-50% of their time writing grants, as is typical for PIs. I suspect it also creates more of an opportunity for leadership in research. Research leadership involves having a vision for an area to explore that will be highly fruitful - semiconductors, molecular biology, etc. - and then recruiting talent and resources to the cause. This seems more effective when done at the block level.

Side note: the distinction I'm talking about here, between block funding and PI funding, doesn't say anything about where the funding comes from or how those decisions are made. But today, researchers are often asked to serve on committees that evaluate grants. Making funding decisions is yet another role we add to researchers, and one that also deserves to be its own specialty (especially since having researchers evaluate their own competitors sets up an inherent conflict of interest).

Research freedom and time horizons

There's nothing inherent to the PI grant model that dictates the size of the grant, the scope of activities it covers, the length of time it is for, or the degree of freedom it allows the researcher.
But in practice, PI funding has evol...
Jan 26, 2024 • 3min

LW - "Does your paradigm beget new, good, paradigms?" by Raemon

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Does your paradigm beget new, good, paradigms?", published by Raemon on January 26, 2024 on LessWrong. A very short version of this post, which seemed worth rattling off quickly for now. A few months ago, I was talking to John about paradigmicity in AI alignment. John says "we don't currently have a good paradigm." I asked "Is 'Natural Abstraction' a good paradigm?". He said "No, but I think it's something that's likely to output a paradigm that's closer to the right paradigm for AI Alignment." "How many paradigms are we away from the right paradigm?" "Like, I dunno, maybe 3?" said he. A while later I saw John arguing on LessWrong with (I think?) Ryan Greenblatt about whether Ryan's current pseudo-paradigm was good. (Sorry if I got the names or substance here wrong; I couldn't find the original thread, and it seemed slightly better to be specific so we could dig into a concrete example.) One distinction in the discussion seemed to be something like: On one hand, Ryan thought his current paradigm (this might have been "AI Control", as contrasted with "AI Alignment") had a bunch of traction on producing a plan that would at least reasonably help if we had to align superintelligent AIs in the near future. On the other hand, John argued that the paradigm didn't feel like the sort of thing that was likely to bear the fruit of new, better paradigms. It focused on an area of the superintelligence problem that, while locally tractable, John thought was insufficient to actually solve the problem, and also wasn't the sort of thing likely to pave the way to new paradigms. Now, a) again, I'm not sure I'm remembering this conversation right, and b) whether either of those points is true in this particular case would be up for debate, and I'm not arguing they're true.
(Also, regardless, I am interested in the idea of AI Control and think that getting AI companies to actually do the steps necessary to control at least near-term AIs is something worth putting effort into.) But it seemed good to promote to attention the idea that: when you're looking at clusters of AI Safety research and thinking about whether it is congealing into a useful, promising paradigm, one of the questions to ask is not just "does this paradigm seem locally tractable" but "do I have a sense that this paradigm will open up new lines of research that can lead to better paradigms?". (Whether one can be accurate in answering that question is yet another uncertainty. But I think if you ask yourself "is this approach/paradigm useful", your brain will respond with different intuitions than "does this approach/paradigm seem likely to result in new/better paradigms?") Some prior reading: Look For Principles Which Will Carry Over To The Next Paradigm; Open Problems Create Paradigms. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
Jan 26, 2024 • 4min

EA - Funding circle aimed at slowing down AI - looking for participants by Greg Colbourn

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Funding circle aimed at slowing down AI - looking for participants, published by Greg Colbourn on January 26, 2024 on The Effective Altruism Forum. Are you an earn-to-giver or (aspiring) philanthropist who has short AGI timelines and/or high p(doom|AGI)? Do you want to discuss donation opportunities with others who share your goal of slowing down / pausing / stopping AI development[1]? If so, I want to hear from you! For some context, I've been extremely concerned about short-term AI x-risk since March 2023 (post-GPT-4), and have, since then, thought that more AI Safety research will not be enough to save us (or AI Governance that isn't focused[2] on slowing down AI or a global moratorium on further capabilities advances). Thus I think that on the margin far more resources need to go into slowing down AI (there are already many dedicated funds for the wider space of AI Safety). I posted about this to an EA investing group in late April, and posted AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now to the EA Forum in early May. My p(doom|AGI) is ~90% as things stand (Doom is the default outcome of AGI). But my p(doom) overall is ~50% by 2030, because I think there's a decent chance we can actually get a Stop[3]. My timelines are ~0-5 years. I have donated >$150k[4] to people and projects focused on slowing down AI since then (mostly as a kind of seed funding - to individuals, and to projects so new they don't have official orgs yet[5]), but I want to do a lot more. Having people with me would be great for multiplying impact and also for my motivation! I'm thinking 4-6 people, each committing ~$100k(+) over 2024, would be good. The idea would be to discuss donation opportunities in the "slowing down AI" space during a monthly call (e.g.
Google Meet), and have an informal text chat for the group (e.g. WhatsApp or Messenger). Fostering a sense of unity of purpose[6], but nothing too demanding or official. Active, but low friction and low total time commitment. Donations would be made independently rather than from a pooled fund, but we can have some coordination to get "win-wins" based on any shared preferences of what to fund. Meta-charity Funders is a useful model. We could maybe do something like an S-process for coordination, like what Jaan Tallinn's Survival and Flourishing Fund does[7]; it helps avoid "donor chicken" situations. Or we could do something simpler, like ranking the value of donating successive marginal $10k amounts to each project. Or just stick to more qualitative discussion. This is all still to be determined by the group. Please join me if you can[8], or share with others you think may be interested. Feel free to DM me here or on X, book a call with me, or fill in this form.

^ If you oppose AI for other reasons (e.g. ethics, job loss, copyright), as long as you are looking to fund strategies that aim to show results in the short term (say within a year), then I'd be interested in you joining the circle.
^ I think Jaan Tallinn's new top priorities are great!
^ After 2030, if we have a Stop and are still here, we can keep kicking the can down the road.
^ I've made a few more donations since that tweet.
^ Public examples include Holly Elmore, giving away copies of Uncontrollable, and AI-Plans.com.
^ Right now I feel quite isolated making donations in this space.
^ It's a little complicated, but here's a short description: "Everyone individually decides how much value each project creates at various funding levels. We find an allocation of funds that's fair and maximises the funders' expressed preferences (using a number of somewhat dubious but probably not too terrible assumptions).
Funders can adjust how much money they want to distribute after seeing everyone's evaluations, including fully pulling out." (paraphr...
Jan 26, 2024 • 4min

EA - Is it time for EVF to sell Wytham Abbey? by Arepo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Is it time for EVF to sell Wytham Abbey?, published by Arepo on January 26, 2024 on The Effective Altruism Forum. The purchase of Wytham Abbey was originally justified as a long term investment, when people were still claiming EA wasn't cash constrained. One of the arguments advanced by defenders of the purchase was that the money wasn't lost, merely invested. Right now, EA is hella funding constrained... In the last few months, I've seen multiple posts from EA orgs claiming to be so funding constrained they're facing existential risk (disclaimer: I was a trustee of CEEALAR until last month). By the numbers given by those three orgs, 10% of the price of Wytham would be enough to fund them all for several years. This is to say nothing of all the organisations less urgently seeking funding, the fact that regional groups seem to be getting funding cuts of 40%, the numerous word-of-mouth accounts of people being turned down for funding or not trying to start an organisation because they don't expect to get it, and the fact that earlier this year the EA funds were reportedly suffering some kind of liquidity crisis (and are among those seeking funding now).
Here's a breakdown of the small-medium size orgs who've written 'we are funding constrained' posts on the forum in the last 6 months or so, along with the length of time the sale of Wytham Abbey (at its original £15,000,000 purchase price) could fund them:

Organisation | Annual budget* | Years Wytham Abbey's sale could fund org | Source
EA Poland | £24-48,000 | 312-614 | Link
Centre for Enabling EA Learning & Research | £150-£300,000 | 50-100 | Personal involvement
AI Safety Camp | £46-246,000 | 48-326 | Link
Concentric Policies | £16,500** | 900** | Link
Center on Long-Term Risk | £600,000 | 24 | Link
EA Germany | £226,000*** | 66 | Link
Vida Plena's 'Group Interpersonal Therapy' project | £159,000 | 94 | Link
Happier Lives Institute | £161,000 | 93 | Link
Riesgos Catastróficos Globales | £137,000 | 109 | Link
Giving What We Can | £1,650,000 | 9 | Link
All above organisations excluding GWWC (assuming max of budget ranges) | £1,893,500 | 7.9 |
All above organisations including GWWC (assuming max of budget ranges) | £3,543,500 | 4.2 |

* Converted from various currencies
** Their stated 'funding gap' for the year. It sounds like that's their whole planned budget, but isn't clear
*** They were seeking replacement funding for the 40% shortfall of this, which they've now received

...but in five years, EA probably won't need the long-term savings

Wytham Abbey was meant to be a multi-year investment. But though EA is currently funding constrained as heck, the consensus estimate seems to be that within half a decade the movement will have multiple new billionaire donors - so investing for a payoff more than a few years ahead rapidly loses value. Also (disclaimer again noted) CEEALAR has hosted retreats for Allfed and Orthogonal, and is due to host the forthcoming ML4Good bootcamp, so is already serving a similar function to Wytham Abbey - for a fraction of the operational cost, and less than 2% of the purchase/sale value. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
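The "years of runway" figures are simply the sale price divided by each annual budget. A minimal sketch of that arithmetic, using the post's £15,000,000 figure and the quoted budget maxima:

```python
# Sale price the post uses for Wytham Abbey (its original purchase price, GBP).
WYTHAM_SALE_PRICE = 15_000_000


def years_funded(annual_budget_gbp: float) -> float:
    """Years an org could run on the proceeds of the sale,
    assuming its annual budget stays constant."""
    return WYTHAM_SALE_PRICE / annual_budget_gbp


# e.g. CEEALAR at the top of its quoted range:
#   years_funded(300_000) -> 50.0
# Giving What We Can:
#   years_funded(1_650_000) -> ~9.1, i.e. the post's "9"
```

This reproduces the post's summary rows too: £3,543,500 of combined budgets gives roughly 4.2 years.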
Jan 26, 2024 • 5min

EA - Cost-effectiveness analysis of ~1260 USD worth of social media ads for fellowship marketing by gergo

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Cost-effectiveness analysis of ~1260 USD worth of social media ads for fellowship marketing, published by gergo on January 26, 2024 on The Effective Altruism Forum. TL;DR: I spent ~1260 USD on social media ads (Facebook/Instagram) over ~1.5 years. We got an additional 53-57 applicants this way, resulting in a cost-effectiveness of 22.1-23.8 USD per applicant. Disclaimer: I wanted to capture 80% of the value of what I have to say without putting a lot of time into writing. This means that the post is somewhat rough around the edges, but hopefully it will still be useful. I have been very excited about experimenting with paid ads to reach people who would otherwise not hear about our Intro to EA and AGISF programs. This post is a summary of how I spent ~1260 USD on paid ads for social media, and a BOTEC (back-of-the-envelope calculation) of what it bought us. Please take all results with a grain of salt, as the data is limited and one thing that works for us might not apply in other contexts. That being said, I'm quite confident that groups that want to increase the number of talented and diverse applicants to their programs should at least experiment with using paid ads.

Cost-effectiveness

I overall spent ~1260 USD, which resulted in 53 additional applicants to our fellowships over 1.5 years (23.8 USD per applicant). At least 4 of these 53 applicants also invited a friend along with them to our program, and if we count them as well, we got 57 additional applicants overall, which slightly improves the cost-effectiveness to 22.1 USD per applicant. You can take a look at the raw-ish data here, as well as see the breakdown by campaign and course type (EA vs. AIS).

Impact

I think most of the expected impact of this will come from the ~30% of the overall applicants who engaged with the courses very seriously and took a lot of value from them.
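The headline cost-effectiveness numbers are straightforward division of total spend by applicant count; a quick sketch reproducing them (figures taken from the post):

```python
# Total social media ad spend reported in the post (USD, over ~1.5 years).
AD_SPEND_USD = 1260


def cost_per_applicant(n_applicants: int) -> float:
    """USD of ad spend per additional fellowship applicant."""
    return AD_SPEND_USD / n_applicants


# 53 direct applicants:           round(cost_per_applicant(53), 1) -> 23.8
# 57 including invited friends:   round(cost_per_applicant(57), 1) -> 22.1
```

Counting the four friends brought along by applicants is what moves the figure from 23.8 to 22.1 USD per applicant.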
Unfortunately, I didn't do an amazing job keeping track of what % of the original 53-57 applicants never started the course. I would estimate this to be around 20-35%. Given that many people never start the course, I think it's really valuable to encourage people to sign up for your newsletter[1] as part of the application process - or, if they have a good application, reach out to them in the next round. As for the rest of the applications, I think it's pretty similar to the usual fellowship experience: some people drop out after a couple of sessions, some finish it but end up disappearing after the course, etc. It goes without saying, but of course, this is not a judgment on people's intrinsic value!

Additional points and caveats:

Note that the courses I was advertising were 4 sessions only, and sometimes it was an intensive 1-week course - which I think partly improved the cost-effectiveness but can have other drawbacks; see the discussion here and here. With paid ads, we got to reach many talented international students from 3rd world countries, which is awesome - and otherwise we would have likely not reached them.[2] If you have data on the cost-effectiveness of your social media ads (or want to start gathering such data), make sure to reach out!

Conclusion

Based on this, I will increase our marketing budget, and probably expand it to cities in the country where we don't have an EA presence yet. I think it's possible that once I have more data, these ads won't seem as good as they do now, but even if I'm currently overestimating the cost-effectiveness by 10x, they would still look pretty good. If you would like to use social media ads for your national/city/university group, feel free to shoot us an email at info[at]eahungary.com

^ see here or here if you don't have one but want to use ours as a template
^ In Hungary, there are a lot of international students from 3rd world countries who are here on a scholarship.
This means that they have already had ...
Jan 26, 2024 • 56min

LW - AI #48: The Talk of Davos by Zvi

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #48: The Talk of Davos, published by Zvi on January 26, 2024 on LessWrong. While I was in San Francisco, the big head honchos headed for Davos, where AI was the talk of the town. As well it should be, given what will be coming soon. It did not seem like anyone involved much noticed or cared about the existential concerns. That is consistent with the spirit of Davos, which has been not noticing or caring about things that don't directly impact your business or vibe since (checks notes by which I mean an LLM) 1971. It is what it is. Otherwise we got a relatively quiet week. For once the scheduling worked out and I avoided the Matt Levine curse. I'm happy for the lull to continue so I can pay down more debt and focus on long term projects and oh yeah also keep us all farther away from potential imminent death.

Table of Contents
Introduction.
Table of Contents.
Language Models Offer Mundane Utility. Might not come cheap.
Language Models Don't Offer Mundane Utility. The ancient art of walking.
Copyright Confrontation. It knows things, but it still cannot drink.
Fun With Image Generation. Poisoning portraits in the park.
Deepfaketown and Botpocalypse Soon. Use if and only if lonely.
They Took Our Jobs. The one saying it won't happen interrupted by one doing it.
Get Involved. New jobs, potential unconference.
In Other AI News. Various people are doing it, for various values of it.
Quiet Speculations. How fast is efficiency improving?
Intelligence Squared. Why so much denial that importantly smarter is possible?
The Quest for Sane Regulation. New polls, new bad bills, EU AI Act full text.
Open Model Weights Are Unsafe and Nothing Can Fix This. More chips, then.
The Week in Audio. Nadella, Altman and more.
Rhetorical Innovation. Are you for or against the existence of humanity?
Malaria Accelerationism. All technology is good, you see, well, except this one.
Aligning a Smarter Than Human Intelligence is Difficult. Diversification needed.
Other People Are Not As Worried About AI Killing Everyone. Anton and Tyler.
The Lighter Side. No spoilers.

Language Models Offer Mundane Utility

Say you can help people respond to texts on dating apps via a wrapper, charge them $28/month, claim you are making millions, then try to sell the business for $3.5 million. Why so much? Classic black market situation. The readily available services won't make it easy on you, no one reputable wants to be seen doing it, so it falls on people like this one. There is a strange response that 'profits are razor thin.' That cannot possibly be true of the engineering costs. It can only be true of the marketing costs. If you are getting customers via running mobile ads or other similar methods, it makes sense that the effective margins could be trouble. And of course, when marginal cost of production is close to zero, if there are many entrants then price will plunge. But a lot of customers won't know about the competition, or they will know yours works and be willing to pay, so a few gouged customers could be the way to go. OpenAI announces partnership with Premiere Party School Arizona State University. Everyone gets full ChatGPT access. Students get personalized AI tutors, AI avatars, AIs for various topics especially STEM. Presumably this helps them learn and also gives them more time for the parties. Chrome feature to automatically organize tab groups. Also they'll let you create a theme via generative AI, I guess. GitLab's code assistant is using Claude. No idea if it is any good. Ethan Mollick: Having just taught initial AI stuff to 250+ undergrads & grad students in multiple classes today: AI use approached 100%. Many used it as a tutor.
The vast majority used AI on assignments at least once Knowledge about AI was mostly based on rumors Prompting knowledge was low Prompting knowledge seems very low a...
Jan 26, 2024 • 1min

EA - Probably Good launched a newsletter with impact-centered career advice! by Probably Good

Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probably Good launched a newsletter with impact-centered career advice!, published by Probably Good on January 26, 2024 on The Effective Altruism Forum. Probably Good recently launched a newsletter to update our audience on new content and help people learn how to increase the impact of their careers at a meaningful but reasonable pace. You can subscribe here! For new subscribers, we'll kick off with an intro series overview of our approach to career planning. In future newsletters, we'll cover topics like:

Core concepts & frameworks for thinking about careers
New content & services from PG
Personal perspectives on career choice
Cause-specific overviews and resources
Promising job & work opportunities within a range of cause areas

If you think you might have signed up for our mailing list at some point in the past, you should have received a confirmation email to let us know you'd like to receive the newsletter. If you didn't receive this email, you can sign up here. As always, we want to express our gratitude for all the encouragement we've received from this community over the past two years. We're excited for what's to come as we keep growing our site and we appreciate the ongoing support. If you have ideas for new career related content you'd like to see, feel free to reach out via our contact form or email us at team@probablygood.org. Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
