
The Nonlinear Library EA - My cover story in Jacobin on AI capitalism and the x-risk debates by Garrison
Feb 13, 2024
09:23
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My cover story in Jacobin on AI capitalism and the x-risk debates, published by Garrison on February 13, 2024 on The Effective Altruism Forum.
Google cofounder Larry Page thinks superintelligent AI is "just the next step in evolution." In fact, Page, who's worth about $120 billion, has reportedly argued that efforts to prevent AI-driven extinction and protect human consciousness are "speciesist" and "sentimental nonsense."
In July, former Google DeepMind senior scientist Richard Sutton - one of the pioneers of reinforcement learning, a major subfield of AI - said that the technology "could displace us from existence," and that "we should not resist succession." In a 2015 talk, Sutton imagined a scenario in which "everything fails" and AI "kill[s] us all," then asked, "Is it so bad that humans are not the final form of intelligent life in the universe?"
This is how I begin the cover story for Jacobin's winter issue on AI. Some very influential people openly welcome an AI-driven future, even if humans aren't part of it.
Whether you're new to the topic or work in the field, I think you'll get something out of it.
I spent five months digging into the AI existential risk debates and the economic forces driving AI development. This was the most ambitious story of my career - it was informed by interviews and written conversations with three dozen people - and I'm thrilled to see it out in the world. The people I spoke with include:
Deep learning pioneer and Turing Award winner Yoshua Bengio
Pathbreaking AI ethics researchers Joy Buolamwini and Inioluwa Deborah Raji
Reinforcement learning pioneer Richard Sutton
Cofounder of the AI safety field Eliezer Yudkowsky
Renowned philosopher of mind David Chalmers
Santa Fe Institute complexity professor Melanie Mitchell
Researchers from leading AI labs
Some of the most powerful industrialists and companies are plowing enormous amounts of money and effort into increasing the capabilities and autonomy of AI systems, all while acknowledging that superhuman AI could literally wipe out humanity:
Bizarrely, many of the people actively advancing AI capabilities think there's a significant chance that doing so will ultimately cause the apocalypse. A 2022 survey of machine learning researchers found that nearly half of them thought there was at least a 10 percent chance advanced AI could lead to "human extinction or [a] similarly permanent and severe disempowerment" of humanity. Just months before he cofounded OpenAI, Sam Altman said, "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies."
This is a pretty crazy situation!
But not everyone agrees that AI could cause human extinction. Some think that the idea itself causes more harm than good:
Some fear not the "sci-fi" scenario where AI models get so capable they wrest control from our feeble grasp, but instead that we will entrust biased, brittle, and confabulating systems with too much responsibility, opening a more pedestrian Pandora's box full of awful but familiar problems that scale with the algorithms causing them. This community of researchers and advocates - often labeled "AI ethics" - tends to focus on the immediate harms being wrought by AI, exploring solutions involving model accountability, algorithmic transparency, and machine learning fairness.
Others buy the idea of transformative AI, but think it's going to be great:
A third camp worries that when it comes to AI, we're not actually moving fast enough. Prominent capitalists like billionaire Marc Andreessen agree with safety folks that AGI is possible but argue that, rather than killing us all, it will usher in an indefinite golden age of radical abundance and borderline magical technologies. This group, largely coming from Silicon Valley and commonly referred to as AI boosters, tends to worry far mo...
