
The Nonlinear Library LW - Snapshot of narratives and frames against regulating AI by Jan Kulveit
Nov 2, 2023
04:58
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Snapshot of narratives and frames against regulating AI, published by Jan Kulveit on November 2, 2023 on LessWrong.
This is a speculative map of a hot discussion topic. I'm posting it in question form in the hope we can rapidly map the space in answers.
Looking at various claims at X and at the AI summit, it seems possible to identify some key counter-regulation narratives and frames that various actors are pushing.
Because a lot of the public policy debate won't be about "what are some sensible things to do" within a particular frame, but rather about fights for frame control, or "what frame to think in", it seems beneficial to have at least a sketch of a map of the discourse.
I'm posting this as a question with the hope we can rapidly map the space, and one example of a "local map":
"It's about open source vs. regulatory capture"
It seems the coalition against AI safety, most visibly represented by Yann LeCun and Meta, has identified "it's about open source vs. big tech" as a favorable frame in which they can argue and build a coalition of open-source advocates who believe in the open-source ideology, academics who want access to large models, and small AI labs and developers who believe they will remain competitive in the long term by fine-tuning smaller models and capturing various niche markets. LeCun and others attempt to portray themselves as the force of science and open inquiry, while the scaling labs proposing regulation are cast as the evil big tech attempting regulatory capture. Because this seems to be the preferred anti-regulation frame, I will spend the most time on it.
Apart from the groups mentioned, this narrative also seems memetically fit among a "proudly cynical" crowd which assumes that everything everyone does or says is primarily self-interested and profit-driven.
Overall, the narrative has clear problems with explaining away inconvenient facts, including:
Thousands of academics calling for regulation are inconvenient counter-evidence to the claim that x-risk is just a ploy by the top labs.
The narrative strategy seems to explain this by some of the senior academics just being deluded, and others also pursuing a self-interested strategy in expectation of funding.
Many of the people explaining AI risk now were publicly concerned about AI risk before founding labs, and at times when it was academically extremely unprofitable, sometimes sacrificing standard academic careers.
The narrative move is to just ignore this.
Also, many things are simply assumed rather than argued - for example, that the resulting regulation would actually be in the interest of the frontrunners.
What could be memetically viable counter-arguments within the frame?
Personally, I tend to point out that motivation to avoid AI risk is completely compatible with self-interest. Leaders of AI labs also have skin in the game.
Also, I have recently tried to ask people to apply the explanatory frame of 'cui bono' to the other side as well, namely Meta.
One possible hypothesis here is Meta just loves open source and wants everyone to flourish.
A more likely hypothesis is Meta wants to own the open-source ecosystem.
A more complex hypothesis is Meta doesn't actually love open source that much but has a sensible, self-interested strategy, aimed at a dystopian outcome.
To understand the second option, it's a prerequisite to comprehend the
"commoditize the complement"
strategy. This is a business approach in which a company aims to drive down the cost or increase the availability of goods or services complementary to its own offerings. The outcome is an increase in the value of the company's own offerings.
Some famous successful examples of this strategy include Microsoft and PC hardware: PC hardware became a commodity, while Microsoft came close to monopolizing the OS, extracting huge profits. Or, Apple's App Store: The complement to the phone is the apps. Apps have becom...
