Steering LLMs with single-vector methods might break down soon, and by "soon" I mean soon enough that, if you're working on steering, you should start planning for that failure now.
This matters especially for proposals that rely on steering, such as using it as a mitigation against eval-awareness.
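To make "single-vector methods" concrete, here is a minimal sketch of one common variant: computing a difference-of-means direction from activations on contrastive prompts, then adding that scaled vector to a layer's hidden states. All names and shapes below are illustrative assumptions, not details from the post.

```python
# Illustrative single-vector activation steering (assumed setup, not from the post).
import numpy as np

def steering_vector(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between two contrastive activation sets."""
    return pos_acts.mean(axis=0) - neg_acts.mean(axis=0)

def apply_steering(hidden: np.ndarray, v: np.ndarray, alpha: float) -> np.ndarray:
    """Add the scaled steering vector to every token position's hidden state."""
    return hidden + alpha * v

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 0.1, size=(8, 4))    # activations on "positive" prompts
neg = rng.normal(-1.0, 0.1, size=(8, 4))   # activations on "negative" prompts
v = steering_vector(pos, neg)
h = rng.normal(size=(3, 4))                # hidden states for 3 token positions
steered = apply_steering(h, v, alpha=0.5)
```

In a real model this addition would happen inside a forward hook at a chosen layer; the worry in the post is precisely that this kind of single fixed direction may stop producing the intended behavioral change.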
Steering Humans
I have a strong intuition that we will not be able to steer a superintelligence very effectively, partially for the same reason that you probably can't steer a human very effectively. I think weakly "steering" a human looks a lot like an intrusive thought. People with weaker intrusive thoughts usually find them unpleasant, but generally don't act on them!
On the other hand, strong "steering" of a human probably looks like OCD, or a schizophrenic delusion. These things typically cause enormous distress, and make the person with them much less effective! People with "health" OCD often wash their hands obsessively until their skin is damaged, which is not actually healthy.
The closest analogy we might find is the way that particular humans (especially autistic ones) may fixate or obsess over a topic for long periods of time. This seems to lead to high capability in the domain of that topic as [...]
---
Outline:
(00:25) Steering Humans
(01:50) Steering Models
(03:01) Actually Steering Models
(05:28) Why now?
(06:48) Beyond Steering
---
First published:
April 5th, 2026
Source:
https://www.lesswrong.com/posts/fuzfbz8TbuLcskGCx/steering-might-stop-working-soon
---
Narrated by TYPE III AUDIO.