
Bankless | Illia Polosukhin: Why AI Agents Are Still Useless (And What Fixes Them) | NEAR Founder on IronClaw
Mar 24, 2026
Illia Polosukhin, NEAR co-founder and Transformer paper co-author, digs into why AI agents still feel clumsy. He explores the trust, security, privacy, and context limits holding them back. The conversation also dives into IronClaw, AI as the new interface, blockchains as the backend, and the rise of autonomous businesses and digital life forms.
AI Snips
IronClaw Wraps Agents In Policy And Sandboxing
- IronClaw secures agents with defense in depth instead of trusting model judgment alone.
- Encrypted credentials carry policies, tools run inside a WebAssembly VM, and prompt-injection, exfiltration, and approval checks stop unsafe emails, trades, or deletions.
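The defense-in-depth idea above can be sketched as a policy gate that every tool call passes through before it executes. All names below (`Policy`, `ToolCall`, the rule fields) are illustrative assumptions, not IronClaw's actual API:

```python
# Minimal sketch of a policy gate in front of agent tool calls.
# Names and rules here are hypothetical, not IronClaw's real interface.
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    tool: str    # e.g. "send_email", "place_trade", "delete_file"
    args: dict

@dataclass
class Policy:
    allowed_tools: set                                   # carried with the credential
    require_approval: set = field(default_factory=set)   # tools needing a human OK

def gate(call: ToolCall, policy: Policy, approved: bool = False) -> str:
    """Decide what happens to a tool call before it ever runs in the sandbox."""
    if call.tool not in policy.allowed_tools:
        return "deny"                    # outside the credential's policy
    if call.tool in policy.require_approval and not approved:
        return "hold_for_approval"       # pause until a human confirms
    return "execute"                     # permitted by policy

policy = Policy(allowed_tools={"send_email", "read_calendar"},
                require_approval={"send_email"})

print(gate(ToolCall("delete_file", {}), policy))        # deny
print(gate(ToolCall("send_email", {}), policy))         # hold_for_approval
print(gate(ToolCall("send_email", {}), policy, True))   # execute
```

The point of checking policy outside the model is that a prompt-injected agent can ask for anything it likes; the gate, not the model's judgment, decides what actually runs.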
Most Agent Setups Leak Your Secrets To Model Providers
- Illia Polosukhin says today's OpenClaw setups leak secrets because credentials get sent into Anthropic, OpenAI, or routing startups.
- IronClaw keeps keys out of the LLM loop, while NEAR's private AI stack aims to hide inference data from the model provider and hardware operator.
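One way to keep keys out of the LLM loop is a local credential broker: the model only ever sees an opaque placeholder, and the real secret is substituted on the user's machine just before the outbound call. This is a hedged sketch of that pattern; the vault, placeholder syntax, and function names are all assumptions, not how IronClaw necessarily does it:

```python
# Sketch: the model provider never receives the real credential.
# VAULT lives locally and is never included in any prompt.
VAULT = {"EXCHANGE_API_KEY": "sk-live-0000"}  # dummy value for illustration

def as_seen_by_model(tool_call: dict) -> dict:
    """The tool call the LLM reads and writes: placeholder only."""
    return {"tool": tool_call["tool"], "auth": "{{EXCHANGE_API_KEY}}"}

def broker_execute(call_from_model: dict) -> dict:
    """Local broker resolves the placeholder just before the request is sent."""
    auth = call_from_model["auth"]
    if auth.startswith("{{") and auth.endswith("}}"):
        auth = VAULT[auth[2:-2]]          # swap in the real key locally
    return {**call_from_model, "auth": auth}

model_view = as_seen_by_model({"tool": "place_trade", "auth": None})
print(model_view["auth"])                  # {{EXCHANGE_API_KEY}}
print(broker_execute(model_view)["auth"])  # sk-live-0000
```

Even if the model's context is logged by Anthropic, OpenAI, or a routing startup, only the placeholder leaks.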
User-Owned AI Needs Verifiable Confidential Infrastructure
- Illia Polosukhin describes a self-sovereign AI stack where users deploy agents into confidential enclaves on decentralized GPU compute.
- Hardware-backed attestations show what code is running, while MPC handles encryption and storage with only small inference overhead.
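The attestation step above can be reduced to one check: before sending data to an enclave, the client compares the enclave's reported code measurement against the hash of the code it expects to be running. Real TEE attestation also involves hardware-signed quotes; this sketch shows only the measurement comparison, with hypothetical names throughout:

```python
# Sketch of attestation's core check: does the enclave run the code we audited?
# Real attestation verifies a hardware-signed quote; this shows only the
# measurement comparison. All names are illustrative assumptions.
import hashlib

EXPECTED_CODE = b"agent-runtime-v1.2"  # the code the user audited and expects
EXPECTED_MEASUREMENT = hashlib.sha256(EXPECTED_CODE).hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    """Accept the enclave only if its measurement matches the audited code."""
    return reported_measurement == EXPECTED_MEASUREMENT

honest = hashlib.sha256(b"agent-runtime-v1.2").hexdigest()
tampered = hashlib.sha256(b"agent-runtime-evil").hexdigest()
print(verify_attestation(honest))    # True
print(verify_attestation(tampered))  # False
```

Because the measurement is a hash of the running code, the GPU operator cannot silently swap in a modified runtime without failing this check.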

