
Semi Doped MicroLEDs Ain’t Dead, Micron Snags Vera Rubin
Mar 20, 2026

They debate Jensen Huang's $250K-per-engineer AI token claim and whether companies will buy tokens or on-prem hardware. Micron's blockbuster earnings and a Vera Rubin design-in for HBM4 drive a deep dive into pin speeds, base-die tradeoffs, and big new fab bets. The conversation ends on optical interconnects, where microLEDs, VCSELs, and a new OCI-MSA standard vie as short-reach solutions.
AI Snips
Buy Tokens Or Token Generators
- Consider buying on-prem token generators instead of unlimited cloud token budgets.
- Austin suggests teams could buy Dell Pro Max–class machines (~$120k) to run inference locally and avoid unpredictable per-token OPEX.
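The buy-vs-rent tradeoff above is easy to sanity-check with arithmetic. A minimal sketch, where the ~$120k machine price comes from the episode but the token price, monthly volume, and power cost are illustrative assumptions:

```python
# Back-of-envelope break-even: on-prem box vs. per-token cloud spend.
# The $120k Dell Pro Max-class figure is from the episode; token price,
# monthly volume, and power cost below are illustrative assumptions.

def breakeven_months(hardware_cost: float,
                     tokens_per_month: float,
                     cloud_price_per_mtok: float,
                     onprem_power_per_month: float = 0.0) -> float:
    """Months until the on-prem hardware cost is recovered vs. cloud tokens."""
    cloud_monthly = tokens_per_month / 1e6 * cloud_price_per_mtok
    savings = cloud_monthly - onprem_power_per_month
    if savings <= 0:
        return float("inf")  # cloud stays cheaper at this usage level
    return hardware_cost / savings

# Example: 2B tokens/month at a hypothetical $10 per million tokens,
# $500/month for power and cooling.
months = breakeven_months(120_000, 2e9, 10.0, 500.0)
print(f"{months:.1f} months to break even")
```

At those assumed numbers the box pays for itself in roughly half a year; at a tenth the volume, cloud tokens stay cheaper for years, which is the crux of the debate.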
Tokens Will Be Shared Like EDA Licenses
- Shared on-prem GPU capacity will likely behave like EDA license pools with contention controls.
- Vikram compares token/compute clusters to LSF clusters where users submit jobs and wait for resources to free up.
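The EDA-license analogy above has a simple shape: a fixed pool of slots, and jobs that block until one frees up. A minimal sketch using a semaphore as the "license pool" (names and slot counts are illustrative, not from the episode):

```python
# EDA-license-style contention for shared GPU capacity: jobs block until
# a "license" (GPU slot) frees up, like submitting to an LSF queue.
import threading
import queue

class GpuLicensePool:
    """Fixed pool of GPU slots; run() blocks until a slot is free."""
    def __init__(self, slots: int):
        self._sem = threading.Semaphore(slots)

    def run(self, job):
        with self._sem:          # wait here if all slots are taken
            return job()

pool = GpuLicensePool(slots=2)   # e.g. two shared GPUs
results = queue.Queue()

def worker(i: int):
    results.put(pool.run(lambda: f"job-{i} done"))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results.queue))
```

Real schedulers like LSF add priorities, queues, and preemption on top, but the contention control is this same pattern: capacity is a counted resource, and excess demand waits.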
Running OpenClaw On A Home Server
- Vikram deployed OpenClaw on a home server inside a Docker container, behind a local reverse proxy and reachable over a Tailscale VPN.
- He fed it Claude credits and keeps the agent local to preserve memories and reduce token cost.
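The setup described above (agent in a container, local reverse proxy, Tailscale for remote access) can be sketched as a compose file. This is illustrative only: the image names, ports, env vars, and paths are guesses, not OpenClaw's actual distribution details.

```yaml
# Hypothetical docker-compose sketch of the home-server setup described
# in the episode. Image names, ports, and env vars are assumptions.
services:
  openclaw:
    image: openclaw/openclaw:latest    # hypothetical image name
    environment:
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}  # "fed it Claude credits"
    volumes:
      - ./memories:/data               # keep agent memory on local disk
    networks: [internal]

  proxy:
    image: caddy:2                     # any local reverse proxy works here
    ports:
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
    networks: [internal]

  tailscale:
    image: tailscale/tailscale:latest  # exposes the box over the tailnet
    environment:
      - TS_AUTHKEY=${TS_AUTHKEY}
    volumes:
      - tailscale-state:/var/lib/tailscale
    cap_add: [NET_ADMIN]
    networks: [internal]

networks:
  internal: {}
volumes:
  tailscale-state: {}
```

Keeping the agent and its memory volume on the home server, with Tailscale as the only way in, is what preserves state locally while avoiding a public exposure of the proxy.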
