
Limitless: An AI Podcast
Apple's Biggest AI Announcement This Week (Not MacBook Neo)
Mar 5, 2026

They unpack Apple's surprise AI play built into its new chip architecture and why chiplets could scale performance cheaply. They discuss hackers running transformers on Apple silicon and the potential for powerful local models. They weigh how billions of devices give Apple a distribution edge and whether software execution, like a revamped Siri, can match the hardware promise.
Modular GPU Scaling Is Apple’s Secret Weapon
- Apple adopted a chiplet-like architecture that pairs the same CPU die with a variable number of GPU chiplets across SKUs to scale performance.
- That Lego‑style modularity hints at even larger Ultra chips later this year for heavier local model workloads.
Early Hacks Show Massive On‑Device Efficiency Gains
- Desktop-class AI efficiency may already rival datacenter GPUs: one experiment ran a transformer on an M4 and claimed roughly 80x efficiency versus an NVIDIA A100 for some tasks.
- The example came from a lone hacker demoing on-device training and inference, not from Apple benchmarks.
Edge AI Enables Truly Personalized Agents
- Local models enable a new era of personalized intelligence that can access private data without sharing it with cloud providers.
- Apple’s hardware and ecosystem make persistent, personalized agents more feasible for everyday tasks like summarizing email.
