Ex-Google TPU Architect Says Current AI Chips Aren't Cut Out for LLMs
MatX CEO Reiner Pope breaks down why today's AI accelerators fall short for large language models.
Reiner Pope spent years designing Google's TPUs. Now he's betting that every major AI chip on the market is fundamentally wrong for running large language models.
Pope is co-founder and CEO of MatX, a startup building silicon purpose-built for LLMs. In a wide-ranging Q&A with Stripe co-founder John Collison, Pope laid out the case that general-purpose AI accelerators carry too much architectural baggage to efficiently handle the specific compute patterns LLMs demand.
The core argument: today's chips try to be good at everything. MatX wants to be great at one thing: running the transformer-based models that power modern AI. That means rethinking chip design from the ground up rather than bolting LLM optimizations onto existing architectures.
It's a bold bet against Nvidia, Google, and every other silicon giant. But Pope has the TPU credentials to back it up.