Most tech giants are hoarding Nvidia’s AI accelerators, but Google is playing a different game: its cloud AI infrastructure runs on custom Tensor Processing Units (TPUs), and it has now unveiled the eighth generation, the TPU8t and TPU8i.
The new TPUs are not just faster; they are built for scale. They are designed to train cutting-edge AI models quickly enough to shrink months-long training runs to weeks, and Google’s pods, which couple 9,600 chips with two petabytes of shared memory, show the scale of that ambition.
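Those two headline figures imply a generous memory budget per chip. A quick back-of-envelope calculation, using the article’s round numbers and decimal units, makes it concrete:

```python
# Back-of-envelope: shared memory per chip in one of the pods described
# above. Figures are the article's round numbers, in decimal units.
chips = 9_600
shared_memory_pb = 2
gb_per_chip = shared_memory_pb * 1_000_000 / chips
print(f"~{gb_per_chip:.0f} GB of shared memory per chip")  # ~208 GB
```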
This is about efficiency as much as raw speed. The TPU8t scales to clusters of up to a million chips, so even the largest models can be trained and served faster than before. There is a downside for everyone else, though: the memory these clusters devour comes from the same supply the rest of the market buys, so even if you are not building these giants yourself, you may still end up paying more for RAM.
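To picture what spreading a massive model across many chips looks like in software, here is a minimal JAX sketch of the general idea: sharding a weight matrix across a device mesh so that no single chip has to hold the whole thing. The mesh layout, shapes, and dtype are illustrative assumptions, not published TPU8t details.

```python
import jax
import jax.numpy as jnp
from jax.experimental import mesh_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1D mesh over whatever accelerators are attached; a real pod
# slice would expose many TPU chips here instead of a handful.
mesh = Mesh(mesh_utils.create_device_mesh((jax.device_count(),)), ("model",))

# Shard the weight matrix column-wise across the "model" axis so each
# chip stores only its slice in local memory.
weights = jax.device_put(
    jnp.zeros((8192, 8192), jnp.bfloat16),
    NamedSharding(mesh, P(None, "model")),
)

# XLA compiles the matmul into per-shard work plus the collectives
# needed to assemble the result; the caller never sees the sharding.
@jax.jit
def forward(x):
    return x @ weights

out = forward(jnp.ones((32, 8192), jnp.bfloat16))
```

The same pattern scales from a single host to a full pod: only the mesh shape and the partition specs change, while the model code stays the same.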
Google is betting on an “agent era,” in which AI is woven seamlessly into everyday workflows. The TPU8t and TPU8i are the hardware side of that vision: a platform meant to make both training and inference not just faster but more efficient.