TL;DR: Mark Zuckerberg announced that Meta is working on its Llama 4 model, expected to launch later this year, using a massive AI GPU cluster with over 100,000 NVIDIA H100 GPUs. This setup ...
Elon Musk has said xAI is using 100,000 of Nvidia's H100 GPUs to train its Grok chatbot, and he has talked up his AI startup's huge inventory of in-demand Nvidia chips. Now it's Mark Zuckerberg ...
TL;DR: Elon Musk's xAI is upgrading its Colossus AI supercomputer from 100,000 to 200,000 NVIDIA Hopper AI GPUs. Colossus, the world's largest AI supercomputer, is used to train xAI's Grok LLMs ...
The Blackwell data center GPU, initially announced by Nvidia in March, features a multi-chip architecture comprising two separate dies, each comparable in size to the previous generation H100 GPU ...
Although probably best-known for its Loongson series of processors (originally based on ...