Nvidia’s closest rival once again obliterates cloud giants in AI performance; Cerebras Inference is 75x faster than AWS, 32x faster than Google on Llama 3.1 405B
Cerebras hits 969 tokens/second on Llama 3.1 405B, 75x faster than AWS. Claims industry-low 240ms latency, twice as fast as…