Meta is using more than 100,000 Nvidia H100 AI GPUs to train Llama 4 — Mark Zuckerberg says that Llama 4 is being trained on a cluster “bigger than anything that I’ve seen”

Mark Zuckerberg said on a Meta earnings call earlier this week that the company is training Llama 4 models “on a cluster that is bigger than 100,000 H100 AI GPUs, or bigger than anything that I’ve seen reported for what others are doing.”…