By Tiernan Ray
Publication Date: 2025-12-15 14:00:00
ZDNET key takeaways
- Nvidia’s Nemotron 3 claims advances in accuracy and cost efficiency.
- Reports suggest Meta is leaning away from open-source technology.
- Nvidia argues it’s more open than Meta with data transparency.
Seizing upon a shift in the field of open-source artificial intelligence, chip giant Nvidia, whose processors dominate AI, has unveiled the third generation of its Nemotron family of open-source large language models.
The new Nemotron 3 family scales up the technology from its predecessors' one billion and 340 billion parameters — the number of neural weights — to three new models: Nano at 30 billion parameters, Super at 100 billion, and Ultra at 500 billion.
Also: Meta’s Llama 4 ‘herd’ controversy and AI contamination, explained
The Nano model, available now on the Hugging Face hosting platform, quadruples throughput in tokens per second and extends the context window — the amount of data that can be held in the model's working memory — to one million tokens, seven times as large as its predecessor's.
Nvidia emphasized that the models aim to address several concerns of enterprise users of generative AI, including accuracy and the rising cost of processing an ever-greater number of tokens each time AI makes a prediction.
“With Nemotron 3, we are aiming to solve those problems of openness,…