In June 2020, OpenAI, an independent artificial intelligence research laboratory based in San Francisco, announced GPT-3, the third generation of its massive Generative Pre-trained Transformer language model that can write anything from computer code to poetry.

A year later, with much less fanfare, the Beijing Academy of Artificial Intelligence released an even larger model, Wu Dao 2.0, with ten times as many parameters – the neural network values that encode information. While GPT-3 has 175 billion parameters, the developers of Wu Dao 2.0 claim it has 1.75 trillion. The model can also generate not only text, like GPT-3, but also images from text descriptions, like OpenAI's 12-billion-parameter DALL-E model, and it uses a scaling strategy similar to that of Google's 1.6-trillion-parameter Switch Transformer model.

Tang Jie, a professor at Tsinghua University who heads the Wu Dao project, said in a recent interview that the group is a …
