Roughly a year after OpenAI debuted GPT-3, researchers at the Beijing Academy of Artificial Intelligence (BAAI) announced on Tuesday the launch of their own generative deep-learning model, Wu Dao, a major AI breakthrough that can do everything GPT-3 can and more.
For starters, it is enormous: it has 1.75 trillion parameters, ten times as many as GPT-3’s 175 billion and roughly 150 billion more than Google’s Switch Transformer.
Wu Dao 2.0 arrived only three months after version 1.0’s release in March. To build it, the BAAI researchers first developed FastMoE, an open-source training system akin to Google’s Mixture of Experts.
This system, which runs on PyTorch, enabled the model to be trained both on supercomputer clusters and on conventional GPUs.
This gives FastMoE more flexibility than Google’s system: because it does not require proprietary hardware such as Google’s TPUs, it can run on off-the-shelf supercomputing clusters.
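The Mixture-of-Experts idea behind FastMoE can be sketched in a few lines. The toy below is purely illustrative and is not FastMoE’s actual API: all names (`experts`, `gate_w`, `moe_forward`) are hypothetical, the “experts” are single weight vectors rather than full feed-forward networks, and it uses simple top-1 gating. The point it shows is why such models can scale to trillions of parameters: a gate scores the experts for each input and routes it to only one of them, so most parameters stay inactive for any given input.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 4  # real MoE models use hundreds or thousands of experts
DIM = 8

# Each "expert" is a weight vector standing in for a full feed-forward network.
experts = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]
# The gate holds one score vector per expert.
gate_w = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(x):
    """Top-1 gating: route input x to the single highest-scoring expert."""
    scores = softmax([dot(w, x) for w in gate_w])
    k = max(range(NUM_EXPERTS), key=lambda i: scores[i])
    # Only expert k runs; its output is weighted by its gate probability.
    return k, scores[k] * dot(experts[k], x)

x = [random.gauss(0, 1) for _ in range(DIM)]
chosen, y = moe_forward(x)
print(f"expert {chosen} handled the input, output {y:.3f}")
```

Because only one expert fires per input, adding more experts grows the parameter count without growing the per-input compute, which is how MoE systems reach trillion-parameter scale on commodity GPUs.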
This opens up many possibilities, because Wu Dao is multimodal, much like Facebook’s hate-speech-detecting AI or Google’s recently released MUM.
BAAI researchers demonstrated Wu Dao’s abilities in natural language processing, text generation, image recognition and image generation during the lab’s annual conference on Tuesday.
The new model can not only write essays, poems and couplets in traditional Chinese but can also generate alt-text based on a static image and near-photorealistic images based on natural language descriptions.
Wu Dao also demonstrated its ability to power virtual idols and to predict the 3D structures of proteins, as DeepMind’s AlphaFold does.
For more information, read the original story in Engadget.