Meta unveils new AI-powered LLaMA models


Meta has announced a new family of large language models (LLMs) whose smaller variants can run on a single graphics processing unit (GPU) rather than a cluster of GPUs. One of them, LLaMA-13B, reportedly outperforms OpenAI’s GPT-3 despite being roughly ten times smaller.

The release is a collection of language models ranging from 7 billion to 65 billion parameters. In comparison, OpenAI’s GPT-3, which serves as the foundation for ChatGPT, has 175 billion parameters. LLaMA is not a chatbot in the traditional sense; it is a research tool that, according to Meta, will help researchers study and address known problems with AI language models. It was trained on publicly available datasets such as Common Crawl, Wikipedia, and C4, which means the company could potentially open source the model and its weights.

Smaller models trained on more tokens (word fragments) are easier to retrain and fine-tune for specific potential product use cases, according to Meta. Accordingly, LLaMA-65B and LLaMA-33B were trained on 1.4 trillion tokens, while LLaMA-7B, the smallest model, was trained on one trillion tokens.

LLaMA competes with similar offerings from rival AI labs DeepMind, Google, and OpenAI. Meta says LLaMA-13B outperforms GPT-3 across eight standard “common sense reasoning” benchmarks (BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, and OpenBookQA) while running on a single GPU. In contrast to the data-center requirements of GPT-3 derivatives, LLaMA-13B paves the way for ChatGPT-like performance on consumer-level hardware in the near future.
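A rough back-of-the-envelope sketch (our own arithmetic, not figures from Meta or Ars Technica) shows why a 13-billion-parameter model can fit on a single high-end GPU while a 175-billion-parameter model cannot:

```python
# Weights-only VRAM estimate for hosting a model at a given numeric precision.
# This ignores activations, KV cache, and framework overhead, so real
# requirements are somewhat higher, but the order of magnitude holds.

def weights_vram_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate GiB needed just to hold the model weights."""
    return n_params * bytes_per_param / 1024**3

# Both models stored in 16-bit floats (2 bytes per parameter).
llama_13b = weights_vram_gib(13e9, 2)    # ~24 GiB: fits one high-end GPU
gpt3_175b = weights_vram_gib(175e9, 2)   # ~326 GiB: needs a multi-GPU cluster

print(f"LLaMA-13B @ fp16: {llama_13b:.0f} GiB")
print(f"GPT-3 175B @ fp16: {gpt3_175b:.0f} GiB")
```

At 16-bit precision, LLaMA-13B’s weights occupy about 24 GiB, within reach of a single 24–48 GiB GPU, whereas GPT-3’s weights alone would need hundreds of gigabytes spread across many GPUs.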

“Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field,” said Meta in its official blog.

Meta refers to its LLaMA models as “foundational models,” implying that the company intends for the models to serve as the foundation for future, more refined AI models built on the technology, similar to how OpenAI built ChatGPT on a foundation of GPT-3. LLaMA, according to the company, will be useful in natural language research and potentially power applications such as “question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models.”

The sources for this piece include an article in Ars Technica.

