Meta unveils new AI-powered LLaMA models

Meta has announced the release of a new large language model that can run on a single graphics processing unit (GPU) rather than a cluster of GPUs. The model, LLaMA-13B, is an AI-powered large language model (LLM) that can reportedly outperform OpenAI’s GPT-3 despite being “10x smaller.”

LLaMA is in fact a family of language models with parameter counts ranging from 7 billion to 65 billion. In comparison, OpenAI’s GPT-3, which serves as the foundation for ChatGPT, has 175 billion parameters. LLaMA is not a chatbot in the traditional sense; it is a research tool that, according to Meta, could help address problems with AI language models. It was trained on publicly available datasets such as Common Crawl, Wikipedia, and C4, which means the company could potentially open source the model and its weights.

Smaller models trained on more tokens (word fragments) are easier to retrain and fine-tune for specific potential product use cases, according to Meta. Accordingly, LLaMA 65B and LLaMA 33B were trained on 1.4 trillion tokens, while LLaMA 7B, the smallest model, was trained on one trillion tokens.

LLaMA competes with similar offerings from rival AI labs DeepMind, Google, and OpenAI. LLaMA-13B is said to outperform GPT-3 across eight standard “common sense reasoning” benchmarks, including BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, and OpenBookQA, while running on a single GPU. In contrast to the data-center requirements of GPT-3 derivatives, that efficiency paves the way for ChatGPT-like performance on consumer-level hardware in the near future.
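To make the single-GPU claim concrete, here is a minimal sketch of loading a ~13B-parameter model for 16-bit inference on one GPU. It is illustrative only: the checkpoint path is a placeholder (Meta distributed the original LLaMA weights on request), and the use of the Hugging Face transformers library is an assumption, not part of Meta’s release.

```python
# A minimal sketch, not an official Meta example: loading a ~13B-parameter
# causal language model on one GPU with the Hugging Face transformers
# library. MODEL_PATH is a placeholder -- point it at your own converted
# checkpoint. Requires the accelerate package for device_map="auto".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/converted-llama-13b"  # hypothetical local checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH,
    torch_dtype=torch.float16,  # 13B params x 2 bytes ~= 26 GB of weights
    device_map="auto",          # let accelerate place layers on the GPU
)

prompt = "The theory of relativity states that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The back-of-the-envelope arithmetic in the comment is the point: at 16-bit precision, a 13-billion-parameter model’s weights fit within a single high-memory GPU, whereas a 175-billion-parameter model like GPT-3 cannot.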

“Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field,” Meta wrote in its official blog post.

Meta refers to its LLaMA models as “foundational models,” meaning the company intends them to serve as the basis for future, more refined AI models built on top of the technology, much as OpenAI built ChatGPT on a foundation of GPT-3. According to the company, LLaMA will be useful in natural language research and could potentially power applications such as “question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models.”

The sources for this piece include an article in Ars Technica.
