Meta unveils new AI-powered LLaMA models

Meta has announced the release of a new large language model that can run on a single graphics processing unit (GPU) rather than a cluster of GPUs. LLaMA-13B is a large language model (LLM) that, according to Meta, can outperform OpenAI’s GPT-3 despite being “10x smaller.”

The new release is actually a collection of language models with parameter counts ranging from 7 billion to 65 billion. In comparison, OpenAI’s GPT-3 model, which serves as the foundation for ChatGPT, has 175 billion parameters. LLaMA is not a chatbot in the traditional sense; it is a research tool that, according to Meta, will help researchers study and address known problems with AI language models. It was trained using publicly available datasets such as Common Crawl, Wikipedia, and C4, which means the company could potentially open source the model and weights.

Smaller models trained on more tokens (word fragments) are easier to retrain and fine-tune for specific potential product use cases, according to Meta. Accordingly, LLaMA 65B and LLaMA 33B were trained on 1.4 trillion tokens, while LLaMA 7B, the smallest model, was trained on one trillion tokens.

LLaMA competes with similar offerings from rival AI labs DeepMind, Google, and OpenAI. LLaMA-13B is also said to outperform GPT-3 when measured across eight standard “common sense reasoning” benchmarks, including BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, and OpenBookQA, while running on a single GPU. In contrast to the data center requirements of GPT-3 derivatives, LLaMA-13B paves the way for ChatGPT-like performance on consumer-level hardware in the near future.
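To see why model size is what decides whether a model fits on one GPU, a back-of-the-envelope calculation helps: the memory needed just to hold the weights is roughly the parameter count times the bytes per parameter at a given numeric precision. The sketch below uses the 13-billion-parameter figure from the article; the precision options and the helper function are illustrative, and real inference needs additional memory for activations.

```python
# Rough VRAM needed just to store model weights at various precisions.
# Excludes activation and cache overhead, so real usage is higher.
def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
    """Return approximate weight storage in gigabytes."""
    return num_params * bytes_per_param / 1e9

params_13b = 13e9  # LLaMA-13B parameter count, per the article

for label, nbytes in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    print(f"{label}: ~{weight_memory_gb(params_13b, nbytes):.1f} GB")
```

At 16-bit precision, 13B parameters come to roughly 26 GB of weights, which is within reach of a single high-end GPU; a 175B-parameter model at the same precision needs around 350 GB, hence the multi-GPU clusters required for GPT-3-class models.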

“Smaller, more performant models such as LLaMA enable others in the research community who don’t have access to large amounts of infrastructure to study these models, further democratizing access in this important, fast-changing field,” said Meta in its official blog.

Meta refers to its LLaMA models as “foundational models,” implying that the company intends for the models to serve as the foundation for future, more refined AI models built on the technology, similar to how OpenAI built ChatGPT on a foundation of GPT-3. LLaMA, according to the company, will be useful in natural language research and potentially power applications such as “question answering, natural language understanding or reading comprehension, understanding capabilities and limitations of current language models.”

The sources for this piece include an article in Ars Technica.
