Dell Technologies and Meta have teamed up to make it easier for customers to deploy Meta’s Llama 2 large language model (LLM) on-premises. The partnership lets customers run Llama 2 on their own IT infrastructure rather than accessing it via the cloud.
Dell is offering a portfolio of Validated Designs for Generative AI, which are pre-tested hardware builds that are jointly engineered with Nvidia. These designs are combined with deployment and configuration guidance to help customers get up and running quickly.
Dell has also integrated the Llama 2 models into its system sizing tools to help customers choose the right configuration for their needs.
“Generative AI models including Llama 2 have the potential to transform how industries operate and innovate,” said Jeff Boudreau, Dell’s chief AI officer. “With the Dell and Meta technology collaboration, we’re making open source GenAI more accessible to all customers, through detailed implementation guidance paired with the optimal software and hardware infrastructure for deployments of all sizes.”
According to Dell, the 7-billion-parameter version of Llama 2 can run on a single GPU, the 13-billion-parameter version needs two GPUs, and the 70-billion-parameter version calls for eight.
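Dell has not published the internals of its sizing tools, but the back-of-envelope arithmetic behind guidance like this can be sketched: model weights dominate memory, at roughly two bytes per parameter in FP16/BF16, plus overhead for activations and the KV cache. The function below is an illustrative assumption of mine, not Dell's methodology; the default values (2 bytes per parameter, 20% overhead, an 80 GB accelerator) are placeholders, and real GPU counts also depend on quantization, batch size, and context length.

```python
import math

def estimate_gpus(params_billion: float,
                  bytes_per_param: float = 2.0,   # FP16/BF16 weights (assumption)
                  overhead: float = 1.2,          # ~20% for activations/KV cache (assumption)
                  gpu_mem_gb: float = 80.0) -> int:  # e.g. an 80 GB accelerator (assumption)
    """Rough count of GPUs needed to hold an LLM's weights in memory.

    This is a simplistic sizing sketch, not Dell's sizing tool: it ignores
    tensor-parallel padding, framework overhead, and serving batch size.
    """
    needed_gb = params_billion * bytes_per_param * overhead
    return max(1, math.ceil(needed_gb / gpu_mem_gb))

# With these assumptions, a 7B model fits comfortably on one 80 GB GPU,
# while 70B spills across several.
print(estimate_gpus(7))   # 7B parameters
print(estimate_gpus(70))  # 70B parameters
```

Note that such an estimate will not reproduce Dell's exact figures: the two-GPU recommendation for the 13B model implies smaller-memory accelerators or more generous serving headroom than the defaults assumed here.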
The sources for this piece include an article in The Register.