Meta, Facebook’s parent company, has announced ambitions to build in-house infrastructure devoted to AI workloads, including generative AI.
The new device, dubbed the Meta Training and Inference Accelerator (MTIA), is the first in a family of chips intended to speed up AI workloads. MTIA is an ASIC that packs multiple circuits onto a single board and can be programmed to run many operations in parallel. By co-designing the chip with the model, software stack, and system hardware, Meta aims to build a tailored solution that improves efficiency and performance across its diverse services.
Previously, Meta relied on a mix of CPUs and a proprietary AI accelerator. In 2022, however, the company scrapped plans for a large-scale rollout of that custom processor and instead ordered billions of dollars' worth of Nvidia GPUs, a shift that forced major data center redesigns. In an effort to turn things around, Meta has announced plans for an in-house chip capable of both training and running AI models, expected to arrive in 2025.
According to Alexis Bjorlin, VP of Infrastructure at Meta, the advantage of building these capabilities in-house is control at every layer, from data center architecture to training frameworks. Meta views this vertical integration as critical to pushing the boundaries of AI research at scale, particularly in generative AI, an area where the company has struggled to turn ambitious research into viable products.
The sources for this piece include an article on TechCrunch.