Meta recently introduced custom processors, a new data center design, and a supercomputer built for AI workloads. The company is looking to gain and keep control over its entire AI infrastructure stack as it feels competitive pressure from other tech giants pushing ahead in AI.
The Meta Training and Inference Accelerator (MTIA) is an ASIC that, in its first generation, focuses on inference workloads, aiming to deliver more compute at higher efficiency and lower latency than general-purpose hardware for those jobs. It is designed to be used alongside GPUs, most likely Nvidia's, rather than replace them.
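To make the "accelerator alongside GPUs" idea concrete, here is a minimal Python sketch of a request router that sends small, latency-sensitive inference jobs to an MTIA-like ASIC and falls back to a GPU pool for everything else. The backend names, parameter threshold, and routing rule are illustrative assumptions, not Meta's actual scheduling logic or software stack.

```python
from dataclasses import dataclass

# Hypothetical capacity limit for the ASIC; real MTIA specs differ.
ASIC_MAX_PARAMS = 5_000_000_000  # 5B parameters

@dataclass
class InferenceRequest:
    model_name: str
    num_params: int           # size of the model to run
    latency_budget_ms: float  # how quickly the caller needs an answer

def pick_backend(req: InferenceRequest) -> str:
    """Route a request to the inference ASIC or to a GPU pool.

    Small, latency-sensitive models go to the ASIC; everything else
    falls back to GPUs. This mirrors the 'accelerator alongside GPUs'
    pattern described above, not Meta's actual routing logic.
    """
    if req.num_params <= ASIC_MAX_PARAMS and req.latency_budget_ms <= 10:
        return "mtia-like-asic"
    return "gpu-pool"

if __name__ == "__main__":
    ranking = InferenceRequest("feed_ranker", 1_200_000_000, 5.0)
    large_model = InferenceRequest("large_language_model", 70_000_000_000, 200.0)
    print(pick_backend(ranking))      # -> mtia-like-asic
    print(pick_backend(large_model))  # -> gpu-pool
```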
Meta also announced the arrival of its Research SuperCluster (RSC) AI supercomputer, which it says is one of the fastest AI training supercomputers in the world. It packs 16,000 GPUs into 2,000 training systems, linked by a three-level Clos network fabric that provides full bandwidth to each of those systems.
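As a back-of-the-envelope check on those headline numbers, the sketch below divides the GPU count by the number of training systems (giving 8 GPUs per node) and applies the textbook k^3/4 capacity formula for a non-blocking three-level folded Clos (fat-tree). The switch radix used here is an illustrative assumption, not a published RSC fabric spec.

```python
# Back-of-the-envelope numbers from the RSC figures above,
# plus a textbook fat-tree (three-level folded Clos) capacity check.

TOTAL_GPUS = 16_000
TRAINING_SYSTEMS = 2_000

gpus_per_system = TOTAL_GPUS // TRAINING_SYSTEMS
print(f"GPUs per training system: {gpus_per_system}")  # 8-GPU nodes

def fat_tree_endpoints(radix: int) -> int:
    """Maximum endpoints of a non-blocking three-level fat-tree
    built from switches with `radix` ports (classic k^3/4 result)."""
    return radix ** 3 // 4

# Assumed switch radix for illustration only; the article does not
# state what radix the RSC fabric actually uses.
ASSUMED_RADIX = 64
capacity = fat_tree_endpoints(ASSUMED_RADIX)
print(f"Radix-{ASSUMED_RADIX} fat-tree endpoints: {capacity}")  # 65,536
print(f"Enough headroom for {TOTAL_GPUS} GPUs? {capacity >= TOTAL_GPUS}")
```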
Meta says it is committed to delivering strong AI products and services, and these announcements underline its intent to keep its AI infrastructure stack under its own control.