AI Technology

Meta Unveils Four New MTIA AI Chips in Bold Move to Reduce NVIDIA Dependency

Meta has announced four new generations of its custom MTIA AI chips (300/400/450/500). The flagship MTIA 500 delivers 4.5x the HBM bandwidth and 25x the inference performance of its predecessor, accelerating the company's strategy to reduce its dependency on NVIDIA.

Meta · MTIA · AI Chips · Semiconductors · NVIDIA

On March 12, 2026, Meta announced four new generations of its custom-designed silicon chip, the MTIA (Meta Training and Inference Accelerator). The four models — MTIA 300, 400, 450, and 500 — are set to be rolled out over the next two years, marking a significant escalation in the company's strategy to reduce its reliance on external chip makers like NVIDIA and AMD.


Each of the four new models has been optimized for distinct AI workloads. The MTIA 300 is already in production and is primarily used for training ranking and recommendation systems. The MTIA 400 and 450 are designed with a focus on generative AI (GenAI) inference tasks and are scheduled for deployment from late 2026 through 2027.


The most noteworthy model is the flagship MTIA 500. This chip delivers a 4.5x improvement in HBM (High Bandwidth Memory) bandwidth and a 25x increase in inference performance compared to its predecessor, making it particularly well suited to large language model (LLM) inference. Meta has also adopted a rapid iteration cadence, releasing a new chip roughly every six months, positioning itself to respond quickly to market changes.
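To put the headline multipliers in rough context, the sketch below runs a toy roofline-style calculation for memory-bound LLM decoding. The baseline bandwidth and model size are purely illustrative assumptions, not Meta's published specifications; only the 4.5x and 25x multipliers come from the announcement.

```python
# Toy roofline sketch: how a bandwidth multiplier translates into
# throughput for memory-bound LLM decoding. Baseline numbers are
# ILLUSTRATIVE ASSUMPTIONS, not Meta specs; only the 4.5x and 25x
# multipliers are from the announcement.

BASELINE_HBM_GBPS = 1000.0   # assumed predecessor HBM bandwidth (GB/s)
BANDWIDTH_MULTIPLIER = 4.5   # stated MTIA 500 bandwidth gain
INFERENCE_MULTIPLIER = 25.0  # stated MTIA 500 inference gain

def tokens_per_second(bandwidth_gbps: float, model_bytes: float) -> float:
    """Memory-bound decode: each generated token streams the model's
    weights from HBM once, so throughput ~ bandwidth / model size."""
    return bandwidth_gbps * 1e9 / model_bytes

# Assume a 70B-parameter model at 8-bit weights (~70 GB), for illustration.
model_bytes = 70e9

base_tps = tokens_per_second(BASELINE_HBM_GBPS, model_bytes)
new_tps = tokens_per_second(BASELINE_HBM_GBPS * BANDWIDTH_MULTIPLIER,
                            model_bytes)

print(f"baseline: {base_tps:.1f} tok/s, with 4.5x bandwidth: {new_tps:.1f} tok/s")

# Bandwidth alone accounts for a 4.5x speedup; the rest of the stated
# 25x gain would have to come from compute, batching, and software.
residual = INFERENCE_MULTIPLIER / BANDWIDTH_MULTIPLIER
print(f"residual factor beyond bandwidth: {residual:.2f}x")
```

The point of the sketch is that for bandwidth-bound workloads, a 4.5x HBM improvement maps directly to a 4.5x throughput ceiling, so the remaining gap to the claimed 25x must be closed elsewhere in the stack.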


Meta's chip strategy is characterized by an inference-first approach. The company's AI infrastructure handles a massive volume of inference processing on a daily basis, powering recommendation algorithms for Facebook and Instagram, ad delivery systems, and the Meta AI assistant. By developing its own chips, Meta can process these workloads more efficiently and at a lower cost.


Furthermore, Meta plans to ensure smooth adoption by maintaining compatibility with existing software ecosystems through adherence to industry standards such as PyTorch and the Open Compute Project (OCP). This move is part of a broader trend of major tech companies accelerating in-house AI chip development, following Google, Amazon, and Microsoft, and has the potential to significantly reshape the NVIDIA-dominated AI chip market.
