MiniMax Releases M2.5: Open-Source Model Matches Frontier Performance at 1/20th the Cost of Claude Opus
Shanghai-based MiniMax releases M2.5, a 230-billion-parameter MoE model published under a modified MIT license, which achieves 80.2% on SWE-Bench Verified and runs at roughly one-twentieth the cost of Claude Opus 4.6.
On February 12, 2026, Shanghai-based AI startup MiniMax released M2.5, an open-source model that achieves near-frontier performance at approximately one-twentieth the cost of Anthropic's Claude Opus 4.6. The model is available on Hugging Face under a modified MIT license in two variants: M2.5 and M2.5 Lightning.
Performance at a Fraction of the Cost
M2.5 is a mixture-of-experts (MoE) model with 230 billion total parameters, of which roughly 10 billion are active per token. On SWE-Bench Verified, the standard evaluation for AI-assisted software engineering, M2.5 achieves 80.2%, placing it within range of closed-source frontier models. On Multi-SWE-Bench, which extends SWE-Bench to repositories in multiple programming languages, it scores 51.3%. On BrowseComp, a benchmark for agentic web browsing and hard-to-find information retrieval, it reaches 76.3%.
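That total-versus-active split is what drives the economics: a MoE router sends each token through only a small subset of experts, so per-token compute scales with the active parameters while model capacity scales with the total. The toy sketch below illustrates the mechanism; the expert count, dimensions, and top-k value are illustrative stand-ins, not M2.5's actual configuration.

```python
# Toy top-k MoE routing: only the chosen experts' weights touch each
# token, which is why a 230B-total model can run like a ~10B model.
# All shapes here are illustrative, not MiniMax's published config.
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_model, top_k = 64, 16, 2

token = rng.standard_normal(d_model)
router_w = rng.standard_normal((n_experts, d_model))
experts = rng.standard_normal((n_experts, d_model, d_model))

logits = router_w @ token                    # score every expert
chosen = np.argsort(logits)[-top_k:]         # keep only the top-k
weights = np.exp(logits[chosen])
weights /= weights.sum()                     # softmax over the winners

# Only top_k of n_experts weight matrices are used for this token.
output = sum(w * (experts[i] @ token) for w, i in zip(weights, chosen))
print(f"activated {top_k}/{n_experts} experts; output shape {output.shape}")
```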
The cost differential is the headline number: at approximately $1 per hour of inference at 100 tokens per second, M2.5 runs at roughly one-twentieth the cost of Claude Opus 4.6 for equivalent workloads. M2.5 also completes SWE-Bench Verified tasks 37% faster than its predecessor, M2.1, indicating that MiniMax is improving capability and efficiency simultaneously. The model was trained in more than 200,000 real-world environments spanning over 10 languages, suggesting a diverse training distribution.
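Taking the quoted figures at face value, and assuming the $1 per hour and 100 tokens per second hold as sustained averages, the per-token arithmetic works out as follows:

```python
# Back-of-the-envelope conversion of the quoted figures into a
# per-million-token cost. Assumes $1/hour and 100 tokens/second
# hold as sustained averages, which is an assumption.
COST_PER_HOUR_USD = 1.00
TOKENS_PER_SECOND = 100

tokens_per_hour = TOKENS_PER_SECOND * 3600            # 360,000
cost_per_million = COST_PER_HOUR_USD * 1_000_000 / tokens_per_hour

print(f"tokens per hour:  {tokens_per_hour:,}")
print(f"effective price: ~${cost_per_million:.2f} per million tokens")
# -> tokens per hour: 360,000; effective price: ~$2.78 per million tokens
```

At roughly $2.78 per million tokens under these assumptions, the one-twentieth comparison implies an Opus-class effective rate in the tens of dollars per million tokens for equivalent workloads.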
Open-Source Economics
The release under a modified MIT license means any organization can download, run, and modify M2.5 without licensing fees. For enterprises that currently pay API fees for frontier-class models, a competitive open-source alternative at a fraction of the operating cost puts immediate economic pressure on proprietary pricing. The gap between the best closed-source and the best open-source models continues to narrow, and every step of that convergence makes the premium that proprietary API vendors can charge harder to justify.
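In practice, "no licensing fees" amounts to a weights download plus a local inference server. The sketch below uses vLLM's offline API as one common way to serve open-weights checkpoints; the Hugging Face repository ID is a placeholder rather than a confirmed name, and a 230B-parameter MoE would still need a multi-GPU node.

```python
# Minimal self-hosting sketch using vLLM. The repo ID below is a
# placeholder; check the actual Hugging Face listing for the
# published name. A 230B MoE requires a multi-GPU server.
from vllm import LLM, SamplingParams

llm = LLM(
    model="MiniMaxAI/MiniMax-M2.5",  # placeholder repo ID
    tensor_parallel_size=8,          # shard the weights across 8 GPUs
)

params = SamplingParams(temperature=0.2, max_tokens=512)
outputs = llm.generate(
    ["Write a Python function that parses an ISO 8601 timestamp."],
    params,
)
print(outputs[0].outputs[0].text)
```

Once the weights are local, per-token API fees disappear and the operating cost reduces to GPU time, which is the basis of the cost comparison above.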
The Chinese Open-Source AI Wave
M2.5's release comes alongside Alibaba's Qwen 3.5 and ahead of DeepSeek V4, continuing a pattern in which Chinese AI companies release capable open-source models at rapid intervals. The combined effect is a growing library of freely available models that cover coding, reasoning, and multimodal tasks at performance levels that were exclusive to proprietary models just months ago. For developers and enterprises building AI-powered products, the practical implication is that the cost of accessing frontier-class AI capabilities is falling faster than most pricing forecasts anticipated.