
Mistral AI Releases Mistral 3 Family Under Apache 2.0 License

New model family includes 3B, 8B, and 14B dense models plus the Large 3 MoE with 675B parameters, ranking #2 among open-source non-reasoning models on LMArena.


TechDrop Editorial


Mistral AI has announced Mistral 3, a new model family released under the Apache 2.0 license. The release includes three dense models (3B, 8B, and 14B parameters) plus Mistral Large 3, the company's most capable model, with 675B total parameters in a sparse mixture-of-experts architecture.

Performance Benchmarks

Mistral Large 3 debuted at #2 in the open-source non-reasoning models category on the LMArena leaderboard (#6 among all OSS models). The model delivers 92% of GPT-5.2's performance at roughly 15% of the price, making it an attractive option for cost-conscious deployments.
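Taking the quoted figures at face value, the cost-effectiveness claim can be sketched as a quick back-of-the-envelope calculation (both percentages come from the announcement; real benchmark scores and per-token pricing will vary by workload):

```python
# Rough cost-effectiveness sketch using the article's quoted figures:
# "92% of GPT-5.2's performance at roughly 15% of the price".
relative_performance = 0.92  # fraction of GPT-5.2's benchmark score
relative_price = 0.15        # fraction of GPT-5.2's price

# Performance delivered per unit of cost, normalized to GPT-5.2 = 1.0
perf_per_dollar = relative_performance / relative_price
print(f"~{perf_per_dollar:.1f}x performance per dollar")  # ~6.1x
```

In other words, if both figures hold, the model delivers roughly six times more benchmark performance per dollar spent than the proprietary baseline.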

Ministral 3 for Edge Deployment

For edge and local use cases, Mistral released the Ministral 3 series in 3B, 8B, and 14B parameter sizes. Each comes in base, instruct, and reasoning variants with image understanding capabilities, providing flexibility for on-device AI applications.
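The release matrix described above (three sizes, each in three variants) works out to nine checkpoints. A minimal sketch, noting that the `Ministral-3-{size}-{variant}` naming below is a hypothetical label for illustration, not a confirmed model ID:

```python
from itertools import product

# Ministral 3 ships in three sizes, each in base/instruct/reasoning
# variants (per the announcement). The name format here is a
# hypothetical label for illustration, not a confirmed model ID.
sizes = ["3B", "8B", "14B"]
variants = ["base", "instruct", "reasoning"]

models = [f"Ministral-3-{size}-{variant}"
          for size, variant in product(sizes, variants)]

print(len(models))  # 9 checkpoints in total
```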

Efficiency Gains

Mistral Small 3 matches the capabilities of 70B-class models such as Meta's Llama 3.3 despite having only 24B parameters, and it runs more than 3x faster on the same hardware. This efficiency makes high-quality AI more accessible to developers without enterprise-grade infrastructure.
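To make the efficiency gap concrete, here is a back-of-the-envelope comparison of weight memory alone (parameter counts from the article; 2 bytes per parameter is standard for fp16/bf16 weights; activations, KV cache, and runtime overhead are deliberately ignored):

```python
def weights_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate memory needed just to hold the weights, in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# fp16/bf16 weights: 2 bytes per parameter
small_3 = weights_gb(24, 2)    # Mistral Small 3, 24B params -> ~48 GB
llama_70b = weights_gb(70, 2)  # Llama 3.3, 70B params       -> ~140 GB

print(f"24B model: ~{small_3:.0f} GB, 70B model: ~{llama_70b:.0f} GB")
print(f"~{llama_70b / small_3:.1f}x less weight memory")  # ~2.9x
```

The roughly 3x smaller memory footprint is what lets the 24B model fit on a single high-end GPU where the 70B model would need multiple devices, which is one plausible source of the quoted speed advantage.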

Open Source Commitment

By releasing under Apache 2.0, Mistral continues its commitment to open-source AI development. The permissive license allows commercial use, modification, and distribution, positioning these models as practical alternatives to proprietary offerings for organizations prioritizing openness and customization.
