NVIDIA Partners With Mistral AI to Accelerate New Family of Open Models

Today, Mistral AI announced the Mistral 3 family of open-source multilingual, multimodal models, optimized across NVIDIA supercomputing and edge platforms.  

Mistral Large 3 is a mixture-of-experts (MoE) model: instead of firing up every neuron for every token, it activates only the parts of the model with the most impact. The result is efficiency that delivers scale without waste and accuracy without compromise, making enterprise AI not just possible but practical.
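
To make that concrete, here is a minimal sketch of top-k MoE routing in Python. The expert count, k and dimensions are illustrative placeholders, not Mistral Large 3's actual configuration:

```python
import numpy as np

# Minimal sketch of mixture-of-experts (MoE) top-k routing. All sizes below
# are illustrative, not Mistral Large 3's real configuration.
rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

experts = [rng.standard_normal((d_model, d_model)) * 0.02 for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_layer(x):
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router                                # (tokens, n_experts) routing scores
    top = np.argsort(logits, axis=-1)[:, -top_k:]      # indices of the k best experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        scores = logits[t, top[t]]
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()                       # softmax over the selected experts only
        for w, e in zip(weights, top[t]):
            out[t] += w * (x[t] @ experts[e])          # only k of n_experts run per token
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 64): same output shape, ~k/n of the expert compute
```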

Mistral AI’s new models deliver industry-leading accuracy and efficiency for enterprise AI. They will be available everywhere, from the cloud to the data center to the edge, starting Tuesday, Dec. 2.

With 41B active parameters, 675B total parameters and a 256K-token context window, Mistral Large 3 delivers scalability, efficiency and adaptability for enterprise AI workloads.
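
A quick back-of-envelope calculation shows what those numbers imply: only about 6% of the model's weights participate in any given token.

```python
# Fraction of Mistral Large 3's parameters that are active per token.
active, total = 41e9, 675e9
print(f"{active / total:.1%} of weights active per token")  # -> 6.1%
```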

By combining NVIDIA GB200 NVL72 systems and Mistral AI’s MoE architecture, enterprises can efficiently deploy and scale massive AI models, benefiting from advanced parallelism and hardware optimizations.  

This combination marks a step toward what Mistral AI calls ‘distributed intelligence,’ bridging the gap between research breakthroughs and real-world applications.

The model’s granular MoE architecture unlocks the full performance benefits of large-scale expert parallelism by tapping into NVIDIA NVLink’s coherent memory domain and using wide expert parallelism optimizations.  
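
As a rough illustration of the communication pattern behind expert parallelism, the toy sketch below shards experts across devices and buckets tokens by the device that owns their expert. It models only the dispatch step, with made-up sizes; it is not NVLink or NVIDIA's actual kernels:

```python
import numpy as np

# Toy model of expert-parallel dispatch: experts are sharded across devices,
# and each token is bucketed by the device that owns its chosen expert
# (one all-to-all exchange in a real system). Counts are illustrative.
rng = np.random.default_rng(1)
n_devices, experts_per_device = 4, 2
n_experts = n_devices * experts_per_device

# Each device owns a contiguous slice of the expert set.
owner = {e: e // experts_per_device for e in range(n_experts)}

n_tokens = 8
assignment = rng.integers(0, n_experts, size=n_tokens)  # stand-in for the router

# Dispatch: group tokens by the device hosting their expert.
buckets = {d: [] for d in range(n_devices)}
for token, expert in enumerate(assignment):
    buckets[owner[expert]].append(token)

for device, tokens in buckets.items():
    print(f"device {device} runs tokens {tokens}")
# A second all-to-all would return each expert's output to the token's source.
```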

These benefits stack with accuracy-preserving, low-precision NVFP4 and NVIDIA Dynamo disaggregated inference optimizations, ensuring peak performance for large-scale training and inference. 
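
For intuition on the low-precision side, here is a simplified sketch of block-scaled 4-bit float quantization in the spirit of NVFP4: values snap to the FP4 (E2M1) grid with one scale per small block. The block size and scale handling are illustrative simplifications, not the exact NVFP4 specification:

```python
import numpy as np

# Simplified block-scaled FP4 quantization sketch, in the spirit of NVFP4.
# Block size and scale format are illustrative, not the exact spec.
FP4_GRID = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])  # E2M1 magnitudes
BLOCK = 16

def quantize_dequantize(x):
    """Quantize a 1-D array block by block, then reconstruct it."""
    out = np.empty_like(x)
    for i in range(0, len(x), BLOCK):
        block = x[i:i + BLOCK]
        scale = np.abs(block).max() / FP4_GRID[-1] or 1.0        # fit block into grid range
        scaled = np.abs(block) / scale
        idx = np.abs(scaled[:, None] - FP4_GRID[None, :]).argmin(axis=1)  # nearest code
        out[i:i + BLOCK] = np.sign(block) * FP4_GRID[idx] * scale
    return out

x = np.random.default_rng(2).standard_normal(64).astype(np.float32)
x_hat = quantize_dequantize(x)
print("mean abs error:", np.abs(x - x_hat).mean())
```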

On the GB200 NVL72, Mistral Large 3 achieved a 10x performance gain compared with the prior-generation NVIDIA H200. This generational gain translates into a better user experience, lower per-token cost and higher energy efficiency.

Mistral AI isn’t just advancing the state of the art for frontier large language models; it has also released nine small language models that help developers run AI anywhere.

The compact Ministral 3 suite is optimized to run across NVIDIA’s edge platforms, including NVIDIA DGX Spark, RTX PCs and laptops, and NVIDIA Jetson devices.

To deliver peak performance across NVIDIA GPUs at the edge, NVIDIA collaborates with top AI frameworks such as Llama.cpp and Ollama.

Today, developers and enthusiasts can try out the Ministral 3 suite via Llama.cpp and Ollama for fast and efficient AI on the edge.
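
As a hypothetical quickstart, the snippet below uses the official Ollama Python client; the model tag is a placeholder, since the published Ministral 3 names may differ:

```python
# Hypothetical quickstart with the official Ollama Python client. The model
# tag "ministral-3" is a placeholder; check the Ollama library for the
# actual Ministral 3 model names once published.
import ollama

response = ollama.chat(
    model="ministral-3",  # placeholder tag, assumed for illustration
    messages=[{"role": "user", "content": "Summarize mixture-of-experts in one line."}],
)
print(response["message"]["content"])
```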

The Mistral 3 family of models is openly available, empowering researchers and developers everywhere to experiment, customize and accelerate AI innovation while democratizing access to frontier-class technologies.   

By linking Mistral AI’s models to open-source NVIDIA NeMo tools for AI agent lifecycle development — Data Designer, Customizer, Guardrails and NeMo Agent Toolkit — enterprises can customize these models further for their own use cases, making it faster to move from prototype to production.
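
For example, a minimal NeMo Guardrails sketch might look like the following; the rails configuration directory and its contents are assumed here, and the other NeMo tools have their own workflows:

```python
# Minimal NeMo Guardrails sketch: wrap a model behind a rails configuration.
# The "./config" directory and its contents (model settings, rail
# definitions) are assumed; see the NeMo Guardrails docs for full examples.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # assumed rails + model configuration
rails = LLMRails(config)

reply = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(reply["content"])
```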

And to achieve efficiency from cloud to edge, NVIDIA has optimized inference frameworks including NVIDIA TensorRT-LLM, SGLang and vLLM for the Mistral 3 model family. 
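
As a hedged example, serving one of the models with vLLM could look like this; the model ID is a placeholder for the actual released checkpoint:

```python
# Sketch of serving a Mistral 3 checkpoint with vLLM. The model ID below is
# a placeholder; substitute the actual Hugging Face repo once released.
from vllm import LLM, SamplingParams

llm = LLM(model="mistralai/Mistral-Large-3")  # placeholder model ID
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Explain expert parallelism briefly."], params)
print(outputs[0].outputs[0].text)
```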

Mistral 3 is available today on leading open-source platforms and cloud service providers. In addition, the models are expected to be deployable soon as NVIDIA NIM microservices. 

Wherever AI needs to go, these models are ready. 

See notice regarding software product information. 
