Mistral AI has released Mathstral, a new language model built on Mistral 7B that specialises in STEM subjects, achieving state-of-the-art reasoning capabilities in its size category across various industry-standard benchmarks.

This release is part of Mistral AI's collaboration with Project Numina and their broader effort to support academic projects.

Mathstral is built on the foundation of Mistral 7B with a focus on STEM subjects. It achieves 56.6% accuracy on the MATH benchmark and 63.47% on MMLU, a significant improvement over Mistral 7B on STEM-related subjects. The model can do even better with more inference-time computation, reaching 68.37% on MATH with majority voting over 64 candidates and 74.59% when a strong reward model selects among those candidates.
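To make the majority-voting setup concrete, here is a minimal Python sketch. The `sample_answer` callable is a hypothetical stand-in for whatever call samples one candidate solution from the model; the voting itself is just a frequency count over the candidates' final answers.

```python
from collections import Counter

def majority_vote(sample_answer, question: str, n_candidates: int = 64) -> str:
    """Sample n candidate solutions and return the most common final answer.

    `sample_answer` is a hypothetical callable that queries the model once
    (with sampling temperature > 0) and returns its final answer as a string.
    """
    answers = [sample_answer(question) for _ in range(n_candidates)]
    # The answer appearing most often across the candidates wins the vote.
    winner, _count = Counter(answers).most_common(1)[0]
    return winner
```

A reward-model setup works the same way up front, but instead of counting answers, a separate scoring model ranks the 64 candidates and the highest-scoring one is returned.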

Mathstral is an instructed model, ready to be used out of the box or fine-tuned; Mistral AI's documentation covers both paths. The model weights are hosted on HuggingFace, and the model can be deployed with mistral-inference or adapted with mistral-finetune.
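As a starting point, the sketch below shows one way to fetch the weights from HuggingFace with the `snapshot_download` helper from `huggingface_hub`. The repo id `mistralai/mathstral-7B-v0.1` and the file list are assumptions based on Mistral's usual packaging; check the model card for the authoritative names.

```python
# Minimal sketch: download the Mathstral weights from HuggingFace.
# The repo id and file names below are assumptions; verify on the model card.
from pathlib import Path
from huggingface_hub import snapshot_download

models_path = Path.home() / "mistral_models" / "mathstral-7B-v0.1"
models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(
    repo_id="mistralai/mathstral-7B-v0.1",  # assumed HuggingFace repo id
    allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"],
    local_dir=models_path,
)
```

From there, the `mistral-chat` CLI installed by `pip install mistral-inference` can serve the downloaded directory interactively, e.g. `mistral-chat ~/mistral_models/mathstral-7B-v0.1 --instruct --max_tokens 256` (flags as documented for mistral-inference; adjust to your install).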

This release underscores Mistral AI's philosophy of building specialised models for specific purposes, a direction also promoted through la Plateforme, which now offers fine-tuning capabilities.

With the release of Mathstral, Mistral AI continues to push the boundaries of specialised language models, offering researchers and developers a powerful tool for tackling complex mathematical and scientific problems.


