In an announcement published this week, Mistral introduced Mixtral 8x22B, the latest addition to its family of open models, which the company says sets a new standard for performance and efficiency, offering unparalleled capabilities and cost-effectiveness.

Mixtral 8x22B is a sparse Mixture-of-Experts (SMoE) model that utilises only 39B active parameters out of its total 141B, making it incredibly cost-efficient for its size.
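To illustrate why only a fraction of the parameters is active per token, here is a minimal toy sketch (not Mistral's actual implementation) of a sparse Mixture-of-Experts layer in PyTorch: a router scores a set of expert feed-forward networks and only the top-scoring ones run for each token, so most of the layer's weights sit idle on any given forward pass. The layer sizes and top-2 routing below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy sparse MoE layer: a router picks the top-k experts per token,
    so only a fraction of the total parameters is used per forward pass."""
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                       # x: (tokens, d_model)
        gate_logits = self.router(x)            # (tokens, n_experts)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalise over the selected experts only
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e        # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 64)
print(TopKMoE()(tokens).shape)                  # torch.Size([4, 64])
```

In a model like Mixtral, layers of this kind replace the dense feed-forward blocks of a standard transformer, which is how the active parameter count (39B) can be far smaller than the total parameter count (141B).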

Mixtral 8x22B is a multilingual model fluent in English, French, Italian, German, and Spanish, with strong mathematics, coding, and native function calling capabilities. It outperforms other leading open models on various industry benchmarks, excelling in common-sense reasoning, knowledge-based tasks, multilingual evaluation, and coding and mathematics. The model's 64K token context window allows precise information recall from large documents, making it suitable for tasks requiring long-range dependencies and extensive contextual understanding (see the usage sketch below).
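For readers who want to try the model, here is a minimal sketch of loading the released weights through the Hugging Face `transformers` library. The repository id below is an assumption rather than something stated in the announcement, and the full 141B-parameter checkpoint requires multiple high-memory GPUs to run in bfloat16.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x22B-Instruct-v0.1"  # assumed Hugging Face Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # 141B parameters: plan for several GPUs or offloading
    device_map="auto",
)

# A multilingual prompt (French), playing to the model's advertised language coverage.
prompt = "Résume en une phrase les avantages d'un modèle Mixture-of-Experts."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```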

Mixtral 8x22B is released under the Apache 2.0 licence, the most permissive open-source licence, allowing anyone to use the model anywhere without restrictions.
