Mistral
Mistral Large 2, a new flagship model, offers a 128,000-token context window and supports dozens of languages. It is stronger at coding, math, and reasoning, and produces fewer hallucinated answers.
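Context windows are measured in tokens, not words. A minimal sketch of checking whether a prompt fits the window, using the common (approximate) rule of thumb of ~4 characters per token for English text; `fits_in_context` is a hypothetical helper, not part of any Mistral SDK:

```python
# Rough check of whether a prompt fits a 128k-token context window.
# Uses the ~4 characters-per-token heuristic for English text; a real
# deployment would count tokens with the model's actual tokenizer.

CONTEXT_WINDOW = 128_000  # Mistral Large 2 context size, in tokens

def estimate_tokens(text: str) -> int:
    """Approximate token count from character length."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_for_output: int = 1_000) -> bool:
    """True if the prompt leaves room for the reply within the window."""
    return estimate_tokens(prompt) + reserved_for_output <= CONTEXT_WINDOW

print(fits_in_context("Summarise this report." * 10))  # short prompt: True
print(fits_in_context("x" * 600_000))                  # ~150k tokens: False
```

In practice the heuristic only gates an early warning; the serving layer would enforce the real limit.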
NVIDIA's new NeMo Retriever microservices improve AI response accuracy by efficiently fetching relevant business data to ground model answers, reducing inaccurate responses by roughly 30% in enterprise settings.
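Retrieval systems like this rank stored documents by embedding similarity to the query. A toy sketch of that ranking step in plain Python; the hand-made 3-d "embeddings" and the `retrieve` helper are illustrative only, not NeMo Retriever's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=1):
    """Return the top_k documents most similar to the query embedding."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]),
                    reverse=True)
    return [d["text"] for d in ranked[:top_k]]

# Toy corpus with hand-made 3-d "embeddings"; a real system would use
# a learned embedding model over the business documents.
corpus = [
    {"text": "Q3 revenue grew 12%.",   "vec": [0.9, 0.1, 0.0]},
    {"text": "Office dress code.",     "vec": [0.0, 0.2, 0.9]},
    {"text": "FY24 earnings summary.", "vec": [0.8, 0.3, 0.1]},
]

print(retrieve([1.0, 0.2, 0.0], corpus, top_k=2))
```

The retrieved passages are then placed in the model's prompt, so answers cite real data instead of being invented.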
Mistral AI and NVIDIA release Mistral NeMo, a 12-billion-parameter model with a 128k-token context window, built for enterprise use and easy customisation. It excels at reasoning, world knowledge, and coding across multiple languages, outperforming models of a similar size.
Mistral AI releases Mathstral, a STEM-focused language model built on Mistral 7B. It excels at mathematical reasoning, achieving state-of-the-art results for its size on the MATH and MMLU benchmarks, and is available for use and fine-tuning.
Codestral Mamba, built on the Mamba architecture, offers linear-time inference and can in principle handle sequences of unbounded length. It excels at code generation and reasoning, matching state-of-the-art transformer-based models.
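The "linear time" claim is about sequence length: self-attention compares every token with every other token, so its cost grows quadratically, while a state-space model like Mamba updates a fixed-size state per token, so its cost grows linearly. A back-of-the-envelope comparison; the constants below are illustrative placeholders, not measured Codestral Mamba figures:

```python
def attention_cost(n_tokens: int) -> int:
    """Pairwise token interactions in self-attention: O(n^2)."""
    return n_tokens * n_tokens

def ssm_cost(n_tokens: int, state_size: int = 16) -> int:
    """Per-token fixed-size state update in an SSM: O(n)."""
    return n_tokens * state_size

# The gap widens as the sequence grows, which is why linear-time models
# are attractive for very long inputs.
for n in (1_000, 10_000, 100_000):
    ratio = attention_cost(n) / ssm_cost(n)
    print(f"{n:>7} tokens: attention/SSM cost ratio ~ {ratio:,.0f}x")
```

This is only an asymptotic sketch; real throughput also depends on hardware, batch size, and implementation details.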