Mistral
NVIDIA's Mistral-NeMo-Minitron 8B uses pruning and distillation to compress Mistral NeMo down to 8 billion parameters while retaining high accuracy, and it runs on RTX workstations.
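As an illustration of the distillation half of that recipe, here is a minimal sketch of a logit-distillation loss in PyTorch: the smaller student model is trained to match the larger teacher's softened output distribution. This is a generic formulation, not NVIDIA's exact Minitron training recipe, and the tensor shapes are purely illustrative.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student distributions."""
    # Soften both distributions; a higher temperature spreads probability mass.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2

# Example: teacher (12B) and student (pruned 8B) logits over the same vocabulary.
teacher_logits = torch.randn(4, 32000)                        # batch of 4, illustrative vocab size
student_logits = torch.randn(4, 32000, requires_grad=True)
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()
```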
Mistral AI enables model customisation via La Plateforme and introduces AI Agents, allowing developers to create complex, shareable workflows.
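As a rough sketch of how such an agent might be invoked from code, the snippet below uses the `mistralai` Python client; the `agents.complete` call and the placeholder agent ID reflect one reading of the SDK and should be checked against Mistral's current documentation.

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

# "ag:your-agent-id" is a placeholder for an agent created on La Plateforme.
response = client.agents.complete(
    agent_id="ag:your-agent-id",
    messages=[{"role": "user", "content": "Summarise yesterday's support tickets."}],
)
print(response.choices[0].message.content)
```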
Mistral Large 2, Mistral AI's new flagship model, handles a 128,000-token context window and dozens of languages. It improves on coding, math, and reasoning, and produces fewer factual errors.
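For reference, a long document can be passed directly in a single chat request; the sketch below uses the `mistralai` Python client, and the `mistral-large-latest` alias is assumed to point at Mistral Large 2.

```python
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

with open("annual_report.txt") as f:       # any document that fits in the 128k-token window
    document = f.read()

response = client.chat.complete(
    model="mistral-large-latest",          # alias assumed to resolve to Mistral Large 2
    messages=[{"role": "user", "content": f"Summarise the key risks:\n\n{document}"}],
)
print(response.choices[0].message.content)
```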
NVIDIA's new NeMo Retriever microservices improve AI accuracy by fetching relevant business data to ground model responses; NVIDIA reports they reduce inaccurate answers by 30% in enterprise settings.
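The retrieval step behind that claim is essentially embedding-based search over enterprise documents. The sketch below shows the general pattern against an OpenAI-compatible embedding endpoint; the local URL and model name are placeholders, not the actual NeMo Retriever configuration.

```python
import numpy as np
from openai import OpenAI

# NIM-style services expose OpenAI-compatible endpoints; URL and model name are placeholders.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")

docs = ["Q2 revenue grew 12%.", "The new expense policy takes effect in March."]
query = "When does the new expense policy start?"

def embed(texts):
    out = client.embeddings.create(model="embedding-model", input=texts)
    return np.array([item.embedding for item in out.data])

doc_vecs, query_vec = embed(docs), embed([query])[0]

# Cosine similarity picks the most relevant document to ground the answer.
scores = doc_vecs @ query_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
print(docs[int(scores.argmax())])
```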
Mistral AI and NVIDIA release Mistral NeMo, a 12B-parameter model with a 128k-token context window. It excels in reasoning, world knowledge, and coding across multiple languages, outperforming similarly sized models.
Mistral AI and NVIDIA launch Mistral NeMo 12B, a 12-billion-parameter language model aimed at enterprise use that excels at diverse tasks and is easy to customise.
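For local experimentation, the instruct checkpoint can be loaded with Hugging Face transformers; the repository ID below assumes the weights are published as mistralai/Mistral-Nemo-Instruct-2407.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"   # assumed Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short SQL query that lists overdue invoices."}]
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```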