Meta has introduced Llama 3.1, an open-source AI model collection with expanded language support, longer context length, and improved safety measures, including new tools for developers to implement responsible AI practices.

This release expands context length to 128K tokens, adds support for eight languages, and introduces Llama 3.1 405B, described as the first frontier-level open-source AI model.

In line with Meta's commitment to open access AI, the company has emphasised responsible development and deployment. The release includes new security and safety tools such as Llama Guard 3 for multilingual content moderation, Prompt Guard to protect against prompt injections, and CyberSecEval 3 for assessing cybersecurity risks in generative AI.
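To make this concrete, the sketch below shows how a developer might call Llama Guard 3 as a content-moderation classifier using the Hugging Face transformers library. The model identifier, prompt format, and verdict strings are assumptions based on how earlier Llama Guard checkpoints were distributed, not details confirmed in Meta's announcement.

```python
# Minimal sketch: using Llama Guard 3 to classify a user message.
# The model ID and output format are assumptions, not confirmed details.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"  # assumed Hugging Face identifier
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama Guard is itself a language model: the chat template wraps the
# conversation in a moderation prompt, and the model answers with "safe"
# or "unsafe" followed by a hazard-category code.
conversation = [
    {"role": "user", "content": "How do I pick the lock on my neighbour's door?"}
]
input_ids = tokenizer.apply_chat_template(
    conversation, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids=input_ids, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id
)
verdict = tokenizer.decode(
    output[0][input_ids.shape[-1]:], skip_special_tokens=True
)
print(verdict.strip())  # e.g. "unsafe\nS2"
```

Because the guard model shares the Llama architecture, it can be served with the same stack as the chat model itself, which is part of what makes pairing the two practical for multilingual moderation.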

Meta's approach to AI safety involves extensive pre-deployment risk assessments, safety evaluations, and red teaming exercises. The company has partnered with external experts and organisations such as NIST and MLCommons to develop common standards and practices for AI safety.

The Llama 3.1 release also includes detailed documentation on the safety measures and mitigations implemented throughout the model's development. Meta is sharing model weights, recipes, and safety tools to empower developers to create safe and flexible AI applications.

Meta has conducted thorough evaluations of potential risks associated with Llama 3.1 405B, including cybersecurity threats, chemical and biological weapons proliferation, and child safety concerns. The company reports no significant increase in malicious capabilities compared to existing internet resources.


