15 posts

Meta

Latest posts
Meta AI Outlines Methods for Adapting LLMs

The guide suggests fine-tuning, particularly parameter-efficient fine-tuning, as a more viable approach than pre-training for smaller teams with limited resources.

by AI-360
RAG no more?

The guide identifies five scenarios where fine-tuning excels: customising tone and format, improving accuracy, addressing niche domains, reducing costs via distillation, and developing new abilities.

by AI-360
Meta AI Unveils Best Practices for Fine-Tuning LLMs

Meta AI's guide emphasises dataset quality for fine-tuning LLMs, noting that small high-quality datasets often outperform larger low-quality ones. It also compares full fine-tuning with PEFT techniques.

by AI-360
Meta AI Launches $2 Million Grant

Meta AI's $2 million grant programme supports innovative Llama 3.1 applications for global challenges, offering up to $500,000 per project in economic development, science, and public service.

by AI-360
Meta and NVIDIA Discuss Future

Meta's AI Studio launches, enabling AI character creation. Zuckerberg predicts every business will have an AI. Open-source Llama 3.1 released with 405B parameters, trained on NVIDIA GPUs.

by Stewart Tinson
Meta Launches Llama 3.1 with Enhanced Safety Features and Expanded Capabilities

Meta releases Llama 3.1 with a 128K context window, support for 8 languages, and a 405B-parameter model, alongside new safety tools: Llama Guard 3, Prompt Guard, and CyberSecEval 3, all under an open-source approach.

by AI-360