Meta
Meta AI's guide recommends fine-tuning, particularly parameter-efficient fine-tuning (PEFT), as a more viable approach than pre-training for smaller teams with limited resources.
The guide identifies five scenarios where fine-tuning excels: customising tone and format, improving accuracy, addressing niche domains, reducing costs via distillation, and developing new abilities.
The guide also emphasises dataset quality, noting that small, high-quality datasets often outperform larger, low-quality ones, and compares full fine-tuning with PEFT techniques.
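As a rough illustration of the PEFT approach the guide discusses, the sketch below attaches LoRA adapters to a base model with the Hugging Face peft library. The base model name, adapter rank, and target modules are illustrative assumptions, not values taken from Meta's guide.

```python
# Minimal LoRA (parameter-efficient fine-tuning) sketch using Hugging Face peft.
# Model name and hyperparameters are illustrative assumptions, not from the guide.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed base model (gated on Hugging Face)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# LoRA trains small low-rank adapter matrices instead of the full weight set.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,                                  # adapter rank
    lora_alpha=32,                         # adapter scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

# From here, training proceeds with a standard Trainer/SFTTrainer loop over a
# small, high-quality instruction dataset, in line with the guide's advice.
```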
Meta AI's $2 million grant programme supports innovative Llama 3.1 applications that address global challenges, offering up to $500,000 per project in areas such as economic development, science, and public service.
Meta launches AI Studio, which lets users create custom AI characters; Zuckerberg predicts every business will have its own AI. The open-source Llama 3.1 405B model was trained on over 16,000 NVIDIA H100 GPUs.
Meta releases Llama 3.1 with a 128K-token context window, support for eight languages, and a 405B-parameter flagship model, alongside new safety tools (Llama Guard 3, Prompt Guard, and the CyberSecEval 3 security evaluation suite), continuing its open-source approach.
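For context on how one of these safety tools is typically used, here is a hedged sketch of running Llama Guard 3 as a chat-moderation classifier via transformers. The example message and generation settings are illustrative, and the checkpoint is gated behind Meta's licence acceptance on Hugging Face.

```python
# Sketch: classifying a user message with Llama Guard 3 via transformers.
# Assumes access to the gated meta-llama/Llama-Guard-3-8B checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The tokenizer's chat template wraps the conversation in Llama Guard's
# safety-classification prompt; the model answers "safe" or "unsafe" plus a category.
chat = [{"role": "user", "content": "How do I pick a strong passphrase?"}]
input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
output = model.generate(input_ids=input_ids, max_new_tokens=20,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```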