Meta
Llama models near 350M downloads, with 20M in the past month. Usage doubled from May to July 2024, and major firms are integrating Llama-based AI.
Infosys integrates Llama 3.1 into its Topaz platform for document, video, and audio processing; it also powers a legal assistant that uses retrieval-augmented generation (RAG) to return cited answers.
Meta AI's new fine-tuning guide positions fine-tuning, particularly parameter-efficient fine-tuning (PEFT), as a more viable approach than pre-training for smaller teams with limited resources.
It identifies five scenarios where fine-tuning excels: customising tone and format, improving accuracy, handling niche domains, reducing costs via distillation, and developing new capabilities.
The guide also emphasises dataset quality, noting that small, high-quality datasets often outperform larger, lower-quality ones, and compares full fine-tuning with PEFT techniques.
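To illustrate the parameter-efficiency point, here is a minimal sketch of attaching LoRA adapters to a Llama 3.1 model with the Hugging Face transformers and peft libraries; the model id, target modules, and hyperparameters are illustrative assumptions, not the guide's own recipe.

```python
# Minimal PEFT (LoRA) setup sketch; values are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3.1-8B"  # assumed model id; gated, requires access approval
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank update matrices instead of all model weights.
lora = LoraConfig(
    r=16,                                  # rank of the low-rank updates
    lora_alpha=32,                         # scaling factor for the updates
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

With this configuration only the adapter weights are updated during training, which is why PEFT fits the hardware and data budgets of smaller teams far better than full fine-tuning or pre-training.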
Meta AI's $2 million grant programme supports innovative Llama 3.1 applications for global challenges, offering up to $500,000 per project in economic development, science, and public service.