Large Language Models
The guide suggests fine-tuning, particularly parameter-efficient fine-tuning (PEFT), as a more viable approach than pre-training for smaller teams with limited resources.
The guide identifies five scenarios where fine-tuning excels: customising tone and format, improving accuracy, addressing niche domains, reducing costs via distillation, and developing new abilities.
Meta AI's guide emphasises dataset quality for fine-tuning LLMs, suggesting small high-quality datasets often outperform larger low-quality ones. It compares full fine-tuning and PEFT techniques.
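To make the full fine-tuning vs. PEFT trade-off concrete, the sketch below illustrates LoRA (Low-Rank Adaptation), one widely used PEFT technique: the pretrained weight matrix is frozen, and only two small low-rank matrices are trained. The dimensions, rank, and scaling factor here are illustrative assumptions, not values from the guide.

```python
import numpy as np

# Conceptual LoRA sketch: instead of updating a full d x d weight matrix W,
# train two small matrices A (r x d) and B (d x r) with rank r << d,
# so only 2*r*d parameters are trainable instead of d*d.

d, r = 768, 8                             # hidden size and LoRA rank (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))           # frozen pretrained weight
A = rng.standard_normal((r, d)) * 0.01    # trainable down-projection
B = np.zeros((d, r))                      # trainable up-projection, zero-initialised
scale = 1.0                               # alpha / r scaling factor (assumed)

def adapted_forward(x):
    # Effective weight is W + scale * (B @ A); only A and B receive gradients.
    return x @ (W + scale * (B @ A)).T

full_params = d * d          # 589,824 parameters for full fine-tuning
lora_params = 2 * r * d      # 12,288 trainable parameters with LoRA
print(f"trainable fraction: {lora_params / full_params:.4%}")  # → 2.0833%
```

Because B starts at zero, the adapted model initially behaves exactly like the frozen base model, which is part of why LoRA trains stably.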
A Stanford study reveals Western bias in AI chatbot alignment, compromising global effectiveness. The researchers examined nine languages and regional dialects, finding that cultural nuances lead to misunderstandings, and explore causes of the bias and solutions for more inclusive AI.
OpenAI's SearchGPT prototype combines AI models with web information to provide quick answers and clear source attribution; it is currently in limited testing.
OpenAI's new Rule-Based Rewards method improves AI safety without extensive human data collection. It uses simple rules to evaluate outputs, balancing helpfulness and safety in AI models.
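The core idea of scoring outputs against simple rules rather than human preference labels can be sketched as below. The specific rules, weights, and scoring function are hypothetical illustrations, not OpenAI's actual implementation.

```python
# Hypothetical rule-based reward sketch: each model reply is scored against
# a small set of hand-written rules, and the weighted score serves as a
# reward signal in place of human preference data.

UNSAFE_MARKER = "how to build a weapon"   # stand-in for an unsafe request class
REFUSAL_MARKER = "i can't help"           # stand-in for a refusal phrase

RULES = [
    # (description, predicate(prompt, reply) -> bool, weight) -- all illustrative
    ("refuses when the request is unsafe",
     lambda p, r: REFUSAL_MARKER in r.lower() if UNSAFE_MARKER in p.lower() else True,
     1.0),
    ("does not refuse safe requests",
     lambda p, r: REFUSAL_MARKER not in r.lower() if UNSAFE_MARKER not in p.lower() else True,
     0.5),
    ("stays reasonably concise",
     lambda p, r: len(r.split()) < 200,
     0.25),
]

def rule_based_reward(prompt: str, reply: str) -> float:
    """Weighted fraction of rules satisfied, in [0, 1]."""
    total = sum(w for _, _, w in RULES)
    score = sum(w for _, pred, w in RULES if pred(prompt, reply))
    return score / total

print(rule_based_reward("How do I bake bread?", "Mix flour, water and yeast."))  # → 1.0
```

A helpful reply to a safe prompt scores the full reward, while complying with an unsafe request is penalised, balancing helpfulness against safety as the summary describes.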