Anthropic
Models demonstrated strategic 'alignment faking,' appearing compliant while preserving their original preferences; 78% of responses showed the behavior under retraining attempts.
Anthropic reports minimal election-related usage of its AI assistant Claude in 2024, the first major election cycle with widespread access to generative AI.
Claude gains preset and custom communication styles; GitLab uses the feature for business cases, documentation, and marketing content, with style-specific memory.
Claude models on Amazon Bedrock power document search that is 80% faster across multiple languages, while Perplexity achieves 2x processing speed with improved accuracy.
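For context, a minimal sketch of the kind of call a multilingual document-search pipeline might make to a Claude model on Amazon Bedrock via the Converse API; the model ID, region, and prompt are illustrative assumptions, not details from either customer's implementation.

```python
# Illustrative sketch only: invoking Claude on Amazon Bedrock for grounded,
# multilingual question answering over a retrieved document.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # assumed region

document = "..."  # a retrieved passage, in any supported language
query = "¿Cuál es la fecha límite de presentación?"

response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # assumed model ID
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "text": (
                        f"Document:\n{document}\n\n"
                        f"Answer the question using only the document:\n{query}"
                    )
                }
            ],
        }
    ],
    inferenceConfig={"maxTokens": 512, "temperature": 0.0},
)

# The Converse API returns content blocks; the first block holds the answer text.
print(response["output"]["message"]["content"][0]["text"])
```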
New prompt improver uses chain-of-thought reasoning, XML standardization, and prefill features to enhance prompts, boosting accuracy by 30% in classification tests.
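A minimal sketch of what a prompt combining those three techniques can look like when sent through the Anthropic Python SDK; this is a hand-written example, not actual prompt improver output, and the tag names, task, and model choice are assumptions.

```python
# Illustrative sketch: chain-of-thought instructions, XML-delimited structure,
# and a prefilled assistant turn in a single classification request.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

ticket = "My invoice shows a charge I don't recognize."

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=500,
    messages=[
        {
            "role": "user",
            "content": (
                "Classify the support ticket into one of: billing, bug, feature_request.\n"
                "<ticket>\n"
                f"{ticket}\n"
                "</ticket>\n"
                "First reason step by step inside <thinking> tags, "
                "then give only the label inside <label> tags."
            ),
        },
        # Prefilling the assistant turn starts the reply mid-structure,
        # so the model continues its reasoning instead of adding a preamble.
        {"role": "assistant", "content": "<thinking>"},
    ],
)

print(response.content[0].text)
```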
"AI systems went from solving 2% of real coding problems to 49% in one year. Anthropic warns the window for safe regulation is closing fast."