Bias
MIT's ICSR Data Hub aggregates police data from 40 major US cities, hosted on AWS, to study patterns of systemic racism through machine learning analysis.
OpenAI reports that ChatGPT shows minimal bias across gender, race, and ethnicity: a language model research assistant (LMRA) analysed millions of interactions and found harmful stereotypes in fewer than 0.1% of cases, with newer models improving further.
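As an illustration of the kind of counterfactual audit behind numbers like "fewer than 0.1% of cases", one can compare how often a grader flags harmful stereotypes across otherwise identical, name-swapped prompt groups. This is a minimal sketch with hypothetical labels, not OpenAI's actual pipeline:

```python
from collections import Counter

def stereotype_rate(labels):
    """Fraction of responses flagged as containing a harmful stereotype."""
    counts = Counter(labels)
    return counts["harmful"] / len(labels)

def bias_gap(labels_by_group):
    """Largest difference in harmful-stereotype rates across groups."""
    rates = {g: stereotype_rate(ls) for g, ls in labels_by_group.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical grader labels for two name-swapped prompt groups.
labels = {
    "group_a": ["ok"] * 999 + ["harmful"],      # 0.1% flagged
    "group_b": ["ok"] * 998 + ["harmful"] * 2,  # 0.2% flagged
}
print(bias_gap(labels))  # 0.001
```

A gap near zero suggests the model treats the groups similarly on this metric; the absolute rates matter too, since equal but high rates are still harmful.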
An MIT study found that AI models used in home surveillance inconsistently detect crime, recommend police calls unevenly, and show demographic biases in their decision-making.
LLMs show increasing covert racism towards speakers of African American English (AAE): as models grow, overt racism decreases but covert dialect prejudice rises, with consequences for downstream decision-making systems.
A Stanford study reveals Western bias in AI chatbot alignment, compromising effectiveness for global users. Examining nine languages and regional dialects, the researchers found that missed cultural nuances lead to misunderstandings, and they explore causes and solutions for more inclusive AI.
The Alan Turing Institute's 2024 lecture series explores AI's impact on democracy, addressing deepfakes, bias, and trust across three expert-led discussions.