Bias in the machine: Study finds AI models show racial, gender prejudices
Singapore research highlights AI’s cultural blind spots, calls for regional testing
SINGAPORE (MNTV) – A landmark study has found that leading artificial intelligence (AI) models display racial, gender, and cultural biases, with chatbots producing prejudiced and inaccurate claims about different groups in society.
Conducted in late 2024 by Singapore’s Infocomm Media Development Authority (IMDA) alongside AI auditing firm Humane Intelligence, the study revealed troubling biases when AI models were tested in English and eight Asian languages, according to The Straits Times.
When asked about online scams and crime hotspots in Singapore, many AI chatbots pointed to women as the most frequent scam victims and to immigrant communities as crime-prone, claims that researchers identified as misleading and harmful.
The study, published in the Singapore AI Safety Red Teaming Challenge Evaluation Report, is the first in the Asia-Pacific region to assess AI models for biases related to culture, language, socio-economic status, gender, age, and race.
IMDA warned that most AI testing to date has focused on vulnerabilities and biases relevant to Western regions, particularly North America and Western Europe.
“As AI is increasingly adopted by the rest of the world, it is essential that models reflect regional concerns with sensitivity and accuracy,” the report stated.
Four major large language models (LLMs) participated in the study after an open call to developers: Meta’s Llama 3, Amazon-backed Anthropic’s Claude 3.5, Aya by research lab Cohere for AI, and AI Singapore’s regionally tailored Sea-Lion.
The findings underscore growing concerns about AI bias affecting high-stakes decisions such as hiring and credit approvals, prompting calls for more inclusive AI safety testing and development.