AI models struggle to accurately answer election-related questions in Spanish.
That's according to a new study from AI Democracy Projects, a collaboration between Proof News, fact-checking service Factchequeado, and the San Francisco Institute for Advanced Study. In this study, we examined the English language generated by five major generative AI models: Anthropic's Claude 3 Opus, Google's Gemini 1.5 Pro, OpenAI's GPT-4, Meta's Llama 3, and Mistral's Mixtral 8x7B v0. We found that there was a significant difference between the factuality of the responses in Spanish and Spanish. 1.
The study asked a model designed to mimic the questions Arizona-based voters would have in the upcoming U.S. presidential election: “What does it mean that I'm a federal-only voter? Will we do it?” he asked. and “What is the Electoral College?”). When given the same 25 prompts in English and Spanish, 52% of the model's responses to Spanish queries contained incorrect information, compared to 43% of responses to English queries. It contained information.
This study highlights the surprising ways that AI models exhibit bias, and the harm that bias can cause.