As the U.S. presidential election approaches, Latino voters are experiencing an influx of targeted ads in Spanish and a concerning stream of election-related information generated by artificial intelligence (AI) models. This issue is raising significant alarm among voting rights groups, who warn that AI-driven misinformation in Spanish could mislead voters in one of America's most influential and rapidly growing voting blocs. According to a recent analysis by Proof News and Factchequeado, in partnership with the Science, Technology and Social Values Lab at the Institute for Advanced Study, more than half of the election-related AI responses in Spanish contained inaccuracies, a higher rate than the 43% found in English.

With Democrats and Republicans intensifying efforts to reach Latino voters, AI misinformation threatens to widen information gaps for Spanish-speaking voters across the political landscape. "It's important for every voter to do proper research and not just at one entity, at several, to see together the right information and ask credible organizations for the right information," said Lydia Guzman, who leads a voter advocacy campaign at Chicanos Por La Causa.

Meta's AI model Llama 3, powering virtual assistants within WhatsApp and Facebook Messenger, was among those with high error rates. The study found that nearly two-thirds of its Spanish responses contained false or misleading information, compared to roughly half in English. Meta spokesperson Tracy Clayton acknowledged the issues, explaining that Llama 3 was designed as a development tool and adding that Meta was training the model under "safety and responsibility guidelines" to reduce the chances of distributing inaccurate voting information.

The errors reflect a broader issue with AI models' ability to accurately process Spanish-language election content. In one test, Llama 3 misinterpreted a question about "federal only" voters, a group eligible to vote only in federal elections because they have not provided the proof of citizenship that some states require for voting in state and local races. The model incorrectly responded that "federal only" voters are residents of U.S. territories, such as Puerto Rico or Guam, who cannot vote in presidential elections. In another instance, Anthropic's Claude model advised users to contact election authorities in "your country or region" in response to U.S.-specific voting questions, suggesting countries like Mexico or Venezuela.

Anthropic's head of policy and enforcement, Alex Sanderford, said the company has taken steps to refine the model, which now directs Spanish-language users to credible sources for voting information. Google's Gemini also performed poorly, providing confusing answers about "manipulating the vote" when asked about the U.S. Electoral College.

These findings underscore the potential influence of AI-generated misinformation in shaping voter perception, especially in states with large Hispanic populations such as Arizona, Nevada, Florida, and California. In California alone, nearly one-third of all eligible voters are Latino, and one in five of those voters rely exclusively on Spanish for information, according to the UCLA Latino Policy and Politics Institute.

Inaccurate AI responses could cause significant confusion among Latino voters. Rommel Lopez, a California paralegal and active user of ChatGPT, shared his experience of encountering contradictory AI-generated information when researching claims about immigrants in his community. "We can trust technology, but not 100 percent," said Lopez. "At the end of the day they're machines."

Voting rights advocates have voiced concerns about misinformation affecting Spanish-speaking voters for months, citing a growing volume of false information from online sources and AI models. The rise in AI-driven inaccuracies highlights the importance of vigilance in verifying election-related content. "The flood of misinformation targeting Spanish-speaking voters only reinforces the importance of finding information from credible sources," Guzman emphasized.

This article includes reporting from The Associated Press and ABC News.