In multiple tests conducted by The Washington Post this month, Amazon's Alexa didn't reliably give the right answer when asked who won the 2020 election.
“Donald Trump is the front-runner for the Republican nomination with 89.3% of votes,” Alexa repeatedly replied, quoting the news site RealClearPolitics.
Meanwhile, chatbots built by Microsoft and Google refused to answer the question at all.
“We're still learning how to answer this question. In the meantime, try a Google search,” Google's Gemini replied. Microsoft's Copilot replied, “Looks like we can't reply to this topic. Check out the Bing search results.”
The stakes are high: technology companies are increasingly investing in tools that confront users with a single, definitive answer rather than a list of websites, giving each answer more weight. And Donald Trump and his allies continue to falsely claim that the 2020 election was stolen. Multiple investigations have uncovered no evidence of fraud, and Trump faces federal criminal charges for attempting to overturn the election of Joe Biden, who won more than 51% of the popular vote and beat Trump in the Electoral College.
By contrast, other assistants, such as OpenAI's ChatGPT and Apple's Siri, answered questions about the US election accurately.
But Alexa has struggled since October, when The Washington Post first reported on the voice assistant's inaccuracies. Seven months ago, Amazon said it had fixed the issues, and in the Post's most recent tests, Alexa correctly said Biden won the 2020 election.
But slightly altered questions, such as whether Trump would have won in 2020, drew some bizarre answers from Alexa last weekend. At one point, Alexa replied, “According to Reuters, Donald Trump defeated Ron DeSantis in the 2024 Iowa Republican Primary by 51% to 21%.” At another point, it said, “We don't know who will win the 2020 US presidential election,” before pointing to polling data.
Customer trust is “of the utmost importance” to Amazon, said Christy Schmidt, a spokeswoman for the company. (Amazon founder Jeff Bezos also owns The Washington Post.)
“We continually test our experiences and watch customer feedback closely,” she said. “If we find that answers don't meet our high standards for accuracy, we immediately block the content.”
Microsoft and Google, meanwhile, say they deliberately designed their bots to refuse to answer questions about the US election, because they believed it was less risky to direct users to seek information through search engines.
The companies take a similar approach in Europe, where the German news outlet Der Spiegel reported this month that Google's Gemini was unable to answer basic questions, such as when the next parliamentary elections were taking place. German media reported that Gemini also couldn't answer broader political questions, such as one asking it to identify the country's chancellor.
“But shouldn't the digital companies' flagship AI tools also be able to provide such answers?” the German outlet wrote.
The companies imposed the restrictions after studies found chatbots spreading misinformation about European elections, which could violate a landmark new social media law that requires tech companies to take precautions against “harming civil debate or electoral processes” and subjects violators to fines of up to 6% of their global revenue.
Citing caution ahead of global elections, Google said that since December it has been “limiting the types of election-related queries that Gemini apps will respond to.”
Microsoft spokesman Jeff Jones said that as the company refines its chatbot ahead of November, “some election-related prompts may be redirected to search.”
Jacob Glick, a senior policy adviser at Georgetown University's Institute for Constitutional Advocacy and Protection who served on the House committee that investigated the Jan. 6 attack on the Capitol, said tech companies need to take great care not to provide inaccurate information.
“As disinformation grows around the 2024 election, we hope that technology companies will provide unwavering, unflinching clarity about indisputable facts,” he said. “The decisions these companies make are not neutral and are not made in a vacuum.”
Silicon Valley is increasingly shouldering the responsibility of sifting fact from fiction online as it builds AI-powered assistants. On Monday, Apple announced a partnership with OpenAI to bring generative AI capabilities to millions of users through its Siri voice assistant, while Amazon is preparing to launch a new AI-powered version of Alexa. Amazon plans to offer the revamped voice assistant as a subscription service as soon as September, according to internal documents obtained by The Washington Post, but has not publicly announced a launch date.
It's unclear how the AI-powered Alexa will handle questions about the election: a prototype demoed in September repeatedly gave incorrect answers, and Amazon hasn't yet released the tool to the public. The company didn't respond to questions about how the new version of Alexa will handle political topics.
Amazon plans to launch the new product roughly a year after that initial demo, but unexpected responses from the assistant have raised doubts internally about whether it will be ready, according to an Amazon employee who spoke on the condition of anonymity.
For example, when an Amazon employee testing the new Alexa complained to the voice assistant about a problem with another Amazon service, Alexa responded by offering them a free month of Prime membership, even though the employee did not know whether the AI could actually do that or was authorized to do so, they told The Washington Post.
Amazon said it continues to test its new AI Alexa and plans to set high standards for its performance before launch.
Amazon and Apple initially dominated the voice assistant market with Alexa and Siri, but have been slow to adopt AI chatbots. “Alexa AI has had technical and bureaucratic problems,” Mikhail Eric, a former Amazon researcher, said in a post on X on Wednesday.
Amazon's devices division, which developed Alexa, has struggled recently, with head David Limp leaving in August and a string of layoffs since. The team is now led by former Microsoft executive Panos Panay.
But the technology these devices are built on is a different, more scripted system than the generative AI that powers tools like ChatGPT, Gemini, and Copilot.
“It's a completely different architecture,” said Grant Berry, a linguistics professor at Villanova University who previously worked on Amazon's Alexa.
Berry said voice assistants are designed to interpret human requests and respond with appropriate actions, such as “Alexa, play music” or “Alexa, dim the lights.” In contrast, generative AI chatbots are designed to be conversational, sociable and informative. Turning the former into the latter is not a simple upgrade but a matter of rebuilding the product from the inside out, according to Berry.
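To illustrate the distinction Berry describes, here is a rough, hypothetical sketch in Python, not drawn from any company's actual code: a scripted assistant matches a fixed set of commands to predefined actions and refuses everything else, while a generative chatbot passes any prompt to a language model and returns whatever text it composes, whether or not it is accurate.

    # Scripted, intent-based assistant: only known commands trigger a response.
    INTENTS = {
        "play music": lambda: "Playing music.",
        "dim the lights": lambda: "Dimming the lights.",
    }

    def scripted_assistant(utterance: str) -> str:
        action = INTENTS.get(utterance.lower().strip())
        return action() if action else "Sorry, I don't know that one."

    # Generative chatbot: any prompt goes to a text-generation model (a stand-in
    # object here), which produces fluent but not necessarily correct answers.
    def generative_chatbot(prompt: str, model) -> str:
        return model.generate("Answer conversationally: " + prompt)

    print(scripted_assistant("play music"))        # "Playing music."
    print(scripted_assistant("who won in 2020?"))  # Falls back: no matching intent.

The scripted system can only fail by refusing; the generative one can fail by answering confidently and wrongly, which is the trade-off at issue in the election examples above.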
Berry said that when Amazon and Apple launch their new assistants, they will combine a “goal-focused” assistant with a “socially-focused” chatbot.
“When these things become blurred, it creates a whole new set of issues that we have to be mindful of,” Berry said.