In a series of recent SEC filings, major technology companies including Microsoft, Google, Meta and Nvidia have highlighted significant risks associated with the development and deployment of artificial intelligence (AI).
These revelations reflect growing concerns that AI could trigger reputational damage, legal liability and regulatory scrutiny.
Concerns about AI
Microsoft expressed optimism about AI but warned that flawed implementation or development could result in “reputational or competitive harm, or liability” for the company. Noting how widely AI is now integrated into its products, Microsoft outlined several areas of concern, including flawed algorithms, biased datasets and harmful AI-generated content.
Microsoft acknowledged that poor AI practices could lead to legal, regulatory and reputational problems. The company also cited the impact of current and proposed legislation, such as the EU AI Act and the U.S. AI Executive Order, which could further complicate the adoption and acceptance of AI.
Google's disclosures mirror many of Microsoft's concerns and highlight the growing risks associated with the company's AI efforts; Google identified potential issues related to harmful content, inaccuracy, discrimination and data privacy.
Google highlighted the ethical challenges posed by AI and the significant investments required to responsibly manage these risks. The company also acknowledged that it may not be able to identify or solve all AI-related issues before they occur, which could lead to regulatory action or reputational damage.
Meta said its AI efforts “may not be successful” and could expose the company to similar business, operational and financial risks. It warned of significant risks associated with its AI initiatives, including harmful or illegal content, misinformation, bias and potential cybersecurity threats.
Meta also expressed concern over the shifting regulatory environment, noting that new rules or increased scrutiny could negatively affect its business. The company also highlighted competitive pressure from other firms developing similar AI technologies.
Nvidia did not include a dedicated section on AI risk factors, but it addressed the issue extensively in its discussion of regulatory risks, covering the potential impact of a range of laws and regulations related to intellectual property, data privacy and cybersecurity.
Nvidia highlighted specific challenges posed by AI technology, such as export controls and geopolitical tensions. The company noted that increased regulatory attention on AI could significantly increase compliance costs and disrupt operations.
Nvidia, along with other companies, pointed to the EU AI Act as an example of regulation that could invite enforcement action.
Risks do not always materialize
Bloomberg first reported the news on July 3, noting that the disclosed risk factors are not predictions of likely outcomes. Rather, the disclosures are an effort by companies to shield themselves from blame, and from litigation, if those risks materialize.
“If a company does not disclose the risks faced by its peers, it could become a target for litigation,” Adam Pritchard, a professor of corporate and securities law at the University of Michigan Law School, told Bloomberg.
Bloomberg also cited Adobe, Dell, Oracle, Palo Alto Networks and Uber as other companies that have made AI risk disclosures in SEC filings.