Last week, governments and major technology companies around the world made new pledges on artificial intelligence safety. At a summit in Seoul, South Korea, they promised to invest in research, testing, safety, and even AI “kill switches.”
Amazon, Google, Meta, Microsoft, OpenAI and Samsung are among the companies that have made voluntary, non-binding commitments to prevent AI from being used for biological weapons, disinformation and automated cyberattacks, according to statements from the summit and reports by Reuters and The Associated Press.
The companies agreed to build a "kill switch" into their AI tools, allowing them to effectively shut down the systems in the event of a catastrophe.
“We cannot sleepwalk into a dystopian future where the power of AI is controlled by a few,” UN Secretary-General Antonio Guterres said in a statement. “How we act now will define our time.”
The pledges from governments and big tech companies are the latest in a series of efforts to build rules and guardrails around the growing use of AI. A year and a half after OpenAI released its generative AI chatbot, ChatGPT, companies have flocked to the technology to automate tasks and communication.
Companies are using AI to monitor the safety of infrastructure, identify cancer in patient scans and guide kids through their math homework. (Check out CNET's hands-on reviews of generative AI products like Gemini, Claude, ChatGPT and Microsoft Copilot, plus AI news, tips and commentary, on our AI Atlas resources page.)
The Seoul summit came as, on the other side of the Pacific, Microsoft unveiled its latest AI tools at Build, its conference for developers and engineers, and a week after search giant Google's I/O developer conference, where the company announced advances to its Gemini AI system and outlined its AI safety efforts.
A first step towards AI safety
Despite the pledges, experts warn that AI development carries enormous risks.
"Despite promising first steps, society's response has not been commensurate with the possibility of rapid, transformative progress that many experts expect," a group of 15 experts, including AI pioneer Geoffrey Hinton, wrote in the journal Science earlier that week. "Responsible paths exist, if we have the wisdom to choose them."
Last week's agreement between governments and major AI companies follows a series of commitments made in November, when representatives from 28 countries agreed to curb potentially "catastrophic risks" from AI through legislation and other measures.
Correction, May 22: An earlier version of this story misstated the location of this week's AI summit, which was held in Seoul, South Korea.
Editor's note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you're reading is attached to articles that deal substantively with the topic of AI but are written entirely by our expert editors and writers. For more, see our AI policy.