You can't put AI back in Pandora's box. But the world's largest AI companies are voluntarily working with governments to address the biggest concerns surrounding the technology and allay fears that unchecked AI development could lead to sci-fi scenarios in which AI turns against its creators. Without strict legal provisions strengthening governments' AI commitments, though, the conversation will only go so far.
This morning, 16 influential AI companies, including Anthropic, Microsoft, and OpenAI, along with 10 countries and the European Union, met at a summit in Seoul to set guidelines for responsible AI development. One of the big outcomes of the summit was that the AI companies in attendance agreed to a so-called kill switch: a policy under which they would halt development of their most advanced AI models if those models were deemed to have crossed certain risk thresholds. It is unclear how effective the policy will be in practice, however, given that the agreement carries no legal weight and does not define specific risk thresholds. AI companies that were not present, along with competitors of the signatories, are not bound by the pledge.
In a policy paper signed by AI companies including Amazon, Google, and Samsung, the firms pledged: “In the extreme, organizations commit not to develop or deploy a model or system at all, if mitigations cannot be applied to keep risks below the thresholds.” The summit follows last October's Bletchley Park AI Safety Summit, which brought together a similar lineup of AI developers and was criticized as “valuable, but pointless” for its lack of viable near-term commitments to protect humanity from the proliferation of AI.
Following that summit, a group of participants wrote an open letter criticizing the forum's failure to establish formal rules and the disproportionate role AI companies were playing in regulating their own industry. “Experience has shown that the best way to address these harms is through enforceable regulatory mandates, rather than self-regulation or voluntary action,” the letter said.
Writers and researchers have warned about the risks of powerful artificial intelligence for decades, first in science fiction and now in the real world. One of the most recognizable references is the “Terminator scenario,” the theory that, if left unchecked, AI could become more powerful than its human creators and turn on them. The theory takes its name from the 1984 Arnold Schwarzenegger film, in which a cyborg travels back in time to kill a woman whose unborn son will one day fight an AI system that plans to set off a nuclear holocaust.
“AI offers huge opportunities to transform our economy and solve our biggest challenges, but I have always been clear that we will not be able to realize its full potential unless we understand the risks posed by this rapidly evolving, complex technology,” said UK Technology Secretary Michelle Donelan.
AI companies themselves recognize that their cutting-edge products are venturing into uncharted territory, both technologically and morally. OpenAI CEO Sam Altman said artificial general intelligence (AGI), defined as AI that surpasses human intelligence, is “on the horizon” and carries risks.
“AGI will also carry significant risks of misuse, serious accidents, and societal disruption,” OpenAI's blog post reads. “The benefits of AGI are so great that we don't believe it is possible or desirable for society to permanently halt its development. Instead, society and AGI developers will need to figure out how to get it right.”
But so far, efforts to build a global regulatory framework for AI have been scattered and largely lacking in legislative authority. A UN policy framework calling on countries to prevent AI-related risks to human rights, monitor the use of personal data, and mitigate AI risks was unanimously approved last month, but it is non-binding. The Bletchley Declaration, the centerpiece of last October's AI Safety Summit in the UK, contained no specific regulatory commitments.
Meanwhile, AI companies are starting to form their own organizations to promote AI policy. Yesterday, Amazon and Meta joined the Frontier Model Forum, an industry nonprofit “dedicated to improving the safety of frontier AI models,” according to its website. They join founding members Anthropic, Google, Microsoft, and OpenAI. The group has yet to put forward firm policy proposals.
Individual governments have had more success. President Biden's executive order regulating AI safety, signed last October, was hailed as the first time a government took the lead by codifying strict legal requirements that go beyond the vague promises outlined in other, similarly minded documents. Biden invoked the Defense Production Act, for example, to require AI companies to share safety test results with the government. The EU and China have also enacted formal policies addressing topics such as copyright law and the collection of users' personal data.
U.S. states are taking action as well: yesterday, Colorado Governor Jared Polis announced a new law banning algorithmic discrimination in AI and requiring developers to share internal data with state regulators to ensure compliance.
This won't be the last chance for global AI regulation: France is set to host another summit early next year, following the meetings in Seoul and Bletchley Park. By then, participants have pledged to produce formal definitions of the risk thresholds that would warrant regulatory action, a major step forward for what has so far been a relatively tentative process.