Early last year, hackers accessed the internal messaging system of ChatGPT's developer, OpenAI, and stole details about the design of the company's AI technology.
The hackers lifted details from discussions in an online forum where employees talked about OpenAI's latest technology, but they did not penetrate the systems where the company stores and builds its artificial intelligence, according to two people familiar with the incident.
OpenAI executives disclosed the incident to employees at an all-hands meeting at the company's San Francisco offices in April 2023 and reported it to the board, according to two people who spoke on condition of anonymity to discuss confidential company information.
But because no information about customers or partners was stolen, executives decided not to go public with the news, two of the people said. Executives did not consider the incident a threat to national security because they believed the hackers were private citizens with no known ties to foreign governments. The company did not report the incident to the FBI or other law enforcement agencies.
For some OpenAI employees, the news heightened fears that foreign adversaries such as China could steal the company's AI technology, which today is primarily a tool for work and research but could eventually endanger U.S. national security. The breach also raised questions about how seriously OpenAI takes security and exposed internal rifts over the risks of artificial intelligence.
After the breach, Leopold Aschenbrenner, a technical program manager at OpenAI focused on ensuring that future AI technologies do not cause serious harm, sent a memo to OpenAI's board of directors arguing that the company was not doing enough to prevent the Chinese government or other foreign adversaries from stealing its secrets.
Aschenbrenner said OpenAI fired him this spring for leaking other information outside the company, and he claimed his firing was politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not previously been reported. He also said OpenAI's security was not strong enough to protect key secrets if foreign actors were to infiltrate the company.
“We appreciate the concerns Leopold raised during his time at OpenAI and they did not lead to his departure,” OpenAI spokesperson Liz Bourgeois said. Referring to the company's work toward artificial general intelligence, machines that can do anything the human brain can, she added that while OpenAI shares his commitment to building safe AGI, it disagrees with many of the claims he has since made about its work, including his characterization of its security, and that this particular incident had been addressed and brought to the board's attention before he joined the company.
Concerns that hacks of U.S. tech companies may have a China connection are not unreasonable: Last month, Microsoft President Brad Smith testified before Congress about how Chinese hackers used the tech giant's systems to launch widespread attacks on federal government networks.
But under federal and California law, OpenAI cannot bar people from working at the company based on their nationality, and policy researchers say excluding foreign talent from U.S. projects could severely hinder AI progress in the United States.
“We need the best and the brightest to work on this technology,” Matt Knight, security director at OpenAI, told The New York Times in an interview. “There's some risk involved, and we need to figure that out.”
(The New York Times has sued OpenAI and its partner Microsoft, alleging copyright infringement of news content related to the AI system.)
OpenAI isn't the only company harnessing rapidly advancing AI technology to build increasingly powerful systems. Some of them, notably Meta, the owner of Facebook and Instagram, are sharing their designs for free with the world as open-source software. They believe that today's AI technology poses low risks, and that sharing code will allow engineers and researchers across industries to identify and fix problems.
Today's AI systems can help spread misinformation online through text, still images, and increasingly, video, and they are starting to replace some jobs.
OpenAI and its competitors, including Anthropic and Google, are trying to add guardrails to AI applications before making them available to individuals and businesses, to prevent people from using the apps to spread misinformation or cause other problems.
But there's not much evidence that AI technology today is a significant risk to national security. Studies from OpenAI, Anthropic and others over the past year have found that AI is no more dangerous than search engines. Daniela Amodei, co-founder and president of Anthropic, said the company's latest AI technology doesn't pose a significant risk, even if its designs are stolen and freely shared with others.
“If it was someone else's property, would that do great harm to society as a whole? The answer is, no, probably not,” she told The Times last month. “Would it encourage bad actors to act in the future? Maybe. It's really a matter of speculation.”
Still, researchers and tech executives have long worried that AI could one day help develop new biological weapons or help hack government computer systems — and some even think it could destroy humanity.
Many companies, including OpenAI and Anthropic, have already locked down their technology operations. OpenAI recently created a safety and security committee to consider how to address risks posed by future technologies. The committee includes Paul Nakasone, the former Army general who led the National Security Agency and Cyber Command; Nakasone has also been appointed to OpenAI's board of directors.
“We started investing in security years before ChatGPT,” Knight said. “We're on a journey to not only understand risk and get ahead of it, but to deepen our resilience.”
Federal officials and state lawmakers are also pushing for government regulations that would bar companies from releasing certain AI technologies and impose multimillion-dollar fines if those technologies cause harm, but experts say any real dangers are still years, or even decades, away.
Chinese companies are building their own systems that are powerful enough to rival leading U.S. systems, and by some measures, China has surpassed the U.S. to become the largest source of AI talent, producing nearly half of the world's top AI researchers.
“It's not crazy to think that China will soon overtake the U.S.,” said Clément Delangue, CEO of Hugging Face, a company that hosts many of the world's open-source AI projects.
Some researchers and national security leaders have argued that the mathematical algorithms at the heart of current AI systems are not dangerous now but could become so in the future, and they have called for stricter controls over AI labs.
“Even if the worst-case scenario has a relatively low probability, if it has major consequences, then it's our responsibility to take it seriously,” Susan Rice, a former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, said at an event in Silicon Valley last month. “I don't think this is science fiction, as many people claim.”