Hackers accessed artificial intelligence company OpenAI's internal messaging system and stole details of the company's technology.
The breach occurred early last year, but the company determined that it did not pose a threat to national security and neither publicized it nor reported it to the authorities.
According to The New York Times, which cited a person familiar with the matter, the hackers stole details of the AI technology from an online forum where OpenAI employees discuss the company's latest work.
But they were unable to penetrate the systems where the company stores and builds its artificial intelligence, the people said.
OpenAI executives disclosed the incident to employees during a meeting at the company's San Francisco offices in April 2023. It was also reported to the company's board of directors.
However, sources told the paper that executives decided not to make the news public because no information about customers or partners was stolen.
Executives did not consider the incident a threat to national security because they believed the hackers were private individuals with no known ties to a foreign government, and for that reason, sources said, they did not report it to the FBI or other law enforcement agencies.
But according to the Times, the news raised concerns among some employees that foreign adversaries such as China could steal the AI technology and ultimately endanger U.S. national security.
It also raised questions about how seriously OpenAI takes security, exposing internal rifts over the risks of artificial intelligence.
Following the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technology does not cause serious harm, sent a memo to the company's board of directors.
Aschenbrenner argued that the company was not doing enough to prevent the Chinese government or other foreign adversaries from stealing sensitive information.
He also said OpenAI's security was not strong enough to protect against the theft of key secrets if a foreign actor were to infiltrate the company.
Aschenbrenner later claimed that OpenAI had fired him this spring for leaking other information outside the company, and that his dismissal was politically motivated. He mentioned the breach on a recent podcast, but details of the incident had not previously been reported.
“We appreciate the concerns that Leopold raised during his time at OpenAI, and they did not lead to his departure,” OpenAI spokeswoman Liz Bourgeois told The New York Times.
“While we share his commitment to building safe AGI, we disagree with many of the claims he has made about our work since then.
“This includes his characterization of our security, notably this incident, which we addressed and shared with the board before he joined the company.”