According to a report from the New York Times on Thursday, a hacker gained access to OpenAI’s internal messaging systems last year and extracted details about the design of the company’s artificial intelligence technologies.
The hacker reportedly lifted details from an internal online forum where OpenAI employees discussed the company's latest AI technologies.
The report, citing sources familiar with the incident, noted that the hacker did not breach the systems where OpenAI houses and builds its AI technologies, including ChatGPT, the company's widely used chatbot.
OpenAI, backed by Microsoft Corp, did not immediately respond to Reuters’ requests for comment on the incident.
Executives at OpenAI reportedly informed employees and the company’s board about the breach in an all-hands meeting held in April last year. However, they chose not to disclose the breach publicly because no information about customers or partners was compromised.
The executives did not perceive the incident as a national security threat, believing the hacker to be an individual without ties to any foreign government. The company also did not report the breach to federal law enforcement agencies.
This incident comes amid broader concerns about the misuse of AI technology. OpenAI recently announced the disruption of covert influence operations aimed at using its AI models for deceptive activities online.
The Biden administration has also been considering measures to safeguard advanced AI models, including those developed by OpenAI, against threats from countries like China and Russia.
At a global meeting, 16 companies developing AI pledged to prioritize the safe development of the technology, reflecting growing regulatory challenges and emerging risks associated with AI innovation.