

Marijan Hassan - Tech Journalist

OpenAI was breached last year but kept it a secret


OpenAI reportedly suffered a security breach last year that compromised details about its AI technologies, according to a recent New York Times report.



The breach, which targeted the company's internal messaging systems, did not expose customer data or core AI code. However, it raises serious concerns about OpenAI's security practices and potential national security implications.


Reports suggest the breach occurred in early 2023 and involved an internal online forum where OpenAI staff discussed the company's products and technologies. While OpenAI has not publicly confirmed the details, the company reportedly revealed the incident to its board and staff in April. However, it chose not to disclose the breach publicly.


OpenAI's justification for the silence was that no customer data had been stolen and that it believed the hacker was a lone actor. However, this reasoning has drawn criticism from security experts and former OpenAI employees, including Leopold Aschenbrenner, who voiced concerns about national security vulnerabilities.


"The messaging systems compromised could have just as easily been infiltrated by a nation-state," Aschenbrenner said, highlighting the potential for more sophisticated attacks.


Dr. Ilia Kolochenko, partner and cybersecurity practice lead at Platt Law LLP, echoed these concerns. "The global AI race has become a matter of national security for many countries, therefore, state-backed cybercrime groups and mercenaries are aggressively targeting AI vendors, from talented startups to tech giants like Google or OpenAI," he said.


According to Dr. Kolochenko, hackers target valuable information such as research data, large language models, and client details. In extreme cases, these breaches could give attackers ongoing control over a company's AI operations or even allow them to shut those operations down entirely.


"More sophisticated cyber-threat actors may also implant stealthy backdoors to continually control breached AI companies, and to be able to suddenly disrupt or even shut down their operations, similar to the large-scale hacking campaigns targeting critical national infrastructure (CNI) in Western countries recently," he added.


This news comes on the heels of OpenAI's earlier report about shutting down accounts linked to covert influence campaigns. The campaigns, believed to be linked to Russia, China, Iran, and Israel, aimed to manipulate public opinion using OpenAI's AI models.


Just recently, OpenAI established a Safety and Security Committee to handle risk management for its AI projects and operations. The committee is expected to present its findings and recommendations to the board in September.
