OpenAI has disabled a network of ChatGPT accounts reportedly connected to state-sponsored groups from Russia, China, Iran, and North Korea after discovering their involvement in cyber operations and influence campaigns.
These accounts were reportedly used to assist in creating malware, automating social media posts, and gathering intelligence on sensitive technologies. One Russian-speaking group is said to have used the AI chatbot repeatedly to refine malware code written in the Go programming language, with each account being used only once to maintain operational secrecy.
The malware was then disguised as a legitimate gaming application and distributed online, enabling attackers to steal sensitive information and maintain persistent access to infected devices.
Groups linked to China, including some advanced persistent threat (APT) actors, reportedly utilized the AI for technical purposes such as researching satellite communication systems, developing automation scripts for Android applications, and conducting penetration testing.
Other accounts were associated with propaganda efforts and influence campaigns, producing divisive content in various languages, impersonating journalists, and simulating public debates around elections and geopolitical issues.
The prohibited activities also encompassed scams, social engineering, and politically motivated misinformation. OpenAI emphasised that while misuse was detected, there were no instances of highly sophisticated or large-scale attacks driven solely by its technology.
Get the latest of our Loveworld News from our Johannesburg stations and News Station South Africa, LN24 International.