Government-affiliated hackers from China, Iran, North Korea, and Russia have signed up as regular customers to use the technology behind the ChatGPT chatbot to enhance their attacks. In cooperation with Microsoft, ChatGPT developer OpenAI said it had closed the accounts of five such hacker groups.
The hackers primarily used the artificial intelligence technology to automate software development. Some group members also used it to translate technical documentation and to gather publicly available information. Iranian and North Korean hackers additionally asked the AI to write texts for phishing attacks, in which victims are tricked by deceptively genuine-looking emails into entering their login credentials on fake websites set up by the attackers.
At the same time, OpenAI said the findings confirm its assessment that current AI technology is only of limited additional use for developing cyberattacks compared with conventional tools. Microsoft likewise stressed that it has not yet observed any novel AI-enabled attack techniques.
ChatGPT sparked the current hype around artificial intelligence just over a year ago. Such AI chatbots are trained on enormous amounts of text and can formulate sentences at the linguistic level of a human, write software code, and summarize information. The underlying principle is that they estimate, word by word, how a sentence should continue. The downside is that the software can sometimes give completely wrong answers, even when it was trained only on correct information. Software development, however, is now often successfully automated with AI.
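To make the word-by-word principle concrete, here is a minimal, purely illustrative sketch. It uses a hypothetical toy table of word-pair frequencies as its "model"; real chatbots instead use neural networks with billions of learned parameters operating on subword tokens, but the generation loop follows the same idea of repeatedly estimating the most likely continuation.

```python
import random

# Hypothetical toy "language model": for each word, the observed
# frequencies of the words that followed it in some training text.
# (Illustrative counts only, not real data.)
BIGRAM_COUNTS = {
    "the": {"cat": 3, "dog": 2, "answer": 1},
    "cat": {"sat": 4, "ran": 1},
    "dog": {"ran": 3, "sat": 1},
    "sat": {"down": 2, "quietly": 1},
    "ran": {"away": 3},
}

def next_word(word: str):
    """Sample the next word in proportion to how often it followed
    `word` -- the word-by-word estimation described above."""
    options = BIGRAM_COUNTS.get(word)
    if not options:
        return None  # no known continuation; stop generating
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

def continue_sentence(start: str, max_words: int = 10) -> str:
    """Extend a prompt one word at a time, like a chatbot does."""
    words = start.split()
    while len(words) < max_words:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(continue_sentence("the"))  # e.g. "the cat sat down"
```

The sketch also hints at why such systems can produce wrong answers: the model only picks statistically plausible continuations, with no check that the resulting sentence is true.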