ChatGPT – A Blessing or a Curse for the Internet?

Paul Pon Raj

Artificial intelligence (AI) technology has become a popular tool for hackers looking to steal personal data and engage in other malicious activities. One such AI tool is ChatGPT, a chatbot built on the Generative Pre-trained Transformer (GPT) family of language models and developed by OpenAI, a company heavily backed by Microsoft.

While ChatGPT is currently free to use as part of a research preview, with a paid subscription coming soon, its broad capabilities are both a blessing and a curse.

Cybersecurity company Check Point Research (CPR) has observed attempts by Russian cybercriminals to bypass OpenAI’s restrictions in order to use ChatGPT for malicious purposes.

In underground hacking forums, hackers are discussing how to circumvent the controls on IP addresses, payment cards, and phone numbers that restrict access to ChatGPT from Russia.

According to Sergey Shykevich, Threat Intelligence Group Manager at Check Point, “Right now, we are seeing Russian hackers already discussing and checking how to get past the geofencing to use ChatGPT for their malicious purposes. We believe these hackers are most likely trying to implement and test ChatGPT into their day-to-day criminal operations.”

The AI technology behind ChatGPT can make a hacker more cost-efficient, which is why cybercriminals are increasingly interested in it. For example, on December 29, a thread named “ChatGPT – Benefits of Malware” appeared on a popular underground hacking forum.

The thread’s author disclosed that they had been experimenting with ChatGPT to recreate malware strains and techniques described in research publications and write-ups about common malware.

Another threat is that ChatGPT can be used to spread misinformation and fake news. OpenAI is aware of this threat and has collaborated with Georgetown University’s Center for Security and Emerging Technology and the Stanford Internet Observatory in the US to investigate how large language models might be misused for disinformation purposes.

As generative language models improve, they open up new possibilities in fields such as healthcare, law, education, and science.

However, as with any new technology, it is important to consider how it can be misused. According to a report from a workshop that brought together 30 disinformation researchers, machine learning experts, and policy analysts, “We believe that it is critical to analyze the threat of AI-enabled influence operations and outline steps that can be taken before language models are used for influence operations at scale.”