
ChatGPT: A potential cybersecurity threat?

February 16, 2023 (updated March 21, 2023)

2023 began with a bang as the ChatGPT disruption continued to evolve, with promising outcomes for businesses and individuals alike. It arrived as a fresh wave of exciting change, bundled with endless discourse around it. Months on, the initial euphoric conversations have taken a different turn: one of caution.

For anyone still unfamiliar with ChatGPT, here is a simple breakdown: ChatGPT is an artificial intelligence model from OpenAI, trained on vast amounts of text from the online world, that gives the user a concise or detailed response (as requested) to the prompt (question) posed to it. Here is what ChatGPT has to say about itself:


Chatbots have been growing in popularity as a means of customer service and communication, but with the advent of large language models like ChatGPT, the capabilities of these chatbots have become truly remarkable. While these language models are designed to provide helpful and informative responses, they can also pose a risk to cybersecurity if not properly secured.

Pros and Cons of ChatGPT

As with any new technology, ChatGPT is having, and will continue to have, a significant impact on the cybersecurity market. This artificial intelligence can be used to develop advanced cybersecurity products, and many believe the broader use of AI and ML is essential to identifying potential threats. ChatGPT can play a vital role in detecting and reacting to cyberattacks, and it can also be used in bug bounty programs. However, it should be noted that where there is technology, there are cyber risks.

ChatGPT and similar models can enhance cybersecurity by providing secure and efficient communication channels between organizations and their clients. With their advanced language-processing capabilities, they can recognize and respond to potential security threats, reducing the risk of cyberattacks. ChatGPT can also analyze large amounts of data to identify patterns and detect suspicious activity, making it a valuable tool in the fight against cybercrime.
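To make the pattern-analysis idea concrete, here is a minimal sketch of how one might ask a model, via OpenAI's chat API, to triage log lines for suspicious activity. The model name, prompt wording, and sample logs are illustrative assumptions, not a production detection pipeline.

```python
# Minimal sketch: using OpenAI's chat API to flag suspicious log lines.
# Assumes the official `openai` Python package (v1+) and an OPENAI_API_KEY
# environment variable; model choice and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

log_lines = [
    "Accepted password for admin from 10.0.0.5 port 22",
    "Failed password for root from 203.0.113.7 port 22 (attempt 57)",
]

prompt = (
    "You are a security analyst. For each log line below, answer "
    "SUSPICIOUS or BENIGN with a one-sentence reason:\n\n"
    + "\n".join(log_lines)
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
    temperature=0,  # deterministic output suits triage better
)

print(response.choices[0].message.content)
```

A sketch like this is best treated as a helper for a human analyst rather than an authoritative detector: language models can mislabel logs, so the output should feed a review queue, not block traffic automatically.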

Malpractices with ChatGPT

Many users have reported that ChatGPT will not write malware code when asked outright; it has defined guardrails and security protocols to recognise inappropriate content requests. However, there have been instances where developers tried various ways to bypass those protocols and succeeded in getting the desired output or code.

The catch is that if a prompt is detailed enough, describing the steps of writing the malware rather than asking for it directly, ChatGPT will answer each step effectively, resulting in the construction of malware on demand.


With criminal groups already offering malware-as-a-service, AI programs such as ChatGPT may soon make it faster and easier for attackers to launch cyberattacks using AI-generated code. ChatGPT has enabled even less experienced attackers to write working malware code, something that was previously only possible for experts.

The deployment of language models like ChatGPT also creates new vulnerabilities that must be addressed. Because these models are fed massive amounts of data, they can become targets for malicious actors who wish to access sensitive information. Furthermore, deploying these models on public cloud platforms can create security risks if proper safeguards are not put in place: the data these models process and store can be vulnerable to theft or unauthorised access, putting organizations and their clients at high risk.
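One concrete safeguard in this vein is encrypting sensitive records at rest, so a breach of the storage layer does not expose plaintext. The sketch below uses the widely available `cryptography` package; the record content and storage step are hypothetical placeholders, and real deployments would keep the key in a secrets manager.

```python
# Minimal sketch: encrypting sensitive data at rest before storage.
# Uses the `cryptography` package (pip install cryptography); the record
# content and the storage step are illustrative placeholders.
from cryptography.fernet import Fernet

# In practice the key would live in a secrets manager or KMS, never in code.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b"client: Acme Corp; chat transcript: ..."  # hypothetical sensitive data

token = fernet.encrypt(record)  # ciphertext that is safe to persist
# ... write `token` to disk or a database here ...

assert fernet.decrypt(token) == record  # only key holders can recover it
```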

Ripples from the Dark Web

The dark web remains beyond oversight, and the same applies to the underground crime forums that operate off the radar. According to a report from the Israeli security company Check Point, a hacker who had previously shared Android malware exhibited code written by ChatGPT that stole files of interest, compressed them, and sent them across the web. The same hacker went on to show another tool that could upload further malware to an already infected PC.

Another unnamed user on the same forum shared Python code that could encrypt files, saying OpenAI’s app had helped build it. Check Point mentioned in its report that such code is used for harmless purposes at the moment, but it can also easily be amended for malicious ends.

ChatGPT is not alone

While we are still contemplating the long-term effects of ChatGPT on users and their security, Google recently introduced its own AI-powered model, Bard. In its announcement, the search giant stated that it has been on a journey with large language models: about two years ago it revealed next-generation language and conversation capabilities powered by LaMDA.

Bard seeks to combine the breadth of the world’s knowledge with the power, intelligence, and creativity of large language models, drawing on information from the web to inform its responses. It is currently in a testing phase and available only to limited groups. Bard’s capability in curbing cybercrime is an interesting angle to explore in the near future.

Working around ChatGPT 

In conclusion, while ChatGPT and other large language models have the potential to enhance cybersecurity, they must also be treated with caution and secured properly. Organizations can no longer be blind to the potential vulnerabilities these models pose; they must prepare ahead and implement measures to mitigate the risks. These may include regular security audits, the deployment of encryption, and the implementation of multi-factor authentication. By doing so, organizations can better avert growing cybercrime threats.
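As a sketch of that last point, multi-factor authentication is often added with time-based one-time passwords (TOTP). The example below uses the `pyotp` package; the issuer and account names, and the enrolment flow around it, are assumptions for illustration.

```python
# Minimal sketch: time-based one-time passwords (TOTP) for MFA.
# Uses the `pyotp` package (pip install pyotp); issuer and account
# names are hypothetical.
import pyotp

# At enrolment: generate a per-user secret and share it, e.g. via QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleApp"))

# At login: verify the 6-digit code the user types from their authenticator.
code = totp.now()  # stand-in for user input in this sketch
print("accepted" if totp.verify(code) else "rejected")
```

Layering a second factor like this on top of passwords means a leaked credential alone is no longer enough to compromise an account.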

Authored by Richa

For more information, please reach out to the Marketing Team.