ChatGPT and the future of cybersecurity

The ChatGPT craze has swept the world in just a few months, with significant impacts on technology, education, finance, and other fields, and has given rise to a wave of related research. OpenAI is clearly optimistic about the frenzy, as seen in its subsequent launch of a range of additional subscription features.

In the face of this formidable challenge, tech giant Google quickly launched its own AI chatbot, Bard, while Baidu released its Chinese counterpart to ChatGPT, “ERNIE Bot”. Not to be left behind, Microsoft announced that the new version of its search engine, Bing, powered by GPT-4, has been fully launched. The chatbot condenses search results into short summaries and categorizes or filters them according to users’ prompts.

In the third week of March 2023, AI technology took another leap forward. OpenAI released GPT-4, which reportedly demonstrates more reliable performance, greater creativity, and the ability to handle more sophisticated instructions. Google Workspace and Microsoft 365 Copilot have also introduced generative AI assistants poised to change the future of office work. The tech giants are clearly competing for a share of the AI market. At the same time, experts have pointed out that powerful AI capabilities may become an easy weapon for hackers. Not only is the EU planning to tighten regulation, but OpenAI’s own developers are calling for increased oversight to prevent cyber-security issues from escalating.


When the AI search engine war begins, cyber-security becomes a hidden danger

Since the launch of ChatGPT, its active users have surpassed 100 million within two months, and Google, Baidu, and Microsoft have all joined the AI battle. Yet it is worth noting that as artificial intelligence grows more powerful, regulation has not kept pace, and cyber-security issues may become a serious problem.

According to a recent survey released by BlackBerry, which polled 1,500 IT executives across North America, the UK, and Australia, just over half (51%) predicted that we are less than a year away from a successful cyber-attack being credited to ChatGPT, and 71% believed that foreign states are likely already using the technology for malicious purposes against other nations.

In addition, as ChatGPT is widely tested and discussed internationally, 74% of respondents acknowledged its potential threat to cyber-security and expressed concern. ChatGPT’s ability to help hackers craft more legitimate-sounding phishing emails was the top global concern (53%), followed by its potential to help less experienced hackers improve their technical knowledge and skills (49%) and its use for spreading misinformation (49%).


When hackers don’t need to know how to code, does ChatGPT become a tool for cyber-crime?

Although ChatGPT has built-in mechanisms to filter malicious requests, according to an article in Forbes, people with malicious intent can bypass or fool these initial detection systems. As the platform is fed more harmful material (such as malware code and ransomware scripts), the AI will learn from it and generate more sophisticated cyber-attack instructions, posing greater threats to information security.

The article further cited findings released by Check Point Research in early January 2023: threat actors on underground criminal forums had posted a thread titled “ChatGPT – The Benefits for Malware,” discussing how ChatGPT had given them “a nice hand” in creating their own malicious scripts. The researchers said many of the cyber-criminals involved had “no development skills at all, and this is perhaps the most worrying aspect.”

Check Point Research’s team also found that, using ChatGPT and basic prompts, they could generate phishing emails that were difficult to distinguish from genuine ones. In addition, a report by threat intelligence company Recorded Future indicated that ChatGPT has significantly lowered the barrier to entry for cyber-criminals.

Shishir Singh, Chief Technology Officer of Cyber-security at BlackBerry, warned that “there are a lot of benefits to be gained from this kind of advanced technology and we’re only beginning to scratch the surface, but we also can’t ignore the ramifications. As the maturity of the platform and the hackers’ experience of putting it to use progresses, it will get more and more difficult to defend without also using AI in defence to level the playing field.”


ChatGPT is a double-edged sword, EU proposes stricter AI law

On the one hand, ChatGPT appears to have become the latest weapon in hackers’ arsenals; on the other, it can also provide more convenient and advanced integrated technology for network security.

Network security requires large numbers of analysts to collect and digest massive amounts of data in order to identify and verify cyber-attacks, yet the field has long suffered from a shortage of manpower. Robert Boyce, Global Cyber Resilience Services Lead at Accenture, told CRN that ChatGPT is expected to help automate some of this integration work by “erasing some of the noise, which helps to get the signals faster. This is an exciting prospect.”
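To make that idea concrete, below is a minimal sketch of what LLM-assisted alert triage could look like. It assumes the OpenAI Python SDK (v1 or later) with an API key in the environment; the alert format, prompt wording, and model name are illustrative assumptions, not the actual tooling Boyce describes.

```python
# A minimal sketch of LLM-assisted alert triage (illustrative only).
# Assumes the OpenAI Python SDK v1+ and OPENAI_API_KEY set in the
# environment; the alert format and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()

TRIAGE_PROMPT = (
    "You are a security analyst assistant. For each alert below, reply "
    "with one line: <alert_id> | NOISE or SIGNAL | a one-sentence reason."
)

def triage_alerts(raw_alerts: list[str]) -> str:
    """Ask the model to separate likely noise from likely signal."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any capable chat model would do
        temperature=0,  # keep triage output as repeatable as possible
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": "\n".join(raw_alerts)},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    alerts = [
        "A1 | 03:12 | 3 failed SSH logins from a known admin host",
        "A2 | 03:14 | outbound traffic to a rare domain from a finance workstation",
    ]
    print(triage_alerts(alerts))
```

A human analyst would still need to review the output; the point is simply that routine first-pass filtering of this kind is the sort of “noise-erasing” work Boyce refers to.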

Despite the industry’s mixed feelings about the impact of ChatGPT, the lack of corresponding global regulations remains a major concern across all sectors. As the AI search engine war begins, calls for stronger regulation have been on the rise. Thierry Breton, EU Commissioner for the Internal Market, has proposed amendments to the current AI Act to develop international regulatory rules applicable to AI technologies such as ChatGPT. “As showcased by ChatGPT, AI solutions can offer great opportunities for businesses and citizens, but can also pose risks,” Breton wrote in written comments to Reuters. “This is why we need a solid regulatory framework to ensure trustworthy AI based on high-quality data.”

The “AI Act” now being drafted will include regulations on misinformation and bias to improve information transparency, and will address concerns about infringement during AI data collection and training. However, AI development companies are concerned that the Act will increase their costs and hinder development.

Notably, Mira Murati, OpenAI’s CTO and a co-developer of ChatGPT, has also called on government authorities to actively formulate relevant regulations. In an interview with Time magazine, she stated that filtering information for accuracy remains a major challenge for language-model-based AI like ChatGPT.

“AI can be misused, or it can be used by bad actors. So, then there are questions about how you govern the use of this technology globally,” Murati said, pointing to the pressing challenge that needs to be addressed. “How do you govern the use of AI in a way that’s aligned with human values?”

How to regulate AI without hindering its development has become a major challenge that the global community is currently facing.
