Before the debate on whether generative AI poses an existential threat to humanity could even be resolved, news emerged of a hacker creating WormGPT. This new AI tool is being sold on the dark web to cybercriminals, enabling them to carry out sophisticated phishing and business email compromise (BEC) attacks.
WormGPT, a new AI tool built on the open-source GPT-J language model, is branded on cybercrime forums as the "biggest enemy" of ChatGPT. While ChatGPT has safeguards and limitations to prevent misuse, WormGPT has no ethical boundaries and is designed expressly to cause harm.
GPT-J is a powerful language model capable of generating coherent, realistic text on almost any topic from a short prompt. That fluency is precisely what makes WormGPT so dangerous in the hands of cybercriminals.
WormGPT works by taking advantage of the weaknesses and vulnerabilities of human psychology and communication. It can create fake emails, malicious code, or fake news that look and sound authentic and convincing, but are actually designed to trick people into giving up their personal or financial information, clicking on harmful links, or downloading malware. WormGPT can also adapt and learn from its targets' responses and behaviors, making it harder to detect and stop.
The potential for harm when using a tool like WormGPT for malicious purposes is both obvious and concerning. It can inflict serious damage on individuals, businesses, organizations, and society as a whole by undermining trust, disseminating false information, swaying opinions, and breaching security. Additionally, it can tarnish the credibility and reputation of ChatGPT and other generative AI tools that are being used for beneficial purposes.
The contrast between ChatGPT and WormGPT illustrates that AI is not inherently good or evil; its impact depends on how humans create it, deploy it, and to what ends. The real threat comes not from AI itself but from the people who misuse it. AI can be a powerful tool for augmenting human abilities and tackling complex challenges, but it can also be a dangerous weapon for exploiting human vulnerabilities and creating new problems.
The ethical and social implications of WormGPT and other generative AI tools are far-reaching and complex. They prompt discussions about the responsibility, accountability, and regulation of AI development and deployment. Additionally, they necessitate increased awareness, education, and collaboration among various stakeholders, including researchers, developers, users, regulators, and the general public.
AI can be used for both beneficial and harmful purposes, depending on who is in control and how it is utilized. There are numerous examples of AI being used for positive purposes, such as in education, healthcare, entertainment, and social good. For example, ChatGPT can assist individuals with writing, solving problems, and generating ideas and information. Additionally, it can facilitate communication tasks, such as chatting, language learning, or forming friendships.
However, there are also many examples of AI being put to malicious use, including cybercrime, propaganda, misinformation, and manipulation. WormGPT, for instance, can help criminals craft convincing phishing emails, malicious code, or fake news, and can support BEC schemes by impersonating trusted contacts, requesting fraudulent money transfers, or compromising accounts.
While it may be difficult to prevent WormGPT from becoming widely adopted on the dark web, we cannot simply accept this as inevitable. To begin, we must protect ourselves from AI abuse and misuse by verifying the source and authenticity of information before trusting or acting upon it. It is also crucial to be aware of the dangers of phishing and BEC scams, which were responsible for the costliest cyber incidents in 2022, with an average cost of $9.8 million.
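One concrete way to verify an email's source before trusting it is to check its authentication results. The sketch below, using only Python's standard `email` module, flags messages whose `Authentication-Results` header shows failed or missing SPF, DKIM, or DMARC checks. The helper name and the sample message are invented for illustration; this is a single heuristic signal, not a complete phishing defense.

```python
import email
from email import policy

# Hypothetical helper: flag a message whose Authentication-Results header
# does not show passing SPF/DKIM/DMARC checks. One simple signal only;
# real mail filters combine many such signals.
def auth_failures(raw_message: str) -> list[str]:
    msg = email.message_from_string(raw_message, policy=policy.default)
    results = str(msg.get("Authentication-Results", ""))
    failures = []
    for check in ("spf", "dkim", "dmarc"):
        # A result like "spf=fail", "dkim=none", or a missing check
        # is treated as suspicious here.
        if f"{check}=pass" not in results.lower():
            failures.append(check)
    return failures

# Invented example of a BEC-style message that fails all three checks.
suspicious = """\
From: "CEO" <ceo@examp1e-corp.com>
To: finance@example.com
Subject: Urgent wire transfer
Authentication-Results: mx.example.com; spf=fail; dkim=none; dmarc=fail

Please wire the funds to the account below today.
"""

print(auth_failures(suspicious))  # prints ['spf', 'dkim', 'dmarc']
```

A message from a legitimate, properly configured sender would typically carry `spf=pass; dkim=pass; dmarc=pass` and produce an empty list.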
Organizations must recognize the significant threat that WormGPT poses to their operations and be willing to use AI to combat it. This requires substantial investment in training AI models to detect the kinds of attacks that WormGPT and similar malicious tools enable. It is also crucial to establish governance policies that guide how generative AI systems are used within their operations.
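To make the defensive idea concrete, here is a deliberately toy rule-based scorer for common BEC cues such as urgency, payment requests, and secrecy. The cue list and weights are invented for illustration; production systems train models on large labeled corpora rather than hand-written rules, but the sketch shows the kind of signal such models learn.

```python
import re

# Illustrative only: invented cue patterns and weights for common
# BEC red flags (urgency, payment requests, secrecy).
BEC_CUES = {
    r"\burgent(ly)?\b": 2,
    r"\bwire transfer\b": 3,
    r"\bgift cards?\b": 3,
    r"\bkeep (this )?confidential\b": 2,
    r"\bchange (of )?bank(ing)? details\b": 3,
}

def bec_score(text: str) -> int:
    """Sum the weights of every BEC cue found in the message text."""
    lowered = text.lower()
    return sum(weight for pattern, weight in BEC_CUES.items()
               if re.search(pattern, lowered))

msg = "Urgent: please handle a wire transfer today and keep this confidential."
print(bec_score(msg))  # prints 7  (urgent=2 + wire transfer=3 + confidential=2)
```

In practice a score above some threshold would route the message for human review rather than block it outright, since rule-based scores produce false positives.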
Governments must take action to address the threat posed by WormGPT by monitoring online forums and marketplaces where it is being sold or offered for access. They must also investigate the data sources and training methods used by the developer of WormGPT, as well as track the identities and activities of buyers and sellers.
Collaborating with other governments, law enforcement agencies, cybersecurity firms, and AI researchers is now more important than ever. By sharing intelligence on how to detect and defend against WormGPT attacks, we can unite our efforts to combat a common threat.
AI is a powerful and promising technology that holds the potential to bring numerous benefits and opportunities to humanity. However, it also presents risks and challenges that must be addressed. It is our responsibility to use AI wisely and responsibly, preventing its misuse for harmful purposes. We must not allow WormGPT and other malicious AI tools to undermine the reputation of ChatGPT and other beneficial AI tools.