Dark Side of Generative AI: Emerging Threats and Attack Vectors
Synopsis
In today’s digital age, generative AI is transforming many sectors, but its dark side is becoming increasingly evident, revealing a growing range of threats and attack vectors. Cybercriminals exploit it to spread harmful content, often targeting women and children, eroding public trust and aggravating social problems such as online abuse. This motivates our study of its emerging threats, attack vectors, and corresponding case studies. Generative AI is being used to manipulate information, breach security, and conduct sophisticated cyber scams: it crafts convincing phishing emails and messages that deceive victims and extort money, and it automates the creation of malicious code, disinformation campaigns, and hallucinated content, amplifying both the speed and scale of cybercrime. It also enables unethical practices such as political deepfakes, identity theft, vishing, and the creation of deepfake pornography for revenge or reputational damage. Alarmingly, the rise of dark LLMs designed explicitly for malicious purposes represents a new frontier in cybercrime. These growing threats create an urgent need for greater public awareness, strong policy frameworks, and advanced mitigation strategies. Addressing these challenges is essential to preventing misuse while preserving generative AI’s benefits to society across healthcare, education, business, and beyond.
