Canadian cybersecurity expert Sami Khoury has sounded the alarm on the fusion of artificial intelligence (AI) and cybercrime.
Hackers and propagandists are leveraging AI to create malicious software, craft convincing phishing emails, and spread disinformation with greater precision and sophistication. As the tech revolution surges forward, rogue actors are harnessing its power for nefarious purposes, causing growing concern among cybersecurity watchdogs.
The Canadian Centre for Cyber Security has already observed the use of AI in phishing emails and the development of malicious code designed to deceive unsuspecting victims. While specific evidence remains undisclosed, Khoury’s warning elevates the urgency to address the potential threats posed by AI in the hands of cybercriminals.
Reports from cyber watchdog groups have highlighted the hypothetical risks associated with large language models (LLMs), like OpenAI’s ChatGPT. Such models can convincingly impersonate individuals or organizations, manipulating targets into risky situations, such as making unauthorized cash transfers.
Khoury acknowledges that the use of AI to draft malicious code is still in its nascent stages. However, the rapid pace of AI model development makes it difficult to anticipate and counter the threats these tools might pose.
The sources for this piece include reporting by Reuters.