Darktrace, a cybersecurity firm, conducted a survey which found that 82% of employees are worried that hackers could use generative AI to craft scam emails that are nearly indistinguishable from genuine ones. The survey also found that 30% of employees have previously fallen for a scam email or text.
According to Darktrace, the rise of generative AI has coincided with a 135% increase in social engineering attack emails during the first two months of 2023. The attacks targeted thousands of the firm's customers, and Darktrace said the increase matches the adoption rate of ChatGPT. The emails employ "sophisticated linguistic techniques," including greater text volume, longer sentences, and heavier punctuation.
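To make those linguistic signals concrete, the short Python sketch below shows one way such metrics could be measured on an email body. This is purely illustrative: Darktrace has not published its actual features, and the function name, regular expressions, and returned metrics are assumptions for demonstration only.

```python
import re

def linguistic_features(email_body: str) -> dict:
    """Compute simple text metrics of the kind described above.

    Illustrative only; these formulas are assumptions, not Darktrace's method.
    """
    words = re.findall(r"[A-Za-z']+", email_body)
    # Approximate sentence boundaries by splitting on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", email_body) if s.strip()]
    punctuation = re.findall(r"[,.;:!?\-()\"']", email_body)

    word_count = len(words)
    return {
        "text_volume_words": word_count,                              # overall text volume
        "avg_sentence_length": word_count / max(len(sentences), 1),   # words per sentence
        "punctuation_density": len(punctuation) / max(word_count, 1), # marks per word
    }

if __name__ == "__main__":
    sample = ("Dear colleague, as discussed in yesterday's meeting, please review "
              "the quarterly figures; your prompt confirmation, ideally before "
              "Friday, would be appreciated!")
    print(linguistic_features(sample))
```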
Darktrace also observed a decline in the frequency of fraudulent emails containing an attachment or a link. According to the firm, this may indicate that criminal actors are using generative AI tools such as ChatGPT to craft tailored attacks more quickly.
The survey also asked respondents to name the top three telltale signs of a phishing email. The most revealing clue, cited by 68% of respondents, was being asked to click on a link or open a file, while 61% pointed to an unknown sender or unexpected content and 61% to poor spelling and punctuation.
Some 70% of employees reported an increase in the frequency of scam emails over the previous six months, and 79% said their organization's spam filters block legitimate communications from reaching their inbox. In addition, 87% of employees were concerned about the amount of personal information available online that could be used in phishing or other email scams.
The sources for this piece include an article in ITPro.