New Mimecast Threat Intelligence: How ChatGPT Upended Email
Mimecast researchers built a detection engine to determine whether a message is human- or AI-generated, trained on a mixture of current and historical emails and synthetic AI-generated emails
Key Points
- AI tools enable threat actors to generate well-constructed and contextually accurate emails.
- Overall, generative AI emails have a polished and professional tone, making them more convincing.
- Analyst investigations should increasingly look for words and phrases associated with generative AI models and not just the sender information and payloads.
Across cybersecurity media, references to generative AI are not only increasing rapidly; publications also report its use for malicious activity, with a potential impact on every organization. When we interviewed Mimecast threat researchers for our recent Threat Intelligence report, questions were asked about the pervasiveness of AI in phishing emails - but no metrics could be quantified. That left unanswered questions: how prevalent is this, and can it be measured? Our data science team took on the challenge, building a detection engine to determine whether a message is human- or AI-generated, trained on a mixture of current and historical emails and synthetic AI-generated emails.
The research identifies a point in time, correlating with the release of ChatGPT, when we begin to observe an increasing trend in AI-generated emails. We also observed malicious AI-generated BEC, fraud, and phishing emails. The net effect is that analysts, security teams, and end users need to understand the indicators of AI-generated content, which can help them spot these attacks.
Telltale Signs of AI-Generated Emails
ChatGPT made AI-assisted email writing accessible to everyone, including malicious actors, and it is not the only tool available to them; in a previous blog post we outlined some of the other generative AI tools in use. Previously, such tools were aimed mainly at businesses. Now anyone can use AI to write well-crafted emails suited to various situations. As AI-generated content becomes more prevalent, discerning between human-written and machine-generated text has become increasingly difficult. One of the most notable characteristics of AI language models is their use of complex words and sentence structures, which can reveal their involvement in writing. Researchers at Cornell University found that AI language models favor certain words in scientific writing: “Analyzing 14 million papers from 2010-2024, they noticed a sharp increase in specific ‘style words’ after late 2022, when AI tools became widely available. For example, ‘delves’ appeared 25 times more often in 2024 than before. Other AI-favored words include ‘showcasing,’ ‘underscores,’ and ‘crucial.’”
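As a rough illustration of how such style words can be surfaced, the sketch below counts occurrences of a small, hypothetical watch list in an email body. The word list is an assumption drawn from the examples above, not a vetted detection set.

```python
import re
from collections import Counter

# Hypothetical watch list based on the AI-favored "style words" noted
# above; a production list would be tuned against real corpora.
AI_STYLE_WORDS = {"delves", "delve", "showcasing", "underscores",
                  "crucial", "intricacies", "stumbled"}

def style_word_counts(text: str) -> Counter:
    """Count occurrences of AI-favored style words in an email body."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t in AI_STYLE_WORDS)

email = ("This report delves into the intricacies of our Q3 results, "
         "showcasing growth that underscores a crucial trend.")
print(style_word_counts(email))
```

A raw count like this is only a weak signal on its own; the research described below goes further by training a model on whole messages.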
How We Know ChatGPT Changed Email
Mimecast’s data science team set out to train a model on the differences between human- and AI-written emails. In total, over 20,000 emails were used: real messages from Mimecast’s data, coupled with synthetic data generated by five LLMs (OpenAI’s GPT-4o, Anthropic’s Claude 3.5 Sonnet, Cohere’s Command R+, AI21’s Jamba Instruct, and Meta’s Llama 3). The resulting deep learning model learned which linguistic characteristics mark each message as human- or AI-written. For testing, to ensure the model did not overfit to the training set but generalized well, we used four datasets:
- 4,000 emails from Mimecast
- 2,600 LLM-generated synthetic emails
- Human and LLM dataset from Kaggle (link)
- Fraud dataset from Kaggle (link). All emails are assumed to be human-written, as they were collected before the rise of LLMs
Once training was complete, the model was shown one email after another and asked to determine whether each was written by a human or by AI. We repeated this exercise hundreds of times on different sets of emails, using the model to predict authorship across a subset of our data. The results, shown in Figure 1, highlight the increase in AI-written emails. It is important to note that the model was not built to identify malicious AI-written emails, but to estimate the pervasiveness of AI. Before undertaking this study we knew that AI-written messages were being seen, but not at what scale.
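To make the classification idea concrete, here is a deliberately tiny stand-in for the study's deep learning model: a word-level naive Bayes classifier trained on a handful of invented example emails. The training texts, labels, and word lists are all illustrative assumptions; Mimecast's actual model, architecture, and data are not public in this post.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Toy human-vs-AI text classifier (illustration only)."""

    def __init__(self):
        self.word_counts = {"human": Counter(), "ai": Counter()}
        self.doc_counts = {"human": 0, "ai": 0}

    def train(self, text: str, label: str) -> None:
        self.word_counts[label].update(tokenize(text))
        self.doc_counts[label] += 1

    def predict(self, text: str) -> str:
        vocab = set(self.word_counts["human"]) | set(self.word_counts["ai"])
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in ("human", "ai"):
            # log prior + log likelihoods with add-one smoothing
            score = math.log(self.doc_counts[label] / total_docs)
            n = sum(self.word_counts[label].values())
            for w in tokenize(text):
                score += math.log((self.word_counts[label][w] + 1) /
                                  (n + len(vocab)))
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayes()
# Invented training examples standing in for real labeled data.
clf.train("hey, running late, see you at lunch", "human")
clf.train("can you send me that invoice again thanks", "human")
clf.train("i hope this message finds you well as we delve into this", "ai")
clf.train("this proposal showcases and underscores a crucial opportunity", "ai")
print(clf.predict("i hope this email finds you well"))  # → ai
```

The real study used a deep learning model and tens of thousands of emails; the point of this sketch is only the workflow of training on labeled examples and then classifying unseen messages one at a time.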
Figure 1 – Human vs AI-written emails
We sampled 1,000 emails per month from January 2022 to June 2024. Of the 30,000 emails analyzed, 2,330 were AI-written, representing 7.8% of the dataset. More importantly, the line chart shows not only a marked increase in the use of AI to write emails but also a corresponding decline in human-written messages, consistent with what is being reported in publications. Whether this is attributable to non-native English speakers or to the use of AI to improve writing is unknown at present.
Examples of AI-Generated Emails
While reviewing the sampled emails, we found a few malicious examples containing distinctive language.
Example #1 of Gen AI spam message
Indicators:
- "delves into the intricacies of", "navigating through the complexities of"
- Overuse of bullets
Example #2 of Gen AI BEC message
Indicators:
- ‘I hope this message finds you well'
- Repetition of the words ‘gift cards’ and ‘surprise’
Example #3 of Gen AI BEC message
Indicators:
- ‘Hello!’
Example #4 of Gen AI phishing message
Indicators:
- ‘delve deeper into this’
- ‘stumbled’ or ‘stumbled upon’
- Long dashes (em dashes), frequently produced by ChatGPT
Recommendations
These findings indicate that manual phishing investigations should remain a crucial layer of defense, especially for messages flagged by end users. Threat researchers should scrutinize the language of a message for specific markers that align with our findings. By cross-referencing indicators such as “delve deeper into this” or “hello!” (particularly from senders who don’t commonly use such language) with known threat patterns, analysts can identify phishing threats more effectively, reducing remediation time and mitigating organizational risk.
As always, security teams should ensure their indicators evolve alongside large language models and new data sets.
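A simple triage helper along these lines might cross-reference an email body against a maintained list of indicator phrases. The list below is hypothetical, seeded from the examples in this post; any real deployment would curate and evolve it as models change.

```python
# Hypothetical indicator list drawn from the examples above; real
# deployments would maintain and evolve this alongside new LLM releases.
INDICATOR_PHRASES = [
    "delves into the intricacies of",
    "navigating through the complexities of",
    "i hope this message finds you well",
    "delve deeper into this",
    "stumbled upon",
]

def flag_indicators(body: str) -> list[str]:
    """Return indicator phrases present in an email body (case-insensitive)."""
    text = " ".join(body.lower().split())  # normalize case and whitespace
    return [p for p in INDICATOR_PHRASES if p in text]

suspect = ("Hello! I hope this message finds you well. I stumbled upon an "
           "opportunity and would like to delve deeper into this with you.")
print(flag_indicators(suspect))
```

Matches like these are a prioritization signal for analysts, not a verdict: legitimate senders use some of these phrases too, so hits should be weighed against sender history and other threat indicators.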