
URL: https://abnormalsecurity.com/blog/generative-ai-chatgpt-enables-threat-actors-more-attacks
Submission: On June 20 via manual from SG — Scanned from SG




GENERATIVE AI ENABLES THREAT ACTORS TO CREATE MORE (AND MORE SOPHISTICATED)
EMAIL ATTACKS

New attacks stopped by Abnormal show how attackers are using ChatGPT and similar
tools to create more realistic and convincing email attacks.
Dan Shiebler
June 14, 2023

--------------------------------------------------------------------------------

Anyone who has spent time online in 2023 has likely heard about ChatGPT and
Google Bard, two of the more popular platforms that harness generative
artificial intelligence (AI) to respond to written prompts. In the few months
since their release, they’ve already had a profound impact on many aspects of
our digital world.

By leveraging advanced machine learning techniques, generative AI enables
computers to generate original content including text, images, music, and code
that closely resembles what a human could create. The technology itself has
far-reaching implications, many of which can be used for both personal and
professional good. Artists and authors can use it to explore new creative
directions, pilots and doctors can use it for training and real-world
simulation, and travel agents can ask it to create trip itineraries—among
thousands of other applications.


But like anything else, cybercriminals can take advantage of this technology as
well. And unfortunately, they already have. Platforms including ChatGPT can be
used to generate realistic and convincing phishing emails and dangerous malware,
while tools like DeepFaceLab can create sophisticated deepfake content including
manipulated video and audio recordings. And this is likely only the beginning.


NEW EMAIL ATTACKS GENERATED BY ARTIFICIAL INTELLIGENCE

Security leaders have worried about the possibilities of AI-generated email
attacks since ChatGPT was released, and we’re starting to see those fears
validated. Abnormal has recently stopped a number of attacks that contain
language strongly suspected to be written by AI.

Here are three real-life examples stopped for our customers, along with our
analyses of how these attacks were deemed likely to be AI-generated. They
showcase a variety of attack types, including credential phishing, an evolution
of the traditional business email compromise (BEC) scheme, and vendor fraud.



AI-GENERATED PHISHING ATTACK IMPERSONATES FACEBOOK

Users have long been taught to look for typos and grammatical errors to spot an
attack email, but generative AI can create perfectly crafted phishing emails
that look completely legitimate—making it nearly impossible for employees to
distinguish an attack from a real email.

This email sent by “Meta for Business” states that the recipient’s Facebook Page
has been found in violation of community standards and the Page has been
unpublished. To fix the issue, the recipient should click on the included link
and file an appeal. Of course, that link actually leads to a phishing page where
if the user were to input their credentials, attackers would immediately have
access to their Facebook profile and associated Facebook Page.




There are no grammatical errors anywhere in the email, and the text sounds
nearly identical to the language expected from Meta for Business. Because the
email is so well-crafted, it is much harder for humans to detect, and
recipients are more likely to click the link than if the message had been
riddled with grammatical errors or typos.


HOW TO DETECT AI-GENERATED TEXT

The simplest way to detect whether an email was written by AI is to use AI.
Within the Abnormal platform, we run the email text through several open-source
large language models to analyze how likely each word is, given the context to
its left. If the words in the email are consistently high-likelihood (meaning
each word aligns closely with what an AI model would predict, more so than in
human-written text), we classify the email as possibly written by AI.

Here is the output of that analysis for the email example above. Green words are
judged as highly aligned with the AI—in the top 10 predicted words—while yellow
words are in the top 100 predicted words.


[Source: http://gltr.io/ ]
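The rank-coloring idea above can be sketched in code. This toy uses a tiny
bigram table in place of a real large language model (the models Abnormal
actually uses are not public), but the mechanics are the same: rank each word
among the model's predictions for its left context, then flag text whose words
are consistently top-ranked. All names, cutoffs, and thresholds here are
illustrative.

```python
# GLTR-style word-rank analysis, sketched with a toy bigram "language model".
from collections import Counter

def train_bigram(corpus):
    """Count next-word frequencies for each word in a training corpus."""
    table = {}
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):
            table.setdefault(a, Counter())[b] += 1
    return table

def word_ranks(table, text):
    """Rank of each word among the model's predictions for its left context
    (rank 1 = the model's top guess; words the model never saw get a large rank)."""
    words = text.lower().split()
    ranks = []
    for a, b in zip(words, words[1:]):
        preds = [w for w, _ in table.get(a, Counter()).most_common()]
        ranks.append(preds.index(b) + 1 if b in preds else 10_000)
    return ranks

def likely_ai(ranks, green_cutoff=10, threshold=0.8):
    """Flag text when most words fall inside the model's top predictions,
    mirroring the 'mostly green' pattern in the figures above."""
    green = sum(1 for r in ranks if r <= green_cutoff)
    return green / max(len(ranks), 1) >= threshold
```

A production system would swap the bigram table for a neural language model
scoring the full left context, but the classification step—thresholding the
fraction of top-ranked words—works the same way.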

After detecting emails that show indicators of generative AI, we validate these
assumptions with two external AI detection tools: OpenAI Detector and GPTZero.

It is important to note that this method is not perfect, and these tools may
flag some non-AI-generated emails. For example, emails created from a
template—such as marketing or sales outreach emails—may contain sequences of
words nearly identical to those an AI would generate. Likewise, emails
containing well-known passages—such as copy-pasted text from the Bible or the
Constitution—may trigger a false AI classification.

However, these analyses do give us some indication that an email may have been
created by AI and we use that signal (among thousands of others) to determine
malicious intent.
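As a purely illustrative sketch of how one signal among many might contribute
to a verdict, the snippet below folds an AI-likelihood score into a logistic
malice score. The feature names, weights, and bias are invented for this
example and do not reflect Abnormal's actual models, which combine thousands
of learned signals.

```python
# Hypothetical combination of detection signals into a single malice score.
import math

def malice_score(features, weights, bias=-3.0):
    """Logistic combination of signals into a 0-1 score; the negative bias
    means an email is presumed benign unless signals push it up."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Invented features for a payroll-diversion-style email.
email = {
    "ai_generated_likelihood": 0.9,   # from the language-model analysis above
    "sender_impersonation": 1.0,      # display name matches a known employee
    "new_bank_details_request": 1.0,  # payroll-change intent detected
}
weights = {
    "ai_generated_likelihood": 1.0,
    "sender_impersonation": 2.5,
    "new_bank_details_request": 2.0,
}
score = malice_score(email, weights)  # high, but no single signal decides it
```

Note that the AI-likelihood signal alone would not push a benign email over
any sensible threshold—which matches the point above that it is one indicator
among many, not a verdict by itself.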



EMPLOYEE IMPERSONATED IN AI-CREATED PAYROLL DIVERSION SCAM

Unfortunately, use of generative AI to create malicious emails has moved beyond
phishing and into business email compromise. While phishing emails contain a
malicious link that can aid detection, these text-based BEC emails lack those
traditional indicators of compromise.

In this real-world attack, an employee’s account has been impersonated and the
attacker has emailed the payroll department to update the direct deposit
information on file. Again, the email is free of grammatical errors or typos,
and is written very professionally—as the payroll specialist likely expects it
to be.




Other than the impersonated sender name, there is nothing here to indicate an
attack, showcasing just how dangerous generative AI can be in the wrong hands.

Here is the result of our generative AI-likelihood analysis:


[Source: http://gltr.io/ ]


AI-GENERATED VENDOR COMPROMISE AND INVOICE FRAUD

And it’s not just traditional BEC either, as attackers are also using
ChatGPT-like tools to impersonate vendors. Vendor email compromise (VEC) attacks
are among the most successful social engineering attacks because they exploit
the trust that already exists in relationships between vendors and customers.
And because discussions with vendors often involve issues around invoices and
payments, it becomes harder to catch attacks that mimic these
conversations—especially when there are no suspicious indicators of attack like
typos.

This vendor fraud email involves an impersonation of an attorney, requesting
payment for an outstanding invoice.




Similar to the two examples above, this email also shows no grammatical errors
and is written in a tone that’s expected from an attorney. The impersonated
attorney is also from a real-life law firm—a detail that gives the email an even
greater sense of legitimacy and makes it more likely to deceive its victim.

Again, our analysis determined a high likelihood that this email was generated
by AI.


[Source: http://gltr.io/ ]


FIGHTING BAD AI WITH GOOD AI

A decade ago, cybercriminals created new domains to run their attacks, which
were quickly identified by security tools as malicious and subsequently blocked.
In response, threat actors changed their tactics and began using free webmail
accounts like Gmail and Outlook, knowing that security tools could not block
these domains since they are often used to conduct legitimate business.

Generative AI is much the same. Employees can use ChatGPT and Google Bard to
create legitimate communications for normal, everyday business, which means that
security tools cannot simply block every email that appears to be generated by
AI. Instead, they must use this as one indicator of potential attack, alongside
thousands of other signals.

As these examples show, generative AI will make it nearly impossible for the
average employee to tell the difference between a legitimate email and a
malicious one, which makes it more vital than ever to stop attacks before they
reach the inbox. Modern solutions like Abnormal use AI to understand the signals
of known good behavior, creating a baseline for each user and each organization
and then blocking the emails that deviate from that—whether they are written by
AI or by humans.
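A minimal sketch of that baselining idea, assuming a per-user record of
previously observed feature values; the features and scoring here are invented
for illustration and are far simpler than a production behavioral model.

```python
# Toy behavioral baseline: learn what "normal" looks like for one user,
# then score new messages by how far they deviate from it.
from collections import defaultdict

class UserBaseline:
    def __init__(self):
        self.seen = defaultdict(set)  # feature name -> values observed before

    def observe(self, message):
        """Fold one known-good message into the user's baseline."""
        for key, value in message.items():
            self.seen[key].add(value)

    def anomaly_score(self, message):
        """Fraction of this message's features never seen before for the user."""
        novel = sum(1 for k, v in message.items() if v not in self.seen[k])
        return novel / max(len(message), 1)

baseline = UserBaseline()
baseline.observe({"sender_domain": "corp.com", "topic": "status update"})
baseline.observe({"sender_domain": "corp.com", "topic": "meeting notes"})

# A payroll-change request from a free webmail domain deviates on every axis.
suspect = {"sender_domain": "gmail.com", "topic": "change direct deposit"}
score = baseline.anomaly_score(suspect)
```

The key property is that this scoring is indifferent to how the text was
produced: an email that deviates from the user's baseline is flagged whether
it was written by AI or by a human.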

These three examples are only a small sample of the AI-generated email attacks
that Abnormal now sees on a near-daily basis. Unfortunately, as the technology
continues to evolve, cybercrime will evolve with it, and both the volume and
the sophistication of these attacks will continue to increase. Now more than
ever, it’s time to take a hard look at AI, both good and bad, and understand
how good AI can stop the bad—before it’s too late.

Discover more about the rise of AI-generated email attacks in our new CISO Guide
to Generative AI Attacks.


Get the Guide


