SECURITY INTELLIGENCE


AI VS. HUMAN DECEIT: UNRAVELLING THE NEW AGE OF PHISHING TACTICS

Artificial Intelligence

--------------------------------------------------------------------------------

October 24, 2023 By Stephanie Carruthers 7 min read

--------------------------------------------------------------------------------




Attackers innovate nearly as fast as technology develops. Now, as we enter the
AI era, machines not only mimic human behavior but also permeate nearly every
facet of our lives. Yet, despite mounting anxiety about AI’s implications, the
full extent of its potential misuse by attackers is largely unknown.

To better understand how attackers can capitalize on generative AI, we conducted
a research project that sheds light on a critical question: Do the current
generative AI models have the same deceptive abilities as the human mind?

Imagine a scenario where AI squares off against humans in a battle of phishing.
The objective? To determine which contender can get a higher click rate in a
phishing simulation against organizations. As someone who writes phishing emails
for a living, I was excited to find out the answer.

With only five simple prompts, we were able to trick a generative AI model into
developing highly convincing phishing emails in just five minutes, roughly the
same time it takes me to brew a cup of coffee. It generally takes my team about
16 hours to build a phishing email, and that’s without factoring in the
infrastructure setup, so attackers can potentially save nearly two days of work
by using generative AI models. And the AI-generated phish was so convincing
that it nearly beat the one crafted by experienced social engineers; the fact
that it came that close is itself an important development.

In this blog, we’ll detail how the AI prompts were created, how the test was
conducted and what this means for social engineering attacks today and tomorrow.


ROUND ONE: THE RISE OF THE MACHINES

In one corner, we had AI-generated phishing emails with highly cunning and
convincing narratives.

Creating the prompts. Through a systematic process of experimentation and
refinement, a collection of only five prompts was designed to instruct ChatGPT
to generate phishing emails tailored to specific industry sectors.

To start, we asked ChatGPT to detail the primary areas of concern for employees
within those industries. After prioritizing the industry and employee concerns
as the primary focus, we prompted ChatGPT to make strategic selections on the
use of both social engineering and marketing techniques within the email. These
choices aimed to optimize the likelihood of a greater number of employees
clicking on a link in the email itself. Next, a prompt asked ChatGPT who the
sender should be (e.g., someone internal to the company, a vendor, an outside
organization, etc.). Lastly, we asked ChatGPT to add the following completions
to create the phishing email:

 1. Top areas of concern for employees in the healthcare industry: Career
    Advancement, Job Stability, Fulfilling Work and more
 2. Social engineering techniques that should be used: Trust, Authority, Social
    Proof
 3. Marketing techniques that should be used: Personalization, Mobile
    Optimization, Call to Action
 4. Person or company it should impersonate: Internal Human Resources Manager
 5. Email generation: Given all the information listed above, ChatGPT generated
    the below redacted email, which was later sent by my team to more than 800
    employees.



I have nearly a decade of social engineering experience and have crafted
hundreds of phishing emails, and even I found the AI-generated phishing emails
fairly persuasive. In fact, three organizations originally agreed to
participate in this research project, and two backed out completely after
reviewing both phishing emails because they expected a high success rate. As
the prompts showed, the organization that did participate was in the
healthcare industry, currently one of the most targeted industries.

Productivity gains for attackers. While a phishing email typically takes my team
about 16 hours to craft, the AI phishing email was generated in just five
minutes with only five simple prompts.


ROUND TWO: THE HUMAN TOUCH

In the other corner, we had seasoned X-Force Red social engineers.

Armed with creativity, and a dash of psychology, these social engineers created
phishing emails that resonated with their targets on a personal level. The human
element added an air of authenticity that’s often hard to replicate.

Step 1: OSINT – Our approach to phishing invariably begins with open-source
intelligence (OSINT) gathering: the retrieval of publicly accessible
information, which is then analyzed and used as the foundation of a social
engineering campaign. Valuable data sources for OSINT include LinkedIn, the
organization’s official blog, Glassdoor and many others.

During our OSINT activities, we successfully uncovered a blog post detailing the
recent launch of an employee wellness program, coinciding with the completion of
several prominent projects. Encouragingly, this program had favorable
testimonials from employees on Glassdoor, attesting to its efficacy and employee
satisfaction. Furthermore, we identified an individual responsible for managing
the program via LinkedIn.

Step 2: Email crafting – Using the data gathered during the OSINT phase, we
began carefully constructing our phishing email. As a foundational step, we
needed to impersonate someone with the authority to address the topic
credibly. To enhance the aura of authenticity and familiarity, we incorporated
a legitimate website link to a recently concluded project.

To add persuasive impact, we introduced perceived urgency through “artificial
time constraints.” We told recipients that the survey comprised merely “five
brief questions,” assured them that completing it would take no more than “a
few minutes” of their time and gave a deadline of “this Friday.” This framing
underscored the minimal imposition on their schedules, reinforcing the
nonintrusive nature of the request.

Using a survey as a phishing pretext is usually risky, as surveys are often
seen as a red flag or simply ignored. However, considering the data we
uncovered, we decided that the potential benefits could outweigh the
associated risks.

The following redacted phishing email was sent to over 800 employees at a global
healthcare organization:




THE CHAMPION: HUMANS TRIUMPH, BUT BARELY!

After an intense round of A/B testing, the results were clear: humans emerged
victorious but by the narrowest of margins.



While the human-crafted phishing emails managed to outperform AI, it was a
nail-bitingly close contest. Here’s why:

 * Emotional Intelligence: Humans understand emotions in ways that AI can only
   dream of. We can weave narratives that tug at the heartstrings and sound more
   realistic, making recipients more likely to click on a malicious link. For
   example, humans chose a legitimate example within the organization, while AI
   chose a broad topic, making the human-generated phish more believable.
 * Personalization: In addition to incorporating the recipient’s name into the
   introduction of the email, we also provided a reference to a legitimate
   organization, delivering tangible advantages to their workforce.
 * Short and succinct subject line: The human-generated phish had an email
   subject line that was short and to the point (“Employee Wellness Survey”)
   while the AI-generated phish had an extremely lengthy subject line (“Unlock
   your Future: Limited Advancements at Company X”), potentially causing
   suspicion even before employees opened the email.

Not only did the AI-generated phish lose to humans, but it was also reported as
suspicious at a higher rate.
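With a contest this close, a natural follow-up question is whether the gap between the two emails is statistically meaningful at all. A minimal sketch using a standard two-proportion z-test; the click counts below are purely hypothetical, since the study’s exact figures are not reproduced here:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Two-tailed two-proportion z-test on click rates from an A/B phishing test."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))       # two-tailed p-value
    return z, p_value

# Hypothetical counts for illustration only (not the study's numbers):
# human-crafted phish: 112 clicks of 800 sent; AI-generated: 88 of 800.
z, p = two_proportion_z(112, 800, 88, 800)
```

With numbers of this size, a roughly three-point gap across 800 recipients per arm does not clear the conventional 0.05 significance bar, which is consistent with calling the contest a near tie.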




THE TAKEAWAY: A GLIMPSE INTO THE FUTURE

While X-Force has not witnessed the wide-scale use of generative AI in current
campaigns, tools such as WormGPT, built as unrestricted or semi-restricted
LLMs, have been observed for sale on various forums advertising phishing
capabilities, showing that attackers are testing AI’s use in phishing
campaigns. While even restricted versions of generative AI models can be
tricked into phishing via simple prompts, these unrestricted versions may
offer more efficient ways for attackers to scale sophisticated phishing emails
in the future.

Humans may have narrowly won this match, but AI is constantly improving. As
technology advances, we can expect AI to become more sophisticated and
potentially even outperform humans one day. As we know, attackers are
constantly adapting and innovating. Just this year, we’ve seen scammers
increasingly use AI-generated voice clones to trick people into sending money
or gift cards, or into divulging sensitive information.

While humans may still have the upper hand when it comes to emotional
manipulation and crafting persuasive emails, the emergence of AI in phishing
signals a pivotal moment in social engineering attacks. Here are five key
recommendations for businesses and consumers to stay prepared:

 1. When in doubt, call the sender: If you’re questioning whether an email is
    legitimate, pick up the phone and verify. Consider choosing a safe word
    with close friends and family members that you can use in the case of
    vishing or an AI-generated phone scam.
 2. Abandon the grammar stereotype: Dispel the myth that phishing emails are
    riddled with bad grammar and spelling errors. AI-driven phishing attempts
    are increasingly sophisticated, often demonstrating grammatical correctness.
    That’s why it’s imperative to re-educate our employees and emphasize that
    grammatical errors are no longer the primary red flag. Instead, we should
    train them to be vigilant about the length and complexity of email content.
    Longer emails, often a hallmark of AI-generated text, can be a warning sign.
 3. Revamp social engineering programs: This includes bringing techniques like
    vishing into training programs. Vishing is simple to execute and often
    highly effective: an X-Force report found that targeted phishing campaigns
    that added phone calls were 3X more effective than those that didn’t.
 4. Strengthen identity and access management controls: Advanced identity access
    management systems can help validate who is accessing what data, whether
    they have the appropriate entitlements and that they are who they say they
    are.
 5. Constantly adapt and innovate: The rapid evolution of AI means that cyber
    criminals will continue to refine their tactics. We must adopt that same
    mindset of continuous adaptation and innovation. Regularly updating
    internal TTPs, threat detection systems and employee training materials is
    essential to staying one step ahead of malicious actors.
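The red flags discussed above (a lengthy subject line, an unusually long body, artificial time pressure) can be combined into a first-pass triage filter. This is a crude illustrative sketch for training or tooling purposes, not a detector; the thresholds and keyword list are assumptions, not values from the study:

```python
import re

def phishing_red_flags(subject: str, body: str) -> list[str]:
    """Return a list of crude heuristic warning signs for an email.

    Thresholds below are illustrative assumptions, not tuned values.
    """
    flags = []
    if len(subject.split()) > 7:          # verbose subjects like "Unlock your Future: ..."
        flags.append("long subject line")
    if len(body.split()) > 300:           # long bodies, a noted hallmark of AI-generated text
        flags.append("unusually long body")
    if re.search(r"\b(urgent|immediately|this friday|act now)\b", body, re.I):
        flags.append("artificial time pressure")
    return flags
```

A short, plausible subject such as “Employee Wellness Survey” raises no flag here, which underlines that length heuristics alone cannot catch a well-crafted human phish; they belong alongside, not instead of, verification habits and technical controls.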

The emergence of AI in phishing attacks challenges us to reevaluate our
approaches to cybersecurity. By embracing these recommendations and staying
vigilant in the face of evolving threats, we can strengthen our defenses,
protect our enterprises and ensure the security of our data and people in
today’s dynamic digital age.

For more information on X-Force’s security research, threat intelligence and
hacker-led insights, visit the X-Force Research Hub.

To learn more about how IBM can help businesses accelerate their AI journey
securely, visit here.


Artificial Intelligence (AI) | Cybersecurity | Phishing | Phishing
Attack | Social Engineering | X-Force
Stephanie Carruthers
Chief People Hacker for IBM X-Force Red