POLICY


CYBER-DEFENSE SYSTEMS SEEK TO OUTDUEL CRIMINALS IN AI RACE


AI TOOLS ON THE WEB CAN CRAFT SPEAR-PHISHING EMAILS, BREAK PASSWORDS AND WRITE
MALWARE




At a hearing of the Senate Intelligence Committee in September, Chairman Mark
Warner, D-Va., said generative models can improve cybersecurity. (Bill Clark/CQ
Roll Call)
By Gopal Ratnam
Posted October 24, 2023 at 7:00am

Not long after generative artificial intelligence models like ChatGPT were
introduced with a promise to boost economic productivity, scammers launched the
likes of FraudGPT, which lurks on the dark web promising to assist criminals by
crafting a finely tailored cyberattack. 

The cybersecurity firm Netenrich in July identified FraudGPT as a “villain
avatar of ChatGPT” that helps craft spear-phishing emails, provides tools to
break passwords, and writes undetectable malware or other malicious code.

And so the AI arms race was on.

Companies are embracing cyber-defenses based on generative AI hoping to outpace
attackers’ use of similar tools. But more effort is needed, experts warn,
including efforts to safeguard the data and algorithms behind generative AI
models, lest the models themselves fall victim to cyberattacks.



This month, IBM released the results of a survey of corporate executives in
which 84 percent of respondents said they would “prioritize generative AI
security solutions over conventional ones” for cybersecurity purposes. By 2025,
AI-based security spending is expected to be 116 percent greater than in 2021,
according to the survey, which was based on responses from 200 CEOs, chief
security officers and other executives at U.S.-based companies.

Top lawmakers already are concerned about the dangers that AI can pose to
cybersecurity. 

At a hearing of the Senate Intelligence Committee in September, Chairman Mark
Warner, D-Va., said “generative models can improve cybersecurity, helping
programmers identify coding errors and contributing toward safer coding
practices … but with that potential upside, there’s also a downside since these
same models can just as readily assist malicious actors.”

Separately, the Pentagon’s Defense Advanced Research Projects Agency in August
announced a competition to design AI-based tools that can fix bugs in commonly
used software. The two-year contest is intended to create systems that can
automatically defend any kind of software from attack.

IBM said it is developing cybersecurity solutions based on generative AI models
to “improve the speed, accuracy and efficacy of threat detection and response
capabilities and drastically increase productivity of security teams.”




DETECTING DEVIATIONS

Darktrace, a cybersecurity firm with offices in the United States and around the
world, is deploying custom-built generative AI models for cybersecurity
purposes, said Marcus Fowler, the company’s senior vice president for strategic
engagements and threats. 

The company is using AI to predict potential attacks and designing proprietary
self-learning AI models that observe and understand “the behavior of the
environment that they’re deployed within,” meaning a computer network’s normal
patterns of use in a corporate or government setting. It maps activities of
individuals, peer groups and outliers, said Fowler, who previously served at
the CIA, where he developed the agency’s global cyber operations.

The system then is able to detect “deviations from normal and provide a context
for such deviations,” allowing security experts to take action, he said. 
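The baseline-and-deviation approach Fowler describes can be illustrated with a greatly simplified sketch: learn a host's "normal" level of activity from history, then flag observations that fall far outside it. The data, function names and three-sigma threshold below are illustrative assumptions, not Darktrace's actual method, which relies on proprietary self-learning models.

```python
# Minimal sketch of baseline-and-deviation anomaly detection:
# learn a host's normal hourly connection volume, then flag
# observations far outside that baseline.
from statistics import mean, stdev

def build_baseline(history):
    """history: past per-hour connection counts for one host."""
    return mean(history), stdev(history)

def is_anomalous(observation, baseline, threshold=3.0):
    """Flag observations more than `threshold` standard deviations from normal."""
    mu, sigma = baseline
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Typical hourly counts observed during a learning period (made-up data).
history = [102, 98, 110, 95, 105, 99, 101, 103]
baseline = build_baseline(history)

print(is_anomalous(104, baseline))  # within the normal range
print(is_anomalous(900, baseline))  # large spike, flagged as a deviation
```

A real deployment would model many signals per entity (and per peer group, as Fowler notes) rather than a single count, but the principle is the same: context comes from comparing an observation against learned normal behavior.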

The company also developed AI systems to study how security experts investigate
a breach and create “an autonomous triaging capability” that automates the first
30 minutes or so of an investigation, allowing security officials to take swift
action when an attack or a breach is detected, Fowler said. 



In addition to detecting anomalies and aiding in investigations of a
cyberattack, AI tools ought to be useful in analyzing malware to determine the
origins of attackers, said Jose-Marie Griffiths, president of Dakota State
University, who previously served on the congressional National Security
Commission on Artificial Intelligence. 

“Reverse engineering a malware to identify who sent it, what was the intent, is
one area where we haven’t seen a lot” of use of AI tools, “but we could
potentially see quite a bit of work, and that’s an area we are interested in,”
Griffiths said, referring to the university’s ongoing work.

While malware is mostly software code, hackers often include notes in their own
language, either to themselves or others, about a particular line of code’s
function. Using AI to glean such messages, especially those written in languages
other than English, could help sharpen attribution, Griffiths said. 
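The first step in recovering such notes is pulling printable text out of the binary, in the manner of the Unix `strings` utility; language identification and translation would then run on the results. The snippet below is a toy sketch of that extraction step, with an invented sample blob, and is far simpler than real malware analysis, which works on disassembled code with much more context.

```python
# Minimal sketch of extracting human-readable strings from a binary blob,
# the step that precedes translating or attributing attacker comments.
import re

def extract_strings(blob: bytes, min_len: int = 5):
    """Return runs of printable ASCII at least `min_len` bytes long."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, blob)]

# Made-up bytes standing in for a malware sample: junk, one embedded
# developer note, more junk, and a run too short to keep.
sample = (b"\x00\x01MZ\x90"
          + b"todo: fix c2 beacon interval"
          + b"\xff\xfe" + b"ok" + b"\x00")
print(extract_strings(sample))  # ['todo: fix c2 beacon interval']
```

Handling non-English comments, as Griffiths suggests, would also require scanning for other encodings (UTF-8, UTF-16, code pages) rather than ASCII alone.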

Use of generative AI models to improve cybersecurity is gaining momentum, but
security experts also must pay attention to safeguarding the generative AI
models themselves because attackers could attempt to break into the models and
their underlying data, Griffiths said. 

Broader use of generative AI in cybersecurity could help ease chronic problems
facing security experts, said John Dwyer, head of research at IBM’s X-Force, the
company’s cybersecurity unit. 



“Alert fatigue, talent shortage and mental health issues have sort of been
associated with cybersecurity for a long time,” Dwyer said. “And it turns out
that we can apply [AI] technologies to really move the needle to help address
some of these core problems that everyone’s been dealing with.” 

Cybersecurity experts are burned out by being constantly on alert, doing
repetitive tasks, “sifting through a bunch of hay looking for a needle,” and
either leaving the industry or confronting mental health challenges, Dwyer
said. 

Using AI models to offload some of those repetitive tasks could ease the
workload, and allow security analysts to focus on high-value tasks, Dwyer said. 

As with all advances in technology online, progress in legitimate uses on the
publicly accessible parts of the web often is accompanied by a “much faster rate
of growth” in the underwater or dark web, where criminals and hackers operate,
Griffiths said. In the case of generative AI, as defenders rush to incorporate
the tools in defense, the attackers are racing to use the same tools. 

“That’s unfortunately the battle we are in,” she said. “It’s going to be
constant.”


RECENT STORIES


JOHNSON BRINGS DEFENSE BACKGROUND TO SPEAKERSHIP


GOP-DRAWN MAP SPURS FRESHMAN REP. JEFF JACKSON TO RUN FOR NC ATTORNEY GENERAL


JOHNSON’S HEALTH POLICY RECORD FOCUSED ON ABORTION, GENDER CARE


CAPITOL INK | BIG SHOES TO FILL


ANSWERING THE CALL: MONTANA VOLUNTEERS TRY TO FILL RURAL EMT GAP


‘NEVER GO INTO A MEETING UNPREPARED’: SEN. ALEX PADILLA ON WORKING FOR DIANNE
FEINSTEIN


THE SOURCE FOR NEWS ON
CAPITOL HILL SINCE 1955


 * About
 * Contact Us
 * Advertise
 * Events
 * Privacy
 * RC Jobs
 * Newsletters
 * The Staff
 * Subscriptions

CQ Roll Call is a part of FiscalNote, the leading technology innovator at the
intersection of global business and government. Copyright 2023 CQ Roll Call. All
rights reserved.

X

X