THE IMPACT OF AI ON CYBERSECURITY

By Igor Baikalov on December 5, 2023

Artificial intelligence has drawn a lot of media attention for everything from
taking people’s jobs to spreading disinformation and infringing copyrights, but
AI’s impact on cybersecurity may be its most pressing immediate issue.

AI’s impact on security teams is predictably double-edged. When properly
applied, it can be a powerful force multiplier for cybersecurity practitioners,
through such means as processing vast amounts of data at computer speeds,
finding connections between distant data points, discovering patterns, detecting
attacks, and predicting attack progressions. But, as security practitioners are
well aware, AI is not always properly applied. It intensifies the already
imposing lineup of cybersecurity threats, from identity compromise and phishing
to ransomware and supply chain attacks.

CISOs and security teams need to understand both the advantages and risks of AI,
which requires a substantial rebalancing of skills. Security engineers, for
example, must grasp the basics of machine learning, model quality and biases,
confidence levels, and performance metrics. Data scientists need to learn
cybersecurity fundamentals, attack patterns, and risk modeling to effectively
contribute to hybrid teams.


AI MODELS NEED PROPER TRAINING TO ASSIST CYBERSECURITY

The proliferation of AI-fueled threats compounds the challenges for CISOs and
already overworked security teams, who must not only deal with sophisticated
new phishing campaigns crafted by a large language model (LLM) like ChatGPT but
also still worry about the unpatched server in the DMZ that could pose an even
bigger threat.

AI, on the other hand, can save teams a lot of time and effort in risk
assessment and threat detection. It can also help with response, although that
must be done carefully. An AI model can shoulder-surf analysts to learn how they
triage incidents, and then either perform those tasks on its own or prioritize
cases for human review. But teams need to be sure that the right people are
instructing the AI.
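
To make the idea concrete, here is a minimal sketch of a triage-prioritization
model in Python, assuming a history of cases labeled by trusted analysts. The
file names, feature names, and model choice are hypothetical illustrations, not
any particular product’s implementation:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Historical cases triaged by senior analysts: 1 = escalate, 0 = benign.
history = pd.read_csv("triaged_cases.csv")
features = ["bytes_out", "dest_rarity", "off_hours", "asset_criticality"]
X, y = history[features], history["escalated"]

# Learn how analysts triage from their past decisions.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# Score the incoming queue so the highest-risk cases reach humans first.
queue = pd.read_csv("new_cases.csv")
queue["risk"] = model.predict_proba(queue[features])[:, 1]
print(queue.sort_values("risk", ascending=False)[["case_id", "risk"]].head())

The ranking is only as good as the analysts whose decisions fill
triaged_cases.csv, which is exactly the problem the experiment below exposes.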

Years ago, for example, I ran an experiment where I had 10 analysts of varying
skill levels review 100 cases of suspected data exfiltration. Two senior
analysts correctly identified all positives and negatives, three less
experienced analysts got almost all of the cases wrong, and the remaining five
got random results. No matter how good an AI model is, it would be useless if
trained by a team like that.
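
One way to catch that problem before it poisons a model is to score each
analyst’s agreement with an adjudicated ground-truth set and keep only reliable
labelers. A sketch of that screening step (file names and the 0.8 cutoff are
illustrative; Cohen’s kappa is near 0 for random guessing and near 1 for
consistent agreement):

import pandas as pd
from sklearn.metrics import cohen_kappa_score

labels = pd.read_csv("analyst_labels.csv")  # analyst_id, case_id, label
truth = pd.read_csv("ground_truth.csv")     # case_id, label

# Compare each analyst's labels against the adjudicated ground truth.
merged = labels.merge(truth, on="case_id", suffixes=("_analyst", "_true"))
for analyst, group in merged.groupby("analyst_id"):
    kappa = cohen_kappa_score(group["label_analyst"], group["label_true"])
    verdict = "trust" if kappa > 0.8 else "exclude from training data"
    print(f"{analyst}: kappa={kappa:.2f} -> {verdict}")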

AI is like a powerful car: It can do wonders in the hands of an experienced
driver or a lot of damage in the hands of an inexperienced one. That’s one area
where the skills shortage can affect AI’s cybersecurity impact.


HOW CAN CTOS CHOOSE AN AI SOLUTION?

Given the hype about AI, organizations might be tempted to simply rush into
adopting the technology. But in addition to properly training AI, there are
questions CTOs need to answer, starting with suitability issues:

 * Does AI fit into the organization’s ecosystem? This includes the platform,
   external components such as a database and search engine, free and
   open-source software and licensing, and also the organization’s security and
   certifications, backup, and failover. 
 * Does AI scale to the size of the enterprise?
 * What skillsets are required for the security team to maintain and operate AI?

CTOs must also address questions specific to an AI solution:

 * Which of the claimed functions of a specific AI product align with your
   business objectives?
 * Can the same functionality be achieved using existing tools?
 * Does the solution actually detect threats?

That last question can be difficult to answer because malicious cybersecurity
events occur on a minuscule scale compared with legitimate activity. In a
limited proof-of-concept study using live data, an AI tool may detect nothing if
nothing is there. Vendors often use synthetic data or Red Team attacks to
demonstrate an AI’s capability, but the question remains whether it is
demonstrating true detection capability or simply validating the assumption
under which the indicators were generated.
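
The base-rate arithmetic behind that caveat is worth spelling out. Even a
detector with a 99% true-positive rate and a 1% false-positive rate produces
mostly false alarms when attacks are rare, and a short proof of concept may
simply contain nothing to find. The numbers below are purely illustrative:

# Assumed rates for illustration only.
prevalence = 1e-5  # 1 malicious event per 100,000
tpr = 0.99         # detector's true-positive rate
fpr = 0.01         # detector's false-positive rate

# Precision = P(attack | alert), by Bayes' rule: roughly 0.1%,
# i.e., about 1,000 false alarms for every real attack flagged.
precision = (tpr * prevalence) / (tpr * prevalence + fpr * (1 - prevalence))
print(f"P(attack | alert) = {precision:.4%}")

# Expected malicious events in a PoC that observes one million events:
print(f"expected attacks in PoC: {1_000_000 * prevalence:.0f}")  # 10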

It’s difficult to determine why an AI thinks something was an attack because AI
algorithms are essentially black boxes, still unable to explain how they reached
a certain conclusion – a problem that DARPA’s Explainable AI (XAI) program was
created to address.


MITIGATING THE RISKS OF AI

An AI solution is only as good as the data it works with. To ensure ethical
behavior, AI models should be trained on ethical data, not on the wholesale
collection of garbage that is on the World Wide Web. And any data scientist
knows that producing a well-balanced, unbiased, clean dataset to train a model
is a difficult, tedious, and unglamorous task. 
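
Even the most basic hygiene steps behind that task have to be scripted and
rerun every time the data changes. A minimal sketch of a few such checks
(file and column names are hypothetical):

import pandas as pd

df = pd.read_csv("training_data.csv")

# Exact duplicates quietly inflate apparent model accuracy; drop them.
before = len(df)
df = df.drop_duplicates()
print(f"dropped {before - len(df)} duplicate rows")

# Class balance: a heavy skew calls for resampling or reweighting.
print(df["label"].value_counts(normalize=True))

# Missing values: impute or drop, but make the decision explicit.
print(df.isna().mean().sort_values(ascending=False).head())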

Because of this, AI models, including LLMs, may eventually be managed in the
way that would best serve cybersecurity – as specialized models (as opposed to
“all-knowing” general-purpose models) that serve particular fields and are
trained on data curated by subject matter experts in those fields.

Trying to censor AI in response to the media outcry of the moment will not solve
the problem. Only diligent work in creating reliable datasets can do that. Until
AI companies – and the VCs that back them – accept this approach as the only way
to deliver respectable content, it’s garbage in/garbage out. 


SHOULD AI DEVELOPMENT BE MORE REGULATED?

AI’s development has generated a lot of legitimate concerns about everything
from deepfakes and voice cloning to advanced phishing/vishing/smishing, killer
robots, and even the possibility of an AI apocalypse. Eliezer Yudkowsky, one of
the most respected names in Artificial General Intelligence (AGI), recently
issued a call to “shut it all down,” saying a proposed six-month moratorium
wasn’t enough.

But you cannot stop the development of new technologies, a fact that has been
evident since the days of the alchemists. So, from a practical point of view,
what can be done to keep AI from growing out of control and to mitigate the
risk of an AI-driven extinction event? The answer lies in many of the same
controls employed in other fields that have a potential for weaponization:

 * Transparent research. Open-source AI development not only drives innovation
   and democratizes access, but it also has many safety benefits, from spotting
   security flaws and dangerous lines of development to creating defenses
   against potential abuse. Big Tech so far supports open-source efforts, but
   that could change if competition intensifies. There might be a need for
   legislative measures to retain open-source access.
 * Contain experimentation. All experiments with sufficiently advanced AI need
   to be sandboxed, with safety and security procedures strictly enforced. These
   aren’t foolproof measures but might make the difference between a local
   disturbance and a global catastrophe.
 * Kill switches. Like antidotes and vaccines, countermeasures against runaway
   or destructive AI variants need to be an integral part of the development
   process. Even ransomware creators build in a kill switch. 
 * Regulate how it’s used. AI is a technology that can be applied for the good
   of humanity or abused with disastrous consequences. Regulation of its
   applications is a task for world governments, and the urgency is much higher
   than the need to censor the next version of ChatGPT. The EU AI Act is a
   well-crafted, concise foundation aimed at preventing misuse without stifling
   innovation. The U.S. AI Bill of Rights and the recent Executive Order on AI
   are less specific and seem to focus more on political correctness than on the
   issues of proper model development, training, and containment. Those measures
   are just a start, however. 


CONCLUSION

AI is coming to cybersecurity whether CISOs want it or not, and it will bring
both substantial benefits and risks to the cybersecurity field, particularly
with the eventual arrival of post-quantum cryptography. At a minimum, CISOs
should invest the time to understand the benefits of AI-hyped tools and the
threats of AI-driven attacks. Whether they invest money in AI depends largely on
the tangible benefits of AI security products, the publicized consequences of AI
attacks and, to a certain degree, their personal experience with ChatGPT. 

The challenge CISOs face is how to implement AI effectively and responsibly.
