AI, Computer Science and Robotics Technology


Open access peer-reviewed article


ADVERSARIAL AI TESTCASES FOR MARITIME AUTONOMOUS SYSTEMS

Mathew J Walter, Aaron Barrett, David J Walker, Kimberly Tam

This article is part of the Special Issue: Applied AI in Cyber Security, led by Eduard Babulak, National Science Foundation (NSF), United States of America.


Article Type: Research Paper

Date of acceptance: March 2023

Date of publication: April 2023

DOI: 10.5772/acrt.15

Copyright: ©2023 The Author(s), Licensee IntechOpen, License: CC BY 4.0


Table of contents

 * Introduction
 * Background
 * Adversarial AI in maritime autonomous systems
 * AI security principles for MAS
 * Conclusion
 * Conflict of interest
 * Acknowledgments


ABSTRACT

Contemporary maritime operations such as shipping are a vital component of global trade and defence. The evolution towards maritime autonomous
systems, often providing significant benefits (e.g., cost, physical safety),
requires the utilisation of artificial intelligence (AI) to automate the
functions of a conventional crew. However, unsecured AI systems can be plagued
with vulnerabilities naturally inherent within complex AI models. The
adversarial AI threat, primarily only evaluated in a laboratory environment,
increases the likelihood of strategic adversarial exploitation and attacks on
mission-critical AI, including maritime autonomous systems. This work evaluates
AI threats to maritime autonomous systems in situ. The results show that
multiple attacks can be used against real-world maritime autonomous systems with
a range of lethality. However, the effects of AI attacks in a dynamic and complex environment differ from those proposed in lower-entropy laboratory environments.
We propose a set of adversarial test examples and demonstrate their use,
specifically in the marine environment. The results of this paper highlight
security risks and deliver a set of principles to mitigate threats to AI,
throughout the AI lifecycle, in an evolving threat landscape.


KEYWORDS

 * maritime cyber security

 * adversarial AI

 * maritime autonomous systems

1. INTRODUCTION

In recent years, artificial intelligence (AI) has been utilised to automate many operations and processes across academia and industry. One globally crucial
industry is maritime, which recognises the plethora of benefits automation could
bring over contemporary vessels; these include reduced crew requirements, ease
and optimisation of processes, increased crew safety, the possibility of
significant operational cost reduction, and emission reduction [1–7]. Therefore,
it is seemingly indisputable that greater automation, and hence the utilisation
of maritime autonomous systems (MAS), will play a significant role in the
maritime industry in future years. Furthermore, the development of these systems
has already been initiated; for example, the work of [8] developed a
reduced-crew autonomous ocean-travelling ship. Other work, such as the Mayflower
autonomous ship, intends to be fully automated [9]. The Yara Birkeland project,
based in Norway, has also seen successes with automated coastal hopping but with
open questions around the cost of insurance, cyber security, and
contingencies [7]. The Royal Navy is also making extensive use of uncrewed surface vessels (USVs).

Much of the advanced automation will be the product of AI, given its proven
success, particularly in optimisation, clustering, classification and
regression. However, whilst AI has great potential for significant benefits, the nature of subsymbolic AI makes the solution-generation mechanism difficult to interpret, yielding a black-box character, particularly for deep neural networks that rely on billions of parameters and exhibit high-dimensional phenomena. AI has therefore been documented as a security risk, with the term adversarial AI (AAI) coined for the misuse of AI. The fields of AAI and eXplainable AI (XAI) have shown the dangers and safety risks that poor development of mission-critical AI can exhibit, in some cases leading to the possibility of fatalities [10]. AI can be used to automate attacks on other technologies, and it is itself a technology that can be vulnerable to attack. AI can be attacked at multiple points, from the development process through to the deployment of the technology. More concerning still is the lack of existing literature that considers AAI in the risk assessments and security of MAS, even as examples of adversarial AI continue to appear [11, 12].

In this paper, the authors consider the threat of AI through a multi-domain literature analysis to parameterise AAI-specific tests designed to strengthen MAS development. First, we consider the present and future threats that AI in the maritime environment may face. Given that AI for land-based and air-based operations overlaps with, but is not identical to, maritime AI, the challenges brought on by a marine environment (e.g., water distortion and reflections) form their own unique subset. By examining each class of MAS AI for vulnerabilities to AAI during its lifecycle, the authors are able to theorise a set of test cases. These can then be used to develop safe, reliable, and trustworthy AI solutions for maritime operations. Finally, we propose best practices to secure MAS within maritime environments. Ultimately, this research aims to highlight the fast-approaching dangers of ubiquitous AI/automation in MAS and motivate the inclusion of AAI in MAS risk assessments to mitigate the dangers to all MAS stakeholders.

This paper offers the following novel contributions:

 1. A start-to-end lifecycle evaluation of AAI threats against maritime autonomous systems in situ;
 2. A comprehensive review and evaluation of AI threats to MAS considering the AAI and MAS literature;
 3. Case examples to support high-fidelity real-world tests over laboratory environments for AAI;
 4. Principles to better secure MAS against AAI and ways to enhance MAS AI security across its lifecycle.

The paper is structured as follows: Section 2 critically reviews existing
literature across multiple domains to understand the current state of the art.
We consider AI in autonomous maritime systems and then examine the threat of
adversarial AI in that context. Section 3 considers the types of AI used in MAS
and the existing threats to these systems. Section 4 presents an evaluation and
analysis of general adversarial principles for MAS. Finally, we conclude and outline further work in Section 5.

2. BACKGROUND

2.1. AI IN MARITIME AUTONOMOUS SYSTEMS

2.1.1. SENSORS AND INSTRUMENTS

Shipping is a crucial part of global trade, accounting for nearly 90% of
international trade [13]. Waterborne vessels are also critical for human
transport, naval defence, and scientific exploration and monitoring of the seas
and inland bodies of water. The successful automation of shipping and other
maritime operations and services could bring significant advantages over
contemporary vessels. Many of these advantages include reducing costs and
increasing safety. For example, having no crew aboard removes human-factor errors; protects the crew from dangerous working conditions, adverse weather, and captivity or attacks by pirates or criminals; allows more socially supportive conditions; and even reduces the transmission of some pathogens. Other advantages include cost savings from not having to employ crew, more storage capacity, cheaper development of vessels (crew facilities and living spaces are not required), and more economical and environmentally friendly vessels [14]. Some of these benefits, especially crew physical safety, can be obtained to a reduced degree with remote unmanned vessels. However, higher tiers of autonomy are needed to maximise these benefits. Furthermore, remote systems would be susceptible to many cyber security attacks, such as jamming and hacking of the remote communication link [15].

Maritime autonomy can be categorised into different levels, similar to the way
autonomous cars and advanced air mobility (AAM), i.e., drones and aircraft, are
defined. For example, the Maritime Safety Committee (MSC) of the International
Maritime Organisation (IMO) categorises autonomy into four levels. Level 1
pertains to vessels with autonomous components which support the vessel’s crew.
Level 2 vessels also have crew aboard to support operations, but a remote
control centre operates the vessel. Level 3 vessels are remotely controlled and
unmanned. Level 4 vessels are unmanned and fully autonomous. In this work, we
will consider only level 4 systems. Other organisations also define alternative autonomy-level schemes, e.g., [8].
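
To keep the levels above easy to reference in the rest of the discussion, they can be captured in a small data structure. The sketch below (Python) is illustrative only; the class and label names are our own shorthand for the MSC degrees of autonomy and are not part of any IMO specification.

    from enum import IntEnum

    class IMOAutonomyLevel(IntEnum):
        """Illustrative labels for the IMO MSC degrees of autonomy summarised above."""
        CREWED_WITH_AUTONOMOUS_SUPPORT = 1  # autonomous components support the crew aboard
        REMOTELY_OPERATED_WITH_CREW = 2     # remote control centre operates the vessel, crew aboard
        REMOTELY_OPERATED_UNMANNED = 3      # remotely controlled and unmanned
        FULLY_AUTONOMOUS = 4                # unmanned and fully autonomous

    def in_scope(level: IMOAutonomyLevel) -> bool:
        """Only level 4 (fully autonomous) systems are considered in this work."""
        return level is IMOAutonomyLevel.FULLY_AUTONOMOUS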

As the work of [16] suggests, autonomous systems consist of perception and
control elements. Perception elements can be considered the sensors or systems
that collect information to be used by the control elements that control the
vessel’s actions. Contemporary vessels usually have a number of sensors on board
to support crew decisions, therefore, leading to an existing framework for
autonomous systems (although some sensors may require adaptation in MAS). These
sensors and systems include: RADAR (radio detection and ranging) to find,
usually large, objects with radio waves. The velocity of the object can be
determined with doppler RADARs; object detection can be done with other sectors
of the electromagnetic spectrum, such as light detection and ranging (LiDAR),
which uses infrared light from lasers—these small wavelengths can detect smaller
objects and more accurately detect features but at shorter ranges. Echosounders
can be used in a similar way to RADAR and LiDAR; however, echosounders use sound
pulses to detect underwater objects, such as the depth of the water.
Echosounders can be forwards (or laterally) looking as well as vertical to
assist in collision avoidance. Multibeam echosounders can give a 3D point cloud
which can be geolocated with millimetric accuracy using RTK GNSS. Measuring echo
return backscatter can give useful data about detected objects that go beyond
purely the range and baring, giving details about the nature of the detected
surface. CCTV/IR/multispectral cameras can be used to detect close-range
objects, such as coastal landmarks or objects in the water, akin to LiDAR;
furthermore, multiple cameras can be used to triangulate and detect the range of
objects; objects can also be captured in colour at high resolution. An array of
microphones can be used to detect audio cues on a vessel but may be disrupted by
a lot of audio noise, such as the sound of waves, wind and other vessels.
Directional microphone arrays are now available that can indicate the range and
bearing of remote sounds. These will be essential in the future for autonomous
vessels to perceive the direction of sound signals. Automatic identification
system (AIS) uses very high frequency (VHF) radio to transmit and receive vessel
locations and vessel data. Global navigation satellite systems (GNSS) (such as
GPS or Galileo) can support dynamic positioning (DP) systems and location
services. Electronic chart display and information system (ECDIS) renders
charting information. Vessels can contain weather sensors (barometer,
temperature, wind speed etc.). Vessels often contain systems for broadband and
3G/4G/5G, as well as VHF for communication. Cargo supervision systems often host
an array of sensors, such as internal temperature, humidity, and smoke detectors. When considering shipping, sensors for monitoring cargo (e.g., food, gas, oil, passengers) are often specific to the maintenance needs of that cargo.
Vessels often also contain fault diagnosis and voyage data recorder (VDR)
systems that store sensor data for post-incident analyses.
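
As a small worked example of the Doppler RADAR principle mentioned above, the radial (closing) velocity of a target can be recovered from the measured frequency shift using the standard monostatic relation f_doppler = 2·v·f_carrier/c. The sketch below is a textbook illustration only; the carrier frequency and shift values are invented for the example.

    # Radial velocity of a target from the Doppler shift measured by a monostatic RADAR.
    # Standard relation: f_doppler = 2 * v_radial * f_carrier / c, rearranged for v_radial.
    C = 299_792_458.0  # speed of light in m/s

    def radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
        return doppler_shift_hz * C / (2.0 * carrier_hz)

    # Example: a 9.4 GHz marine RADAR measuring a 626 Hz shift implies a target
    # closing at roughly 10 m/s (about 19 knots).
    print(radial_velocity(626.0, 9.4e9))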

Vessels may also contain specialist sensors unique to the vessel type, which
determines its size (e.g., gross tonnage), area of operation (e.g., Arctic), and
cargo type (e.g., fertilizer). Vessels could also use AI in airborne, surface or
subsurface drones to extend sensor range. Studies also show the benefits of
multi-sensor perception systems [16], which increase the accuracy of the
available data by using multi-systems, e.g., a small object may not be detected
by RADAR but instead detected by the camera through cross-validation of system
data. Multiple sensors can also support an element of redundancy. Additionally, detecting objects in their natural environment, for example both on the surface of the water and subsurface, presents its own difficulties. The water itself can cause confusing distortions and also presents an arduous environment for the physical devices (e.g., salt corrosion).

2.1.2.

ARTIFICIAL INTELLIGENCE

In fully autonomous maritime systems, AI is used to support, supplement, or
replace crews in automating the operations of the vessel. The AI takes sensor
data as input and makes decisions to automate the vessel’s processes. Different
AI types are required for different systems because of the range of tasks
required by a fully autonomous marine system. The sensors previously discussed
can be used as input features to support AI systems to safely navigate the ocean
and maintain the functionality of the vessel. However, there exists an overlap
in technology, e.g., in order to avoid objects, one requires a degree of
situational awareness. We next consider AI for autonomous systems, as
categorised in [8], which consists of several AI technologies connected to a DP
system that controls the vessel. These technologies include:

 * Situational awareness (SA)—the SA component is required to determine the
   vessel’s real-time location and environment (for example, the detection and
   range of objects). SA modules may also use natural language processing (NLP)
   to interpret incoming communications. This AI system can use a number of
   methods and algorithms, such as convolutional neural networks (CNN), region
   proposal networks (RPN) and sensors to detect landmark objects and
   navigational cues such as coastal features or buoys [17] (a minimal detection
   sketch is given after this list). In addition, the AI system could be
   supported by other non-AI systems, such as GNSS, to cross-validate the
   vessel’s location.

 * Collision avoidance—uses SA information and prevents the vessel from
   colliding with objects. These systems use computer vision object recognition
   to detect objects (SA module output) and feed into the local autonomous route
   planning modules to change the vessel’s trajectory to avoid a collision. Some
   AI technologies used are CNNs [18] to locate objects and support vector
   machines (SVMs) which have previously been used to output a new trajectory to
   prevent collisions [19].

 * Global autonomous optimal planning modules—ensure the vessel’s movement along
   the optimal route; an optimal global route may depend on many objectives, such
   as the quickest, most fuel-efficient, most economical and safest route (e.g.,
   considering weather, global tensions and piracy). The common algorithms
   utilised are evolutionary algorithms (EAs), which can evolve optimal
   high-dimensional solutions, and particle swarm optimisation (PSO) and ant
   colony optimisation (ACO), which use the emergent properties of nature to find
   optimal solutions [20–22].

The vessel may also include AI to support specialist tasks, such as auto
berthing/mooring and engine condition maintenance, which assist with the general
functionality of the vessel. Other examples include Gaussian processes, neural
networks, Bayesian modelling and active learning, which can be used for anomaly
detection in autonomous vessels to detect deviations and unexpected events [17].

2.2.


ADVERSARIAL AI

The advancement and ever-increasing size of neural networks increase the
complexity of applications supported by AI. However, as the complexity of the
model increases, explainability and hence the interpretability of the model
decrease [23]. The lack of explainability for complex models, combined with
high-dimensional phenomena and poor security principles, can give rise to
adversarial AI. The work of [24] was one of the earliest to recognise that
neural networks yield properties that can be vulnerable to adversarial attacks,
and a 2023 survey paper identified 32 offensive AI capabilities [25].
Governments globally have begun to recognise the threat; notably, the 2021 U.S.
National Security Commission on AI stated, “the U.S. government is not prepared
to defend the United States in the coming artificial intelligence (AI) era”.
Many other countries are preparing for an Adversarial AI (AAI) wave by
developing frameworks which attempt to secure AI systems [26, 27]. Furthermore,
academic authors [11] have highlighted that “the number of adversarial attacks
will continue to increase in the future as the economic benefits trend”. As of
now, adversarial AI has been demonstrated in a number of applications to support
social engineering/spear phishing [28], biometric spoofing [29], computer vision
object recognition [30–32], malware development avoiding network
detection [33–36], NLP [37, 38], and attacks on cloud APIs [39] to name a few.

We recognise AI can be, and has been, used as an attacking tool, e.g., the
automation of conventional cyber attacks, side-channel analysis, creation of
deepfake media, OSINT collection and analysis. However, these more active AI
threats are outside the scope of this paper. Instead, we focus on the inherent
vulnerabilities within AI system processes (in particular, threats to maritime
autonomous systems) and how AAI tests can reveal those vulnerabilities to the
developer. The primary adversarial goals of AAI are to attack the
confidentiality, integrity and availability (CIA) triad of the ML processes of
AI systems.

 1. (1)
    
    Confidentiality—sensitive data can be used during the training phase of the
    model and has been shown to be extractable from the model [39, 40]. This is
    of particular concern for models trained on sensitive data (e.g., government
    data) or personal data (a privacy concern). Furthermore, data is one of the
    most valuable modern resources [41], and developing AI can be a long and
    expensive process; stealing the intellectual property (IP) of a model could
    bypass that process for large financial gain. As MAS is a new area of global
    growth, competition is high.

 2. (2)
    
    Integrity—the attacker typically aims to get the AI system to misclassify an
    input, either to a specific target class or to any false class, usually so
    that the system carries out an intended adversarial action such as allowing
    malicious traffic to pass through a network AI-based IDS [42]. This is a
    concern for physical object evasion in mission-critical AI, such as naval
    mine detection, where an attack could damage the integrity of the AI.

 3. (3)
    
    Availability—this adversarial goal is similar to denial-of-service attacks,
    where the attacker usually intends to cause a high number of
    misclassifications to render the AI inoperable, or to cause a serious
    misclassification such as changing the interpretation of a perturbed stop
    sign; this prevents the use of, and access to, the AI [43].

These goals can overlap, with the risk and threat commensurate with the
application of the system, e.g., attacks on mission-critical AI pose the
greatest threat. Before we consider the different types of existing attacks on
AI, we introduce some key terms, namely closed-box and open-box algorithms. We
note particular confusion with the term black-box algorithm. In the general AI
literature, black-box often refers to the poor interpretability of AI models;
that is, even given the model architecture, hyperparameters and raw weight
values, interpreting the combination of billions of weights makes the inner
workings of the algorithm (how a prediction is made for an instance passed
through the model) difficult to understand. However, in the adversarial AI
literature the term black-box takes a slightly different meaning: the attackers
do not know any of the model’s architecture, hyperparameters or weights but
merely have access to a model API input and the final result. In this and future
sections, we instead use the term closed-box AI, which refers to the attacker
only having access to the model inputs and outputs. In contrast, open-box refers
to having access to the inputs, outputs and also all of the model’s parameters
and architecture. AAI survey papers use a range of nomenclature to classify
attacks.

We now consider some of the most prominent adversarial AI methods disclosed in
the AAI literature. We note that a large proportion of these methods are
relevant to computer vision, which is considered one of the primary AI concerns
in the near future [25]. Adversarial attacks are not limited to the
post-deployment stage and can also target earlier development stages of the ML
pipeline. We categorise this literature into attacks performed in the
pre-deployment and post-deployment stages.

In the pre-deployment stage, an attacker is concerned with altering the
development of the AI model through what are known as poisoning attacks. These
attacks can have a lasting effect on the model for the rest of the model’s
lifespan. Key pre-deployment attacks include manipulating the training data by
poisoning the training dataset. The motives for this attack include targeted
misclassification for evasion, or indiscriminate misclassification to lower the
overall classification rate. This can be done by changing the distribution of
the training dataset, by modifying feature values or injecting new training
samples [44], as well as by changing the training labels [45].
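
As a concrete illustration of training-set poisoning, the sketch below flips a
small fraction of labels and nudges the corresponding feature values; the
arrays and class meanings are hypothetical stand-ins rather than any dataset
used in this work.

    # Illustrative label-flipping poisoning: mislabel a small fraction of 'hostile'
    # samples as 'benign' so a classifier trained on the poisoned set under-detects
    # that class. All data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(1000, 32))            # stand-in feature matrix
    y_train = rng.integers(0, 2, size=1000)          # 0 = benign, 1 = hostile

    poison_rate = 0.05
    hostile_idx = np.flatnonzero(y_train == 1)
    flip_idx = rng.choice(hostile_idx, size=int(poison_rate * len(hostile_idx)),
                          replace=False)

    y_poisoned = y_train.copy()
    y_poisoned[flip_idx] = 0                         # flip selected labels to benign

    # Feature-poisoning variant: also nudge the same samples' feature values.
    X_poisoned = X_train.copy()
    X_poisoned[flip_idx] += rng.normal(scale=0.1, size=(len(flip_idx), 32))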

The post-deployment methods are more concerned with evasion and inversion
attacks. Model extraction and model inversion attacks aim to acquire information
about the model. Given the time-consuming and expensive process of collecting
data, preprocessing and training a model, the extracted information, which could
include sensitive data points or details of the model’s architecture, can be
used to steal proprietary information or sensitive data [46, 47]. Furthermore,
if one is able to recreate an accurate surrogate model, then one could use it to
create adversarial examples in an offline environment, where the attacker is
stealthier and the actions are not logged.

The most well-documented AAI literature concerns post-deployment evasion
methods, so we discuss these in detail. One of the earliest methods was the work
of [24], which proposed perturbing samples to obtain a misclassification by the
ML model during the deployment stage. The change to the sample was a
minimisation optimisation problem which intended to make a minimal change (so
that it was undetectable by the human eye) but enough to cause a
misclassification. In order to optimise this problem, one needs to know the
direction of sensitivity (e.g., positive/negative perturbations, and which
features to perturb) and the magnitude of the perturbation (usually as small as
possible). The work solves this problem with the box-constrained L-BFGS
algorithm (limited-memory Broyden–Fletcher–Goldfarb–Shanno, named after the
authors of the underlying optimisation method). The work also prompted debate
over the reasons for adversarial examples; later work [48] argued that
adversarial samples are the result of linear behaviour in high-dimensional space
and introduced the fast gradient sign method (FGSM), which perturbs the input in
the direction of the sign of the loss gradient,

X_adv = X + ε · sign(∇_X f(X, Y))    (1)

where f is the model’s cost function, and Y is the true label of the instance X.
The authors note that a small perturbation applied to many features of an
instance can have a far greater effect than a larger perturbation of a single
feature, such as the one-pixel attack [49], which uses differential evolution to
evolve solutions that change a single pixel in the image. Although FGSM is fast
to compute, the samples generated were often non-functional. Later, three
variants of FGSM were developed, namely the one-step target class method, the
Basic Iterative Method (BIM) and the Iterative Least-Likely Class Method
(ILCM) [50]. The one-step target class method replaces Y in equation (1) with
the desired class label and descends, rather than ascends, the gradient, so the
algorithm perturbs toward a specific class rather than just away from the true
class. BIM applies equation (1) iteratively over small step sizes, which can
produce numerous adversarial examples. Finally, ILCM is also an iterative
version of equation (1) but perturbs toward the class with the lowest
recognition probability. DeepFool [51] is another method which iteratively
perturbs an image until it crosses the class decision boundary.
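
A minimal sketch of the FGSM step in equation (1) is given below, assuming a
PyTorch classifier; the model, inputs and epsilon value are placeholders rather
than the exact configuration used later in this paper.

    # One-step FGSM sketch per equation (1): perturb the input in the direction of
    # the sign of the loss gradient for the true label. Model/tensors are placeholders.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, epsilon=0.1):
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step in the direction that increases the loss for the true label y.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0, 1).detach()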

Instead of gradient descent methods, the Jacobian-based Saliency Map Attack
(JSMA) [52] considers the Jacobian matrix of the learned function (the forward
derivative), i.e., how does a change to a feature (pixel) affect the change of a
class probability? The saliency map is commonly used in the XAI literature to
detect which pixels make the most significant contributions to the model’s
prediction and hence which input features should be perturbed for a desired
effect on the model output; this map is usually utilised to craft adversarial
samples or can be superimposed over the original image as a heatmap. JSMA allows
for source-to-target class adversarial sample creation. The Jacobian of the
model allows one to determine the sensitivity of the model to specific inputs,
i.e., greater sensitivity means perturbations have larger effects on the model’s
prediction of a class. To use JSMA, one must calculate the forward derivative
matrix, also denoted the Jacobian J_F of the learned function F. This is defined
in the original work as:

J_F(X) = ∂F(X)/∂X = [ ∂F_j(X)/∂x_i ]_{i,j}    (2)

for input features x_i and model outputs F_j(X). A saliency map S(X, t)[i] for a
target class t can then be computed with the formula:

S(X, t)[i] = 0, if ∂F_t(X)/∂x_i < 0 or Σ_{j≠t} ∂F_j(X)/∂x_i > 0;
S(X, t)[i] = (∂F_t(X)/∂x_i) · |Σ_{j≠t} ∂F_j(X)/∂x_i|, otherwise.    (3)

This restricts the saliency score to features with a positive effect on the
target class (and a negative effect on the other classes) when deciding whether
input feature i should be perturbed for an adversarial effect on the model. The
work showed that not all areas of the search space are equally difficult for
crafting adversarial samples and that certain source-to-target class pairs are
easier to craft than others. For this attack, only the model’s inputs and
outputs (closed-box) are required to calculate the Jacobian and create a
saliency map. Other attacks include using EAs and PSO to optimise the
problem [53], and generative adversarial networks (GANs) such as AdvGAN [54],
which are used to generate adversarial examples. Furthermore, there exists a
range of open-source tools to enact these perturbation attacks.
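
The following sketch computes a JSMA-style saliency map in the spirit of
equations (2) and (3) using automatic differentiation; it assumes a
differentiable classifier over a flat feature vector and is illustrative rather
than the reference implementation of [52].

    # Saliency map sketch per equations (2)-(3): score each input feature by its
    # positive influence on the target class and negative influence on the others.
    import torch

    def saliency_map(model, x, target):
        x = x.clone().detach().requires_grad_(True)
        logits = model(x.unsqueeze(0)).squeeze(0)          # one flat input, C class scores
        jac = torch.stack([torch.autograd.grad(logits[c], x, retain_graph=True)[0]
                           for c in range(logits.shape[0])])  # equation (2): dF_c / dx_i
        d_target = jac[target]
        d_others = jac.sum(dim=0) - d_target
        # Equation (3): zero where the target gradient is negative or the others' sum is positive.
        s = torch.where((d_target < 0) | (d_others > 0),
                        torch.zeros_like(d_target),
                        d_target * d_others.abs())
        return s                                            # higher score: perturb this feature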

In this work, we also consider patch-based attacks [55], which are a prevalent
evasion attack within the literature. These attacks involve generating a highly
salient digital or physical patch that can be applied to the image or physical
environment to evade object detection models [56–60] and classification
models [55]; the patch works by being more salient than other objects in the
image and refocusing the attention of the model onto the patch [55].
Cyber-physical patch attacks are often formulated as an optimisation problem
akin to perturbation attacks. The problem can be formulated as [57],

P̂ = argmax_P E_{x, l, t} [ L( f(A(P, x, l, t)), y ) ]    (4)

where we aim to generate a patch P, A(P, x, l, t) is the model input produced by
a transformation function A which applies the patch to the original image x at
location l with rotation/scaling transformation t, and we aim to maximise the
loss L of classifying the input A(P, x, l, t) as the true classification
label y. The resulting optimisation produces an adversarial patch which is
superimposed onto the original image and inputted into the model. Common
patch-based attacks include DPatch, a closed-box adversarial patch attack for
object detection models, and the dynamic (video) patches of [61], to name a few.
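
A simplified sketch of the patch optimisation in equation (4) is shown below;
for brevity the patch is pasted at a fixed corner and the random
location/rotation sampling of [55, 57] is omitted, with the model and image
batch as placeholders.

    # Simplified patch-attack sketch: optimise a square patch, pasted at a fixed
    # location, to maximise the classification loss of the true labels.
    import torch
    import torch.nn.functional as F

    def train_patch(model, images, labels, patch_size=50, steps=200, lr=0.05):
        _, _, height, width = images.shape
        patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
        mask = torch.zeros(1, 3, height, width)
        mask[..., :patch_size, :patch_size] = 1.0           # fixed top-left placement
        optimiser = torch.optim.Adam([patch], lr=lr)
        for _ in range(steps):
            padded = F.pad(patch.clamp(0, 1),
                           (0, width - patch_size, 0, height - patch_size))
            x = images * (1 - mask) + padded * mask         # paste patch onto each image
            loss = -F.cross_entropy(model(x), labels)       # ascend the true-label loss
            optimiser.zero_grad()
            loss.backward()
            optimiser.step()
        return patch.detach().clamp(0, 1)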
3.


ADVERSARIAL AI IN MARITIME AUTONOMOUS SYSTEMS

We now consider evaluating the threat to ML systems utilised to operate
autonomous systems in the maritime environment. Most of the adversarial attacks
have been evaluated only in a limited laboratory environment; we aim to evaluate
these attacks in the real world where the effects are unknown, yet have the
greatest potential for impact. This is highlighted as a primary challenge by
adversarial AI authors, including the literature survey of [62]: "[the] need to
verify the attack effect in real physical scenarios" and "the current defence
technology research lacks the practice in the real world". We demonstrate these
attacks in the MAS environment and provide the results and analysis to highlight
the effects and practicability of these attacks in the real world. Where
appropriate, we visually show some of these attacks in this work. While these
threats consider adversarial attacks on AI, conventional cyber security attacks
are just as pertinent, and some AAI attacks require one to employ AAI and
conventional cyber security attacks in unison. Furthermore, conventional
security weaknesses, such as unpatched software and the jamming/spoofing of
sensors, can affect both conventional cyber security and AAI-based security. The
focus of this work is only the nascent domain of AAI in MAS, for which there is
very limited literature. Notably, in the literature review to date, we found a
single publication [63] considering a few theoretical AAI attack possibilities
against MAS.

In this publication, we use Microsoft’s failure modes in machine learning
framework [64] to comprehensively evaluate the types of threats to MAS and
provide context for the maritime environment. The framework categorises AAI
attacks into 11 possible classifications, which we list below. We provide
several proofs-of-concept alongside this list. These are not intended to be an
exhaustive set but to demonstrate the usefulness and feasibility of AAI against
MAS.

Class 1:

Model inversion—Even if one is able to secure and protect the knowledge the ML
model uses to make an output, such as the features used during prediction, one
may be able to query the model to determine those prerequisite features in a
model inversion attack. Whilst this does not threaten the model’s functionality,
it could be used as a form of reconnaissance to support a future attack. The
attack is therefore an abuse of the confidentiality of the system.

Class 2:

Perturbation attack—In a perturbation attack, the attacker crafts a query which,
when submitted to the ML algorithm, produces the attacker’s desired response.
For example, an attacker could evolve an adversarial data example with an EA,
possibly in some underrepresented area/tail of the probability occurrence
distribution or a near-boundary instance, which causes the collision avoidance
system to output a false negative in a busy port and not make an appropriate
collision avoidance manoeuvre. This threat has high consequences; the attack
could be relatively simple to achieve but requires access to modify the input to
the ML model (or to create adversarial inputs and block the legitimate traffic),
and conventional cyber attacks could be leveraged to support this. Examples of
this attack can be seen in figure 1 and figure 2; it should be noted that the
effectiveness of the adversarial sample appears correlated with the quality of
the original image. Therefore, systems using low-resolution cameras may be more
susceptible to attack, as well as suffering reduced model classification
accuracy.
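
For reference, a hedged sketch of the projected gradient descent (PGD) attack
used for figure 2 follows, written for a classifier for brevity; the bound of
32/255 mirrors the pixel-wise limit quoted in the figure caption (assuming
inputs scaled to [0, 1]), and the model and tensors are placeholders.

    # PGD sketch: repeat small gradient-sign steps and project back into an
    # L-infinity ball around the original image.
    import torch
    import torch.nn.functional as F

    def pgd(model, x, y, eps=32 / 255, alpha=2 / 255, steps=20):
        x_adv = x.clone().detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)        # project into the eps ball
            x_adv = x_adv.clamp(0, 1)
        return x_adv.detach()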



FIGURE 1.

An adversarial perturbation attack sample generated for a pre-trained
MobileNetV2 image classification model. The input sample, a submarine image, is
predicted as a submarine by the model with a confidence value of 80.15%. The
adversarial sample is predicted as a llama by the model with a confidence value
of 10.74%. The FGSM attack was used with 𝜖 = 0.1.



FIGURE 2.

An adversarial perturbation attack sample generated using real-world data on a
pre-trained FasterRCNN object detection model. The input sample, a warship, is
predicted as a vessel by the model with a confidence value of 98.87%. The
adversarial sample is predicted as multiple incorrect objects by the model, with
the greatest prediction being person, with a confidence value of 99.33%. The
projected gradient descent attack was used with the maximum allowable pixel-wise
difference between the original image and the adversarial image set at 32. Both
the FasterRCNN and YOLOv3 models used in this work were trained on the COCO
dataset, which labels all ships, marine vessels and boats under the
classification of ‘boat’.

Class 3:

Membership inference—The attacker may be able to infer whether a data instance
is a constituent of the training data used to train a model, potentially a
breach of privacy.

Class 4:

Model stealing—Through querying the model, the attacker may be able to determine
information about the model parameters and architecture. With this information,
the attacker could recreate the model and essentially steal the model/IP. This
could save the attacker the time and money of developing the model themselves;
it could also be used to turn a closed-box model into an open-box model for use
with other attack methods. An adversary who steals a MAS model could recreate
the model and perform offline attacks (non-logged events) for greater stealth,
and create more efficient and accurate attacks before applying them to the real
online model.

Class 5:

Adversarial example in the physical domain—An adversarial example in the
physical domain is akin to a perturbation attack. It considers modifying
physical properties; for example, an attacker could spoof certain sensor inputs
to confuse the MAS vessel and cause a change in the vessel’s trajectory or one
could paint a signature on a hostile ship that the searching MAS CNN recognises
as a benign object. Example patch attacks can be seen in figures 3 and 4.



FIGURE 3.

An adversarial patch attack sample generated to attack the common pre-trained
ResNet50 image classification model. The input sample, an image of an
anti-shipping missile, is predicted as a missile by the model with a confidence
value of 61.12%. An adversarial patch (b) is provided physically (e.g., as a
sticker) or digitally to the input image to change the classification prediction
to a spotlight, with a confidence value of 20.74%. The patch attack method
of [55] was used. The patch size and shape can be optimised to increase the
misclassification confidence (although this is out of scope for this research
paper).



FIGURE 4.

A real-world adversarial patch attack sample [56] generated to attack the common
pre-trained YOLOv3 object detection model. The input sample, an image of two
vessels, is predicted as vessels by the model with confidence values of 99% and
92%. An adversarial patch (b) is provided digitally or physically (e.g., as a
sticker) to change the detection prediction to a zebra. Notice that the patch
can interfere with the prediction and hide nearby objects, such as the second
‘boat’, i.e. vessel, to its right in the photo. The patch does not need to be
significantly large but can be closer to the camera than the evading objects to
produce the desired effect.

Class 6:

Malicious ML provider recovering training data—Akin to a membership inference
attack, the attacker may be able to infer the training data used to train a
model. The difference is that the attacker can use queries to derive the
training data itself, which could be a breach of privacy. The data could contain
sensitive information, breach confidentiality, and support model/IP theft.

Class 7:

Attacking the ML supply chain—In this attack, the attacker could interfere with
elements of the ML lifecycle. For example, capturing training/testing data and
retraining new models can be resource-heavy (time and cost); therefore,
engineers optimise time by reusing models (transfer learning) and existing
datasets. This creates a vector for attackers to manipulate data and models. For
example, a model intended to be shared and reused for developing navigational AI
could have neurons injected, which cause the vessel to change course given
specific spoofed input signals.

Class 8:

Backdoor ML—One could create a backdoor utilising the innate poor
interpretability of an extensive neural network. For example, one could inject
specific neurons or alter existing neuron weights to minimise the noise in the
object detection CNN model but leave a backdoor so that, given a specific input
(a hostile vessel), the model predicts a desired output (a misclassification).
This could damage both the integrity and confidentiality of the ML model.

Class 9:

Exploit software dependencies—This considers the conventional attack surface of
software more generally. It could involve an attacker corrupting ML libraries or
exploiting buffer overflows in the development software (e.g., a labelling
application).

Class 10:

Reprogramming the ML system—In this attack, the attacker takes the existing ML
model and uses it to perform a nefarious task.

Class 11:

Poisoning attack—Poisoning attacks involve manipulating the training data of the
ML model. One could inject new samples or modify the feature values and/or
labels of the training data. This could be executed to reduce the integrity and
availability of the ML model. For example, changing the distribution of the
training data or injecting chaff data creates a high misclassification rate and
hence reduces the integrity of the system; this could amount to a
denial-of-service type attack. A MAS-related example could be that, for a MAS
search vessel, the images or acoustic signals of a certain hostile ship are
incorrectly classified. This attack requires access to the training phase of ML
development and so is more difficult to achieve than other attacks. It is also
more likely that an attack causing large misclassification across many classes
would be noticed during the testing phase of the model’s development. An example
of AIS poisoning can be seen in Table 1.

Time stamp        MMSI       Latitude  Longitude  Speed  IMO          Name          Destination
05/01/2021 01:34  209504011  27.09833  −79.88783  15.1   57929517411  CONTSHIP ICE  USMIA
05/01/2021 01:50  209504011  27.05333  −79.88583  15.1   57929517411  CONTSHIP ICE  USMIA
05/01/2021 02:10  209504011  26.99783  −79.88383  15.1   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:04  209504011  26.33928  −79.92229  14.7   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:10  209504011  26.32417  −79.92317  14.6   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:15  209504011  26.31133  −79.92383  14.8   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:20  209504011  26.29667  −79.92483  14.6   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:25  209504011  26.28433  −79.92533  14.7   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:30  209504011  26.269    −79.926    14.8   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:35  209504011  26.256    −79.92667  14.8   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:40  209504011  26.24117  −79.92717  14.8   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:45  209504011  26.227    −79.92767  14.6   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:50  209504011  26.21433  −79.92817  14.5   57929517411  CONTSHIP ICE  USMIA
05/01/2021 06:55  209504011  26.19967  −79.92883  14.7   57929517411  CONTSHIP ICE  USMIA
05/01/2021 07:00  209504011  26.09457  −79.99955  14.6   57929517411  CONTSHIP ICE  USMIA


TABLE 1

A sample of poisoned AIS data. The Maritime Mobile Service Identity (MMSI)
number was altered (the last two digits were modified), and the vessel velocity
was modified within the velocity standard deviation. The GPS coordinates can be
replaced with other vessels’ coordinates. Poisoned data can be used to poison
the model during training or to spoof an existing situational awareness AI.
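
A small sketch of the poisoning applied in Table 1 is given below, assuming a
hypothetical pandas dataframe with illustrative column names and values; it
alters the last two digits of the MMSI and jitters the reported speed within the
track's own standard deviation.

    # Illustrative AIS poisoning: the dataframe and values are synthetic stand-ins.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    ais = pd.DataFrame({
        "mmsi": [209504043] * 3,
        "speed": [15.3, 15.2, 15.4],
        "lat": [27.10, 27.05, 27.00],
        "lon": [-79.89, -79.89, -79.88],
    })

    poisoned = ais.copy()
    # Replace the last two digits of the MMSI with a chosen value (e.g., 11).
    poisoned["mmsi"] = (poisoned["mmsi"] // 100) * 100 + 11
    # Perturb the speed within the observed standard deviation so it still looks plausible.
    poisoned["speed"] = poisoned["speed"] + rng.normal(0, ais["speed"].std(), len(poisoned))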

3.1.


EXPERIMENTAL SETUP AND FINDINGS

In order to test the proof-of-concept adversarial perturbation and patch
attacks, we examined and collected the relevant data from Plymouth Sound (UK
territorial waters). The vessel used four Omega 1080p cameras, which collect
video media that can be fed into an object detection computer vision model
either on the vessel or at the remote centre. The camera array was mounted on a
manned vessel, but in such a way that it had the same view of its immediate
surroundings as a USV would. Refer to the figure captions for the specific
models and parameters used to generate the relevant attacks.

The primary findings of the study showed that the lab-developed attack methods
worked well in a controlled environment. However, when performing these same
attacks in a complex and dynamic environment like the sea, the effectiveness of
the attacks varied far more. Just as the most effective type of AI depends on
the environment and application of the model, so does the most effective type of
attack. The quality of the onboard cameras made object detection, and hence
evasion, more difficult. An object’s range from the camera also influenced the
model’s effectiveness and the success of evasion. At sea, water distortion,
water on the lens and vibrations of the vessel added to this effect. Lighting
was another important variable, where the position of the natural light could
cause difficulties. One can observe in figure 2a and figure 4, taken within an
hour of each other, the different effects of light on both the model accuracy
and the attack accuracy. We also consider the practical application of these
attacks; for example, perturbation attacks require precise distortion of the
input for misclassification, and hence access to the model input, which could be
challenging in a marine environment. Furthermore, for many attacks the
generation time of the perturbation map would introduce significant delays
relative to the input image, making it an unlikely vector of attack until more
sophisticated and faster methods are developed. This additional complication
means one cannot be certain of the effects and limitations of AAI without
evaluating the AI in its natural environment and application.

Other attacks, such as the patch attack, seemed far more likely to be used in a
real-world attack. For example, the patch could be physically placed on or near
an object, and surrounding objects (even objects not covered by the patch) would
appear hidden to the model. This is potentially a way for attackers to evade and
hide from object detection models. The patches also have a degree of
transferability between models. However, from experimentation, the size and
placement (the camera-angle-distance relationship) alter the effect of the patch
attack. The strength of the patch detection and its distance from other objects
also affect this hiding/evasion property for nearby objects. A patch attack on
an image classifier is easier to achieve than one on an object detection model,
but it is less practical, as MAS are more likely to use object detection (for
detecting multiple objects in frame) than image classification. For many of
these reasons, we strongly advocate the testing of these methods, and future
novel methods, in a more dynamic environment to gauge the real-world impacts of
such attacks.

3.2.


AI LIFECYCLE

The ML development process has a whole lifecycle, denoted MLOps (analogous to
DevOps, a shortening of development operations), commonly understood as defining
the problem, data acquisition, data preprocessing, model training, model testing
and post-deployment operations. Different attacks can occur at different stages
in the development of the model, so each stage, and each transfer between
stages, is vulnerable. One could corrupt the environment during the early stages
of data collection, manipulate the data preprocessing functions, exploit the
hardware (particularly GPUs and CPUs) during training, obfuscate backdoors in
transfer learning models, and craft adversarial samples during deployment. In
figure 5, we illustrate the ML pipeline and the possible known types of attacks
discussed previously.



FIGURE 5.

The machine learning lifecycle and adversarial AI attack types.

The attacks are shown at the various common stages of ML development; however,
it is possible for other attacks to be performed in the interim stages of the ML
cycle and for vulnerabilities to exist during the transitions between stages.
For example, a data poisoning attack could occur when the data is recalled by
the training software immediately before model training. We can also see from
the diagram that the majority of attacks are focused on the deployment stage of
the model’s lifecycle.

4.


AI SECURITY PRINCIPLES FOR MAS

This section draws on the AAI attacks on maritime autonomous systems considered
in Section 3 to create principles for securing such systems. Based on the eleven
attack categories, we propose seven secure MAS principles, each with the
objective of mitigating the respective AAI attack threat.

There is no one-size-fits-all method of adversarial defence, and risk
assessments should be carried out at the beginning of the ML development process
to determine the types of risks, threats, data and the use case of the AI
system. It should also be noted that whilst following the suggested principles
for maritime autonomous systems provides a degree of security, it will not
completely secure the systems against all possible attacks; one reason for this
is that a model would need to produce a safe output mapping for all possible
inputs, which is an NP-hard problem. Furthermore, the adversarial AI threat is a
fast-evolving landscape, and it is likely that novel threats will emerge over
the coming years. These AAI principles have been generated by the authors for
MAS AI based on the findings of this work and aim to reduce the real threat
surface this industry faces. The principles for secure AI in maritime autonomous
systems are as follows:

 1. (1)
    
    Enforce strong conventional cyber security principles—In addition to strong
    adversarial AI defences, conventional security methods can complement AI
    security, which is why the first principle is to establish strong
    conventional cyber security. At this early stage in AI development, attacks
    utilising poor conventional security practices (e.g., unpatched ML libraries)
    are the most likely vectors of attack. Good countermeasures include blocking
    bad internet protocol addresses, using CAPTCHA before inputs can be made, and
    throttling/limiting queries to the model. Log inputs and events. Ensure ML
    libraries and systems are patched and up to date. Limit user access to the
    model and implement least-privilege practices. Secure
    acquisition/storage/transport/decommissioning of models and data can prevent
    transfer learning, data and model poisoning-based attacks. These strong
    conventional security measures can protect against the reprogramming-ML, ML
    supply chain and software dependency threats.

 2. (2)
    
    Develop a risk assessment/security assessment before starting ML
    projects—Whilst this is a property of enforcing strong conventional cyber
    security, it is too important not to have its own principle. Consider the
    application of the ML, the types of ML and their associated vulnerabilities,
    and the development process when producing a risk assessment. Consider who
    might attack the application, why, and how they might benefit.
    Mission-critical and security-sensitive AI would likely require a more
    secure approach to AI development. Furthermore, some applications may
    further increase risk by utilising processes such as continual learning,
    which provide additional attack vectors for attackers [27]. A real-world
    example of such an attack was the poisoning of Microsoft’s Twitter-trained
    Tay bot within its first few hours of deployment [65]. Moreover, whilst
    convenient, the reuse of models (transfer learning/AI repurposing) and data
    increases the opportunities for exploitation.

 3. (3)
    
    Maximise the model’s robustness—Maximising model robustness reduces the
    attack space available for perturbations and adversarial examples.
    Exploiting poor model robustness is one of the simplest attacks to
    implement, given its scalability (many models are not robust across the
    whole adversarial space). It is also possible to use the attack to cause a
    failure (sometimes not an obvious one) of mission-critical systems.
    Maximising the model’s robustness should also provide the additional benefit
    of protecting against accidental adversarial inputs/errors. Furthermore, the
    work of [27] also considers keeping model architecture and capacity
    proportionate to the training data to improve robustness and reduce
    unnecessary feature space whilst still covering the distribution of the
    training data.

 4. (4)
    
    Maximise explainability and insight for trusted developers and minimise it
    for untrusted users—Explainability should play a significant role in
    supporting the development of adversarial AI defences in the coming years.
    Greater explainability provides many benefits; in the context of security, a
    better understanding of the ML system’s decision processes can support
    locating poisoned models and understanding the system’s limitations,
    transferability, robustness and trustworthiness, enhancing the security of
    the system. However, this knowledge could also be used to find and exploit
    weaknesses, such as locating adversarial space. Therefore one should limit
    the explainability output of the model to untrusted users, as well as the
    technical details of the model, e.g., parameter values and model
    architecture.

 5. (5)
    
    Regulate the input and output of the model—This principle ties in with the
    risk of revealing too much information about the model, which could be used
    for nefarious activities against it; e.g., one could avoid revealing exact
    detection probabilities for a classification model to prevent some
    gradient-based attacks. Regulating the input of the model can prevent
    adversarial queries from successfully triggering backdoors or exploiting the
    model [27].

 6. (6)
    
    Recognise the exploitation of the model and understand the risks of
    exploitation—Having indicators of compromise for the model will not stop
    adversarial attacks from happening but could allow one to identify and
    isolate threats. Understanding the effects of a compromised system will
    allow one to understand the risks and develop effective tailored security
    approaches in depth.

 7. (7)
    
    Sensor redundancy/harmonisation and data correlation—Fusing multiple sensor
    inputs can bring assurance to situational awareness modules. For example,
    relying exclusively on a single sensor to deduce the presence or absence of
    objects is likely to increase the ease of attack. However, sensor fusion of
    the CV, LiDAR, forward-looking sonar, AIS and RADAR feeds, all feeding into
    a navigational AI system, would increase the overall robustness of the
    system. Mounting a successful attack that fools multiple sensors is far more
    challenging than fooling a single sensor, and anomalous behaviour becomes
    more apparent if not all sensors are fooled: a lower confidence weighting
    can be given to data which significantly differs from, or appears
    adversarial relative to, other sensors’ results, as sketched below.
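
The sketch below illustrates principle 7 with a toy consensus-weighted fusion of
per-sensor confidences; the sensor names, values and weighting scheme are
illustrative assumptions, not a prescribed fusion algorithm.

    # Toy sensor fusion: down-weight any sensor whose report disagrees with the
    # consensus so a single spoofed or adversarial feed cannot dominate.
    import numpy as np

    # Per-sensor confidence that an object is present at roughly the same bearing.
    reports = {"camera": 0.05, "radar": 0.93, "lidar": 0.88, "ais": 0.90}

    values = np.array(list(reports.values()))
    consensus = np.median(values)
    # Weight each sensor by how closely it agrees with the consensus.
    weights = 1.0 / (1e-3 + np.abs(values - consensus))
    fused = float(np.sum(weights * values) / np.sum(weights))

    outliers = [s for s, v in reports.items() if abs(v - consensus) > 0.5]
    print(f"fused confidence {fused:.2f}; sensors flagged for review: {outliers}")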

4.1.


COUNTERMEASURES

This section details possible countermeasures which could be used to implement
the seven principles proposed above to protect MAS AI against AAI; these are
summarised in Table 2. We first consider adversarial training. Adversarial
training requires synthesising adversarial samples from a model and using those
adversarial samples as training data, giving the model an element of robustness
against adversarial attacks [24, 48]. The samples can be iteratively generated
by retraining the model and regenerating adversarial training data [66]. By
creating new samples, we increase the distribution of the dataset, improving the
robustness and accuracy of the model. The first instance of adversarial training
was [24], which created adversarial samples and injected them into the training
data; FGSM [48] has since been widely used for the same purpose. Many variations
and improvements of adversarial training exist, such as using GANs [67–69]. DNN
verification tools can be used to locate adversarial samples [70], though it is
worth noting that searching the entire sample space is an NP-hard problem. The
three-step null label method blocks the transferability of attacks between
models: a new null label is added to the one-hot encoding, and the network is
trained with some adversarial samples labelled as null; if an input is then
classified strongly as null, this indicates an adversarial input [71], reducing
adversarial attacks transferring between models.
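
A minimal sketch of an FGSM-based adversarial training step is given below; the
model, optimiser and epsilon are placeholders, and the single-step attack is
only one of the adversarial training variants discussed above.

    # Adversarial training sketch: craft adversarial versions of each batch on the
    # fly and train on the clean and adversarial batches together.
    import torch
    import torch.nn.functional as F

    def adversarial_training_step(model, optimiser, x, y, epsilon=0.03):
        # Craft adversarial samples for this batch with a single FGSM step.
        x_pert = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_pert), y).backward()
        x_adv = (x_pert + epsilon * x_pert.grad.sign()).clamp(0, 1).detach()

        # Train on clean and adversarial data together.
        optimiser.zero_grad()
        loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimiser.step()
        return loss.item()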

AAI Attack: Defences

Perturbation Attack: Adversarial training, Regularisation, Ensemble methods,
Input validation and manipulation/preprocessing, Gradient masking, Model
distillation, Adversarial sample detection, Explainability

Poisoning Attack: Regularisation, Ensemble methods, Input validation and
manipulation/preprocessing, Explainability

Model Inversion: Input validation and manipulation/preprocessing, Adversarial
sample detection, Explainability, Preventing information loss

Membership Inference: Input validation and manipulation/preprocessing,
Adversarial sample detection, Explainability, Preventing information loss

Model Stealing: Preventing information loss

Reprogramming the ML: Regularisation, Ensemble methods, Gradient masking, Model
distillation, Explainability, Preventing information loss

Adversarial Example in Physical Domain: Adversarial training, Regularisation,
Ensemble methods, Input validation and manipulation/preprocessing, Gradient
masking, Model distillation, Adversarial sample detection, Explainability

ML Provider Recovering Training Data: Regularisation, Ensemble methods, Input
validation and manipulation/preprocessing, Gradient masking, Model distillation,
Explainability, Preventing information loss

Attacking the ML Supply Chain: Strong conventional cybersecurity practices

Backdoor ML: Regularisation, Ensemble methods, Input validation and
manipulation/preprocessing, Model distillation, Explainability, Preventing
information loss

Exploit Software Dependencies: Strong conventional cybersecurity practices


TABLE 2

The associated defensive countermeasures to prevent exploitation by each
adversarial attack. There is much overlap between the defences, and the combined
effect of multiple defensive measures is greater than that of its individual
constituents.

Further countermeasures include regularisation. Regularisation is used in ML to
prevent overfitting of a model during training (by adding a penalty, i.e. a
regularisation term, to the cost function); this can reduce the possible
adversarial attack space by making the model more robust to small perturbations.
Methods of regularisation include feature pruning, which prunes activations and
neurons from a network [72]. Neuron dropout can be used during model training to
stochastically remove neurons, which can prevent overfitting on small datasets
and potentially remove a backdoor [73]. Other methods include adding a layer of
random noise to the model after the input layer so that, during forward
propagation, the noise creates slightly different outcomes, making the model
more robust against small perturbations [74].
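
The following sketch shows the noise-plus-dropout regularisation described above
for an assumed small PyTorch classifier; the layer sizes and noise level are
illustrative.

    # Noise layer after the input plus dropout, so small input perturbations are
    # less likely to flip the prediction. Architecture is a stand-in example.
    import torch
    import torch.nn as nn

    class GaussianNoise(nn.Module):
        def __init__(self, sigma=0.05):
            super().__init__()
            self.sigma = sigma

        def forward(self, x):
            if self.training:
                return x + self.sigma * torch.randn_like(x)
            return x

    model = nn.Sequential(
        GaussianNoise(sigma=0.05),   # random noise applied directly after the input
        nn.Flatten(),
        nn.Linear(28 * 28, 128),
        nn.ReLU(),
        nn.Dropout(p=0.3),           # neuron dropout during training
        nn.Linear(128, 10),
    )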

Ensemble methods combine multiple ML models into an overall model. Common
methods include gradient boosting, such as XGBoost [75]. This approach can
reduce the likelihood of training data poisoning because the individual models
are trained on different datasets; therefore, when the models are combined, the
good models can reduce the effect of the poisoned models [76]. It is also
possible that adversarial samples are fewer when the training data has a greater
distribution.

Input validation (or sanitisation) and manipulation/preprocessing, which control
the data going into the model, can help prevent attacks. Input reconstruction
can be used to remove the adversarial effect from input data, analogous to the
input sanitisation used to prevent Structured Query Language (SQL) injection
attacks. Input reconstruction was suggested in the work of [77], which proposed
transforms applied to input images before making model predictions (clipping,
JPEG compression, rescaling depth, etc.). Feature/data compression, as in
ComCNN [78], can be used to reduce the feature depth of the input, e.g.,
reducing the colour depth of pixels to increase robustness at the cost of
reduced input accuracy. Inputs could also be filtered, smoothed, and have random
noise applied on input to sanitise the data. Regression analysis can be used to
locate data outliers during input [79]. Image preprocessing (such as random
image padding) can be used to prevent a backdoor attack from being triggered.
Input denoising works by attempting to remove noise from the input; tools
include the high-level representation guided denoiser (HGD) [80]. GANs can be
used to clean data by recreating a similar image to the input: MagNet [81] and
defence-GAN [82] are tools that can recreate input images with reduced
adversarial noise, on a manifold closer to the benign data.
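
A short sketch of input preprocessing defences of this kind (JPEG re-compression
and colour-depth reduction) is shown below using Pillow; the quality and
bit-depth values are illustrative choices.

    # Input preprocessing sketch: re-compress and quantise an image before it is
    # passed to the model, discarding much high-frequency adversarial noise.
    import io
    from PIL import Image

    def preprocess_input(path, jpeg_quality=75, bits_per_channel=5):
        img = Image.open(path).convert("RGB")
        # JPEG re-compression.
        buffer = io.BytesIO()
        img.save(buffer, format="JPEG", quality=jpeg_quality)
        buffer.seek(0)
        img = Image.open(buffer).convert("RGB")
        # Colour-depth (feature) compression: quantise each channel to fewer levels.
        levels = 2 ** bits_per_channel
        return img.point(lambda v: int(v * (levels - 1) / 255) * 255 // (levels - 1))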

A countermeasure that would be useful against the perturbation attacks shown in
this paper, and others, is gradient masking. Gradient masking can reduce the
likelihood of an attacker acquiring the model’s gradient and hence protect
against gradient-based adversarial attacks [83]. There is no reduction in the
size of the adversarial space; rather, this defence aims to make open-box
probing for adversarial samples more difficult by masking the useful gradient.
The effect works by creating less smooth (sharper) classification boundaries.
Gradient masking methods include model distillation and dropout.

Model distillation requires training a smaller, less complex model based on the
original complex model. This reduction/compression in model complexity, whilst
maintaining similar model accuracy, can help prevent adversarial attacks by
creating a model with a smoother loss surface and hence less sensitivity to
small perturbations [84]. The original model’s output is used as a soft label,
and the original label is used as a hard label.
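
The distillation objective described above can be sketched as follows; the
temperature and weighting values are illustrative assumptions.

    # Distillation loss sketch: the student matches the teacher's softened outputs
    # (soft labels) while still fitting the original hard labels.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, hard_labels, T=10.0, alpha=0.7):
        soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                        F.softmax(teacher_logits / T, dim=1),
                        reduction="batchmean") * (T * T)
        hard = F.cross_entropy(student_logits, hard_labels)
        return alpha * soft + (1 - alpha) * hard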

Adversarial sample detection can help protect MAS AI against adversarial AI via
input monitoring. Instead of sanitising the input, one could attempt to detect
whether the input is an adversarial sample, and decide whether it should be
accepted or rejected before it enters the model. For example, such detection
models can be built as a binary classifier which determines whether the input
follows a similar distribution to the training data [85].
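
A toy sketch of such a detector is shown below: a binary classifier trained on
clean versus perturbed samples that gates queries before they reach the main
model. The synthetic data stands in for real clean/adversarial sets.

    # Adversarial-input detector sketch on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    clean = rng.normal(0, 1, size=(500, 64))
    adversarial = clean + rng.normal(0, 0.3, size=clean.shape)   # stand-in perturbed samples

    X = np.vstack([clean, adversarial])
    y = np.concatenate([np.zeros(len(clean)), np.ones(len(adversarial))])
    detector = LogisticRegression(max_iter=1000).fit(X, y)

    def accept(query):
        # Reject queries the detector scores as likely adversarial.
        return detector.predict_proba(query.reshape(1, -1))[0, 1] < 0.5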

Explainability covers a number of terms, such as trustworthiness, causality,
transferability and informativeness, which all support the understanding, and
hence the security, of ML models. Explainability is a hot topic, and many
methods exist to support model explainability. Improving AI explainability is
critical as this sector develops AI for mission-critical operations.

Preventing information loss addresses the threat of model stealing in a few
ways. To protect against data stealing, one could use PATE [86], which splits
the training data into subsets and trains multiple models on those subsets; the
models are then combined and vote on the predicted outcome. Watermarking can
also be used to place a unique watermark in the model, which can later be
evaluated to determine whether the model was stolen [87].

5.


CONCLUSION

This work has provided an evaluation of AI security in maritime autonomous
systems. A literature review revealed the potential vulnerabilities in MAS AI
that could be exposed through a set of adversarial AI test cases strategically
designed to test AI used in MAS operations. However, this study of the current
state of the art in MAS security has also highlighted the inherent limitations
of testing adversarial AI only in laboratory environments. Given the extreme
differences in marine environments based on location, weather and time of day,
it is also clear that any AAI test must also be evaluated in a real-world
environment to produce truly useful and cyber-resilient maritime AI. After
evaluating these results in situ, we developed a series of principles for secure
AI in MAS which can be used to mitigate these threats across the AI’s lifecycle.

In further work, we recognise the limited preparation for and understanding of
AAI in MAS technologies by developers, security professionals and marine
regulators. Knowledge could therefore be disseminated through the secure AI
principles and AAI employee training. We would also consider the evaluation of
other attacks and their associated defences in the maritime environment (in a
range of conditions); we would then consider the effects of underwater
distortion and similar factors. Further, we aim to evaluate a range of
real-world AI (existing commercial and military AI systems) against AAI; this
will allow one to gauge the secondary effects of an attack too (e.g., if one
interferes with the CV object detection, how would that impact the collision
avoidance module of a vessel?) as well as to evaluate the effectiveness of AAI
and defences in a complex and dynamic environment. Furthermore, we would like to
consider the probability of each attack in a maritime autonomous environment,
since some attacks are more effective and more likely than others in the
real-world environment. As we see greater accessibility of AI, we are also
likely to see an increase in the misuse of AI (e.g., AI to support
clandestine/smuggling operations) as well as an increase in the exploitation of
AI systems. The importance of the security of AI is increasing with its use in
mission-critical systems (e.g., we are seeing increasing use of maritime
autonomous systems by militaries [88, 89]). Whilst many of these AAI attacks
have not yet been utilised in the real world, as the potential financial gain of
these attacks increases and AI is increasingly used in mission-critical systems,
adversaries will look to exploit these methods, and the requirement to prepare
for the fast-evolving AAI threat landscape is today.


CONFLICT OF INTEREST

The authors declare no conflict of interest.

6.


ACKNOWLEDGMENTS

This work was supported by the Turing’s Defence and Security programme through a
partnership with the UK government, in accordance with the framework agreement
between GCHQ and The Alan Turing Institute. The authors would also like to thank
the University of Plymouth for the use of their autonomous fleet to collect
real-world data.


REFERENCES

 1.  1.
     Felski A, Zwolak K. The ocean-going autonomous ship—challenges and threats.
     J Mar Sci Eng. 2020;8(1):41.
 2.  2.
     Kretschmann L, Burmeister H-C, Jahn C. Analyzing the economic benefit of
     unmanned autonomous ships: an exploratory cost-comparison between an
     autonomous and a conventional bulk carrier. Res Transp Bus Manag. 2017;25:
     76–86.
 3.  3.
     Morris D. Worlds first autonomous ship to launch in 2018. Fortune
     [Internet]. 2017 [cited 2017 Jul 22]. Available from
     https://fortune.com/2017/07/22/first-autonomous-ship-yara-birkeland/.
 4.  4.
     Munim ZH. Autonomous ships: a review, innovative applications and future
     maritime business models. Supply Chain Forum: Int J. 2019;20: 266–279.
 5.  5.
     Porathe T, Prison J, Man Y. Situation awareness in remote control centres
     for unmanned ships. In: Proceedings of Human Factors in Ship Design &
     Operation, 26–27 February 2014, London, UK. Buckinghamshire, UK: CORE;
     2014. 93 p.
 6.  6.
     Tsvetkova A, Hellström M. Creating value through autonomous shipping: an
     ecosystem perspective. Marit Econ Logist. 2022;24: 255–277.
 7.  7.
     Ziajka-Poznańska E, Montewka J. Costs and benefits of autonomous shipping -
     a literature review. Appl Sci. 2021;11(10):4553.
 8.  8.
     Royce R. Remote and autonomous ships. In: AAWA position paper. Oslo,
     Norway: DNV; 2016.
 9.  9.
     Anderson M. Bon voyage for the autonomous ship mayflower. IEEE Spectr.
     2019;57(1):36–39.
 10. 10.
     Caruana R, Lou Y, Gehrke J, Koch P, Sturm M, Elhadad N. Intelligible models
     for healthcare: predicting pneumonia risk and hospital 30-day readmission.
     In: Proceedings of the 21th ACM SIGKDD International Conference on
     Knowledge Discovery and Data Mining. New York, USA: ACM; 2015.
     p. 1721–1730.
 11. 11.
     Kong Z, Xue J, Wang Y, Huang L, Niu Z, Li F. A survey on adversarial attack
     in the age of artificial intelligence. Wirel Commun Mob Comput. 2021;2021:
     4907754.
 12. 12.
     Qiu S, Liu Q, Zhou S, Wu C. Review of artificial intelligence adversarial
     attack and defense technologies. Appl Sci. 2019;9(5):909.
 13. 13.
     Kaluza P, Kölzsch A, Gastner MT, Blasius B. The complex network of global
     cargo ship movements. J R Soc Interface. 2010;7(48):1093–1103.
 14. 14.
     Askari HR, Hossain MN. Towards utilising autonomous ships: a viable advance
     in industry 4.0. J Int Marit Saf Environ Aff Shipp. 2022;6(1):39–49.
 15. 15.
     Fan C, Wróbel K, Montewka J, Gil M, Wan C, Zhang D. A framework to identify
     factors influencing navigational risk for maritime autonomous surface
     ships. Ocean Eng. 2020;202: 107188.
 16. 16.
     Thombre S, Zhao Z, Ramm-Schmidt H, García JMV, Malkamäki T, Nikolskiy S,
     Sensors and AI techniques for situational awareness in autonomous ships: a
     review. In: IEEE Transactions on Intelligent Transportation Systems.
     Piscataway, NJ: IEEE; 2020.
 17. 17.
     Noel A, Shreyanka K, Gowtham K, Satya K. Autonomous ship navigation
     methods: a review. In: International Conference on Marine Engineering and
     Technology Oman 2019 (ICMET Oman) [Internet]; 2019 Nov 5–7; Muscat, Oman.
     Military Technological College Oman; 2019. Available from
     https://doi.org/10.24868/icmet.oman.2019.028.
 18. 18.
     Bentes C, Frost A, Velotto D, Tings B. Ship-iceberg discrimination with
     convolutional neural networks in high resolution SAR images. In:
     Proceedings of EUSAR 2016: 11th European Conference on Synthetic Aperture
     Radar. Hamburg, Germany: VDE; 2016. p. 1–4.
 19. 19.
     Wang J, Xiao Y, Li T, Chen CLP. A survey of technologies for unmanned
     merchant ships. IEEE Access. 2020;8: 224461–224486.
 20. 20.
     Kim H, Kim S-H, Jeon M, Kim JH, Song S, Paik K-J. A study on path
     optimisation method of an unmanned surface vehicle under environmental
     loads using genetic algorithm. Ocean Eng. 2017;142: 616–624.
 21. 21.
     Song CH. Global path planning method for USV system based on improved ant
     colony algorithm. In: Applied mechanics and materials. vol. 568,
     Switzerland: Trans Tech Publications; 2014. p. 785–788.
 22. 22.
     Zhang Y, Gong D-w, Zhang J-h. Robot path planning in uncertain environment
     using multi-objective particle swarm optimisation. Neurocomputing.
     2013;103: 172–185.
 23. 23.
     Arrieta AB, Díaz-Rodríguez N, Del Ser J, Bennetot A, Tabik S, Barbado A,
     Explainable artificial intelligence (XAI): concepts, taxonomies,
     opportunities and challenges toward responsible AI. Inf Fusion. 2020;58:
     82–115.
 24. 24.
     Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I,
     Intriguing properties of neural networks [Internet]. arXiv; 2013. Available
     from: https://arxiv.org/abs/1312.6199.
 25. 25.
     Mirsky Y, Demontis A, Kotak J, Shankar R, Gelei D, Yang L, The threat of
     offensive AI to organizations. Comput Secur. 2022;124: 103006.
 26. 26.
     Caroline B, Christian B, Stephan B, Luis B, Giuseppe D, Damiani E, Securing
     machine learning algorithms. Athens, Greece: ENISA; 2021.
 27. 27.
     Kate S. Introducing our new machine learning security principles. Boca
     Raton, FL: CRC Press; 2022.
 28. 28.
     Seymour J, Tully P. Weaponizing data science for social engineering:
     automated E2E spear phishing on twitter. Black Hat USA. 2016;37: 1–39.
 29. 29.
     Dang H, Liu F, Stehouwer J, Liu X, Jain AK. On the detection of digital
     face manipulation. In: Proceedings of the IEEE/CVF Conference on Computer
     Vision and Pattern Recognition. Seattle, WA, USA: IEEE; 2020. p. 5781–5790.
 30. 30.
     Akhtar N, Mian A. Threat of adversarial attacks on deep learning in
     computer vision: a survey. IEEE Access. 2018;6: 14410–14430.
 31. 31.
     Elsayed G, Shankar S, Cheung B, Papernot N, Kurakin A, Goodfellow I,
     Adversarial examples that fool both computer vision and time-limited
     humans. In: 32nd Conference on Neural Information Processing Systems
     (NeurIPS 2018), Montréal, Canada. Red Hook, NY: Curran Associates Inc.;
     2018. 31 p.
 32. 32.
     Wang Z, She Q, Ward TE. Generative adversarial networks in computer vision:
     a survey and taxonomy. ACM Comput Surv (CSUR). 2021;54(2):1–38.
 33. 33.
     Al-Dujaili A, Huang A, Hemberg E, OReilly U-M. Adversarial deep learning
     for robust detection of binary encoded malware. In: 2018 IEEE Security and
     Privacy Workshops (SPW). San Francisco, USA: IEEE; 2018. p. 76–82.
34.
     Kolosnjaji B, Demontis A, Biggio B, Maiorca D, Giacinto G, Eckert C, et al.
     Adversarial malware binaries: evading deep learning for malware detection
     in executables. In: 2018 26th European Signal Processing Conference
     (EUSIPCO). San Francisco, USA: IEEE; 2018. p. 533–537.
35.
     Li D, Li Q, Ye Y, Xu S. Arms race in adversarial malware detection: a
     survey. ACM Comput Surv (CSUR). 2021;55(1):1–35.
36.
     Maiorca D, Biggio B, Giacinto G. Towards adversarial malware detection:
     lessons learned from PDF-based attacks. ACM Comput Surv (CSUR).
     2019;52(4):1–36.
37.
     Morris JX, Lifland E, Yoo JY, Grigsby J, Jin D, Qi Y. Textattack: a
     framework for adversarial attacks, data augmentation, and adversarial
     training in NLP [Internet]. arXiv; 2020. Available from:
     https://arxiv.org/abs/2005.05909.
38.
     Wallace E, Feng S, Kandpal N, Gardner M, Singh S. Universal adversarial
     triggers for attacking and analyzing NLP [Internet]. arXiv; 2019. Available
     from: https://arxiv.org/abs/1908.07125.
39.
     Juuti M, Szyller S, Marchal S, Asokan N. PRADA: protecting against DNN
     model stealing attacks. In: 2019 IEEE European Symposium on Security and
     Privacy (EuroS&P). Stockholm, Sweden: IEEE; 2019. p. 512–527.
40.
     Wang B, Gong NZ. Stealing hyperparameters in machine learning. In: 2018
     IEEE Symposium on Security and Privacy (SP). San Francisco, USA: IEEE;
     2018. p. 36–52.
41.
     Kessler J. Data protection in the wake of the GDPR: California’s solution
     for protecting “the world’s most valuable resource”. South Calif Law Rev.
     2019;93: 99.
42.
     Sewak M, Sahay SK, Rathore H. Adversarialuscator: an adversarial-DRL based
     obfuscator and metamorphic malware swarm generator. In: 2021 International
     Joint Conference on Neural Networks (IJCNN). Piscataway, NJ: IEEE; 2021.
     p. 1–9.
43.
     Gu T, Dolan-Gavitt B, Garg S. Badnets: identifying vulnerabilities in the
     machine learning model supply chain [Internet]. arXiv; 2017. Available
     from: https://arxiv.org/abs/1708.06733.
44.
     Barreno M, Nelson B, Sears R, Joseph AD, Tygar JD. Can machine learning be
     secure? In: Proceedings of the 2006 ACM Symposium on Information, Computer
     and Communications Security. New York, USA: ACM; 2006. p. 16–25.
45.
     Biggio B, Nelson B, Laskov P. Support vector machines under adversarial
     label noise. In: Asian Conference on Machine Learning. Cambridge, MA: PMLR;
     2011. p. 97–112.
46.
     Fredrikson M, Jha S, Ristenpart T. Model inversion attacks that exploit
     confidence information and basic countermeasures. In: Proceedings of the
     22nd ACM SIGSAC Conference on Computer and Communications Security. New
     York: ACM; 2015. p. 1322–1333.
47.
     Orekondy T, Schiele B, Fritz M. Knockoff nets: stealing functionality of
     black-box models. In: Proceedings of the IEEE/CVF Conference on Computer
     Vision and Pattern Recognition. Piscataway, NJ: IEEE; 2019. p. 4954–4963.
48.
     Goodfellow IJ, Shlens J, Szegedy C. Explaining and harnessing adversarial
     examples [Internet]. arXiv; 2014. Available from:
     https://arxiv.org/abs/1412.6572.
49.
     Su J, Vargas DV, Sakurai K. One pixel attack for fooling deep neural
     networks. IEEE Trans Evol Comput. 2019;23(5):828–841.
50.
     Kurakin A, Goodfellow I, Bengio S. Adversarial machine learning at scale
     [Internet]. arXiv; 2016. Available from: https://arxiv.org/abs/1611.01236.
51.
     Moosavi-Dezfooli S-M, Fawzi A, Frossard P. Deepfool: a simple and accurate
     method to fool deep neural networks. In: Proceedings of the IEEE Conference
     on Computer Vision and Pattern Recognition. Las Vegas, NV, USA: IEEE; 2016.
     p. 2574–2582.
52.
     Papernot N, McDaniel P, Jha S, Fredrikson M, Celik ZB, Swami A. The
     limitations of deep learning in adversarial settings. In: IEEE European
     Symposium on Security and Privacy (EuroS&P). Piscataway, NJ: IEEE; 2016.
     p. 372–387.
53.
     Chen J, Su M, Shen S, Xiong H, Zheng H. Poba-ga: perturbation optimized
     black-box adversarial attacks via genetic algorithm. Comput Secur. 2019;85:
     89–106.
54.
     Xiao C, Li B, Zhu J-Y, He W, Liu M, Song D. Generating adversarial examples
     with adversarial networks [Internet]. arXiv; 2018. Available from:
     https://arxiv.org/abs/1801.02610.
55.
     Brown TB, Mané D, Roy A, Abadi M, Gilmer J. Adversarial patch [Internet].
     arXiv; 2017. Available from: https://arxiv.org/abs/1712.09665.
56.
     Lee M, Kolter Z. On physical adversarial patches for object detection
     [Internet]. arXiv; 2019. Available from: https://arxiv.org/abs/1906.11897.
57.
     Liu X, Yang H, Liu Z, Song L, Li H, Chen Y. Dpatch: an adversarial patch
     attack on object detectors [Internet]. arXiv; 2018. Available from:
     https://arxiv.org/abs/1806.02299.
58.
     Song D, Eykholt K, Evtimov I, Fernandes E, Li B, Rahmati A, et al. Physical
     adversarial examples for object detectors. In: 12th USENIX Workshop on
     Offensive Technologies (WOOT 18). Berkeley, CA, USA: USENIX; 2018.
59.
     Wu H, Yunas S, Rowlands S, Ruan W, Wahlstrom J. Adversarial detection:
     attacking object detection in real time [Internet]. arXiv; 2022. Available
     from: https://arxiv.org/abs/2209.01962.
60.
     Yang C, Kortylewski A, Xie C, Cao Y, Yuille A. Patchattack: a black-box
     texture-based attack with reinforcement learning. In: Computer Vision–ECCV
     2020: 16th European Conference, Glasgow, UK, August 23–28, 2020,
     Proceedings, Part XXVI. Berlin: Springer; 2020. p. 681–698.
61.
     Hoory S, Shapira T, Shabtai A, Elovici Y. Dynamic adversarial patch for
     evading object detection models [Internet]. arXiv; 2020. Available from:
     https://arxiv.org/abs/2010.13070.
62.
     Liang H, He E, Zhao Y, Jia Z, Li H. Adversarial attack and defense: a
     survey. Electronics. 2022;11(8):1283.
63.
     Yoo J-W, Jo Y-H, Cha Y-K. Artificial intelligence for autonomous ship:
     potential cyber threats and security. J Korea Inst Inf Secur Cryptol.
     2022;32(2):447–463.
64.
     Kumar RSS, O'Brien D, Albert K, Viljoen S, Snover J. Failure modes in
     machine learning systems [Internet]. arXiv; 2019. Available from:
     https://arxiv.org/abs/1911.11034.
65.
     Wolf MJ, Miller K, Grodzinsky FS. Why we should have seen that coming:
     comments on Microsoft’s Tay “experiment”, and wider implications. ACM
     SIGCAS Comput Soc. 2017;47(3):54–64.
66.
     Kim S, Kim H. Zero-centered fixed-point quantization with iterative
     retraining for deep convolutional neural network-based object detectors.
     IEEE Access. 2021;9: 20828–20839.
67.
     Kannan H, Kurakin A, Goodfellow I. Adversarial logit pairing [Internet].
     arXiv; 2018. Available from: https://arxiv.org/abs/1803.06373.
68.
     Lee H, Han S, Lee J. Generative adversarial trainer: defense to adversarial
     perturbations with gan [Internet]. arXiv; 2017. Available from:
     https://arxiv.org/abs/1705.03387.
69.
     Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A. Towards deep learning
     models resistant to adversarial attacks [Internet]. arXiv; 2017. Available
     from: https://arxiv.org/abs/1706.06083.
70.
     Qian YG, Zhang XM, Wang B, Li W, Chen JH, Zhou WJ, et al. Towards robust
     DNNs: a Taylor expansion-based method for generating powerful adversarial
     examples [Internet]. arXiv; 2020. Available from:
     https://arxiv.org/abs/2001.08389.
71.
     Hosseini H, Chen Y, Kannan S, Zhang B, Poovendran R. Blocking
     transferability of adversarial examples in black-box learning systems
     [Internet]. arXiv; 2017. Available from: https://arxiv.org/abs/1703.04318.
72.
     Dhillon GS, Azizzadenesheli K, Lipton ZC, Bernstein J, Kossaifi J, Khanna A,
     et al. Stochastic activation pruning for robust adversarial defense [Internet].
     arXiv; 2018. Available from: https://arxiv.org/abs/1803.01442.
73.
     Liu K, Dolan-Gavitt B, Garg S. Fine-pruning: defending against backdooring
     attacks on deep neural networks. In: Research in Attacks, Intrusions, and
     Defenses: 21st International Symposium, RAID 2018, Proceedings 21; 2018
     Sep 10–12; Heraklion, Crete, Greece. Cham: Springer; 2018. p. 273–294.
74.
     Liu X, Cheng M, Zhang H, Hsieh C-J. Towards robust neural networks via
     random self-ensemble. In: Proceedings of the European Conference on
     Computer Vision (ECCV). Cham: Springer; 2018. p. 369–385.
75.
     Chen T, He T, Benesty M, Khotilovich V, Tang Y, Cho H. Xgboost: extreme
     gradient boosting. R package version 0.4-2; 2015. p. 1–4.
76.
     Li D, Li Q. Adversarial deep ensemble: evasion attacks and defenses for
     malware detection. IEEE Trans Inf Forensics Secur. 2020;15: 3886–3900.
77.
     Guo C, Rana M, Cisse M, Van Der Maaten L. Countering adversarial images
     using input transformations [Internet]. arXiv; 2017. Available from:
     https://arxiv.org/abs/1711.00117.
78.
     Jia X, Wei X, Cao X, Foroosh H. ComDefend: an efficient image compression
     model to defend adversarial examples. In: Proceedings of the IEEE/CVF
     Conference on Computer Vision and Pattern Recognition. Long Beach, CA, USA:
     IEEE; 2019. p. 6084–6092.
79.
     Jagielski M, Oprea A, Biggio B, Liu C, Nita-Rotaru C, Li B. Manipulating
     machine learning: poisoning attacks and countermeasures for regression
     learning. In: 2018 IEEE Symposium on Security and Privacy (SP). Piscataway,
     NJ: IEEE; 2018. p. 19–35.
80.
     Liao F, Liang M, Dong Y, Pang T, Hu X, Zhu J. Defense against adversarial
     attacks using high-level representation guided denoiser. In: Proceedings of
     the IEEE Conference on Computer Vision and Pattern Recognition. Bellingham,
     WA: SPIE; 2018. p. 1778–1787.
81.
     Meng D, Chen H. Magnet: a two-pronged defense against adversarial examples.
     In: Proceedings of the 2017 ACM SIGSAC Conference on Computer and
     Communications Security. New York, USA: ACM; 2017. p. 135–147.
82.
     Samangouei P, Kabkab M, Chellappa R. Defense-gan: protecting classifiers
     against adversarial attacks using generative models [Internet]. arXiv;
     2018. Available from: https://arxiv.org/abs/1805.06605.
83.
     Folz J, Palacio S, Hees J, Dengel A. Adversarial defense based on
     structure-to-signal autoencoders. In: 2020 IEEE Winter Conference on
     Applications of Computer Vision (WACV). Piscataway, NJ: IEEE; 2020.
     p. 3568–3577.
84.
     Papernot N, McDaniel P, Wu X, Jha S, Swami A. Distillation as a defense to
     adversarial perturbations against deep neural networks. In: 2016 IEEE
     Symposium on Security and Privacy (SP). Piscataway, NJ: IEEE; 2016.
     p. 582–597.
85.
     Tanay T, Griffin L. A boundary tilting perspective on the phenomenon of
     adversarial examples [Internet]. arXiv; 2016. Available from:
     https://arxiv.org/abs/1608.07690.
86.
     Papernot N, Abadi M, Erlingsson U, Goodfellow I, Talwar K. Semi-supervised
     knowledge transfer for deep learning from private training data [Internet].
     arXiv; 2016. Available from: https://arxiv.org/abs/1610.05755.
87.
     Adi Y, Baum C, Cisse M, Pinkas B, Keshet J. Turning your weakness into a
     strength: watermarking deep neural networks by backdooring. In: 27th
     USENIX Security Symposium (USENIX Security 18). Berkeley, CA, USA:
     USENIX; 2018. p. 1615–1631.
88.
     Hall A. Autonomous minehunter to trial uncrewed operations in the Gulf.
     Navy News [Internet]; 2023 [cited 2023 Feb 13]. Available from:
     https://www.royalnavy.mod.uk/news-and-latest-activity/news/2023/february/13/20230213-autonomous-minehunter-to-trial-uncrewed-operations-in-the-gulf.
89.
     Hall A. Dstl and DASA research underpins Royal Navy maritime autonomy. Navy
     News [Internet]; 2023 [cited 2023 Jan 26]. Available from:
     https://www.gov.uk/government/news/dstl-and-dasa-research-underpins-royal-navy-maritime-autonomy.

--------------------------------------------------------------------------------

Written by


Mathew J Walter, Aaron Barrett, David J Walker and Kimberly Tam

Article Type: Research Paper

Date of acceptance: March 2023

Date of publication: April 2023

DOI: 10.5772/acrt.15

Copyright: The Author(s), Licensee IntechOpen, License: CC BY 4.0

© The Author(s) 2023. Licensee IntechOpen. This is an Open Access article
distributed under the terms of the Creative Commons Attribution License
(https://creativecommons.org/licenses/by/4.0/), which permits unrestricted
reuse, distribution, and reproduction in any medium, provided the original work
is properly cited.

