<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://urlscan.io/blog/feed.xml" rel="self" type="application/atom+xml" /><link href="https://urlscan.io/blog/" rel="alternate" type="text/html" /><updated>2026-04-15T17:03:10+02:00</updated><id>https://urlscan.io/blog/feed.xml</id><title type="html">Blog - urlscan.io</title><subtitle>urlscan.io Blog - Announcements, Product News, Tutorials, Service Incidents</subtitle><author><name>urlscan.io</name></author><entry><title type="html">Proxying Trust</title><link href="https://urlscan.io/blog/2026/04/15/ProxyingTrust/" rel="alternate" type="text/html" title="Proxying Trust" /><published>2026-04-15T17:02:00+02:00</published><updated>2026-04-15T17:02:00+02:00</updated><id>https://urlscan.io/blog/2026/04/15/ProxyingTrust</id><content type="html" xml:base="https://urlscan.io/blog/2026/04/15/ProxyingTrust/"><![CDATA[<p>During routine monitoring of malicious web activity on the urlscan platform, the urlscan Threat Research Team identified a phishing campaign abusing the Ultraviolet (UV) client-side proxy framework. This framework was being leveraged to obscure attacker infrastructure, evade traditional detection methods, and deliver high-fidelity credential harvesting content.</p>

<!--more-->

<p>This brief provides a technical analysis of how threat actors repurpose client-side proxy frameworks like Ultraviolet and its successor Scramjet for phishing campaigns, the observable network and page-level artifacts, and detection strategies for these novel evasion techniques.</p>

<hr />

<h3 id="contents">Contents</h3>
<ul>
  <li><a href="#service-workers---a-brief-overview">Service workers - A brief overview</a></li>
  <li><a href="#ultraviolet-proxy-framework">Ultraviolet Proxy Framework</a></li>
  <li><a href="#case-study:-microsoft-login-phishing-via-ultraviolet">Case Study: Microsoft Login Phishing via Ultraviolet</a></li>
  <li><a href="#emerging-trends:-scramjet">Emerging Trends: Scramjet</a></li>
  <li><a href="#why-threat-actors-use-proxy-frameworks">Why Threat Actors Use Proxy Frameworks</a></li>
  <li><a href="#conclusion">Conclusion</a></li>
  <li><a href="#references-and-further-reading">References and further reading</a></li>
</ul>

<hr />

<h3 id="service-workers---a-brief-overview"><a name="service-workers---a-brief-overview">Service workers - A brief overview</a></h3>

<p>What if a website could quietly place a programmable layer inside your browser that controls how network traffic is handled? A service worker is JavaScript code that a website downloads into the web browser. Once installed, it sits inside the browser and acts like a helper for that website, handling some of the communication between the browser and the web server. The service worker can control what information is sent back and forth, store data to retrieve later, and run background operations. Service workers operate in the background, with no obvious indication to the end user.</p>

<p>At its core, a service worker operates like a proxy server sitting between the web application, the browser, and the network. However, it is intentionally restricted: service workers have no DOM access, run asynchronously on a separate thread, and cannot use certain synchronous APIs such as synchronous XHR or Web Storage. For further reading, see the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API">Mozilla Service Worker API documentation</a>.</p>

<p>What service workers can do:</p>

<ul>
  <li>Intercept network requests and provide custom responses.</li>
  <li>Cache resources to enable effective offline experiences.</li>
  <li>Support capabilities such as push notifications and background synchronization.</li>
</ul>

<p><br /></p>

<p>What they cannot do:</p>

<ul>
  <li>Access or manipulate the DOM.</li>
  <li>Run synchronous operations or certain blocking APIs.</li>
  <li>Intercept network requests from browser tabs/windows other than their own.</li>
</ul>

<p><br /></p>
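<p>The registration and interception flow described above can be sketched in a few lines. The file name <code class="language-plaintext highlighter-rouge">/sw.js</code>, the <code class="language-plaintext highlighter-rouge">/s/</code> prefix, and the <code class="language-plaintext highlighter-rouge">rerouteThroughProxy</code> helper are illustrative names for this sketch, not taken from any specific framework:</p>

```javascript
// In the page: register a service worker (browser-only API, shown for context).
//   if ('serviceWorker' in navigator) {
//     navigator.serviceWorker.register('/sw.js', { scope: '/' });
//   }
//
// Inside sw.js, every in-scope request then passes through a 'fetch' handler:
//   self.addEventListener('fetch', (event) => {
//     event.respondWith(fetch(rerouteThroughProxy(event.request.url,
//                                                 self.location.origin, '/s/')));
//   });

// The rerouting decision itself is plain logic: a proxying worker maps the
// original URL onto a path under its own origin before fetching it.
function rerouteThroughProxy(requestUrl, proxyOrigin, prefix) {
  return proxyOrigin + prefix + encodeURIComponent(requestUrl);
}
```

<p>Once such a handler is installed, the page itself never needs to reference attacker infrastructure directly; the worker rewrites traffic transparently.</p>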

<div>
   <div class="col col-md-12">
      <img src="/blog/assets/images/Proxying-Trust/Service-workers.png" title="Service Worker Diagram" alt="Service Worker Diagram" />
      <p class="caption">Service Worker Diagram - Source: https://web.dev/articles/workers-overview</p>
   </div>
</div>

<p>Threat actors benefit significantly from using service workers as a proxy layer in credential-phishing campaigns because these workers can intercept and modify network requests while operating persistently in the background. A malicious service worker can alter responses, inject harmful content, or redirect victims to phishing pages, enabling attackers to capture sensitive information without the user’s knowledge. Once installed, a worker may intercept every request made by a web application, steal data, or manipulate sessions, effectively positioning the attacker in a man-in-the-middle role. Because service workers can also cache and serve modified responses and continue running after installation, they give attackers durable control over traffic flows, a capability that helps deliver malicious payloads directly to the browser and can evade traditional inspection points such as SSL/TLS proxies. Setting up a service worker from scratch is not trivial, which is why pre-configured frameworks exist that allow simple and quick deployments.</p>

<h3 id="ultraviolet-proxy-framework"><a name="ultraviolet-proxy-framework">Ultraviolet Proxy Framework</a></h3>

<p>Ultraviolet is an open-source browser-based proxy framework originally designed to bypass censorship by relaying web content through a service-worker sandbox. When run in a user’s browser, Ultraviolet intercepts all HTTP requests for that window and reroutes them through an Ultraviolet server. This allows users to access arbitrary sites as if the content were hosted on the proxy domain itself.</p>

<p>Key features include:</p>

<ul>
  <li>Handling CAPTCHAs and cookies</li>
  <li>Full client-side request interception and content rewriting</li>
  <li>URL encoding/decoding to obscure navigation paths</li>
</ul>

<p><br /></p>
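<p>As an illustration of the URL encoding/decoding feature, the following is a minimal XOR codec of the kind the <code class="language-plaintext highlighter-rouge">encodeUrl</code>/<code class="language-plaintext highlighter-rouge">decodeUrl</code> options refer to. This is a hedged sketch for clarity; the key value and character selection are assumptions, not Ultraviolet’s exact implementation:</p>

```javascript
// Illustrative XOR codec: every second character is XORed with a small key,
// then the result is percent-encoded so it survives inside a URL path.
// Key choice (2) and the alternating-character scheme are assumptions.
function xorEncode(url, key = 2) {
  const mixed = url
    .split('')
    .map((ch, i) => (i % 2 ? String.fromCharCode(ch.charCodeAt(0) ^ key) : ch))
    .join('');
  return encodeURIComponent(mixed);
}

function xorDecode(encoded, key = 2) {
  return decodeURIComponent(encoded)
    .split('')
    .map((ch, i) => (i % 2 ? String.fromCharCode(ch.charCodeAt(0) ^ key) : ch))
    .join('');
}
```

<p>Encoding like this is trivially reversible; its purpose is to hide the destination from naive URL filters and log inspection, not to provide secrecy.</p>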

<p>While intended for censorship circumvention, these capabilities make Ultraviolet attractive to threat actors, enabling them to host phishing pages, malware, or redirects behind seemingly benign domains, thereby evading URL-based filtering and static content analysis.</p>

<hr />

<h3 id="case-study-microsoft-login-phishing-via-ultraviolet"><a name="case-study:-microsoft-login-phishing-via-ultraviolet">Case Study: Microsoft Login Phishing via Ultraviolet</a></h3>

<blockquote>
  <h5 id="this-case-study-is-based-on-an-unlisted-scan-conducted-by-the-urlscan-threat-research-team-and-approved-for-release">This case study is based on an <span class="label label-info">Unlisted</span> scan conducted by the urlscan Threat Research Team and approved for release.</h5>
</blockquote>

<p>In a recent investigation (<a href="https://urlscan.io/result/019be0f3-2e36-7549-aff2-f3b27a575a72">https://urlscan.io/result/019be0f3-2e36-7549-aff2-f3b27a575a72</a>) the urlscan Threat Research Team noticed an open redirect on the legitimate Microsoft login domain being abused to forward victims to a malicious phishing domain. This prompted the team to investigate the process further.</p>

<p>During analysis we observed that the page’s HTML and scripts loaded components from an Ultraviolet proxy. Specifically, the page pulled down <code class="language-plaintext highlighter-rouge">uv.bundle.js</code>, <code class="language-plaintext highlighter-rouge">uv.config.js</code>, and <code class="language-plaintext highlighter-rouge">uv.handler.js</code>. These filenames, together with the presence of a <code class="language-plaintext highlighter-rouge">__uv$config</code> JavaScript object and UVClient JavaScript functions in the scan capture, are hallmarks of the Ultraviolet framework.</p>

<p>These scripts are not part of any legitimate Microsoft login page - they are the Ultraviolet client scripts that bootstrap the service worker proxy. In effect, when a victim’s browser visited the phishing link, the Ultraviolet worker redirected requests behind the scenes to the real Microsoft login server. This lets the phishing site display a genuine-looking login page without hosting it on the attacker’s domain.</p>

<p>In general, any page that loads <code class="language-plaintext highlighter-rouge">uv.handler.js</code>, <code class="language-plaintext highlighter-rouge">uv.config.js</code>, and <code class="language-plaintext highlighter-rouge">uv.bundle.js</code> or similar from a nonstandard host is likely using Ultraviolet (or a derivative).</p>
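<p>This heuristic is straightforward to express in code. The marker list and the two-indicator threshold below are our own assumptions for a hedged sketch, not a production signature:</p>

```javascript
// Flag a captured page as likely Ultraviolet-derived when it references
// several of the characteristic client files or the __uv$config global.
const UV_MARKERS = [
  /uv\.bundle\.js/,
  /uv\.handler\.js/,
  /uv\.config\.js/,
  /__uv\$config/,
];

function looksLikeUltraviolet(pageSource) {
  const hits = UV_MARKERS.filter((re) => re.test(pageSource));
  // Requiring two or more markers reduces false positives from pages that
  // merely mention a single similar filename.
  return { likely: hits.length >= 2, indicators: hits.length };
}
```

<p>In practice, running such checks against the captured responses of a scan (rather than only the initial HTML) catches deployments that load the client scripts dynamically.</p>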

<p>The configuration file (<code class="language-plaintext highlighter-rouge">uv.config.js</code>) adjusts how the proxy operates. Its structure is shown below.</p>

<pre><code class="language-txt">  Prefix: The directory prefix users will see.
  Bare: Bare servers can run on directories, e.g., http://example.org/bare/.
  EncodeUrl: How you want the URL to be encoded (Examples: xor or base64).
  DecodeUrl: How you want the URL to be decoded (Should match EncodeUrl).
  Handler: Path to the UV handler (Default: static/uv/uv.handler.js).
  Bundle: Path to the UV bundle file (Default: static/uv/uv.bundle.js).
  Config: Path to the UV config file (Default: static/uv/uv.config.js).
  SW: Path to the UV Service Worker script (Default: static/uv/uv.sw.js).
</code></pre>

<p>In the case study scan, the config file can be seen in the captured responses:</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>self.__uv$config = {
    prefix: '/s/',
    bare: '/bare/',
    encodeUrl: Ultraviolet.codec.xor.encode,
    decodeUrl: Ultraviolet.codec.xor.decode,
    handler: '/uv/uv.handler.js',
    bundle: '/uv/uv.bundle.js',
    config: '/uv/uv.config.js',
    sw: '/uv/uv.sw.js',
};
</code></pre></div></div>

<p>Importantly, urlscan’s phishing-detection engine flagged this scan as malicious and identified it as impersonating Microsoft.</p>

<hr />

<h3 id="emerging-trends-scramjet"><a name="emerging-trends:-scramjet">Emerging Trends: Scramjet</a></h3>

<p>Scramjet is the successor to Ultraviolet and functions similarly, with comparable client-side artifacts and fingerprinting opportunities. We recommend proactive detection by searching for the key artifacts linked to Scramjet deployments.</p>

<h3 id="why-threat-actors-use-proxy-frameworks"><a name="why-threat-actors-use-proxy-frameworks">Why Threat Actors Use Proxy Frameworks</a></h3>

<ol>
  <li>
    <p>Infrastructure hiding: Attackers often host phishing pages on compromised or throwaway domains. Proxy systems let them further obscure the origin by funneling traffic through a proxy domain. To a casual observer or naive filter, the site may simply appear as a normal hosted portal (Google or Discord via proxy), not a credential phishing page.</p>
  </li>
  <li>
    <p>Content authenticity: Because the proxy relays live content from legitimate sites, the phishing page can embed genuine UI elements. In our case, the Microsoft login form was real Microsoft HTML, not a hand‑crafted fake. This reduces the chance of obvious typos or missing images.</p>
  </li>
  <li>
    <p>Filter evasion: Many network defenses whitelist popular content providers or CDN domains. By bouncing through an allowed domain, the attacker can slip past domain‑based blocks.</p>
  </li>
  <li>
    <p>Unified control: The service‑worker model of Ultraviolet and Scramjet means an attacker can configure proxy rules centrally. They can redirect all requests to chosen endpoints (e.g. credential phishing login pages) without modifying each page. This flexibility is appealing for maintaining phishing kits or redirect infrastructures.</p>
  </li>
</ol>

<p>Because Ultraviolet and Scramjet were designed to bypass censorship, they are inherently evasive. Threat actors repurpose that feature set for malicious anonymity and persistence.</p>

<p>To support detection of these systems we have added new labels to urlscan Pro that identify proxy frameworks, including Ultraviolet and Scramjet. They can be used to spot an underlying proxy framework on a scan or to filter search results based on these detections: <br />
<code class="language-plaintext highlighter-rouge">tech.proxy.ultraviolet</code> and <code class="language-plaintext highlighter-rouge">tech.proxy.scramjet</code></p>

<h2 id="conclusion"><a name="conclusion">Conclusion</a></h2>

<p>Client-side proxy frameworks such as Ultraviolet and Scramjet are being repurposed by threat actors to cloak phishing campaigns. The case study demonstrates how attackers can proxy legitimate content (e.g., Microsoft login pages) through their own infrastructure, evading traditional URL-based and static detection.</p>

<p>Detection is achievable by analyzing artifacts, global objects, and configuration patterns in the page source, which highlights the value of hunting on framework elements as well as rendered images and their hashes.</p>

<p>By proactively monitoring these frameworks and correlating proxy behaviors with open redirects, defenders can identify and mitigate high-fidelity phishing campaigns before they reach and impact end users.</p>

<hr />

<h2 id="references-and-further-reading"><a name="references-and-further-reading">References and further reading</a></h2>

<p>If you would like to know more about service workers, their potential for abuse, and their impact on detection, take a look at the following resources, which provide good overviews of this topic.</p>

<ul>
  <li><a href="https://github.com/MercuryWorkshop/scramjet">GitHub - Scramjet</a></li>
  <li><a href="https://github.com/titaniumnetwork-dev/Ultraviolet">GitHub - Ultraviolet</a></li>
  <li><a href="https://medium.com/@ahaz1701/evilworker-da94ae171249">Medium @ahaz1701 - EvilWorker: AiTM attack leveraging service workers</a></li>
  <li><a href="https://www.mux.com/blog/service-workers-are-underrated">Mux.com - Service workers are underrated, and building media proxies proves it</a></li>
  <li><a href="https://web.dev/learn/pwa/service-workers">Web.dev - Service workers</a></li>
</ul>

<p><br /></p>]]></content><author><name>urlscan Threat Research Team</name></author><category term="research" /><category term="phishing-kit" /><category term="phishing" /><category term="ultraviolet" /><category term="scramjet" /><category term="javascript" /><category term="tech" /><summary type="html"><![CDATA[During routine monitoring of malicious web activity on the urlscan platform, the urlscan Threat Research Team identified a phishing campaign abusing the Ultraviolet (UV) client-side proxy framework. This framework was being leveraged to obscure attacker infrastructure, evade traditional detection methods, and deliver high-fidelity credential harvesting content.]]></summary></entry><entry><title type="html">urlscan at PIVOTcon – Málaga, Spain - May 6-8, 2026</title><link href="https://urlscan.io/blog/2026/04/13/join-us-at-pivotcon/" rel="alternate" type="text/html" title="urlscan at PIVOTcon – Málaga, Spain - May 6-8, 2026" /><published>2026-04-13T14:22:00+02:00</published><updated>2026-04-13T14:22:00+02:00</updated><id>https://urlscan.io/blog/2026/04/13/join-us-at-pivotcon</id><content type="html" xml:base="https://urlscan.io/blog/2026/04/13/join-us-at-pivotcon/"><![CDATA[<p>We are excited to be heading to PIVOTcon, where we will host a hands-on
workshop focused on hunting phishing pages and infrastructure. If you are
attending the conference, this is a great opportunity to connect with us and
learn how to make full use of our community and urlscan Pro platforms.</p>

<div class="row bottom10">
<div class="col col-md-12">
 <a href="https://pivotcon.org/">
  <img class="post" src="/blog/assets/images/pivotcon-26.png" title="urlscan at PIVOTcon 2026" alt="urlscan at PIVOTcon 2026" />
 </a>
</div>
</div>

<h2>Workshop: Uncovering Phishing Infrastructure<small><br />A Hands-On Workshop with urlscan.io</small></h2>

<p>In this interactive workshop, we will show how analysts can transform a single
suspicious URL into a deep investigation - uncovering entire phishing
campaigns and the infrastructure behind them.  Whether you’re new to urlscan.io
or already using it in your workflow, this session is designed to give you
practical techniques you can apply immediately.</p>

<!--more-->

<p>During the workshop, we’ll walk through real-world investigation scenarios and demonstrate how to:</p>

<ul>
  <li>Analyse suspicious websites safely using urlscan.io</li>
  <li>Pivot across domains, hostnames, and infrastructure</li>
  <li>Identify clusters of phishing activity using advanced search techniques</li>
  <li>Automate scans and monitor threats in the background</li>
  <li>Use features like live browsing, incident creation, and investigation workflows</li>
</ul>

<p>Our focus is on hands-on, practical investigation techniques - not just theory.</p>

<p>Phishing campaigns are becoming more sophisticated, scalable, and
interconnected. Being able to quickly pivot from a single indicator to a
broader threat landscape is a critical skill for modern analysts.</p>

<p>This workshop will give you insight into:</p>

<ul>
  <li>How real-world investigations are conducted by the urlscan Threat Research Team</li>
  <li>How to uncover infrastructure that isn’t immediately visible</li>
  <li>How to scale your investigations with automation</li>
</ul>

<h2 id="meet-the-team">Meet the Team</h2>

<p>Beyond the workshop, we would love to meet you in person. If you are attending
PIVOTcon, come and chat with us. Share your success stories as well as where
you might still be struggling!</p>

<div class="row text-center bottom10">
<div class="col col-md-5">
 <img class="post-small" src="/blog/assets/images/jake-s.png" title="Jake" alt="Jake" />
 <h3>Jake S</h3>
 <h4>Senior Threat Researcher</h4>
</div>
<div class="col col-md-5">
 <img class="post-small" src="/blog/assets/images/johannes-gilger.png" title="Johannes Gilger" alt="Johannes Gilger" />
 <h3>Johannes 'Jojo' Gilger</h3>
 <h4>Founder &amp; CEO</h4>
</div>
</div>

<p>Whether you are a customer already or just curious about our platform, we
invite you to reach out and schedule a meeting with us around the date of the
conference itself. Please reach out to info@urlscan.io to get this set up.</p>]]></content><author><name>urlscan.io</name></author><category term="announcement" /><summary type="html"><![CDATA[We are excited to be heading to PIVOTcon, where we will host a hands-on workshop focused on hunting phishing pages and infrastructure. If you are attending the conference, this is a great opportunity to connect with us and learn how to take make full use of our community and urlscan Pro platforms. Workshop: Uncovering Phishing InfrastructureA Hands-On Workshop with urlscan.io In this interactive workshop, we will show how analysts can transform a single suspicious URL into a deep investigation - uncovering entire phishing campaigns and the infrastructure behind them. Whether you’re new to urlscan.io or already using it in your workflow, this session is designed to give you practical techniques you can apply immediately.]]></summary></entry><entry><title type="html">Remote Access Scams</title><link href="https://urlscan.io/blog/2026/03/25/LiveSupportScams/" rel="alternate" type="text/html" title="Remote Access Scams" /><published>2026-03-25T16:09:00+01:00</published><updated>2026-03-25T16:09:00+01:00</updated><id>https://urlscan.io/blog/2026/03/25/LiveSupportScams</id><content type="html" xml:base="https://urlscan.io/blog/2026/03/25/LiveSupportScams/"><![CDATA[<p>Over the last couple of years, the urlscan Threat Research Team have observed repeated, near-identical “live support” webpages used to socially-engineer victims into installing legitimate remote access tools (AnyDesk, ConnectWise/ScreenConnect, TeamViewer, etc.). Threat actors pair these pages with cold calls impersonating banks, telcos, or crypto services and attempt to install screen sharing software. Once connected they take control of sessions and facilitate fraudulent transfers.</p>

<!--more-->

<hr />

<blockquote>
  <h5 id="vishing-the-fraudulent-practice-of-making-phone-calls-or-leaving-voice-messages-purporting-to-be-from-reputable-companies-in-order-to-induce-individuals-to-reveal-personal-information-such-as-bank-details-and-credit-card-numbers"><strong>Vishing</strong>: the fraudulent practice of making phone calls or leaving voice messages purporting to be from reputable companies in order to induce individuals to reveal personal information, such as bank details and credit card numbers.</h5>
</blockquote>

<h3 id="a-typical-attack-flow"><strong>A Typical Attack Flow</strong></h3>
<p>These campaigns usually begin with a cold call where the threat actor impersonates a trusted entity. The victim is directed to a single purpose landing page designed to look like a legitimate “live support” or “chat assistance” portal. From this page, the victim is instructed to download and install a legitimate remote access tool, most commonly AnyDesk, TeamViewer, or ConnectWise.</p>

<p>Once the software is installed, the threat actor requests the session code or in some cases guides the victim through installing a preconfigured client, enabling full remote access to the victim’s desktop.</p>

<p>At this stage, the threat actor may social-engineer the victim into logging into their bank or approving actions such as MFA prompts or payment confirmations. Because the activity originates from the victim’s own device and browser, fraudulent transactions often appear indistinguishable from normal user behavior.</p>

<ul>
  <li>The software involved is legitimate and widely used for real IT support, a common tactic among threat actors: because the remote desktop tool’s actions are not unusual, behavioral fingerprints are harder to spot.</li>
  <li>Financial transactions are executed through the genuine account holder’s account from their usual device and location, which means behavioral fraud detection systems often fail to trigger alerts.</li>
</ul>

<p><br /></p>

<h3 id="technical-clusters"><strong>Technical Clusters</strong></h3>

<p>Below are the structural clusters we use to group and hunt these setups. Each cluster includes the behavioral/structural signatures.</p>

<h4 id="indexconfigjs-cluster"><strong>‘index/config.js’ Cluster</strong></h4>

<p>Pages use a consistent combination of <code class="language-plaintext highlighter-rouge">index.js</code> and <code class="language-plaintext highlighter-rouge">config.js</code> filenames. <code class="language-plaintext highlighter-rouge">config.js</code> frequently contains direct or proxied links for the remote desktop installer.</p>

<p>The <code class="language-plaintext highlighter-rouge">index.js</code> file produces the download buttons for Windows and Mac based on the visitor’s browser. Notably, this cluster differentiates between the Windows and Mac operating systems for the download.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  <span class="k">import</span> <span class="p">{</span> <span class="nx">WIN_DOWNLOAD_LINK</span><span class="p">,</span> <span class="nx">MAC_DOWNLOAD_LINK</span> <span class="p">}</span> <span class="k">from</span> <span class="dl">'</span><span class="s1">../config.js</span><span class="dl">'</span><span class="p">;</span>

  <span class="nb">document</span><span class="p">.</span><span class="nx">addEventListener</span><span class="p">(</span><span class="dl">'</span><span class="s1">DOMContentLoaded</span><span class="dl">'</span><span class="p">,</span> <span class="kd">function</span> <span class="p">()</span> <span class="p">{</span>
    <span class="kd">const</span> <span class="nx">downloadButtons</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nx">querySelectorAll</span><span class="p">(</span><span class="dl">'</span><span class="s1">.dl-btn</span><span class="dl">'</span><span class="p">);</span>
    <span class="kd">const</span> <span class="nx">winIcons</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nx">querySelectorAll</span><span class="p">(</span><span class="dl">'</span><span class="s1">.dl-win</span><span class="dl">'</span><span class="p">);</span>
    <span class="kd">const</span> <span class="nx">macIcons</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nx">querySelectorAll</span><span class="p">(</span><span class="dl">'</span><span class="s1">.dl-mac</span><span class="dl">'</span><span class="p">);</span>
    <span class="kd">const</span> <span class="nx">isMac</span> <span class="o">=</span> <span class="nb">navigator</span><span class="p">.</span><span class="nx">platform</span><span class="p">.</span><span class="nx">startsWith</span><span class="p">(</span><span class="dl">'</span><span class="s1">Mac</span><span class="dl">'</span><span class="p">);</span>

    <span class="nx">winIcons</span><span class="p">.</span><span class="nx">forEach</span><span class="p">((</span><span class="nx">icon</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="nx">icon</span><span class="p">.</span><span class="nx">classList</span><span class="p">[</span><span class="nx">isMac</span> <span class="p">?</span> <span class="dl">'</span><span class="s1">add</span><span class="dl">'</span> <span class="p">:</span> <span class="dl">'</span><span class="s1">remove</span><span class="dl">'</span><span class="p">](</span><span class="dl">'</span><span class="s1">hidden</span><span class="dl">'</span><span class="p">));</span>
    <span class="nx">macIcons</span><span class="p">.</span><span class="nx">forEach</span><span class="p">((</span><span class="nx">icon</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="nx">icon</span><span class="p">.</span><span class="nx">classList</span><span class="p">[</span><span class="nx">isMac</span> <span class="p">?</span> <span class="dl">'</span><span class="s1">remove</span><span class="dl">'</span> <span class="p">:</span> <span class="dl">'</span><span class="s1">add</span><span class="dl">'</span><span class="p">](</span><span class="dl">'</span><span class="s1">hidden</span><span class="dl">'</span><span class="p">));</span>

    <span class="nx">downloadButtons</span><span class="p">.</span><span class="nx">forEach</span><span class="p">((</span><span class="nx">button</span><span class="p">)</span> <span class="o">=&gt;</span> <span class="p">{</span>
      <span class="nx">button</span><span class="p">.</span><span class="nx">addEventListener</span><span class="p">(</span><span class="dl">'</span><span class="s1">click</span><span class="dl">'</span><span class="p">,</span> <span class="kd">function</span> <span class="p">()</span> <span class="p">{</span>
        <span class="kd">const</span> <span class="nx">downloadLink</span> <span class="o">=</span> <span class="nx">isMac</span> <span class="p">?</span> <span class="nx">MAC_DOWNLOAD_LINK</span> <span class="p">:</span> <span class="nx">WIN_DOWNLOAD_LINK</span><span class="p">;</span>
        <span class="kd">const</span> <span class="nx">link</span> <span class="o">=</span> <span class="nb">document</span><span class="p">.</span><span class="nx">createElement</span><span class="p">(</span><span class="dl">'</span><span class="s1">a</span><span class="dl">'</span><span class="p">);</span>
        <span class="nx">link</span><span class="p">.</span><span class="nx">href</span> <span class="o">=</span> <span class="nx">downloadLink</span><span class="p">;</span>
        <span class="nb">document</span><span class="p">.</span><span class="nx">body</span><span class="p">.</span><span class="nx">appendChild</span><span class="p">(</span><span class="nx">link</span><span class="p">);</span>
        <span class="nx">link</span><span class="p">.</span><span class="nx">click</span><span class="p">();</span>
        <span class="nb">document</span><span class="p">.</span><span class="nx">body</span><span class="p">.</span><span class="nx">removeChild</span><span class="p">(</span><span class="nx">link</span><span class="p">);</span>
      <span class="p">});</span>
    <span class="p">});</span>
  <span class="p">});</span>

</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">config.js</code> file is a very simple file that points <code class="language-plaintext highlighter-rouge">index.js</code> to the executable download locations on the domain.</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  <span class="k">export</span> <span class="kd">const</span> <span class="nx">WIN_DOWNLOAD_LINK</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">/path/to/win.exe</span><span class="dl">'</span><span class="p">;</span>
  <span class="k">export</span> <span class="kd">const</span> <span class="nx">MAC_DOWNLOAD_LINK</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">/path/to/mac.dmg</span><span class="dl">'</span><span class="p">;</span>
</code></pre></div></div>

<p>The brands targeted by this cluster are predominantly banking and financial companies.</p>

<p>Observed brands within this cluster are:</p>

<ul>
  <li>BNZ - <a href="https://urlscan.io/result/019b6626-d717-7118-91f8-378b03838990">https://urlscan.io/result/019b6626-d717-7118-91f8-378b03838990</a></li>
  <li>Chase - <a href="https://urlscan.io/result/019acb3e-6ff2-7775-ab90-c35975f309f1">https://urlscan.io/result/019acb3e-6ff2-7775-ab90-c35975f309f1</a></li>
  <li>American Express - <a href="https://urlscan.io/result/019ac9f1-d79b-7002-8eb7-505a4b469e95">https://urlscan.io/result/019ac9f1-d79b-7002-8eb7-505a4b469e95</a></li>
  <li>Bank Of America - <a href="https://urlscan.io/result/019a7070-6aca-744a-aca2-2928bf7ad6a2">https://urlscan.io/result/019a7070-6aca-744a-aca2-2928bf7ad6a2</a></li>
  <li>PayPal - <a href="https://urlscan.io/result/0198c996-c9fa-71dc-a6c1-ac5821187a6b">https://urlscan.io/result/0198c996-c9fa-71dc-a6c1-ac5821187a6b</a></li>
</ul>

<p><br /></p>

<h4 id="osname-cluster"><strong>‘OSname’ Cluster</strong></h4>

<p>Pages in this cluster include a small JavaScript snippet that inspects <code class="language-plaintext highlighter-rouge">navigator.appVersion</code>, assigns <code class="language-plaintext highlighter-rouge">OSName</code>, and sets the <code class="language-plaintext highlighter-rouge">dlButton</code> text and <code class="language-plaintext highlighter-rouge">href</code> accordingly.</p>

<p>This cluster also looks for the operating system and then adapts the button text to the corresponding name</p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  <span class="nx">$</span><span class="p">(</span><span class="nb">document</span><span class="p">).</span><span class="nx">ready</span><span class="p">(</span><span class="kd">function</span> <span class="p">()</span> <span class="p">{</span>
      <span class="k">if</span> <span class="p">(</span><span class="nb">navigator</span><span class="p">.</span><span class="nx">appVersion</span><span class="p">.</span><span class="nx">indexOf</span><span class="p">(</span><span class="dl">"</span><span class="s2">Win</span><span class="dl">"</span><span class="p">)</span> <span class="o">!=</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="p">{</span>
          <span class="nx">OSName</span> <span class="o">=</span> <span class="dl">"</span><span class="s2">Windows</span><span class="dl">"</span><span class="p">;</span>
          <span class="nx">$</span><span class="p">(</span><span class="dl">"</span><span class="s2">#dlButton</span><span class="dl">"</span><span class="p">).</span><span class="nx">text</span><span class="p">(</span><span class="dl">"</span><span class="s2">Open Live chat on Windows</span><span class="dl">"</span><span class="p">);</span>
          <span class="nx">$</span><span class="p">(</span><span class="dl">"</span><span class="s2">#dlButton</span><span class="dl">"</span><span class="p">).</span><span class="nx">attr</span><span class="p">(</span><span class="dl">"</span><span class="s2">href</span><span class="dl">"</span><span class="p">,</span> <span class="dl">"</span><span class="s2">https://download.anydesk.com/AnyDesk.exe</span><span class="dl">"</span><span class="p">);</span>
      <span class="p">}</span> <span class="k">else</span> <span class="k">if</span> <span class="p">(</span><span class="nb">navigator</span><span class="p">.</span><span class="nx">appVersion</span><span class="p">.</span><span class="nx">indexOf</span><span class="p">(</span><span class="dl">"</span><span class="s2">Mac</span><span class="dl">"</span><span class="p">)</span> <span class="o">!=</span> <span class="o">-</span><span class="mi">1</span><span class="p">)</span> <span class="p">{</span>
          <span class="nx">OSName</span> <span class="o">=</span> <span class="dl">"</span><span class="s2">macOS</span><span class="dl">"</span><span class="p">;</span>
          <span class="nx">$</span><span class="p">(</span><span class="dl">"</span><span class="s2">#dlButton</span><span class="dl">"</span><span class="p">).</span><span class="nx">text</span><span class="p">(</span><span class="dl">"</span><span class="s2">Open Live chat on Mac</span><span class="dl">"</span><span class="p">);</span>
          <span class="nx">$</span><span class="p">(</span><span class="dl">"</span><span class="s2">#dlButton</span><span class="dl">"</span><span class="p">).</span><span class="nx">attr</span><span class="p">(</span><span class="dl">"</span><span class="s2">href</span><span class="dl">"</span><span class="p">,</span> <span class="dl">"</span><span class="s2">https://download.anydesk.com/anydesk.dmg</span><span class="dl">"</span><span class="p">);</span>
      <span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
          <span class="nx">OSName</span> <span class="o">=</span> <span class="dl">"</span><span class="s2">Unknown</span><span class="dl">"</span><span class="p">;</span>
          <span class="nx">$</span><span class="p">(</span><span class="dl">"</span><span class="s2">#dlButton</span><span class="dl">"</span><span class="p">).</span><span class="nx">text</span><span class="p">(</span><span class="dl">"</span><span class="s2">Not Available</span><span class="dl">"</span><span class="p">);</span>
          <span class="nx">$</span><span class="p">(</span><span class="dl">"</span><span class="s2">#dlButton</span><span class="dl">"</span><span class="p">).</span><span class="nx">attr</span><span class="p">(</span><span class="dl">"</span><span class="s2">href</span><span class="dl">"</span><span class="p">,</span> <span class="dl">"</span><span class="s2">javascript:void(0);</span><span class="dl">"</span><span class="p">);</span>
      <span class="p">}</span>
  <span class="p">});</span>
</code></pre></div></div>
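<p>Pages in this cluster can be hunted on urlscan using the distinctive button strings hard-coded in the snippet. A hedged example search query (the phrase is taken verbatim from the code above; <code class="language-plaintext highlighter-rouge">text.content</code> is urlscan’s full-text search field):</p>

```text
text.content:"Open Live chat on Windows"
```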

<p>A large number of brands are used as lures in this cluster. A sample set of brands from different verticals observed are:</p>

<ul>
  <li>Banking
    <ul>
      <li>Bank of Ireland - <a href="https://urlscan.io/result/019b35e2-a87f-7128-b36d-3188d8e3602e">https://urlscan.io/result/019b35e2-a87f-7128-b36d-3188d8e3602e</a></li>
      <li>Barclays - <a href="https://urlscan.io/result/019b03b2-c4d8-72c7-a7ac-bf7c3d98266a">https://urlscan.io/result/019b03b2-c4d8-72c7-a7ac-bf7c3d98266a</a></li>
      <li>Comerica - <a href="https://urlscan.io/result/019c0603-69c9-72f2-83dc-2b4eca2af2ff">https://urlscan.io/result/019c0603-69c9-72f2-83dc-2b4eca2af2ff</a></li>
      <li>Huntington Bank - <a href="https://urlscan.io/result/019c2a52-fcf6-7782-b256-e37502e2a27b">https://urlscan.io/result/019c2a52-fcf6-7782-b256-e37502e2a27b</a></li>
      <li>Santander - <a href="https://urlscan.io/result/019abb35-bda6-73b8-8785-0661216037d4">https://urlscan.io/result/019abb35-bda6-73b8-8785-0661216037d4</a></li>
      <li>US Bank - <a href="https://urlscan.io/result/019bb790-81e4-7348-b589-542b7272aeb4">https://urlscan.io/result/019bb790-81e4-7348-b589-542b7272aeb4</a></li>
      <li>Westpac - <a href="https://urlscan.io/result/019a99bb-8221-7225-81da-b3166cefddaa">https://urlscan.io/result/019a99bb-8221-7225-81da-b3166cefddaa</a></li>
    </ul>
  </li>
  <li>Crypto
    <ul>
      <li>Coinbase - <a href="https://urlscan.io/result/0196a048-a33a-72ef-8fe0-c25d651c1445">https://urlscan.io/result/0196a048-a33a-72ef-8fe0-c25d651c1445</a></li>
      <li>Ledger - <a href="https://urlscan.io/result/019c2496-1c1a-7561-aadd-e986d3cff6cf">https://urlscan.io/result/019c2496-1c1a-7561-aadd-e986d3cff6cf</a></li>
    </ul>
  </li>
  <li>Online
    <ul>
      <li>Haveibeenpwned - <a href="https://urlscan.io/result/0199a98f-8446-74ba-b61d-f539f074f786">https://urlscan.io/result/0199a98f-8446-74ba-b61d-f539f074f786</a></li>
    </ul>
  </li>
</ul>

<p><br /></p>

<h4 id="direct-link-cluster"><strong>Direct-link Cluster</strong></h4>

<p>The simplest of the clusters uses static HTML templates linking directly to the official AnyDesk download page on <code class="language-plaintext highlighter-rouge">download.anydesk.com</code>. Because these pages reuse the legitimate download links verbatim, urlscan makes it trivial to spot copycat sites abusing them.</p>

<p>This cluster predominantly targets financial institutions. A small selection of the associated brands:</p>

<ul>
  <li>Hampden Bank - <a href="https://urlscan.io/result/01967c63-9367-7676-aeff-b9d13ee681b8">https://urlscan.io/result/01967c63-9367-7676-aeff-b9d13ee681b8</a></li>
  <li>Kiwibank - <a href="https://urlscan.io/result/019abe17-c487-70f9-9cdf-89e2ebb51665">https://urlscan.io/result/019abe17-c487-70f9-9cdf-89e2ebb51665</a></li>
  <li>Metro Bank - <a href="https://urlscan.io/result/4f11d391-ab70-47b6-85ed-dfd29e6f0e58">https://urlscan.io/result/4f11d391-ab70-47b6-85ed-dfd29e6f0e58</a></li>
  <li>Revolut - <a href="https://urlscan.io/result/0195f1f0-57c3-754e-8e4f-c606df805b86">https://urlscan.io/result/0195f1f0-57c3-754e-8e4f-c606df805b86</a></li>
  <li>Scotiabank - <a href="https://urlscan.io/result/0195a9a1-8d97-7cca-9171-cb192e92ac64">https://urlscan.io/result/0195a9a1-8d97-7cca-9171-cb192e92ac64</a></li>
  <li>Westpac - <a href="https://urlscan.io/result/019bd7e8-de31-735c-8de2-2313d0a8a652">https://urlscan.io/result/019bd7e8-de31-735c-8de2-2313d0a8a652</a></li>
</ul>

<p>It should be noted that the OSname and Direct-link clusters share very similar code. This similarity may indicate a newer generation of the kit or the emergence of a splinter group. However, this does not preclude the presence of two distinct clusters, each with its own unique fingerprint.</p>

<p><br /></p>

<h4 id="the-killer-cluster"><strong>The ‘killer’ Cluster</strong></h4>

<p>The pages in this cluster implement an initial anti-bot filtering step using geolocation checks and a custom allowlist. These pages additionally query a backend, often Supabase, before loading the final “support” page. <code class="language-plaintext highlighter-rouge">config.js</code> usually exports constants such as <code class="language-plaintext highlighter-rouge">ENTRY_FILE</code>, <code class="language-plaintext highlighter-rouge">ACCESS_KEY</code>, <code class="language-plaintext highlighter-rouge">SUPABASE_URL</code> and <code class="language-plaintext highlighter-rouge">SUPABASE_KEY</code>. The token names and strings often reuse terms like <code class="language-plaintext highlighter-rouge">killer</code> or brand shortcodes (e.g. <code class="language-plaintext highlighter-rouge">anzkiller</code>).</p>

<p><strong>Representative <code class="language-plaintext highlighter-rouge">config.js</code> file:</strong></p>

<div class="language-javascript highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  <span class="k">export</span> <span class="kd">const</span> <span class="nx">ENTRY_FILE</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">/anz/index.html</span><span class="dl">'</span><span class="p">;</span>
  <span class="k">export</span> <span class="kd">const</span> <span class="nx">ACCESS_KEY</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">anzkiller</span><span class="dl">'</span><span class="p">;</span>
  <span class="k">export</span> <span class="kd">const</span> <span class="nx">SUPABASE_URL</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">https://xnixjkzqyaynqblknxcz.supabase.co</span><span class="dl">'</span><span class="p">;</span>
  <span class="k">export</span> <span class="kd">const</span> <span class="nx">SUPABASE_KEY</span> <span class="o">=</span> <span class="dl">'</span><span class="s1">eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...</span><span class="dl">'</span><span class="p">;</span>
</code></pre></div></div>

<p>Multiple searches can be used to find the pages associated with this cluster. Searches can be refined to target the initial landing pages (which, depending on how the scan was performed, may already have been redirected) or to match the page content of the anti-bot page. Due to the way the redirection logic is built, scans can still be hunted and matched.</p>

<ul>
  <li>Landing page scan - ANZ Bank - <a href="https://urlscan.io/result/019b062e-5fad-77cd-b61c-1ab46e05aa1e">https://urlscan.io/result/019b062e-5fad-77cd-b61c-1ab46e05aa1e</a></li>
  <li>Direct scans of the subpage - Virgin Money - <a href="https://urlscan.io/result/019b271e-7e19-70ae-8405-080033ee9bd7">https://urlscan.io/result/019b271e-7e19-70ae-8405-080033ee9bd7</a></li>
</ul>

<p>The observed brands are predominantly geographically targeted, with Australia and the United Kingdom affected the most. A sample of brands identified in this cluster:</p>

<ul>
  <li>ANZ Bank - <a href="https://urlscan.io/result/01985cdc-a2fb-75bb-91fe-638c064fd650">https://urlscan.io/result/01985cdc-a2fb-75bb-91fe-638c064fd650</a></li>
  <li>NAB - <a href="https://urlscan.io/result/0199ff5d-c8d8-748d-ac7b-ee49192c61f9">https://urlscan.io/result/0199ff5d-c8d8-748d-ac7b-ee49192c61f9</a></li>
  <li>Westpac - <a href="https://urlscan.io/result/01984e53-3838-72da-a6d0-a0b92bc9c69b">https://urlscan.io/result/01984e53-3838-72da-a6d0-a0b92bc9c69b</a></li>
  <li>Bank of Scotland - <a href="https://urlscan.io/result/01961a0e-c8f2-723a-aae1-c70c024ba326">https://urlscan.io/result/01961a0e-c8f2-723a-aae1-c70c024ba326</a></li>
  <li>Lloyds - <a href="https://urlscan.io/result/d2f44a43-86a2-4b3e-9d2e-d506dcb4d2de">https://urlscan.io/result/d2f44a43-86a2-4b3e-9d2e-d506dcb4d2de</a></li>
  <li>Santander - <a href="https://urlscan.io/result/01957ac3-261e-7000-b553-60523a6504c4">https://urlscan.io/result/01957ac3-261e-7000-b553-60523a6504c4</a></li>
</ul>

<p><br /></p>

<h3 id="conclusion"><strong>Conclusion</strong></h3>

<p>The analysis of live support campaigns reveals a persistent threat model centered on social engineering and the abuse of legitimate remote access tools such as AnyDesk and TeamViewer. This approach allows threat actors to bypass traditional fraud detection mechanisms by initiating fraudulent transactions directly from the victim’s own, trusted device and location.</p>

<p>The campaigns exhibit a scalable structure, identifiable through four distinct technical clusters - the <code class="language-plaintext highlighter-rouge">index/config.js</code> cluster, the <code class="language-plaintext highlighter-rouge">OSName</code> cluster, the <code class="language-plaintext highlighter-rouge">Direct-link cluster</code>, and the regionally-focused <code class="language-plaintext highlighter-rouge">"killer"</code> cluster - which provide actionable signatures for defense. While various brands are impersonated, the core objective remains financially motivated, with a heavy emphasis on targeting banking and financial institutions, particularly in regions like the US, Australia and the UK.</p>]]></content><author><name>urlscan Threat Research Team</name></author><category term="australia" /><category term="banking" /><category term="crypto" /><category term="europe" /><category term="phishing" /><category term="phishing-kit" /><category term="research" /><category term="telecommunication" /><category term="UK" /><category term="US" /><summary type="html"><![CDATA[Over the last couple of years, the urlscan Threat Research Team have observed repeated, near-identical “live support” webpages used to socially-engineer victims into installing legitimate remote access tools (AnyDesk, ConnectWise/ScreenConnect, TeamViewer, etc.). Threat actors pair these pages with cold calls impersonating banks, telcos, or crypto services and attempt to install screen sharing software. 
Once connected they take control of sessions and facilitate fraudulent transfers.]]></summary></entry><entry><title type="html">Fast checks using the new Malicious Lookup API</title><link href="https://urlscan.io/blog/2026/03/24/malicious-api/" rel="alternate" type="text/html" title="Fast checks using the new Malicious Lookup API" /><published>2026-03-24T00:00:00+01:00</published><updated>2026-03-24T00:00:00+01:00</updated><id>https://urlscan.io/blog/2026/03/24/malicious-api</id><content type="html" xml:base="https://urlscan.io/blog/2026/03/24/malicious-api/"><![CDATA[<p>Today we are announcing a new API endpoint for looking up observables on
urlscan.io: The <strong>Malicious Lookup API</strong>. This new endpoint enables
fast checks against our database of malicious websites and is meant to answer a
simple question:</p>

<blockquote>
  <p>Has this hostname/domain/IP/URL been observed hosting malicious content?</p>
</blockquote>

<p>The API answers this question efficiently with predictable performance.</p>

<!--more-->

<h3 id="background">Background</h3>

<p>A common use-case for customers of the urlscan platform is to check historical
scan results to determine whether a particular item had been seen in connection
with malicious activity. This type of lookup was always possible using the
Search API, but was slow and relatively expensive to run across 10 years of
history and billions of scan results. The new Malicious Lookup API was created
to answer that simple question more efficiently.</p>

<p>The API can be used as a cheap pre-check before performing more expensive
actions: if a website has already (or recently) been seen in connection with
malicious activity, it may not need to be scanned again.</p>

<h3 id="api-reference">API Reference</h3>

<p>The Malicious Lookup API is available via the following endpoint:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET /api/v1/malicious/{type}/{value}
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">type</code> parameter selects what kind of observable to query:</p>

<ul>
  <li><strong><code class="language-plaintext highlighter-rouge">ip</code></strong> – Look up an IP address (e.g. <code class="language-plaintext highlighter-rouge">192.0.2.1</code>)</li>
  <li><strong><code class="language-plaintext highlighter-rouge">hostname</code></strong> – Look up an exact hostname match (e.g. <code class="language-plaintext highlighter-rouge">www.example.com</code>)</li>
  <li><strong><code class="language-plaintext highlighter-rouge">domain</code></strong> – Look up an apex domain, covering all subdomains (e.g. <code class="language-plaintext highlighter-rouge">example.com</code>)</li>
  <li><strong><code class="language-plaintext highlighter-rouge">url</code></strong> – Look up an exact page URL (URL-encoded, e.g. <code class="language-plaintext highlighter-rouge">https%3A%2F%2Fexample.com%2Fpath</code>)</li>
</ul>

<p><em>Note</em>: URLs are canonicalised automatically: The protocol and query parameters are discarded before running the lookup.</p>
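<p>The URL-encoding of the <code class="language-plaintext highlighter-rouge">url</code> path parameter can be reproduced with Python’s standard library; the encoded form below matches the example given above:</p>

```python
from urllib.parse import quote

# Percent-encode the full URL so it can be used as a path parameter,
# as in /api/v1/malicious/url/{value}. safe="" also encodes "/" and ":".
page_url = "https://example.com/path"
encoded = quote(page_url, safe="")
print(encoded)  # https%3A%2F%2Fexample.com%2Fpath
```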

<p>The response includes the observable, its type, the number of malicious scan
results it was seen in, and when it was first and last seen:</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
    </span><span class="nl">"observable"</span><span class="p">:</span><span class="w"> </span><span class="s2">"testsafebrowsing.appspot.com"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hostname"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"count"</span><span class="p">:</span><span class="w"> </span><span class="mi">2445</span><span class="p">,</span><span class="w">
    </span><span class="nl">"firstSeen"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2023-05-22T06:17:07.535Z"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"lastSeen"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2026-03-23T10:49:14.046Z"</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<h3 id="curl-example">cURL example</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl <span class="nt">-X</span> GET <span class="se">\</span>
  <span class="s1">'https://urlscan.io/api/v1/malicious/hostname/testsafebrowsing.appspot.com'</span> <span class="se">\</span>
  <span class="nt">-H</span> <span class="s1">'api-key: YOUR_API_KEY_HERE'</span>
</code></pre></div></div>
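<p>The pre-check pattern described in the Background section can be sketched as follows. The decision logic and the 30-day freshness window are illustrative assumptions, not part of the API; the response fields match the example above:</p>

```python
from datetime import datetime, timedelta, timezone

def should_rescan(lookup, max_age_days=30):
    """Return True when a fresh scan is still worthwhile.

    `lookup` is the parsed JSON of a Malicious Lookup API response, or
    None when the observable was not found. The 30-day freshness window
    is an illustrative assumption, not something the API prescribes.
    """
    if lookup is None or lookup.get("count", 0) == 0:
        return True  # never seen hosting malicious content: scan it
    last_seen = datetime.fromisoformat(lookup["lastSeen"].replace("Z", "+00:00"))
    # Recently confirmed malicious: skip the expensive rescan.
    return datetime.now(timezone.utc) - last_seen > timedelta(days=max_age_days)

# Sample response copied from the documentation above:
sample = {
    "observable": "testsafebrowsing.appspot.com",
    "type": "hostname",
    "count": 2445,
    "firstSeen": "2023-05-22T06:17:07.535Z",
    "lastSeen": "2026-03-23T10:49:14.046Z",
}
decision = should_rescan(sample)
```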

<h3 id="about-the-urlscan-classification-approach">About the urlscan classification approach</h3>

<p>A website will be flagged as <em>malicious</em> by urlscan under the following conditions:</p>

<ul>
  <li>The website is hosting what appears to be phishing or brand impersonation.</li>
  <li>The website is not hosted on a legitimate domain for whatever brand or organisation it claims to represent.</li>
</ul>

<p>urlscan does not flag hostnames or domains as malicious purely based on their
domain name or community verdicts. Our main focus is the <em>content</em> of these
websites. As a result, even legitimate domains and hostnames will be flagged
when they host malicious content. A platform like Google Docs on
<code class="language-plaintext highlighter-rouge">docs.google.com</code> could appear as malicious if there are some pages
on that hostname which are hosting malicious content.</p>

<h3 id="availability">Availability</h3>

<p>This endpoint is available to <strong>urlscan Pro</strong> customers. For full details,
see the <a href="https://docs.urlscan.io/apis/urlscan-openapi/malicious">API documentation</a>.</p>]]></content><author><name>urlscan.io</name></author><category term="changelog" /><category term="product" /><category term="api" /><summary type="html"><![CDATA[Today we are announcing a new API endpoint for looking up observables on urlscan.io: The Malicious Lookup API. This new endpoint enables fast checks against our database of malicious websites and is meant to answer a simple question: Has this hostname/domain/IP/URL been observed hosting malicious content? The API answers this question efficiently with predictable performance.]]></summary></entry><entry><title type="html">Brand AI and ML verdicts improved, new AI summaries</title><link href="https://urlscan.io/blog/2026/03/23/brand-ai-ml-verdicts-ai-summaries/" rel="alternate" type="text/html" title="Brand AI and ML verdicts improved, new AI summaries" /><published>2026-03-23T16:29:00+01:00</published><updated>2026-03-23T16:29:00+01:00</updated><id>https://urlscan.io/blog/2026/03/23/brand-ai-ml-verdicts-ai-summaries</id><content type="html" xml:base="https://urlscan.io/blog/2026/03/23/brand-ai-ml-verdicts-ai-summaries/"><![CDATA[<p>We have made significant improvements to our core AI features on the urlscan Pro
platform: <strong>Brand AI</strong> allows users to search for brand abuse using the visual
representation of a website, <strong>ML verdicts</strong> deliver a score for the
trustworthiness of a website, and the new <strong>AI summaries</strong> help users understand
the content of a website in a foreign language.</p>

<div class="row">
<div class="col col-md-12">
<img class="post" src="/blog/assets/images/brandai2-header.png" title="Searching for brand names using Brand AI on urlscan Pro" alt="Searching for brand names using Brand AI on urlscan Pro" />
</div>
</div>

<!--more-->

<h3 id="brand-ai-20">Brand AI 2.0</h3>

<p>We are now using an improved model for our Brand AI feature. Brand AI
dynamically recognizes the brand name of a website and makes it available as a
searchable attribute in the <code class="language-plaintext highlighter-rouge">visible.brandname</code> field in the scans database.
Using the visible brand name determined by Brand AI, customers can hunt for
pages related to their brand more effectively than using pure text-based
signals (such as <code class="language-plaintext highlighter-rouge">text.content</code> or <code class="language-plaintext highlighter-rouge">page.title</code>).</p>

<p>Searching for brand names using Brand AI can be as easy as:</p>

<div class="language-sparql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">visible</span><span class="p">.</span><span class="nn">brandname</span><span class="o">:</span><span class="ss">ezpass</span><span class="w">
</span></code></pre></div></div>

<p>Using the <code class="language-plaintext highlighter-rouge">.keyword</code> field for exact matches (no additional words):</p>

<div class="language-sparql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">visible</span><span class="p">.</span><span class="err">brandname</span><span class="p">.</span><span class="nn">keyword</span><span class="o">:</span><span class="ss">irs</span><span class="w">
</span></code></pre></div></div>

<p>Brand AI allows urlscan and our customers to quickly detect new phishing and
brand-impersonation campaigns which deviate from previous templates and
branding.</p>

<div class="row">
<div class="col col-md-12">
<img class="post" src="/blog/assets/images/brandai2-search.png" title="Searching for brand names using Brand AI on urlscan Pro" alt="Searching for brand names using Brand AI on urlscan Pro" />
<p class="caption text-center">Searching for brand names using Brand AI on urlscan Pro</p>
</div>
</div>

<h3 id="ml-verdicts-20">ML Verdicts 2.0</h3>

<p>Our ML classifier has been retrained from scratch using a refined training data
selection regime and additional classification features. The mis-classification
rate should now be much lower. A side-effect of this new training run is that
many newly observed domains and hostnames will be assigned a more aggressive
score.</p>

<p>The ML verdicts can be queried using the <code class="language-plaintext highlighter-rouge">verdicts.engines.score</code> field, with
values ranging from -100 (benign) to 100 (malicious). The ML verdicts do not
automatically trigger detections using our traditional verified detections
available via the <code class="language-plaintext highlighter-rouge">brand.key</code> fields.</p>

<div class="row">
<div class="col col-md-12">
<img class="post" src="/blog/assets/images/brandai2-show.png" title="Brand AI and ML Verdict on a scan result page" alt="Brand AI and ML Verdict on a scan result page" />
<p class="caption text-center">Brand AI and ML Verdict on a scan result page</p>
</div>
</div>

<p>Brand AI and ML verdicts can be combined for targeted hunting:</p>

<div class="language-sparql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">visible</span><span class="p">.</span><span class="nn">brandname</span><span class="o">:</span><span class="ss">mygov</span><span class="w"> </span><span class="err">AND</span><span class="w"> </span><span class="err">verdicts</span><span class="p">.</span><span class="err">engines</span><span class="p">.</span><span class="nn">score</span><span class="o">:&gt;</span><span class="mi">80</span><span class="w">
</span></code></pre></div></div>

<h3 id="ai-summaries--translations">AI Summaries &amp; Translations</h3>

<p>We have launched an experimental new feature called AI Summaries and AI
Translations. These will summarise and translate the content of the scan
screenshot. The goal is to allow our customers to quickly understand the
content of a website delivered in a foreign language without
resorting to copy-pasting text into third-party platforms. This is a common
requirement when investigating campaigns or malicious infrastructure and
stumbling on a website in a foreign language.</p>

<div class="row">
<div class="col col-md-12">
<img class="post" src="/blog/assets/images/ai-summary.png" title="AI Summary and Translation on a scan result page" alt="AI Summary and Translation on a scan result page" />
<p class="caption text-center">AI Summary and Translation on a scan result page</p>
</div>
</div>]]></content><author><name>urlscan.io</name></author><category term="changelog" /><category term="product" /><summary type="html"><![CDATA[We have made significant improvements to our core AI features on the urlscan Pro platform: Brand AI allows users search for brand abuse using the visual representation of a website, ML verdicts deliver a score for the trustworthiness of a website and the new AI summaries help users understand the content of a website in a foreign language.]]></summary></entry><entry><title type="html">urlscan API: Mandatory authentication starting May 4th</title><link href="https://urlscan.io/blog/2026/03/18/api-auth-required/" rel="alternate" type="text/html" title="urlscan API: Mandatory authentication starting May 4th" /><published>2026-03-18T12:48:00+01:00</published><updated>2026-03-18T12:48:00+01:00</updated><id>https://urlscan.io/blog/2026/03/18/api-auth-required</id><content type="html" xml:base="https://urlscan.io/blog/2026/03/18/api-auth-required/"><![CDATA[<p>Starting <em>May 4th, 2026</em> some of the publicly accessible API endpoints on
urlscan.io will only respond to authenticated requests. An authenticated
request is a request with a valid API key or by a signed-in user. The API
endpoints affected are:</p>

<ul>
  <li><code class="language-plaintext highlighter-rouge">GET /api/v1/result/{scanId}/</code></li>
  <li><code class="language-plaintext highlighter-rouge">GET /dom/{scanId}/</code></li>
  <li><code class="language-plaintext highlighter-rouge">GET /responses/{fileHash}/</code></li>
</ul>

<p><strong>Make sure all of your API integrations are sending the
urlscan API key via the appropriate <code class="language-plaintext highlighter-rouge">api-key</code> HTTP request header today.</strong></p>

<p><strong>Make sure to send API key headers for all requests against urlscan.io, even
for API paths that do not require authentication today.</strong></p>

<h3 id="api-calls">API Calls</h3>

<p>This is what an authenticated API call looks like:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>curl <span class="nt">-i</span> <span class="nt">-X</span> GET <span class="se">\</span>
  <span class="s1">'https://urlscan.io/api/v1/result/{scanId}/'</span> <span class="se">\</span>
  <span class="nt">-H</span> <span class="s1">'api-key: YOUR_API_KEY_HERE'</span>
</code></pre></div></div>

<p>For more details please check <a href="https://docs.urlscan.io/apis/urlscan-openapi">the API docs</a>.</p>
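<p>For Python-based integrations, the same header can be attached using only the standard library; the <code class="language-plaintext highlighter-rouge">api-key</code> header name is taken from the cURL example above, while the scan UUID below is a hypothetical placeholder:</p>

```python
import urllib.request

# Attach the API key to every request against urlscan.io, even for
# endpoints that do not strictly require authentication yet.
API_KEY = "YOUR_API_KEY_HERE"
scan_id = "0191e612-example-scan-uuid"  # hypothetical placeholder
req = urllib.request.Request(
    f"https://urlscan.io/api/v1/result/{scan_id}/",
    headers={"api-key": API_KEY},
)
# urllib.request.urlopen(req) would now perform the authenticated call.
```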

<h3 id="background">Background</h3>

<p>These changes are necessary to curb abuse of our platform and ensure its
stability and availability for legitimate users.</p>]]></content><author><name>urlscan.io</name></author><category term="changelog" /><category term="product" /><category term="api" /><summary type="html"><![CDATA[Starting May 4th, 2026 some of the publicly accessible API endpoints on urlscan.io will only respond to authenticated requests. An authenticated request is a request with a valid API key or by a signed-in user. The API endpoints affected are: GET /api/v1/result/{scanId}/ GET /dom/{scanId}/ GET /responses/{fileHash}/ Make sure all of your API integrations are sending the urlscan API key via the appropriate api-key HTTP request header today. Make sure to send API key headers for all requests against urlscan.io, even for API paths that do not require authentication today. API Calls This is what an authenticated API call looks like: curl -i -X GET \ 'https://urlscan.io/api/v1/result/{scanId}/' \ -H 'api-key: YOUR_API_KEY_HERE' For more details please check the API docs. Background These changes are necessary to curb abuse of our platform and ensure its stability and availability for legitimate users.]]></summary></entry><entry><title type="html">Introducing Data Dumps: Bulk Download of urlscan Scan Data</title><link href="https://urlscan.io/blog/2026/03/12/datadump/" rel="alternate" type="text/html" title="Introducing Data Dumps: Bulk Download of urlscan Scan Data" /><published>2026-03-12T00:00:00+01:00</published><updated>2026-03-12T00:00:00+01:00</updated><id>https://urlscan.io/blog/2026/03/12/datadump</id><content type="html" xml:base="https://urlscan.io/blog/2026/03/12/datadump/"><![CDATA[<p>We are excited to announce the launch of <strong>Data Dumps</strong>, a new feature that
allows customers to bulk-download scan data from urlscan.io without making
individual API calls for each result.</p>

<!--more-->

<p>Data Dumps provide pre-built, gzip-compressed JSONL files and tar archives containing the
results of all <strong>public and unlisted</strong> scans (private scans are not included).
Files are organised by time window and data type, and are available at
per-minute, per-hour, and per-day granularity — so you can pick exactly the
slice of data you need.</p>

<h3 id="why-data-dumps">Why Data Dumps?</h3>

<p>Until now, customers who needed large volumes of scan result data had to call
the Result API individually for every scan UUID — often resulting in millions
of API calls per day. Data Dumps change this fundamentally:</p>

<ul>
  <li><strong>Drastically reduced quota usage</strong> — download an entire day’s worth of
scan results in a single request instead of hundreds of thousands.</li>
  <li><strong>Higher throughput</strong> — retrieve large datasets as pre-built, compressed
files over high-bandwidth connections rather than through rate-limited API
endpoints.</li>
  <li><strong>Simpler pipelines</strong> — integrate scan data into your own data warehouse or
processing pipeline by periodically fetching a single file per time window.</li>
  <li><strong>Multiple data types</strong> — dumps are available for API results, search
results, screenshots, and DOM snapshots.</li>
</ul>

<h3 id="whats-included">What’s included?</h3>

<p>The available data types are:</p>

<table>
  <thead>
    <tr>
      <th>Type</th>
      <th>Format</th>
      <th>Description</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">api</code></td>
      <td><code class="language-plaintext highlighter-rouge">.gz</code> (JSONL)</td>
      <td>Full scan result (equivalent to the Result API)</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">search</code></td>
      <td><code class="language-plaintext highlighter-rouge">.gz</code> (JSONL)</td>
      <td>Search API result metadata</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">screenshot</code></td>
      <td><code class="language-plaintext highlighter-rouge">.tar.gz</code></td>
      <td>Screenshot images</td>
    </tr>
    <tr>
      <td><code class="language-plaintext highlighter-rouge">dom</code></td>
      <td><code class="language-plaintext highlighter-rouge">.tar.gz</code></td>
      <td>DOM snapshots</td>
    </tr>
  </tbody>
</table>

<p>Only <strong>public</strong> and <strong>unlisted</strong> scans are included in data dumps. Private
scans are never exported. Dumps are available for a rolling <strong>7-day</strong> window,
so you can backfill up to 7 days of data at any time.</p>

<h3 id="availability">Availability</h3>

<p>Data Dumps are available today for customers on the <strong>Ultimate</strong> and
<strong>Enterprise</strong> plans. You can browse and download dump files directly from the
<a href="https://pro.urlscan.io/datadumps">urlscan Pro Data Dumps page</a>, which also
shows the available time windows, file sizes, and timestamps. The page also
provides the corresponding API URL for each file so you can integrate downloads
into your own tooling.</p>

<h3 id="using-data-dumps-with-urlscan-cli">Using Data Dumps with urlscan-cli</h3>

<p>Data Dump support is included in <strong>urlscan-cli v0.0.5</strong> and later. The
<code class="language-plaintext highlighter-rouge">urlscan pro datadump</code> command provides two sub-commands: <code class="language-plaintext highlighter-rouge">list</code> and
<code class="language-plaintext highlighter-rouge">download</code>.</p>

<p>The path format is <code class="language-plaintext highlighter-rouge">&lt;time-window&gt;/&lt;file-type&gt;/&lt;date&gt;</code>, where:</p>
<ul>
  <li><code class="language-plaintext highlighter-rouge">time-window</code> is <code class="language-plaintext highlighter-rouge">days</code>, <code class="language-plaintext highlighter-rouge">hours</code>, or <code class="language-plaintext highlighter-rouge">minutes</code></li>
  <li><code class="language-plaintext highlighter-rouge">file-type</code> is <code class="language-plaintext highlighter-rouge">api</code>, <code class="language-plaintext highlighter-rouge">search</code>, <code class="language-plaintext highlighter-rouge">screenshot</code>, or <code class="language-plaintext highlighter-rouge">dom</code></li>
  <li><code class="language-plaintext highlighter-rouge">date</code> is an optional date in <code class="language-plaintext highlighter-rouge">YYYYMMDD</code> format</li>
</ul>
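<p>To illustrate the path format, here is a small helper (hypothetical, not part of urlscan-cli or urlscan-python) that assembles and validates these paths:</p>

```python
# Illustrative helper (hypothetical, not part of any urlscan library):
# assemble <time-window>/<file-type>/<date> paths for Data Dumps.

VALID_WINDOWS = {"days", "hours", "minutes"}
VALID_TYPES = {"api", "search", "screenshot", "dom"}

def dump_path(window, file_type, date=None):
    """Return a dump listing path such as 'hours/api/20260301'."""
    if window not in VALID_WINDOWS:
        raise ValueError("unknown time window: %s" % window)
    if file_type not in VALID_TYPES:
        raise ValueError("unknown file type: %s" % file_type)
    if date is not None and (len(date) != 8 or not date.isdigit()):
        raise ValueError("date must be in YYYYMMDD format")
    parts = [window, file_type] + ([date] if date else [])
    return "/".join(parts)

print(dump_path("days", "api"))               # days/api
print(dump_path("hours", "api", "20260301"))  # hours/api/20260301
```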

<p><strong>List available dump files:</strong></p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># List all available daily API dumps</span>
urlscan pro datadump list days/api

<span class="c"># List hourly API dumps for a specific day</span>
urlscan pro datadump list hours/api/20260301
</code></pre></div></div>

<p><strong>Download a specific file:</strong></p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Download a daily API dump</span>
urlscan pro datadump download days/api/20260301.gz

<span class="c"># Download a specific hourly dump</span>
urlscan pro datadump download hours/api/20260301/20260301-14.gz <span class="nt">--extract</span>
</code></pre></div></div>

<p><strong>Use <code class="language-plaintext highlighter-rouge">--follow</code> to continuously sync all available files</strong> (up to the last 7
days). The <code class="language-plaintext highlighter-rouge">--follow</code> flag memoises which files have already been downloaded,
so it is safe to run periodically as a cron job — only new files will be
fetched:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># Download all available hourly API dumps (last 7 days)</span>
urlscan pro datadump download hours/api/ <span class="nt">--follow</span>

<span class="c"># Download all hourly DOM dumps for a specific day</span>
urlscan pro datadump download hours/dom/20260301/ <span class="nt">--follow</span>
</code></pre></div></div>

<p>Files can be saved to a specific directory with <code class="language-plaintext highlighter-rouge">--directory-prefix</code> / <code class="language-plaintext highlighter-rouge">-P</code>
and automatically extracted with <code class="language-plaintext highlighter-rouge">--extract</code> / <code class="language-plaintext highlighter-rouge">-x</code>.</p>

<p>Full CLI reference: <a href="https://github.com/urlscan/urlscan-cli/blob/main/docs/urlscan_pro_datadump.md">urlscan pro datadump</a></p>

<h3 id="using-data-dumps-with-urlscan-python">Using Data Dumps with urlscan-python</h3>

<p>Data Dump support is included in <strong>urlscan-python v0.0.2</strong> and later via the
<code class="language-plaintext highlighter-rouge">Pro</code> client’s <code class="language-plaintext highlighter-rouge">datadump</code> attribute.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">os</span>
<span class="kn">from</span> <span class="nn">urlscan</span> <span class="kn">import</span> <span class="n">Pro</span>
<span class="kn">from</span> <span class="nn">urlscan.utils</span> <span class="kn">import</span> <span class="n">extract</span>

<span class="k">with</span> <span class="n">Pro</span><span class="p">(</span><span class="s">"&lt;your_api_key&gt;"</span><span class="p">)</span> <span class="k">as</span> <span class="n">pro</span><span class="p">:</span>
    <span class="c1"># List hourly API dump files for a specific day
</span>    <span class="n">res</span> <span class="o">=</span> <span class="n">pro</span><span class="p">.</span><span class="n">datadump</span><span class="p">.</span><span class="n">get_list</span><span class="p">(</span><span class="s">"hours/api/20260301/"</span><span class="p">)</span>

    <span class="c1"># Download and extract each file
</span>    <span class="k">for</span> <span class="n">f</span> <span class="ow">in</span> <span class="n">res</span><span class="p">[</span><span class="s">"files"</span><span class="p">]:</span>
        <span class="n">path</span> <span class="o">=</span> <span class="n">f</span><span class="p">[</span><span class="s">"path"</span><span class="p">]</span>
        <span class="n">basename</span> <span class="o">=</span> <span class="n">os</span><span class="p">.</span><span class="n">path</span><span class="p">.</span><span class="n">basename</span><span class="p">(</span><span class="n">path</span><span class="p">)</span>

        <span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">basename</span><span class="p">,</span> <span class="s">"wb"</span><span class="p">)</span> <span class="k">as</span> <span class="nb">file</span><span class="p">:</span>
            <span class="n">pro</span><span class="p">.</span><span class="n">datadump</span><span class="p">.</span><span class="n">download_file</span><span class="p">(</span><span class="n">path</span><span class="p">,</span> <span class="nb">file</span><span class="o">=</span><span class="nb">file</span><span class="p">)</span>

        <span class="n">extract</span><span class="p">(</span><span class="n">basename</span><span class="p">,</span> <span class="s">"/tmp"</span><span class="p">)</span>
</code></pre></div></div>

<p>The <code class="language-plaintext highlighter-rouge">get_list</code> method accepts the same <code class="language-plaintext highlighter-rouge">&lt;time-window&gt;/&lt;file-type&gt;/&lt;date&gt;</code> path
format as the CLI. The <code class="language-plaintext highlighter-rouge">extract</code> utility from <code class="language-plaintext highlighter-rouge">urlscan.utils</code> handles
decompression of the downloaded <code class="language-plaintext highlighter-rouge">.gz</code> files.</p>
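<p>Once downloaded, the <code>api</code> and <code>search</code> dumps are plain gzip-compressed JSONL, so they can also be streamed with the Python standard library alone. A minimal sketch (the file name and record shape below are illustrative, not the actual dump schema):</p>

```python
import gzip
import json
import os
import tempfile

def iter_dump(path):
    """Yield one scan-result dict per line of a gzip-compressed JSONL dump."""
    with gzip.open(path, "rt", encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

# Demo: write a tiny two-line sample dump, then stream it back.
path = os.path.join(tempfile.gettempdir(), "sample-dump.gz")
with gzip.open(path, "wt", encoding="utf-8") as fh:
    fh.write(json.dumps({"task": {"url": "https://example.com/"}}) + "\n")
    fh.write(json.dumps({"task": {"url": "https://example.org/"}}) + "\n")

urls = [rec["task"]["url"] for rec in iter_dump(path)]
print(urls)  # ['https://example.com/', 'https://example.org/']
```

Streaming line by line keeps memory usage flat even for a full day's dump, since the file never has to be decompressed to disk in one piece.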

<h3 id="getting-started">Getting Started</h3>

<p>Log in to <a href="https://pro.urlscan.io/datadumps">urlscan Pro</a> to explore the
available dumps and get your API key. If you have any questions or feedback,
please reach out to <a href="mailto:support@urlscan.io">support@urlscan.io</a>.</p>]]></content><author><name>urlscan.io</name></author><category term="changelog" /><category term="product" /><category term="announcement" /><summary type="html"><![CDATA[We are excited to announce the launch of Data Dumps, a new feature that allows customers to bulk-download scan data from urlscan.io without making individual API calls for each result.]]></summary></entry><entry><title type="html">Updates to urlscan-cli and urlscan-python</title><link href="https://urlscan.io/blog/2026/02/03/cli-py-updates/" rel="alternate" type="text/html" title="Updates to urlscan-cli and urlscan-python" /><published>2026-02-03T00:00:00+01:00</published><updated>2026-02-03T00:00:00+01:00</updated><id>https://urlscan.io/blog/2026/02/03/cli-py-updates</id><content type="html" xml:base="https://urlscan.io/blog/2026/02/03/cli-py-updates/"><![CDATA[<p>We are excited to announce new releases of our official CLI and Python library. These updates bring new features and improvements to help you integrate urlscan.io into your workflows more effectively.</p>

<!--more-->

<h2 id="urlscan-cli">urlscan-cli</h2>

<p>We have released a new version of <a href="https://github.com/urlscan/urlscan-cli">urlscan-cli</a>, our official command-line tool for interacting with the urlscan platform.</p>

<h3 id="whats-new">What’s New</h3>

<p>Support for more Pro API endpoints as sub-commands:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>urlscan pro
Pro sub-commands

Usage:
  urlscan pro <span class="o">[</span><span class="nb">command</span><span class="o">]</span>

Available Commands:
  brand            Brand sub-commands
  channel          Channel sub-commands
  datadump         Data dump sub-commands
  file             Download a file
  <span class="nb">hostname         </span>Get the historical observations <span class="k">for </span>a specific <span class="nb">hostname </span><span class="k">in </span>the <span class="nb">hostname </span>data <span class="nb">source
  </span>incident         Incident sub-commands
  livescan         Livescan sub-commands
  saved-search     Saved search sub-commands
  structure-search Get structurally similar results to a specific scan
  subscription     Subscription sub-commands

Flags:
  <span class="nt">-h</span>, <span class="nt">--help</span>   <span class="nb">help </span><span class="k">for </span>pro

Use <span class="s2">"urlscan pro [command] --help"</span> <span class="k">for </span>more information about a command.
</code></pre></div></div>

<p>For example:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># get hostname history (ref. https://docs.urlscan.io/apis/urlscan-openapi/hostnames/hostnamehistory)</span>
<span class="nv">$ </span>urlscan pro <span class="nb">hostname</span> &lt;<span class="nb">hostname</span><span class="o">&gt;</span>
<span class="c"># download a file (ref. https://docs.urlscan.io/apis/urlscan-openapi/files)</span>
<span class="nv">$ </span>urlscan pro file &lt;file-hash&gt;
</code></pre></div></div>

<h3 id="installation">Installation</h3>

<p>To upgrade to the latest version:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c"># macOS (Homebrew)</span>
brew upgrade urlscan/tap/urlscan-cli

<span class="c"># Or download the latest release from GitHub</span>
https://github.com/urlscan/urlscan-cli/releases
</code></pre></div></div>

<p>For more details, please refer to the <a href="https://github.com/urlscan/urlscan-cli/">urlscan-cli documentation</a>.</p>

<h2 id="urlscan-python">urlscan-python</h2>

<p>We have also released a new version of <a href="https://github.com/urlscan/urlscan-python">urlscan-python</a>.</p>

<h3 id="whats-new-1">What’s New</h3>

<p>The Python library now supports all of the Pro API endpoints. For example:</p>

<div class="language-py highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">datetime</span>
<span class="kn">import</span> <span class="nn">os</span>

<span class="kn">from</span> <span class="nn">urlscan</span> <span class="kn">import</span> <span class="n">Pro</span>

<span class="k">with</span> <span class="n">Pro</span><span class="p">(</span><span class="n">api_key</span><span class="o">=</span><span class="s">"&lt;your_api_key&gt;"</span><span class="p">)</span> <span class="k">as</span> <span class="n">pro</span><span class="p">:</span>
    <span class="c1"># iterate over hostname history
</span>    <span class="n">it</span> <span class="o">=</span> <span class="n">pro</span><span class="p">.</span><span class="n">hostname</span><span class="p">(</span><span class="s">"&lt;hostname&gt;"</span><span class="p">,</span> <span class="n">limit</span><span class="o">=</span><span class="mi">100</span><span class="p">)</span>
    <span class="k">for</span> <span class="n">result</span> <span class="ow">in</span> <span class="n">it</span><span class="p">:</span>
        <span class="k">print</span><span class="p">(</span><span class="n">result</span><span class="p">)</span>

    <span class="c1"># download a file
</span>    <span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="s">"downloaded_file"</span><span class="p">,</span> <span class="s">"wb"</span><span class="p">)</span> <span class="k">as</span> <span class="nb">file</span><span class="p">:</span>
        <span class="n">pro</span><span class="p">.</span><span class="n">download_file</span><span class="p">(</span><span class="s">"&lt;file-hash&gt;"</span><span class="p">,</span> <span class="nb">file</span><span class="o">=</span><span class="nb">file</span><span class="p">)</span>
</code></pre></div></div>

<h3 id="installation-1">Installation</h3>

<p>To upgrade to the latest version:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>pip <span class="nb">install</span> <span class="nt">--upgrade</span> urlscan-python
</code></pre></div></div>

<p>For more details, please refer to the <a href="https://urlscan.github.io/urlscan-python/latest/">official documentation</a>.</p>

<h2 id="feedback-and-support">Feedback and Support</h2>

<p>We would love to get your feedback on these updates. For any suggestions about improvements or further functionality as well as general support and bug reporting, please open a GitHub issue in the respective repositories or reach out to our support team at <a href="mailto:support@urlscan.io">support@urlscan.io</a>.</p>]]></content><author><name>urlscan.io</name></author><category term="product" /><category term="announcement" /><summary type="html"><![CDATA[We are excited to announce new releases of our official CLI and Python library. These updates bring new features and improvements to help you integrate urlscan.io into your workflows more effectively.]]></summary></entry><entry><title type="html">Activity tracking for users and API keys</title><link href="https://urlscan.io/blog/2025/11/25/activity-tracking/" rel="alternate" type="text/html" title="Activity tracking for users and API keys" /><published>2025-11-25T08:33:39+01:00</published><updated>2025-11-25T08:33:39+01:00</updated><id>https://urlscan.io/blog/2025/11/25/activity-tracking</id><content type="html" xml:base="https://urlscan.io/blog/2025/11/25/activity-tracking/"><![CDATA[<p>Today we are announcing detailed activity insights for teams and API keys. The
activity insights show users the quota consumption of each API key,
and whether any of these API keys are generating errors when calling our APIs.</p>

<div class="row">
<div class="col col-md-12">
<img src="/blog/assets/images/activity-tracking.png" title="urlscan API key activity tracking" alt="urlscan API key activity tracking" />
<p class="caption text-center">urlscan API key activity tracking</p>
</div>
</div>

<!--more-->

<p>Activity tracking records the following information per API key and per user:</p>

<ul>
  <li>The number of requests for: Search API, Result API, Scan API, Livescan</li>
  <li>The last activity timestamp for each of these actions</li>
  <li>Activity statistics available in: minute, hour, day, and month granularity</li>
  <li>Response count grouped by status code: <code class="language-plaintext highlighter-rouge">HTTP/200</code>, <code class="language-plaintext highlighter-rouge">HTTP/400</code>, <code class="language-plaintext highlighter-rouge">HTTP/404</code>, <code class="language-plaintext highlighter-rouge">HTTP/429</code></li>
  <li>The last HTTP <code class="language-plaintext highlighter-rouge">User-Agent</code> header observed - This helps to identify the system or code making those requests</li>
  <li>The last IP address observed using this key - This helps to pinpoint the network from which those requests originate</li>
</ul>

<p>Activity tracking is shown in the urlscan.io UI on the API key management and
team management pages.</p>

<p>Structured information is available via the following endpoints:</p>

<ul>
  <li><a href="https://urlscan.io/api/v1/activity/user"><code class="language-plaintext highlighter-rouge">/api/v1/activity/user</code></a></li>
  <li><a href="https://urlscan.io/api/v1/activity/team"><code class="language-plaintext highlighter-rouge">/api/v1/activity/team</code></a></li>
</ul>
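<p>These endpoints expect the same <code>api-key</code> request header as the rest of the urlscan API. A minimal standard-library sketch that builds (but does not send) an authenticated request:</p>

```python
import urllib.request

def activity_request(scope, api_key):
    """Build an authenticated request for /api/v1/activity/{user,team}."""
    if scope not in ("user", "team"):
        raise ValueError("scope must be 'user' or 'team'")
    return urllib.request.Request(
        "https://urlscan.io/api/v1/activity/" + scope,
        headers={"api-key": api_key},
    )

req = activity_request("user", "YOUR_API_KEY_HERE")
print(req.full_url)  # https://urlscan.io/api/v1/activity/user
# urllib.request.urlopen(req) would then perform the call.
```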

<h3 id="background">Background</h3>
<p>Over the years we have worked with hundreds of customers, some of whom are
using urlscan and urlscan Pro with dozens or even hundreds of individual
team members. These customers would typically also have multiple API keys
active at any one time: one for their SOAR platform, another for
integration testing, and yet more for individual research projects. A frequent
issue that came up was that customers would eventually lose track of where all
of their API keys were active and how much each API key was consuming of
their account-wide quota.</p>

<p>The new Activity tracking feature will help urlscan customers
identify issues with their automated urlscan usage more
quickly and more confidently.</p>]]></content><author><name>urlscan.io</name></author><category term="product" /><summary type="html"><![CDATA[Today we are announcing detailed activity insights for teams and API keys. The activity insights show users the quota consumption of each API key, and whether any of these API keys are generating errors when calling our APIs. urlscan API key activity tracking]]></summary></entry><entry><title type="html">urlscan at CYBERWARCON 2025</title><link href="https://urlscan.io/blog/2025/11/05/cyberwarcon-announcement/" rel="alternate" type="text/html" title="urlscan at CYBERWARCON 2025" /><published>2025-11-05T15:33:39+01:00</published><updated>2025-11-05T15:33:39+01:00</updated><id>https://urlscan.io/blog/2025/11/05/cyberwarcon-announcement</id><content type="html" xml:base="https://urlscan.io/blog/2025/11/05/cyberwarcon-announcement/"><![CDATA[<h3 id="arlington-va---november-19-2025">Arlington, VA - November 19, 2025</h3>

<p>urlscan is excited to be a sponsor of <a href="https://www.cyberwarcon.com/">CYBERWARCON</a> for the third year in a
row. We will be attending the conference and you are invited to meet up with
us.</p>

<div class="row bottom10">
<div class="col col-md-12">
 <a href="https://www.cyberwarcon.com/">
  <img src="/blog/assets/images/cyberwarcon-2025.png" title="urlscan at Cyberwarcon 2025" alt="urlscan at Cyberwarcon 2025" />
 </a>
</div>
</div>

<p>As in previous years, urlscan will be attending <a href="https://www.cyberwarcon.com/">CYBERWARCON 2025</a> in Arlington, Virginia.</p>

<p>CYBERWARCON is the premier conference covering state-sponsored cyber threats.
Each year it brings together hundreds of professionals from military and
government, academia, the media, and the private sector. The conference takes
place as a one-day event packed with talks and speakers of the highest caliber.</p>

<h3 id="connect-with-urlscan">Connect with urlscan</h3>

<p>Our executive team will be attending the conference to meet our customers
face to face. Whether you are already a customer or just curious about our
platform, we invite you to schedule a meeting with us around the date of the
conference itself.
Please reach out to info@urlscan.io to get this set up.</p>]]></content><author><name>urlscan.io</name></author><category term="announcement" /><summary type="html"><![CDATA[Arlington, VA - November 19, 2025 urlscan is excited to be a sponsor of CYBERWARCON for the third year in a row. We will be attending the conference and you are invited to meet up with us. Like in the previous years, urlscan will be attending CYBERWARCON 2025 in Arlington, Virginia. We are proud to be sponsoring the conference for the third year in a row. CYBERWARCON is the premier conference covering state-sponsored cyber threats. Each year it brings together hundreds of professionals from military and government, academia, the media, and the private sector. The conference takes place as a one-day event packed with talks and speakers of the highest caliber. Connect with urlscan Our executive team is attending the conference to get in touch with our customer base and get an opportunity to sit down face to face. Whether you are a customer already or just curious about our platform, we invite you to reach out and schedule a meeting with us around the date of the conference itself. Please reach out to info@urlscan.io to get this set up.]]></summary></entry></feed>