URL: https://www.washingtonpost.com/politics/2023/07/24/twitter-rival-mastodon-rife-with-child-abuse-material-study-finds/

The Washington Post: Democracy Dies in Darkness
The Technology 202

A newsletter briefing on the intersection of technology and politics.

TWITTER RIVAL MASTODON RIFE WITH CHILD-ABUSE MATERIAL, STUDY FINDS

Analysis by Cristiano Lima

with research by David DiMolfetta

July 24, 2023 at 8:53 a.m. EDT

Happy Monday! Send news tips and summer podcast recommendations to:
cristiano.lima@washpost.com.

Below: The Microsoft-Activision deal is back in the hands of the U.K.’s
antitrust watchdog, and a key OpenAI employee departs. First:




Mastodon has emerged as a Twitter competitor popular in Silicon Valley. (Barbara
Ortutay/AP)

A new report has found rampant child sexual abuse material on Mastodon, a social
media site that has gained popularity in recent months as an alternative to
platforms like Twitter and Instagram. 



Researchers say the findings raise major questions about the effectiveness of
safety efforts across so-called “decentralized” platforms, which let users join
independently run communities that set their own moderation rules, particularly
when it comes to policing the internet’s most vile content.

During a two-day test, researchers at the Stanford Internet Observatory found
over 600 pieces of known or suspected child abuse material across some of
Mastodon’s most popular networks, according to a report shared exclusively with
The Technology 202.

Researchers reported finding their first piece of content containing child
exploitation within about five minutes. They would go on to uncover roughly
2,000 uses of hashtags associated with such material. David Thiel, one of the
report’s authors, called it an unprecedented sum. 

“We got more PhotoDNA hits in a two-day period than we’ve probably had in the
entire history of our organization of doing any kind of social media analysis,
and it’s not even close,” said Thiel, referring to a technique that identifies
known abusive images by their unique digital signatures. Mastodon did not
return a request for comment.
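The PhotoDNA workflow Thiel describes amounts to fingerprinting each image and matching it against a database of hashes of known material. PhotoDNA itself is a proprietary perceptual hash that tolerates resizing and recompression; the sketch below substitutes SHA-256 purely to illustrate the matching step, and the hash set and function names are hypothetical.

```python
import hashlib

# Hypothetical database of digests for known abusive images, as a
# clearinghouse might distribute. (Real PhotoDNA uses a perceptual
# hash; SHA-256 stands in here only to show the matching workflow.)
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(image_bytes: bytes) -> str:
    """Compute a digest acting as the image's unique digital signature."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_material(image_bytes: bytes) -> bool:
    """Flag content whose fingerprint matches the known-hash database."""
    return fingerprint(image_bytes) in KNOWN_HASHES

# A moderation pipeline would run every uploaded image through this check.
print(is_known_material(b"test"))   # True: this digest is in the set
print(is_known_material(b"other"))  # False: no match
```

Because matching is a set lookup rather than image analysis, platforms can scan at upload time without ever classifying novel content, which is why shared hash databases are central to child-safety tooling.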

Policymakers for years have called on prominent social media platforms like
Instagram, TikTok and Twitter to take greater steps to stem the tide of child
abuse material online, criticizing tech companies for not devoting enough
resources to enforcing rules against such content.

But as millions of users seek out substitutes to those sites, the findings
released Monday underscore the significant structural challenges faced by
platforms that don’t rely on a single company to set and enforce policies that
address illegal or harmful content. 

Mastodon is what’s known as a “federated” social media platform, where users can
join servers or “instances” that are separate but interconnected, allowing them
to view content from other communities while adhering to their own network’s
rules.  
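The federated model described above can be sketched in miniature: each instance enforces its own rules, yet its users still see what peer instances choose to allow. The `Instance` class, tag-based rules, and instance names below are hypothetical simplifications, not Mastodon's actual ActivityPub data model.

```python
from dataclasses import dataclass, field

# Toy model of federation (illustrative only; real Mastodon federates
# over the ActivityPub protocol). Moderation is purely local: an
# instance filters its own timeline, but cannot delete a peer's posts.

@dataclass
class Instance:
    name: str
    banned_tags: set                      # this community's own rules
    posts: list = field(default_factory=list)
    peers: list = field(default_factory=list)

    def publish(self, text, tags):
        self.posts.append({"text": text, "tags": set(tags)})

    def timeline(self):
        """Local and federated posts, filtered only by *local* rules."""
        feed = self.posts + [p for peer in self.peers for p in peer.posts]
        return [p["text"] for p in feed if not (p["tags"] & self.banned_tags)]

strict = Instance("strict.example", banned_tags={"spam"})
lax = Instance("lax.example", banned_tags=set())
strict.peers.append(lax)   # strict.example federates with lax.example

lax.publish("buy now!!!", {"spam"})
lax.publish("hello fediverse", set())

print(strict.timeline())   # ['hello fediverse']
print(lax.timeline())      # ['buy now!!!', 'hello fediverse']
```

The asymmetry shown here is the report's core concern: content banned on one instance persists on a laxer peer, and every community must moderate its inbound federated feed for itself.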

That model has been billed as more open and democratizing than the more
centralized approach of giants like Twitter and Facebook. But the report’s
results highlight the hurdles moderators on those platforms face in tackling
harmful material, Thiel said, including “fairly primitive” tools to detect and
escalate reports of child abuse material and limited volunteer moderation teams.

“A lot of it is just a result of what seems to be a lack of tooling that
centralized social media platforms use to address child safety concerns,” he
said.

To tackle some of those obstacles, researchers wrote in the report,
decentralized platforms like Mastodon may need to borrow some of the strategies
and tools used by their larger peers.

“Investment in one or more centralized clearinghouses for performing content
scanning (as well as investment in moderation tooling) would be beneficial to
the Fediverse as a whole,” Thiel and co-author Renée DiResta wrote, referring to
the so-called federated universe of platforms. 

Another challenge for decentralized platforms, Thiel said, is that pockets of
dangerous content can emerge in communities with more lax guidelines.  

A significant portion of the child abuse material researchers uncovered was from
networks in Japan, where there are “significantly more lax laws” that “exclude
computer-generated content as well as manga and anime,” according to the report.

“We found that on one of the largest Mastodon instances in the Fediverse (based
in Japan), 11 of the top 20 most commonly used hashtags were related to
pedophilia,” the researchers wrote.

With massive recent developments in artificial intelligence technology,
researchers also reported finding a spike in computer-generated child-abuse
material. Thiel called it “a picture of things to come,” and said companies and
officials have yet to fully tackle the problem.

“It’s an issue that has never really been fully addressed by either regulators
or tech platforms as a whole. … It's not really so cleanly delineated,” he said.

Their findings could also pose fresh questions for policymakers globally,
especially as Europe imposes sweeping new online safety rules with greater
obligations for major platforms.

“The policies that people are trying to come up with to pressure large platforms
… [will] have to take that vastly different nature of that network into
account,” Thiel said.

OUR TOP TABS

FATE OF MICROSOFT-ACTIVISION DEAL IS BACK IN HANDS OF U.K.’S ANTITRUST REGULATOR



The fate of Microsoft’s planned $69 billion purchase of video game company
Activision Blizzard is back in the hands of the U.K.’s antitrust watchdog, Sarah
Young, Paul Sandle and Sam Tobin report for Reuters. 

Britain’s Competition and Markets Authority (CMA) said it could reach a
provisional decision on whether to greenlight the transaction by the week of
Aug. 7. “Having initially blocked the $69 billion deal in April over concerns
about its impact on competition in the cloud gaming market, the CMA has since
reopened the file, after it was left increasingly isolated among world
regulators in its opposition,” according to the report. 

“Explaining why the deal should now be given the green light, Microsoft argued
that the binding commitments accepted by the European Union shortly after
Britain had blocked the deal changed matters,” the report adds, citing published
court documents. 


Those binding commitments in the E.U. include allowing Activision games to be
streamed for 10 years after the deal closes. Microsoft also made agreements with
hardware maker Nvidia, as well as cloud gaming providers Boosteroid and Ubitus.

SCHOOL DISTRICTS JOIN LAWSUITS ALLEGING SOCIAL MEDIA HARMS TO KIDS



School boards throughout the United States are backing lawsuits that allege
major social media platforms like TikTok and Snapchat have harmed kids’ mental
health and diverted their attention away from classroom learning, Sara Randazzo
and Ryan Tracy report for the Wall Street Journal.

“Nearly 200 school districts so far have joined the litigation against the
parent companies of Facebook, TikTok, Snapchat, and YouTube,” they write. “The
suits have been consolidated in the U.S. District Court in Oakland, Calif.,
along with hundreds of suits by families alleging harms to their children from
social media.”

The school districts could face challenges when a judge later this year is
expected to consider a request from the social media platforms to dismiss the
cases on the grounds that they are protected by Section 230, the legal liability
shield that generally prevents internet companies from being sued for their
third-party content on their platforms, according to the report.

However, the school districts and families “contend that the social-media
companies have created an addictive product that pushes destructive content to
youth — and that a product, unlike content, doesn’t enjoy Section 230
protections,” Randazzo and Tracy write.

OPENAI HEAD OF TRUST AND SAFETY IS LEAVING



OpenAI’s head of trust and safety Dave Willner is stepping down from his role,
Clare Duffy reports for CNN, citing a Thursday LinkedIn post. 

Willner is moving into an advisory role to spend more time with family. The
departure comes as “OpenAI has faced growing scrutiny from lawmakers, regulators
and the public over the safety of its products and their potential implications
for society” amid the growing success of its ChatGPT product, Duffy writes.

The move comes as the company agreed to voluntary artificial intelligence safety
commitments with the White House, alongside other major tech companies. 

The CNN report adds: “OpenAI’s Chief Technology Officer Mira Murati will become
the trust and safety team’s interim manager and Willner will advise the team
through the end of this year, according to the company.”

RANT AND RAVE

Twitter reacts to Elon Musk bidding farewell to the platform’s bird logo, with
posts from tech journalist Kara Swisher, New York Times tech reporter Ryan Mac,
and Platformer’s Zoë Schiffer. (Embedded tweets omitted.)

HILL HAPPENINGS

House Dems call on White House to make agencies adopt NIST AI framework
(FedScoop)

INSIDE THE INDUSTRY

Hong Kong court to rule if Google, Meta must censor unofficial anthem (Theodora
Yu and Meaghan Tobin)

Threading the needle: social media power users are divided about Threads (The
Information)

Chip CEOs urge US to study impact of China curbs and take pause (Bloomberg News)

COMPETITION WATCH

Dutch online marketplace OLX contributing to EU antitrust probe into Meta
(Reuters)

WORKFORCE REPORT

The creator economy was already exploding. Then Hollywood went on strike. (Drew
Harwell and Taylor Lorenz)

Sergey Brin is back in the trenches at Google (Wall Street Journal)

TRENDING

Twitter will lose bird logo in brand overhaul, Elon Musk says (Joseph Menn and
Marianna Sotomayor)

DAYBOOK

 * FCC Chair Jessica Rosenworcel speaks at a Center for Strategic and
   International Studies event on 5G spectrum security tomorrow at 2 p.m.
 * The Senate Judiciary Committee holds a hearing on AI regulation principles
   tomorrow at 3 p.m.

BEFORE YOU LOG OFF



That’s all for today — thank you so much for joining us! Make sure to tell
others to subscribe to The Technology 202 here. Get in touch with tips, feedback
or greetings on Twitter or email. 


washingtonpost.com © 1996-2023 The Washington Post