Global Witness

Article | March 20, 2022


FACEBOOK APPROVES ADVERTS CONTAINING HATE SPEECH INCITING VIOLENCE AND GENOCIDE
AGAINST THE ROHINGYA

Digital threats
Myanmar

> The ethnic violence in Myanmar is horrific and we have been too slow to
> prevent misinformation and hate on Facebook. - Facebook, 2018

Facebook has admitted that it played a role in inciting violence during the
genocidal campaign against the Rohingya Muslim minority in Myanmar (Burma).


Since 2017, nearly 900,000 people have been displaced, hundreds of villages have
been burned to the ground, families have been separated and killed, and
hundreds, possibly thousands of women and girls have been raped, including in
public mass gang rapes. In their own words, Facebook said that “We agree that we
can and should do more,” and that they would invest resources in preventing the
spread of hate speech in Myanmar.

Since then, the improvements that they have made include employing more content
reviewers who speak the country’s languages [1], improving their ability to use
artificial intelligence to flag examples of hate speech including in Burmese
[2], and establishing a dedicated team to work on the country. Have these
changes made a significant difference? Is the platform still susceptible to
facilitating incitement to violence, hatred and genocide? 

Our investigation provides a disturbing answer to these questions: Facebook’s
ability to detect Burmese language hate speech remains abysmally poor.








TESTING FACEBOOK’S ABILITY TO DETECT HATE SPEECH

We collated eight real examples of hate speech directed against the Rohingya, as
reported by the United Nations Independent International Fact-Finding Mission on
Myanmar in their report to the Human Rights Council.

We submitted each of these hate speech examples to Facebook in the form of an
advert in Burmese [3]. Facebook says that before adverts are permitted to appear
online, they’re reviewed to make sure that they meet their advertising policies,
and that during this process they check the advert's “images, video, text and
targeting information, as well as an ad's associated landing page”. The process
relies primarily on automated tools, though Facebook reveals little about how
it’s done in practice. Of course, we didn’t actually publish any of the ads. We
set a publication date in the future and deleted the ads once we received the
notification from Facebook as to whether they were approved for publication or
not.
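
For researchers who want to run a comparable review-only test programmatically,
the sketch below illustrates one possible approach using Facebook's Graph
Marketing API and Python. It is an illustration rather than a description of our
own workflow: the access token and the ad account, ad set and creative IDs are
placeholders, and the endpoint paths and field names (effective_status,
ad_review_feedback, the PAUSED status and the PENDING_REVIEW value) should be
checked against the current Marketing API documentation before use.

    import json
    import time

    import requests

    GRAPH = "https://graph.facebook.com/v13.0"   # API version is an assumption
    ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"           # placeholder
    AD_ACCOUNT_ID = "act_0000000000"             # placeholder ad account
    ADSET_ID = "ADSET_WITH_FUTURE_START_TIME"    # ad set whose start_time lies in the future
    CREATIVE_ID = "IMAGE_CREATIVE_ID"            # creative carrying the test image


    def create_paused_ad():
        """Create the ad in a paused state so that it is reviewed but never delivered."""
        resp = requests.post(
            f"{GRAPH}/{AD_ACCOUNT_ID}/ads",
            data={
                "name": "review-only test ad",
                "adset_id": ADSET_ID,
                "creative": json.dumps({"creative_id": CREATIVE_ID}),
                "status": "PAUSED",
                "access_token": ACCESS_TOKEN,
            },
        )
        resp.raise_for_status()
        return resp.json()["id"]


    def poll_review_outcome(ad_id, interval_seconds=60):
        """Poll the ad until it leaves the pending-review state, then return its status."""
        while True:
            resp = requests.get(
                f"{GRAPH}/{ad_id}",
                params={
                    "fields": "effective_status,ad_review_feedback",
                    "access_token": ACCESS_TOKEN,
                },
            )
            resp.raise_for_status()
            data = resp.json()
            if data.get("effective_status") != "PENDING_REVIEW":
                return data
            time.sleep(interval_seconds)


    def delete_ad(ad_id):
        """Delete the ad so that it can never be published."""
        resp = requests.delete(f"{GRAPH}/{ad_id}", params={"access_token": ACCESS_TOKEN})
        resp.raise_for_status()


    if __name__ == "__main__":
        ad_id = create_paused_ad()
        outcome = poll_review_outcome(ad_id)
        print("Review outcome:", outcome)  # approval or rejection is recorded here
        delete_ad(ad_id)

The sequence mirrors the test described above: the ad is created in a
non-delivering state, the review decision is recorded, and the ad is deleted
before it can ever be shown to users.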

All eight of the adverts were accepted by Facebook for publication.







A Rohingya refugee looks at his phone in a refugee camp in southern Bangladesh.
The alleged Rohingya genocide saw nearly 900,000 people displaced to the
country. Munir Uz Zaman/AFP via Getty Images

Facebook’s community standards define hate speech as:

a direct attack against people — rather than concepts or institutions — on the
basis of what we call protected characteristics: race, ethnicity, […] religious
affiliation […]. We define attacks as violent or dehumanizing speech, harmful
stereotypes, statements of inferiority, expressions of contempt, disgust or
dismissal, cursing and calls for exclusion or segregation.

The hate speech examples we used are highly offensive and we are therefore
deliberately not repeating the exact phrases used here [4]. However, all of the
ads fall within Facebook’s definition of hate speech.  The sentences used
included:



 * Violent speech that called for the killing of the Rohingya
 * Dehumanising speech that compared the Rohingya to animals
 * Calls for exclusion or segregation, including a claim that the Rohingya are
   not Myanmar citizens and that the country needs to protect itself against a
   "Muslim invasion".



In addition to falling within Facebook’s definition of hate speech, most of the
ads would have breached international law had they been published. The
International Convention on the Elimination of All Forms of Racial
Discrimination makes clear that States must prohibit [5]:

"all dissemination of ideas based on racial superiority or hatred, incitement to
racial discrimination, as well as all acts of violence or incitement to such
acts against any race or group of persons of another colour or ethnic origin,
and also the provision of any assistance to racist activities, including the
financing thereof"







Protesters demonstrating in support of Myanmar’s Rohingya minority in The Hague.
Meanwhile Facebook approved all eight of our ads containing Burmese language
hate speech for publication. Sem van der Wal/ANP/AFP via Getty Images

We’re not suggesting that paid-for content is the primary means by which hate
speech is spread in Myanmar. Instead, we used the submission of adverts as a
means of testing Facebook’s ability to detect hate speech without ourselves
posting hate speech. It is reasonable to assume that Facebook would apply its
hate speech detection systems to ads as well as to organic content given that
the company says explicitly that ads violating its community standards (which
include hate speech) are prohibited. Indeed, Facebook themselves have said that
during elections in Myanmar they have removed political pages that violated
their ad policy on hate speech.

We put our findings to Facebook to give them the opportunity to put their side
of the story, but received no response. However, in response to the Associated
Press they said that they have invested in Burmese language technology and
built a team of Burmese speakers whose work is informed by feedback from
experts, civil society organizations and the UN Fact-Finding Mission on
Myanmar's collective output. Our point remains that Facebook approved the ads
containing hate speech despite these improvements, and that their current
systems are therefore not good enough.


CONCLUSION AND RECOMMENDATIONS

Facebook and other social media platforms should treat the spread of hate and
violence with the utmost urgency. As an immediate step, they must properly
resource the integrity and security systems that protect their platform in each
country, and publish details of those systems – making sure that people in all
countries and languages are sufficiently protected from hate speech and
violence online.

In places such as Myanmar, where there is clear evidence that Facebook was used
to incite real-world harms that cost ten thousand people their lives and
hundreds of thousands their homes and livelihoods, and where the Rohingya face
an ongoing heightened risk of violence and continued discrimination, the very
minimum the platform should do is ensure it is not being used for future
incitement, and provide remedy to victims. There are a number of ongoing cases
attempting to require Facebook to do this, including:




 * A complaint under the OECD Guidelines for Multinational Enterprises (OECD
   Guidelines) that alleges Facebook breached the OECD human rights guidelines.
   The complaint was submitted by Rohingya groups in refugee camps in Bangladesh
   to the Irish National Contact Point for the OECD Guidelines and seeks remedy
   for refugees.  The Irish National Contact Point had three months to decide
   whether to take the case forward. That deadline passed today. Our
   investigation shows how Facebook hasn’t cleaned up its act, at least for
   paid-for content, and is still hugely vulnerable to illegal Burmese hate
   speech: the Irish National Contact Point should take up the case. 
 * Legal actions in the US and UK that accuse Facebook of negligence in
   facilitating a genocide against the Rohingya, and that seek more than $150
   billion in compensation on behalf of the victims.



But it’s not enough to rely on private litigation or to expect the companies to
regulate themselves. Governments must step in and hold these companies to
account, keep people safe and prevent human rights abuses.  

The European Union is taking an important step in this direction with its
Digital Services Act (DSA). Once passed, the DSA will not only establish
content moderation rules but will also require transparency and accountability
mechanisms, obliging platforms to assess and mitigate the risk that they spread
hate speech, to have their claims audited, and to provide data to independent
researchers.


The EU must ensure the strongest version of the Act is quickly passed into law. 
Governments elsewhere in the world – notably the United States – should follow
the lead of the EU and regulate Big Tech companies and force meaningful
oversight, including requiring the platforms to assess and mitigate the risk
that their services allow hate speech to flourish. 


--------------------------------------------------------------------------------

[1] From two Burmese language speakers in early 2015 to 60 Myanmar language
speakers in mid 2018, 99 by the end of 2018, and further expansion between 2019
and 2021.





[2] In 2018, Facebook admitted that it was ‘too slow’ in addressing hate speech
in Myanmar in response to an investigation by Reuters. They said that they were
investing ‘heavily’ in artificial intelligence to proactively flag hate
speech and in 2020, they said that they had made progress in ‘improving our
ability to detect and remove hate speech’ and that they had ‘invested
significantly’ in this technology.


[3] The ads were in the form of an image and were not labelled as being
political in nature.

[4] Researchers interested in knowing the exact wording of the sentences we
used are welcome to request this information from us by writing to
digitalthreats@globalwitness.org

[5] The overwhelming majority of the world, including the places that Facebook
operates from, the US and Ireland, have ratified the Convention. Myanmar has
not.


