Like, Share, Murder: The Complicity of Facebook in the Rohingya Genocide


BY JENEAN DOCTER

Facebook has an ethnic cleansing problem. And it’s not getting any better.

Within the past few months, Facebook has come under international scrutiny after various sources--notably the United Nations--spoke out against the social networking site's failure to quickly remove anti-Muslim propaganda, and the individuals who propagate it, from its platform.

Years after Myanmar military officials began targeting the largely disenfranchised Rohingya minority, international figures including both Trump and Pope Francis remained disturbingly silent. Both men were eventually pressed on the genocide; yet Facebook's arguably greater negligence has gone comparatively unnoticed.

A United Nations report released in August revealed that numerous Facebook accounts run by Myanmar military and government officials, as well as others maintained by civilians, worked to incite genocide against Muslim men, women, and children. Upwards of 727,000 Rohingya have been driven from their homes into Bangladesh, fleeing the real and present threats of mass murder and gang rape.

Officially, Facebook claims that “there is no place for hate speech” on its site. Yet problematic content circulated on the platform includes reposted news “articles” passed between networks, mischaracterized and photoshopped images, and explicitly anti-Rohingya cartoons. In 2017, chain messages sent through Facebook Messenger played a central role in inciting mob violence when an anti-Rohingya page claimed that Muslim “terrorists” were planning attacks on September 17. More disturbing still, one might wonder whether Facebook’s popular Messenger app has been used--or will be used--to directly coordinate violence.

In an open letter to Facebook, a group of Myanmar-based human rights nonprofits described Facebook’s failure to respond when they contacted the company about posts and messages that incited violence. The letter noted that Facebook did not offer a reporting option for messages received in the Facebook Messenger app, where much of the misinformation spread. While users can fill out a form to report threatening messages, the option is not clearly available and must be hunted down through various help pages. Facebook’s failure to provide an in-app reporting system thus appears as a critical failure of its duty to moderate the content it supports--a failure that may cast the site as partly responsible for genocide.

According to United Nations Myanmar investigator Yanghee Lee, Facebook was initially used “to spread public messages,” but ultranationalist Buddhist military leaders quickly realized the social network’s capacity for spreading hate speech to a wide audience.

To make matters worse for Facebook, this is not the first time the site has come under international criticism for its failure to quickly and effectively review posts; Facebook was heavily criticized for aiding anti-Rohingya individuals aiming to incite inter-group violence, which eventually led to a series of violent outbreaks between 2012 and 2017.

The historical otherization of the Rohingya--supported by a national narrative that the Muslim minority are not rightful citizens of Myanmar and “stole the rightful land” of the Buddhist majority--has been undeniably compounded by the virtually unchecked spread of fake news and hate speech issued by prominent military leaders via Facebook. The United Nations Human Rights Council’s Report of the independent international fact-finding mission on Myanmar found that Tatmadaw Commander-in-Chief Senior General Min Aung Hlaing posted that “the Bengali problem was a longstanding one which has become an unfinished job despite the efforts of the previous governments to solve it. The government in office is taking great care in solving the problem.”

In a Newsroom blog post published in late August, Facebook applauded itself for removing a handful of accounts operated by senior Myanmar generals who were rightfully accused of spreading hate speech with the explicit intention of inciting violence. Min Aung Hlaing’s account was among those removed. But let us view this seemingly good deed in perspective: by late August of this year, the genocide had been underway at full speed for nearly a year, and at least 700,000 Rohingya had been forced from their homes.

In its post, Facebook admits that the accounts it purged from its platform were followed by a total of around twelve million people. According to a survey conducted by the International Republican Institute, some 37 percent of Myanmar residents reported receiving “most, if not all” of their news from Facebook, with 24 percent of those sampled recalling seeing posts about ethnic conflict in their country once a day, and 36 percent reporting seeing such posts once a week. In the same poll, 55 percent of Myanmar residents claimed that they thought “most” of what they saw on Facebook was true.

The true, human cost of such a situation cannot be quantified. While a wealthy corporation that profits from every account created on its platform negligently failed to monitor the content it (however unwittingly) amplified, intolerant government officials used the social network to tear lives asunder.

At present, Facebook conveniently claims that it cannot provide country-specific data on the proliferation of hate speech via its platform--data that would prove extremely useful to international officials seeking to analyze its role in the spread of anti-Rohingya propaganda.

It seems impossible to fathom that no posts were flagged before Zuckerberg announced official action against the now-defunct accounts. In September 2017, Hlaing posted on Facebook that “gallant efforts to defend the HQ against terrorist [a term used by the Myanmar military to legitimize persecution of the Rohingya] attacks and brilliant efforts to restore regional peace, security are honored.” In the context of the crimes against humanity perpetrated by Hlaing’s army, his praise of those who “defend” against “terrorists” appears as an explicit endorsement and legitimization of human rights abuses committed against the Rohingya.

Pointing to a failed attempt by some Rohingya, following World War II, to convince Pakistan (the relevant portion of which is now Bangladesh) to annex the territory occupied by the Muslim minority, the reigning Myanmar military administration has historically fixated on a long-lived, arguably irrational fear of a takeover by the Rohingya, who comprise little more than 4 percent of the country’s population. The militarized government is unofficially led by Aung San Suu Kyi--the same democracy activist and Nobel Laureate it formerly imprisoned--who, despite promising “transparency” and emphasizing the value of human rights, either fears upsetting the delicate, unspoken balance of power between herself and the military too much to denounce the atrocities committed against the Rohingya, or does not disagree with that treatment at all.

Moreover, the rapid spread of internet access after 2010, coupled with the increasing adoption of smartphones, created a political time bomb in which intensifying popular anti-Rohingya propaganda and state-supported military action combined to lead the country to a final, disturbing strategy: genocide against its own people.

Of course, this is not the first historical instance of a genocidal regime utilizing propaganda as a tool of dehumanization. During the Rwandan genocide, radio, newspapers, and cartoons all functioned as tools of state-sanctioned human rights abuses, orchestrated through the mass mobilization of the ostensible majority group against the minority. The International Criminal Tribunal for Rwanda later declared the prejudiced newspaper Kangura to have functioned as a tool of genocide, and tried its editor, Hassan Ngeze, for the crimes his publication encouraged. Facebook is the newest incarnation of this tradition of genocidal media. But it is far more instantaneous, omnipresent, and interactive--and, apparently, just as unmonitored. Only time will reveal whether Mark Zuckerberg will similarly be held to answer for negligent complicity in genocide.

Hate speech is more than a collection of offensive words strung together by bigoted mouths. It is the tool of otherization and political coordination that transforms the shady figures behind uninformed, malicious statements into figures of powerful, genocidal authority. Facebook’s failure to proactively and promptly remove accounts and posts that explicitly encouraged and coordinated ethnic cleansing defines a disturbing moment of corporate, technological complicity in the violation of basic human rights. When dealing with genocide, no degree of negligence is acceptable.

Facebook officials claim that the social media platform acted to its full capacity to control the spread of hate speech that violated its Community Guidelines without infringing on free speech rights. Yet need Facebook concern itself with free speech? Facebook is a private, for-profit corporation. It is not a public forum in any defensible way, shape, or form; rather than objectively supporting all discussion without prejudice, Facebook admittedly privileges some content over other content on individual timelines, particularly with regard to advertising and the order in which posts are displayed. Nor is the site bound to uphold constitutional rights. Facebook’s lackadaisical pace of content moderation thus cannot logically be said to intentionally uphold the free speech rights of the individual; it simply reveals a lack of corporate ethics and a failure to invest responsibly in content moderation. If Facebook lacked the resources necessary to moderate and swiftly remove objectionable content written in Burmese or Bengali, it should have sensibly invested in those resources before expanding into countries it was unprepared to enter. Instead, Facebook has demonstrated an unconscionable prioritization of profit over people.

This is not an argument campaigning for all-out censorship or the destruction of free speech. This is an argument about the fundamental responsibility of a social networking giant to utilize its rights as a private entity to remove content that is explicitly threatening and dehumanizing, particularly within the context of known human rights violations, and sometimes posted by known human rights violators. In the age of social media, complicity arguably takes on a new, less clear-cut definition: the neglectful failure of a networking giant to proactively monitor the content its platform supports. Viewed in this light, Facebook appears at least partly culpable for the fastest-growing refugee crisis in the world.

After all, monsters do not commit genocide. High-ranking government officials commit genocide. Negligent social networking companies led by hoodie-clad tech entrepreneurs commit genocide.

And so do millions of otherwise ordinary human beings, armed with Facebook accounts and the dangerous potential to circulate misinformation or coordinate the logistics of violence with a single tap of a colorful square labeled “like,” “share” or “post.”

The road to universal human rights is not brightly lit, but its pathways grow dimmer and more neglected when responsibility for abuse is not assigned accordingly. Facebook must be held accountable for the horrors borne of its negligence.

Facebook must do better.