The Guardian

Facebook's Role in Myanmar and Ethiopia Under New Scrutiny

Whistleblower Frances Haugen adds to long-held concerns that the social media site is fuelling violence and instability

By Emmanuel Akinwotu


The former Facebook employee Frances Haugen testifying before a Senate committee, where she accused Facebook of ‘literally fanning ethnic violence’. Photograph: Lenin Nolly/SOPA Images/Rex/Shutterstock

Whistleblower Frances Haugen’s testimony to US senators on Tuesday shone a light on the violence and instability that have gripped Myanmar and Ethiopia in recent years, and on long-held concerns that both are linked to activity on Facebook.


“What we saw in Myanmar and are now seeing in Ethiopia are only the opening chapters of a story so terrifying, no one wants to read the end of it,” Haugen said in her striking testimony. Haugen warned that Facebook was “literally fanning ethnic violence” in places such as Ethiopia because it was not policing its service adequately outside the US.


About half of Myanmar’s population of 53 million use Facebook, with many relying on the site as their primary source of news. In June this year, an investigation by the rights group Global Witness found that Facebook’s algorithm was promoting posts that incited violence against protesters opposing the military coup launched in February, in breach of the platform’s own policies.


Researchers began by liking a Myanmar military fan page that did not itself appear to violate Facebook’s terms. They found that Facebook then suggested several pro-military pages that did contain abusive content.


“We didn’t have to look hard to find this content; FB’s algorithm led us to it,” said Rosie Sharpe, a digital researcher who worked on the report. “Of the first five pages they recommended, three of them contained content that broke FB’s rules by, for example, inciting or glorifying violence.”


The link between social media posts and offline violence in Myanmar had already been widely documented. In 2018 a Guardian analysis revealed that hate speech had exploded on Facebook at the start of the Rohingya crisis the year before, when armed groups and ordinary communities launched attacks on people from the Muslim minority.


Thousands of posts by nationalist, anti-Rohingya users gained traction online, including posts that falsely claimed mosques were stockpiling weapons. An independent investigation commissioned by Facebook later agreed with assessments that the site had been used to incite offline violence.


“What happens on Facebook matters,” Sharpe said. “Promotion of violence online leads to real-world harms. That’s particularly true in Myanmar, where Facebook has admitted that it played a role in inciting violence during the military’s genocidal campaign against the Rohingya.”


Facebook has faced similar criticism in Ethiopia, which has been engulfed in an armed conflict between the federal government and the Tigray People’s Liberation Front (TPLF). In 2019, for instance, the retired Ethiopian runner Haile Gebrselassie blamed “fake news” being shared on Facebook for violence that left 81 people dead in Oromia region.

After another outbreak of ethnic violence in 2020 – sparked by the killing of a popular singer from the Oromo ethnic group – an investigation by Vice claimed that the violence had been “supercharged by the almost-instant and widespread sharing of hate speech and incitement to violence on Facebook, which whipped up people’s anger”.

In her testimony Haugen blamed engagement-based ranking for fanning ethnic violence in countries such as Ethiopia. “Facebook … knows, they have admitted in public, that engagement-based ranking is dangerous without integrity and security systems, but then not rolled out those integrity and security systems to most of the languages in the world,” Haugen said. “And that’s what is causing things like ethnic violence in Ethiopia.”

Sharpe said legislators were not doing enough to hold social media companies to account.

“The EU has gone the furthest towards doing this. There’s draft legislation in the EU, the digital services act. If it was passed it would require very large online platforms to have to assess and mitigate the risk of their algorithms spreading content that impacts on our rights. However, the proposed law doesn’t go far enough as it would only give regulators the opportunity to scrutinise how algorithms work when they suspect wrongdoing.”

Facebook has pushed back forcefully against Haugen’s accusations. In a blogpost published on Tuesday evening its chief executive, Mark Zuckerberg, said “it’s just not true” that the company puts profit over safety.



© 2021 Guardian News & Media Limited or its affiliated companies.

