Facebook and TikTok publish hate speech in Kenya


Terrorist groups in Kenya have tried to use social media to destabilize before the election. (Farah Abdi Warsameh/Associated Press)


Washington Post

July 31, 2022

By Neha Wadekar


NAIROBI — The shooter approaches from behind, raising a pistol to his victim’s head. He pulls the trigger and “pop,” a lifeless body slumps forward. The shot cuts to another execution, and another. The video was posted on Facebook, in a large group of al-Shabab and Islamic State supporters, where different versions were viewed thousands of times before being taken down.


As Facebook and its competitor TikTok grow at breakneck speed in Kenya and across Africa, researchers say the technology companies are failing to keep pace with a proliferation of terrorist content, hate speech and false information, and are taking advantage of weak regulatory frameworks to avoid stricter oversight.


“It is a deliberate choice to maximize labor and profit extraction, because they view the societies in the Global South primarily as markets, not as societies,” said Nanjala Nyabola, a Kenyan technology and social researcher.


About 1 in 5 Kenyans use Facebook, whose parent company renamed itself Meta last year, and TikTok has become one of the most downloaded apps in the country. The prevalence of violent and inflammatory content on the platforms poses real risks in this East African nation as it prepares for a bitterly contested presidential election next month and deals with the threat of terrorism posed by a resurgent al-Shabab.


“Our approach to content moderation in Africa is no different than anywhere else in the world,” Kojo Boakye, Meta’s director of public policy for Africa, the Middle East and Turkey, wrote in an email to The Washington Post. “We prioritize safety on our platforms and have taken aggressive steps to fight misinformation and harmful content.”


Fortune Mgwili-Sibanda, the head of government relations and public policy in sub-Saharan Africa for TikTok, also responded to The Post by email, writing: “We have thousands of people working on safety all around the world, and we’re continuing to expand this function in our African markets in line with the continued growth of our TikTok community on the continent.”


The companies use a two-pronged content moderation strategy: Artificial intelligence (AI) algorithms provide a first line of defense. But Meta has admitted it is challenging to teach AI to recognize hate speech in multiple languages and contexts, and reports show that posts in languages other than English often slip through the cracks.


In June, researchers at the Institute for Strategic Dialogue in London released a report outlining how al-Shabab and the Islamic State use Facebook to spread extremist content, like the execution video. The two-year investigation revealed at least 30 public al-Shabab and Islamic State propaganda pages with nearly 40,000 combined followers. The groups posted videos depicting gruesome assassinations, suicide bombings, attacks on Kenyan military forces and Islamist militant training exercises. Some content had lived on the platform for more than six years.


Reliance on AI was a core problem, said Moustafa Ayad, one of the authors of the report, because bad actors have learned how to game the system. If the terrorists know the AI is looking for the word jihad, Ayad explained, they can “split up J.I.H.A.D with periods in between the letters, so now it is not being read properly by the AI system.”
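To illustrate the kind of evasion Ayad describes, here is a minimal sketch in Python — not Meta's actual system, and with a hypothetical watch list invented for the example. A naive keyword filter misses the punctuation-obfuscated term, while a simple normalization pass that strips the periods catches it:

    import re

    # Hypothetical watch list for illustration only; real moderation systems
    # rely on trained classifiers, not bare keyword matching.
    BANNED_TERMS = {"jihad"}

    def naive_filter(text: str) -> bool:
        # Flags text only when a banned term appears verbatim as a word.
        return any(term in text.lower().split() for term in BANNED_TERMS)

    def normalized_filter(text: str) -> bool:
        # Strips non-letter characters first, so "J.I.H.A.D" collapses to "jihad".
        cleaned = re.sub(r"[^a-z]", "", text.lower())
        return any(term in cleaned for term in BANNED_TERMS)

    print(naive_filter("J.I.H.A.D"))       # False: the periods defeat the match
    print(normalized_filter("J.I.H.A.D"))  # True: normalization restores the term

Even this fix is easily defeated in turn, by misspellings, homoglyphs or slang, which is part of why platforms pair automated screening with human review.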


Ayad said most of the accounts flagged in the report have now been removed, but similar content has since popped up, such as a video posted in July featuring Fuad Mohamed Khalaf, an al-Shabab leader wanted by the U.S. government. It garnered 141,000 views and 1,800 shares before being removed after 10 days.


Terrorist groups can also bypass human moderation, the second line of defense for social media companies, by exploiting language and cultural expertise gaps, the report said. The national languages in Kenya are English and Swahili, but Kenyans speak dozens of other tribal languages, dialects and the local slang known as Sheng.


Meta said it has a 350-person multidisciplinary team, including native Arabic, Somali and Swahili speakers, that monitors and handles terrorist content. Between January and March, the company said it removed 15 million pieces of content that violated its terrorism policies, but it did not say how much terrorist content it believes remains on the platform.


In January 2019, al-Shabab attacked the DusitD2 complex in Nairobi, killing 21 people. According to local media, a government investigation later revealed that the attackers planned the assault using a Facebook account that went undetected for six months.


During the Kenyan election in 2017, journalists documented how Facebook struggled to rein in the spread of ethnically charged hate speech, an issue researchers say the company is still failing to address. Adding to their worries now is the growing popularity of TikTok, which is also being used to inflame tensions ahead of the presidential vote in August.


In June, the Mozilla Foundation released a report outlining how election disinformation in Kenya has taken root on TikTok. The report examined more than 130 videos from 33 accounts that had been viewed more than 4 million times, finding ethnic-based hate speech as well as manipulated and false content that violated TikTok policies.


One video clip mimicked a detergent commercial in which the narrator told viewers that the “detergent” could eliminate “madoadoa,” including members of the Kamba, Kikuyu, Luhya and Luo tribes. Interpreted literally, “madoadoa” is an innocuous word meaning blemish or spot, but it can also be a coded ethnic slur and a call to violence. The video contained graphic images of post-election clashes from previous years.

After the report was published, TikTok removed the video and flagged the term “madoadoa,” but the episode showed how the nuances of language can elude human moderators.


A TikTok whistleblower told report author Odanga Madung that she was asked to watch videos in languages she did not speak and determine, from looking at images alone, whether they violated the company’s guidelines.


TikTok did not directly respond to that allegation when asked for comment by The Post, but the company recently issued a statement about its efforts to address problematic election content. TikTok said it moderates content in more than 60 languages, including Swahili, but declined to give additional details about its moderators in Kenya or the number of languages it monitors. It has also launched an operations center for Kenya with experts who detect and remove posts that violate its policies. And in July, it rolled out a user guide containing election and media literacy information.


“We have a dedicated team working to safeguard TikTok during the Kenyan elections,” Mgwili-Sibanda wrote. “We prohibit and remove election misinformation, promotions of violence and other violations of our policies.”


But researchers still worry that violent rhetoric online could lead to real violence. “One will see these lies really turn into very tragic consequences for people attending rallies,” said Irungu Houghton, director of Amnesty International Kenya.


Researchers say TikTok and Meta can get away with lower content moderation standards in Kenya, in part because Kenyan law does not directly hold social media companies responsible for harmful content on their platforms. By contrast, Germany’s Network Enforcement Act, popularly known as the Facebook Act, fines companies up to 50 million euros if they do not remove “clearly illegal” content within 24 hours of a user complaint.


“This is quite a gray area,” said Mugambi Laibuta, a Kenyan lawyer. “When you’re talking about hate speech, there’s no law in Kenya that states that these sites should enforce content moderation.”


If Meta and TikTok do not police themselves, experts warn, African governments will do it for them, possibly in anti-democratic and dangerous ways. “If the platforms don’t get their act together, they become convenient excuses for authoritarians to clamp down on them across the continent” and “a convenient excuse for them to disappear,” Madung said. “And we all need these platforms to survive. We need them to thrive.”



Copyright 2022 The Washington Post

