Africa is beginning to hold social media companies to account

Legal action has been launched in Nairobi, Kenya, against Facebook's owner, Meta. The petitioners claim that, because of the way Facebook's algorithms recommend content, the company promoted posts that fuelled ethnic violence and killings in Ethiopia. Over the past two years thousands of civilians have died in Ethiopia's various conflicts, most prominently in the Tigray region. The petitioners want to compel Meta to create a $1.6 billion victims' fund.
The case was filed at the end of 2022, a volatile year for social networks that saw Elon Musk's takeover of Twitter and the platform's deteriorating content moderation.
While the Meta case is the first of its kind in Africa, it reflects a growing demand for social media companies to be held accountable for online harm and its impact on life offline. Social media use has grown considerably across the continent – around 12 per cent of Africans were on Facebook in 2020, making it the most popular platform after WhatsApp and YouTube. Yet the networks are playing catch-up on users' safety needs.
Many African governments have sought to ally themselves with these companies to restrict users' right to freedom of expression, which most of them see as a threat. Some have shut down the networks when their hold on power was threatened, or over election-related concerns. Facebook is still blocked in Uganda almost two years after the violent elections of 2021.
Increased internet penetration has allowed marginalized social groups to grow their online presence. But they continue to struggle against profit-driven social media companies whose community standards don't serve all communities. For years, women, queer activists and ethnic and racial minorities have spoken out about this inaction. Meanwhile, hateful pockets of internet users continue to inflict harm in what social networks have become: de facto public squares.
In May 2022, Twitter became a breeding ground for anti-LGBTQI+ rhetoric when Senegalese footballer Idrissa Gana Gueye refused to wear a rainbow jersey during a match in France. The hate appeared to spill over into the streets when a video circulated showing a group of people beating a man in Dakar while hurling homophobic abuse. Similarly, when the Facebook group 'Homme Choc' – 'shocking man' in French – was found to carry violent misogynist content, including abuse of women with disabilities, Meta was slow to investigate the group's activities. The Senegalese activists who exposed the group were targeted with gendered disinformation campaigns.
Irene Khan, the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, has implored states to stem online violence. She notes that ‘people with intersecting marginalized identities, such as people of African descent, Indigenous people, Dalits, migrants, LGBTQ+ people and persons with disabilities face more frequent and more concerted attacks targeting their identities’, which limits their freedom of expression online.
It’s almost as if social media companies are beyond accountability. Without alternatives, major platforms now feel like a trap because minoritized people have built communities within them, yet these very spaces have never really existed to serve them.
Digital spaces owned and managed by private actors, yet positioned as serving a public good, lie at the core of the problem. Will 2023 offer a glimpse of alternative pathways? The Meta case – if the court grants the petitioners' request – could set a precedent for accountability.
This article is from the March-April 2023 issue of New Internationalist.