Biased Algorithms and Moderation are Censoring Activists on Social Media

Following Red Dress Day on May 5, a day aimed at raising awareness of Missing and Murdered Indigenous Women and Girls (MMIWG), Indigenous activists and supporters of the campaign found that posts about MMIWG had disappeared from their Instagram accounts.

In response, Instagram released a tweet saying that this was “a widespread global technical issue not related to any particular topic.” Creators, however, said that not all stories were affected. Many Black Lives Matter (BLM) activists were similarly frustrated when Facebook flagged their accounts while doing too little to stop racism and hate speech against Black people on its platform.

So were these incidents really technical glitches? Or did they result from the platforms’ discriminatory and biased policies and practices? The answer probably lies somewhere in between.

When an activist’s post is wrongly removed, several scenarios are possible.

First, the platform deliberately takes down activists’ posts and accounts, usually at the request of, or in coordination with, a government. In some countries and disputed territories, platforms have censored activists and journalists, allegedly to preserve their market access. Second, a post can be removed through a user-reporting mechanism. To handle unlawful or prohibited communication, social media platforms have primarily relied on user reporting. Applying community standards developed by the platform, content moderators review reported content and determine whether a violation has occurred.

The complexities of language also pose real challenges. In flagging content, users tend to rely on partisanship and ideology. The user-reporting approach is driven by the popular opinion of a platform’s users and can suppress the right to unpopular speech. Such an approach also emboldens a freedom to hate, where users exercise their right to voice their opinions while actively silencing others.

Third, platforms are increasingly using artificial intelligence (AI) to help identify and remove prohibited content. The idea is that complex algorithms using natural language processing can flag racist or violent content faster and better than humans can. During the COVID-19 pandemic, social media companies relied even more on AI to cover for the tens of thousands of human moderators who were sent home.
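To make the idea concrete, here is a minimal sketch of what such automated flagging can look like under the hood, using scikit-learn. The training posts, labels and decision threshold are all invented for illustration; production systems train far larger models on millions of labeled examples.

```python
# Minimal sketch of automated content flagging with a text classifier.
# All training posts, labels and the threshold are invented for
# illustration; real systems use far larger datasets and models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled examples: 1 = prohibited, 0 = acceptable.
texts = [
    "I hate you and everyone like you",
    "What a lovely day at the park",
    "You people should all disappear",
    "Thanks for sharing this article",
]
labels = [1, 0, 1, 0]

# Turn posts into word-weight features and fit a linear classifier.
vectorizer = TfidfVectorizer()
clf = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

# Score a new post and flag it above a fixed threshold. Note how a
# harmless post that merely shares a word with the prohibited class
# can end up flagged.
post = "I hate this weather"
score = clf.predict_proba(vectorizer.transform([post]))[0, 1]
print("flagged" if score > 0.5 else "allowed", f"(score={score:.2f})")
```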

Algorithmic biases

There’s a belief that AI systems are less biased and can scale better than human beings. In practice, however, they’re prone to error and can impose bias on a colossal, systemic scale.

In one study, researchers found that tweets written in African American English, which is commonly spoken by Black Americans, were up to twice as likely to be flagged as offensive compared with other tweets. Using a dataset of 155,800 tweets, another study found a similarly widespread racial bias against Black speech.
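That kind of disparity can be audited directly by comparing flag rates across groups. A minimal sketch follows, with entirely invented moderation records standing in for real logs; an actual audit would also test for statistical significance.

```python
# Sketch of an audit comparing flag rates across dialect groups.
# The records below are invented; a real audit would use actual
# moderation logs.
from collections import defaultdict

# Each record: (dialect group of the author, was the post flagged?)
records = [
    ("AAE", True), ("AAE", True), ("AAE", False), ("AAE", True),
    ("SAE", False), ("SAE", True), ("SAE", False), ("SAE", False),
]

flagged = defaultdict(int)
total = defaultdict(int)
for group, was_flagged in records:
    total[group] += 1
    flagged[group] += was_flagged

for group in total:
    print(f"{group}: {flagged[group] / total[group]:.0%} of posts flagged")
# With these toy numbers, AAE posts are flagged 75% of the time versus
# 25% for SAE -- the shape of disparity the studies describe.
```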

What’s considered offensive is bound to social context; terms that are slurs in some settings may not be in others. Algorithmic systems lack the ability to capture nuance and contextual particularity, and those nuances may not be understood by the human moderators who label the data used to train the algorithms either. This means natural language processing, often perceived as an objective tool for identifying offensive content, can amplify the same biases that human beings have.
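A toy keyword filter shows why stripping away context misfires. The word list and posts below are invented; real systems use statistical models rather than keyword lists, but they inherit the same problem when their training labels ignore dialect and context.

```python
# Sketch: why context-free keyword matching misfires.
# A naive filter flags any post containing a "violent" keyword,
# regardless of how the word is actually being used.
VIOLENT_KEYWORDS = {"kill", "killing"}

def naive_flag(post: str) -> bool:
    words = post.lower().split()
    return any(w.strip(".,!?") in VIOLENT_KEYWORDS for w in words)

posts = [
    "I'm killing it at work this week!",   # idiom, harmless
    "This comedian is killing me",         # idiom, harmless
    "I will kill you if you come here",    # genuine threat
]
for p in posts:
    print(naive_flag(p), p)
# All three posts are flagged, although only the last one is a threat.
```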

While AI is celebrated as an autonomous technology that can develop away from human intervention, it is inherently biased. The inequalities that underpin that bias already exist in society and influence who gets the opportunity to build algorithms and their databases, and for what purpose. As such, algorithms do not intrinsically give marginalized people a way to escape discrimination; they can instead reproduce new forms of inequality along social, racial and political lines.

Bias can infiltrate the process at any stage of algorithm design, from how the training data is collected and labeled to the thresholds chosen at deployment.

Including more people from diverse backgrounds in this process is one of the important steps toward mitigating that bias. In the meantime, it is important to push platforms to allow for as much transparency and public oversight as possible.
