
SOCIAL MEDIA AND RACISM

Author: Kaulik Mitra, II year of B.A., LL.B. from KIIT School of Law


Abstract

In today's day and age, everyone uses social media. There are now so many platforms, each with its own diverse functions and features, that social media has almost become an extension of real life: whatever exists offline now exists online as well. Naturally, this applies to the negatives of human nature as much as to the positives, and racism is one such evil that has made this unfortunate transition. This article looks at racism on social media, what the different platforms have done or are doing about it, the provisions currently in place, and what more can be done.

Keywords: social media, racism


Introduction

The recent Euro 2020 final and the reactions to England's loss drew a great deal of attention to the problem of racism on social media. The fact is that it has existed for a long time and, unfortunately, it is scarily common. In the England incident, Jadon Sancho, Marcus Rashford and Bukayo Saka all missed in the penalty shootout, handing the cup to Italy. Such failures invite criticism, which is harsh, for such is the nature of sport that one team has to lose, but also to some extent fair. It is only fair, however, when it is limited to the failure itself. The sad part is that some people seemed to notice only that all three of these players were 'of colour', so to speak. Needless to say, the colour of one's skin has no relation to one's actions, or in this case one's mistakes. All three are fine young men and exceptional footballers, the kind people should celebrate and cherish, not racially abuse. Yet within seconds of the match ending, they received an unbelievable amount of racial hatred: their comment sections and inboxes were filled with derogatory racial abuse and emojis with similar negative connotations.

The fact of the matter is that this was not an isolated incident. It is unnervingly regular and keeps happening, often below the level of public consciousness. Racially abusing someone should not be as easy as it appears to be. Racism is ultimately an extremely grave and serious problem, whose history and present-day repercussions are widely known. It is easy to say that racism on social media would not take place if the root of the evil, racism itself, were dealt with. That, however, is easier said than done, because the evil is so deeply entrenched within the fabric of some individuals, families and even communities.


Existing social media policies for racism

Social media platforms do, of course, have their own rules and regulations pertaining to racism, most commonly under the umbrella of hate speech. The most popular social media companies often own more than one platform; Facebook, for example, also owns Instagram and WhatsApp, yet their guidelines differ because the platforms themselves differ. Here is what Facebook's community guidelines say about hate speech: “we define hate speech as a direct attack on people - based on what we call protected traits: race, ethnic identity, national origin, disability, religious identity, caste, sexual identity, sexuality, gender identity and serious disease. We interpret attacks as violent or inhumane speech, harmful stereotypes, derogatory statements, expressions of contempt, hatred or exclusion, cursing, and calls for exclusion or segregation. We also prohibit harmful stereotypes that have historically been used to attack, intimidate, or exclude certain groups. We define content as inhumane comparisons that are often associated with offline violence.” Facebook also lists actions that users cannot take under these restrictions: violent speech or support for violence, in written or visual form, is prohibited, as is inhumane speech or description that compares, generalises or makes inappropriate behavioural statements about people, whether in written or visual form.[1]


Twitter says something similar, not allowing anything that promotes violence against, or directly attacks or threatens, other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. According to its policy, it also does not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories. An interesting difference between the two sites, however, is that Facebook's definition of hate speech is broad, covering “violent or dehumanising speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.”


Twitter, however, takes a narrower view, and bans only hate speech that could “promote violence against or directly attack or threaten other people on the basis of race” or other protected characteristics. Users can be penalised for “targeting individuals with repeated slurs, tropes or other content that intends to dehumanise, degrade or reinforce negative or harmful stereotypes about a protected category”.[2]


Instagram is a photo and video sharing platform, also owned by Facebook, with around 1.074 billion monthly users. Its policy is somewhat broader. The company's website says, “Our rules against hate speech don’t tolerate attacks on people based on their protected characteristics, including race or religion. We strengthened these rules last year, banning more implicit forms of hate speech, like content depicting Blackface and common antisemitic tropes. We take action whenever we become aware of hate speech, and we’re continuously improving our detection tools so we can find it faster”. Instagram also recognises that a lot of abuse actually takes place in private inboxes, or Direct Messages as they are called: “Between July and September of last year, we took action on 6.5 million pieces of hate speech on Instagram, including in DMs, 95% of which we found before anyone reported it”, the company adds. It is also working with law enforcement, especially in the UK, and will assist with information in cases whenever asked. Moreover, users who send racist messages will be prohibited from sending any messages at all for a set period, and repeat offenders can have their accounts disabled. Instagram also says it will disable new accounts created to get around these messaging restrictions, and will continue to disable accounts it finds that are created purely to send abusive messages.[3]


The video platform YouTube, owned by Google, is also quite strict about preventing racism. Its policies state clearly that hate speech is not allowed on YouTube and that content promoting violence or hatred against individuals or groups is removed. A long list of things rightly prohibited in videos on the platform is specified. Punishment for violating these regulations ranges from removal of content and demonetisation to temporary bans and even permanent deletion of the channel.[4]


Discussion

What can be improved?

Although these are the existing provisions against racism on the most popular social media platforms, it is important to note that racism on these platforms is still prevalent and even widespread. There is no doubt that further and constant improvement is needed in this field. The world's largest social networks say that racism is not welcome on their platforms, but the combination of weak rules and weak enforcement has allowed hate to flourish. Another significant problem is the reliance on automated moderators rather than actual human beings. A number of well-meaning users have been surprised by this when trying to report racist content. "Because we receive a large number of reports, our review team is unable to review your report," many users are told. "However, our technology found that this article cannot violate our community guidelines." Instead, they are advised to personally block the users who posted the abusive content or to mute the phrases so that they cannot see them. The posts are undeniably racist, yet there is no obvious way to attract actual human attention and force the issue. There have been calls for social media companies to use their expertise in artificial intelligence to detect racist messages as they are being written, and to urge users to think twice or avoid publishing them.[5]

Another idea that seems to be increasingly popular is to end anonymity on social media so that racists can be tracked. However, this is not a solution; instead of solving one problem, it can aggravate another. If social media companies and governments force users to reveal their legal identities, it will cause serious harm, especially to those who are already at risk, such as people of colour, women, and members of the LGBTQ+ community. For many people who have long been excluded from physical and cyberspace, or who have been marginalised and attacked, anonymity is a survival tool. For them, an anonymous account is the only option for interacting with the media, expressing themselves and sharing information online in a relatively secure way. Similarly, survivors of domestic violence have found a safe place online thanks to their ability to stay in touch and communicate while protecting their identity.[6] Numerous studies and testimonials have shown that for millions of people, online anonymity is essential to personal safety and freedom. The best we can do as responsible digital citizens is, first of all, to report racist content and also to make sure that we do not unintentionally spread or amplify it.[7]


Conclusion

Racism on social media is an extension of racist tendencies in real life. Hence the punishment must also extend from social media into real life. This means that, apart from the sanctions imposed by the social media platforms mentioned above, penalising offenders by the book of the law is also a need of the times. It is therefore important for law enforcement to take cases that occur on social media as seriously as those that take place in real life. UK law enforcement has specific provisions for acting against racism. In the United Kingdom, although a great deal of offensive material can be found on the Internet, only a small part of it is illegal.[8] When a crime defined by law is committed out of hate motives, online hate material is recorded by the police as a hate crime. When online material is motivated by hatred but does not reach the threshold of a crime, it is recorded as a hate incident. Law enforcement agencies like the police have a responsibility to promote good relationships between different parts of our communities, but they have no power to police offensive thoughts or words unless they are shared illegally. The Director of Public Prosecutions, who is responsible for deciding who should be prosecuted, has produced guidance for prosecutors to ensure consistency. That is the situation in the UK.[9]

In India, the legislation is not as well constructed or developed. “The Government of India in its affidavit dated 8 July 2015 before the Delhi High Court as well in written replies in the Rajya Sabha on 18 March 2015 and 26 July 2017 assured that the MHA was in the process of finalising a comprehensive bill for insertion of new sections of 153C and 509A in Indian Penal Code (IPC) to address racial attacks especially on the people from North-Eastern States.” Sections 153A and 295A of the IPC set out restrictions on freedom of expression, and these do cover statements that promote hatred on the grounds of race. Australia and Germany have some of the strictest laws and regulations on social media, imposing fines and imprisonment for "inaction against extremist hate speech" within a short period of time.[10]


At the same time, the EU has formulated a code of conduct to ensure that hate speech does not spread. Most other countries have struggled between issuing threats to technology companies and platforms and pursuing law enforcement responses against non-anonymous users, but given the transnational nature of the technology, consistent global regulatory standards will be needed to ensure uniform governance and oversight while still allowing for local conditions. With the emergence of new platforms such as TikTok, online media and social responsibility are constantly evolving, which makes the task all the harder. Considerations like these have led those in charge of the social media platforms to say that it is virtually impossible to weed racism out of their websites completely. Both Facebook CEO Mark Zuckerberg and Instagram head Adam Mosseri have publicly admitted that their platforms will never be fully rid of harmful content.[11] Ultimately, no matter how much the social media platforms improve their systems, the only way to eliminate racism completely is through education of the masses.


References

8. https://www.report-it.org.uk/reporting_internet_hate_crime
