While Information Technology (IT) has made access to information and methods of communication fast, reliable and convenient, it has also led to developments of a sinister kind: the emergence of hate speech and fake news. With roughly half the world connected by the World Wide Web (WWW) according to internet penetration analyses, Chinese budget-oriented smartphone companies capturing more than 50% of the market share, and electricity becoming accessible to many remote areas, censoring the aforementioned maladies is a major challenge.
It is not surprising that social media giants such as Facebook and Twitter had to shut down more than 1.5 billion and 70 million accounts respectively to ensure that fake news and hate speech did not radicalise people in different parts of the world. But how do these companies manage this humongous task? The answer lies in combining Artificial Intelligence (AI) with Human Intelligence (HI) to detect anomalies.
One way that social media entities detect fake news and hate speech is by asking users on their platforms to report abuse or flag comments that are ‘inappropriate’ or ‘horrifying’. Once a user flags a link, video or other content, it is up to the social media entity to detect traces of maleficent behaviour. Facebook has enlisted an AI system called Rosetta to assess the authenticity of a news item, photo or other content uploaded to the platform.
In plain English, Rosetta scans the words, pictures, language, font and date of a post, among other variables, and tries to determine whether the information being presented is genuine. The role of human annotators is without a doubt the most difficult and dangerous. Given that the AI system is not fully adept at understanding innuendo, references, slights and the context in which content was posted, it is up to human moderators to guide the AI system in discovering fake news and hate speech, which are often interspersed with depictions of murder, pornography and rape.
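The division of labour described above can be sketched in code. The following is a hypothetical illustration, not Facebook's actual Rosetta pipeline: the toy keyword-based classifier, the threshold values and all names are assumptions made purely to show the pattern of auto-actioning high-confidence detections while routing ambiguous cases to human moderators.

```python
from dataclasses import dataclass, field
from typing import List

# Assumption: a tiny phrase list stands in for a trained model.
SUSPECT_TERMS = {"miracle cure", "shocking truth", "they don't want you to know"}

def model_score(text: str) -> float:
    """Toy stand-in for an AI classifier: the fraction of suspect
    phrases found in the text. A real system would use a trained model
    over text, images and metadata."""
    text = text.lower()
    hits = sum(1 for term in SUSPECT_TERMS if term in text)
    return hits / len(SUSPECT_TERMS)

@dataclass
class ModerationQueue:
    auto_removed: List[str] = field(default_factory=list)
    human_review: List[str] = field(default_factory=list)
    cleared: List[str] = field(default_factory=list)

    def triage(self, post: str, remove_above: float = 0.6,
               review_above: float = 0.2) -> str:
        # High-confidence detections are actioned automatically;
        # ambiguous cases (innuendo, context) go to human moderators,
        # whose decisions would feed back into model training.
        score = model_score(post)
        if score >= remove_above:
            self.auto_removed.append(post)
            return "removed"
        if score >= review_above:
            self.human_review.append(post)
            return "review"
        self.cleared.append(post)
        return "cleared"
```

For example, a post hitting several suspect phrases would be removed outright, a single hit would land in the human review queue, and a clean post would pass through untouched. The key design choice is that the thresholds encode how much the platform trusts the model versus its human moderators.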
This methodology has already been adopted, or is being studied, by entities dealing with people’s opinions, comments and perspectives. While some content moderators have hailed the confluence of AI and HI in combating fake news and hate speech, others feel that much more needs to be done. Given that the lingua franca between people across the world is largely English, with a great deal of anglicisation, detecting malicious content is a superhuman endeavour that cannot be pursued to its fullest without stomping on people’s right to expression. Navigating the tightrope between data privacy and content moderation is another issue data analysts highlight.
However, the fight for a world of love instead of war, kindness instead of anger and bonds instead of bullets is on.