September 22, 2022
Everyone is entitled to their opinion. But when free speech tips into damaging language, it can become dangerous. As social media use grows, so does online hate. Groups and individuals have become targets because of their ethnicity, religion, gender, and sexual orientation.
And sadly, many social media platforms don't have the tools in place to moderate hate speech before it causes irreparable harm.
Social media is a convenient way to reach millions of users across the world. However, even as toxic content is increasingly used to target minority groups, moderation guidelines remain limited in their effectiveness. Without moderation, for example, a hateful campaign can go viral in moments and affect many people.
Plus, anyone can post on a social platform anonymously and within seconds the post can go viral. Manually monitoring bullying and aggression on social networks is almost impossible, both in terms of the time it takes and the possibility of human error. Not to mention, human moderators can be severely affected by reading hundreds of racist or homophobic messages every day.
The big platforms are making efforts to moderate content and remove toxicity, using machine learning algorithms to detect hate speech.
But currently, social platforms like Facebook detect only 20-40% of hateful content, and only in English. That means at least 60% remains online, still targeting vulnerable people.
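To make the detection side concrete, here is a minimal sketch of how automated toxicity classification typically works, using the open-source unitary/toxic-bert model from the Hugging Face transformers library. This is purely illustrative: the model choice and the threshold are assumptions for demonstration, not Facebook's or Bodyguard's actual system.

```python
# Illustrative only: a minimal toxicity filter built on the open-source
# "unitary/toxic-bert" model. The model choice and the 0.8 threshold are
# assumptions for demonstration, not any platform's production system.
from transformers import pipeline

# Load a pre-trained multi-label toxicity classifier; top_k=None returns
# a score for every label (toxic, insult, identity_hate, ...).
classifier = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

def is_toxic(comment: str, threshold: float = 0.8) -> bool:
    """Return True if any toxicity label scores above the threshold."""
    scores = classifier(comment)[0]  # list of {'label': ..., 'score': ...}
    return any(s["score"] >= threshold for s in scores)

for comment in ["Have a great day!", "You people don't belong here."]:
    print("FLAGGED" if is_toxic(comment) else "OK", "-", comment)
```

Even a strong classifier like this only covers the languages and slang it was trained on, which is one reason detection rates drop so sharply outside English.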
There are many examples of the destructive impact hateful content can have on communities, ranging from incitement to violence and the spread of propaganda to emotional abuse, and even the encouragement of murder and suicide.
One example is the ‘Fetrah’ campaign on Twitter. This anti-LGBT (lesbian, gay, bisexual, and transgender) movement spread rapidly there even after Facebook and Instagram banned its accounts. Its logo, a blue and pink flag representing the male and female genders, rejects all other identities. Created by three Egyptian marketers, the campaign seeks to harm gay and transgender people and groups, particularly in the Middle East.
Another example was Facebook’s inability to detect hate speech in the East African language Kiswahili (Swahili). The non-profit group Global Witness submitted a series of test ads to Facebook that referred to rape, beheadings, and extreme physical violence, and Facebook approved them (some were even in English). While the ads were never published, since this was a test, it exposed the gap that exists in content moderation on social media.
These examples demonstrate that the giant social platforms are still failing to effectively monitor and filter out hateful content. Every user deserves maximum protection through reliable content moderation systems.
While 40% of users leave a platform after their first exposure to toxic language, many stay and participate, resulting in collective abuse. Research shows that even when users report hateful comments, one-third are never deleted. The result is that many marginalized groups and people are left vulnerable to online hate. There's little doubt that improved moderation standards are needed, particularly on platforms that have a major influence on users’ psyches.
Bodyguard believes in freedom of speech and the right for people to use the internet freely, without receiving harmful and hateful comments.
Bodyguard is a unique moderation solution that is both instantaneous and intelligent, combining the speed of a machine with the subtlety and nuance of human judgment to assess and evaluate toxic content.
Bodyguard's moderation detects toxic content in real time and moderates it immediately, eliminating the possibility of human error. More effective than employing human moderators alone, Bodyguard's objective is to have a positive social impact on our society.
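To illustrate the general pattern behind pairing machine speed with human nuance, here is a hypothetical human-in-the-loop sketch: confident decisions are applied instantly, while ambiguous items are escalated to a person. The thresholds and the score_toxicity() stub are illustrative assumptions, not Bodyguard's actual design.

```python
# Hypothetical human-in-the-loop moderation sketch. The thresholds and the
# score_toxicity() stub are illustrative assumptions, not Bodyguard's design.
from dataclasses import dataclass

AUTO_REMOVE = 0.95   # confident enough to act without a human
NEEDS_REVIEW = 0.50  # ambiguous: escalate to a human moderator

@dataclass
class Decision:
    comment: str
    action: str  # "remove", "review", or "allow"

def score_toxicity(comment: str) -> float:
    """Stub standing in for a real classifier (see the earlier sketch)."""
    return 0.97 if "don't belong" in comment else 0.10

def moderate(comment: str) -> Decision:
    score = score_toxicity(comment)
    if score >= AUTO_REMOVE:
        return Decision(comment, "remove")  # acted on in real time
    if score >= NEEDS_REVIEW:
        return Decision(comment, "review")  # a human judges irony and context
    return Decision(comment, "allow")

print(moderate("You people don't belong here."))  # action='remove'
```

The design point is the middle band: machines handle the clear-cut cases at speed, and only genuinely ambiguous content reaches a human, sparing moderators from reading the worst material all day.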
For more information on how Bodyguard can help your business, contact us.
Charles founded Bodyguard in 2018 to combat cyberbullying and protect internet users from online toxicity. Under his leadership, Bodyguard has become the world’s leading social monitoring and content moderation solution, safeguarding social media for global brands and billions of followers. A Forbes 30 under 30 honoree, Charles is dedicated to balancing free speech and protection from hate, and regularly speaks at industry events and in the media.