A deep-dive into content moderation solutions

It is no secret that online platforms can be hotbeds for hateful and toxic content. There is an ongoing search for the best practices to combat these behaviors, and so it is important to know what solutions are available right now.

Human Moderation

One of the more obvious solutions to the problem of online hate is to have a team of moderators review content against platform standards. This ensures that community standards are upheld, but it is more than likely to be a very time-consuming process.

Human moderation is also a reactive process and lacks autonomy: content is seen by members of the community before it is analyzed and a verdict is reached. So even if the solution is well tailored to the platform, it may not be much help in preventing exposure to online hate.

On top of users potentially being subjected to this toxic content, the moderators themselves have to read these comments constantly. Sifting through negative and at times upsetting comments and messages can take a heavy toll on moderators' mental health.

In 2020, Facebook was ordered to pay $52 million to current and former content moderators who developed Post-Traumatic Stress Disorder (PTSD) on the job. The settlement entitled these workers to up to $50,000 each in compensation for their mental health issues.

Apart from the large financial sums attached to this ruling, the implications for human moderation as a solution to online hate should not be overlooked. These adverse mental health effects make it a practice to rely on as sparingly as possible.

Even without lawsuits that order financial compensation, moderation can prove an expensive addition to community teams. If a platform receives around a million comments a month, it would take roughly 166 full 8-hour workdays to moderate them all. Even if an organization used the minimum number of employees possible and paid them only $7 an hour, the cost of labor would come to approximately $94,000 per year.
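To make the arithmetic behind these figures explicit, here is a minimal back-of-envelope sketch. The review rate and wage are illustrative assumptions (not figures from any specific platform); real throughput varies widely by platform and content type.

```python
# Back-of-envelope cost of purely human moderation.
# The review rate and wage below are assumptions for illustration only.

COMMENTS_PER_MONTH = 1_000_000
COMMENTS_PER_HOUR = 750       # assumed average review speed per moderator
HOURS_PER_WORKDAY = 8
HOURLY_WAGE = 7.00            # USD, a deliberately low-end wage

hours_per_month = COMMENTS_PER_MONTH / COMMENTS_PER_HOUR
workdays_per_month = hours_per_month / HOURS_PER_WORKDAY
annual_labor_cost = hours_per_month * 12 * HOURLY_WAGE

print(f"Moderation hours per month: {hours_per_month:,.0f}")     # ~1,333
print(f"8-hour workdays per month:  {workdays_per_month:.0f}")   # ~167
print(f"Annual labor cost:          ${annual_labor_cost:,.0f}")  # ~$112,000
```

A somewhat faster assumed review rate brings the annual figure closer to the $94,000 cited above; either way, the order of magnitude is the same.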

This also doesn't take into account the onboarding and training needed for each new employee. So even though this solution may seem the most accessible, the price, the scalability issues, and the lack of autonomy suggest otherwise.

Can keyword solutions do the job?

In an effort to solve the problems that come with human moderation, notably its lack of autonomy and scalability, some have turned to technology. The idea of using tech to make organizations' lives easier is nothing new.

In the case of moderation, keyword solutions certainly address the problems of scalability and automation. They can ensure that flagged content is never seen by users, and a list of words and phrases that are removed automatically scales far more easily than a human team.
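As a rough illustration of how this class of solution works (not any particular vendor's implementation), a keyword filter can be as simple as checking each incoming message against a blocklist. The entries and examples here are hypothetical:

```python
# A deliberately naive keyword filter, for illustration only.
# Real keyword systems are larger, but the core mechanism is the same:
# match against a maintained list, then remove.

BLOCKLIST = {"idiot", "trash", "kill"}  # hypothetical entries

def keyword_verdict(message: str) -> str:
    """Return 'remove' if any blocklisted word appears, else 'keep'."""
    words = message.lower().split()
    return "remove" if any(w.strip(".,!?") in BLOCKLIST for w in words) else "keep"

print(keyword_verdict("You are an idiot."))            # remove (intended catch)
print(keyword_verdict("This workout will kill you!"))  # remove (false positive: harmless banter)
print(keyword_verdict("You're a total 1d1ot"))         # keep   (miss: obfuscated spelling slips through)
```

The false positive and the missed obfuscation in the last two calls preview exactly the problems discussed next.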

However, this shift to keyword solutions introduces new problems of its own. The most prominent is understanding the context of comments and messages (or, in most cases, the lack of that understanding).

Keyword solutions do a great job of identifying words that would be classed as hateful and removing them in real time. But as language and communication evolve, human review is still necessary to establish the context and decide whether the removed content was actually hateful.

This may make it easier for human moderators to reactively restore comments that were wrongly deleted, but it can also stymie a conversation. More importantly, too many incorrect removals can drive away users who feel unable to express their perspectives on the platform.

The other big hurdle that organizations and moderators face with keyword moderation is the need for both a technical expert and a language expert. The technical expert has to make sure the rules behind this type of solution are implemented correctly, and the language expert has to make sure the right words and phrases are in the database in the first place.
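To see why both skill sets matter, consider a hypothetical rule set that tries to catch obfuscated spellings. The patterns here are invented for illustration; writing them correctly is a technical job, while deciding which variants belong in the list is a linguistic one.

```python
import re

# Hypothetical rules trying to catch common obfuscations of blocked words.
# A mistake in the regex (technical) or a missing variant (linguistic)
# silently changes what gets removed.
RULES = [
    re.compile(r"\bid[i1!]ot\b", re.IGNORECASE),   # idiot, id1ot, id!ot ...
    re.compile(r"\bl[o0]ser\b", re.IGNORECASE),    # loser, l0ser ...
]

def matches_any_rule(message: str) -> bool:
    return any(rule.search(message) for rule in RULES)

print(matches_any_rule("what an Id1ot"))   # True
print(matches_any_rule("what an eejit"))   # False: a variant nobody has added yet
```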

So while keyword moderation can aid human moderators, it does not work as a standalone solution. At best it is a supplementary technology that helps with scalability and automation; at worst it is something that stops meaningful interactions on a platform.

Autonomy, accuracy, context and customization

Looking at the two most prevalent solutions for combating online hate and creating safer online communities, there are clear disadvantages to both. The challenge is to find a moderation solution that keeps the best of both worlds.

There needs to be autonomy, to protect communities, platforms, and advertisers from being exposed or linked to online hate. But this cannot come at the expense of accurate, contextual review. And it must be possible to adapt to the needs of different communities and platforms without hiring an entirely new team each time.

Artificial intelligence (AI) moderation is meant to make lives easier by having a solution that can autonomously replicate human review. It is with this principle in mind that Bodyguard was developed, and where its technology makes the difference.

Bodyguard offers the attractive scalability and autonomy of keyword solutions, combined with the context-awareness and accuracy of human moderation. The technology is available to independent applications and communities, but it can also work for communities already established on major social media platforms such as Facebook, Twitter, Twitch, Instagram and YouTube.

Bodyguard's technology first cleans the text (recognizing internet slang, emojis and typos) and analyzes it for potentially toxic words and phrases. It then goes a step further and contextualizes the message based on the entire phrase.

This mirrors the process a human moderator goes through when analyzing content. From there, the technology applies custom filters set by the community manager and returns a "keep" or "remove" verdict.
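As a purely illustrative sketch (not Bodyguard's actual code or API), a pipeline of this shape, clean, analyze, contextualize, then apply the community's own filters, could be outlined like this. Every function, word list, and threshold is hypothetical:

```python
# Illustrative outline of a contextual moderation pipeline.
# It only mirrors the clean -> analyze -> contextualize -> custom-filter
# steps described above; none of this is the vendor's implementation.

from dataclasses import dataclass

@dataclass
class CommunityFilters:
    remove_insults: bool = True
    remove_profanity: bool = False   # e.g. a gaming community may tolerate profanity

SLANG = {"ur": "your", "gr8": "great"}   # tiny stand-in for slang/typo cleanup
INSULTS = {"idiot", "loser"}
PROFANITY = {"damn"}

def clean(text: str) -> list[str]:
    """Normalize case, expand a few slang terms, strip basic punctuation."""
    words = text.lower().replace("!", "").replace(".", "").split()
    return [SLANG.get(w, w) for w in words]

def moderate(text: str, filters: CommunityFilters) -> str:
    words = clean(text)
    # Contextual cue: insults aimed at a person ("you ...") weigh heavier
    # than the same word used about a thing or a game.
    targeted = "you" in words or "your" in words
    if filters.remove_insults and targeted and any(w in INSULTS for w in words):
        return "remove"
    if filters.remove_profanity and any(w in PROFANITY for w in words):
        return "remove"
    return "keep"

filters = CommunityFilters()
print(moderate("ur an idiot!", filters))                    # remove: targeted insult
print(moderate("the boss fight is idiot-proof", filters))   # keep: not aimed at a person
```

The point of the sketch is the shape of the flow, per-community filters layered on top of contextual analysis, rather than any particular rule.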

This level of contextualization, accuracy, customization, and autonomy drastically cuts the cost of moderation, and it frees up community management teams to focus on growing and fostering positive communities rather than playing editor.

The need for moderation continues to grow, and luckily many solutions are available to help. The challenges moderators and communities face today vary widely, but Bodyguard's ability to provide autonomous, human-like moderation solves the hardest of them.

Find out more about Bodyguard’s autonomous, human-like solution by booking a demo!