October 25, 2023
Innovation is one of Bodyguard’s core values. We show it by continuously improving our solution, and constantly evaluating how we can offer something more to our customers.
We seized the potential of Artificial Intelligence to help create a safer, healthier and more positive online space for people and businesses. Now, we’re making our moderation even more valuable for users, by leveraging one of the freshest technologies around: Large Language Models (LLMs).
We’ve used an LLM to create the newest feature of Bodyguard: Post Scoring. Post Scoring takes the functionality and value of Bodyguard to the next level, offering our customers much more than moderation. In fact, it has the power to change the way they think about their social media completely.
Post Scoring lets users obtain a predictive toxicity score for any content they plan to post on social media, based on LLM technology. It couldn’t be easier to use, but it makes a serious difference to how our customers can approach their social media.
Using LLM technology to analyse and understand the content, Post Scoring assigns a score to each social media post, so that users can anticipate the kind of response it is likely to receive. The post content is analysed, and a score is instantly generated based on key criteria, including keywords, topics, entities and celebrity mentions. The score indicates how ‘risky’ a post is in terms of attracting negative attention, criticism, hateful comments, or scam and spam messages. Post Scoring works in multiple languages, in addition to English.
The process is easy: based on the post’s predictive score, users can decide whether to make changes to the content and improve its score (decreasing the risk of toxicity), or go ahead and post the content as it is.
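To make the idea concrete, here is a minimal sketch of what a predictive risk score over the criteria mentioned above (keywords, topics, celebrity mentions) might look like. Bodyguard’s actual scoring is produced by an LLM and is not public; the function name, keyword lists and weights below are purely illustrative.

```python
# Illustrative only: Bodyguard's real scoring uses an LLM, not hand-tuned
# rules. All names, word lists and weights here are hypothetical.

RISKY_KEYWORDS = {"scam", "giveaway", "hate"}
SENSITIVE_TOPICS = {"politics", "religion"}

def score_post(text: str, topics: list[str], mentions_celebrity: bool) -> int:
    """Return a 0-100 risk score; higher means more likely to attract toxicity."""
    score = 0
    # Normalise words in the post and check them against risky keywords.
    words = {w.strip(".,!?").lower() for w in text.split()}
    score += 25 * len(words & RISKY_KEYWORDS)
    # Sensitive topics and celebrity mentions each add to the risk.
    score += 20 * len({t.lower() for t in topics} & SENSITIVE_TOPICS)
    if mentions_celebrity:
        score += 15
    return min(score, 100)
```

For example, a post containing the word “giveaway”, tagged with a political topic and mentioning a celebrity would score 25 + 20 + 15 = 60 under this toy scheme, prompting the user to consider rewording before publishing.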
LLMs are one of the most talked-about branches of Artificial Intelligence in tech right now.
LLMs are machine learning models trained to understand natural language using large amounts of data and text (hence the name). Through training, they become able to understand, analyse, translate, predict and generate content themselves, making them useful for a wide variety of tasks. ChatGPT is one of the best-known examples of an LLM in action.
When it comes to content moderation, LLMs ensure that text is accurately interpreted, so that the appropriate action can be taken on a post, for example, removing a comment it recognises as hateful.
Put simply, LLMs are no-nonsense, proven technology that makes a tangible difference to the effectiveness of content moderation.
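The classify-then-act step described above can be sketched as follows. The classifier here is a toy stand-in for an LLM, and the labels, marker words and helper names are all hypothetical, not Bodyguard’s actual pipeline.

```python
# Hypothetical moderation step: classify a comment, then act on the label.
# The classifier is a toy stand-in for an LLM; names are illustrative.

def classify(comment: str) -> str:
    """Toy stand-in for an LLM text classifier."""
    hateful_markers = {"hate", "idiot"}
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return "hateful" if words & hateful_markers else "neutral"

def moderate(comment: str) -> str:
    """Take the appropriate action on a comment based on its label."""
    label = classify(comment)
    return "remove" if label == "hateful" else "keep"
```

The value of an LLM in this pipeline is entirely in the `classify` step: because the text is accurately interpreted, the simple action rule that follows can be trusted.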
Post Scoring empowers users to make better decisions when it comes to their social media, and brings value beyond the score itself:
Post Scoring shows additional topics that might be associated with the content being posted. This helps users understand further why their post has been given a certain score, and why it might be considered risky, even if the original content seems innocuous.
To enhance the effectiveness of Post Scoring even further, users can combine the feature with our Alerting functionality, which sends email alerts about unusual peaks in commenting activity, making the user aware of the behaviour as it happens.
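As a rough illustration of what detecting an unusual peak in commenting activity can mean, here is a minimal sketch over hourly comment counts. The Alerting feature’s actual method and thresholds are not public; the function name and the threshold factor are assumptions.

```python
# Illustrative spike detection over hourly comment counts; the real
# Alerting feature's method and thresholds are not public.

def unusual_peaks(hourly_counts: list[int], factor: float = 3.0) -> list[int]:
    """Return indices of hours whose count exceeds `factor` times the
    average of all preceding hours."""
    peaks = []
    for i in range(1, len(hourly_counts)):
        baseline = sum(hourly_counts[:i]) / i
        if baseline > 0 and hourly_counts[i] > factor * baseline:
            peaks.append(i)
    return peaks
```

With counts like `[10, 12, 11, 90]`, the final hour is roughly eight times the running average and would trigger an alert, while ordinary hour-to-hour variation would not.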
With Post Scoring and Alerting activated, users can be confident they have a robust safety net which allows them to avoid, anticipate and respond to toxicity on their social media. Combined with Bodyguard’s rigorous moderation, users will know they are benefitting from the most comprehensive and effective content moderation available.
Post Scoring is available as part of Bodyguard’s Advanced Package. Whether you’re newly discovering Bodyguard and want to take control of your social media moderation for the first time, or you’re an existing customer who wants to maximise the power of your moderation, we’re here to help. Talk to us about your moderation needs today and let's get started!