Community Managers: Their challenges and how to solve them

Imagine waking up at 3 a.m. to check your phone in case a new crisis needs solving. Imagine sleeping 3 hours a night to make sure you don’t lose clients. Imagine having to read hundreds of toxic comments every week, and trying not to let them affect you. Imagine being a community manager.

Bastien

Being the gatekeeper of an online community isn't for the faint-hearted. Constant exposure to toxic content, combined with a large and ever-growing volume of messages that need checking immediately, makes for a demanding and stressful job.

Community managers play an essential role in the strategy and day-to-day life of a business. It isn't an easy job, and it comes with a lot of responsibility. Community managers need to nurture their company's online following, promote engagement, encourage user-generated content, and keep the community together, all while looking out for the company’s e-reputation and striving to keep it clean.

We've seen PR crises originate within a company's social media many times. Whether it's toxic content that has been posted and spiralled out of control, or a debate between community members that has quickly turned ugly, social media and online platforms can be a ticking time bomb for a community manager.

Beyond the immediate dent in the company’s image, poorly managed online communities cause other long-lasting damage: a decline in user acquisition and retention, loss of advertising revenue, and a lack of user engagement. Often, these negative effects could have been prevented, and automated moderation is just one of the ways to do so.

Here are some of the key challenges community managers face, and how they can be solved.

Constant exposure to toxic content

Toxic content has the power to destroy a community if it's left unaddressed. Hateful, hurtful or toxic comments have no place on a company’s social media channels, yet we’re dealing with more toxic content than ever on social media and the Internet in general.

Once they're given a voice, many keyboard warriors hide behind the anonymity of the internet to verbally attack, harass, bully, and slam others. They use insults, threats, body-shaming, sexual harassment, and racist, misogynistic or homophobic remarks to victimize other users and create conflict. It seems like nothing is off limits for the worst internet trolls.

Community managers spend hours and hours every day manually scanning and analyzing content, in order to detect and prevent anything toxic, so that they can protect their communities.

The constant exposure to toxic content takes its toll on our community gatekeepers. Interviews with community managers show that most of them sink under the enormity of their workload, which seems to get bigger every day.

Efforts to improve staff well-being, like regular breaks, aren’t widely adopted among businesses with online communities. Community managers, many of whom aren’t properly trained, are often simply left to get on with it. This can come at a significant human cost: according to research, repeated exposure to online negativity combined with challenging working conditions increases the risk of developing anxiety, depression, stress disorders, heart disease, and even substance abuse.

Beyond the human cost, brands that fail to moderate their content effectively can lose out on profits due to alienated customers. In itself, human moderation is a costly, inefficient process.

Constantly increasing content

The way many community managers currently work is neither scalable nor sustainable, and pretending otherwise isn't realistic. Businesses have to address this issue so that their community managers can focus on what’s most important and maintain good mental health.

By automating much of the content review process, companies could actually have more effective moderation and free up their community managers’ time, while also shielding them from unnecessary toxicity.

Instead of manually filtering through thousands of messages every day or week, community managers could intervene only when necessary, and spend the rest of their time finding new ways to engage their community and get more value from their social media activity.
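As a rough illustration of what "intervening only when necessary" can look like in practice, here is a minimal sketch of an automated review loop. The classifier, categories and thresholds are hypothetical placeholders, not a description of any particular product.

```python
# Hypothetical sketch of an automated review loop: the classifier handles the
# bulk of the traffic and only ambiguous messages reach a human.
from dataclasses import dataclass


@dataclass
class Verdict:
    category: str      # e.g. "clean", "insult", "hate_speech"
    confidence: float  # 0.0-1.0, produced by the (hypothetical) classifier


def classify(message: str) -> Verdict:
    """Placeholder for a real toxicity model or third-party moderation call."""
    if "idiot" in message.lower():
        return Verdict("insult", 0.95)
    return Verdict("clean", 0.99)


def moderate(message: str) -> str:
    verdict = classify(message)
    if verdict.category == "clean":
        return "publish"     # no human time spent
    if verdict.confidence >= 0.9:
        return "remove"      # clearly toxic, removed automatically
    return "escalate"        # only borderline cases reach a community manager


for msg in ["Great stream tonight!", "You absolute idiot"]:
    print(f"{msg!r} -> {moderate(msg)}")
```

With a flow like this, the vast majority of messages never need a human decision at all; the community manager only sees the handful that the system genuinely can't judge on its own.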

What can stay and what should go?

Another challenge community managers face is deciding what is and isn’t acceptable when users post content on the company's social media channels. What can users say? What should be restricted? The responsibility that comes with this role is directly related to the potential impact on the business if things go wrong.

Businesses often rely on their community manager’s moral compass when it comes to deciding what is acceptable for their community and what should be removed. But that approach is far from ideal. When things go wrong and the blame lies with one individual, it casts doubt over their judgement and moral compass: an unfair position for any employee to be in. Additionally, if you have more than one community manager, whose moral compass will have the deciding vote?

To avoid this, the best thing to do is to put clear community guidelines in place.

Applying community guidelines to your platforms

Community guidelines are a set of rules for your community members that determine what users can and cannot do or say. If a user violates the community guidelines by posting an offensive or illegal comment, there are consequences: they might get a temporary ban, or even be blocked permanently from the community. Having these rules in place encourages members to keep channels clean and safe for everyone.

Studies have shown that 72% of users are unlikely to return to a platform if they encounter toxic content. Every time a negative comment appears, more people are likely to leave your page. This is why community managers invest so much effort into keeping online communities clean and free of hate. Community guidelines help keep your site safe by adding an additional layer of rules and giving your community managers the confidence and authority to enforce those rules.

Good community guidelines address two things. Firstly, they define what kind of content is not allowed on your platform and what users cannot post.

Site users need to know what they are allowed to say or do on the site, and what is considered disrespectful to the community. It’s important to distinguish between the different kinds of behaviour that should be banned from your page: spamming or trolling, for example, would not be treated in the same way as racism or sexism.

Secondly, community guidelines need to establish the consequences when someone breaks the rules. If your website promotes inclusion, a comment on an individual's physical appearance would not be tolerated; but if your page is a fashion blog, some criticism would be normal. Depending on the action or comment and the kind of site you run, the consequences will be more or less severe.
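To make these two parts concrete, guidelines can be thought of as a mapping from violation categories to consequences, with the severity tuned to the kind of site you run. The categories and penalties below are purely illustrative, not a recommended policy.

```python
# Illustrative only: example guidelines for a hypothetical community site.
# Each category maps to the action taken on a first and on a repeat offence.
COMMUNITY_GUIDELINES = {
    "spam":         {"first": "warning",       "repeat": "temporary_ban"},
    "trolling":     {"first": "warning",       "repeat": "temporary_ban"},
    "body_shaming": {"first": "temporary_ban", "repeat": "permanent_ban"},
    "racism":       {"first": "permanent_ban", "repeat": "permanent_ban"},
    "sexism":       {"first": "permanent_ban", "repeat": "permanent_ban"},
}


def consequence(category: str, is_repeat: bool = False) -> str:
    """Look up the consequence for a violation, defaulting to a warning."""
    rule = COMMUNITY_GUIDELINES.get(category, {"first": "warning", "repeat": "warning"})
    return rule["repeat"] if is_repeat else rule["first"]


print(consequence("racism"))                # permanent_ban
print(consequence("spam", is_repeat=True))  # temporary_ban
```

Writing the rules down this explicitly, even informally, removes the guesswork: no single community manager's moral compass has to carry the decision alone.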

Community guidelines set out rules, but they can’t prevent toxic content from reaching your platform. For that, you’ll still need someone or something to monitor the content and decide what does and doesn't adhere to the community guidelines.

How automatic moderation can support community managers

The best alternative to human moderation is automated moderation. That’s where we come in. At Bodyguard, prevention comes first. Our solution uses artificial intelligence (AI) to analyze, filter, and delete huge amounts of content in real time, before it reaches your community. It can be integrated with social networks and platforms of all sizes. This way, community managers and users alike are spared excessive exposure to toxicity.

Community managers can easily customize our solution and manage accounts from a single dashboard, saving them time while reducing business costs. Our solution also comes with a wide range of metrics to help managers learn more about their communities and identify which types of content generate the most toxic and the most positive reactions.

Our key features at a glance:

  • AI moderation: our solution intelligently filters out content and then shows you how it’s been classified and what action has been taken.

  • Contextual analysis: Bodyguard analyses text, typos, deliberately misspelled words, and emojis in context.

  • Classification: easily organize messages into categories of severity.

  • Live streaming protection: automatically moderate millions of comments in real time during live events.

  • Dashboard: get access to insightful metrics that help you foster respect and engage your community.

  • Easy integration: deploy Bodyguard with minimal fuss via an API on almost any platform (see the sketch after this list for what such an integration can look like).
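The details of an integration depend on your platform, but the general pattern is to check each incoming comment with the moderation API before it is published. The endpoint URL, token and response fields in the sketch below are invented for illustration and are not Bodyguard's actual API; refer to the official documentation for the real integration.

```python
# Purely hypothetical sketch of checking a comment with a moderation API
# before publishing it. The endpoint, token and response fields are invented;
# consult the provider's documentation for the real integration details.
import requests

MODERATION_ENDPOINT = "https://moderation.example.com/v1/analyze"  # hypothetical
API_TOKEN = "YOUR_API_TOKEN"                                       # hypothetical


def should_publish(comment_text: str) -> bool:
    """Return True if the moderation service says the comment can stay."""
    response = requests.post(
        MODERATION_ENDPOINT,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"text": comment_text},
        timeout=5,
    )
    response.raise_for_status()
    result = response.json()  # e.g. {"action": "keep"} or {"action": "remove"}
    return result.get("action") == "keep"


if __name__ == "__main__":
    print(should_publish("Loved this post!"))
```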

To find out more about our services and how we can help make community managers’ lives easier, visit our site here.
