Bodyguard.ai protects French Open players from online abuse

Bodyguard.ai is proud to be protecting players at the 2023 French Open tennis tournament from online abuse and toxic content, so they can focus on playing at their peak.

Online abuse is rife in sport

Online abuse affects few people as much as professional athletes.

Win or lose, sportspeople in every discipline receive a barrage of messages after each event. Many are from fans and supporters. But trolls and haters leave toxic comments that can impact players' mental health and performance. And sometimes, the strain gets too much.

Take Japanese professional tennis star Naomi Osaka. Osaka had just won her second Australian Open title and was seeded second in the upcoming 2021 French Open when she unexpectedly withdrew from the competition. The reason? Her declining mental health.

It seemed the pressure to perform on the court had won, made worse by how easily people could voice their opinions on social media.

And Osaka isn't the only tennis player to suffer from toxic content. The former women's world number three, American star Sloane Stephens, has previously shared her experience of online racist abuse, saying that it has "never stopped."

"If anything, it's only gotten worse", Stephens said. "I have a lot of key words banned on Instagram and all of these things. But that doesn't stop someone from just typing in an asterisk or typing it in a different way, which obviously software most of the time doesn't catch."

The expectation for the world's top athletes to bring their A-game every time is growing. And so is the online abuse they get. A 2022 study by UK communications regulator Ofcom and the Alan Turing Institute, which analysed more than two million tweets over a five-month period, found that seven in ten Premier League footballers received at least one abusive tweet during that time. Seven per cent of players received abuse daily.

Until recently, it seemed little could be done to stop the influx of toxic comments. Players could delete their social media accounts, but they'd risk losing thousands of dollars in sponsorship deals. The alternative? Avoiding the comments altogether. And, let's be honest: how many of us could resist the temptation to see what people were saying about us, both good and bad?

Not just 'part of the game'

Fortunately, things are changing. With the help of AI-powered Natural Language Processing technology, athletes don't have to accept online abuse as just part of the game.

Bodyguard.ai's content moderation solution protects sports teams and players from the negative effects of social media and toxic content that can impact their performance and mental health.

By crawling millions of messages in real time, our solution flags and removes toxic comments instantly, so players don't have to be exposed to them.
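For the technically curious, here is a highly simplified sketch of what a real-time moderation loop of this kind could look like. Everything in it, from the function names to the toy toxicity score and the threshold, is a hypothetical stand-in for illustration; it is not Bodyguard.ai's actual system:

```python
# Hypothetical sketch of a real-time moderation loop. The classifier,
# comment feed, and hide action are toy stand-ins, not Bodyguard.ai's API.

TOXICITY_THRESHOLD = 0.8  # assumed cut-off; real systems tune this carefully

def score_toxicity(text: str) -> float:
    """Toy heuristic standing in for a trained ML classifier."""
    cues = ("hate", "loser", "trash")
    hits = sum(cue in text.lower() for cue in cues)
    return min(1.0, hits / 2)

def hide_comment(comment_id: str) -> None:
    """Stand-in for the platform call that hides or deletes a comment."""
    print(f"Hiding comment {comment_id}")

# A toy batch standing in for a live comment stream.
incoming_comments = [
    {"id": "1", "text": "Great match, congratulations!"},
    {"id": "2", "text": "You're a loser and everyone hates you"},
]

for comment in incoming_comments:
    if score_toxicity(comment["text"]) >= TOXICITY_THRESHOLD:
        hide_comment(comment["id"])  # only comment "2" is hidden
```

In production, the loop runs continuously against the platforms' comment feeds, which is what makes it possible to remove a toxic message before the player ever sees it.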

This year, we've teamed up with the Fédération Française de Tennis to protect players at the Roland-Garros Grand Slam from online abuse.

Players of every ranking, including the women's world number one Iga Swiatek, have endorsed Bodyguard.ai and shared publicly how it has helped them to focus on their game during the tournament.

Want to protect your team from online toxicity? Talk to us. We're listening...

The world is waking up

Our solution has also attracted international media attention, in a positive sign that the world is recognising the potential negative effects of social media, as well as its good side.

US-based non-profit media organisation National Public Radio (NPR) is the latest media outlet to cover tennis players' experience of online abuse and how it can be stopped. Bodyguard.ai's co-founder and president Matthieu Boutard (MB) joined the conversation to tell the broadcaster how we've been protecting players at this year's French Open, and what makes AI so effective in recognising toxicity.

MB: "What's important is not the keywords that are used. It is very much who the message targets and what's actually toxic inside it. And if we think that it's toxic based on different criteria, we would actually remove the content from social media. AI is a lot more complex in a sense that it understands context, which is pretty much the essence of moderation. So it's a very different ballgame."

Whilst athletes in every sport experience online abuse, tennis seems particularly brutal. With no teammates to share the blame or the glory, players face online hate aimed directly at them.

MB: "Being a tennis player is very difficult. It's an individual sport. So if you lose a game, that's your fault. You're very exposed because a lot of people are actually betting on sport and tennis specifically, which means a lot of haters going after you if you lose a point, if you lose a set or if you lose a game."

Now that content moderation is making headlines more than ever, the question of censorship has inevitably been raised. How much moderation is too much? Is it eroding freedom of speech? When does content moderation become censorship?

Bodyguard.ai is intelligent enough to know the difference between criticism and hateful comments.

MB: "We don't remove criticism, but we remove its toxicity. The line is actually pretty clear. If you start throwing insults, being racist, attacking a player, using body-shaming; that's not a criticism, and that's actually toxic to the player."

The statistics around messages blocked during this year's tournament prove just how effective Bodyguard.ai is.

MB: "Out of all the messages that were sent to the players, 10 per cent of them were toxic. And we blocked more than 95 per cent of them."

Listen to the interview with NPR below.

Bodyguard.ai x NPR interview