Media monitoring is an important tool in this reputation-driven world. Influencers use it as part of a reputation management strategy, and brands employ it to understand what the world thinks of them and to run competitor analyses.
But it goes much deeper than that. In the right hands, a media monitoring campaign can help you understand social changes and get a grip on concerning trends such as hate speech.
The Rise of Online Hate Speech
According to the Anti-Defamation League, social networks and other online platforms have stepped up their efforts to combat hate speech.
We’ve come a long way since the early moderation tools used in forums, message boards, and online chat rooms.
As recently as 10 or 15 years ago, communities dealt with hate speech by detecting certain words. They would simply draw from a database of curse words and terms commonly used in hateful comments, then flag any comment that contained one. From there, the content could be blocked, and/or the poster could be warned.
This is essentially how some sites still operate, much to the frustration of many users, as there are several issues with this approach.
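That early keyword approach can be sketched in a few lines. This is a minimal illustration, not any particular site’s implementation, and the blocklist here is a hypothetical stand-in for the databases such systems drew from:

```python
# Minimal sketch of early keyword-based moderation.
# BLOCKLIST is a hypothetical stand-in for a database of flagged terms.
BLOCKLIST = {"curse", "slur", "insult"}

def flag_comment(text: str) -> bool:
    """Flag a comment if it contains any blocklisted word."""
    words = {w.strip(".,!?\"'").lower() for w in text.split()}
    return not BLOCKLIST.isdisjoint(words)
```

Note that `flag_comment("What a curse this weather is!")` is flagged even though the comment is harmless, which previews exactly the problem discussed below.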
Check out how another company from the Updata One community, Identrics, works to create an automated hate-speech detection model that helps companies eliminate hate speech in their comment sections.
The Problem with Detecting Hate Speech
If you program your software to flag every use of a swear word, you also flag everyone who uses it in a harmless context. Furthermore, if you target racist terminology, racists will simply swap those words for something harmless and use them in the same context. And then you have the modern phenomenon of “dog whistles”: seemingly innocuous phrases with entirely different meanings to specific groups of people.
While several models have been developed over the years, no general solution has yet emerged. One approach uses algorithms to assign a “toxicity score” to comments and content based on factors such as profanity and hate speech. However, these tools have limitations and may not accurately distinguish harmless comments from harmful ones. To improve accuracy, some developers have invited machine learning experts to contribute via public datasets and open-source approaches.
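To make the idea of a toxicity score concrete, here is a deliberately simplified sketch: a weighted term list summed into a score, with a cut-off threshold. The weights, terms, and threshold are invented for illustration; production systems use trained classifiers rather than fixed lists, which is precisely why fixed lists fall short:

```python
# Illustrative toxicity scoring. WEIGHTS and the threshold are invented;
# real systems learn these signals from labelled data.
WEIGHTS = {"hate": 0.6, "stupid": 0.3, "idiot": 0.4}

def toxicity_score(text: str) -> float:
    """Sum the weights of matched terms, capped at 1.0."""
    words = [w.strip(".,!?\"'").lower() for w in text.split()]
    return min(1.0, sum(WEIGHTS.get(w, 0.0) for w in words))

def is_toxic(text: str, threshold: float = 0.5) -> bool:
    return toxicity_score(text) >= threshold
```

A comment like “You stupid idiot!” scores 0.7 and is flagged, while polite text scores 0.0 — but so would a hateful comment phrased entirely in coded language, which is the limitation the article describes.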
As things stand today, we still don’t have software that can accurately detect hate speech 100% of the time, but we’ve come a long way, and media monitoring plays a significant role in continuing that journey.
After all, the datasets are constantly changing. The language used by a racist or homophobic commenter in 2023 is different from the one used 20 or 30 years ago, and it’ll be different from the language used in 2030.
A successful algorithm must be able to record those changes and adapt to them. That requires regular monitoring of social networks, chat rooms, and comment sections, all of which can be handled by AI technology.
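One simple way to record those changes, sketched under the assumption that human moderators still flag some comments by hand, is to mine newly recurring terms from flagged content and surface them as candidates for the detection lexicon. Everything here (the lexicon, the sample data, the count threshold) is illustrative:

```python
from collections import Counter

# Known terms already in the detection lexicon (illustrative).
LEXICON = {"hate"}

def candidate_terms(flagged_comments: list[str], min_count: int = 2) -> list[str]:
    """Surface terms that recur in human-flagged comments but are not
    yet in the lexicon, so the model can adapt to shifting language."""
    counts = Counter(
        w.strip(".,!?").lower()
        for comment in flagged_comments
        for w in comment.split()
        if w.strip(".,!?").lower() not in LEXICON
    )
    return [term for term, n in counts.items() if n >= min_count]
```

Run continuously over monitored comment sections, a loop like this is how a lexicon keeps pace with coded language; a human reviewer would still vet each candidate before it is added.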
Using Media Monitoring to Understand Public Opinion and Act Accordingly
You may have heard the phrase “on the right side of history”, uttered many times and by many different people over the last few years. It refers to people and acts that may seem odd at first but will be judged as righteous and “correct” by future generations.
It’s often used to reference people who stood up for gay rights at a time when same-sex marriages were illegal. In recent years, it has also been used in a political context, where the lines between right and wrong aren’t as clear.
In reality, history doesn’t record who was right or wrong, and there’s no way of knowing what the future will hold. However, by monitoring the media and analysing shifts in public opinion, you can see these changes happening in real time, which can lead to some eye-opening revelations.
For example, we’ve all heard stories of comedians who have been “cancelled” because of a joke they made 5, 10, or 20 years ago or the influencers whose careers have been ruined because of a social misstep.
Let’s imagine that you find yourself in the same position. Someone finds an old controversial tweet or video, writes an article about it, and causes an uproar. You don’t catch wind of the early chaos, but a couple of days later, the news is everywhere and you find yourself desperately scrambling to consult your PR manager for some good old-fashioned crisis management.
But if you were monitoring the media, you would have received an alert as soon as that first story went live, followed by additional alerts when more people piled on.
At that point, you could have evaluated public opinion and determined the best course of action.
If there are enough people supporting you and dismissing it as a non-issue, maybe you can ignore it or draw attention to how silly it is. If the backlash is strong, you can give yourself time to craft a response, speak with sponsors, and ensure you’re prepared.
Either way, media monitoring will have given you insight into public opinion and ensured you’re prepared to deal with the issue quickly. That way, you’re being proactive rather than reactive; you’re plugging the leak instead of grabbing a bucket and a towel and waiting for the flood.
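The alerting logic behind this scenario can be sketched very simply: compare today’s mention count against a trailing baseline and fire when it spikes. The multiplier and the sample counts are illustrative assumptions, not parameters of any real monitoring product:

```python
# Illustrative spike alert: fire when today's mention count exceeds a
# multiple of the trailing average. Multiplier chosen for illustration.
def should_alert(daily_counts: list[int], multiplier: float = 3.0) -> bool:
    """daily_counts: mention counts per day, oldest first, today last."""
    *history, today = daily_counts
    if not history or sum(history) == 0:
        return today > 0  # any mention is news if there's no baseline
    baseline = sum(history) / len(history)
    return today > baseline * multiplier
```

With a quiet baseline of a few mentions a day, a jump to 30 mentions trips the alert on day one of the story, rather than two days later when the news is everywhere.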
Media Intelligence and the Future
Hate speech is not going anywhere. There will always be racists, homophobes, xenophobes, and general bigots, just as there will always be people who think it’s amusing to insult strangers. However, media intelligence can help governments, companies, developers, and individuals to understand the patterns of hate and do something about them.
By the same token, many of the same strategies and tools used to understand hate speech can be used by influencers and brands to control their voice, safeguard their reputations, and ensure they are the first to know and the first to act following a reputation crisis.
At a time when even the slightest social media mishap can lead to a deluge of hateful comments and threats of professional destruction, media intelligence tools are more important than ever.