Our digital space, marked by the rapid dissemination of news, opinions, and narratives, has become a breeding ground for deceptive information. Terms like fake news, misinformation, disinformation, propaganda, and cognitive warfare echo through our conversations, underscoring the pervasive nature of intentionally misleading content. It is a landscape where the lines between truth and manipulation blur, leaving us to navigate an intricate web of narratives designed to influence our perceptions and shape our opinions.

November was AMEC Measurement Month, and experts from A Data Pro and Identrics celebrated it with a webinar titled “Measuring Disinformation: How Media Intelligence Techniques and Technologies Inform and Enhance Communication Strategies”.

The Importance of Detecting Disinformation for Businesses

According to our Media Intelligence Account Manager Rostislav Petrov, “we have been living in a post-truth world since 2016. Back in 2015, The New York Times actually posted a video about fake news vs. real news and the reliability of sources.”

The following year, the presidential elections (Hillary Clinton vs. Donald Trump) took place, and right after Trump’s inauguration in early 2017, The New York Times published an article about the post-truth era and how it couldn’t have foreseen, in that earlier 2015 piece, that fake news would mar the 2016 presidential elections, that it would cause real violence (referring to the “Pizzagate” attack on a Washington, D.C. pizzeria), and that Trump, the president-elect at the time, would use the term to attack mainstream media he opposed.

According to research, some 45% of American citizens regularly read sources that published fake news or propaganda during the 2016 election cycle. Stanford researchers put the figure at around 20% for 2020, but 20% of people uncritically reading fake stories designed to taint public opinion is still a frightening thought: 20% of Americans is tens of millions of people who might vote out of fear rather than out of their own value system.

In a post-truth world, there is no objective truth. There are only narratives that can be pushed down the public’s throat, so to speak, and these narratives can alter public perception. Companies, individuals, and products can suffer the consequences. That is why it is important for organisations to integrate media intelligence strategies for identifying propaganda and disinformation into their projects. And if we date the beginning of the post-truth era to 2016 – the year Oxford Dictionaries named “post-truth” its word of the year – six years have passed and we need to act.

The Need for Identifying Disinformation and Propaganda in Visual Formats (e.g. Memes)

Rostislav explains: “…research on disinformation in memes is currently a work in progress. It has started and some progress has certainly been made, but there is a long way to go. It’s also important for a very different reason, and we all know it: memes can be shared online and they have this tendency to sometimes go viral – and, for obvious reasons, it is the wrong memes that go viral, and their influence can be overwhelming and really dangerous. That’s why they are so important: not everybody will go to a website or a source to read an article that’s 10,000 words long, but many people will skip that and just look at several memes instead. And the memes will actually have an impact on their opinion.”

Rostislav adds: “…back in December 2020, a single meme generated thousands of new mentions of a conspiracy tying the COVID-19 vaccines to 5G. And we may laugh at that, but this was a really dangerous meme back then and, as I said, visuals have power – and by visuals I mean memes, pictures, video clips, whatever. Memes are an external driver of narratives whose context can easily and inventively be manipulated by a third party.”


Devora Kotseva, a Media Analyst on A Data Pro’s Media Intelligence desk, has experience working as a consolidator, annotating memes to identify political propaganda techniques.

She said that, given that communications are becoming increasingly visual, organisations should be looking for more ways to monitor and decipher the meanings of images as part of their proactive communication measurement strategies.

Devora explains: “I can also refer to a recent case study that Todor Kiriakov and I did, which concerned a disinformation campaign against a political candidate, where memes were used to push a narrative into the mainstream, and you could see how the memeification of this candidate’s image could reach audiences that were not actively looking for news about him. It could promote voter suppression, or it could mobilise users not to vote for this candidate. So I definitely think that visuals like memes – and also DIPs (deceptive image persuasion), which we’ve recently seen a huge spike in – are very important to observe, especially since deepfakes and AI-generated images are very easy to produce and are getting more and more convincing.”

Hard Power, Soft Power, Sharp Power, Cognitive Warfare and Their Connection to Disinformation

Identrics’ Services & Products Manager Nesin Veli argued that memes illustrate how “something so intrinsic to human nature, to human culture, can be designed and weaponised as a means to a political end. And I want to frame our playground a bit here on the international level.

Now, I’m going to reach for a bit of a large example, but one can argue that the Cold War – or at least what is turning out to be the first round of the Cold War – was largely won due to culture: music, movies, McDonald’s.

So this cultural interaction between nations can be used to apply pressure and to achieve goals, and this is what is usually known as soft power. Now, in this system, on the opposite side of soft power, we have hard power – which is, of course, direct military contact.”

“…Now, between soft power and hard power, we have something new and emerging called sharp power. Cognitive warfare dwells within it, but some other examples I want to put up front are economic and energy extortion – of course, we all remember the Russian threats of a cold European winter… Another example of sharp power is technological pressure. We have the ever-rising importance of microchips, especially now with the need to train large language models. An example here is the United States putting a semi-tech embargo on China to hinder its efforts in this area, which, in turn, escalated tensions between China and Taiwan, because Taiwan is one of the places where those microchips are produced.

Part of sharp power is the attempt by foreign state actors to influence, in most cases, the democratic process and the people of another state. These efforts are not always about truth and lies; oftentimes, it’s about hearts and minds. The idea here is to split the society, to undermine the trust that society has in its government, and this is achieved by designing and injecting disinformation narratives into all levels of the media.”

Nesin goes on to say that the endgame of such an operation is for disinformation to reach mainstream media and to topple the current government. “This allows for the rise of non-state actors that can ride that wave of discontent to the next elections, and the idea is to put in place a government that is under the influence of the state designing the attack.” Nesin adds that “obviously this is possible only because of the exponential rise of technology and, of course, the interconnectedness of the global economic system and the manufacturing pipelines, but, on the technology level, this is going on in the media.

This is why media intelligence tools become even more important. Working on them and developing them – so they can be used to measure propaganda and misinformation and to give power to the people who can make counteracting decisions, be it on the corporate level or the government level – is becoming ever more important.”

Terminology Check: Fake News, Disinformation, Misinformation, and Propaganda

Todor Kiriakov works as a Training Specialist in A Data Pro’s Training unit. He says that “…it is quite important to use the right terms and the right vocabulary to frame an issue and to frame a conversation because, as we’ve seen since 2014 and especially since 2016, the conversation around those issues has simply exploded and the field has grown a lot.

There have been some contributions to the vocabulary for talking about disinformation and propaganda, but there have also been buzzwords and catchphrases that have been used casually and, sometimes, very inaccurately, and that is actually a hurdle to understanding the problem.

“Fake news” is unhelpful and we avoid it because it fails to describe the problem accurately and has also been appropriated by bad actors. “Disinformation” is much better because it has a clear definition: information that is false and is spread intentionally in order to cause harm. It’s different from “misinformation”, which is false information spread unwittingly by people who believe it to be true or just don’t care whether it’s true. And there is also “propaganda”, which has been defined differently over the years, including in some rather positive terms. The definition of propaganda that Devora and I use in our work is communication that utilises language, imagery, symbols, biases, and emotional triggers in order to influence people – not just their behaviour, but also the ways they perceive the world.

But beyond those basic concepts, different organisations, such as governments and academia, may have their own terms designed to describe the problem or angle of a problem that they want to specifically monitor and analyse. For example, the European External Action Service, which is essentially the diplomatic service of the EU, has a precise definition of what it calls Foreign Information Manipulation and Interference, and this is a concept that EU policymakers find useful and usable.”

Adapting Media Intelligence Techniques to Measure Disinformation and Propaganda

When asked what specifically he and Devora have been working on with regard to disinformation, propaganda, or the intersection of the two, Todor explains: “We are most interested in the targeted use of disinformation and propaganda in ways that further someone’s goals at the expense of society. To describe such long-term, sustained efforts at societal erosion and disruption, we use the term Information Operations, or IO, which also comes from military lingo.

IO exploit existing contradictions in society and the weaknesses of the information environment to serve the interests of malign players, who could be both foreign and domestic. They take place across various communication channels and use a wide range of techniques in a bid to capture the audience’s attention and shape their opinions and worldviews. What distinguishes IO from the casual spread of misinformation by unwitting people is that these operations are planned, coordinated, and hostile in nature.” 


Devora adds “we should specify that these operations can also be very long-lasting and low-intensity, they may have dormant phases and reactivation periods, and the forces behind them are not always apparent. An IO consists of a series of what we call Information Events, or IE.

These are viral moments that see flurries of media activity, usually centred around a specific narrative that is currently trending. Information Events are the tangible, observable components of Information Operations.”

This raises the question: is it possible to measure the spread, the reach and the effects that information operations and information events have on audiences?

Todor says: “Well, it must be possible, because you can’t fight a problem unless you can measure and comprehend it first. So, as Devora said, we use the methods of media intelligence, and they allow us to measure many things: we can quantitatively measure the number of publications that spread a disinformation or propaganda narrative, we can measure audience engagement on social media, we can measure the pace at which a narrative spreads, we can observe how it peaks – when exactly, and why – and we can identify the breakout moments from one media channel into another.

If I may borrow some terminology from AMEC’s Integrated Evaluation Framework, this mostly focuses on the outputs and outtakes, whereas if we want to focus on the outcomes and the long-term impact of an information event or an entire operation, we need a much more qualitative and analytical approach. That’s why Devora and I pay so much attention to things such as the identities of the disinformation actors, the techniques they use, the reactions of the target audiences, the longevity and evolution of narratives, and so on. There are so many things that need to be monitored and analysed to make sense of how an event takes place and how an operation evolves.”
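To make the quantitative side of this concrete, here is a minimal sketch of the output-level measurements Todor lists – publication volume, total engagement, and a narrative’s peak moment. The mention records, field layout, and numbers are invented for illustration and are not A Data Pro’s actual data model.

```python
from collections import Counter
from datetime import date

# Hypothetical mention records: (publication date, channel, engagement).
# The schema and numbers are invented for illustration.
mentions = [
    (date(2023, 11, 1), "telegram", 120),
    (date(2023, 11, 2), "fringe_site", 450),
    (date(2023, 11, 2), "telegram", 300),
    (date(2023, 11, 4), "mainstream", 9800),
]

# Output-level metrics: daily publication volume and total engagement.
daily_volume = Counter(day for day, _, _ in mentions)
total_engagement = sum(engagement for _, _, engagement in mentions)

# Peak moment: the day the narrative generated the most publications.
peak_day, peak_count = daily_volume.most_common(1)[0]

print(f"Total publications: {len(mentions)}, total engagement: {total_engagement}")
print(f"Peak: {peak_count} publications on {peak_day}")
```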

The Operating Spheres of Information Operations and Information Events


Devora explains that IO and IE are “not strictly limited to the political sphere. They can occur in industries like agriculture, pharma, fast-moving consumer goods, and groceries, and, that being said, they could be used to sway public opinion so as to serve a certain political agenda that you’re not immediately aware of.

For instance, in my line of work as a media analyst, I’ve come across a disinformation campaign run by Chinese bots on Twitter that were trying to amplify a negative piece of news involving one of our clients, and I’ve also seen a very fringe media outlet reporting on an RT (Russia Today) article without linking directly to it, citing sanctions.”

Marketers and PR & Comms professionals are arguably the most concerned with identifying and measuring disinformation, as they are the ones who craft communication strategies. In Devora’s experience, “a lot of people are confused about propaganda, persuasion, and marketing. Sometimes you start to question traditional marketing practices, and that’s normal, because disinformation appropriates traditional marketing practices to enhance its messages, distribution, and impact (i.e. coordination between channels, SEO techniques, audience targeting). Think of multinational companies with their international and local marketing efforts: looking to create a ‘viral effect’, fostering communities/echo chambers, and using influencers or brand ambassadors – mirrored, on the disinformation side, by fake experts and useful idiots.”

Devora goes on to elaborate on how disinformation actors use SEO techniques and “hyperlinks, internal links, and backlinks, so it can have a wider reach and those pages can have a higher page rank and seem legitimate, like fringe media websites. Then you have, on one side, influencers and brand ambassadors, and, let’s say, Potemkin villages with fake think tanks and very strange institutions that, once you Google them, you don’t really find… Forbes recently published an article, ‘Marketing Misinformation: A Thin Line Between Persuasion And Deception’, suggesting that marketers are under more scrutiny than ever to be ethical in their practices.”

The Use of AI to Measure Propaganda and Disinformation Narratives


Identrics’ Services & Products Manager Nesin Veli says: “We have been using AI techniques such as NLP and machine learning to measure communication for some time. As mentioned, a lot of disinformation tracking builds upon existing media intelligence mechanisms and products… we are relying on the vast experience – both domain editorial expertise and technological expertise – that we have accumulated through media intelligence measurement.

Let’s take the Media Contacts Database automation services we have at Identrics, which we have built upon to measure and counter propaganda, as an example. We build journalist and outlet profiles based on: what topics they write about, via our document topic classification model; what people and organisations they write about, via our named entity recognition model; what the sentiment of each article is; and the social media engagement that measures the impact of the outlet, the author, or the specific story. The data is put into knowledge graphs so that complex relationships between journalists, outlets, publishers, and ultimate owners can be explored. All of this is usually used to provide our clients with very high precision for their marketing and email campaigns.
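As a sketch of what that last step can look like, the snippet below builds a toy version of such a knowledge graph with the open-source networkx library and walks it from a journalist to the ultimate owner of the outlet she writes for. All names, attributes, and relation labels are invented; this is not Identrics’ actual schema.

```python
import networkx as nx

# A toy knowledge graph of the kind described above: journalists, outlets,
# publishers, and ultimate owners as nodes, relationships as typed edges.
# All names and attributes are invented.
g = nx.DiGraph()
g.add_edge("Jane Doe", "Daily Example", relation="writes_for")
g.add_edge("Daily Example", "Example Media Group", relation="published_by")
g.add_edge("Example Media Group", "J. Smith", relation="owned_by")
g.add_node("Jane Doe", topics=["energy", "politics"], avg_sentiment=-0.2)

# Walking the graph answers questions such as "who is behind the outlet
# this journalist writes for?" - the relationship exploration Nesin mentions.
for outlet in g.successors("Jane Doe"):
    for entity in nx.descendants(g, outlet):
        print(f"Jane Doe -> {outlet} -> {entity}")
```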

But look at it from the standpoint of this reality in which we have to tackle propaganda and disinformation: now, willing or not, knowing or not, [some of] those journalists and outlets are turning into part of the pipelines that propagate propaganda and disinformation. So we started monitoring additional metadata and building additional models. For example, we now measure the following (a sketch of how such metrics might be computed appears after the list):

  • topic dispersion – are they writing on diverse topics or pushing the same topic?
  • original content levels – are they largely creating or republishing content?
  • title information levels – is it clickbait or does it inform you?
  • staff transparency – are their articles signed with an actual name, with the name of the source, or with no author at all?
  • ownership transparency – via the knowledge graph; are there PEP (politically exposed person) connections, for example?
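Here is a minimal sketch of how the first two metrics on this list might be computed, assuming per-author lists of topic labels and originality flags; the function names and sample data are invented for illustration.

```python
import math
from collections import Counter

def topic_dispersion(topics: list[str]) -> float:
    """Normalised Shannon entropy of an author's topic mix:
    near 0.0 = one topic pushed repeatedly, near 1.0 = diverse coverage."""
    counts = Counter(topics)
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return entropy / math.log2(len(counts)) if len(counts) > 1 else 0.0

def original_content_level(articles: list[dict]) -> float:
    """Share of an author's articles that are original rather than republished."""
    return sum(a["is_original"] for a in articles) / len(articles)

# Invented sample data for one hypothetical author.
topics = ["energy", "energy", "energy", "elections", "energy"]
articles = [{"is_original": True}, {"is_original": False}, {"is_original": False}]

print(f"Topic dispersion: {topic_dispersion(topics):.2f}")          # low -> topic pushing
print(f"Original content: {original_content_level(articles):.2f}")  # low -> mostly republishing
```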

These are data points that we are putting into a knowledge base. Yes, it can be used on the Media Contacts Database level if you want to make a more informed decision when targeting a mailing campaign, but it can also inform decisions when you are researching whether a disinformation campaign is going on, and this openness of the media is really a large part of it. As I said, and it bears repeating: we are designing tools that are aimed to empower decisions, not to automate and make decisions.”

According to Nesin, “propaganda and disinformation are societal problems, and they’re going to be solved by people. There’s no way around it, so, with our tools, we are looking to empower the people who are on the front lines and making the decisions.”

Factors Driving the Rise in Disinformation Campaigns

Nesin sees the exponential growth of technology as the main factor driving the rise in disinformation campaigns. “NATO dates the beginning of the modern era of cognitive warfare to 2014, with the preparations for the annexation of Crimea and the start of the Kremlin’s ongoing sharp power active measures in the Black Sea region. I think this happened the first moment it could: social media had become an information source on par with mainstream media, and people had started being served by algorithms, which calcified them in comfortable information bubbles that affirmed their beliefs and brought them together with other people holding the same beliefs.

This has only grown exponentially, and now we have people who are either easily agitated when confronted with alternative information outside their comfort zone, or completely checked out of the conversation. We have more information flow than ever, people are surrounded by a wall of noise, and to reach through that you need media literacy and tech resilience.

We also have mainstream media abandoning journalistic principles to play catch-up with algorithms over clicks. Disinformation is not going to go away – it is cheap and it works. And I do not foresee any totally winnable scenario; the best case is to push it back into a corner and keep it there.”

How Disinformation Tries to Capture Audiences’ Limited Attention Spans


Here, Devora jumps in and touches upon the importance of data and attention in the context of information warfare, citing some of Emerson Brooking’s observations. She says: “We talk a lot about data being the new oil, the new currency, but we should also touch upon attention, because everyone, from activists to politicians to disinformation spreaders, is vying to reach wider audiences, and they vie for those audiences’ very limited attention. That’s normal, because attention is very human and has a very limited span. It demands constant novelty and innovation, yet it is still the way we process the world – and we process the world via stories and emotions.

When talking about information bubbles: we still trust people we share common traits or interests with and value their opinion over the opinions of other people, and we should definitely take this into account. The same goes for “outrage”, the one emotion that will always force a reaction out of us, because, again, it’s very human: you have the villain, you have the hero, and unfairness. We should take emotions and attention into consideration, because disinformation campaigns want to get noticed by as many people as possible; otherwise they wouldn’t be able to achieve their goals.”


Nesin agrees with Devora and adds that “the attention economy is certainly something we should be aware of. We see that a large part of society is just checked out: they’re tired of following the news, tired of wondering whether something is disinformation or news, etc. We need to find ways to bring them back into the fold, because we need people to be engaged, and media intelligence professionals should rely on different and new ways to engage with those people.”

The Propaganda Measurement Approach – Media Intelligence (MI) Solutions

Nesin points out that the approach includes “early alerting systems which enable active prevention. Yes, measuring a campaign is important for pattern recognition, but providing the right set of tools to propaganda and disinformation analysts and decision makers is what we are aiming for. We mentioned fringe media a couple of times; it is part of a methodology we have developed, and the idea is to track disinformation campaigns as they spread from origin point to blossom point.

We’re looking at fast data, fringe media, and mainstream media.

  • Fast data – yes, social media is a huge propagator of disinformation, but the seeds are sown before that, in channels like Telegram and Reddit.
  • Fringe media – usually controlled by the same actors that sow the seeds in fast data, fringe media aims to mimic mainstream media and infest it with false narratives; for apathetic people, it often becomes their mainstream media. This is where the seeds grow.
  • Mainstream media – unfortunately, in some cases mainstream media does not even need fringe media. This is where the seeds blossom and reach the largest audience.”
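To illustrate the seed-to-blossom idea, here is a minimal sketch that takes timestamped sightings of a single narrative and finds the moment it first crosses into each tier – the point where an early alert could fire. The tier names, timestamps, and alerting rule are assumptions for illustration, not the actual Identrics methodology.

```python
from datetime import datetime

# Hypothetical timestamped sightings of one narrative: (time, tier).
sightings = [
    (datetime(2023, 11, 1, 9, 0), "fast_data"),    # Telegram / Reddit seeds
    (datetime(2023, 11, 1, 14, 30), "fast_data"),
    (datetime(2023, 11, 3, 8, 15), "fringe"),       # fringe media pick-up
    (datetime(2023, 11, 6, 19, 0), "mainstream"),   # blossom point
]

def jump_points(sightings):
    """Yield the moment the narrative first crosses into each tier -
    the 'barrier jumps' between data sets worth alerting on."""
    seen = set()
    for ts, tier in sorted(sightings):
        if tier not in seen:
            seen.add(tier)
            yield tier, ts

for tier, ts in jump_points(sightings):
    print(f"{ts:%Y-%m-%d %H:%M}  narrative entered {tier}")
    if tier == "fringe":
        # Early-warning window: the story has grown but not yet blossomed.
        print("  -> alert analysts: counteract before it reaches mainstream")
```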

The work that Nesin and his team do at Identrics includes “tracking that barrier – when a story jumps between the different data sets. We are compiling these patterns, tackling the pattern recognition issue, and hoping to soon roll this out so professionals can track a story from Telegram and Reddit all the way to its end point in the mainstream [media]. And of course, the name of the game here is not measurement but prevention. We want to empower people to pinpoint things in their infancy and get truthful data out there to counteract the disinformation narrative. That is the idea.

Mainstream media is also a huge target for media takeover. One clever tactic involves troll farms targeting older articles on a website and spamming them with hate speech, after which non-state actors report the outlet to the relevant government agency for facilitating hate speech, which leads to fines. Rinse and repeat, and it is easy to get rid of a smaller legitimate outlet or hinder the operations of a large one.

This is why we collaborated with editorial and moderator teams to create hate speech recognition models, based on a training set they provided. With a precision of 0.8, we were able to automate the signalling of potential hate speech patterns in the comments they moderate. The decision is not automated; the signal is. It is important to keep humans at the centre of the tech loop, especially when talking about hate speech, propaganda, and disinformation.”
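A minimal sketch of that “the decision is not automated, the signal is” pattern: a classifier score routes suspicious comments to a human moderator instead of acting on them. The scoring function here is a toy keyword heuristic standing in for the real trained model, and the threshold is an invented placeholder.

```python
# Score each comment and queue anything suspicious for a human moderator.
SIGNAL_THRESHOLD = 0.7  # placeholder; in practice tuned on the training set

def hate_speech_score(comment: str) -> float:
    """Stand-in for a trained classifier; returns a pseudo-probability."""
    trigger_words = {"vermin", "hate"}  # toy features, illustration only
    hits = sum(word in comment.lower() for word in trigger_words)
    return min(1.0, 0.4 * hits)

def triage(comments: list[str]) -> list[str]:
    """Signal comments for human review; never delete automatically."""
    return [c for c in comments if hate_speech_score(c) >= SIGNAL_THRESHOLD]

for flagged in triage(["Lovely article!", "These people are vermin, I hate them."]):
    print(f"Flagged for moderator review: {flagged!r}")
```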

The Role of Generative AI in Spreading Disinformation

Nesin says that Large Language Models (LLMs) are a “powerful double-edged sword. I haven’t decided whether I’m more optimistic or pessimistic about them – maybe more optimistic. They certainly took off with the public release of GPT-3.5 back in November 2022, but our team has been working with GPT-2 since it was made available by OpenAI a few years back. So we certainly appreciate the heavy lifting LLMs can do, such as pinpointing linguistic patterns in the text thanks to their large corpora of training data.

For example, LLMs can pinpoint hate speech language and propaganda language – this is something we have experimented with. Not in production, I might add, because when working with large language models owned by third parties, I’m concerned about my data, my company’s data, my clients’ data; but in isolated, controlled cases it can be used in production. LLMs can also be used to extract data points into knowledge bases, which can then be used by fact-checkers. The thing I am probably most excited about is the possibility of crossing language barriers when monitoring disinfo campaigns.
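As one way such an experiment might look, the sketch below uses the Hugging Face transformers library to flag propaganda-style language via zero-shot classification with an openly available model that runs locally, so no client data leaves your infrastructure. The model choice, labels, and example text are illustrative assumptions, not the setup Nesin describes.

```python
from transformers import pipeline

# Zero-shot classification with an open model that runs on your own hardware.
# Model, candidate labels, and the sample sentence are illustrative choices.
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

text = "They will stop at nothing to destroy everything you hold dear."
labels = ["loaded language", "fear appeal", "neutral reporting"]

result = classifier(text, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.2f}")
```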

Now, on the more pessimistic side – not that the troll farms needed any help, but this really boosted them – we have all this synthetic information out there, and part of it is propaganda and disinformation. It’s very important to have tools that recognise fake generated text, because this fake generated text is fed right back into the models. So, in a way, it’s a kind of poison… I would say be cautious and use it in a controlled setting.

We at least can start to make some assumptions about why it returned this answer and not that answer. That level of accountability is not something that can easily be achieved with Google or Microsoft. And when we talk about PR and marketing campaigns – we know how effective and efficient the system of selling ideas and goods already is. LLMs are able to craft personalised disinformation campaigns, pushing the right buttons and the right fears, so they can cater to more specific mindsets.”

Disinformation, Communication Strategies, and Measurement

Dilek Asanoska, the webinar’s moderator and A Data Pro’s Communication Expert, pointed out the circular nature of information, supporting Nesin’s views on the dangers of feeding information to AI. She says: “We feed something to AI and it feeds it back to us, and then again it takes from us and feeds back.

We have to be constantly vigilant about the bias it might have, depending on who is actually training it and what type of information it’s processing.”

According to Dilek, communicators need to be media literate because they are the ones who build communication strategies and put out messages into the world on behalf of individuals, companies, or organisations. She also calls for the “implementation of a multi-layered approach to monitoring the media, as well as measuring the impact of misinformation and disinformation on audiences. Monitoring mentions might not be very useful if you don’t understand what information your target audience is receiving, as well as whether that information is reliable.

Also, something that’s very interesting to me is this generational shift where audiences, especially the Millennial and Gen Z generations, expect brands to be very transparent, to have a clear stance on political and social issues, and not just make relatable TikToks. Because social media algorithms, as I said, create these echo chambers, regardless of how media literate you might think you are, as somebody who communicates an official stance on an issue you have to be able to explore the multiple sides of an argument and know:

  • Where does this come from? 
  • Who is enforcing this argument?
  • Who is it impacting? 
  • What would be the implications if we took a side? 

So, especially for PR and Comms people, when a PR crisis happens you have a small time window – call it golden time – and in that golden time you have to respond. This is where timely monitoring of these types of narratives – done proactively, as Nesin mentioned – can help you develop more informed communication strategies.”


Interested in working with A Data Pro’s professionals? Contact us!