Let us talk about:
- why proper training is needed to recognize political propaganda in memes.
- what qualifies as political propaganda and the subtle yet sophisticated ways it is integrated into memes.
- how meme annotation can be used to train AI and help you detect and fight political propaganda.
In an age where information is disseminated rapidly through social media, visual content, especially memes, has become a powerful vehicle for shaping public opinion. Memes, usually seen as humorous images accompanied by text, are no longer used just to tickle a laugh out of online audiences and create a sense of community. Due to their seemingly innocuous nature, memes are sometimes employed by political actors to perpetuate their political propaganda.
As more and more propaganda emerges to distort our consumption of the truth, detecting the subtle ways it is spread and fed to unsuspecting viewers is of paramount importance. But how can you recognize political propaganda in memes so you can bring it to people’s attention, and how can you keep up with the sheer volume of images shared on social media every day?
The answer is: by having professional annotators train artificial intelligence (AI) models to comb through the immense volumes of content, find patterns, and automate time-consuming tasks.
Why Memes?
Over the years, A Data Pro professionals have worked on projects focused on detecting propaganda techniques in written content such as news articles. For example, our Account Manager Rostislav Petrov co-wrote a paper that explored the findings of SemEval-2020 Task 11 on Detection of Propaganda Techniques in News Articles, which noted that propaganda is successful when it goes unnoticed by the viewer. Political propaganda has evolved and infiltrated our screens in such a way that, unless we are properly trained to recognize propaganda techniques, we become unwitting participants in the dissemination of harmful narratives.
As online content becomes more and more visual rather than textual, we have to develop sophisticated analysis processes to capture the potential political nuances in images. Detecting propaganda in images is trickier than in text, but not impossible.
Rostislav explains: “I think memes are increasingly relevant in a political context and if the online coverage of the 2020 US Presidential Elections and the subsequent Capitol attack was any indication, memes will be a social media tool used in next year’s US elections as well. Therefore, it’s crucial that our industry widens its lens to increase the focus on visuals, including memes. Unlike traditional media, memes are spread rapidly and could have enormous impact, especially as Gen Z is coming into its own.”
What Qualifies as Political Propaganda?
Political propaganda encompasses a wide range of techniques and strategies aimed at influencing public opinion, often with a particular political agenda or ideology in mind. What qualifies as political propaganda can vary, but some common elements include:

Disinformation:
The deliberate spreading of false information to shape public perception or discredit opposing views.

Emotional Appeals:
Using emotional triggers, such as fear, anger, outrage, or sympathy, to sway public sentiment and support a specific political cause or candidate.

Cherry-Picking Information:
Selectively presenting facts or statistics to support a particular argument while ignoring or suppressing contradictory data.

Oversimplification and False Equivalence:
Reducing complex issues to overly simple explanations, presenting reality in black-and-white terms, or drawing flawed comparisons between incomparable things, all in order to replace logic and understanding with emotional responses.

Demonization and Character Attacks:
Portraying opponents or rival groups in an overwhelmingly negative light, often resorting to personal attacks and character assassination.

Appeals to Identity and Feeling of Belonging:
Fostering patriotism, nationalism, and other senses of group pride and duty as a means to garner support for a particular political agenda or leader.

Use of Symbols and Imagery:
Employing powerful symbols, images, and slogans to create associations and manipulate public sentiment.

Censorship and Suppression:
Restricting or controlling the flow of information to prevent opposing viewpoints from reaching the public.

Controlled Media Narratives:
Manipulating media outlets and news sources to disseminate a specific message or perspective, or employing fake news outlets to flood the information space with narratives.

Manipulating Social Media:
Leveraging social media platforms to spread and amplify political messaging, often through the use of fake accounts, bots, or coordinated campaigns.
How is Political Propaganda Spread via Memes Different?
It is not different. It is just more subtle and sophisticated. Memes have visual appeal and often leverage humor, satire, or clever imagery to engage and entertain the audience, making them more shareable and relatable. They are very concise and usually consist of a single image with a short witty text, which makes them perfect for quick consumption and easy sharing, further accelerating the already fast-paced nature of social media. This also means that they can easily go viral, which can organically amplify the propaganda message.
Memes use informal language and colloquialisms, making them more accessible to a wide audience. They can contain hidden messages, masked behind humor or irony. Moreover, they are user-generated and shared within online communities, making it difficult to detect the initial source of the propaganda messages. Popular meme templates live lives of their own, carrying associations and subtextual meanings that, if recognized, can drastically augment the apparent messages of individual memes.
Memes can adapt quickly to current events or political developments. They respond to real-time issues and can be tailored to specific niche audiences, allowing propagandists to stay relevant and craft messages for distinct demographic or ideological groups. Online echo chambers, such as closed Facebook groups, subreddits, and Telegram channels, are a particularly favorable environment for the dissemination of memes whose messages conform to the chamber’s exclusive worldview. Lastly, a meme can be passed off as a mere joke, which makes it difficult to hold particular individuals or groups accountable for spreading political propaganda.
What is Meme Annotation?
Meme annotation is a crucial process in the field of artificial intelligence and natural language processing in which expert human annotators add structured, context-rich information to memes. The annotations clarify the content, the intent, and the message behind a meme and, in doing so, train AI models to understand and interpret memes effectively.
How Does Meme Annotation Work?
Meme annotation is a type of image annotation, a process that involves adding labels, tags, or metadata to memes to identify objects, people, emotions, actions, and any text within the meme. Additionally, annotators often highlight the humor, sentiment, or any cultural references present in the meme, making it easier for AI algorithms to grasp its context and significance. In the context of political memes, the annotation involves identifying particular propaganda techniques that can appear in the text of a meme, in the image, or in both.
The goal is to equip AI systems with the data needed to recognize humor, satire, sarcasm, and even potential political propaganda. This process helps AI models become adept at understanding the nuances of internet culture and, ultimately, contributes to their ability to engage in more sophisticated meme analysis, sentiment analysis, and content moderation on digital platforms.
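To make this concrete, here is a minimal sketch of what a single annotated meme record might look like. It is illustrative only: the field names and technique labels are assumptions made for the example, not A Data Pro’s actual schema or tooling.

```python
# Illustrative only: field names and technique labels are assumptions,
# not A Data Pro's actual annotation schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemeAnnotation:
    meme_id: str                       # identifier of the meme sample
    source: str                        # platform the meme was retrieved from
    ocr_text: str                      # text appearing in the meme
    text_techniques: List[str] = field(default_factory=list)    # techniques found in the text
    visual_techniques: List[str] = field(default_factory=list)  # techniques found only in the image
    sentiment: str = "neutral"         # overall sentiment label
    cultural_references: List[str] = field(default_factory=list)
    annotator_id: str = ""             # who produced this annotation

example = MemeAnnotation(
    meme_id="meme_0001",
    source="facebook_group",
    ocr_text="They don't want you to know the truth...",
    text_techniques=["appeal to fear/prejudice"],
    visual_techniques=["use of symbols"],
    sentiment="negative",
    annotator_id="annotator_a",
)
```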
When the goal is to detect propaganda, the process can be roughly separated into four steps:

Defining propaganda techniques to be used for the annotation process such as: whataboutism, name calling, appeal to fear/prejudice, red herring, etc.

Retrieving meme samples from target sources, whether that is Facebook groups, forums, or any other social media platforms.

Independent annotation and labeling of a meme with specific propaganda techniques. The annotator first sees only the text and decides which propaganda techniques are applicable. Then they see the text together with the image to determine whether their first choice was right or whether the image changes the meaning of the text (e.g. sarcasm). At this stage, the annotator can label the meme with additional techniques if they are recognized as present only in the visual component of the meme (e.g. symbols).

Collective discussion of the independently annotated memes, together with a consolidator, to spot discrepancies or label mismatches (a minimal sketch of this step follows the list).
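As a rough illustration of that final step, the sketch below compares the technique labels produced by independent annotators and flags the ones they disagree on. The data, function name, and technique labels are hypothetical, made up for the example rather than drawn from a real project.

```python
# Hypothetical sketch of spotting label mismatches before consolidation.
from typing import Dict, List, Set

def find_label_mismatches(annotations: Dict[str, List[Set[str]]]) -> Dict[str, Set[str]]:
    """For each meme, return the techniques that not all annotators applied."""
    mismatches = {}
    for meme_id, label_sets in annotations.items():
        agreed = set.intersection(*label_sets)   # techniques every annotator chose
        disputed = set.union(*label_sets) - agreed
        if disputed:
            mismatches[meme_id] = disputed       # bring these to the consolidator
    return mismatches

# Two annotators labeled the same meme independently (text pass, then text + image pass).
independent = {
    "meme_0001": [
        {"appeal to fear/prejudice", "name calling"},   # annotator A
        {"appeal to fear/prejudice", "whataboutism"},   # annotator B
    ],
}
print(find_label_mismatches(independent))
# {'meme_0001': {'name calling', 'whataboutism'}}
```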
How is Human Bias Eliminated when Annotating Memes?
The answer is: by having a diverse team of professional human annotators work on the project, bringing different perspectives to the process while adhering to the guidelines. They are trained to interpret memes objectively, without personal or political bias. According to Training Specialist and disinformation expert Todor Kiriakov, teams are specifically taught to recognize various forms of humor, satire, and cultural references, ensuring that their interpretations remain impartial.
As noted above, a big part of the meme annotation process is discussion. Continuous quality control, cross-validation, and iterative feedback mechanisms are employed to reduce the influence of individual biases and maintain a consistent, neutral standard. By promoting such best practices, AI models can be equipped with the capacity to analyze memes without inheriting or perpetuating human biases, contributing to more fair and accurate assessments of meme content.
Media Analyst and disinformation expert Devora Kotseva, who specializes in propaganda and oversees the consolidations, points out that diverging opinions and conflicting viewpoints inevitably emerge during these discussions. In fact, one sentence our professional annotators exchange so often during meme consolidation that it has become an internal joke is: “I understand, but I do not think so.” However, the team takes pains to ensure that memes are labeled only with the techniques that all human annotators recognize and agree on.
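Beyond the discussions themselves, annotation teams commonly track how often annotators agree in the first place. The sketch below computes Cohen’s kappa, a standard chance-corrected agreement measure, for the presence or absence of a single technique across a handful of memes. It is a generic illustration, not a description of A Data Pro’s internal quality metrics.

```python
# Generic illustration of Cohen's kappa for two annotators' binary labels
# (did meme X contain a given propaganda technique, yes or no).
from typing import List

def cohens_kappa(a: List[bool], b: List[bool]) -> float:
    """Chance-corrected agreement between two annotators."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n    # raw agreement
    p_yes = (sum(a) / n) * (sum(b) / n)                 # chance of both saying yes
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)          # chance of both saying no
    expected = p_yes + p_no
    if expected == 1.0:          # no variation at all; treat as perfect agreement
        return 1.0
    return (observed - expected) / (1 - expected)

# Did each of five memes contain "appeal to fear/prejudice"?
annotator_a = [True, True, False, False, True]
annotator_b = [True, False, False, False, True]
print(round(cohens_kappa(annotator_a, annotator_b), 2))  # 0.62
```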
6 Ways Training AI Models via Meme Annotation Can Help You Fight Political Propaganda
By employing meme annotation to train AI models, you can maintain a vigilant eye on meme content, ensure the integrity of online discourse, and ultimately contribute to a more informed and discerning society. This leads to:

Enhanced Detection Accuracy:
Meme annotation enables AI models to recognize the subtle cues and hidden messages within memes, improving the accuracy of political propaganda detection.

Cultural Context Understanding:
AI models become adept at understanding cultural references, humor, and satire, vital for deciphering the context of memes often used in propaganda.

Large-scale Analysis:
AI can process and analyze massive volumes of memes quickly, allowing for comprehensive scrutiny of propaganda efforts across various platforms.

Contribution to Broader Studies:
Propaganda and disinformation-based information operations rely on combinations of techniques to achieve their goals, and AI-powered analysis can improve the understanding of the role of memes in such operations.

Content Moderation:
Armed with trained AI models, the moderators of online media platforms can more easily detect and remove memes relying on hateful, insulting, or threatening text and imagery.

Educating the Public:
The insights gained from AI analysis can be used to educate the public about the prevalence and dangers of political propaganda in meme form.
How Can We Help?
A Data Pro has a robust and dynamic approach to tackling the challenge of propaganda, particularly in meme form. Our expert teams boast professional annotators from diverse backgrounds who can decipher the complexities of these seemingly innocuous visuals. They follow rigorous guidelines, engage in extensive discussions, and take quality control measures to eliminate human bias and maintain a consistent, impartial standard.
We are committed to equipping individuals, organizations, and platforms with the tools to defend against the manipulation of public opinion and have taken significant strides towards combating the insidious spread of disinformation.
We can assist you in your efforts to maintain the integrity of online discourse.
For more information, contact us today!
This article was written based on insights provided by A Data Pro Account Manager Rostislav Petrov, Training Specialist and disinformation expert Todor Kiriakov, and Media Analyst and disinformation expert Devora Kotseva.