
What is the difference between misinformation and disinformation?

The terms misinformation and disinformation are often used interchangeably, but they differ in nuance. Misinformation constitutes the majority of my family WhatsApp chat – false information shared without malicious intent. Disinformation, on the other hand, refers to false information designed to have a negative impact, be it social, economic or cultural.

Where do Information Operations start?

Misinformation and disinformation come together to form a wider strategy: Information Operations (IOs), which often incorporate both authentic and inauthentic elements in order to influence opinion on social media. How do they work together? An IO breaks down into two main elements: the creation of the initial disinformation, and the subsequent amplification of that content (spread as misinformation by unsuspecting WhatsApp participants, for example).

Information Operations typically originate from a state level—or at least from state-backed entities—and states often run multiple types of IO simultaneously. These can include self-contained operations, where an actor creates a public-facing pseudomedia site through which they seed entirely inauthentic content that is then amplified by bot accounts on social media. These operations are often easier to identify due to the significant quantities of overt disinformation they host. But not all IOs are quite so obvious; ‘hybrid’ IOs are more subtle, masking specific disinformation narratives on pseudomedia sites amid legitimate content. The most sophisticated IOs do not contain overt ‘fake news’, as we think of it, but rely on the concept of polarisation.

Taking the example of Ukraine—where Russia often runs IOs to assess their scalability and effectiveness—the concept of polarisation has been crucial. Part of the pre-emptive legitimisation of territorial conquest revolves around on-the-ground acceptance. In parts of eastern Ukraine, Russian-created or influenced media platforms have sought to promote content that highlights a regional identity, rather than a national one. This identity is defined along cultural, religious and societal lines, and seeks to erase the idea of one person being Ukrainian while another is Russian. This promotion is not disinformation as, in many regards, the regional identity crosses what are often arbitrary borders.

Spotting an Information Operation

The more subtle IOs involving polarisation are also where traditional media plays a role. Media publication of overt disinformation is rare; however, by default media entities are drawn to popular or viral content, particularly on social media. Polarising content generates debate and therefore clicks and therefore ad revenue. The promotion of inauthentic content by an authentic outlet is the ultimate goal of any sophisticated IO, as it ensures that the amplification element of the process plays out in an entirely organic manner.

Comparing the actual interaction metrics of specific pieces of disinformation against their expected metrics is one way of identifying which content has been inauthentically promoted—the deployment of 10,000 Twitter bots on a particular hashtag generates an obviously false growth chart, for example. However, once a piece of disinformation has gained organic traction with a legitimate audience, it is almost impossible to distinguish from legitimate content that has simply gone viral.
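
To illustrate the metric-comparison idea, here is a minimal sketch in Python that flags hours whose share counts spike far above a rolling baseline. The data, window size, and threshold are all hypothetical; a real investigation would work from platform-level metrics and far more robust statistics.

```python
from statistics import mean, stdev

def flag_inauthentic_spikes(hourly_shares, baseline_hours=24, z_threshold=3.0):
    """Return indices of hours whose share count deviates sharply from the
    rolling baseline -- the 'obviously false growth chart' a botnet leaves."""
    flagged = []
    for i in range(baseline_hours, len(hourly_shares)):
        window = hourly_shares[i - baseline_hours:i]
        mu, sigma = mean(window), stdev(window)
        sigma = sigma or 1.0  # avoid division by zero on a flat baseline
        if (hourly_shares[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

# Organic growth ramps gradually; 10,000 bots hitting a hashtag at once do not.
organic = [5, 8, 12, 15, 20, 26, 30, 41, 55, 60] * 3
coordinated = organic[:24] + [10_000] + organic[25:]
print(flag_inauthentic_spikes(coordinated))  # -> [24]
```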

Identifying IOs through a narrative-led approach is therefore often an effective technique, as the narratives they promote typically align with a state’s policy objectives. This can be seen at an international level in Russian IOs in eastern Ukraine and Iranian IOs in northern Iraq, and at a domestic level within Libya.
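
The narrative-led approach is primarily analytical rather than computational, but a first pass over a large content set can be automated. The sketch below assumes an analyst has already codified known state-aligned narratives as phrase sets; the labels and phrases here are invented for illustration.

```python
# Hypothetical narrative catalogue an analyst might maintain; real narratives
# are identified qualitatively and would be far richer than keyword sets.
STATE_NARRATIVES = {
    "regional-identity": {"shared heritage", "artificial border", "one people"},
    "institutional-distrust": {"rigged election", "puppet government"},
}

def match_narratives(post_text):
    """Return labels of catalogued narratives whose phrases appear in a post."""
    text = post_text.lower()
    return [label for label, phrases in STATE_NARRATIVES.items()
            if any(phrase in text for phrase in phrases)]

print(match_narratives(
    "Our shared heritage means this artificial border divides one people."
))  # -> ['regional-identity']
```

Posts that cluster around a catalogued narrative are not proof of an IO on their own, but they tell an analyst where to look for the amplification patterns described above.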

Information Operations now

As Covid-19 has shown us, pieces of both mis- and disinformation are going viral constantly, and authentic audiences are promoting pieces that align with a variety of international state objectives. Authentic promotion is not necessarily linked to social engineering, but rather to the tendency of individuals to share polarising content which fits their specific worldview; in many cases, the more polarising the content, the better the fit. Social media has been omnipresent in our lives for more than a decade, but as we have shifted to working at kitchen tables over the past couple of months, exposure to the ‘always online’ lifestyle has certainly increased. While we all laughed at the plans to bake a giant lasagne in Wembley Stadium, social media disinformation has had devastating real-world impacts: the 5G conspiracy theories, for example, have prompted physical harm in the form of both arson attacks and assaults on telecoms staff.

How do disinformation and misinformation affect business?

At a base level, the risks that disinformation campaigns and IOs pose to businesses are clear: How can you make a good decision based on poor or compromised intelligence? How do you identify and counter an inauthentic smear campaign? How do you assess what conversations potential clients are having when they are influenced by 10,000 bots run from a dark PR firm in Poland? How can you trust any social media content as its legitimacy continues to be eroded?

But equal benefits can be gained from assessing IOs, as understanding an IO allows you to understand the objectives of the entity responsible for its deployment – from state to political to commercial contexts.

Subscribe to our Digital Threat Digest: insights from the PGI team into disinformation, misinformation, and online harms.

PGI’s Digital Investigations Team works with both public and private sector entities to help them identify, assess, and attribute IOs. From high-level assessments of the risks disinformation poses to electoral integrity in central Africa, to deep dives into specific state-sponsored activity in eastern Europe, we have applied our in-house capability globally.

Contact us to talk about your requirements.