A lot has happened in the UK political landscape over the past few months: a landslide election with a significant transition of power, and sweeping waves of riots and political violence. We have witnessed how digital threats, such as disinformation, can be central to igniting civil unrest.
Civil society and government are now engaged in frank discussions about what caused the riots, and how to stop them from happening again. But beyond just talking about these issues and their causes, we also need to scrutinise how they are discussed and defined in public consciousness. Why? Because digital threats, like disinformation, do not occur in a vacuum. Threat actors exploit vulnerabilities in society, such as media access, poor digital literacy, polarisation, or press censorship, all of which influence how content is received, internalised, shared, or acted upon.
If digital threats do not occur in a vacuum, then neither should our responses. We should treat digital threats as attacks on wider societal digital resilience, which requires engagement from stakeholders across the public, private, media, and defence sectors. Weakness in one area will compromise the integrity of our collective response.
Events like an election or civil unrest offer opportunities to identify which areas of our digital maturity and responsiveness are most in need of transformation. One such issue arising from the recent UK riots has been that of journalistic quality and over-vigilance.
The problem with over-vigilance
It was pleasing to see proactive messaging from media outlets about the risks of disinformation during the UK’s recent general election, as well as the general population’s increased awareness of digital harms. Wider vigilance toward a threat, be it AI, disinformation, or malign influence, is healthy; it helps the public stay alert to attempts to undermine democratic integrity.
The more people writing and talking about it, the better.
However, in the lead-up to polling day some commentators crossed into over-vigilance, turning everything into a ‘Russian Information Operation (IO)’ or a ‘deepfake’. Post-election, we know these fears were exaggerated. PGI’s Digital Investigations team noted: commentators referring to a politician’s previously unfulfilled policy pledge as disinformation; journalists claiming to have found large-scale coordinated influence operations from Russia; and web sleuths rushing to accuse candidates of being fake or using Russian bots. In truth, hyperbole isn’t illegal; a political U-turn is nothing new; a group of five social media accounts sharing anti-West propaganda is hardly evidence by itself of Russian interference; and a ‘paper candidate’ using an AI filter to edit their headshot is not evidence that they are a ‘fake persona’.
This over-vigilance and reductive thinking extended to responses to the UK riots, mainly in poorly researched articles claiming, without nearly enough evidence, that disinformation about the Southport attacker’s ethnicity was part of a Russian-linked IO. More holistic research contradicting these claims followed shortly after, showing that better research is possible, but that nothing compels journalists to undertake it. The lack of consensus in public reporting about standards of attribution, methodology, and evidence when writing about digital harms means that the quality of reporting on digital threats will vary hugely. This wouldn’t be too much of a problem if the quality of public media weren’t such an important pillar of wider societal digital resilience and threat awareness.
As usual, many commentators blamed the platforms, but the fact that a social media platform did not take content down (or was slow to do so) does not mean inaction or complicity. It shows that platforms have a bar that needs to be met to justify removing content, and that many researchers and journalists simply aren’t reaching it.
Hunting for the scoop
The rise of OSINT/SOCMINT has fuelled a more competitive environment, where rushing to find the scoop or the smoking gun before anyone else is common. Stories of large-scale IOs and deepfakes sell, but this competition often leads to cutting corners, with some commentary lacking evidence, attribution, or nuance.
The world is complex, with a myriad of ideologies, moral standpoints, and motivations. When it comes to digital harms, it’s tempting (and easy) to simplify things by blaming everything on unseen ‘dark forces’.
However, hastily labelling everything as a deepfake or IO without evidence dilutes defensive terminology, making it harder to identify, map, and attribute genuine harms. Sometimes we want to find scandals so badly that we’re willing to speculate beyond the evidence. The labels get overused, and their meaning becomes even more obscured; they are relegated to mere insults used to delegitimise things we don’t like or agree with. When research becomes partisan, the labels and terminologies that were designed to help us flag harmful behaviours instead exacerbate polarisation, one of democracy’s most sensitive vulnerabilities.
But beyond normalising bad research, we fear unchecked over-vigilance also draws attention to our greatest anxieties. When we accuse every faceless burner account of being a Russian bot, we reveal our own cognitive dissonance toward opposing beliefs, confirming externally our fears, focus, and blind spots. When we misfire, overinflate, or misattribute, our decision-making matrix is exposed, and the extent of our defensive knowledge as a collective information environment is laid bare. This can provide crucial intelligence about vulnerabilities to those seeking to disrupt our institutions and security. We would be wise to keep our cards closer to our chest.
How do we approach this?
If we approach digital harms as a holistic issue of information resilience, then we can extend our focus to the way those harms are discussed and reported on.
The way we talk about digital harms is just as important as researching them. Knowledge and resilience are best promoted through accessible writing and transparency, with journalists, media outlets, and public-facing OSINT researchers playing key roles. Best practices, ethics, and research methods should be shared across stakeholders and sectors to ensure adherence to common standards.
Rigorous, methodical, and accountable research is crucial for identifying and attributing threats, especially when making claims about individuals, groups, or governments. When we can’t prove who made a piece of harmful content, we need to have honest discussions about the actual risk it poses to the public.
Too many commentators focus on leveraging digital harms against platforms, rather than working with them to formulate appropriate responses. We need to go beyond talking about the ‘what’, and promote discussion about the ‘who’, ‘where’, ‘why’, and ‘how’. Practitioners, platforms, and wider civil society institutions—from the media to individual researchers—can do more to collaborate and work toward a commonly understood goal.
Talk to us
If you would like a deeper dive into this subject or need support with proactively managing how bad actors are leveraging your infrastructure, get in touch with us.