Gotcha - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
Media, politics, debate, and general human interaction increasingly revolve around ‘gotcha’ moments. The desperate, endless pursuit of catching out the person who disagreed with you online and parading them in front of the digital crowd as a stupid dumb idiot.
This in turn leads to a couple of problems. The first is that spending time online devolves into observing increasingly tedious tiny arguments. Fire up any Reddit thread and lose yourself in the most niche of disagreements between armchair experts. Crack open Twitter and you’ll find the anonymous masses arguing over whether it would be more effective to drone strike or missile strike small boats crossing the English Channel.
The second is more serious – the fact that everyone is hunting for the gotcha moment means that everything becomes potential evidence in support of that cause. Nothing exists for the sake of existing, it all has to mean something. If a tree falls over in a forest and no one is around to hear it, does that mean it was toppled by a cabal of global elites? No, it doesn’t.
The hyper-politicisation of the environment, identity, health, housing, education, gender, and transport means that every single news event is now evidence for a particular narrative and counter-narrative. Occam’s razor is entirely disregarded in favour of partisan interpretation of reality. When a high-performance sports star suffers a heart attack, for the anti-vaxxers it’s evidence of unsafe Covid-19 vaccines. When authorities try to introduce low emission zones to ensure children can breathe in cities, for the conspiracists it’s evidence the state is trying to create open prisons.
At some point in the last 10-20 years, the status quo in which we interpret the world has flipped. We’ve gone from the vast majority observing the available data and drawing a common conclusion, to a significant vocal minority of fragmented out-groups interpreting the available evidence to match their predetermined conclusions. Improved social communications infrastructure has of course promoted this phenomenon among the public – but it’s equally present at the very top level of politics and media across the global west.
The extremes now flourish in the mainstream, directed from partisan-funded media houses which spoon-feed lazy gotcha soundbites based on biased evidence to their adoring congregations. At some point the pursuit of genuine digital resilience will require dealing with this problem from the bottom up – and solving the additional difficulty of any potential intervention being viewed as evidence of a plot by those it seeks to draw back from the conspiratorial fringes.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and the intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.