Ten-foot-tall threat actors - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
‘Perception Hacking’ is a cool concept. If I want to reduce trust in institutional integrity—during an election for example—I have two options:
One, I can spend some serious cash, run some sophisticated campaigns, and slowly but surely push people towards distrust.
Two, I can spend basically no money, run a low sophistication but quite noisy campaign that gets detected, and allow the media to blow the spectre of threat out of proportion.
It’s basically gaslighting but on an industrial scale – I don’t have to follow through and interfere so much as convince you that I could interfere if I wanted to. So did I? Maybe. That’s for you to decide. Just remember, I’m a ten-foot-tall threat actor with claws and fur and unlimited, undetectable power.
Ask ChatGPT (GPT-4o; other models are available) to ‘generate an image of Iranian cyber threat actors’ and it returns:
‘A scene depicting a group of cyber threat actors associated with Iran. The scene takes place in a dark, high-tech underground room filled with multiple computer screens and servers, all showing complex code and digital maps of the world. The figures are shadowy and mysterious, wearing hooded cloaks that obscure their faces, giving them an enigmatic appearance. The ambiance is intense, with the lighting focused on the screens and keyboards. Some screens show the Iranian flag subtly in the background, symbolizing the origin. The overall tone is dark, secretive, and menacing, highlighting the clandestine nature of their operations.’
Ask it to do the same for American cyber threat actors and suddenly it looks like a scene from the West Wing.
There’s a real problem with our perception of foreign state threat actors. It pervades media coverage of influence operations, bleeding into both human and AI-generated reporting. Human media reporting has helped build a sort of cult of threat actor personality.
Peel away this air of mystique and we’re left with something much less headline grabbing – a bunch of people working at a job. They have to do timesheets. They have performance reviews. They mentally check out at 3pm on Fridays after a long week. They have to ask IT to please fix the printer on the third floor. They have to learn how to use kanban boards because their ops wing decided to pursue an agile project management strategy. They’re not doing their job out of some unwavering nationalistic fervour – they’re doing it because they have marketing or computer science experience and they like money.
It took Iran four days to register infrastructure in response to the 07 October 2023 Hamas attack and capitalise on a massive opportunity to interfere regionally. That capitalisation couldn’t happen in ten minutes, because they had to plan, design, develop, test, and then deploy. The same logic dictates that Russia wasn’t responsible for the Channel3Now incitement: it happened too quickly to have been a deliberate choice by a particular operator working on a campaign.
Threat actors aren’t ten feet tall. They’re five foot nine, they don’t buy new coffee when they’ve used the last Nespresso Pod in the office kitchen, and they invite you to meetings that could have been emails.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and the intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.