Has everything changed? - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
Metricising harm often boils down to looking at two things: intent and capability. What is your threat actor trying to do, and how good are they at doing it? If they’re trying to do something real bad, and they’re real good at it, then you have a big old problem to deal with.
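To make that framing concrete, here’s a minimal, hypothetical sketch in Python – the 1–5 scales and the multiplicative model are my own illustrative assumptions, not a PGI scoring methodology:

```python
# Hypothetical harm-scoring sketch. The scales and the multiplicative
# model are invented for illustration, not a real assessment framework.

def harm_score(intent: int, capability: int) -> int:
    """Combine intent (1-5) and capability (1-5) into a rough harm score.

    A multiplicative model captures the intuition above: an actor who
    wants to do something real bad AND is real good at it scores far
    higher than one who is strong on only one axis.
    """
    if not (1 <= intent <= 5 and 1 <= capability <= 5):
        raise ValueError("intent and capability must be on a 1-5 scale")
    return intent * capability

# A skilled, highly motivated actor (5, 5) -> 25: a big old problem.
# A motivated but inept actor (5, 1) -> 5: watch, but lower priority.
print(harm_score(5, 5), harm_score(5, 1))
```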
There are multiple ways we try to deal with these problems as a society – the five purposes of prison are deterrence, incapacitation, rehabilitation, retribution, and restitution, with deterrence really speaking to intent, and incapacitation covering capability. Make something illegal and people will be deterred by the potential consequences. But regulating the intent and capability of humans is difficult enough – people still commit loads of crimes, all of the time – before we even begin to look at digitally enhanced capability.
To bring in an example, last week a series of AI-generated pornographic images of Taylor Swift spread rapidly across Twitter. To be clear, this is absolutely unacceptable, and I suspect we’re going to see an incredible lawsuit out of it, but what I want to focus on are the problems around the behaviour. 95% of the reaction was uniform – people arguing that this should be illegal. But what exactly is the ‘this’ here? What do we want to make illegal as the deterrent? Do we mean any AI generation of another person’s likeness? Is it only when it’s pornographic? How do you define pornographic? What if they’re not nude in the generation? And if we forget AI altogether – what if I pay a talented artist to draw a hyperreal nude depiction of a celebrity using pastels?
None of these questions are new, which is where the remaining 5% of reactions came in – I am the internet’s complete lack of surprise. Rule 34 of the internet states ‘if it exists, there is porn of it’, and the current largest Rule 34 archive holds around 8.25 million images, a mix of artist-drawn and photoshopped content. The largest public celebrity deepfake host, run out of the Philippines, received 85 million visits in October 2023, rising to 110 million in December 2023. What’s new is the speed and the accessibility. This content has been – generally speaking – rare enough over the years that it has remained out of the public eye. But now the threat itself, and the spectre of the threat, are very much in the court of public opinion.
So, what do we do? Do we want to make this illegal, and, once again, what exactly do we want to make illegal? Do we seize the means of production or the means of distribution? Do we listen to the immediate reaction and seize all of it, regulating AI out of the hands of everyday people so that the corporate giants can paywall and profit from it? Considering how well the instinctive blanket-ban reaction worked for the war on drugs, I’m not sure there’s much to replicate from that model of deterrence and incapacitation. Do we want to allow people to produce whatever they want as long as it isn’t for supply? Going after the distribution networks can be effective, as long as someone drafts wording that defines ‘pornographic’. Maybe we define it the same way we rate films for permitted audiences.
Whatever we decide to do, there’s no complete solution to harmful human behaviour. Before AI it was Photoshop. Before Photoshop it was drawing. Before drawing it was sculpting. Before sculpting it was imagination. Whatever safeguards or guardrails are built in from a tech perspective, people will figure out a way of breaking them. But that doesn’t mean we should do nothing. The AI revolution may seem new, but really, it’s the same human s**t on a different, tech-enabled day. As such, imperfect rules to protect privacy are likely better than no rules at all.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitation technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.