Impose cost - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
The concept of cost imposition sits at the core of many a cybersecurity strategy. The idea is that you make life as operationally, financially, or existentially difficult for your adversary as you can. There is a whole spectrum of interventions you can take to impose cost, some much more grandiose than others.
You can pay for a DDoS protection service, so your adversary has to pay for more resources to target you. You can publicly dox someone who repeatedly targets you, so they have to decide whether the exposure of their personal information is worth the pursuit. At the extreme end, you can pop a Hellfire R9X through their kitchen window, so your adversary is very much humanly incapable of continuing to target you.
Netflix doesn’t want people to watch shows outside the areas where they hold distribution rights, so they geofence content – a cost imposition. People still want to get around the geofence, so they subscribe to a VPN service. In response, Netflix blocks the IP ranges of the major VPN providers – a further cost imposition. People give up and torrent the US Office instead – now it’s no longer a Netflix problem. Mission accomplished.
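By way of illustration, that blocking step can be sketched in a few lines. The CIDR ranges below are IETF documentation blocks standing in for real provider allocations (which a service would source commercially), not actual Netflix logic:

```python
# Minimal sketch: screen a client IP against known VPN exit ranges.
# The ranges here are IETF documentation blocks used as stand-ins
# for real commercial VPN provider allocations.
import ipaddress

VPN_RANGES = [
    ipaddress.ip_network("198.51.100.0/24"),  # stand-in range
    ipaddress.ip_network("203.0.113.0/24"),   # stand-in range
]

def is_vpn_exit(client_ip: str) -> bool:
    """True if the client address falls inside a listed VPN range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in VPN_RANGES)

print(is_vpn_exit("203.0.113.42"))  # True -> deny or degrade playback
print(is_vpn_exit("192.0.2.7"))     # False -> serve the geofenced content
```

Each round of this loop costs the evader a little more than the last, which is the whole point.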
Recently I’ve been thinking more about cost imposition in the OSINT space. In many ways, investigators are the adversaries in OSINT. The number of points from which we can extract data from platforms, services, and online resources is smaller than ever in 2024. Gone are the days of Facebook graph search, when you could manipulate a single URL string to pull back essentially any public piece of content on the platform. Here are the days where even LinkedIn are set to lock down the information visible to logged-out visitors.
And it makes sense – data is money. LinkedIn want us to subscribe to Premium, imposing a direct financial cost. But there’s another angle of cost imposition here: OSINT tool development, and its increasing commercialisation. One of the most valuable pivots in OSINT is going from an email address or a phone number to a user identity on a service – from an email to a Google account, or from a phone number to a Strava account. There are several tools that can do this – OSINT Industries, Epieos, Castrick, etc.
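As a rough illustration of the class of check these tools automate (not their actual implementation), here is a minimal sketch of one such pivot using Gravatar’s public avatar endpoint, which returns HTTP 404 when no account sits behind an email address. The address below is a placeholder:

```python
# Minimal sketch of an email-to-account pivot via Gravatar:
# hash the normalised email and request the avatar with d=404,
# so a missing account comes back as an HTTP 404.
import hashlib
import urllib.error
import urllib.request

def gravatar_exists(email: str) -> bool:
    """True if the email address resolves to a Gravatar account."""
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    url = f"https://www.gravatar.com/avatar/{digest}?d=404"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: no avatar registered for this address

print(gravatar_exists("someone@example.com"))  # placeholder address
```

Commercial tools chain dozens of checks like this across hundreds of services, which is precisely where the subscription pricing comes in.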
Historically, these capabilities were built on open source code released under open source licences. ‘Copyleft’ licences, for example, require that any derivative work be distributed under the same terms, which in practice stops the code from being embedded in proprietary or commercial closed-source software. And yet, in the last couple of weeks, disagreements have been rumbling on between rival developers publicly accusing each other of stealing open source code for use in proprietary, paid tooling, thereby violating those licences.
I don’t really care who stole what, or when. The problem with the increasing commercialisation of previously fully open source capability is that it imposes further cost on OSINT as an industry, raising the bar for entry to financially prohibitive levels. There’s still no defined pathway into OSINT, the qualifications are few and far between and vary wildly in quality, and now we’re expecting entry-level hobbyist practitioners to shell out £100 a month just to subscribe to one email tool – one of the 20 paid tools they need in their toolkit.
If I had a tooling budget of £1.5mn to spend on my team of 50, would I get Palantir’s Gotham system? Maybe. Does a beginner have £500 a year to spend on email pivots? Probably not. As the OSINT industry expands, we have to be careful to keep the focus on adversarial cost imposition, and not on imposing it on those trying to further the field.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media, and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.