
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
When it comes to regulating anything at an international level, navigating the geopolitical interests of different states is a difficult task. Countries differ in how they treat their citizens and in how they integrate technology into their militaries.
In November, the UK will host a summit on AI safety and international regulation. One of its main goals is to develop a conceptual framework for regulating AI development. However, coming off the back of the G7 in May, it is unclear whether this will be a ‘democracies-only’ meeting, or whether more players will be invited to the table. I would argue that keeping it limited to like-minded countries, while tempting, would be a mistake in the long run.
The power that AI development represents creates a classic prisoner’s dilemma: countries that regulate AI will be at a severe disadvantage compared to those that don’t. Therefore, at least until development hits some theoretical world-ending point, if some states are not regulating, it is in every state’s interest not to regulate either.
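To make that incentive structure concrete, here is a minimal sketch of the dilemma as a two-player game. The payoff numbers and the ‘regulate’/‘race’ labels are purely illustrative assumptions, not figures from the summit or from PGI:

```python
# A minimal sketch of the regulation dilemma as a 2x2 game.
# All payoff numbers are illustrative assumptions, not data from the article.

# Payoffs are (player A, player B) for each (A's choice, B's choice).
# Mutual regulation is safe but slower; a lone racer gains a strategic
# edge over a regulator; a mutual race leaves both worse off than
# mutual regulation would.
payoffs = {
    ("regulate", "regulate"): (3, 3),
    ("regulate", "race"):     (0, 5),
    ("race",     "regulate"): (5, 0),
    ("race",     "race"):     (1, 1),
}

def best_response(opponent_choice: str) -> str:
    """Player A's payoff-maximising choice given B's move."""
    return max(("regulate", "race"),
               key=lambda mine: payoffs[(mine, opponent_choice)][0])

# Racing is the dominant strategy: it pays more whatever the other side
# does, so both sides race, even though (regulate, regulate) would leave
# both better off than (race, race).
for other in ("regulate", "race"):
    print(f"If the other side chooses {other!r}, best response is {best_response(other)!r}")
```

Racing strictly dominates: each side scores higher by racing regardless of what the other does, so the equilibrium is mutual non-regulation, even though mutual regulation would leave both better off. That is exactly why unanimous buy-in, rather than a coalition of the willing, matters here.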
While getting all these governments to work together may feel like an impossible task, it has happened before with other destabilising technologies. The Geneva Protocol’s ban on chemical weapons after their use in WWI has been largely successful, as have various nuclear weapons and missile treaties (to a degree).
In those cases, the clear and horrifying consequences of use contributed to the success of regulation. AI doesn’t, and shouldn’t necessarily, carry that stigma, but countries should still be cautious. AI can be weaponised both militarily and politically: loitering munitions, autonomous drones with the ability to kill, and target-identification software are all AI-based military technologies that exist today. Politically, as AI develops it can be used to generate misinformation, target individuals, and inundate information environments with scams.
While the timelines and severity may be up for debate, AI development will undoubtedly change everyone’s lives over the coming decades. But because of this dilemma, everyone needs to be on board when regulating AI technology. That will require diplomacy with countries that may be unpalatable partners in the short term, but whose cooperation is necessary in the long term.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and the intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.