
AI and the prisoner’s dilemma of regulation

PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


When it comes to regulating something at an international level, navigating the geopolitical interests of different states is a difficult task. Countries differ in how they treat their citizens and in how they integrate technology into their militaries.

In November, the UK will host a summit on AI safety and international regulation. One of its main goals is to develop a conceptual framework for regulating AI development. However, coming off the back of the G7 summit in May, it is unclear whether this will be a ‘democracies only’ type of meeting, or whether more players will be invited to the table. I would argue that keeping it limited to like-minded countries, while tempting, would be a mistake in the long run.

The power that AI development represents creates a classic prisoner’s dilemma: countries that regulate AI will be at a severe disadvantage compared to those that don’t. So, at least until development hits some theoretical world-ending point, if some states are not regulating, it is in everyone’s interest not to regulate.
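To make that structure concrete, here is a minimal sketch in Python. The payoff numbers are purely illustrative assumptions, not estimates from this piece; they are chosen only to reproduce the dilemma described above, where not regulating is each state’s best response whatever the other side does.

```python
# Illustrative two-player payoff matrix for the AI regulation dilemma.
# All payoff values are hypothetical, chosen only to show the structure.

ACTIONS = ("regulate", "don't regulate")

# payoffs[(a_action, b_action)] = (payoff to state A, payoff to state B)
payoffs = {
    ("regulate",       "regulate"):       (3, 3),  # safe, shared progress
    ("regulate",       "don't regulate"): (0, 5),  # A falls behind
    ("don't regulate", "regulate"):       (5, 0),  # B falls behind
    ("don't regulate", "don't regulate"): (1, 1),  # risky race for both
}

def best_response(opponent_action: str) -> str:
    """Return state A's payoff-maximising action given B's fixed choice."""
    return max(ACTIONS, key=lambda a: payoffs[(a, opponent_action)][0])

for b_action in ACTIONS:
    print(f"If the other state chooses {b_action!r}, "
          f"A's best response is {best_response(b_action)!r}")
# Both lines print "don't regulate": with these payoffs, not regulating
# is a dominant strategy, even though mutual regulation (3, 3) beats
# mutual non-regulation (1, 1) for everyone.
```

With these assumed numbers, each state defects even though both would prefer the outcome where everyone regulates, which is exactly why partial or ‘democracies only’ regulation is unstable without broad buy-in.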

While getting all these different governments to work together may feel like an impossible task, there are examples of this happening with other forms of destabilising technology. The 1925 Geneva Protocol’s prohibition of chemical weapons after their use in WWI has been largely successful, as have various nuclear weapons and missile treaties (to a degree).

In those cases, the clear and horrifying consequences of use contributed to the success of regulation. AI doesn’t and shouldn’t necessarily have that stigma, but countries should still be cautious. AI can be weaponised both politically and militarily: loitering munitions, autonomous drones with the ability to kill, and target identification software are all AI-based military technologies that exist today. Politically, as AI develops it can be used to generate misinformation, target individuals, and inundate information environments with scams.

While the timelines and severity may be up for debate, AI development will undoubtedly change everyone’s lives over the course of the next few decades. But because of this dilemma, everyone needs to be on board when regulating AI technology. That is going to require diplomacy with countries that might not be very palatable in the short term, but whose cooperation is necessary in the long term.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitation technology with deep human analytical expertise covering the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media, and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.