
AI Agents: Destroyers of the world? Or more of the same?

PGI’s Digital Investigations Team brings you the Digital Threat Digest: SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


Since the public release of ChatGPT, it has been hard to avoid the explosion of promises around the potential uses of AI. As is often the case with new technology, these claims often border on science fiction: reasonable timelines are purposefully avoided and glaring technical challenges are handwaved away in the rhetorical arms race to capture funding and media attention.

That said, the core technology is indeed developing at a relatively quick pace. For now, though, most AI systems are held back by significant barriers:

  • It can’t interact with the internet
  • Its training data is a few years old
  • It’s prone to errors in logic or fact

Some of these limitations are by design; others require significant time and money to fix, with no guarantee of full success.

But the real holy grail of this industry isn’t chatbots. The real focus is on developing the next thing you feel you can’t live without: a real, useful AI assistant. And for that reason, it’s worth talking about the next AI development rapidly coming down the pipeline: AI Agents.

Agents run on top of Large Language Models (LLMs), such as the ones behind ChatGPT, to (somewhat) autonomously execute tasks given to them by a human. In theory, they can write their own code and spawn their own AI subunits that perform their own tasks. They can also remember previous tasks and adapt, learning from their mistakes.
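In practice, most Agent designs reduce to a simple loop: ask the model for the next action, execute it, record the outcome, and repeat until the task is done. Here is a minimal Python sketch of that loop; the call_llm() helper and the "DONE" convention are hypothetical stand-ins, not any specific framework’s API:

```python
# A minimal sketch of the plan-act-remember loop behind most Agent designs.
# call_llm() is a hypothetical stand-in for a real model API call.

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call (e.g. an HTTP request to an LLM API)."""
    raise NotImplementedError("Wire this up to an actual model endpoint.")

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    memory: list[str] = []  # persistent record of past actions and outcomes
    for _ in range(max_steps):
        # Ask the model for the next action, given the goal and what has
        # already been tried; this is how the Agent "learns from mistakes".
        prompt = (
            f"Goal: {goal}\n"
            f"Previous steps: {memory}\n"
            "Reply with the next action to take, or DONE if the goal is met."
        )
        action = call_llm(prompt).strip()
        if action == "DONE":
            break
        # In a real Agent this step would execute the action: run generated
        # code, call a tool, browse the web, or spawn a sub-agent.
        outcome = f"executed: {action}"
        memory.append(f"{action} -> {outcome}")
    return memory
```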

I have my doubts about how quickly this technology will meet such high expectations. However, Agents are almost tailor-made to feed the scary sci-fi stories we’ve been hearing. In the darkest future, the potential threats from Agents are as numerous as your imagination allows: take all the bad things you see on the internet now, but run by tireless, self-learning AI that can adapt to situations on the fly.

Luckily, even if that future were to come about, it’s not necessarily as bad as it sounds, because fundamentally it would have to be more of the same. There are only so many ways to surreptitiously influence an election, post fake reviews, or flood a platform with hate speech. At the end of the day, AI or human, the operator is still trying to complete a task – and that’s the bottleneck we can use to our advantage.

In this way, until they become smarter than humans, AI and AI Agents don’t present a foundationally new digital threat; they are a tool that can enhance certain aspects of existing risks. It’s important to be clear-eyed about the real nature of AI if we want to be effective at countering what comes next.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise covering the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.