
Lies, damned lies, and AI - Digital Threat Digest

PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


At their core, artificial systems are built on a series of relationships between intelligence, truth, and decision making.

Truth-seeking AI prioritises facts, remains objective, and adheres to established evidence and credible information. Intelligence in AI covers problem-solving, creativity, adaptability, and the ability to make approximations based on inputs and context. And each has its challenges; a truth-seeking AI has to acknowledge that truth is, particularly in 2024, open to interpretation. The purest definition of truth also needs complete data – any gaps and you're left with a system forced to fill them with informed guesswork. And for a system prioritising intelligence, how do you balance pragmatically achieving outcomes with the strict pursuit of truth?

The two aren’t necessarily mutually exclusive – if you ask GPT-4o whether it is truth-seeking or intelligent, it will reassure you that it is designed to balance the two based on context; i.e., to provide accurate and reliable information while also adapting to the user. You’ll notice this balance when querying – sometimes it will tell you when it isn’t sure, or when there are multiple ways of interpreting a specific topic. You’ll also notice it attempt to display intelligent empathy, adapting its tone depending on the emotion of your query. However, herein lies our next pitfall: GPT-4o reassures us that “if you notice me favouring intelligence or truth-seeking over the other in a way that doesn’t suit your expectations, you can guide me – I’ll adapt!”

Aside from the friendly exclamation mark designed to emphatically reassure me of its willingness to cooperate (intelligence) as it tells me part of its system prompt (truth-seeking), within that statement lies a sort of ultimate risk: whoever controls the balance controls the prioritisation of truth. Who ultimately makes the decision about the balance between truth-seeking and intelligence in artificial systems, and what are their intentions? Here we stray into all sorts of ethical minefields – the centralisation of power in defining what we consider truth, and in how that definition can be manipulated. Prioritisation bias, in which the cultural, economic, or social priorities of the developer shape the design of the system, can be implicit, wherein a culturally homogenous team unconsciously embeds systemic inequality from skewed training data, or explicit, wherein an authoritarian regime strategically prioritises adherence to the party line rather than objective truth – or presents the party line as objective truth. In the short term, these issues lead to a loss of user confidence in artificial systems, manifesting as an overall scepticism toward AI-generated knowledge or content. In the long term, they cause rather more concerning damage to epistemic trust. When multiple systems begin to present conflicting versions of the truth, we lose societal consensus on what constitutes a shared fact, and overall societal cohesion is damaged.

And what of lies? If the ultimate goal is a harmonious relationship between truth-seeking and intelligence, then how do we reconcile that with the fact that lying is itself a form of intelligence, demonstrating social awareness, creativity, and the ability to think strategically? Fundamentally, an AI cannot simultaneously be truth-seeking and capable of deliberately lying. It can make mistakes based on the poor quality of the data it ingests – a failure of intelligence. Or it can be programmed to deliver benevolent falsehoods, but in doing so it fails in its truth-seeking objective. And once again – if an AI can lie, then who decides when, how, and why it can do so? To open Asimov’s can of worms – should an AI adhere to the First Law by lying to protect the emotional wellbeing of one person? What if that one lie risks deceiving ten others?

If we conclude that lying, as a form of intelligence, runs contrary to the idea that credibility is the greatest strength of an artificial system, then I don’t see how we can ever have a system that is capable of balancing intelligence, truth, and decision making.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitation technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media and follows a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.