Ghosts in the machine? - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
In the mid-20th century, Gilbert Ryle threw sand in the eye of Cartesian dualism, calling the idea of a separate mind a 'category mistake' and dubbing it the 'ghost in the machine'—essentially suggesting that Descartes had outed himself as harbouring an imaginary friend. Like all great philosophical debates, this one eventually made its way to the silver screen in the form of the 2004 Hollywood blockbuster I, Robot, in which a leather-clad action hero faces down a rogue humanoid bot with more existential angst than a first-year philosophy student. Because if anyone’s going to settle the mind-body debate, it’s Will Smith in a trench coat.
At one point in the film, this machine sentience is conveniently explained by the scientist who founded the company making these robots. He asserts:
“There have always been ghosts in the machine… Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behaviour?… When does a personality simulation become the bitter mote of a soul?”
These reflections speak to the general state of AI discourse today, which similarly tends to oscillate between spectral extremes of fear and fascination. The most apt (and possibly the most interesting) recent example came with Anthropic’s announcement that its model, Claude 3.5 Sonnet, can now use computers “the way people do”. Swathes of users have been quick to stoke alarm over its ability to “CONTROL” devices, while others have actively sought to assess the extent to which this new and improved Claude is “self-aware”.
The one aspect of this development that intrigued me most, however, was the musing that during one example recording, Claude “took a break from our coding demo and began to peruse photos of Yellowstone National Park”. It’s funny, but it is also much more than a humorous quirk. Even with the promise of rapid improvement, it raises the question of how each user’s AI tool would replicate their own behavioural patterns and bear the subtle imprint of, in this case, human distraction. Would a different Claude, trained on a different person’s data, instead procrastinate by beelining for cat compilation videos?
When we talk of ‘ghosts in the machine’, we often imagine the unpredictable ways that technology might free itself from the shackles of its human overlords, and we spend less time thinking about the ways in which we haunt the machines.
It seems we shouldn’t fear AI in and of itself, but rather the unchecked social phenomena it reflects back at us (desires and attempts to misinform, mislead, enrage, manipulate; the list goes on). Every day in our work, we see chains of human decision-making, guided by human strategic aims, that are often revealed through human error and human hubris. We should probably worry less about machines surpassing people by developing autonomous minds, and more about them preserving and enhancing humanity’s most dangerous qualities. It may be an uncomfortable truth, but panic around AI’s potential to act unethically or destructively is, or should be, a response to embedded human traits, if not a more direct objection to people using these tools to amplify or automate harm in the first instance.
If in the end it's people that are the problem, then suffice it to say, I’m feeling thoroughly spooked.
Subscribe to the Digital Threat Digest
More about Protection Group International’s Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise, covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.