Ghosts in the machine? - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
In the mid-20th century, Gilbert Ryle threw sand in the eye of Cartesian dualism, calling the idea of a separate mind a 'category mistake' and dubbing it the 'ghost in the machine'—essentially suggesting that Descartes had outed himself as harbouring an imaginary friend. Like all great philosophical debates, this one eventually made its way to the silver screen in the form of the 2004 Hollywood blockbuster I, Robot, in which a leather-clad action hero faces down a rogue humanoid bot with more existential angst than a first-year philosophy student. Because if anyone’s going to settle the mind-body debate, it’s Will Smith in a trench coat.
At one point in the film, this machine sentience is conveniently explained by the scientist who founded the company making these robots. He asserts:
“There have always been ghosts in the machine… Why is it that when some robots are left in darkness, they will seek out the light? Why is it that when robots are stored in an empty space, they will group together, rather than stand alone? How do we explain this behaviour?… When does a personality simulation become the bitter mote of a soul?”
These reflections speak to the general state of AI discourse today, which similarly tends to oscillate between spectral extremes of fear and fascination. The most apt (and possibly the most interesting) recent example has come with Anthropic’s announcement that their model, Claude 3.5, can now use computers “the way people do”. Swathes of users have been quick to stoke alarm over its ability to “CONTROL” devices, while others have actively sought to assess the extent to which this new and improved Claude is “self-aware”.
The one aspect of this development that intrigued me most, however, was the musing that during one example recording, Claude “took a break from our coding demo and began to peruse photos of Yellowstone National Park”. It’s funny, but it is also much more than a humorous quirk. Even with the promise of rapid improvement, it raises the question of how each user’s AI tool would replicate their own behavioural patterns and bear the subtle imprint of, in this case, human distraction. Would a different Claude, trained on a different person’s data, instead procrastinate by beelining for cat compilation videos?
When we talk of ‘ghosts in the machine’, we often imagine the unpredictable ways that technology might free itself from the shackles of its human overlords and spend less time thinking about the ways in which we haunt the machines.
It seems we shouldn’t fear AI in and of itself, but rather the unchecked social phenomena it reflects back at us (desires and attempts to misinform, mislead, enrage, manipulate, the list goes on). Every day in our work, we see chains of human decision-making, guided by human strategic aims, that are often revealed through human error and human hubris. We should probably worry less about machines surpassing people by developing autonomous minds, and more about them preserving and enhancing humanity’s most dangerous qualities. It may be an uncomfortable truth, but panic around AI’s potential to act unethically or destructively is, or should actually be, a response to embedded human traits, if not a more direct objection to people using these tools to amplify or automate harm in the first instance.
If in the end it's people that are the problem, then suffice it to say, I’m feeling thoroughly spooked.
Subscribe to the Digital Threat Digest
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitation technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.