Tackling the new reality - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
This weekend a digitally generated audio clip of London Mayor Sadiq Khan circulated online. In the ‘deepfake’, Khan makes dismissive comments about Remembrance weekend commemorations and calls for a ‘million-man’ march in support of Palestine. The audio was shared predominantly by far-right users ahead of the weekend’s violent nationalist counter-protests.
The police reviewed the audio and stated that it did not constitute a criminal offence, demonstrating the lack of adequate legal frameworks to regulate this type of harmful and false content. So far, UK law covers the use of artificial intelligence and deepfakes only in specific cases, such as defamation, harassment, and data protection. The recent Online Safety Act also forbids the sharing of pornographic deepfakes without consent. None of these provisions covers the type of political impersonation to which Mayor Khan was subjected.
In the elections we cover at PGI, we see this type of sophisticated digital manipulation more and more often. In Slovakia, a fake audio recording of opposition politician Michal Šimečka appeared online, and in Taiwan a fake audio recording of party leader Ko Wen-je was sent to the country’s press. Often these attempts are crude and obvious, but as they become more subtle and sophisticated, their impact will increase.
We shouldn’t discount the ability of users online to adapt to these new threats. Steps can be taken to improve media literacy, renew trust in news media and encourage users to engage in a wider variety of news sources. This all helps to improve society’s resilience to digital manipulation and misinformation.
However, we still need legislation to keep up with new technology, and governments have been slow to adapt to new technological threats. The Online Safety Act took four years to pass, consultation on the EU's Digital Services Act began in July 2020, and some governments have blanket-banned certain platforms rather than attempt to tackle the complexities of regulation.
Taiwan is a good example of a government moving quickly. Because the country is continually targeted with disinformation from the PRC, its government recognises the specific danger deepfakes pose in misleading the public and inflaming political sensitivities. In response to this growing threat, this year it outlawed the use of deepfakes in cases of fraud, including the impersonation of political or government figures.
This could serve as a model for democracies elsewhere on how to regulate sophisticated computer-generated content. The protection of free speech should include legal measures to curb the degradation of the online media environment by computer-generated content, and where AI is used artistically or satirically, it should be labelled as such. This cannot be left solely to the platforms. The deepfake audio circulated this weekend isn’t the first of its kind, and it won’t be the last.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media and follows a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.