Telegrammer in the (Tele)slammer - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
In 1858, The New York Times called the telegraph (the thingy that sends individual telegrams) “trivial and paltry”, and also “superficial, sudden, unsifted, too fast for the truth”. Ouch.
Without giving mid-19th century journalists too much credit for refusing to be dazzled by the electricity-based innovations of their time, it’s as good a moment as any to revisit these qualms – not least because the recent arrest of Pavel Durov, the founder of the Telegram app, in France has re-opened this very Pandora’s box (and, funnily enough, the parallels go beyond the shared name).
French prosecutors said, in a statement released to the public, that Durov was being held in custody as part of a cyber-crime investigation covering 12 offences, ranging from complicity in the sale of drugs to complicity in the spread of CSAM. For free speech absolutists, Durov’s arrest is a sign of global censorship. Unsurprisingly, the most shared X post using '#FreePavel' was authored by the owner of the platform and self-proclaimed “free speech advocate”, Elon Musk. Chris Pavlovski, the CEO of Rumble, another 'alt-tech' app criticised for attracting and hosting extreme content, also declared on X that he had departed European soil, calling for Durov to be “immediately released” and for users to '#BoycottFrance'.
But when messages can be sent instantly, how does ‘the truth’ keep up? What is the best way to ‘sift’ through (i.e. moderate) the content itself? These questions are now widely posed, particularly as governments and civil society groups have pushed to make social media platforms both safer for, and more accountable to, their users. Nonetheless, what’s really been hammered home by Durov’s arrest is a somewhat less philosophical, more game-of-hot-potato-inducing concern: whose fault is it (or whose fault should it be) when things go wrong?
A Telegram statement asserted that “it is absurd to claim that a platform or its owner is responsible for abuse of that platform”; presumably even when said platform distributes its servers worldwide (so that any one government agency would have to get warrants from multiple countries to retrieve data), boasts about the privacy it affords users and its total lack of moderation in private channels, and has been reported as operating in accordance with a “wild west ethos”.
Moderation and regulation are one thing, but jurisdictional responsibility vis-à-vis the borderless internet is fraught with complexity and beholden to the national sovereignty prioritised by existing international frameworks. How should we regulate platforms that simultaneously operate in different countries with different laws? Which nation or entity is responsible for doing so? How closely tied is it to which passports the founder or CEO themselves has, versus where the company is registered, where it’s headquartered, or even where it has the most users? Will national interests undermine international efforts (in the same way that tax havens have a nasty habit of enabling tax evasion)?
While the case against Durov is still unfolding, the underlying difficulty is already clear: can global connectivity be squared with local, or indeed personal, accountability? A mere 166 years later, could it be that The New York Times was onto something all along?
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and the intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.