Forgive and forget - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
There's an Italian saying that God forgives but your mother does not. Tough love aside, as an atheist I often find myself wondering whether God really does forgive, especially in the digital age, where forgetting has become almost impossible.
In 2016, as part of the General Data Protection Regulation (GDPR), the European Union established the right to erasure, also known as the right to be forgotten. Article 17 has come to be recognised as a key element of privacy and personal data protection, granting individuals the right to have their personal data erased under specific conditions. These conditions include, for example, provisions for individuals who have committed crimes and served their sentences. Though each EU member state has implemented the right in a different iteration, at the core of most is the idea that a convicted individual who has served their sentence and shows remorse and growth since the criminal act can, once a certain amount of time has passed, and provided the authorities do not deem their case 'publicly useful knowledge', claim the right to be forgotten. This does not mean that all online information on the person will be deleted, but rather that it will no longer appear at the top of search engine results.
For obvious reasons, few individuals have been granted the right to be forgotten to date. From a legal standpoint, the provision is interesting because it advances an idea of restorative justice that is only just starting to be discussed and implemented in most European countries. From an OSINT standpoint, it muddies the waters. On the one hand, it decreases transparency, affecting a whole range of personal, commercial, and security-related activities. On the other, it has fostered the rise of a new sub-category of dark public relations.
This is the case with companies like Eliminalia, which promises to wipe any bad press or 'fake information' about someone from 'mass media, state gazettes, and social networks and fora'. The Spanish company promises to clean a person's online identity in seven easy steps and says that it caters to corporations, public figures, and individuals. To do so, Eliminalia claims to invoke Article 17 of the GDPR, the right to be forgotten. However, there are several inconsistencies in this story. First, the company claims decades of experience in the business, yet it opened its doors in 2011, while the legislation was only passed in 2016; this raises questions as to how it conducted business beforehand. Secondly, and most importantly, research from Qurium shows that the company engages in negative search engine optimisation and inauthentic behaviour. There is evidence that Eliminalia has created fake news websites, flooding search engines with random articles with the intention of de-indexing and sinking undesired press coverage and information. The company has also been found to plagiarise and backdate authentic articles on its news websites, then pose as EU officials and threaten to sue the original publication for copyright infringement if the article is not taken down. Finally, the company's track record shows it has worked for dubious individuals whose criminal actions would not make them eligible for digital erasure.
The truth is that Eliminalia is not the first, and will not be the last, company to offer services of this kind. With the right to be forgotten, the European Union and other jurisdictions have opened yet another digital Pandora's box. Can states provide their citizens with privacy and data protection while ensuring maximum transparency online? With the right to be forgotten, does my privacy end where the public good starts, or vice versa? In reality, what Article 17 has shown is that public administrations are still unable to legislate on digital spaces and regulate them accordingly.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and the intents of those who use them. Our experienced analyst team have a deep understanding of how various threat groups use social media and follow a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.