Lights, camera, authentic imagery - Digital Threat Digest
PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.
Last week, the world-renowned camera company Leica announced the M11-P, its first camera with Content Credentials built in. The camera is designed to guarantee the authenticity of photos taken on it, embedding each photo's metadata, including the location, camera make, picture mode, and details of any editing history, at the moment of capture. Images can also be verified through an external website or the Leica app, providing another avenue to confirm their legitimacy. Designed initially for photojournalists, the product arrives at a time when images are highly susceptible to misattribution, manipulation, and even wholesale generation by artificial intelligence. From AI-generated photos of Donald Trump to misattributed photos of conflict zones, the integrity of images is more important than ever to maintaining public trust in what we see online.
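For readers curious about what "authenticity at the point of capture" looks like in practice, below is a minimal Python sketch of the underlying idea: the device signs a hash of the image together with its metadata, and anyone holding the matching public key can later detect tampering with either. This is a deliberate simplification; the real Content Credentials standard (C2PA) uses signed manifests and certificate chains, and the function and field names here are illustrative only.

```python
# Conceptual sketch of point-of-capture signing, assuming an Ed25519
# keypair provisioned on the camera. Not the actual C2PA/Content
# Credentials format; names and fields are hypothetical.
import json
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519


def sign_capture(image_bytes: bytes, metadata: dict,
                 private_key: ed25519.Ed25519PrivateKey) -> dict:
    """Bind metadata to the image by signing a hash of both at capture time."""
    payload = json.dumps(
        {"image_sha256": hashlib.sha256(image_bytes).hexdigest(), **metadata},
        sort_keys=True,
    ).encode()
    return {"payload": payload, "signature": private_key.sign(payload)}


def verify_capture(image_bytes: bytes, record: dict,
                   public_key: ed25519.Ed25519PublicKey) -> bool:
    """Check the signature first, then check the image hash still matches."""
    try:
        public_key.verify(record["signature"], record["payload"])
    except InvalidSignature:
        return False  # metadata record itself was altered
    claimed = json.loads(record["payload"])["image_sha256"]
    return claimed == hashlib.sha256(image_bytes).hexdigest()


# Usage: the camera signs at capture; a verifier only needs the public key.
key = ed25519.Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
record = sign_capture(photo, {"camera": "example-model", "mode": "raw"}, key)
assert verify_capture(photo, record, key.public_key())          # untouched
assert not verify_capture(photo + b"edit", record, key.public_key())  # edited
```

The point of the design is that trust is anchored to the capture device rather than to whichever platform later hosts the image: any pixel-level edit changes the hash, and any edit to the embedded metadata invalidates the signature.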
One drawback is the hefty price tag of around $9,000, which will create barriers to access. The technology behind the product is still exciting, though, and will hopefully become more widely available over time to consumers, journalists, and the general public. Photo verification services are not new, but embedding this technology into everyday products such as cameras tackles the problem directly at the point of capture. In August, Canon partnered with Reuters and Starling Lab, an academic research lab, on a pilot programme exploring whether embedding metadata within images increases viewers' trust in the integrity of what they see.
This pilot and other initiatives by camera brands and technology companies are encouraging, especially given the ease with which manipulated images can spread like wildfire on social media. There are, of course, questions about accessibility and how this type of technology will develop. But for now, the move towards cementing authenticity and legitimacy within photos and the products that capture them signals a new approach to tackling misinformation. It will benefit not just photographers and photojournalists, but also members of the OSINT community who work to counter misattributed and manipulated content.
More about Protection Group International's Digital Investigations
Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise covering both the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour, and infrastructure to assess and substantiate threat landscapes.
Disclaimer: Protection Group International does not endorse any of the linked content.