
Added context on Community Notes I thought you might want to know - Digital Threat Digest

PGI’s Digital Investigations Team brings you the Digital Threat Digest, SOCMINT and OSINT insights into disinformation, influence operations, and online harms.


If, like me, you still can’t escape The Platform Formerly Known as Twitter, you may have noticed the roll-out of ‘Community Notes’. For me, this is the one change I concede to the Musk fanboys. But now even the man himself seems to have turned against the service he once championed. Last weekend, Musk stated that Community Notes was being ‘gamed by state actors’ after an annotation cast doubt on a tweet he made in support of a pro-Russian YouTuber arrested in Ukraine.

Community Notes was originally piloted in the US in January 2021 as Birdwatch, but was not rolled out sitewide until Musk took over. It works through volunteer users who can suggest notes highlighting errors, lies, or missing context on specific tweets. These submissions are then rated by their peers on their usefulness and sincerity. However, for a note to ultimately be published, users who have disagreed in their past ratings must both rate it as useful, a requirement intended to ensure ideological consensus.
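To make the mechanics concrete, here is a minimal Python sketch of that kind of ‘bridging’ publication rule, assuming a simplified model in which a note only goes live once raters with a history of disagreeing with each other both rate it as useful. The data structures and function names are illustrative assumptions, not X/Twitter’s actual algorithm.

```python
# Minimal sketch of a "bridging"-style publication rule.
# Assumption: a note is published only when at least one pair of raters who
# have disagreed on past notes both rate the new note as useful.

from itertools import combinations


def have_disagreed(history: dict, a: str, b: str) -> bool:
    """Return True if raters a and b rated at least one past note differently."""
    shared = set(history.get(a, {})) & set(history.get(b, {}))
    return any(history[a][note] != history[b][note] for note in shared)


def should_publish(note_ratings: dict, history: dict) -> bool:
    """Publish only if some historically-disagreeing pair both found the note useful."""
    useful_raters = [r for r, rating in note_ratings.items() if rating == "useful"]
    return any(have_disagreed(history, a, b) for a, b in combinations(useful_raters, 2))


# Example: alice and bob disagreed on note_1, but both rate the new note as
# useful, so the simplified rule would publish it.
history = {
    "alice": {"note_1": "useful"},
    "bob": {"note_1": "not_useful"},
}
print(should_publish({"alice": "useful", "bob": "useful"}, history))  # True
```

The production system reportedly scores notes and raters on a continuous scale rather than through a simple pairwise check, but the underlying idea is the same: agreement across the usual dividing lines is what earns publication.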

After gutting X/Twitter’s Trust and Safety team, Musk outsourced its oversight role to Community Notes. The attraction of this crowdsourced model is that it is potentially more agile, less prone to supposed liberal groupthink, and – most importantly for the company’s finances – relies on free labour.

Like I said, I quite like this more community-focused, less hierarchical approach. After all, why shouldn’t ordinary users have more agency over the content they receive? And personally, I often scroll down to read the comments on online articles to get another perspective or added context. In a way, Community Notes is a more systematic version of this approach. Even some of its detractors admit it is very effective at dealing with scams, gossip, and nonsense advertising. Musk has even demonetised fact-checked content to remove the commercial incentive for blatant grifting on the site.

The issue with Community Notes arises when it operates in highly politicised contexts, which makes reaching an ‘ideological consensus’ on content particularly fraught. Researchers and journalists who have examined unpublished Community Notes have found that, behind the scenes, the process is rife with polarised and conspiratorial arguments. There have also been repeated examples of partisan volunteers working in concert to manipulate the service.

Bellingcat identified a note that promoted misinformation about one of Taylor Swift’s bodyguards, who appears to have recently re-enlisted with the IDF. The note claimed that the bodyguard was never part of Swift’s security detail, a claim that was shown to be false. The first note was voted down after 15 hours, only to be replaced shortly afterwards by another, which was itself taken down after 11 hours. Systematic downvoting of notes critical of Russia has also been identified in relation to the country’s invasion of Ukraine.

Community Notes could learn from Wikipedia, another volunteer-driven service with a tradition of community-driven validation. One option is to incorporate ‘knowledge hierarchies’, ranking the sources themselves on bias and trustworthiness, to help deal with the spurious sources and data used by some volunteers. Another is the development of a truth-seeking ethos among its volunteer community. This is something that may come with time, as contributors build experience, but X/Twitter could encourage it through initial eLearning for its volunteers.
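As a rough illustration of what a ‘knowledge hierarchy’ could look like in practice, the hypothetical sketch below down-weights notes that lean on low-trust sources and flags them for extra scrutiny. The trust scores, domains, and threshold are invented for the example; nothing here reflects an existing X/Twitter or Wikipedia system.

```python
# Hypothetical "knowledge hierarchy": rate a note's sourcing against a trust
# table and flag weakly sourced notes for review. All values are illustrative.

SOURCE_TRUST = {
    "reuters.com": 0.9,
    "apnews.com": 0.9,
    "example-blog.net": 0.3,
}


def note_source_score(cited_domains: list[str], default_trust: float = 0.5) -> float:
    """Average trust score of the sources a note cites (unknown domains get a neutral default)."""
    if not cited_domains:
        return default_trust
    return sum(SOURCE_TRUST.get(d, default_trust) for d in cited_domains) / len(cited_domains)


def needs_review(cited_domains: list[str], threshold: float = 0.6) -> bool:
    """Flag notes whose sourcing falls below the trust threshold for extra scrutiny."""
    return note_source_score(cited_domains) < threshold


print(needs_review(["example-blog.net"]))  # True: a weakly sourced note gets flagged
```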

However, the experience of Community Notes so far also suggests that community-based verification is, in and of itself, insufficient. Musk’s ideological opposition to Trust and Safety teams should give way to the realisation that X/Twitter needs comprehensive and neutral oversight of content, especially on critical and divisive issues. A hybrid approach would make this promising experiment Musk’s first successful change to the site.


More about Protection Group International's Digital Investigations

Our Digital Investigations Analysts combine modern exploitative technology with deep human analytical expertise that covers the social media platforms themselves and the behaviours and intents of those who use them. Our experienced analyst team has a deep understanding of how various threat groups use social media, and follows a three-pronged approach focused on content, behaviour and infrastructure to assess and substantiate threat landscapes.

Disclaimer: Protection Group International does not endorse any of the linked content.