2024 Elections – Digital threat forecast
PGI’s Digital Investigations Team have outlined the primary digital threats posing significant risks to elections across the world in 2024.
2024 is set to be a monumental year for democracy, with over two billion people across 50 countries going to the polls to elect representatives at local, national, and supranational levels. This includes elections in some of the world’s most populous countries, such as India, Brazil, Indonesia, and the US.
While this election year will certainly be a milestone in the long evolution of democracy, many of these elections take place against a backdrop of increasing divisions in international relations, an uptick in populist politics, and widespread disenchantment with political representation in some of the world’s most developed democracies.
All of these issues transcend real-world and online spaces, creating distorted and muddied political landscapes ripe for exploitation. Against this backdrop, PGI’s Digital Investigations Team have outlined the primary digital threats posing significant risks to elections across the world in 2024.
Electoral misinformation and disinformation will likely remain highly prevalent in elections across the world in 2024. Online threat actors, like pseudomedia entities, will likely continue sharing content designed to sow distrust in the electoral process for both ideological and commercial gain. Across geographies, false narratives are likely to target voting systems and the integrity of electoral institutions, particularly in closely contested elections.
This type of disinformation at scale will have a significant impact on highly volatile political environments and those with a history of false fraud claims in previous elections. Such narratives played a significant role in elections across the world in 2023, including in Nigeria, Spain, and Turkey, often pushed by politicians and candidates – a behaviour which is likely to continue over the coming year.
Disinformation targeting the integrity of elections can also drive dangerous real-world behaviour, including disruption of democratic processes and post-election violence. This has been witnessed over the past year in countries such as Brazil, where supporters of former President Jair Bolsonaro stormed Congress in January 2023 alleging institutional election fraud.
Hate speech perpetrated by extremist organisations and political parties in online spaces will likely pose a significant risk to elections across the world, particularly in the US, Indonesia, Europe, and India.
These entities will likely exploit polarised information environments and target wedge issues such as immigration and religion to seed hateful discourse towards minority groups, as well as to recruit and mobilise users in less-monitored digital spaces. These behaviours will likely be seen during the campaigning period ahead of the June European Union elections, particularly in light of the growing popularity of ultra-nationalist ideologies and politicians across the continent in 2023.
There is also a heightened risk of ideologically motivated real-world violence around election periods. Around the November US election in particular, we are likely to see an increased presence of extremist groups and armed militias who claim to protect electoral integrity while sharing ultranationalist viewpoints and encouraging civilians to take up arms. Similarly, heightened levels of anti-Rohingya discourse online ahead of the February Indonesia general election have already triggered violent confrontations between protesters and refugees. In India, meanwhile, the Hindutva ideology, which has millions of supporters and has incited hatred against civilians and political candidates from minority religious communities both online and offline, will likely impact the April-May general election.
Foreign state-backed influence operations (IOs) targeting elections are highly likely to be a persistent and significant threat in 2024. The aim of foreign IOs targeting elections is to create a divisive and distorted information environment. This in turn triggers confusion and fuels voter polarisation, while instilling public distrust in leaders and the electoral process.
Recent reports have outlined how Russia- and China-linked IOs have targeted the US to exploit domestic socio-political divisions. Similar state-linked campaigns will likely increase in prevalence in the coming year, capitalising on wedge issues, such as US spending on Ukraine, to sow discord ahead of the November US election.
Foreign IOs will also likely target governments with which they share an ideological alignment in an attempt to strengthen bilateral relations. For example, there is an increased risk of Russian interference targeting the upcoming elections in South Africa, a fellow BRICS nation, set for Q2-Q3 2024. The Kremlin-linked RT News will likely build on the physical base it established in South Africa in 2022, and covert influence operations have already been found to inflame inter-racial and intra-African National Congress tensions, as well as to promote pro-Russian propaganda in relation to the war in Ukraine.
AI-generated content will likely play a greater role in elections in 2024 as threat actors and political campaigns continue to embed AI techniques within their content-producing toolkits. AI-manipulated and generated media will likely be used by inauthentic entities to deceive voters, as well as by official election campaigns as promotional material.
However, the use of sophisticated AI-generated content and technically manipulated media aimed at sowing distrust in candidates and electoral processes will likely be limited, with the majority of AI-generated media being low-quality in nature and easily discernible by ordinary online users.
As a result, the risk of AI to elections in the medium term is often overstated. Threat actors certainly have the ability to weaponise AI effectively, as shown over the past year in the US, where the Republican Party released an ad featuring AI-generated images visualising a ‘dystopian world’ under a re-elected President Joe Biden, and in Moldova, where President Maia Sandu was forced to refute claims made in a Russian-produced deepfake video of herself. However, AI-generated content had limited influence on the vast majority of elections in 2023, and disinformation campaigns are currently succeeding organically by exploiting societal rifts.
At present, the risk of AI to elections is centred more on the uncertainty of its potential than on its current impact.
Heightened levels of targeted harassment and doxxing are likely in 2024, following a spike in threats against election workers and politicians over the past year in countries including New Zealand, Sweden, the US, and Japan. Going forward, targets of online harassment campaigns are likely to include political candidates, election workers, journalists, activists, and members of the judiciary. This is most likely to manifest in highly polarised political environments.
These threats will likely entail the dissemination of Personally Identifiable Information (PII) online (such as targets’ home addresses, family members, and phone numbers), as well as online harassment campaigns designed to undermine their legitimacy. In the US, this has manifested in a phenomenon known as ‘swatting’, a form of harassment in which false calls to law enforcement trigger an armed police raid on the target’s house. Most recently, this targeted Maine’s Secretary of State, Shenna Bellows, in December 2023.
Digital forms of harassment can also be a precursor to inciting physical violence against journalists and civil society members. We have already seen this in the recent 7 January 2024 election in Bangladesh, where Awami League supporters attacked reporters at voting stations. Separately, in Mexico, high-profile politicians and criminal groups frequently attack and harass media workers, making it one of the most violent countries in the world for journalists.
The 2024 election threat landscape is complex. Misinformation, disinformation, hate speech, state-backed influence operations, and targeted harassment are likely to impact electoral integrity in many of the 50 countries going to vote.
In the year ahead, vigilance and critical thinking will be vital if democracies are to navigate the nuances of these digital threats, and knowing what those threats are is just the first step.
Our team of experts help make sense of the whole information environment. Through research, intelligence reporting and capacity-building programmes, we help clients boost information resilience. If you would like to know more, please get in touch.