2024 was a landmark year for democracy. It was also a battleground against misinformation and disinformation.

As billions turn to digital platforms for political news, tech giants face increasing scrutiny over their efforts to safeguard elections. Yet regulating the emerging threat of misinformation and disinformation raises critical concerns about free speech, privacy, and trust in government.

In 2024, we asked: can we protect democracy without stifling open communication?

In the second part of Nexus APAC’s retrospective series on the key themes of the ‘super election year’, Nexus Analyst Adelaide Hayes explores the challenges presented by digital disinformation and misinformation for elections in 2024 – and beyond.

The Threat of Misinformation and Disinformation

Whilst misinformation and disinformation both involve spreading false information, their key difference lies in intent.

Misinformation is the unintentional or inadvertent spread of incorrect information without specific intent to harm.

Disinformation is the deliberate spreading of falsehoods or manipulated facts, often designed to evoke an emotional response, sow discord, or achieve political objectives. Disinformation operates as a tool of political manipulation – fostering cynicism, apathy and distrust.

A growing consensus asserts that while these phenomena are not novel, the digital age has exponentially magnified their reach and impact. Social media has amplified the scale and speed at which false information can evolve and be disseminated – especially given that users are unrestrained by journalistic, academic or scientific standards.

Accordingly, the World Economic Forum rated mis- and disinformation as the top short-term threat of the 2024 election season, ranking it ahead of extreme weather events, interstate armed conflict and inflation.

During election periods, false information fosters mistrustful sentiments about democracies, polarises communities, promotes conspiracy theories, corrupts polling data, misleads voters, and even discourages them from voting altogether. At its worst, disinformation can incite violence and compromise national security.

Disinformation is often discreet. For example, NewsGuard’s 2024 US Election Misinformation Monitoring Center identified 1,283 partisan sites masquerading as politically neutral local news outlets. Many of these sites were covertly funded by political organisations without disclosure to readers, and adopted innocuous names like “The Philadelphia Leader” or “The Copper Courier.”

The role of Artificial Intelligence in the spread of false information

When people think ‘disinformation’, artificial intelligence (AI) frequently comes to mind.

New technologies, including AI-enabled data infrastructure and generative artificial intelligence (GAI), are enabling hostile actors – domestic or foreign – to create disinformation and manipulate information with a sophistication that preventative mechanisms do not currently match. When unsuspecting social media users disseminate this material, misinformation ensues – sometimes on an enormous and irreversible scale.

In 2024, elections around the world were impacted by AI disinformation. Researchers from the Centre for Emerging Technology and Security (CETaS) at the Alan Turing Institute identified 16 confirmed viral cases of AI disinformation or deepfakes during the 2024 United Kingdom general election.

Eleven viral cases were identified in the 2024 EU and French elections combined.

In the United States presidential election, Russian influence actors disseminated a staged video of individuals claiming to be Haitian immigrants voting in multiple counties in Georgia. In January 2024, robocalls using synthetic speech mimicking President Joe Biden’s voice discouraged New Hampshire voters from voting in the state’s primary.

At this stage, studies indicate that users are reasonably adept at figuring out when a photo is AI-generated. Some posit that this is a symptom of toxic information systems that make us increasingly suspicious of political news. Others maintain that most GAI-generated disinformation has a “tell”. In the context of imagery, this includes uneven lighting, two right hands, or some other perceptible anomaly.

In the near future, deepfakes could be indistinguishable from non-AI-generated content and potentially disseminated at a scale far too large for humans to review and moderate. Whilst AI is not currently the primary culprit in the spread of false information, it is likely to become increasingly dangerous throughout upcoming election cycles.

Current efforts to combat false information

Election years illuminate what is at stake when technology is unrestrained and moving faster than society can control it. As such, there is increasing pressure on digital platforms, governments and international institutions to act.

Digital platforms

Whether it is AI-generated fake news, manipulated political ads, or outright hoaxes, the capacity of online platforms to swiftly detect, evaluate, and neutralise dis- and misinformation was put to the test in 2024.

Acknowledging their potential role in undermining electoral integrity, tech giants such as Google, TikTok, and Meta made their 2024 election plans public. However, a critical evaluation of these plans against international best practices, such as the European Commission’s Digital Services Act (DSA) and the International Foundation for Electoral Systems (IFES) guidelines, reveals significant gaps and oversights.

Google, Meta, and TikTok’s plans might appear comprehensive in theory, but they fall short in key areas, particularly when it comes to post-election reviews, transparency, and proactive disinformation countermeasures. Furthermore, there are few incentives for these profit-driven entities to combat the spread of falsehoods.

Australia: a case study for government responses to false information (or lack thereof)

Around the world, many countries are considering legislation to suppress specific types of misinformation or require online platforms to suppress it. In democracies, censorship entails controversy. Notably, censorship restricts people’s right to free speech – an essential natural freedom protected in the Universal Declaration of Human Rights and international law.

The Australian Government introduced the Communications Legislation Amendment (Combatting Misinformation and Disinformation) Bill 2024 into Parliament in September 2024. However, the Albanese Government was unable to secure the support needed to pass the legislation, with the Coalition, the Greens and the Senate crossbench all opposing the Bill.

Leader of the Opposition, the Hon Peter Dutton MP, maintained an anti-regulatory sentiment. “Labor’s dangerous misinformation bill has been scrapped – a win for free speech for our democracy. This legislation was a scandalous attack on free speech, with censorship at its core,” he stated.

Representing the Minister for Communications in the Senate, the Hon Jenny McAllister put the other side of the debate.

“It is incumbent on democracies to grapple with these challenges in a way that puts the interests of citizens first and protects our society against those that would use our openness against us,” she said.

Under the proposal, the Australian Communications and Media Authority would have been given power to monitor digital platforms and require them to keep records about misinformation and disinformation on their networks. The Bill incorporated carve-outs for professional news, parody and satire, and the reasonable dissemination of content for any academic, artistic, scientific or religious purpose.

In Australia, it is an offence under the Commonwealth Electoral Act to publish or distribute anything likely to mislead or deceive an elector regarding the casting of a vote during the election period. However, there is currently no obligation imposed on digital communications platform providers to actively respond to this type of disinformation being disseminated on their platforms.

As it stands, Australia will head into its next federal election without a framework for combatting misinformation or disinformation.

Global Frameworks

Recognising the threat disinformation poses to the safety and wellbeing of citizens, democracies, societies and economies, the Organisation for Economic Co-operation and Development (OECD) (of which Australia is a member state) published a comprehensive framework for institutions to tackle disinformation and strengthen information integrity in March 2024. The report guides countries in the design of policies, including upgrading governance measures and public institutions to uphold the integrity of the information space.

According to OECD Secretary-General and former Australian Minister for Finance, the Hon Mathias Cormann, “No single democracy can solve the problem of rising disinformation on its own, but every democracy can support independent and diverse journalism, encourage accountability and transparency of online platforms, and help build citizens’ media literacy to encourage critical consumption of content, to address the challenge of disinformation and its corrosive effect on trust.”

Where to Next?

2024 showcased the profound challenges posed by disinformation and misinformation alongside the global efforts to counter these threats. While technology continues to evolve at a staggering pace, society’s capacity to regulate its misuse remains limited. Striking the right balance between safeguarding free speech and protecting electoral integrity is essential but fraught with complexity.

The Australian example shows that democracies are cognisant of the threat of mis- and disinformation – but are yet to work out how to combat these phenomena, how constituents respond to regulation, or to what extent regulating information falls within the government’s remit.

As governments, digital platforms, and international bodies grapple with solutions, one principle stands clear: democracy depends on an informed and engaged electorate.

Empowering citizens with media literacy, fostering transparency in tech, and implementing measured, collaborative regulations are critical steps toward ensuring that truth prevails in shaping the democratic process.

The lessons of 2024 remind us that vigilance and innovation are essential to maintaining the integrity of elections in an increasingly digital world. As each electoral cycle goes by, the stakes are magnified even further.

Stay tuned on the Nexus APAC Insights page for more political updates.