
Cyber views on disinformation policy


The role of disinformation in elections first came to widespread attention during the 2016 US presidential race. Six years later, Mandiant researchers have again raised concerns about foreign election interference, reporting on Chinese disinformation campaigns aimed at discouraging people from voting. Yet even though foreign information operations frequently target the integrity of information around elections, policy responses remain relatively cautious, focusing on content moderation or education. While content moderation and media literacy policies certainly have a place in countering false narratives, countering foreign information operations may require more than that. As the security industry increasingly treats information operations as part of the cyber realm, governments may benefit from considering a more cyber-centric approach to disinformation.

Misinformation and disinformation can both destabilise a country's information integrity, spread mistrust in authorities, and threaten national security. It is necessary to distinguish between them, however, because each is associated with different actors and behaviours. Misinformation is the unintentional spread of false information, such as sharing a social media post without fact-checking its contents; almost anyone is likely to spread misinformation from time to time. Content moderation and media literacy policies are arguably well placed to tackle misinformation, a disorganised phenomenon driven by individuals and amplified in echo chambers. Disinformation, in contrast, is the deliberate spread of false information with the intent to cause harm; it is often associated with foreign state actors and is far less affected by content moderation.

When considering disinformation policies, policymakers must contend with several challenges. First, by approaching misinformation and disinformation in the same way, they struggle to address the distinct features of each: policies that target misinformation, such as education campaigns or content moderation incentives, may be ineffective against intentional and targeted disinformation campaigns, which can adapt to such measures. Second, policies risk sparking controversy and have therefore been approached with caution, since determining which information is true or false, and restricting the publication or spread of the latter, risks infringing on free speech and privacy. Finally, targeting the channels through which misinformation and disinformation spread, such as private messaging services, may be difficult without jeopardising data and privacy protections.

The disinformation policy landscape 

The EU is frequently a forerunner in digital regulation, and it has launched two notable policy interventions on misinformation and disinformation; both are primarily concerned with content moderation, research, and transparency. First, in July 2022, the EU published an updated version of its voluntary Code of Practice on Disinformation. Second, the Digital Services Act will include a provision applying only to the largest platforms, requiring companies to take steps to mitigate the negative impact of disinformation, with fines of up to 6% of global revenue levied in the event of non-compliance. 

Meanwhile, the United Kingdom proposes a different path in its Online Safety Bill, which would require services to proactively search for and remove state-linked disinformation. It would also create a legal obligation for larger platforms to set out clear policies for dealing with disinformation. 

In the US, the lack of bipartisan agreement on censorship and free speech has made policy action on disinformation difficult. In August 2022, Representative Donald S. Beyer introduced a bill that would establish a commission to create a national strategy for media literacy education in schools. In 2021, Senator Amy Klobuchar introduced a bill in Congress that would create a new carveout in Section 230 of the Communications Decency Act, allowing an online platform that disseminates health disinformation to be treated as a publisher and held accountable for its content. 

The cyber-centric approach 

As these examples demonstrate, current disinformation policies prioritise content moderation and media literacy education. However, policymakers' thinking has already begun to shift, as the European Commission stepped up its efforts against state-backed disinformation campaigns after the invasion of Ukraine. The ‘Defence of Democracy’ package, expected in the summer of 2023, is anticipated to take a more cyber-centric approach to foreign interference. Additionally, disinformation is identified as a threat in the EU's Strategic Compass for Security and Defence. 

This follows a trend in the security industry towards identifying disinformation as a cybersecurity threat and, therefore, towards using cyber tools to mitigate it. The trend has contributed to a significant increase in demand for Cyber Threat Intelligence (CTI) services, and indeed to an emerging start-up ecosystem specialised in gathering open-source intelligence on cyber threat actors' identities, motives, and behaviours to detect cyber and physical threats online. These services frequently combine artificial intelligence and natural language processing with human analysis. 
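
To make this concrete, the sketch below illustrates in Python one very simple behavioural signal such tools might compute: flagging pairs of accounts that post near-identical text, a weak indicator of coordinated amplification. The account names, posts, and similarity threshold are invented for illustration; production CTI platforms combine far richer signals, such as posting times, network graphs and machine-learning classifiers, with human analyst review.

```python
# Hypothetical, minimal sketch of one CTI-style heuristic: treating
# near-duplicate posts across accounts as a weak signal of coordinated
# amplification. All accounts, posts and the threshold are invented.
from difflib import SequenceMatcher
from itertools import combinations

posts = [
    ("account_a", "Don't bother voting, the result is already decided."),
    ("account_b", "Dont bother voting - the result is already decided!"),
    ("account_c", "Great weather for election day tomorrow."),
    ("account_d", "Don't bother voting, the result is already decided."),
]

SIMILARITY_THRESHOLD = 0.85  # assumed cutoff for "near-duplicate" text

def similarity(a: str, b: str) -> float:
    """Return a 0..1 ratio of matching characters between normalised strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Pairs of accounts posting near-identical text are escalated for human
# review, rather than being judged true or false by the tool itself.
suspicious_pairs = [
    (user_1, user_2)
    for (user_1, text_1), (user_2, text_2) in combinations(posts, 2)
    if similarity(text_1, text_2) >= SIMILARITY_THRESHOLD
]

for user_1, user_2 in suspicious_pairs:
    print(f"Possible coordination: {user_1} <-> {user_2}")
```

Even a heuristic this crude shows why the problem sits naturally in the cyber realm: the detection logic analyses actor behaviour at scale rather than adjudicating the truth of any individual post.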

Content moderation is a politically controversial tool, as seen in debates around billionaire Elon Musk’s Twitter takeover, and media literacy strategies will take time to show effect, so governments may look to CTI for solutions. Public-private partnerships already exist: many security businesses collaborate with governments, the civil service and security agencies to take down malicious bot networks and identify foreign information interference. The challenge for the emerging cyber threat industry focused on information operations is to use this momentum to shift the framing of online disinformation from a social issue into the cybersecurity realm, and to ensure the sector is at the forefront of policymakers' minds when disinformation is on their agenda.
 

    The views expressed in this research can be attributed to the named author(s) only.