Long before the COVID-19 pandemic, the internet was fertile territory for the spread of dangerous disinformation. Hostile states and malicious or misguided individuals quickly adopted the online sphere as a means of disseminating misleading and harmful material to a global audience for personal, financial or political ends. Steps were already being taken around the world to tackle the scourge of disinformation, often igniting concerns about freedom of speech. The global spread of the coronavirus has laid bare the lethal backdrop to this debate and galvanised social media giants and governments alike to tackle what the World Health Organisation ('WHO') has described as a massive 'infodemic' accompanying the disease – an over-abundance of information, some accurate and some dangerously false, often leaving the public bewildered and vulnerable.

In reality, the tragic public health consequences of online disinformation have been apparent for years. For example, the WHO estimates that, in 2018, 142,000 children aged under five died as a result of an entirely preventable global surge in measles cases, caused not only by poor health care provision but also by misinformation campaigns about the risks of immunisation. The UN estimates that measles deaths in 2019 were likely to have been higher still.

Against the rising tide of online disinformation, governments around the world have for some time been junking older regulatory models in favour of tighter regulation. Some of the most draconian measures include China's policy of internet censorship, a publicly available app for reporting 'online rumours' to the authorities, and the broadcasting of state-approved 'real' news by Chinese social media giants Weibo and WeChat. Singapore's Protection from Online Falsehoods and Manipulation Act outlaws any statement deemed prejudicial to Singapore's public health, security or foreign relations, or – more darkly – any statement which may diminish confidence in the government.

Closer to home, in 2018, the EU agreed an action plan against disinformation aimed at improving detection, co-ordinating responses, mobilising the private sector, and raising awareness of a problem which jeopardises the integrity of democratic processes as well as the health and well-being of citizens across Europe. The action plan included a Rapid Alert System for sharing insights into, and tackling, emerging disinformation campaigns. In April 2019, the UK Government issued its controversial Online Harms White Paper, foreshadowing the introduction of a novel duty of care on social media providers and other in-scope companies to keep their users safe and to tackle harm caused by content on their platforms.

The UK Government's rather muted initial response to its Online Harms Consultation was released in February 2020, with draft legislation promised for later in the year. Though the COVID-19 pandemic is likely to sweep away the planned legislative timetable, the urgent need to tackle dangerous online disinformation about the virus has given new impetus to self-regulation by the social media giants and caused the UK Government to dust off some old Cold War methods.

Following earlier concerns about anti-vax messages circulating online, social media companies had for some time been taking voluntary steps against disinformation concerning health issues. In 2017, Pinterest announced that it would ban content promising false cures for terminal or chronic medical conditions. In February 2019, YouTube removed advertisements from videos promoting anti-vaccination content, and in March 2019, Facebook issued a statement that it would reduce the ranking of groups and pages spreading disinformation about vaccines.

Following the outbreak of COVID-19, Twitter announced that, while it could not 'police' every tweet, it would delete those which risked harm by spreading dangerous disinformation about the virus, including tweets contradicting health authority guidance on the effectiveness of social distancing, promoting false and in some instances dangerous 'cures', or claiming that some nationalities were more susceptible than others. Then, on 17 March, the world's largest social media companies, including Facebook, Google, Microsoft and Twitter, put out a joint statement promising to combat COVID-19 fraud and disinformation. Details of how they intend to go about this were not published, but it is understood they will co-ordinate with US Government healthcare agencies. In April, it was reported that Facebook-owned WhatsApp was restricting the frequency with which messages could be forwarded by users, in a bid to impede the dissemination of COVID-19 conspiracy theories.

In a domestic bid to tackle disinformation about the virus, the UK's Cabinet Office has formed a Rapid Response Unit ('RRU') to examine ways of countering harmful online narratives, of which it estimates there may be as many as seventy per week. It is not the first time the Government has used such measures: in 2018, it announced a dedicated national security communications unit to combat disinformation from hostile states, and between 1948 and 1977 the Foreign Office operated an Information Research Department to counter the effect of Soviet propaganda aimed at the West. The work of the Cabinet Office's new RRU includes direct rebuttals of disinformation on social media and working with platform providers to remove harmful online content. Other Government measures include funding overseas humanitarian networks challenging disinformation about the pandemic in South-East Asia and Africa which is then disseminated worldwide, and reprising its online awareness campaign, 'Don't Feed the Beast', which encourages people to be wary of inadvertently promoting disinformation by 'liking', commenting on and sharing harmful online content.

More recently, the UK Government has taken the fight against disinformation directly to mainstream media, with the Culture Secretary publicly criticising TV station London Live for giving airtime to a well-known conspiracy theorist who implied the spread of the coronavirus was attributable to the 5G mobile network. This baseless conspiracy theory, attributed to hostile state actors and anti-vax campaigners amongst others, has sparked criminal damage to mobile phone masts in Birmingham, Merseyside and Co. Donegal in Ireland, as well as verbal and physical threats to telecoms engineers. The broadcasting regulator, Ofcom, earmarked as the UK's future online harms regulator, has formally sanctioned London Live for potentially causing significant harm to viewers over its 5G broadcast, and issued guidance to ITV after finding that a presenter on This Morning made 'ill-judged' on-air comments which risked undermining viewers' trust in scientific evidence about the technology.

As Twitter implicitly acknowledged when announcing its own measures to tackle disinformation about COVID-19, even with the rise of algorithmic methods, the sheer volume and variety of online information make it impossible to prevent all online harms. Nevertheless, the steps now being taken by the leading social media companies, often in conjunction with governments, show what can still be achieved within a self-regulatory framework at times of great exigency, and the lessons learned are likely to have a significant impact on the wider online harms debate in the future.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.