
Coronavirus is black magic? Misinformation as a public health hazard

The spread of ‘fake news’ has been rampant in recent times, with platforms like WhatsApp and Facebook being weaponized to spread misinformation. Content shared on these platforms often goes viral regardless of its authenticity, largely unvetted by traditional gatekeepers, and this lack of screening facilitates the spread of false information. ‘Fake news,’ however, is not limited to the realm of social and political news; it extends to science and health as well. False information about the Ebola and Zika viruses, for instance, has spread far and wide on social media in the past, sometimes reaching more people than legitimate medical information. More recently, misinformation about how quickly the coronavirus spreads, and claims that it might be an instance of bio-terrorism, has led to distasteful instances of xenophobia.

Fake news takes many forms, as we have seen in the case of COVID-19, and all of them can be extremely detrimental from a public health point of view. One form is misinformation built on partial or false facts: some have argued, for instance, that the virus cannot survive in higher temperatures and therefore would not affect hotter countries, a claim contradicted by its presence in India and other countries with hot and humid climates. Another form is disinformation, the intentional transmission of falsehoods to serve certain underlying motivations. More insidious still is mal-information, information grounded in fact but used to inflict harm on a person or a social group. Calling the virus ‘the Chinese virus,’ for example, harms people who are, or are perceived as, Asian, and fuels xenophobia and hate crimes.

Many of us, acting on incomplete and sometimes false information, have overreacted, for instance by hoarding essential goods such as sanitizer and toilet paper. Others have under-reacted, continuing to visit crowded places and inadvertently becoming carriers of the virus. We also know that preventable diseases like measles are making a comeback because of skepticism about vaccine safety, which travels far and wide thanks to social media. Such false information is further fueled by trolls and bots, which use effective and persuasive communication strategies, such as emotional language, to exacerbate political polarization around these issues.

So the question is: what makes us believe or disbelieve information? One cause might be groupthink and echo chambers. We tend to follow, and are most easily influenced by, people who think like us, which in turn trains algorithms to suggest our profiles to other like-minded users. YouTube, for instance, is known to suggest videos that can radicalize an average user gradually, in boiling-frog fashion. Once we are inside these rather homogeneous groups, our beliefs are further consolidated by cognitive processes like confirmation bias and information avoidance. If you join a Facebook or WhatsApp group of people who believe that alternative or natural therapies work, you might first be shown, or added to, other similar groups. Once in the group, you might be led to articles (including apparently scientific ones) arguing for natural or home remedies, and because these claims are congruent with your existing beliefs, you are less likely to question them. If you do encounter information that contradicts your beliefs, you might ignore it, or find it discomforting and hard to process. Then, because you have been repeatedly exposed to the congruent information, it becomes consolidated in your mind, and you start to treat it as objective fact rather than belief.
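To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. The belief vectors, group names, and update rule are all toy assumptions for the example, not any platform's actual recommendation algorithm.

```python
import numpy as np

# Toy echo-chamber feedback loop. The groups, belief vectors, and
# update rule below are hypothetical illustrations only.

def cosine(u, v):
    """Similarity between two belief vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

# Each group is summarized by an average belief vector:
# [sympathy for natural remedies, sympathy for mainstream medicine].
groups = {
    "natural_remedies": np.array([0.9, 0.1]),
    "mainstream_medicine": np.array([0.1, 0.9]),
}

user = np.array([0.6, 0.4])  # starts mildly sympathetic to natural remedies

for step in range(5):
    # The recommender suggests the group most similar to the user...
    suggestion = max(groups, key=lambda g: cosine(user, groups[g]))
    # ...and joining it pulls the user's beliefs toward that group's.
    user = 0.8 * user + 0.2 * groups[suggestion]
    print(step, suggestion, np.round(user, 2))
```

Run it and the same group is suggested on every iteration while the user's beliefs drift steadily toward it; the recommender never has a reason to surface the other group. That is the echo chamber in miniature, and confirmation bias does the rest.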

Indeed, even when counter-evidence is provided, those who share misinformation often do not change their original beliefs. We see this with political misinformation, too: when someone on the other side of the political aisle tries to show us how wrong we are, don’t we usually ignore them? The same goes for health misinformation. If we believe the virus survives for 12 hours, it is very difficult to update that belief, especially when we are operating on incomplete knowledge. Relying on emotions also makes us more likely to believe false news; in times like these, our first responses are fear and panic, which do not help, and news and social media are awash with threatening information. Further, those who share misinformation are often unwilling or unable to engage in analytic thinking, which leads them to share content from less reliable sources. And what is less reliable than the internet?

With gatekeeping mechanisms absent, user-generated memes and short videos disseminate information without any nuance. And because images and videos are easier to process than text-based media like articles, they are more likely to go viral. So when ‘Nurse Holly,’ who is followed by 1.7 million users on TikTok, says that the best way to prevent STIs is to wait until marriage, she reaches more people than a scientific article refuting the claim. Instagram is no better, especially because it is extremely popular among the most impressionable users; it is, after all, where ridiculous diet pills have been advertised and where influencers make incredibly untrustworthy lifestyle and diet claims.

So, what can we do to curb the rampant spread of misinformation, especially when the consequences are dire? The good news is that we are, on the whole, good at discriminating falsehoods from truth, and we prefer truth to lies. The problem is that when we share information online, we often do not even consider whether it is true or false: our limited attention is focused on other aspects of social media, such as how many likes we might get. We spend it worrying about feedback rather than the veracity of what we share, not least because fact-checking demands far more cognitive effort than, say, checking how many shares we got.

If we do not actually want to share falsehoods, yet do so unintentionally, what can we do to control misinformation sharing? Given that analytic thinking is negatively related to sharing falsehoods, one possible solution is to ask people to reflect on whether what they are sharing might be false. Further, science knowledge is positively linked to believing true news, and negatively linked to sharing false news about the virus. Even though it might be difficult to raise our collective science knowledge right now, we could deploy doctors, nurses, public health professionals, and health journalists to do the job for us (and many of them already are). We could also have scientific experts dispel myths regularly on public platforms, such as newspapers and social media.

You might argue that people do not actually listen to experts. Research in communication science tells us that those who reflect carefully are less likely to be swayed by the persuasiveness of a message alone. But such careful reflection is not always possible, and so we need celebrities and social media influencers, who both reach a larger share of the population than the average science communicator and are generally perceived positively by their followers. True, the information they provide is not always reliable on its own; more often than not, they promote hoax goods and pseudoscience. But they have the reach that most of us, experts included, do not. It might likewise help to rely on trustworthy celebrities and influencers from across the political aisle.

Research on persuasion also points to the benefit of framing a message around its benefits to the consumer and to the people they care about. To be convincing, for instance, a message should underscore how social distancing protects both you and others, such as your parents or grandparents. Framing the message in moral terms might also help. The hopeful news is that, given how immediately relevant the pandemic is, people are likely to take it seriously, unlike, say, climate change, whose consequences are comparatively long-term. The open question is how much it will be politicized within countries, communities, and cultures.

Even so, debunking misinformation will only get us so far. Given the sheer volume of misinformation in circulation, it is very unlikely that everyone exposed to it will also encounter the correction. A more useful mechanism might be to pre-emptively expose people to small doses of misinformation so that they become less susceptible to fake news later. For example, if we show people two news articles and tell them that one is false, they may be psychologically inoculated against encountering misinformation, and hence pay enough attention to identify the correct information. For algorithmic social media like YouTube or TikTok, an algorithmic intervention might be possible; for a more private messenger like WhatsApp, behavioural interventions are a possibility. Recent research, for example, indicates that subtle nudges toward accuracy before sharing news can prevent us from sharing incorrect news. This could be done through prompts on social media sites reminding us to share verified information.
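As an illustration of what such an accuracy nudge could look like, here is a minimal sketch in Python. The prompt wording, function name, and share flow are assumptions made for the example, not any platform's actual implementation.

```python
# Hypothetical sketch of an accuracy nudge in a share flow. The
# prompt wording and the publish callback are illustrative
# assumptions, not a real platform's API.

def share_with_accuracy_nudge(post, publish):
    """Ask the user to reflect on accuracy before publishing a post."""
    answer = input(
        f"You are about to share: {post!r}\n"
        "To the best of your knowledge, is this claim accurate? [y/n] "
    )
    if answer.strip().lower() != "y":
        print("Post not shared. Consider checking a trusted source first.")
        return False
    publish(post)
    return True

if __name__ == "__main__":
    share_with_accuracy_nudge(
        "The virus cannot survive in hot and humid climates.",
        publish=lambda post: print("Shared:", post),
    )
```

Note that the nudge does not block sharing; it simply redirects attention to accuracy at the moment of decision, which is precisely where, as discussed above, our attention tends to lapse.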

Urgency is required at all levels. For the individual, it means hand washing; for the community, social distancing, as has been constantly advertised. And most of all? Constant vigilance.

Arathy Puthillam
