Combatting Misinformation

This article was written by Joe DosSantos and originally appeared on the Qlik Blog.

In March this year, I wrote a blog post titled “The Power of Misinformation,” in which I highlighted serious societal threats posed by misinformation spread by social media tricksters. I also pointed to research that might explain why we fall for it. We in the analytics community have a responsibility to combat these challenges by promoting data literacy skills in our private and professional worlds. Malcolm Gladwell delivered an impassioned keynote address at QlikWorld Online 2021, galvanizing us to use our knowledge and skills to stem the tide of falsehoods being propagated across digital platforms.

First, I want to explore a central theme in both my previous blog post and in Gladwell’s keynote: our ability to detect bias within information and to recognize outright misinformation – and our inclination either to seek out and verify claims or to simply accept as true what we are presented with. Sam Wineburg, PhD, the Margaret Jacks Professor of Education at Stanford University, has done some fascinating research into the psychology of critical thinking.

As part of an experiment, he gave professional fact-checkers (along with professors and students at Stanford) a set of tasks designed to test a person’s ability to identify bias and misinformation in a series of articles. The most adept at the tasks? The fact-checkers. They quickly sidestepped the credibility traps laid throughout the websites, going to other sites to see how the wider internet community regarded the sites they were investigating. All of the other participants, he concluded, were “essentially wearing blinders.”

Thus, it is clear that evaluating the veracity of websites and the stories they tell is a skill that can be taught, though even the most educated may not know how to do it. In fact, Wineburg is using his research on school- and university-aged students to develop new curricula to improve digital literacy skills, with remarkable results.

Second, there is a growing fascination with a concept called “pre-bunking.” This piece from the Harvard Kennedy School explains how exposing people to the processes that misinformation campaigns use to change societal perceptions can make them less susceptible to immediately believing the content they encounter. This excellent Guardian piece, “Why Smart People Are More Likely to Believe Fake News,” highlighted similar research from John Cook of George Mason University and Stephan Lewandowsky of the University of Bristol. They call this approach “inoculation” and describe how showing people the tactics the tobacco industry used to deny links to lung cancer made them less likely to believe false claims about global warming.

Beyond encouraging individual data literacy, it is increasingly important for us to influence social media platforms to help us combat misinformation. And, while it may be tempting to embrace policies that censor people or groups, Joel Finkelstein of the Network Contagion Research Institute argues that this type of censorship is counterproductive and actually reinforces conspiracy theories. If you are interested in Joel’s thoughts, take a listen to this April interview with him on the Data Brilliant Podcast.

So, if censorship is not the answer, let’s consider some alternative approaches for social media, starting with two fascinating statistics outlined in this still-relevant 2018 Time piece. First, roughly six in 10 misinformation tweets are retweeted without the retweeter actually reading the original content. Second, fake news travels up to six times faster than true stories. If we truly believe that people should be responsible data consumers, social media giants could require that a person read an entire article before hitting a “share” button. That pause would, at least, allow the critical thinking skills I’ve outlined above to kick in. This might sound similar to an idea proposed by Michael Lewis in his famous book “Flash Boys: A Wall Street Revolt,” in which Lewis argues that, in order to combat hedge fund market manipulation, an exchange should slow down market information and trading. Although this approach seems at first at odds with liquid markets, Lewis posits that slowing things down actually makes things fair. We, too, should seek ways to slow people down in order to improve the quality of our social media data.

And finally, as a last resort, social media platforms could turn to shame. Much in the way that drivers lose points from their licenses when they are in car accidents, so, too, could a person’s profile be given demerits for sharing misinformation – sites like Quora and Reddit already implement systems whereby the quality of the information shared is scrutinized to some extent. I envision shamed users having their subsequent posts pushed further down their social network’s feed – or off of it altogether. The ability to effectively label sites as purveyors of falsehoods could dissuade others from seeing biased or fake content and/or believing what they read.
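To make the demerit idea concrete, here is a minimal, purely hypothetical sketch of how such a system might tie strikes to feed visibility. Every name in it (`UserProfile`, `add_demerit`, `rank_score`, the halving rule) is invented for illustration and does not reflect any real platform’s implementation:

```python
# Hypothetical sketch: demerits on a profile reduce how prominently
# that user's posts surface in a feed. All names and the specific
# penalty (halving per demerit) are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class UserProfile:
    name: str
    demerits: int = 0  # accumulated strikes for sharing flagged content

    def add_demerit(self, points: int = 1) -> None:
        """Record a strike when shared content is flagged as misinformation."""
        self.demerits += points


def rank_score(base_engagement: float, profile: UserProfile) -> float:
    """Down-rank repeat offenders: each demerit halves post visibility."""
    return base_engagement / (2 ** profile.demerits)


user = UserProfile("example_user")
clean_score = rank_score(100.0, user)  # no demerits: full visibility
user.add_demerit()
user.add_demerit()
penalized = rank_score(100.0, user)    # two demerits: quarter visibility
```

The design choice here mirrors the intent of the paragraph above: rather than removing an account outright, repeated sharing of misinformation quietly pushes a user’s posts further down the feed, which is less like censorship and more like a reputational cost.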

We are all in this together: governments, professionals and technology giants. Organizations like the Network Contagion Research Institute are at the forefront of bringing these entities together to fight misinformation with the seriousness and urgency it deserves. We must start developing the skills to overcome our cognitive biases and work in concert with governmental and technology policies to drive misinformation out of our social media feeds. Like most people, I would like to simultaneously respect the freedoms of my neighbors and to trust what I see. Let’s get to work.