Truth in the Time of Coronavirus

Jonathan Warden
5 min read · Mar 20, 2020
Photo by Brian McGowan via Unsplash

Many of us struggle to separate information from misinformation on social media, especially over the past weeks and months as we seek facts regarding COVID-19. The platforms themselves do not help much. It seems increasingly clear that the algorithms used to determine which posts and tweets show up at the top of your feed, having been optimized for engagement, have inadvertently been optimized for misinformation.

But social media is still useful, because accurate information does spread too. And it spreads fast. But it’s up to us to judge for ourselves which information is true and which isn’t, and sometimes we get it wrong.

I think the platforms should do a better job of helping us find truth. And I don’t mean censorship. I mean algorithms that decide which posts go at the top of our feeds based not on what we are most likely to click on, but on what is most likely to be true.

And I think this should not be so hard to do.

In late February and early March, my social media feeds included a lot of posts claiming that face masks are ineffective against COVID-19 because they don’t filter particles as small as a virus. Others claimed that properly fitted N95 masks were effective, but not surgical masks. These claims came from both individuals and mainstream media: a Forbes article stated “from a ‘protecting you’ standpoint, wearing a surgical mask would have about the same effect as covering your face with pastrami.”

But after a few days I started seeing posts stating that masks are effective because they stop the water droplets that carry the virus — even surgical and home-made masks. This new information caught on quickly. The Guardian picked it up in a fact-check article. It was “masks don’t work, silly people” one day and “actually they do, silly people” the next.

The antidote to misinformation is corrective information. When somebody posts something that is false or misleading, often somebody else posts a response explaining why it is false. As information circulates and cyber-society absorbs each new piece of information or each new idea, the argument gradually advances. Certainly some people tend to resist information that does not fit their world view. But facts about COVID-19 are not yet so polarized that people won’t change their minds in light of new information (UPDATE: this has definitely changed since April). Most people honestly don’t know what to believe and are just looking to social media for information.

Optimizing for Engagement

Unfortunately the platforms don’t always encourage the spread of corrective information because they optimize for engagement, not truth. And misinformation often gets more engagement.

For a while, I experimented with calling out friends on Facebook when they posted something that I thought was objectively false. I have a couple of very active right-wing friends and a couple of very active left-wing friends, and all four are often guilty of posting information that supports their political views without taking enough time to verify that it is actually true. So I commented on their posts often, trying to be polite, objective and helpful, and a few times some of them even corrected their posts based on my feedback.

But Facebook soon learned that I engage a lot with these friends. And so now my Facebook feed is inundated with posts from the four people I know that are most likely to spread untruths. By optimizing for engagement, Facebook has inadvertently optimized for misinformation.

Optimizing for Truth

Why can’t a social media platform optimize my feed for credibility, instead of engagement? Many of the most brilliant minds in the world of data science and machine learning are working on the problem of estimating what you are likely to click on. What you are likely to buy. What you are likely to share. What you are likely to believe. There is no doubt they could put some of that brain power into estimating what is likely to be true.

Facebook is making some effort to modify its algorithms to stop doing such a good job of promoting highly engaging but questionable content. But addressing only the most egregious cases of “fake news” is not enough.

There are a number of ways you could approach optimizing an algorithm for credibility. I think one potentially powerful heuristic for identifying credible posts is simply to identify the posts that people are willing to defend.

Many of the people who posted that masks don’t filter out tiny virus particles believed it because it sounded plausible at the time. Once they learned that masks stop the water droplets the virus travels in, many changed their minds. Some probably deleted their original posts. Some got on board the “actually masks work, silly people” train and started posting the new corrective information. Such weak support for a post in the face of a counter-argument is an indicator that the post may contain misinformation.

On the other hand, maybe there will be a counter-counter-argument, such as a claim that the water-droplet theory was debunked and studies show the masks are worthless (I am making that up). If there were strong support for this counter-counter-argument, that could be interpreted as a signal that people still find the original post credible.

I think it should not be hard for social platforms to identify these argument, counter-argument, counter-counter-argument threads, and then design a metric that estimates the strength of an argument based on which posts people are likely to defend and which they aren’t. In fact, there is a whole academic subfield, argument and computation, devoted to modeling and evaluating arguments in this way. Such metrics would of course be far from perfect — especially if the topic is highly controversial — but they may be a good heuristic in a system designed to optimize for truth, and an improvement over optimizing for engagement.
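
To make the idea concrete, here is a minimal sketch of how such a score might work. Everything in it is invented for illustration: the Post structure, the before/after support counts, and the recursive discounting formula are assumptions on my part, not a description of any real platform’s ranking system or of a specific metric from the argumentation literature.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    """A post or reply in an argument thread. Every field is illustrative."""
    text: str
    support_before_replies: int   # likes it attracted before any counter-argument appeared
    support_after_replies: int    # likes it kept attracting once counter-arguments were visible
    replies: List["Post"] = field(default_factory=list)  # counter-arguments (and their rebuttals)


def argument_strength(post: Post) -> float:
    """Toy 'willingness to defend' score in [0, 1].

    A post whose support evaporates once a counter-argument appears scores low;
    a post people keep supporting, or whose counter-arguments are themselves
    rebutted by a well-defended counter-counter-argument, scores higher.
    """
    total = post.support_before_replies + post.support_after_replies
    own_support = 0.5 if total == 0 else post.support_after_replies / total

    if not post.replies:
        return own_support

    # The strongest counter-argument discounts the post. Because that strength is
    # itself computed recursively, a well-defended counter-counter-argument weakens
    # the counter-argument and restores some of the original post's score.
    strongest_attack = max(argument_strength(reply) for reply in post.replies)
    return own_support * (1.0 - strongest_attack)


# The mask thread from this article, with made-up numbers.
rebuttal = Post("Masks block the droplets the virus travels in.", 10, 90)
original = Post("Masks don't work: the pores are bigger than a virus.", 100, 5, replies=[rebuttal])

print(f"original claim:   {argument_strength(original):.2f}")  # low: support collapsed after the rebuttal
print(f"counter-argument: {argument_strength(rebuttal):.2f}")  # high: support held up
```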

Personally, I don’t want the social platforms to show me a post claiming that masks don’t filter out tiny virus particles just because a lot of people liked it. I would much prefer to see things that a lot of people still liked even after seeing the counter-arguments. I don’t want them to boost posts from the people I am most likely to engage with. I want them to boost posts from the people and organizations who are the most credible.
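
Given a score like that, re-ranking the feed is the easy part. Continuing the same made-up sketch (and ignoring all the other signals a real ranking system would weigh), ordering by credibility instead of raw engagement is a one-liner:

```python
def rank_feed(posts: List[Post]) -> List[Post]:
    """Order a feed by the toy credibility score instead of by engagement."""
    return sorted(posts, key=argument_strength, reverse=True)


# The poorly defended "masks don't work" post sinks; the well-defended rebuttal rises.
for post in rank_feed([original, rebuttal]):
    print(f"{argument_strength(post):.2f}  {post.text}")
```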


It can be hard to know what is true. But when society is at its best, we successfully aggregate our collective intelligence and arrive at the truth through a combination of reason, reputation, and argument. I think it’s time for social platforms to step up and start supporting this natural process, and help us put our heads together to find out what’s true.

originally published on jonathanwarden.com
