It was likely, as it had been a century ago during a prior pandemic, that significant numbers of Americans would argue falsely that there was no pandemic (‘just like the regular flu’), that if there were a pandemic it would go away (‘like a miracle’), that anyone talking about illness was merely fearful (as though discussions of injuries were anything other than rational assessments), that all that mattered was outlook (as though ‘hope over fear’ were anything more than a platitude), or that science itself somehow supported their views.
Simply listing tables of statistics – which were often, in truth, mortuary lists – was never going to satisfy those determined to rationalize a national tragedy. It was naive to the point of foolishness to assume one could reason with those in the grip of motivated reasoning. Those on the other side (mostly Trumpists, but some others, too) were, of course, going to claim that they had science on their side. They could convince themselves of this, if no one else, because they conflated collections of data, or even a single datum, with science as a method by which data are synthesized and analyzed.
These skeptics are not less intelligent, but they are less reasonable, given their unacknowledged, partial, and biased approaches.
Note well: (1) to avoid incomplete assessments, one should wait until the pandemic passes to judge local institutional performance; (2) one should not make the mistake of pretending that the skeptics’ views are as legitimate as serious professional analyses; and (3) devoting time to every individual skeptic’s or Trumpist’s view is a waste; the focus belongs on political leaders (even down to the local level) who espouse such views.
Rebecca Onion writes of these deficiencies in COVID Skeptics Don’t Just Need More Critical Thinking (‘Without a shared approach to scientific expertise, “trusting the data” won’t lead us to the same conclusions’). Onion’s whole essay is worth reading; in it, she interviews Crystal Lee (a leader of the research group and a graduate student in MIT’s Program in Science, Technology, and Society) on how the skeptics’ approach is incomplete.
[RO]: My question is, if they are using these same tools, using the same data sets, and asking the same questions as the scientists who create visualizations for the government, where are the points of departure? Where do the roads diverge in the woods?
[CL]: The biggest point of departure is the focus on different metrics—on deaths, rather than cases. They focus on a very small slice of the data. And even then, they contest metrics in ways I think are fundamentally misleading. They’ll say, you know, “Houston is reporting a lot of deaths, but the people there are measuring ‘deaths with COVID,’ in addition to ‘deaths by COVID’ ”—that distinction.
[RO]: Yes, that’s a big one—but, of course, we know that many times the person died from a condition caused by COVID, and that’s what’s being reported.
[CL]: Right. And another major thing is that people feel the data doesn’t match their lived experience. So we know a lot of health departments now have websites and data portals and such, but especially in smaller communities, the statistics they have are from the state, and there’s some unevenness between the city or town level and the state level. And so the state might be doing really badly, and the numbers are scary, but the rate might be lower in a specific town. So they’ll say, “Look, we don’t know anybody who has it, and our hospitals are fine.” There’s a disconnect underlying the skepticism, and it leads them to reapproach the data—to reanalyze and re-present it in a way that makes more sense to them.