
Jean-Pierre Roy, “The Sultan and the Strange Loop” (2016)
The U word has once again reared its ugly head.
I’m referring of course to uncertainty, which at least a few of us are pleased has returned to occupy a prominent place in scientific discourse. The idea that we simply do not know is swirling around us, haunting pretty much every pronouncement by economists, virologists, epidemiological modelers, and the like.
How many people will contract the novel coronavirus? How many fatalities has the virus caused thus far? And how many people will eventually die because of it? Do face masks work? How many workers have been laid off? How severe will the economic meltdown be in the second quarter and for the rest of the year?
We read and hear lots of answers to those questions but, while individual forecasts and predictions are often presented as uniquely “correct,” they differ from one another and change so often we are forced to admit our knowledge is radically uncertain.
Uncertainty, it seems, erupts every time normalcy is suspended and we are forced to confront the normal workings of scientific practice. It certainly happened during the first Great Depression, when John Maynard Keynes used the idea of radical uncertainty—as against probabilistic risk—to challenge neoclassical economics and its rosy predictions of stable growth and full employment.* And it occurred again during the second Great Depression, when mainstream macroeconomics, especially the so-called dynamic stochastic general equilibrium approach, was criticized for failing to take into account “massive uncertainty,” that is, the impossibility of predicting surprises and situations in which we simply do not know what is going to happen.**

The issue of uncertainty came to the fore again after the election of Donald Trump, which came as a shock to many—even though polls showed a race that was both fairly close and highly uncertain. FiveThirtyEight’s final, pre-election forecast put Hillary Clinton’s chance of winning at 71.4 percent, which elicited quite a few criticisms and attacks, since other models were much more confident about Clinton, variously putting her chances at 92 percent to 99 percent. But, as Nate Silver explained just after the election,
one of the reasons to build a model — perhaps the most important reason — is to measure uncertainty and to account for risk. If polling were perfect, you wouldn’t need to do this. . .There was widespread complacency about Clinton’s chances in a way that wasn’t justified by a careful analysis of the data and the uncertainties surrounding it.
In my view, Silver is one of the best when it comes to admitting the enormous gap between what we claim to know and what we actually know (as I argued back in 2012), an admission that is often undermined, however, by attempts to make the results of models seem more accurate and to conform to expectations.
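Silver’s point about models measuring uncertainty can be illustrated with a back-of-the-envelope simulation. The sketch below is my own, with made-up numbers and a hypothetical win_probability function, not FiveThirtyEight’s actual model: the same poll lead translates into very different win probabilities depending on how much polling error the modeler is willing to admit.

```python
import random

def win_probability(poll_margin, polling_error_sd, n_simulations=100_000):
    """Estimate a candidate's chance of winning by simulating plausible
    'true' margins around an observed poll margin (in percentage points)."""
    wins = 0
    for _ in range(n_simulations):
        # Hypothetical true margin: the observed margin plus a normally
        # distributed error term representing polling uncertainty.
        simulated_margin = poll_margin + random.gauss(0, polling_error_sd)
        if simulated_margin > 0:
            wins += 1
    return wins / n_simulations

# Illustrative numbers only: a 3-point lead with modest assumed polling
# error looks like a near-certainty; the same lead with larger assumed
# error looks much shakier.
print(win_probability(poll_margin=3.0, polling_error_sd=2.0))  # roughly 0.93
print(win_probability(poll_margin=3.0, polling_error_sd=5.0))  # roughly 0.73
```

The point of the sketch is that the headline probability is not a property of the poll alone but of the poll plus an assumption about its error, which is exactly where the uncertainty lives.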
And that’s just as much the case in the social sciences (including, and perhaps especially, economics) and the natural sciences as it is in weather forecasting. Many, perhaps most, practitioners and pundits operate as if science is a single set of truths and not a discourse, with all the strengths and failings that implies. What I’m referring to are all the uncertainties, not to mention indeterminisms, linguistic risks and confusions, referrals and deferences to other knowledges and discourses, and embedded assumptions (e.g., in both the data-gathering and the modeling) that are attendant upon any practice of discursive production and dissemination.
As Siobhan Roberts recently argued,
Science is full of epistemic uncertainty. Circling the unknowns, inching toward truth through argument and experiment is how progress is made. But science is often expected to be a monolithic collection of all the right answers. As a result, some scientists — and the politicians, policymakers and journalists who depend on them — are reluctant to acknowledge the inherent uncertainties, worried that candor undermines credibility.
What that means, in my view, is that science is always subject to discussion and debate within and between contending positions, and therefore decisions need to be made—about facts, concepts, theories, models, and much else—all along the way.
As it turns out, acknowledging that uncertainty, and therefore openly disclosing the range of possible outcomes, does not undermine public trust in scientific facts and predictions. That was the conclusion of a study recently published in the Proceedings of the National Academy of Sciences.
In the “posttruth” era where facts are increasingly contested, a common assumption is that communicating uncertainty will reduce public trust. . .Results show that whereas people do perceive greater uncertainty when it is communicated, we observed only a small decrease in trust in numbers and trustworthiness of the source, and mostly for verbal uncertainty communication. These results could help reassure all communicators of facts and science that they can be more open and transparent about the limits of human knowledge.
Even if communicating uncertainty does decrease people’s trust in and perceived reliability of scientific facts, including numbers, that in my view is not a bad thing. It serves to challenge the usual presumption (especially these days, among liberals, progressives, and others who embrace a Superman theory of truth) that everyone can and should rely on science to make the key decisions.*** The alternative is to admit and accept that decision-making, under uncertainty, is both internal and external to scientific practice. The implication, as I see it, is that the production and communication of scientific facts, as well as their subsequent use by other scientists and the general public, is a contested terrain, full of uncertainty.
Last year, even before the coronavirus pandemic, Scientific American [unfortunately, behind a paywall] published a special issue titled “Truth, Lies, and Uncertainty.” The symposium covers a wide range of topics, from medicine and mathematics to statistics and paleobiology. For those of us in economics, perhaps the most relevant is the article on physics (“Virtually Reality,” by George Musser).
Musser begins by noting that “physics seems to be one of the only domains of human life where truth is clear-cut.”
The laws of physics describe hard reality. They are grounded in mathematical rigor and experimental proof. They give answers, not endless muddle. There is not one physics for you and one physics for me but a single physics for everyone and everywhere.
Or so it appears.
In fact, Musser explains, practicing physicists operate with considerable doubt and uncertainty, on everything from fundamental theories (such as quantum mechanics and string theory) to bench science (“Is a wire broken? Is the code buggy? Is the measurement a statistical fluke?”).
Consider, for example, quantum theory: if you
take quantum theory to be a representation of the world, you are led to think of it as a theory of co-existing alternative realities. Such multiple worlds or parallel universes also seem to be a consequence of cosmological theories: the same processes that gave rise to our universe should beget others as well. Additional parallel universes could exist in higher dimensions of space beyond our view. Those universes are populated with variations on our own universe. There is not a single definite reality.
Although theories that predict a multiverse are entirely objective—no observers or observer-dependent quantities appear in the basic equations—they do not eliminate the observer’s role but merely relocate it. They say that our view of reality is heavily filtered, and we have to take that into account when applying the theory. If we do not see a photon do two contradictory things at once, it does not mean the photon is not doing both. It might just mean we get to see only one of them. Likewise, in cosmology, our mere existence creates a bias in our observations. We necessarily live in a universe that can support human life, so our measurements of the cosmos might not be fully representative.
Musser’s view is that accepting uncertainty in physics actually leads to a better scientific practice, as long as physicists themselves are the ones who attempt to point out problems with their own ideas.
So, if physicists are willing to live with—and even to celebrate—uncertain knowledge, and even if the general public does lose a bit of trust when a degree of uncertainty is revealed, then it’s time for the rest of us (perhaps especially economists) to relinquish the idea of certain scientific knowledge.
Then, as Maggie Koerth recently explained in relation to the coronavirus pandemic, instead of waiting around for “absolute, unequivocal facts” to decide our fate, we can get on with the task of making the “big, serious decisions” that currently face us.
*Although, as I explained back in 2011, the idea of fundamental uncertainty was first introduced into mainstream economic discourse by Frank Knight.
**And later central bankers (such as the Bank of England’s Andy Haldane) discovered that admitting uncertainty might actually “enhance understanding and therefore authority.”
***The irony is that “the Left” used to be skeptical about and critical of much of modern science—from phrenology, craniometry, and social Darwinism to the atom bomb, sociobiology, and evolutionary psychology.