Posts Tagged ‘certainty’


Mark Tansey, “Coastline Measure” (1987)

The pollsters got it wrong again, just as they did with the Brexit vote and the Colombia peace vote. In each case, they incorrectly predicted one side would win—Hillary Clinton, Remain, and yes—and many of us were taken in by the apparent certainty of the results.

I certainly was. In each case, I told family members, friends, and acquaintances it was quite possible the polls were wrong. But still, as the day approached, I found myself believing the “experts.”

It still seems, when it comes to polling, we have a great deal of difficulty with uncertainty:

Berwood Yost of Franklin & Marshall College said he wants to see polling get more comfortable with uncertainty. “The incentives now favor offering a single number that looks similar to other polls instead of really trying to report on the many possible campaign elements that could affect the outcome,” Yost said. “Certainty is rewarded, it seems.”

But election results are not the only area where uncertainty remains a problematic issue. Dani Rodrik thinks mainstream economists would do a better job defending the status quo if they acknowledged their uncertainty about the effects of globalization.

This reluctance to be honest about trade has cost economists their credibility with the public. Worse still, it has fed their opponents’ narrative. Economists’ failure to provide the full picture on trade, with all of the necessary distinctions and caveats, has made it easier to tar trade, often wrongly, with all sorts of ill effects. . .

In short, had economists gone public with the caveats, uncertainties, and skepticism of the seminar room, they might have become better defenders of the world economy.

To be fair, both groups—pollsters and mainstream economists—acknowledge the existence of uncertainty. Pollsters (and especially poll-based modelers, like one of the best, Nate Silver, as I’ve discussed here and here) always say they’re recognizing and capturing uncertainty, for example, in the “error term.”


Even Silver, whose model included a much higher probability of a Donald Trump victory than most others, expressed both defensiveness about and confidence in his forecast:

Despite what you might think, we haven’t been trying to scare anyone with these updates. The goal of a probabilistic model is not to provide deterministic predictions (“Clinton will win Wisconsin”) but instead to provide an assessment of probabilities and risks. In 2012, the risks to Obama were lower than was commonly acknowledged, because of the low number of undecided voters and his unusually robust polling in swing states. In 2016, just the opposite is true: There are lots of undecideds, and Clinton’s polling leads are somewhat thin in swing states. Nonetheless, Clinton is probably going to win, and she could win by a big margin.


As for the mainstream economists, while they may acknowledge exceptions to the rule that “everyone benefits” from free markets and international trade in some of their models and seminar discussions, they acknowledge no uncertainty whatsoever when it comes to celebrating the current economic system in their textbooks and public pronouncements.

So, what’s the alternative? They (and we) need to find better ways of discussing and possibly “modeling” uncertainty. Since the margins of error, different probabilities, and exceptions to the rule are ways of hedging their bets anyway, why not just discuss the range of possible outcomes and all of what is included and excluded, said and unsaid, measurable and unmeasurable, and so forth?

The election pollsters and statisticians may claim the public demands a single projection, prediction, or forecast. By the same token, the mainstream economists are no doubt afraid of letting the barbarian critics through the gates. In both cases, the effect is to narrow the range of relevant factors and the likelihood of outcomes.

One alternative is to open up the models and develop a more robust language to talk about fundamental uncertainty. “We simply don’t know what’s going to happen.” In both cases, that would mean presenting the full range of possible outcomes (including the possibility that there can be still other possibilities, which haven’t been considered) and discussing the biases built into the models themselves (based on the assumptions that have been used to construct them). Instead of the pseudo-rigor associated with deterministic predictions, we’d have a real rigor predicated on uncertainty, including the uncertainty of the modelers themselves.
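What that kind of openness might look like can be sketched in a few lines of code. This is a toy Monte Carlo simulation, with made-up numbers for the polling lead and the polling error, not anyone’s actual forecasting model: instead of a single deterministic call, it reports how often each outcome occurs across many simulated elections.

```python
import random

def simulate_election(lead=0.03, polling_error_sd=0.04, n_sims=10_000, seed=42):
    """Toy probabilistic forecast: the polls show a given lead, but the true
    margin is uncertain, so we draw a polling error from a normal distribution
    in each simulated election. Returns the leader's share of simulated wins.
    All parameter values here are illustrative, not estimates."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        true_margin = lead + rng.gauss(0, polling_error_sd)
        if true_margin > 0:
            wins += 1
    return wins / n_sims

p = simulate_election()
print(f"Leader wins in {p:.0%} of simulations and loses in {1 - p:.0%}.")
```

The point of reporting the full distribution rather than the single most likely outcome is exactly the one made above: a candidate with a modest lead still loses in a substantial fraction of the simulated worlds, and saying so out loud is what honest uncertainty would sound like.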

Admitting that they (and therefore we) simply don’t know would be a start.


Leicester City was not going to win the Premiership—not by a long shot. Nor was the Republican nomination supposed to be handed to Donald Trump. And Bernie Sanders, well, there was no chance he was going to give Hillary Clinton a serious run for her money (and machine) in the Democratic primaries.

And yet here we are.

Leicester City Football Club, as anyone who has even a fleeting interest in sports (or reads one or another major newspaper or news outlet) knows, were just crowned champions of the Premiership, the highest tier of British football, after starting the season at 5000-1 odds. There really is no parallel in the world of sports—any sport, in any country. (By way of comparison, Donerail, with odds of 91-1 in 1913, is the longest odds winner in Kentucky Derby history.) And the bookies are now being forced to pay up.
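For a sense of just how unlikely the bookmakers considered this, fractional odds-against can be converted to an implied probability with simple arithmetic (this ignores the bookmaker’s margin, so it is a rough illustration):

```python
def implied_probability(odds_against):
    """Convert fractional odds-against (e.g. 5000 for a 5000-1 shot) to the
    implied probability of the event: 1 / (odds + 1)."""
    return 1 / (odds_against + 1)

# Leicester City at 5000-1 versus the longest-odds Kentucky Derby winner at 91-1
print(f"Leicester at 5000-1: {implied_probability(5000):.4%}")
print(f"Donerail at 91-1: {implied_probability(91):.2%}")
```

On those numbers, the bookies were pricing Leicester’s title at roughly one chance in five thousand, about fifty times longer than Donerail’s famous upset.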


Similarly, Donald Trump was not supposed to win the Republican nomination. Instead, it was going to go to Jeb Bush and, if he failed, to Marco Rubio. (And certainly Ted Cruz, the candidate most reviled by other members of the GOP, was not supposed to be there at the end.)


Finally, Bernie Sanders’s campaign for the Democratic nomination was written off almost as soon as it was launched. And yet here he is—winning the Indiana contest by 5 points (when he was predicted to lose by the same margin) and accumulating enough pledged delegates to bring him within a couple of hundred of the presumptive nominee.

What’s going on?

In all three cases, the presumption was that the “system” would prevent such an unlikely occurrence, and that the pundits and prognosticators “knew” from early on the likely outcome.


So, for example, the winner of the Premiership was supposed to come from one of the perennial top four (Manchester United, Chelsea, Arsenal, and Manchester City)—not a club that were only promoted from the second division of British football in 2014 and last April were battling relegation (they finished the season 14th).

Pretty much the same is true in the political arena: neither Trump nor Sanders was taken particularly seriously at the start, and along the way the prevailing common sense was that their campaigns would simply implode or wither away. The idea was that the Republican and Democratic parties and nominating contests were structured so that their preferred nominees would inexorably come out on top.

There are, I think, two lessons to take away from these bolts from the blue. First, the “system,” however defined, is much less complete and determined than people usually think. There are many fissures and spaces in such systems that make what are seemingly unlikely outcomes real possibilities. Second, our presumably certain “knowledges” are exactly that, knowledges, which are constructed—in the face of radical uncertainty—out of theories, presumptions, blind spots, and much else. The fact is, we simply don’t know, and no amount of probabilistic certainty can overcome that epistemological gap.

So—surprise, surprise—Leicester City and Trump won, while Sanders has put up a much more formidable challenge than anyone expected from a socialist presidential candidate in the United States.


One of the most studied issues in contemporary economics is the effect of an increase in the minimum wage. But here we have a panel of so-called experts composed of mainstream economists who are uncertain—about whether employment will decrease or output will increase.

Ordinarily, I would applaud a healthy dose of uncertainty among economists, especially mainstream economists.


But, of course, mainstream economists show themselves to be quite certain about things other than the minimum wage, such as the idea that the median American household, notwithstanding the small increase in household income, is actually much better off.

Just sayin’. . .


Until recently, we were certain what would happen with an increase in the minimum wage—and that would be the reason to oppose any and all such attempts. Now, it’s a guessing game—and that uncertainty about its possible effects has become reason enough to oppose increasing the minimum wage.

What the hell is going on?


First, the certainty: neoclassical economists confidently asserted that the minimum wage caused unemployment (because it meant, at a wage above the equilibrium wage, the quantity supplied of labor would be greater than the quantity demanded). Therefore, any increase in the minimum wage would cause more unemployment and, despite the best intentions of people who wanted to raise the minimum wage, it would actually hurt the poor, since many would lose their jobs.

But, of course, theoretically, the neoclassical labor-market model was missing all kinds of other effects, from wage efficiencies (e.g., higher wages might reduce labor turnover and increase productivity) to market spillovers (e.g., higher wages might lead to more spending, which would in turn increase the demand for labor). If you take those into account, the effects of increasing the minimum wage became more uncertain: it might or might not lead to some workers losing their jobs but those same workers might get jobs elsewhere as economic activity picked up precisely because workers who kept their jobs might be more productive and spend more of their higher earnings.
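The logic of those offsetting channels can be put into a toy calculation. To be clear, this is a sketch with invented elasticities, not an empirical estimate of any actual minimum-wage increase: the textbook demand effect reduces employment, while the hypothetical productivity and spending channels push the other way.

```python
def employment_change(wage_increase, demand_elasticity=-0.1,
                      productivity_offset=0.05, spending_offset=0.05):
    """Toy model of the net employment effect of a minimum-wage increase.
    wage_increase is a proportional change (0.10 = a 10% raise).
    The textbook channel (demand_elasticity) cuts employment; the
    productivity and spending channels partly or wholly offset it.
    All parameter values are illustrative assumptions."""
    textbook_effect = demand_elasticity * wage_increase
    offsetting_effects = (productivity_offset + spending_offset) * wage_increase
    return textbook_effect + offsetting_effects  # net proportional change

# A 10% raise: the naive job loss and the offsets roughly cancel out
print(f"Net employment change: {employment_change(0.10):+.2%}")
```

With these made-up parameters the channels roughly cancel, which is the shape of the empirical finding discussed below: small employment effects either way, and higher wages for those who keep their jobs. The uncertainty lies precisely in not knowing the true size of each channel.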

And that’s precisely what the new empirical studies have concluded: some have found a little less employment, others a bit more. In the end, the employment effects are pretty much a wash—and workers are receiving higher wages.

But that’s mostly for small increases in the minimum wage. What if the increase were larger—say, from $7.25 to $10, $12, or $15 an hour?

Well, we just don’t know. All we can do is guess what the effects might be at the local, state, or national level. But conservatives (like David Brooks, big surprise!) are seizing on that uncertainty to oppose increasing the minimum wage.

And that’s what I find interesting: uncertainty, which was at one time (e.g., for conservatives like economist Frank Knight) the spur to action, is now taken to be the reason for inaction. And those who oppose increasing the minimum wage are now choosing the certainty of further misery for minimum-wage workers over the uncertainty of attempting to improve their lot.


They want less of a guessing game?

Then, let’s make the effects of raising the minimum wage more certain. Why not increase government expenditures in areas where raising the minimum wage represents a dramatic increase for workers? Or mandate that employers can’t fire any of the low-wage workers once the minimum wage is increased? Or, if an employer chooses to close an enterprise rather than pay workers more, hand the enterprise over to the workers themselves? Any or all of those measures would increase the certainty of seeing positive effects for the working poor of raising the minimum wage.

But then we’re talking about a different game—of capital versus labor, of profits versus wages. And we know, with a high degree of certainty, the choices neoclassical economists and conservative pundits make in that game.