Posts Tagged ‘forecasting’


A week ago, I noted the pushback against liberal mainstream economists’ attacks on Bernie Sanders’s plans and Gerald Friedman’s analysis of those plans.

The first set of attacks, as Bill Black explained, plumbed “new depths of moral obtuseness, arrogance, and intellectual dishonesty.”

More recently, Christina D. Romer and David H. Romer (pdf) have responded with a more detailed critique of Friedman’s calculations, which has led to additional gloating by Paul Krugman and more publicity for only one side of the debate in the pages of the New York Times.

But, fortunately, that didn’t end the debate.

James K. Galbraith reminded us that “all forecasting models embody theoretical views.”

All involve making assumptions about the shape of the world, and about those features, which can, and cannot, safely be neglected. This is true of the models the Romers favor, as well as of Professor Friedman’s, as it would be true of mine. So each model deserves to be scrutinized.

In the case of the models favored by the Romers, we have the experience of forecasting from the outset of the Great Financial Crisis, which was marked by a famous exercise in early 2009 known as the Romer-Bernstein forecast. According to this forecast (a) the economy would have recovered on its own, in full and with no assistance from government, by 2014, (b) the only effect of the entire stimulus package would be to accelerate the date of full recovery by about six months, and (c) by 2016, the economy would actually be performing worse than if there had been no stimulus at all, since the greater “burden” of the government debt would push up interest rates and depress business investment relative to the full employment level.

It’s fair to say that this forecast was not borne out: the economy did not fully recover even with the ARRA, and there is no sign of “crowding out,” even now. The idea that the economy is now worse off than it would have been without any Obama program is, to most people, I imagine, quite strange. These facts should prompt a careful look at the modeling strategy that the Romers espouse.

Mark Thoma, for his part, does not believe that “we can sustain 5% growth over the next eight years.” But, he argues, “In the short-run—over the next two to four years—the situation is different.”

I’m worried people will accept without question that the gap is small due to the pushback against Friedman’s analysis of the Sanders plan, and that will justify policy passivity when we need just the opposite. So let’s stop arguing, put the policies we need in place, and push as hard as we can to increase employment until inflation reveals that we have, in fact, hit capacity constraints. Maybe that happens quickly, but maybe not, and we owe it to those who remain unemployed, have dropped out of the labor force but would return, or took a job with lousy wages to try. People who had nothing to do with causing the recession have paid the costs for it, and if we experience a short bout of above-target inflation I can live with that. We’ve been wrong about this before in the 1990s, and we may very well be wrong about this again.

Finally, there’s a much more mainstream supporter of the idea that it’s not technologically impossible to imagine “materially super-normal rates of growth in the coming four years”: former Minneapolis Federal Reserve President and University of Rochester economist Narayana Kocherlakota. His view is that “given current economic circumstances, demand-based stimulus is likely to be more effective than supply-based stimulus.”

Why? Because, as Kocherlakota explained elsewhere, labor’s share remains extremely low by historical standards. So, faster growth would serve to push the share of income going to labor back to its historical (pre-1990) range and thus boost economic growth above the so-called consensus among economists.

And that’s exactly the basis of Bernie Sanders’s economic plans and Friedman’s analysis: raising labor’s share via redistributive measures is a spur to faster economic growth, and encouraging unemployed and underemployed workers to take decent, better-paying jobs will sustain those faster rates of economic growth.

As I’ve written before, that’s not so much a forecast of what will happen as a mirror that demonstrates how diminished are the expectations created by contemporary capitalism and the policies that continue to be put forward by liberal mainstream economists.


The discussion these days seems to be all about foxes and hedgehogs.

Those are the terms Nate Silver borrows from a phrase originally attributed to the Greek poet Archilochus to define his new journalistic project—the fox who knows many things as against the hedgehog who knows one big thing. (But see my critique here.)

The pair of animals also turns up in James Surowiecki’s review of Fortune Tellers: The Story of America’s First Economic Forecasters by Walter A. Friedman.

Philip Tetlock, a professor of psychology and management at Penn who conducted a 20-year study asking almost 300 experts to forecast political events, has shown that while experts in the political realm are not especially good at forecasting the future, those who did best were, in the terminology he borrowed from Isaiah Berlin, foxes as opposed to hedgehogs—that is, the best forecasters were those who knew lots of little things rather than one big thing. Yet forecasters are more likely to be hedgehogs, if only because it’s easier to get famous when you’re preaching a simple gospel. And hedgehogs are not good, in general, at adapting to changed conditions—think of those bearish commentators who correctly predicted the bursting of the housing bubble but then failed to see that the stock market was going to make a healthy recovery.

The fact is, the two periods that produced more sources of information for economic forecasting preceded the two greatest crises of capitalism we’ve witnessed during the past 100 years—after which new ideas and movements erupted that provided concrete alternatives to capitalism. It’s not that those movements had more information. They honestly used the data at hand to show what was fundamentally wrong with existing economic arrangements and, instead of sticking with tired formulas and failed policies, dared to imagine a world beyond capitalism.

Someday, then, we too will be able to exclaim, “Well burrowed, old mole!”


This semester, I’m teaching a course on Marxian economic theory. It’s been a real eye-opener for the students, who seem a bit surprised to learn that there is such a wholesale critique of the mainstream economics they’ve been learning. Some are even intrigued by this new way of thinking about the economy, which led one of them to pose the following question: did Marxists predict the crisis better or more accurately than mainstream economists?

Well, I explained, that’s setting the bar pretty low, since mainstream economists simply failed to predict the crash of 2007-08. But, I added, Marxists did no better. And that’s because economic forecasting is like selling snake oil: lots of folks earn lots of money promising the ability to predict economic events, but all they’re selling is the promise, not the actual ability, to get the forecasts right. (And, of course, they pay nothing for their failures, since they’ve left town long before people discover the magic elixir doesn’t work.)

And that’s what has happened to the students: they’ve been told mainstream economics is superior to all other approaches, that it’s a “real science,” because of its predictive power. And they’re willing to jump ship, as it were, if an alternative theory offers more predictive power.

The problem is, as Sir David Hendry explains, forecasting only works if the future behaves the same as the past, if it follows the same rules and is drawn from the same probability distribution. If it doesn’t, then all bets are off. What that means for me (and for Chris Dillow) is that Marxists are no better at predicting the future than mainstream economists. In fact, economic forecasting, of whatever sort, is a false promise.
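To see what’s at stake in Hendry’s point, here is a minimal simulation sketch (the numbers are invented for illustration, not taken from any actual series): a forecast calibrated on past data looks accurate as long as the future is drawn from the same distribution, and fails badly after a structural break.

```python
# A minimal simulation of Hendry's point (all numbers invented for
# illustration): a forecast calibrated on the past performs well only
# while the future is drawn from the same distribution as the past.
import numpy as np

rng = np.random.default_rng(42)

# "Past": growth fluctuates around 3% with modest noise.
past = rng.normal(loc=3.0, scale=0.5, size=200)

# The naive forecast: the future will look like the historical average.
forecast = past.mean()

# Case 1: the future follows the same distribution as the past.
stable_future = rng.normal(loc=3.0, scale=0.5, size=50)

# Case 2: a structural break -- a crisis shifts the distribution itself.
crisis_future = rng.normal(loc=-2.0, scale=2.0, size=50)

for label, future in [("stable", stable_future), ("crisis", crisis_future)]:
    rmse = np.sqrt(np.mean((future - forecast) ** 2))
    print(f"{label:>6} regime: forecast {forecast:.2f}, RMSE {rmse:.2f}")
```

The forecast never changes; the world does, and the large error in the second regime comes from that shift, not from any defect in the fitting.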

But then I went on in my response to the student’s question: what really distinguishes different groups of economists is whether or not they include the possibility of a crisis in their theories and models—and what they would suggest doing once such a crisis occurred (including measures to prevent future crises). And there the difference between mainstream and Marxian economics couldn’t be starker: mainstream economics simply doesn’t include the possibility of crises (except as an exogenous event), whereas Marxists start from the proposition that instability is inherent (and therefore an endogenous tendency) in an economy based on the capitalist mode of production. That’s one fundamental difference between them.

The other is that, once a crisis occurs (such as in 2007-08), the two groups of economists offer very different solutions: whereas mainstream economists spend their time debating whether or not any kind of intervention is warranted (based on neoclassical versus Keynesian assumptions concerning invisible and visible hands), Marxist economists presume that interventions are always-already being made (in terms of determining who pays the costs of the crisis) and that it’s better both to help those who are most vulnerable and to put in place the kinds of institutional changes that would prevent future crises.

So, no, I don’t put a lot of stock in economic forecasting, whether promised by mainstream economists or others. It’s a promise of control that is a lot like selling snake oil. But I’m willing to throw in my lot with an approach that, first, actually includes the possibility of such crises at the very center of the theory and, second, is willing to move outside the paradigm of private property and markets to help those who are hurt by the crisis and to change the rules so that those who created the crisis in the first place no longer have the incentive and means to do it again in the future.

And you don’t need a crystal ball to know that, if such changes are not made, another crisis is awaiting us just around the corner.

Update

Here’s the graph Bruce is referring to in the comments on this post:

[graph]

And here’s the same series going back earlier:

[FRED graph]

Nate Silver is my favorite statistician precisely because he understands the fundamental importance of uncertainty:

The Weather Service has struggled over the years with how much to let the public in on what it doesn’t exactly know. In April 1997, Grand Forks, N.D., was threatened by the flooding Red River, which bisects the city. Snowfall had been especially heavy in the Great Plains that winter, and the service, anticipating runoff as the snow melted, predicted that the Red would crest to 49 feet, close to the record. Because the levees in Grand Forks were built to handle a flood of 52 feet, a small miss in the forecast could prove catastrophic. The margin of error on the Weather Service’s forecast — based on how well its flood forecasts had done in the past — implied about a 35 percent chance of the levees’ being topped.

The waters, in fact, crested to 54 feet. It was well within the forecast’s margin of error, but enough to overcome the levees and spill more than two miles into the city. Cleanup costs ran into the billions of dollars, and more than 75 percent of the city’s homes were damaged or destroyed. Unlike a hurricane or an earthquake, the Grand Forks flood may have been preventable. The city’s flood walls could have been reinforced using sandbags. It might also have been possible to divert the overflow into depopulated areas. But the Weather Service had explicitly avoided communicating the uncertainty in its forecast to the public, emphasizing only the 49-foot prediction. The forecasters later told researchers that they were afraid the public might lose confidence in the forecast if they had conveyed any uncertainty.

Since then, the National Weather Service has come to recognize the importance of communicating the uncertainty in its forecasts as completely as possible. “Uncertainty is the fundamental component of weather prediction,” said Max Mayfield, an Air Force veteran who ran the National Hurricane Center when Katrina hit. “No forecast is complete without some description of that uncertainty.” Under Mayfield’s guidance, the National Hurricane Center began to pay much more attention to how it presents its forecasts. Instead of just showing a single track line for a hurricane’s predicted path, their charts prominently feature a cone of uncertainty, which many in the business call “the cone of chaos.”
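As an aside on the arithmetic: if forecast errors are treated as roughly normal (an assumption of mine, not a description of the Weather Service’s actual method), a 49-foot prediction with a historical error spread of about 8 feet implies close to the 35 percent chance of topping the 52-foot levees cited above. A back-of-the-envelope sketch:

```python
# Back-of-the-envelope reconstruction of the ~35% figure, assuming
# (hypothetically) normally distributed forecast errors with a spread
# of about 8 feet, estimated from past forecast performance.
from statistics import NormalDist

forecast_crest = 49.0  # feet, the published prediction
levee_height = 52.0    # feet, what the Grand Forks levees could handle
error_sd = 8.0         # feet, assumed historical error spread

crest = NormalDist(mu=forecast_crest, sigma=error_sd)
p_topped = 1.0 - crest.cdf(levee_height)

print(f"P(crest > {levee_height:.0f} ft) ≈ {p_topped:.0%}")  # ≈ 35%
```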

Silver also understands how the honest communication of uncertainty can be undermined:

Unfortunately, this cautious message can be undercut by private-sector forecasters. Catering to the demands of viewers can mean intentionally running the risk of making forecasts less accurate. For many years, the Weather Channel avoided forecasting an exact 50 percent chance of rain, which might seem wishy-washy to consumers. Instead, it rounded up to 60 or down to 40. In what may be the worst-kept secret in the business, numerous commercial weather forecasts are also biased toward forecasting more precipitation than will actually occur. (In the business, this is known as the wet bias.) For years, when the Weather Channel said there was a 20 percent chance of rain, it actually rained only about 5 percent of the time.
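The wet bias is straightforward to detect as a calibration check: bucket forecasts by the stated probability of rain and compare with how often it actually rained. A minimal sketch, with invented records mirroring the 20-percent-stated versus 5-percent-observed example above:

```python
# Calibration check for the "wet bias": group forecasts by the stated
# probability of rain and compare with the observed frequency. The
# records below are invented to mirror the example in the quote.
from collections import defaultdict

# (stated probability of rain, whether it actually rained)
records = ([(0.2, False)] * 95 + [(0.2, True)] * 5
           + [(0.6, False)] * 60 + [(0.6, True)] * 40)

buckets = defaultdict(list)
for stated, rained in records:
    buckets[stated].append(rained)

for stated in sorted(buckets):
    outcomes = buckets[stated]
    observed = sum(outcomes) / len(outcomes)
    verdict = "wet bias" if stated > observed else "well calibrated"
    print(f"stated {stated:.0%} -> observed {observed:.0%} ({verdict})")
```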

The same is true in the discipline of economics, where generations of modernist practitioners have attempted to domesticate and contain uncertainty (by converting it into probabilistic certainty) instead of allowing it to flourish (and dealing with the resulting effects on their concepts and methods).

As Silver explains, “It’s much easier to hawk overconfidence, no matter if it’s any good.”