

In a recent article, Dan Falk [ht: ja] identifies a fundamental problem in contemporary physics:

many physicists working today have been led astray by mathematics — seduced by equations that might be “beautiful” or “elegant” but which lack obvious connection to the real world.

What struck me is that, if you changed physics and physicists to economics and economists, you’d get the exact same article. And the same set of problems.

Economists—especially mainstream economists but, truth be told, not a few heterodox economists—are obsessed with mathematics and formal modeling as the only correct methods for achieving capital-T truth. Mathematical modeling for them represents the best, most scientific way of producing, disseminating, and determining the veracity of economic knowledge—because it is logical, concise, precise, and elegant.* In that sense, mathematics represents what can only be described as a utopia for the practice of modern economics.**

Mathematical utopianism in economics is based on elevating mathematics to the status of a special code or language. It is considered both a neutral language and, at the same time, a language uniquely capable of capturing the essence of reality. Thus, economists see mathematics as having both an underprivileged and overprivileged status vis-à-vis other languages.

Let me explain. On one hand, mathematics is understood to be a neutral medium into which all statements of each theory, and the statements of all theories, can be translated without modifying them. Mathematics, in this view, is devoid of content. It is neutral with respect to the various theories where it is applied. Partial and general equilibrium, game theory, and mathematical programming are concepts that serve to communicate the content of a theory without changing that content. Similarly, mathematical operations on the mathematized objects of analysis are considered to be purely formal. Thus, as a result of the conceptual neutrality of the methods and procedures of mathematical formalization, the objects of analysis are said to be unaffected by their mathematical manipulation.

On the other hand, mathematics is considered to be uniquely capable of interpreting theory in its ability to separate the rational kernel from the intuitional (vague, imprecise) husk, the essential from the inessential. It becomes the unique standard of logic, consistency, and proof. Once intuitions are formed, mathematical models can be constructed that prove (or not) the logical consistency of the theory. Other languages are considered incapable of doing this because the operations of mathematics have an essential truth value that other languages do not possess. Mathematical statements, for example, are considered to be based on the necessity of arriving at conclusions as a result of following universal mathematical rules.

It is in these two senses that mathematics is considered to be a special language or code. It is more important than other languages in that it is uniquely capable of generating truth statements. It is also less important in that it is conceived to have no impact on what is being thought and communicated.

The notion of mathematics as a special code is linked, in turn, to the twin pillars of traditional epistemology, empiricism and rationalism. The oversight of mathematics implied by its underprivileged status is informed by an empiricist conception of knowledge: mathematics is considered to be a universal instrument of representation. It is used as a tool to express the statements of a discourse that always already has an essential grasp on the real. It is the universal language in and through which the objects (and the statements about those objects) of different economic and social theories can all be expressed. In other words, the role of mathematics is to express the various “intuitive” statements of the theorist in a neutral language such that they can be measured against reality.

The underprivileged position of mathematics that is linked to an empiricist epistemology contrasts sharply with the overprivileged status of mathematics. This overprivileged conception of mathematics is associated with a rationalist theory of knowledge wherein the subject-object dichotomy is reversed. Here the subject becomes the active participant in discovering knowledge by operating on the theoretical model of reality. In this sense, the logical structure of theory—not the purported correspondence of theory to the facts—becomes the privileged or absolute standard of the process of theorizing. Reality, in turn, is said to correspond to the rational order of thought. The laws that govern reality are deduced from the singular set of mathematical models in and through which the essence of reality can be grasped.***

The conception of mathematics as a mere language contains, however, the seeds of its own destruction. The notion of language as a simple medium through which ideas are communicated has long been challenged—since language is both constitutive of, and constituted by, the process of theorizing. The use of mathematics in economics thus may be reconceptualized as a discursive condition of theories, which constrains and limits, and is partly determined by, those theories. Mathematical concepts—such as the equilibrium position associated with the solution to a set of simultaneous equations, the exogenous status of the rules of a game, or the definition of a series of overlapping value functions to optimize an overall goal—partly determine the notions of relation and causality among the theoretical objects designated by the theories in which the means of mathematical formalization are utilized. They are not the neutral conceptual tools to which the propositions of different theories can be reduced. Similarly, the rationalist idea of abstraction, of simplification, also leads to a fundamental problem. It implies that there is a noise that ultimately escapes the “fictional” mathematical model. It implies an empirical distance between the model and its domain of interpretation, the empirical concrete. And that distance is conceived to be part of the empirical concrete itself. There is a part of reality that necessarily escapes the model. Thus, rationalist deductions from the model cannot produce the truth of the real because something is always “missing.”

So, what’s the alternative? As I see it, there is a double movement that involves both the rejection of mathematics as the discovery of an extra-mathematical reality and the critique of the notion that mathematics merely expresses the form in which otherwise nonmathematical theories are communicated. Thus, for example, it is possible (using, e.g., the insights of Ludwig Wittgenstein and Edmund Husserl) to argue that mathematics is a historical, social invention, not a form of discovery of an independent reality; it is not discovered “out there,” but invented and reinvented over time based on rules that are handed down by mathematicians and the actual users of mathematics (such as economists). By the same token, we can see mathematics as introducing both new concepts and new forms of reasoning into other domains, such as economics and for that matter physics (which is exactly what Gaston Bachelard has argued).

This double movement has various effects. It means that there are no grounds for considering mathematics to be a privileged language with respect to other, nonmathematical languages. There is, for example, no logical necessity inherent in the use of the mathematical language. The theorist makes choices about the kinds of mathematics that are used, about the steps from one mathematical argument to another, and whether or not any mathematics will be used at all. Different uses (or not) of mathematics and different kinds of mathematics will have determinate effects on the discourse in question. Discourses change as they are mathematized—they are changed, not in the direction of becoming more (or less) scientific, but by transforming the way the objects of the discourse are constructed, and the way statements are made about those objects.

Ultimately, this deconstruction of mathematics as a special code leads to a rejection of the conception of mathematics as a special language of representation. The status of mathematics is both more representational and less representational than allowed by the discourse of representation. More, in the sense that mathematics has effects on the very structure of the mathematized theory; mathematics is not neutral. Less, to the extent that the use of mathematics does not guarantee the scientificity of the theory in question; it is merely one discursive strategy among others.

One alternative approach to making sense of the use of mathematics in economic theory is to consider mathematics not in terms of representation, but as a form of “illustration.” For economists, mathematical concepts and models can be understood as metaphors or heuristic devices that illustrate part of the contradictory movement of economic and social processes. These concepts and models can be used, where appropriate, to consider in artificial isolation one or another moment in the course of the constant movement and change in the economy and society. Mathematics may be used, then, to illustrate the statements of economic theory but, like all metaphors (in economics as in literature and other areas of social thought), it outlives its usefulness and then has to be dismantled.

As I see it, this conception of mathematical models as illustrative metaphors does not constitute a flat rejection of their use in economic theory. Rather, it accords to mathematical concepts and models a discursive status different from the one that is attributed to them in the work of mathematical economists. It accepts the possibility—but not the necessity—of using mathematical propositions as metaphors that are borrowed from outside of economic theory and transformed to teach and develop some of the concepts and statements of one or another economic theory.

Deconstructing the status of mathematics as a special code has the advantage of transforming both the way economics is done within any particular theory and the way the debate between different economic theories itself is conducted. It undermines the Truth-effect associated with mathematical utopianism and focuses attention, instead, on the conditions and consequences of different ways of thinking about the economy.

That debate—about the effects of different languages on economics, and the effects of different economic theories on the wider society—has its own utopian moment: transforming economics into a space not of blind obedience to mathematical protocols, but of real theoretical and political choices.

 


*From time to time, there have been a few admonishments from among economists themselves. Oskar Morgenstern (e.g., in his essay “Limits to the Uses of Mathematics in Economics,” published in 1963) and, more forcefully, Nicholas Georgescu-Roegen (especially in his 1971 The Entropy Law and the Economic Process), Philip Mirowski (e.g., in More Heat Than Light, in 1989), and Paul Romer have indicated some of the problems associated with the wholesale mathematization of economics. However, even their limited criticisms have been ignored for the most part by economists.

In recent years, students (such as the members of the International Student Initiative for Pluralism in Economics) have been at the forefront of questioning the fetishism of mathematical methods in economics:

It is clear that maths and statistics are crucial to our discipline. But all too often students learn to master quantitative methods without ever discussing if and why they should be used, the choice of assumptions and the applicability of results. Also, there are important aspects of economics which cannot be understood using exclusively quantitative methods: sound economic inquiry requires that quantitative methods are complemented by methods used by other social sciences. For instance, the understanding of institutions and culture could be greatly enhanced if qualitative analysis was given more attention in economics curricula. Nevertheless, most economics students never take a single class in qualitative methods.

Their pleas, too, have been mostly greeted with indifference or contempt by economists.

**As I see it, the current fad of relying on randomized experiments and big data does not really undo the longstanding utopian claims associated with mathematical modeling, since the formal models are still there in the background, orienting the issues (including the choice of data sets) taken up in the new experimental and data-heavy approach to economics. In addition, there is the problem that others—such as John P. A. Ioannidis et al. (unfortunately behind a paywall)—have discovered: most economists use data sets that are much too small relative to the size of the effects they report. This means that a sizable fraction of the findings reported by economists are simply the result of publication bias—the tendency of academic journals to report accidental results that only appear to be statistically significant.
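The mechanism behind that claim can be illustrated with a small simulation of my own (it is not taken from the Ioannidis paper, and every number in it is made up for illustration): when samples are too small relative to a modest true effect and only “significant” results get published, the published estimates end up badly inflated.

```python
import random
import statistics

random.seed(42)

def run_study(true_effect, n, sigma=1.0):
    """Simulate one underpowered study: the mean of n noisy observations."""
    sample = [random.gauss(true_effect, sigma) for _ in range(n)]
    mean = statistics.mean(sample)
    se = sigma / n ** 0.5
    significant = abs(mean / se) > 1.96  # conventional 5-percent threshold
    return mean, significant

true_effect = 0.1   # a small true effect
n = 20              # a sample far too small for an effect this size

# "Publish" only the studies that cross the significance threshold.
published = [m for m, sig in (run_study(true_effect, n) for _ in range(10_000)) if sig]

print(f"true effect:                {true_effect}")
print(f"share of studies published: {len(published) / 10_000:.2%}")
print(f"mean published estimate:    {statistics.mean(published):.2f}")
```

With these illustrative numbers, only a small fraction of studies clear the threshold, and the ones that do overstate the true effect several times over—exactly the winner’s-curse pattern the footnote describes.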

***Economists often move back and forth between the two otherwise diametrically opposed conceptions of mathematics because they represent two sides of the same epistemological coin: although each reverses the order of proof of the other, both empiricism and rationalism presume the same fundamental terms and some form of correspondence between them. In this sense, they are variant forms of an “essentialist” conception of the process of theorizing. Both of them invoke an absolute epistemological standard to guarantee the (singular, unique) scientificity of the production of economic knowledge.

phlogiston07_1000

Stanislas Wolff, “Phlogiston”

The other day, I argued (as I have many times over the years) that contemporary mainstream macroeconomics is in a sorry state.

Mainstream macroeconomists didn’t predict the crash. They didn’t even include the possibility of such a crash within their theory or models. And they certainly didn’t know what to do once the crash occurred.

I’m certainly not the only one who is critical of the basic theory and models of contemporary mainstream macroeconomics. And, at least recently (and, one might say, finally), many of the other critics are themselves mainstream economists—such as MIT emeritus professor and former IMF chief economist Olivier Blanchard (pdf), who has noted that the models that are central to mainstream economic research—so-called dynamic stochastic general equilibrium models—are “seriously flawed.”

Now, one of the most mainstream of the mainstream, Paul Romer (pdf), soon to be chief economist at the World Bank, has taken aim at mainstream macroeconomics.* You can get a taste of the severity of his criticisms from the abstract:

For more than three decades, macroeconomics has gone backwards. The treatment of identification now is no more credible than in the early 1970s but escapes challenge because it is so much more opaque. Macroeconomic theorists dismiss mere facts by feigning an obtuse ignorance about such simple assertions as “tight monetary policy can cause a recession.” Their models attribute fluctuations in aggregate variables to imaginary causal forces that are not influenced by the action that any person takes. A parallel with string theory from physics hints at a general failure mode of science that is triggered when respect for highly regarded leaders evolves into a deference to authority that displaces objective fact from its position as the ultimate determinant of scientific truth.

That’s right: in Romer’s view, macroeconomics (by which he means mainstream macroeconomics) “has gone backwards” for more than three decades.

Romer’s particular concern is with the “identification problem,” which in econometrics has to do with being able to solve for unique values of the parameters of a model (the so-called structural model, usually of simultaneous equations) from the values of the parameters of the reduced form of the model (i.e., the model in which the endogenous variables are expressed as functions of the exogenous variables). A supply-and-demand model of a market is a good example: it is not enough, in attempting to identify the two different supply and demand equations, to solely use observations of different quantities and prices. In particular, it’s impossible to estimate a downward slope (of the demand curve) and an upward slope (of the supply curve) with one linear regression line involving only two variables. That’s because both supply and demand curves can be shifting at the same time, and it can be difficult to disentangle the two effects. That, in a nutshell, is the “identification problem.”
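The supply-and-demand version of the problem can be simulated directly. The sketch below is my own illustration (all parameter values are invented): both curves shift at the same time, and a naive regression of quantity on price recovers neither the demand slope nor the supply slope.

```python
import random

random.seed(0)

# Hypothetical structural parameters, chosen only for illustration.
a, b = 10.0, 1.0   # demand: Q = a - b*P + u_d
c, d = 2.0, 1.0    # supply: Q = c + d*P + u_s

prices, quantities = [], []
for _ in range(5_000):
    u_d = random.gauss(0, 1)            # demand shifter
    u_s = random.gauss(0, 1)            # supply shifter
    p = (a - c + u_d - u_s) / (b + d)   # market-clearing price
    q = c + d * p + u_s                 # quantity read off the supply curve
    prices.append(p)
    quantities.append(q)

# OLS slope of Q on P: cov(Q, P) / var(P)
mp = sum(prices) / len(prices)
mq = sum(quantities) / len(quantities)
cov = sum((p - mp) * (q - mq) for p, q in zip(prices, quantities)) / len(prices)
var = sum((p - mp) ** 2 for p in prices) / len(prices)
slope = cov / var

print(f"OLS slope: {slope:.2f}  (demand slope: {-b}, supply slope: {d})")
```

With these symmetric shifters the estimated slope sits near zero—far from both the true demand slope (−1) and the true supply slope (+1)—which is the identification problem in miniature.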

The problem is similar in macroeconomic models, and Romer finds that many mainstream economists rely on models that require and presume exogenous shocks—imaginary shocks, which “occur at just the right time and by just the right amount” (hence phlogiston)—to generate the desired results. Thus, in his view, “the real business cycle model explains recessions as exogenous decreases in phlogiston.”

The issue with phlogiston is that it can’t be directly measured. Nor, as it turns out, can many of the other effects invoked by mainstream economists. Here’s how Romer summarizes these imaginary effects:

  • A general type of phlogiston that increases the quantity of consumption goods produced by given inputs
  • An “investment-specific” type of phlogiston that increases the quantity of capital goods produced by given inputs
  • A troll who makes random changes to the wages paid to all workers
  • A gremlin who makes random changes to the price of output
  • Aether, which increases the risk preference of investors
  • Caloric, which makes people want less leisure

So, there you have it: in Romer’s view, contemporary mainstream economists rely on various types of phlogiston, a troll, a gremlin, aether, and caloric. That’s how they attempt to solve the identification problem in their models.
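The kind of shock process these models lean on can be sketched in a few lines. This is a deliberately crude caricature, not any published model, and every parameter value is illustrative; the point is simply that all of the model’s fluctuations are generated by an unobservable exogenous process that no agent influences—Romer’s phlogiston.

```python
import math
import random

random.seed(1)

# A stand-in for "phlogiston": an unobservable AR(1) log-productivity
# process z that, by construction, drives every fluctuation in output.
rho, sigma = 0.95, 0.01   # persistence and shock size (illustrative)
alpha = 0.33              # capital share in a Cobb-Douglas technology

z, k = 0.0, 1.0           # log productivity; arbitrary initial capital
output = []
for _ in range(200):
    z = rho * z + random.gauss(0, sigma)   # the shock arrives "just right"
    y = math.exp(z) * k ** alpha           # Y = e^z * K^alpha
    k = 0.9 * k + 0.2 * y                  # ad hoc accumulation rule
    output.append(y)

print(f"output fluctuates between {min(output):.3f} and {max(output):.3f}")
```

Everything interesting in the simulated series traces back to the draws of z—which is precisely what makes the “identification” of such shocks from data so suspect.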

But, for Romer, there’s a second identification problem: mainstream economists continue to build and apply these phlogiston-identified dynamic stochastic general equilibrium models because they have “a sense of identification with the group akin to identification with a religious faith or political platform.”

The conditions for failure are present when a few talented researchers come to be respected for genuine contributions on the cutting edge of mathematical modeling. Admiration evolves into deference to these leaders. Deference leads to effort along the specific lines that the leaders recommend. Because guidance from authority can align the efforts of many researchers, conformity to the facts is no longer needed as a coordinating device. As a result, if facts disconfirm the officially sanctioned theoretical vision, they are subordinated. Eventually, evidence stops being relevant. Progress in the field is judged by the purity of its mathematical theories, as determined by the authorities.

I, for one, have no problem with group identification (I often identify with Marxists and many of the other strangers in the strange land of economics). But when it’s identification with a few leaders, and when it’s an issue of the purity of the mathematics—and not shedding light on what is actually going on out there—well, then, there’s a serious problem.

As it turns out, modern mainstream economics has two identification problems—one in the imaginary solution of the models, the other with the imagined purity of the mathematics. Together, the two identification problems mean that what is often taken to be the cutting edge of modern macroeconomics is in fact seriously flawed—and has become increasingly flawed for more than three decades.

But let me leave the last word to Daniel Drezner, who has lost all patience with mainstream economists’ self-satisfaction with their theories, models, and standing in the world:

this is a complete and total crock.

 

*Other mainstream economists, such as Narayana Kocherlakota and Noah Smith, have expressed their substantial agreement with Romer.

Untitled

I know I shouldn’t. But there are so many wrong-headed assertions in the latest Bloomberg column by Noah Smith, “Economics Without Math Is Trendy, But It Doesn’t Add Up,” that I can’t let it pass.

But first let me give him credit for his opening observation, one I myself have made plenty of times on this blog and elsewhere:

There’s no question that mainstream academic macroeconomics failed pretty spectacularly in 2008. It didn’t just fail to predict the crisis — most models, including Nobel Prize-winning ones, didn’t even admit the possibility of a crisis. The vast majority of theories didn’t even include a financial sector.

And in the deep, long recession that followed, mainstream macro theory failed to give policymakers any consistent guidance as to how to respond. Some models recommended fiscal stimulus, some favored forward guidance by the central bank, and others said there was simply nothing at all to be done.

It is, in fact, as Smith himself claims, a “dismal record.”

But then Smith goes off the tracks, with a long series of misleading and mistaken assertions about economics, especially heterodox economics. Let me list some of them:

  • citing a mainstream economist’s characterization of heterodox economics (when he could have, just as easily, sent readers to the Heterodox Economics Directory—or, for that matter, my own blog posts on heterodox economics)
  • presuming that heterodox economics is mostly non-quantitative (although he might have consulted any number of books by economists from various heterodox traditions or journals in which heterodox economists publish articles, many of which contain quantitative—theoretical and empirical—work)
  • equating formal, mathematical, and quantitative (when, in fact, one can find formal models that are neither mathematical nor quantitative)
  • also equating nonquantitative, broad, and vague (when, in fact, there is plenty of nonquantitative work in economics that is quite specific and unambiguous)
  • arguing that nonquantitative economics is uniquely subject to interpretation and reinterpretation (as against, what, the singular meaning of the Arrow-Debreu general equilibrium system or the utility-maximization that serves as the microfoundations of mainstream macroeconomics?)
  • concluding that “heterodox economics hasn’t really produced a replacement for mainstream macro”

Actually, this is the kind of quick and easy dismissal of whole traditions—from Karl Marx to Hyman Minsky—most heterodox economists are quite familiar with.

My own view, for what it’s worth, is that there’s no need for work in economics to be formal, quantitative, or mathematical (however those terms are defined) in order for it to be useful, valuable, or insightful (again, however defined)—including, of course, work in traditions that run from Marx to Minsky, which focused on the possibility of a crisis, warned of an impending crisis, and offered specific guidance about what to do once the crisis broke out.

But if Smith wants some heterodox macroeconomics that uses some combination of formal, mathematical, and quantitative techniques he need look no further than a volume of essays that happens to have been published in 2009 (and therefore written earlier), just as the crisis was spreading across the United States and the world economy. I’m referring to Heterodox Macroeconomics: Keynes, Marx and Globalization, edited by Jonathan P. Goldstein and Michael G. Hillard.

There, Smith will find the equation at the top of the post, which is very simple but contains an idea that one will simply not find in mainstream macroeconomics. It’s merely an income share-weighted version of a Keynesian consumption function (for a two-class world), which has the virtue of placing the distribution of income at the center of the macroeconomic story.* Add to that an investment function, which depends on the profit rate (which in turn depends on the profit share of income and capacity utilization) and you’ve got a system in which “alterations in the distribution of income can have important and potentially offsetting impacts on the level of effective demand.”
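A minimal sketch of that system, with hypothetical parameter values and a deliberately crude profit-rate proxy of my own (nothing here is taken from the Goldstein and Hillard volume), shows how the distributional channel works: raising the wage share boosts consumption while dampening profit-led investment, so the net effect on effective demand depends on the parameters.

```python
def effective_demand(Y, wage_share, c_w=0.9, c_pi=0.4, i0=0.2, i1=0.3):
    """Share-weighted two-class consumption plus a profit-led investment function.

    All parameter values are illustrative: c_w and c_pi are the marginal
    propensities to consume out of wages and profits, i0 and i1 govern
    a simple investment function that rises with the profit rate.
    """
    profit_share = 1.0 - wage_share
    C = (c_w * wage_share + c_pi * profit_share) * Y   # consumption
    r = profit_share * Y                               # crude profit-rate proxy
    I = i0 + i1 * r                                    # profit-led investment
    return C, I, C + I

for w in (0.6, 0.7, 0.8):
    C, I, D = effective_demand(Y=1.0, wage_share=w)
    print(f"wage share {w:.1f}: consumption {C:.3f}, investment {I:.3f}, demand {D:.3f}")
```

With these numbers demand is wage-led (the consumption effect dominates); a larger i1 would let the investment effect offset or reverse it—the “potentially offsetting impacts” the quotation describes.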

And heterodox traditions within macroeconomics have built on these relatively simple ideas, including

a microfounded Keynes–Marx theory of investment that further incorporates the external financing of investment based upon uncertain future profits, the irreversibility of investment and the coercive role of competition on investment. In this approach, the investment function is extended to depend on the profit rate, long-term and short-term heuristics for the firm’s financial robustness and the intensity of competition. It is the interaction of these factors that fundamentally alters the nature of the investment function, particularly the typical role assigned to capacity utilization. The main dynamic of the model is an investment-induced growth-financial safety tradeoff facing the firm. Using this approach, a ceteris paribus increase in the financial fragility of the firm reduces investment and can be used to explain autonomous financial crises. In addition, the typical behavior of the profit rate, particularly changes in income shares, is preserved in this theory. Along these lines, the interaction of the profit rate and financial determinants allows for real sector sources of financial fragility to be incorporated into a macro model. Here, a profit squeeze that shifts expectations of future profits forces firms and lenders to alter their perceptions on short-term and long-term levels of acceptable debt. The responses of these agents can produce a cycle based on increases in financial fragility.

It’s true: such a model does not lead to a specific forecast or prediction. (In fact, it’s more a long-term model than an explanation of short-run instabilities.) But it does provide an understanding of the movements of consumption and investment that help to explain how and why a crisis of capitalism might occur. Therefore, it represents a replacement for the mainstream macroeconomics that exhibited a dismal record with respect to the crash of 2007-08.

But maybe it’s not the lack of quantitative analysis in heterodox macroeconomics that troubles Smith so much. Perhaps it’s really the conclusion—the fact that

The current crisis combines the development of under-consumption, over-investment and financial fragility tendencies built up over the last 25 years and associated with a finance-led accumulation regime.

And, for that constellation of problems, there’s no particular advice or quick fix for Smith’s “policymakers and investors”—except, of course, to get rid of capitalism.

 

*Technically, consumption (C) is a function of the marginal propensity to consume of labor, labor’s share of income, the marginal propensity to consume of capital, and the profit share of income.

 


Sebastian Mallaby referred to Paul Romer’s scheme of building charter cities as Empire 2.0 (which is much the same connection I made back in 2010).

the largest obstacle Romer faces, by his own admission, still remains: he has to find countries willing to play the role of Britain in Hong Kong. Despite the good arguments that Romer makes for his vision, the responsibilities entailed in Empire 2.0 are not popular. How would a rich government contend with the shantytowns that might spring up around the borders of a charter city? Would it deport the inhabitants, and be accused of human-rights abuses? Or tolerate them and allow its oasis to be overrun with people who don’t respect its city charter? And what would the foreign trustee do if its host tried to nullify the lease? Would it defend its development experiment with an expeditionary army, as Margaret Thatcher defended the Falklands? A top official at one of Europe’s aid agencies told me, “Since we are responsible for our remaining overseas territories, I can tell you there is much grief in running these things. I would be surprised if Romer gets any takers.”

According to an announcement on his own blog, Romer is now headed to the World Bank.

There, Romer will be able to develop his imperial scheme—and, presumably, as I described his work last year, eliminate political “mathiness” and steer the focus of attention to “nonrival ideas” and away from capital and the real problems of growth within capitalist economies.


Harmen de Hoop and Jan Ubøe, “Permanent Education (a mural about the beauty of knowledge)” (Nuart 2015, Stavanger, Norway)

Ubøe, Professor of Mathematics and Statistics at the Norwegian School of Economics, gives a 30-minute lecture on the streets of Stavanger on the subject of option pricing.

Drawing on Black and Scholes’s explanation of how to price options, Ubøe will explain how banks can eliminate risk when they issue options. Black and Scholes explained how banks (by trading continuously in the market) can meet their obligations no matter what happens. The option price is the minimum amount of money that a bank needs to carry out such a strategy.

While the core argument is perfectly sound, it has an interesting flaw. If the market suddenly makes a jump, i.e. reacts so fast that the bank does not have sufficient time to reposition their assets, the bank will be exposed to risk. This flaw goes a long way to explain the devastating financial crisis.

This theory, and similar other theories, led banks to believe that risk no longer existed, so why not lend money to whoever is in need of money? In the end the losses peaked at 13,000 billion dollars – more than the total profits from banking since the dawn of time.
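The price Ubøe refers to—the Black–Scholes value of a European call, i.e., the cost of the continuous hedging strategy—can be computed in a few lines. The parameter values below are purely illustrative, and the formula presumes exactly the jump-free, continuously tradable market whose failure the lecture identifies as the flaw.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.

    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility.
    """
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative numbers: an at-the-money one-year call.
price = bs_call(S=100, K=100, T=1.0, r=0.02, sigma=0.2)
print(f"call price: {price:.2f}")
```

Nothing in the formula prices the possibility that the market gaps before the bank can rebalance its hedge—which is the risk that, on Ubøe’s account, the banks convinced themselves no longer existed.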

My guess is, most of the members of the audience did not understand the mathematics. However, Ubøe assures them it works—both as a form of knowledge (the manipulation of the mathematics) and as a strategy for banks (to eliminate risk)—and they can’t but believe him. It has a kind of beauty.

And then he explains that other effect of the math: it led banks to believe they had found a way of eliminating risk (because, like the audience, they believed the mathematicians), which fell apart when markets made sudden jumps and the traders weren’t able to reposition their assets quickly enough.

In that case, the beauty of the knowledge is undermined by the ugliness of the results.


I’m taking nominations for the best examples of dismal economic scientists.

While I wait for your suggestions, I’m going to offer two of my own nominations: Tyler Cowen and Paul Romer.

I am nominating Cowen because, in his argument that the economy probably needs a “reset,” he only focuses on lowering workers’ wages. First, he makes no mention of resetting corporate profits or the incomes of those at the very top, as if what they manage to capture were completely off limits. All the adjustment in the new, “grimmer future” will be borne by those at the bottom. Second, he completely overlooks the mechanisms of his own economic theory: if lower rates of economic growth are the product of lower rates of growth of available workers (a key factor in the theory of secular stagnation), then the relative scarcity of workers should mean higher—not lower—wages. In other words, Cowen is determined to make sure all the costs of the new, slower-growing economy will be shifted onto those who can least afford it. For that reason, I nominate Cowen for the title of dismal economist.

I also want to nominate Romer, who continues to double down on his “mathiness” argument, by asserting (against all the work that has taken place in the philosophy of science in recent decades) that (a) there’s a single truth, (b) that truth can only be obtained via science, and (c) mathematical modeling is the singular method for making progress in science to obtain truth. There are so many things wrong with each of those assertions it’s hard to know where to begin. And I won’t, at least right now. Let me just say Romer deserves his nomination as one of the most dismal economists because of the extraordinary arrogance, pretentiousness, and ignorance of the following statements:

About math: . . . I’ve seen clear evidence that math can facilitate scientific progress toward the truth.

If you think that math is worthless or dangerous, I’m sure that there are people who will be happy to discuss this with you. I’m not interested. I’m busy.

About truth and science: My fundamental premise is that there is an objective notion of truth and that science can help us make progress toward truth.

If you do not accept this premise, I’m sure that there are people who would be happy to debate it with you. I’m not interested. I’m busy.

And please do not write to tell me that science is a social process or that the progress it makes toward the truth can be irregular. I know.

Me, I’m not too busy to discuss either the fundamental injustices of contemporary capitalism or the often-worthless and dangerous role mathematics, truth, and science have played and continue to play in the discipline of economics.

I’m also not too busy to post additional nominations for dismal economists.


You remember the dialogue:

Queen: Slave in the magic mirror, come from the farthest space, through wind and darkness I summon thee. Speak! Let me see thy face.

Magic Mirror: What wouldst thou know, my Queen?

Queen: Magic Mirror on the wall, who is the fairest one of all?

Magic Mirror: Famed is thy beauty, Majesty. But hold, a lovely maid I see. Rags cannot hide her gentle grace. Alas, she is more fair than thee.

I was reminded of this particular snippet from Snow White and the Seven Dwarfs while reading the various defenses of contemporary macroeconomic models. Mainstream macroeconomists failed to predict the most recent economic crisis, the worst since the Great Depression of the 1930s, but, according to them, everything in macroeconomics is just fine.

There’s David Andolfatto, who argues that the goal of macro models is not really prediction; it is, instead, only conditional forecasts (“IF a set of circumstances hold, THEN a number of events are likely to follow.”). So, in his view, the existing models are mostly fine—as long as they’re supplemented with some “financial market frictions” and a bit of economic history.

Mark Thoma, for his part, mostly agrees with Andolfatto but adds we need to ask the right questions.

we didn’t foresee the need to ask questions (and build models) that would be useful in a financial crisis — we were focused on models that would explain “normal times” (which is connected to the fact that we thought the Great Moderation would continue due to arrogance on behalf of economists leading to the belief that modern policy tools, particularly from the Fed, would prevent major meltdowns, financial or otherwise). That is happening now, so we’ll be much more prepared if history repeats itself, but I have to wonder what other questions we should be asking, but aren’t.

Then, of course, there’s Paul Krugman who (not for the first time) defends hydraulic Keynesianism (aka Hicksian IS/LM models)—“little equilibrium models with some real-world adjustments”—which, in his view, have been “stunningly successful.”

And, finally, to complete my sample from just the last couple of days, we have Noah Smith, who defends the existing macroeconomic models—because they’re models!—and chides heterodox economists for not having any alternative models to offer.

The issue, as I see it, is not whether there’s a macroeconomic model (e.g., dynamic stochastic general equilibrium, as depicted in the illustration above, or Bastard Keynesian or whatever) that can, with the appropriate external “shock,” generate a boom-and-bust cycle or zero-lower-bound case for government intervention. There’s a whole host of models that can generate such outcomes.

No, there are two real issues that are never even mentioned in these attempts to defend contemporary macroeconomic models. First, what is widely recognized to be the single most important economic problem of our time—the growing inequality in the distribution of income and wealth—plays no role (and, in models with a single representative agent, simply can play no role) either in causing boom-and-bust cycles or in shaping the lopsided recovery that has followed from the kinds of fiscal and monetary policies used in recent years.

That’s the specific issue. And then there’s a second, more general issue: the only way to get an economic crisis out of mainstream models (of whatever stripe, using however much math) is via some kind of external shock. The biggest problem with existing models is not that they failed to predict the crisis; it’s that, within them, a crisis can only ever be triggered by an exogenous event. The key failure of mainstream macroeconomic models is that they exclude from analysis the idea that the “normal” workings of capitalism generate economic crises on a regular basis—some of which are relatively mild recessions, others of which (such as we’ve seen since 2007) are full-scale depressions. What really should be of interest are theories that generate boom-and-bust cycles based on endogenous events within capitalism itself.

With respect to both these issues, contemporary mainstream macroeconomic models have “stunningly” failed.

I imagine that’s what the slave in the magic mirror, who simply will not lie to the Queen, would say.