Posts Tagged ‘Great Depression’

[Cartoon by Polyp]

Mainstream economics lies in tatters. Certainly, the crash of 2007-08 and the Second Great Depression called into question mainstream macroeconomics, which has failed to provide a convincing explanation of either the causes or consequences of the most severe crisis of capitalism since the Great Depression of the 1930s.

But mainstream microeconomics, too, increasingly appears to be a fantasy—especially when it comes to issues of corporate power.

[Figure: perfect competition in the long run]

Neoclassical microeconomics is based on a set of models that assume perfect competition. What that means, as my students learned the other day, is that, while in the short run firms may capture super-profits (because price is greater than average total cost, at P1 in the chart above), in the “long run,” with free entry and exit, all those extra-normal profits are competed away (since price is driven down to P2, equal to minimum average total cost). That’s why the long run is such an important concept in neoclassical economic theory. The idea is that, starting with perfect competition, neoclassical economists always end up with. . .perfect competition.*
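A compact way to state the same result, in standard textbook notation (q* is the firm's profit-maximizing output and ATC its average total cost; this is a generic restatement, not taken from any particular source):

$$\pi_{\text{short run}} = \bigl(P_1 - ATC(q^*)\bigr)\,q^* > 0, \qquad \pi_{\text{long run}} = \bigl(P_2 - ATC(q^*)\bigr)\,q^* = 0 \ \text{ since } \ P_2 = \min ATC .$$

Free entry is what drives the price down from P1 to P2; free exit rules out persistent losses in the other direction.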

Except, of course, in the real world, where exactly the opposite has been occurring for the past few decades. Thus, as the authors of the new report from the United Nations Conference on Trade and Development have explained, there is a growing concern that

increasing market concentration in leading sectors of the global economy and the growing market and lobbying powers of dominant corporations are creating a new form of global rentier capitalism to the detriment of balanced and inclusive growth for the many.

And they’re not just talking about financial rentier incomes, which have been the focus of attention since the global meltdown provoked by Wall Street nine years ago. Their argument is that a defining feature of “hyperglobalization” is the proliferation of rent-seeking strategies, from technological innovations to mergers and acquisitions, within the non-financial corporate sector. The result is the growth of corporate rents or “surplus profits.”**

[Figure 6.1: share of surplus profits in total profits, all firms and top 100 firms]

As Figure 6.1 shows, the share of surplus profits in total profits grew significantly for all firms both before and after the global financial crisis—from 4 percent during the 1995-2000 period to 19 percent in 2001-2008 and even higher, to 23 percent, in 2009-2015. The top 100 firms (ranked by market capitalization) also saw the share of surplus profits in their total profits grow, from 16 percent to 30 percent and then, most recently, to 40 percent.***

The analysis suggests both that surplus profits for all firms have grown over time and that there is an ongoing process of bipolarization, with a widening gap between a few high-performing firms and a growing number of low-performing firms.

[Figure 6.2: market concentration, in terms of the market capitalization of the top 100 nonfinancial firms, 1995-2015]

That conclusion is confirmed by their analysis of market concentration, which is presented in Figure 6.2 in terms of the market capitalization of the top 100 nonfinancial firms between 1995 and 2015. The red line shows the actual share of the top 100 firms relative to their hypothetical equal share, assuming that total market capitalization was distributed equally over all firms. The blue line shows the observed share of the top 100 firms relative to the observed share of the bottom 2,000 firms in the sample.

Both measures indicate that the market power of the top companies increased substantially over the 1995-2015 period. For example, in 1995 the combined share of market capitalization of the top 100 firms was 23 times higher than the share these firms would have held had market capitalization been distributed equally across all firms. By 2015, this gap had increased nearly fourfold, to 84 times. This overall upward surge in concentration, measured by market capitalization since 1995, was briefly interrupted in 2002-03, after the bursting of the dotcom bubble, and in 2009-10, in the aftermath of the global financial crisis, and it stabilized at high levels thereafter.****
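For readers who want to see exactly what the two lines in Figure 6.2 measure, here is a minimal sketch of the two ratios as I read the report’s verbal description; the function name and the use of plain Python lists are my own choices, and UNCTAD’s firm-level data and actual code are of course not reproduced here.

```python
# Sketch of the two concentration measures described above (my reading of the
# report's description, not UNCTAD's own code).

def concentration_measures(market_caps, top_n=100, bottom_n=2000):
    """market_caps: list of firm market capitalizations for a single year."""
    caps = sorted(market_caps, reverse=True)
    total = sum(caps)
    n_firms = len(caps)

    observed_top_share = sum(caps[:top_n]) / total
    # Hypothetical share if market capitalization were spread equally over all firms
    equal_share = top_n / n_firms
    top_vs_equal = observed_top_share / equal_share              # the "red line" ratio

    observed_bottom_share = sum(caps[-bottom_n:]) / total
    top_vs_bottom = observed_top_share / observed_bottom_share   # the "blue line" ratio

    return top_vs_equal, top_vs_bottom
```

On this reading, the 23 and 84 quoted above are values of the first ratio in 1995 and 2015, respectively.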

So, what is causing this growth in market concentration? One reason is the nature of the underlying technologies, which involve costs of production that do not rise proportionally to the quantities produced. Instead, after initial high sunk costs (e.g., in the form of expenditures on research and development), the variable costs of producing additional units of output are negligible.***** And then, of course, growing firms can use intellectual property rights and lobbying powers to protect themselves against actual or potential competitors.
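The cost structure being described can be written down directly (a generic illustration in my own notation, not the report’s): with a large fixed, sunk outlay F and a small constant cost c for each additional unit,

$$AC(q) = \frac{F}{q} + c, \qquad MC(q) = c, \qquad \frac{d\,AC}{dq} = -\frac{F}{q^{2}} < 0,$$

so average cost falls continuously as output expands and marginal cost always lies below it. Scale itself becomes a competitive weapon, which is the opposite of the U-shaped cost curves assumed under perfect competition (see the note on Sraffa below).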

[Figure 6.5]

Giant firms can also use their super-profits to merge with and acquire other firms, a process that has accelerated as both a consequence and a cause of the weakening of antitrust legislation and enforcement.

What we’re seeing, then, is a “vicious cycle of underregulation and regulatory capture, on the one hand, and further rampant growth of corporate market power on the other.”

The models of mainstream economics turn out to be a shield, hiding and protecting this strengthening of corporate rule.

What the rest of us, including the folks at UNCTAD, have been witnessing in the real world is the emergence and consolidation of global rentier capitalism.

 

*There’s another reason why the long run is so important for neoclassical economists. All incomes are presumed to be returns to “factors of production” (e.g., land, labor, and capital), equal to their “marginal products.” But short-run super-profits are a theoretical embarrassment. They represent a return not to any factor of production but to something else: serendipity or Fortuna. Oops! That’s another reason it’s important, within a neoclassical world, for short-run super-profits to be competed away in the long run—to eliminate the existence of returns to the decidedly non-productive factor of luck.
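The underlying result is the familiar income-exhaustion (Euler’s theorem) condition of the neoclassical model: for a constant-returns production function Y = F(K, L), with each factor paid its marginal product,

$$Y = \frac{\partial F}{\partial K}\,K + \frac{\partial F}{\partial L}\,L$$

(land folded into K for brevity). Factor payments use up the entire product, leaving no residual for the theory to attribute to anything else, which is why short-run super-profits have to be competed away.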

**UNCTAD defines surplus profits as the difference between the total of actually observed profits of all firms in the sample in a given year and the estimate of their total typical profits. Thus, they end up with a lower estimate of surplus or super-profits than if they’d used a strictly neoclassical definition, which would compare actual profits to a zero-rent (or long-run equilibrium) benchmark.
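As a rough illustration of why the choice of benchmark matters, here is a minimal sketch; the function name, the toy numbers, and the way firms below the benchmark are handled (only positive excesses are counted) are my simplifications for exposition, not UNCTAD’s actual procedure.

```python
# Stylized calculation of a surplus-profit share under two different benchmarks.
# A "typical profits" benchmark (UNCTAD's approach) is higher than a zero-rent,
# long-run-equilibrium benchmark, so it yields a smaller surplus share.

def surplus_profit_share(observed_profits, benchmark_profits):
    surplus = sum(max(o - b, 0.0) for o, b in zip(observed_profits, benchmark_profits))
    return surplus / sum(observed_profits)

observed  = [120.0, 80.0, 60.0]   # toy observed profits for three firms
typical   = [100.0, 75.0, 55.0]   # "typical" profits benchmark
zero_rent = [60.0, 50.0, 40.0]    # zero-rent (normal-return-only) benchmark

print(surplus_profit_share(observed, typical))    # ~0.12
print(surplus_profit_share(observed, zero_rent))  # ~0.42
```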

***The authors note that

these results need to be interpreted with caution. More important than the absolute size of surplus profits for firms in the database in any given sub-period, is their increase over time, in particular the surplus profits of the top 100 firms.

****The authors of the study focus particular attention on the so-called high-tech sector, in which they show “a growing predominance of ‘winner takes most’ superstar firms.”

*****Thus, as Piero Sraffa argued long ago, the standard neoclassical model of perfect competition, with U-shaped marginal and average cost curves (i.e., “diminishing returns”), is called into question by increasing returns, with declining marginal and average cost curves.

 

[Figure: crisis timeline]

“The crisis takes a much longer time coming than you think, and then it happens much faster than you would have thought.” (Rudi Dornbusch)

Last week, a wide variety of U.S. media (including the Wall Street Journal and USA Today) marked what they considered to be the ten-year anniversary of the beginning of the global economic crisis—from which we still haven’t recovered.

The event in question, which occurred on 9 August 2007, was the announcement by the international banking group BNP Paribas that, because its fund managers could not calculate a reliable net asset value for three of its investment funds, it was suspending redemptions.

But, as I explain to my students, “Beware the appearance of precision!” For example, the more numbers after the decimal point (2.9, 2.93, 2.926, etc.), the more real and precise the number appears to be. But such a number is only ever an estimate, a best guess, about what is going on (whether it be the growth of output or the increase in new home sales).

The same holds for dates. It would be odd to choose a particular day ten years ago that, among all the possible causes and precipitating events, put the U.S. and world economies on the road to the Second Great Depression. That would be like saying World War I was caused by the events of 28 June 1914, when the Yugoslav nationalist Gavrilo Princip assassinated Archduke Franz Ferdinand of Austria. Or that the first Great Depression began on Black Thursday, 24 October 1929.

[Figure: new one-family house sales, new housing starts, and the Case-Shiller home price index]

Given the centrality of housing sales, mortgages, and mortgage-backed securities in creating the fragility of the financial sector, we could just as easily choose July 2005 (when, as shown by the green line in the chart above, sales of new one-family houses peaked), January 2006 (when, as shown by the blue line, starts of new privately owned housing units peaked), or February 2007 (when the Case-Shiller home price index, the red line, began its slide).

[Figure: U.S. corporate profit share of national income]

Or, alternatively, we could choose the third quarter of 2006, when the U.S. corporate profit share (before taxes and without adjustments) reached its peak, at almost 12 percent of national income. After that, it began to fall, and the decisions of capitalists dragged the entire economy to the brink of disaster.

[Figure: profits of the financial and insurance sector]

Or the year 2005, when the profits of the financial and insurance sector were at their highest level—at $158.3 billion—and then began to decline. Then, of course, the sector was bailed out after its profits fell into negative territory in 2008.

[Figure: share of income going to the top 1 percent]

Or, given the centrality of inequality in creating the conditions for the crash, we can go all the way back to 1980, when the share of income going to the top 1 percent was “only” 10.7 percent—since after that it started to rise, reaching an astounding 20.6 percent in 2006.

Those are all possible dates, some of course more precise than others.

What is important is that each one of those indicators gives us a sense of how the normal workings of capitalism—in housing, finance and insurance, corporate profits, and the distribution of income—created, together and over time, the conditions for the most severe set of crises since the first Great Depression. And now, as a result of the crash and the nature of the recovery, all of those conditions have been restored.

Thus creating the conditions for the next crash to occur, ten years after the last one.

[Figure: real median household income and the income share of the top 1 percent]

Narayana Kocherlakota, professor of economics at the University of Rochester and past president of the Federal Reserve Bank of Minneapolis, is right: in some ways, the crisis of 2007-08 was worse than the Great Depression.

It certainly has been worse for average households in the United States. Real median household income (the red line in the chart above) is still below what it was in 2007—and even further below what it was in 1999.

But it hasn’t been a lost decade for the members of the top 1 percent. Their share of income (the blue line in the chart above), which was already an obscene 19 percent in 2007, is today even higher, at 19.6 percent.*

Still, Kocherlakota’s warning is appropriate:

financial crises and the responses to them can have highly persistent adverse effects on economic potential. The risk of such large costs means that policy makers must have better safeguards in place, and be willing to respond vigorously through monetary and fiscal stimulus when crises nonetheless happen.

So what’s happening along these lines? The Trump administration’s nominee to be vice chair of supervision and regulation at the Federal Reserve wants to make the big banks’ stress tests less stringent — and that’ll make a financial crisis more, not less, likely. . .

In short, I see little evidence that policymakers have learned the lessons of the last decade. I hope that situation will change before another crisis occurs.

 

*Both of the data series represented in the chart above end in 2014. That’s because the series on the share of income captured by the top 1 percent ends in that year. Median household income in 2015, the last year for which those data are available, was still below what it was in 2007.

We don’t have comparable data series for the first Great Depression.

[Figure: income shares of the bottom 90 percent and the top 1 percent, 1928-1948]

What we do know is that the share of income of the bottom 90 percent, which was 52.4 percent in 1928, hovered at roughly the same level throughout the 1930s, ending the decade at 52.5 percent, and then rose dramatically beginning in 1940. Meanwhile, the share of income captured by the top 1 percent, which stood at 21.2 percent in 1928, never managed to rise to that level during the 1930s and, by 1948, it had fallen to 15.6 percent.

Culture, it seems, is back on the agenda in economics. Thomas Piketty, in Capital in the Twenty-First Century, famously invoked the novels of Honoré de Balzac and Jane Austen because they dramatized the immobility of a nineteenth-century world where inequality guaranteed more inequality (which, of course, is where we’re heading again). Robert J. Shiller, past president of the American Economic Association, focused on “Narrative Economics” in his address at the January 2017 Allied Social Science Associations meetings in Chicago. His basic argument was that popular narratives, “the stories and models people are talking about,” play an important role in economic fluctuations. And just the other day, Gary Saul Morson and Morton Schapiro—professor of the arts and humanities and professor of economics and president of Northwestern University, respectively—argued that economists would benefit greatly if they broadened their focus and practiced “humanonomics.”

Dealing as it does with human beings, economics has much to learn from the humanities. Not only could its models be more realistic and its predictions more accurate, but economic policies could be more effective and more just.

Whether one considers how to foster economic growth in diverse cultures, the moral questions raised when universities pursue self-interest at the expense of their students, or deeply personal issues concerning health care, marriage, and families, economic insights are necessary but insufficient. If those insights are all we consider, policies flounder and people suffer.

In their passion for mathematically-based explanations, economists have a hard time in at least three areas: accounting for culture, using narrative explanation, and addressing ethical issues that cannot be reduced to economic categories alone.

As regular readers of this blog know, I’m all in favor of opening up economics to the humanities and the various artifacts of culture, from popular music to novels. In fact, I’ve been involved in various projects along these lines, including the New Economic Criticism, the postmodern moments of modern economics, and economic representations in both academic and everyday arenas.

And, to their credit, the authors I cite above do attempt to go beyond most of their mainstream colleagues in economics, who treat culture either as a commodity like any other (and therefore subject to the same kind of supply-and-demand analysis) or as a residual term (e.g., to explain different levels of economic development, when all the usual explanations—based on preferences, technology, and endowments—have failed).

But in their attempt to invoke culture—as illustrative of economic ideas, a factor in determining economic events, or as a way of humanizing economic discourse—they forget one of the key lessons of Raymond Williams: that culture both registers the clashes of interest in society (culture represents, therefore, not just objects but the struggles over meaning within society) and stamps its mark on those interests and clashes (and in this sense is “performative,” since it modifies and changes those meanings).

In fact, that’s the approach I took in my 2014 talk on “Culture Beyond Capitalism” in the opening session of the 18th International Conference on Cultural Economics, sponsored by the Association for Cultural Economics International, at the University of Quebec in Montréal.

As I explained,

The basic idea is that culture offers to us a series of images and stories—audio and visual, printed and painted—that point the way toward alternative ways of thinking about and organizing economic and social life. That give us a glimpse of how things might be different from what they are. Much more so than mainstream academic economics has been interested in or able to do, even after the spectacular debacle of the most recent economic crisis, and even now in the midst of what I have come to call the Second Great Depression.

I then went on to discuss a series of cultural artifacts—in music, film, short stories, art, and so on—which give us the sense of how things might be different, of how alternative economic theories and institutions might be imagined and created.

Importantly, economic representations in culture are much wider than the realist fiction to which some mainstream economists have turned. One of the best examples, based on the work of Mark Osteen, concerns the relationship between noncapitalist gift economies and jazz improvisation.* According to Osteen, both jazz and gifts involve their participants in risk; both require elasticity; both are social rituals in which the parties express and recreate identities; both are temporally contingent and dynamic. Each of them invokes reciprocal relations, yet transcends mere balance: each, that is, partakes of excess and surplus. Osteen suggests that jazz—such as John Coltrane’s “Chasin’ the Trane”—may serve as both an example of gift practices and a model for another economy, based on an ethos of improvisation, communalism, and excess.

I wonder if economists such as Piketty, Shiller, Morson, and Schapiro, who suggest we include culture in our economic theorizing, are willing to identify and examine aspects of historical and contemporary culture that point us beyond capitalism.

 

*Mark Osteen, “Jazzing the Gift: Improvisation, Reciprocity, Excess,” Rethinking Marxism 22 (October 2010): 569-80.

[Image: Mark Tansey, “Discarding the Frame” (1993)]

Obviously, recent events—such as Brexit, Donald Trump’s presidency, and the rise of Bernie Sanders and Jeremy Corbyn—have surprised many experts and shaken up the existing common sense. Some have therefore begun to make the case that an era has come to an end.

The problem, of course, is that while the old may be dying, it’s not at all clear the new can be born. And, as Antonio Gramsci warned during the previous world-shaking crisis, “in this interregnum morbid phenomena of the most varied kind come to pass.”

For Pankaj Mishra, it is the era of neoliberalism that has come to an end.

In this new reality, the rhetoric of the conservative right echoes that of the socialistic left as it tries to acknowledge the politically explosive problem of inequality. The leaders of Britain and the United States, two countries that practically invented global capitalism, flirt with rejecting the free-trade zones (the European Union, Nafta) they helped build.

Mishra is correct in tracing British neoliberalism—at least, I hasten to add, its most recent phase—through both the Conservative and Labour Parties, from Margaret Thatcher to Tony Blair and David Cameron.* All of them, albeit in different ways, celebrated and defended individual initiative, self-regulating markets, cheap credit, privatized social services, and greater international trade—bolstered by military adventurism abroad. Similarly, in the United States, Reaganism extended through both Bush administrations as well as the presidencies of Bill Clinton and Barack Obama—and would have been continued by Hillary Clinton—with analogous promises of prosperity based on unleashing competitive market forces, together with military interventions in other countries.

Without a doubt, the combination of capitalist instability—the worst crisis of capitalism since the first Great Depression—and obscene levels of inequality—parallel to the years leading up to the crash of 1929—not to mention the interminable military conflicts that have deflected funding at home and created waves of refugees from war-torn zones, has called into question the legacy and presumptions of Thatcherism and Reaganism.

Where I think Mishra goes wrong is in arguing that “A new economic consensus is quickly replacing the neoliberal one to which Blair and Clinton, as well as Thatcher and Reagan, subscribed.” Yes, in both the United Kingdom and the United States—in the campaign rhetoric of Theresa May and Trump, and in the actual policy proposals of Corbyn and Sanders—neoliberalism has been challenged. But precisely because the existing framing of the questions has not changed, a new economic consensus—an alternative common sense—cannot be born.

To put it differently, the neoliberal frame has been discarded but the ongoing debate remains framed by the terms that gave rise to neoliberalism in the first place. What I mean by that is, while recent criticisms of neoliberalism have emphasized the myriad problems created by individualism and free markets, the current discussion forgets about or overlooks the even-deeper problems based on and associated with capitalism itself. So, once again, we’re caught in the pendulum swing between a more private, market-oriented form of capitalism and a more public, government-regulated form of capitalism. The former has failed—that era does seem to be crumbling—and so now we begin to turn (as we did during the last system-wide economic crisis) to the latter.**

However, the issue that keeps getting swept under the political rug is, how do we deal with the surplus? If the surplus is left largely in private hands, and the vast majority who produce it have no say in how it’s appropriated and distributed, it should come as no surprise that we continue to see a whole host of “morbid phenomena”—from toxic urban water and a burning tower block to a new wave of corporate concentration and still-escalating inequality.

Questioning some dimensions of neoliberalism does not, in and of itself, constitute a new economic consensus. I’m willing to admit it is a start. But, as long as we remain within the present framing of the issues, as long as we cannot show how unreasonable the existing reason is, we cannot say the existing era has actually come to an end and a new era is upon us.

For that we need a new common sense, one that identifies capitalism itself as the problem and imagines and enacts a different relationship to the surplus.

 

*I add that caveat because, as I argued a year ago,

Neoliberal ideas about self-governing individuals and a self-organizing economic system have been articulated since the beginning of capitalism. . .capitalism has been governed by many different (incomplete and contested) projects over the past three centuries or so. Sometimes, it has been more private and oriented around free markets (as it has been with neoliberalism); at other times, more public or state oriented and focused on regulated markets (as it was under the Depression-era New Deals and during the immediate postwar period).

**And even then it’s only a beginning—since, we need to remember, both Sanders and Corbyn did lose in their respective electoral contests. And, at least in the United States, the terms of neoliberalism are still being invoked—for example, by Ron Johnson, Republican senator from Wisconsin—in the current healthcare debate.


Most of us are pretty cautious when it comes to spending our money. The amount of money we have is pretty small—and the global economic, financial, and political landscape is pretty shaky right now.

And even if we’re not cautious, if we’re not prudent savers, then no harm done. Spending everything we have may be a personal risk but it doesn’t do any social harm.

It’s different, however, for the global rich. The individual decisions they make do, in fact, have social ramifications. That’s why, back in 2011, I suggested we switch our focus from the “culture of poverty” to the pathologies of the rich.

Consider, for example, the BBC [ht: ja] report on the findings of UBS Wealth Management’s survey of more than 2,800 millionaires in seven countries.

Some 82 percent of those surveyed said this is the most unpredictable period in history. More than a quarter are reviewing their investments and almost half said they intend to but haven’t yet done so.

But more than three quarters (77 pct) believe they can “accurately assess financial risk arising from uncertain events”, while 51 percent expect their finances to improve over the coming year compared with 13 percent who expect them to deteriorate.

More than half (57 pct) are optimistic about achieving their long-term goals, compared with 11 percent who are pessimistic. And an overwhelming 86 percent trust their own instincts when making important decisions.

“Most millionaires seem to be confident they can steer their way through the turbulence without so much as a dent in their finances,” UBS WM said.

Most of us can’t afford that kind of arrogance in the face of risk. But the world’s millionaires can. Just as they did during the lead-up to the crashes of 1929 and of 2008.

They trusted their instincts—and everyone else paid the consequences.


I am quite willing to admit that, based on last Friday’s job report, the Second Great Depression is now over.

As regular readers know, I have been using the analogy to the Great Depression of the 1930s to characterize the situation in the United States since late 2007. Then as now, it was not a recession but, instead, a depression.

As I explain to my students in A Tale of Two Depressions, the National Bureau of Economic Research doesn’t have any official criteria for distinguishing an economic depression from a recession. What I offer them as an alternative are two criteria: (a) being down (as against going down) and (b) the normal rules being suspended (as, e.g., in the case of the “zero lower bound” and the election of Donald Trump).

By those criteria, the United States experienced a second Great Depression starting in December 2007 and continuing through April 2017. That’s almost a decade of being down and suspending the normal rules!

Now, with the official unemployment rate having fallen to 4.4 percent, equal to the low it had reached in May 2007, we can safely say the Second Great Depression has come to an end.

However, that doesn’t mean we’re out of the woods, or that we can forget about the effects of the most recent depression on American workers.*

[Figure: U.S. real GDP per capita versus the 2000-2007 trend]

For example, while Gross Domestic Product per capita in the United States is higher now than it was at the end of 2007 ($51,860 versus $49,586, in chained 2009 dollars, or 4.6 percent), it is still much lower than it would have been had the previous trend continued (which can be seen in the chart above, where I extend the 2000-2007 trend line forward to 2017). All that lost output—not to mention the accompanying jobs, homes, communities, and so on—represents one of the lingering effects of the Second Great Depression.
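For readers who want to reproduce the counterfactual line, one simple way to construct it (my own sketch, assuming a linear trend; the actual FRED series on real GDP per capita is not reproduced here) is to fit ordinary least squares on the 2000-2007 observations and extend the fitted line forward:

```python
# Sketch of extending a 2000-2007 trend line to 2017.
# `gdp_per_capita` is assumed to be a dict {year: real GDP per capita in chained
# 2009 dollars}, built from the FRED series used in the chart.

def linear_trend(years, values):
    """Ordinary least-squares slope and intercept of values regressed on years."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
             / sum((x - mean_x) ** 2 for x in years))
    return slope, mean_y - slope * mean_x

def extend_trend(gdp_per_capita, fit_years=range(2000, 2008), through=2017):
    """Fit the trend on fit_years only, then project it out to `through`."""
    slope, intercept = linear_trend(list(fit_years),
                                    [gdp_per_capita[y] for y in fit_years])
    return {year: slope * year + intercept
            for year in range(min(fit_years), through + 1)}
```

The gap between the projected values and the observed series is the “lost output” referred to above.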

[Figure: underemployment rates of young high-school and college graduates]

And we can’t forget that young workers face elevated rates of underemployment—11.9 percent for young college graduates and much higher, 30.9 percent, for young high-school graduates. As the Economic Policy Institute observes,

This suggests that young graduates face less desirable employment options than they used to in response to the recent labor market weakness for young workers.

[Figure: income and wealth inequality]

Finally, the previous trend of growing inequality—in terms of both income and wealth—has continued during the Second Great Depression. And there are no indications from the economy or economic policy that suggest that trend will be reversed anytime soon.

So, here we are at the end of the Second Great Depression—no longer down and with the normal rules back in place—and yet the effects from the longest and most severe downturn since the 1930s will be felt for generations to come.

 

*As is often the case, readers’ comments on newspaper articles tell a different story from the articles themselves. Here are two, on the New York Times article about the latest employment data:

John Schmidt—

Any discussion about “full employment”, when there are so many people who’ve essentially given up looking for work or who’re working in low-skill or unskilled labor positions, seems like the fiscal equivalent of rearranging deck chairs on the Titanic. Based on data from the Fed and the World Bank, GDP per capita has doubled since 1993, while median household income has risen ~10%. Most of the newly-generated wealth and gains from productivity increases are being funneled upward, such that the average worker very rarely sees any sort of pay increase. Are we expected to believe that this will change now that we’ve [arguably] passed some arbitrary threshold? Why should we pat ourselves on the back for reaching “full employment”? Shouldn’t we be seeking *fulfilling* employment for everyone, instead, at least inasmuch as that’s possible? Shouldn’t we care that the relentless drive for profit at the expense of everything else is creating a toxic environment where the only way to ensure a raise is to hop from job to job, eroding any sense of two-way loyalty between companies and their employees?

I’m not sure what the solution is, but I know enough to see there’s a problem. Inequality of this sort is not sustainable, and it’s not going to magically disappear without some serious policy changes.

David Dennis—

There is a critical parameter missing from full employment data. very critical. Here in Pontiac, Michigan before the collapse of American manufacturing, full employment meant 10, 000 jobs working at GM factories and Pontiac Motors making above the mean wages with excellent health insurance as well as retirement pensions. You can not compare full employment at McDonalds and Walmart with the jobs that preceded them. The full employment measure doesn’t mean much if it isn’t correlated with a index that compares that employment with a standard of living as it relates to a set basket of goods, services, and benefits.