Posts Tagged ‘Second Great Depression’

[chart: wage share and wage growth]

We’ve been hearing this since the recovery from the Second Great Depression began: it’s going to be a Golden Age for workers!

The idea is that the decades of wage stagnation are finally over: the United States is entering a new period of labor shortage, in which workers will be able to recoup what they’ve lost.

The latest to try to tell this story is Eduardo Porter:

the wage picture is looking decidedly brighter. In 2008, in the midst of the recession, the average hourly pay of production and nonsupervisory workers tracked by the Bureau of Labor Statistics — those who toil at a cash register or on a shop floor — was 10 percent below its 1973 peak after accounting for inflation. Since then, wages have regained virtually all of that ground. Median wages for all full-time workers are rising at a pace last achieved in the dot-com boom at the end of the Clinton administration.

And with employers adding more than two million jobs a year, some economists suspect that American workers — after being pummeled by a furious mix of globalization and automation, strangled by monetary policy that has restrained economic activity in the name of low inflation, and slapped around by government hostility toward unions and labor regulations — may finally be in for a break.

The problem is that wages are still growing at a historically slow pace (the green line in the chart above), which means the wage share (the blue line in the chart) is still very low. The only sign that things might be getting better for workers is that the current wage share is slightly above the low recorded in 2013—but, at 43 percent, it remains far below its high of 51.5 percent in 1970.

That’s an awful lot of ground to make up.

[chart: labor productivity and wage share]

The situation for American workers is even worse when we compare labor productivity and the wage share. Since 1970, labor productivity (real output per hour worked in the nonfarm business sector, the red line in the chart above) has more than doubled, while the wage share (the blue line) has fallen precipitously.

We’re a long way from any kind of Golden Age for workers.

But, in the end, that’s not what Porter is particularly interested in. He’s more concerned about what he considers to be a labor shortage caused by a shrinking labor force.

So, what does Porter recommend to, in his words, “protect economic growth and to give American workers a shot at a new golden age of employment”? More immigration, more international trade, cuts in disability insurance, and limiting increases in the minimum wage.

Someone’s going to have to explain to me how that set of policies is going to reverse the declines of recent decades and usher in a Golden Age for American workers.

[image: crisis timeline]

The crisis takes a much longer time coming than you think, and then it happens much faster than you would have thought. — Rudi Dornbusch

Last week, a wide variety of U.S. media (including the Wall Street Journal and USA Today) marked what they considered to be the ten-year anniversary of the beginning of the global economic crisis—from which we still haven’t recovered.

The event in question, which occurred on 9 August 2007, was the announcement by international banking group BNP Paribas that, because their fund managers could not calculate a reliable net asset value of three mutual funds, they were suspending redemptions.

But, as I explain to my students, “Beware the appearance of precision!” For example, the more numbers after the decimal point (2.9, 2.93, 2.926, etc.), the more real and precise the number appears to be. But such a number is only ever an estimate, a best guess, about what is going on (whether it be the growth of output or the increase in new home sales).
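The point about false precision can be sketched in a few lines of code. The numbers below are hypothetical, purely for illustration: a growth rate reported to three decimal places is still a point estimate sitting inside an error band, and the extra decimals do nothing to shrink that band.

```python
# Illustration of false precision: a "growth rate" reported to three
# decimal places is still just a point estimate inside an error band.
# All numbers here are hypothetical, for illustration only.

estimate = 2.926          # reported growth rate, percent
margin_of_error = 0.4     # plausible revision/sampling error, percent

low, high = estimate - margin_of_error, estimate + margin_of_error
print(f"Reported: {estimate:.3f}%")
print(f"Plausible range: {low:.1f}% to {high:.1f}%")
```

The three-decimal figure and the rounded range describe exactly the same state of knowledge; only the first one looks more certain than it is.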

The same holds for dates. It would be odd to choose a particular day ten years ago that, among all the possible causes and precipitating events, put the U.S. and world economies on the road to the Second Great Depression. That would be like saying World War I was caused by the events of 28 June 1914, when Yugoslav nationalist Gavrilo Princip assassinated Archduke Franz Ferdinand of Austria. Or that the first Great Depression began on Black Thursday, 24 October 1929.

[chart: new house sales, housing starts, and Case-Shiller home price index (FRED)]

Given the centrality of housing sales, mortgages, and mortgage-backed securities in creating the fragility of the financial sector, we could just as easily choose July 2005 (when, as in the green line in the chart above, new one-family house sales peaked), January 2006 (when, as in the blue line, new privately owned housing starts peaked), or February 2007 (when the Case-Shiller home price index, the red line, started its slide).

[chart: U.S. corporate profit share (FRED)]

Or, alternatively, we could choose the third quarter of 2006, when the U.S. corporate profit share (before taxes and without adjustments) reached its peak, at almost 12 percent of national income. After that, it began to fall, and the decisions of capitalists dragged the entire economy to the brink of disaster.

[chart: financial and insurance sector profits (FRED)]

Or the year 2005, when the profits of the financial and insurance sector were at their highest level—at $158.3 billion—and then began to decline. Then, of course, the sector was bailed out after its profits fell into negative territory in 2008.

[chart: top 1 percent income share]

Or, given the centrality of inequality in creating the conditions for the crash, we can go all the way back to 1980, when the share of income going to the top 1 percent was “only” 10.7 percent—since after that it started to rise, reaching an astounding 20.6 percent in 2006.

Those are all possible dates, some of course more precise than others.

What is important is that each one of those indicators gives us a sense of how the normal workings of capitalism—in housing, finance and insurance, corporate profits, and the distribution of income—created, together and over time, the conditions for the most severe set of crises since the first Great Depression. And now, as a result of the crash and the nature of the recovery, all of those conditions have been restored.

Thus creating the conditions for the next crash to occur, ten years after the last one.

[chart: real median household income and top 1 percent income share]

Narayana Kocherlakota, professor of economics at the University of Rochester and past president of the Federal Reserve Bank of Minneapolis, is right: in some ways, the 2007-08 crisis was worse than the Great Depression.

It certainly has been worse for average households in the United States. Real median household income (the red line in the chart above) is still below what it was in 2007—and lower still than what it was even earlier, in 1999.

But it hasn’t been a lost decade for the members of the top 1 percent. Their share of income (the blue line in the chart above), which was already an obscene 19 percent in 2007, is today even higher, at 19.6 percent.*

Still, Kocherlakota’s warning is appropriate:

financial crises and the responses to them can have highly persistent adverse effects on economic potential. The risk of such large costs means that policy makers must have better safeguards in place, and be willing to respond vigorously through monetary and fiscal stimulus when crises nonetheless happen.

So what’s happening along these lines? The Trump administration’s nominee to be vice chair of supervision and regulation at the Federal Reserve wants to make the big banks’ stress tests less stringent — and that’ll make a financial crisis more, not less, likely. . .

In short, I see little evidence that policymakers have learned the lessons of the last decade. I hope that situation will change before another crisis occurs.

 

*Both of the data series represented in the chart above end in 2014. That’s because the series on the share of income captured by the top 1 percent ends in that year. Median household income in 2015, the last year for which those data are available, was still below what it was in 2007.

We don’t have comparable data series for the first Great Depression.

[chart: income shares during the first Great Depression]

What we do know is that the share of income of the bottom 90 percent, which was 52.4 percent in 1928, hovered at roughly the same level throughout the 1930s, ending the decade at 52.5 percent, and then rose dramatically beginning in 1940. Meanwhile, the share of income captured by the top 1 percent, which stood at 21.2 percent in 1928, never managed to rise to that level during the 1930s and, by 1948, it had fallen to 15.6 percent.

Culture, it seems, is back on the agenda in economics. Thomas Piketty, in Capital in the Twenty-First Century, famously invoked the novels of Honoré de Balzac and Jane Austen because they dramatized the immobility of a nineteenth-century world where inequality guaranteed more inequality (which, of course, is where we’re heading again). Robert J. Shiller, past president of the American Economic Association, focused on “Narrative Economics” in his address at the January 2017 Allied Social Science Associations meetings in Chicago. His basic argument was that popular narratives, “the stories and models people are talking about,” play an important role in economic fluctuations. And just the other day, Gary Saul Morson and Morton Schapiro—professor of the arts and humanities and professor of economics and president of Northwestern University, respectively—argued that economists would benefit greatly if they broadened their focus and practiced “humanomics.”

Dealing as it does with human beings, economics has much to learn from the humanities. Not only could its models be more realistic and its predictions more accurate, but economic policies could be more effective and more just.

Whether one considers how to foster economic growth in diverse cultures, the moral questions raised when universities pursue self-interest at the expense of their students, or deeply personal issues concerning health care, marriage, and families, economic insights are necessary but insufficient. If those insights are all we consider, policies flounder and people suffer.

In their passion for mathematically-based explanations, economists have a hard time in at least three areas: accounting for culture, using narrative explanation, and addressing ethical issues that cannot be reduced to economic categories alone.

As regular readers of this blog know, I’m all in favor of opening up economics to the humanities and the various artifacts of culture, from popular music to novels. In fact, I’ve been involved in various projects along these lines, including the New Economic Criticism, the postmodern moments of modern economics, and economic representations in both academic and everyday arenas.

And, to their credit, the authors I cite above do attempt to go beyond most of their mainstream colleagues in economics, who treat culture either as a commodity like any other (and therefore subject to the same kind of supply-and-demand analysis) or as a remainder term (e.g., to explain different levels of economic development, when all the usual explanations—based on preferences, technology, and endowments—have failed).

But in their attempt to invoke culture—as illustrative of economic ideas, a factor in determining economic events, or as a way of humanizing economic discourse—they forget one of the key lessons of Raymond Williams: that culture both registers the clashes of interest in society (culture represents, therefore, not just objects but the struggles over meaning within society) and stamps its mark on those interests and clashes (and in this sense is “performative,” since it modifies and changes those meanings).

In fact, that’s the approach I took in my 2014 talk on “Culture Beyond Capitalism” in the opening session of the 18th International Conference on Cultural Economics, sponsored by the Association for Cultural Economics International, at the University of Quebec in Montréal.

As I explained,

The basic idea is that culture offers to us a series of images and stories—audio and visual, printed and painted—that point the way toward alternative ways of thinking about and organizing economic and social life. That give us a glimpse of how things might be different from what they are. Much more so than mainstream academic economics has been interested in or able to do, even after the spectacular debacle of the most recent economic crisis, and even now in the midst of what I have come to call the Second Great Depression.

I then went on to discuss a series of cultural artifacts—in music, film, short stories, art, and so on—which give us the sense of how things might be different, of how alternative economic theories and institutions might be imagined and created.

Importantly, economic representations in culture are much wider than the realist fiction to which some mainstream economists have turned. One of the best examples, based on the work of Mark Osteen, concerns the relationship between noncapitalist gift economies and jazz improvisation.* According to Osteen, both jazz and gifts involve their participants in risk; both require elasticity; both are social rituals in which the parties express and recreate identities; both are temporally contingent and dynamic. Each of them invokes reciprocal relations, yet transcends mere balance: each, that is, partakes of excess and surplus. Osteen suggests that jazz—such as John Coltrane’s “Chasin’ the Trane”—may serve as both an example of gift practices and a model for another economy, based on an ethos of improvisation, communalism, and excess.

I wonder if economists such as Piketty, Shiller, Morson, and Schapiro, who suggest we include culture in our economic theorizing, are willing to identify and examine aspects of historical and contemporary culture that point us beyond capitalism.

 

*Mark Osteen, “Jazzing the Gift: Improvisation, Reciprocity, Excess,” Rethinking Marxism 22 (October 2010): 569-80.

[chart: poor population by community type]

Back in 2010, I warned about the widening and deepening of capitalist poverty in the United States.

The fact is (pdf), more poor people now live in the suburbs than in America’s big cities or rural areas. Suburbia is home to almost 16.4 million poor people, compared to 13.4 million in big cities and 7.3 million in rural areas.

[chart: Lake County, IL income and poverty]

Lake County, IL, one of the wealthiest counties in the United States, is a case in point. Median household income in 2015 was $82,106, 45 percent higher than the national average.

At the same time, 9.6 percent of the Lake County population lived below the poverty line—more than 20 thousand of them children under the age of 17—and about 60 thousand people were forced to rely on food stamp benefits.

As Scott Allard explains,

Set beside Lake Michigan north of the city of Chicago, Lake County abounds with large single-family homes built mostly since 1970. Parks, swimming pools and recreational spaces dot the landscape. Commuter trains and toll roads ferry workers into Chicago, and back again. . .

Poverty problems in Lake County can be hidden from plain view. Many low-income families live in homes and neighbourhoods that appear very “middle class” on the surface – single-family homes with garages and cars in the driveway.

Closer inspection, however, reveals signs of poverty in all corners of the county. Many Lake County communities from all racial and ethnic groups are in need, and poverty rates in the older communities along Lake Michigan, such as Zion or Waukegan, more closely resemble those in the central city.

Pockets of concentrated poverty can be found in subdivisions of single-family homes, isolated apartment complexes and mobile home parks across the county. It also appears at the outer edges of Lake County in areas that might have been described as rural or recreational 30 or 40 years ago, before suburban sprawl brought in new residents and job-seekers. Several once-bustling strip malls are home to discount retailers and empty storefronts. It is not uncommon to see families at local grocery stores and supermarkets using food stamps or electronic benefit transfer cards to pay for part of their bill.

Rising suburban poverty is, of course, not confined to Lake County or the Chicago area. It can be found across the country, from Atlanta to San Francisco.

Back in the 1990s, researchers began to chronicle the diversity that exists across American suburbs, paying particular attention to older, declining suburbs—manufacturing-based, older industrial areas struggling with structural shifts and economic decline.

Now, however, in the wake of the Second Great Depression, the poverty landscape has broadened even further, encompassing all kinds of communities around the country. We’ve now moved well beyond the declining and at-risk suburbs chronicled in earlier research and are forced to confront the geographical widening of poverty, which continues to blight the nation’s cities and rural areas and is increasingly hidden in plain view in its suburbs.


Most of us are pretty cautious when it comes to spending our money. The amount of money we have is pretty small—and the global economic, financial, and political landscape is pretty shaky right now.

And even if we’re not cautious, if we’re not prudent savers, then no harm done. Spending everything we have may be a personal risk but it doesn’t do any social harm.

It’s different, however, for the global rich. The individual decisions they make do, in fact, have social ramifications. That’s why, back in 2011, I suggested we switch our focus from the “culture of poverty” to the pathologies of the rich.

Consider, for example, the BBC [ht: ja] report on the findings of UBS Wealth Management’s survey of more than 2,800 millionaires in seven countries.

Some 82 percent of those surveyed said this is the most unpredictable period in history. More than a quarter are reviewing their investments and almost half said they intend to but haven’t yet done so.

But more than three quarters (77 pct) believe they can “accurately assess financial risk arising from uncertain events”, while 51 percent expect their finances to improve over the coming year compared with 13 percent who expect them to deteriorate.

More than half (57 pct) are optimistic about achieving their long-term goals, compared with 11 percent who are pessimistic. And an overwhelming 86 percent trust their own instincts when making important decisions.

“Most millionaires seem to be confident they can steer their way through the turbulence without so much as a dent in their finances,” UBS WM said.

Most of us can’t afford that kind of arrogance in the face of risk. But the world’s millionaires can. Just as they did during the lead-up to the crashes of 1929 and of 2008.

They trusted their instincts—and everyone else paid the consequences.

[photo: Greg Kahn]

I am quite willing to admit that, based on last Friday’s job report, the Second Great Depression is now over.

As regular readers know, I have been using the analogy to the Great Depression of the 1930s to characterize the situation in the United States since late 2007. Then as now, it was not a recession but, instead, a depression.

As I explain to my students in A Tale of Two Depressions, the National Bureau of Economic Research doesn’t have any official criteria for distinguishing an economic depression from a recession. What I offer them as an alternative are two criteria: (a) being down (as against going down) and (b) the normal rules are suspended (as, e.g., in the case of the “zero lower bound” and the election of Donald Trump).

By those criteria, the United States experienced a second Great Depression starting in December 2007 and continuing through April 2017. That’s almost a decade of being down and suspending the normal rules!

Now, with the official unemployment rate having fallen to 4.4 percent, equal to the low it had reached in May 2007, we can safely say the Second Great Depression has come to an end.

However, that doesn’t mean we’re out of the woods, or that we can forget about the effects of the most recent depression on American workers.*

[chart: GDP per capita, actual versus 2000-2007 trend]

For example, while Gross Domestic Product per capita in the United States is higher now than it was at the end of 2007 ($51,860 versus $49,586, in chained 2009 dollars, or 4.6 percent), it is still much lower than it would have been had the previous trend continued (which can be seen in the chart above, where I extend the 2000-2007 trend line forward to 2017). All that lost output—not to mention the accompanying jobs, homes, communities, and so on—represents one of the lingering effects of the Second Great Depression.
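The per-capita comparison above can be checked with a few lines of arithmetic. The 2007 and 2017 levels are from the text; the annual trend-growth figure below is a hypothetical placeholder (the post doesn't report the exact slope of the extrapolated line), so the shortfall it produces is illustrative only.

```python
# Check the GDP-per-capita gain cited above (chained 2009 dollars).
gdp_2007 = 49_586
gdp_2017 = 51_860

pct_gain = (gdp_2017 - gdp_2007) / gdp_2007 * 100
print(f"Actual gain since 2007: {pct_gain:.1f}%")  # ~4.6%, matching the text

# A simple linear-trend extrapolation; the growth rate here is a
# hypothetical assumption, not a figure taken from the chart.
assumed_trend_growth = 0.015  # 1.5% per year (assumption)
trend_2017 = gdp_2007 * (1 + assumed_trend_growth) ** 10
print(f"Trend-implied 2017 level: ${trend_2017:,.0f}")
print(f"Shortfall versus trend: {(trend_2017 - gdp_2017) / trend_2017:.1%}")
```

Even a modest assumed trend rate puts the 2017 trend level well above the actual figure, which is the gap the chart is meant to convey.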

[chart: underemployment rates, young high-school and college graduates]

And we can’t forget that young workers face elevated rates of underemployment—11.9 percent for young college graduates and much higher, 30.9 percent, for young high-school graduates. As the Economic Policy Institute observes,

This suggests that young graduates face less desirable employment options than they used to in response to the recent labor market weakness for young workers.

[chart: income and wealth inequality]

Finally, the previous trend of growing inequality—in terms of both income and wealth—has continued during the Second Great Depression. And there are no indications from the economy or economic policy that suggest that trend will be reversed anytime soon.

So, here we are at the end of the Second Great Depression—no longer down and with the normal rules back in place—and yet the effects from the longest and most severe downturn since the 1930s will be felt for generations to come.

 

*As is often the case, readers’ comments on newspaper articles tell a different story from the articles themselves. Here are two, on the New York Times article about the latest employment data:

John Schmidt—

Any discussion about “full employment”, when there are so many people who’ve essentially given up looking for work or who’re working in low-skill or unskilled labor positions, seems like the fiscal equivalent of rearranging deck chairs on the Titanic. Based on data from the Fed and the World Bank, GDP per capita has doubled since 1993, while median household income has risen ~10%. Most of the newly-generated wealth and gains from productivity increases are being funneled upward, such that the average worker very rarely sees any sort of pay increase. Are we expected to believe that this will change now that we’ve [arguably] passed some arbitrary threshold? Why should we pat ourselves on the back for reaching “full employment”? Shouldn’t we be seeking *fulfilling* employment for everyone, instead, at least inasmuch as that’s possible? Shouldn’t we care that the relentless drive for profit at the expense of everything else is creating a toxic environment where the only way to ensure a raise is to hop from job to job, eroding any sense of two-way loyalty between companies and their employees?

I’m not sure what the solution is, but I know enough to see there’s a problem. Inequality of this sort is not sustainable, and it’s not going to magically disappear without some serious policy changes.

David Dennis—

There is a critical parameter missing from full employment data. very critical. Here in Pontiac, Michigan before the collapse of American manufacturing, full employment meant 10, 000 jobs working at GM factories and Pontiac Motors making above the mean wages with excellent health insurance as well as retirement pensions. You can not compare full employment at McDonalds and Walmart with the jobs that preceded them. The full employment measure doesn’t mean much if it isn’t correlated with a index that compares that employment with a standard of living as it relates to a set basket of goods, services, and benefits.