Archive for August, 2016


There are, of course, many aspects of the U.S. healthcare system I have not had the opportunity to discuss over the course of this series on Unhealthy Healthcare. I am thinking of the growth of new, profitable medical centers (e.g., for out-patient surgery), plus biotechnology companies, diagnostic clinics, rehabilitation centers, and nursing homes. There are also all the nurses, orderlies, bookkeepers, and administrative staff, primary-care physicians and therapists in rehabilitation services, the hospital volunteers and the underpaid staff who provide care in nursing homes, the dedicated people who set up clinics for underserved populations, and many others who are forced to work under increasingly difficult conditions to provide decent healthcare to the American people.

But no matter how hard those healthcare workers labor, the current system of U.S. healthcare is a failure. It provides less healthcare at a higher cost than in other rich countries. And it continues to leave large numbers of Americans, especially workers and the poor, without access to affordable, high-quality healthcare.

The U.S. healthcare system, as it is currently configured, only really works for those who make a profit—selling health insurance, pharmaceuticals, and in-patient and acute-care services in hospitals—and those who have the wherewithal to finance their own healthcare.


As it turns out, the majority of Americans know this. According to the latest Gallup poll, 54 percent of respondents have a somewhat or very negative view of the healthcare industry. And 60 percent have only some, very little, or no confidence in the current medical system. On top of that, 82 percent worry (either a great deal or a fair amount) about the availability and affordability of healthcare in the United States.



In fact, the majority of Americans (58 percent) say they would like to see the 2010 health care law, the Affordable Care Act, replaced with a program providing care for all—along the lines presented most recently by presidential candidate Bernie Sanders.

Obviously, workers and poor people in the United States need and want a healthier healthcare system. The question then is, what should such a system look like?

Here I’ll admit, I don’t have a detailed plan of what the U.S. healthcare system should be or how exactly it should be transformed. There are plenty of such plans out there (the best known of which is probably the single-payer program developed back in 1989 by the Physicians for a National Health Program). And I’m not about to develop and present a new one.

Instead, I am guided by a lesson I learned from an old friend (a veteran of more than three decades of working in the trade-union movement): formulate and win people over to the general goal and, once they’re committed to it, let policymakers and stakeholders negotiate and work out the details to reach that goal.

In this case, the goal is universal, affordable, high-quality healthcare.

Such a system would provide high-quality healthcare (physical and mental, encompassing prevention, acute care, substance abuse, rehabilitation, and late-life care) to all Americans (without exception, especially those who are at the middle and bottom of the economic ladder) at an affordable price (since, as I see it, Americans are willing to pay for decent healthcare, but it should be according to their ability to pay, which it currently is not).

That’s it. We shouldn’t care how it is provided, just that it is.* And if the key components of the current healthcare system stand in the way, because they’re making profits on how the system is currently organized and don’t want to see real change, they should be bypassed or nationalized (as the case requires). Then, the other private and public entities, the ones actually committed to the goal, can get on with the task of imagining and implementing the universal, affordable, high-quality healthcare system Americans deserve.


*Although, to my view, a healthier healthcare system right now probably involves some combination of single-payer (federal and state) financing and a network of non-profit, community, and cooperative healthcare providers.


For years, I have been arguing that mainstream economists, who continue to pat themselves on the back for a job well done, have no understanding of why the growth of workers’ wages has been so slow after the Great Recession.

Their presumption has long been that, as the level of unemployment fell, workers’ wages would rise. But, as we have seen, the official rate of unemployment has declined dramatically (from a high of 10 percent in October 2009 to 4.9 percent currently) while wages are growing very slowly (at an annual rate of 2.5 percent in July), as they have throughout the period of recovery (averaging about 2.1 percent annually).


My answer, as I have explained many times, is that what mainstream economists leave out of their theory and their models is the role of the Reserve Army of the Unemployed and Underemployed. The fact is, when employers can hire and fire workers at will (and when, in addition, they can use the value workers produce to invest in new labor-saving technologies, outsource jobs to the most profitable locations, undermine workers’ attempts to form unions, and much else), they create a group of workers who are either unemployed (whether or not they are actually looking for jobs) or underemployed (often working at part-time jobs when they’d rather have full-time jobs). The existence of such a Reserve Army regulates the level of wages—and thus far, during the current recovery, it has served to keep both wage growth and the wage share of national income at very low levels.

Now, however, more than seven years into the current recovery, the problem of slow wage growth and a low wage share is so severe and persistent that the Federal Reserve Bank of Richmond has taken notice.

The persistence of slow wage growth since the Great Recession — amid a steady economic recovery and a sharp drop in unemployment — has become one of the biggest puzzles for economists in recent years. It’s not just an issue for economists; in this election cycle, weak wage growth has been used to support proposals ranging from strengthening unions to boosting the federal minimum wage. More broadly, stagnating incomes have likely fed into the broader ongoing economic pessimism among Americans. One recent Pew Research survey, for example, found that 73 percent of those polled described economic conditions as fair or poor, while only 27 percent considered them excellent or good.

What’s interesting about their analysis is that, however unconsciously and using very different methods, they’ve actually stumbled on at least one aspect of the Reserve Army: the difference in wages between, on one hand, workers who remained employed and, on the other hand, those who were initially let go and are now (at least in some cases) being rehired.

What actually happened during the Great Recession? It turns out that workers who stayed on at their jobs were indeed among the higher skilled and better paid, whereas those who were let go were lower skilled and tended to have wages below the median. The growing concentration of higher-paid workers meant the aggregate wage stayed surprisingly high even as gross domestic product plunged and unemployment spiked. Then, as the economy picked up, the wages of the continuously employed rose as well, just as economic theory would predict. At the same time, however, the new hires coming back into the full-time workforce — whether they had been unemployed, forced to work part-time, or had dropped out of the labor force altogether — re-entered at substantially lower wages com­pared to their continuously employed peers.

About 80 percent of these “re-entries” started their new jobs below the median wage.

In other words, even as the economy was improving and unemployment was falling, the effect of so-called re-entries—hiring from the Reserve Army—was to dampen the increase in their own wages (and, I would add, to lower the wage demands of those who managed to keep their jobs). In general, no matter what their last wage had been, workers who had lost full-time work re-entered the workforce at significantly lower wages—thus dragging down the overall growth in wages, both because their own wages were low and because workers who remained on the job were forced to constrain their own wage demands as lower-wage replacements were hired alongside or below them on the job ladder.

And the prospect looking forward?

even if there is a smaller amount of slack left, it means that people returning to full-time work may face a lower starting wage because there is still relatively more labor supply than labor demand, compared to the pre-recession economy. Also, workers coming back to full-time employment may well be earning discounted wages that are lagging trend productivity growth — and that may not change rapidly even as the labor market improves.

And that’s exactly how capitalism and its Reserve Army work—contrary to the view of mainstream economists and the other champions of the current recovery.

So, to answer the question, will workers finally get a raise? Not anytime soon, at least not within existing economic institutions.


A recently released report from the Joint Economic Committee of the U.S. Congress (pdf) illustrates the current anemic wage picture, finding that average U.S. wages inched up just 1.9 percent over the last year. In nine states and the District of Columbia, inflation-adjusted wages actually fell during the past year. Average hourly earnings for private-sector workers declined 1.8 percent in Alaska, 1.3 percent in Colorado and 0.1 percent in the District of Columbia. In Kentucky, they’re taking home 0.6 percent less, while in Michigan, earnings are down by 0.7 percent. Pay in Nevada declined 1 percent, Texas earnings slid 0.3 percent and Vermont saw a drop of 1.3 percent. Earnings fell 0.1 percent in West Virginia and 0.4 percent in Wyoming.


Special mention


[chart: wealth shares, modified from the original source (pdf)]

We’ve been learning a great deal about the conditions and consequences of the obscene levels of inequality in the United States—now, in the past, and it seems for the foreseeable future.

Right now, inequality is escalating within public higher education, especially in research universities that are chasing both tuition revenues and rankings. Thus, the editorial board of the Badger Herald, the student newspaper at the University of Wisconsin, found it necessary to criticize the lifting of the out-of-state student enrollment cap because it betrays the Wisconsin Idea and is making the university both “richer and whiter.”

Instead of increasing enrollment by targeting low-income and underrepresented Wisconsin students, UW now joins the ranks of public institutions that are happy with increasing the — already substantial — socioeconomic divide on campus. Making UW a bougie playground for the greater Chicagoland area is not the way to keep Wisconsin a world-class institution.

The Wisconsin students are right.* As recent research by Ozan Jaquette, Bradley R. Curs, and Julie R. Posselt confirms, public research universities are increasingly relying on tuition increases to fund their activities.** Thus, they are admitting more nonresident students—both for their out-of-state tuition payments and to raise the universities’ academic profile—and, as a result, the proportion of historically underrepresented students and especially of low-income students is declining. Moreover,

The shift towards nonresident students suggests that public research universities have increased the value they place on students who pay high tuition and have high test scores. This shift is indicative of a deeper change in organizational values, away from the public good emphasis on access and towards the self-interested emphases of academic profile and revenue generation. As scholars, campus leaders, or policymakers, we must ask ourselves, whether these are the values we want our flagship public institutions to promote?

We also need to look at the way inequality played out in American history, and make the appropriate connections to the present and future. In a recent paper, Suresh Naidu and Noam Yuchtman examine the situation of labor markets during the first Gilded Age. Their argument, in a nutshell, is that labor markets in the late-nineteenth and early-twentieth centuries are as close as we have seen in U.S. history to the unregulated labor market that is presumed and celebrated within neoclassical economics. But, the authors explain, those Gilded-Age labor markets were characterized by high levels of conflict—between labor movements and employer organizations (over wages and, when workers went on strike, replacement workers or scabs)—which, in turn, called forth increased levels of judicial intervention as well as domestic policing and military intervention, generally on the side of the employers.***

And the implications for the United States, in the second Gilded Age:

Looking around today, it is obvious that inequality and conflict over the distribution of wealth and income remain salient a century after the first Gilded Age. History is never a perfect guide, but the late 19th century suggests that even as markets play a greater role in allocating labour, legal and political institutions will continue to shape bargaining power between firms and workers, and thus the division of rents within the firm. What remains to be determined – and battled over – is which institutions are empowered to act, and whose interests they will represent. Regardless, latent labour market conflict seems likely to be a prominent feature of our new Gilded Age.

Finally, what can we say about inequality looking forward? According to Robert Shiller, it “could become a nightmare in the decades ahead.”

The reason for this dire prognosis is that the structures that create high levels of inequality in the first place serve as barriers to policies that might actually lessen the amount of inequality. According to Angus Deaton, “Those who are doing well will organize to protect what they have, including in ways that benefit them at the expense of the majority.” Historically, the only exceptions in capitalist democracies emerge in times of war, “because war mobilization changed beliefs about tax fairness.”

And contra Robert Solow (“We are not good at large-scale redistribution of income”), capitalist societies have consistently shown themselves to be very good at large-scale redistribution of income toward the top—just not particularly interested in moving in the opposite direction, in redistributing income to those at the bottom.

In fact, neither Shiller nor the nine other economists who contributed to a recent project on long-term forecasting “expressed optimism that inequality would be corrected in the future, and none of us ventured that any major economic policy was likely to counteract recent trends.”****

Shiller uses Satyajit Ray’s 1973 movie “Distant Thunder”—about the Bengal famine of 1942-43, when millions died, almost all from the lower classes—to illustrate our current dilemma. There was plenty of food in the Bengal Province of British India to keep everyone alive but “the food was not shared adequately.”*****

Systems of privilege and entitlement permitted hoarding of food by people of status whose lives went on much as usual, except that they had to brush off starving beggars and would occasionally see dead bodies on the street.

It’s clear that, today, there are plenty of goods—food, clothing, and shelter—to go around, but they’re not being shared equally. Not by a long shot. The problem is, existing “systems of privilege and entitlement” permit the accumulation of wealth on one end and misery on the other—just as they did during the first Gilded Age and, unless things change, will continue to do so for the foreseeable future.

Meanwhile, the lives of people of status go on much as usual, in their “bougie playground”—except they have to brush off the contemporary equivalent of starving beggars and occasionally see today’s analogue of dead bodies on the street.


*It should perhaps come as no surprise that a prominent mainstream economist, Rebecca Blank, Chancellor of the University of Wisconsin-Madison since 2013, is the one who sought (and won) an end to the cap on out-of-state and international students.

**As Stephanie Saul reports,

According to the College Board, the average cost of attending a four-year public university, including room and board, increased from $11,655 in 2000 to $19,548 in 2015, in inflation-adjusted dollars. In the City University of New York system, tuition at four-year colleges is now $6,330, having increased by $300 each year since 2011, when it was $4,830. . .

“What Sanders figured out — it’s not the $65,000 cost of attendance at some of our pricier privates driving the debt bubble, but rather the disinvestment and privatization of public higher ed,” said Barmak Nassirian, the director of federal relations and policy analysis for the American Association of State Colleges and Universities.

***This is one of the examples I use in my graduate-level course on the Political Economy of War and Peace—that the United States has its own history of intrastate wars (which, like many such wars in recent times, have been class wars) and that, as the authors explain, “military and law enforcement institutions of the United States, in particular the Army, the National Guard, and the FBI, can trace their origins to the federal troops, state militias, and private Pinkertons deployed in 19th century labor conflicts.”

****The key point Shiller does not address is the role mainstream economics has played both in creating the current levels of inequality and in creating barriers to imagining and enacting policies and strategies for doing away with the grotesque levels of inequality we are witnessing today.

*****Amartya Sen famously argued that democracy prevents famines. That may be true. But it doesn’t prevent hunger or the other economic and social catastrophes that stem from the high levels of inequality we’ve witnessed during the first and second Gilded Ages in the United States.


Last Wednesday, as part of my Unhealthy Healthcare series, I showed that the recent slowdown in U.S. healthcare costs has been invisible to American workers, because they have been forced to pay much higher premiums and deductibles in order to obtain access to healthcare for themselves and their families.

That conclusion has been confirmed by a recent Wall Street Journal article on how workers are increasingly feeling the pain of paying for their healthcare.

Middle-class households are finding more of their health-care costs are coming out of their own pockets.

David Cutler, a Harvard health-care economist, said this may be “a story of three Americas.” One group, the rich, can afford health care easily. The poor can access public assistance. But for lower middle- to middle-income Americans, “the income struggles and the health-care struggles together are a really potent issue,” he said.

A June Brookings Institution study found middle-income households now devote the largest share of their spending to health care, 8.9%, a rise of more than three percentage points from 1984 to 2014.

By 2014, middle-income households’ health-care spending was 25% higher than what they were spending before the recession that began in 2007, even as spending fell for other “basic needs” such as food, housing, clothing and transportation, according to an analysis for The Wall Street Journal by Brookings senior fellow Diane Schanzenbach. These households cut back sharply on more discretionary categories like dining out and clothing.


While middle-income households are spending 25 percent more on health care, their real incomes actually fell 6.5 percent between 2007 and 2014, from $57,357 to $53,657.
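The 6.5 percent figure follows directly from the two income numbers; here is a quick sketch of the arithmetic in Python, using only the dollar amounts cited above:

```python
# Real median household income figures cited above.
income_2007 = 57_357
income_2014 = 53_657

# Percent change from 2007 to 2014.
pct_change = (income_2014 - income_2007) / income_2007 * 100
print(round(pct_change, 1))  # -6.5, i.e., a 6.5 percent fall
```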

Clearly, American workers are increasingly being squeezed by their employers at both ends—while they’re at work (since they’re working less and less time for themselves and more for their employers) and while they’re away from work (since they’ve been forced to assume a larger and larger share of the costs of their healthcare).



Special mention


[chart: hospital mergers]


Much of the debate about the U.S. healthcare system is focused on the role of public financing (in terms of subsidies and, for some, the possibility of a public option or even a single-payer program). But no one seems to want to look at the other key part, the actual delivery of healthcare to American workers and others. And that, regardless of the system of financing, remains mostly in profit-oriented private hands (which, as I argued earlier this year, undermines patient-centered healthcare).

There are a few exceptions, such as the Veterans Health Administration and Indian Health Service, in which the government directly employs nurses, physicians, and others to provide health services to targeted populations. But the rest of healthcare is provided by private (for-profit and nominally nonprofit) individuals, groups, and corporations.

As I discussed on Friday, a significant sector of private healthcare is the increasingly concentrated and enormously profitable pharmaceutical industry. Hospitals (which I’ve commented on many times over the years) are, of course, another key sector (at close to $1 trillion in 2014). That’s where Americans receive most of their in-patient care, critical care (including many without health insurance in emergency rooms), and an increasing number of out-patient treatments. And while hospitals appear to be independent from and non-overlapping with physicians (whose services accounted for roughly $600 billion in 2014), that’s an optical illusion. Not only do they compete with one another (in surgery, imaging, and other ambulatory services); each is also forced to work closely with the other: hospitals rely on physicians to admit patients to their facilities, refer patients to their specialists, and use their lucrative diagnostic services (with, as it turns out, illegal kickbacks), while physicians tend to their own patients within hospitals and are contracted for “in-house” supervision. And, increasingly, hospitals are directly employing physicians (and other healthcare workers) as salaried and piece-rate workers.


U.S. hospitals are, as it turns out, remarkably profitable. And, according to a recent analysis by Ge Bai and Gerard F. Anderson (unfortunately gated), 7 of the 10 most profitable hospitals (each exceeding $163 million in total profits from patient care services) are officially non-profit institutions.

According to Anderson,

The system is broken when nonprofit hospitals are raking in such high profits. The most profitable hospitals should either lower their prices or put those profits into other services within the community. We need to develop incentives that allow all hospitals to make a fair profit while at the same time keeping prices reasonable.

It’s true, many other hospitals (56 percent in their sample of acute-care facilities) are not profitable strictly in terms of patient services (the median hospital lost $82 per adjusted patient discharge). However, as the authors explain,

the median overall net income from all activities per adjusted discharge was a profit of $353, because many hospitals earned substantial profits from nonoperating activities—primarily from investments, charitable contributions (in the case of nonprofit hospitals), tuition (in the case of teaching hospitals), parking fees, and space rental. It appears that nonoperating activities allowed many hospitals that were unprofitable on the basis of operating activities to become profitable overall.

The most important factors boosting hospital profitability were markups (especially for uninsured and out-of-network patients and casualty and workers’ compensation insurers who often pay the hospital’s full charge) and the combination of system affiliation and regional power.

In fact, 50 hospitals in the United States are charging uninsured consumers more than 10 times the actual cost of patient care. All but one of the facilities are owned by for-profit entities. Topping the list is North Okaloosa Medical Center, a 110-bed facility in the Florida Panhandle about an hour outside of Pensacola, where uninsured patients are charged 12.6 times the actual cost of patient care. Community Health Systems operates 25 of the hospitals on the list. Hospital Corporation of America operates 14 others.

Again according to Anderson:

They are price-gouging because they can. They are marking up the prices because no one is telling them they can’t. These are the hospitals that have the highest markup of all 5,000 hospitals in the United States. This means when it costs the hospital $100, they are going to charge you, on average, $1,000.




It should come as no surprise, then, that, while the total number of hospitals has remained relatively constant over time, the number of those hospitals in health systems has continued to increase, thereby increasing regional power, markups, and profitability.

In another recent study, by Richard M. Scheffler et al., the authors found that the hospital markets in two states (California and New York) “were moderately to highly concentrated,” with mean Herfindahl-Hirschman indices of 2,259 and 3,708, respectively.* They also found that more concentrated hospital markets were associated with higher premium growth.

As expected, then, there is a continuing strong movement of hospital mergers and acquisitions—with at least 100 deals covering 178 hospitals, involving the takeover of profit and especially non-profit organizations, in 2014—leading to increased concentration in the hospital sector of the U.S. healthcare industry.

As Martin Gaynor explains,

There has been so much consolidation that most urban areas in the US are now dominated by one to three large hospital systems — examples include Boston (Partners), the Bay Area (Sutter), Pittsburgh (UPMC), and Cleveland (Cleveland Clinic, University Hospital). It is also now more likely that further consolidation will combine close competitors, given how many mergers have already occurred.

Clearly, the provision of healthcare through U.S. hospitals—both profit and, at least officially, non-profit—is generating enormous profits for their owners and top executives. But it’s American workers, who are both hospital employees and consumers of hospital services, who are paying the price.


*To remind readers, the Herfindahl-Hirschman Index is often used to evaluate the potential antitrust implications of acquisitions and mergers across many industries, including health care. It is calculated by summing the squares of the market shares of individual firms. Markets are then classified in one of three categories: (1) nonconcentrated, with an index below 1,500; (2) moderately concentrated, with an index between 1,500 and 2,500; and (3) highly concentrated, with an index above 2,500.
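To make the footnote concrete, here is a minimal sketch in Python of the index and the three concentration categories as defined above. The market shares in the example are invented for illustration; they do not come from the studies cited.

```python
def hhi(shares_percent):
    """Herfindahl-Hirschman Index: sum of squared market shares,
    with shares expressed as percentages (0-100)."""
    return sum(s ** 2 for s in shares_percent)

def classify(index):
    """The three categories given in the footnote."""
    if index < 1500:
        return "nonconcentrated"
    elif index <= 2500:
        return "moderately concentrated"
    return "highly concentrated"

# A hypothetical regional hospital market with four systems,
# holding 40, 30, 20, and 10 percent of the market:
shares = [40, 30, 20, 10]
index = hhi(shares)            # 1600 + 900 + 400 + 100 = 3000
print(index, classify(index))  # 3000 highly concentrated
```

By this classification, the mean indices reported by Scheffler et al. (2,259 for California, 3,708 for New York) fall in the moderately and highly concentrated categories, respectively.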