Posts Tagged ‘technology’


Students are much too busy to think these days. So, when a junior comes to talk with me about the possibility of my directing their senior thesis, I ask them about their topic—and then their schedule. I explain to them that, if they really want to do a good project, they’re going to have to quit half the things they’re involved in.

They look at me as if I’m crazy. “Really?! But I’ve signed up for all these interesting clubs and volunteer projects and intramural sports and. . .” I then patiently explain that, to have the real learning experience of a semester or year of independent study, they need time, a surplus of time. They need to have the extra time in their lives to get lost in the library or to take a break with a friend, to read and to daydream. In other words, they need to have the right to be lazy.

So does everyone else.

As it turns out, that’s exactly what Paul Lafargue argued, in a scathing attack on the capitalist work ethic, “The Right To Be Lazy,” back in 1883.

Capitalist ethics, a pitiful parody on Christian ethics, strikes with its anathema the flesh of the laborer; its ideal is to reduce the producer to the smallest number of needs, to suppress his joys and his passions and to condemn him to play the part of a machine turning out work without respite and without thanks.

And Lafargue criticized both economists (who “preach to us the Malthusian theory, the religion of abstinence and the dogma of work”) and workers themselves (who invited the “miseries of compulsory work and the tortures of hunger” and need instead to “forge a brazen law forbidding any man to work more than three hours a day,” so that “the earth, the old earth, trembling with joy would feel a new universe leaping within her”).


Today, nothing seems to have changed. Workers (or at least those who claim to champion the cause of workers) demand high-paying jobs and full employment, while mainstream economists (from Casey Mulligan, John Taylor, and Greg Mankiw to Dani Rodrik and Brad DeLong) promote what they consider to be the dignity of work and worry that, even as the official unemployment rate has declined in recent years, the labor-force participation rate in the United States has fallen dramatically and remains much too low.

Mainstream economists and their counterparts in the world of politics and policymaking—both liberals and conservatives—never cease to preach the virtues of work and in every domain, from minimum-wage legislation to economic growth, seek to promote more people getting more jobs to perform more work.

[Chart: hours worked]

This is especially true in the United States and the United Kingdom, where the “work ethic” remains particularly strong. The number of hours worked per year has fallen in all advanced countries since the middle of the twentieth century but, as is clear from the chart above, the average has declined much less in America and Britain than in France and Germany.


Today, according to the OECD, American and British workers spend much more time working per year (1765 and 1675 hours, respectively) than their French and German counterparts (1474 and 1371 hours, respectively).
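To put those gaps in perspective, here is a back-of-the-envelope sketch in Python. The 40-hour week used as a yardstick is my assumption, not an OECD figure; on this reckoning, the average American puts in nearly ten more 40-hour weeks per year than the average German.

```python
# Back-of-the-envelope: convert the OECD annual-hours gap into
# conventional 40-hour weeks. The 40-hour yardstick is an
# illustrative assumption, not an OECD figure.
hours_per_year = {
    "United States": 1765,
    "United Kingdom": 1675,
    "France": 1474,
    "Germany": 1371,
}
WEEK = 40  # assumed standard full-time week

for country, hours in hours_per_year.items():
    extra_weeks = (hours - hours_per_year["Germany"]) / WEEK
    print(f"{country}: {hours} hours/year, "
          f"{extra_weeks:.1f} more 40-hour weeks than Germany")
```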

But in all four countries—and, really, across the entire world—the capitalist work ethic prevails. Workers are exhorted to search for or keep their jobs, even as wage increases fall far short of productivity growth, inequality (already obscene) continues to rise, new forms of automation threaten to displace or destroy a wide range of occupations, unions and other types of worker representation have been undermined, and digital work increasingly permeates workers’ leisure hours.

The world of work, already satirized by Lafargue and others in the nineteenth century, clearly no longer works.

Not surprisingly, the idea of a world without work has returned. According to Andy Beckett, a new generation of utopian academics and activists are imagining a “post-work” future.

Post-work may be a rather grey and academic-sounding phrase, but it offers enormous, alluring promises: that life with much less work, or no work at all, would be calmer, more equal, more communal, more pleasurable, more thoughtful, more politically engaged, more fulfilled – in short, that much of human experience would be transformed.

To many people, this will probably sound outlandish, foolishly optimistic – and quite possibly immoral. But the post-workists insist they are the realists now. “Either automation or the environment, or both, will force the way society thinks about work to change,” says David Frayne, a radical young Welsh academic whose 2015 book The Refusal of Work is one of the most persuasive post-work volumes. “So are we the utopians? Or are the utopians the people who think work is going to carry on as it is?”

I’m willing to keep the utopian label for the post-work thinkers precisely because they criticize the world of work—as neither natural nor particularly old—and extend that critique to the dictatorial powers and assumptions of modern employers, thus opening a path to consider other ways of organizing the world of work. Most importantly, post-work thinking creates the possibility of criticizing the labor involved in exploitation and thus of creating the conditions whereby workers no longer need to succumb to or adhere to the distinction between necessary and surplus labor.

In this sense, the folks working toward a post-work future are the contemporary equivalent of the “communist physiologists, hygienists and economists” Lafargue hoped would be able to

convince the proletariat that the ethics inoculated into it is wicked, that the unbridled work to which it has given itself up for the last hundred years is the most terrible scourge that has ever struck humanity, that work will become a mere condiment to the pleasures of idleness, a beneficial exercise to the human organism, a passion useful to the social organism only when wisely regulated and limited to a maximum of three hours a day; this is an arduous task beyond my strength.

That’s the utopian impulse inherent in the right to be lazy.


Mainstream economics lies in tatters. Certainly, the crash of 2007-08 and the Second Great Depression called into question mainstream macroeconomics, which has failed to provide a convincing explanation of either the causes or consequences of the most severe crisis of capitalism since the Great Depression of the 1930s.

But mainstream microeconomics, too, increasingly appears to be a fantasy—especially when it comes to issues of corporate power.


Neoclassical microeconomics is based on a set of models that assume perfect competition. What that means, as my students learned the other day, is that, while in the short run firms may capture super-profits (because price is greater than average total cost, at P1 in the chart above), in the “long run,” with free entry and exit, all those extra-normal profits are competed away (since price is driven down to P2, equal to minimum average total cost). That’s why the long run is such an important concept in neoclassical economic theory. The idea is that, starting with perfect competition, neoclassical economists always end up with. . .perfect competition.*
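In symbols, a minimal statement of that textbook logic, using the chart’s notation:

\[
\pi_{\text{SR}} = \big(P_1 - ATC(q)\big)\,q > 0 \quad \text{since } P_1 > ATC(q);
\qquad
\text{free entry} \;\Rightarrow\; P \to P_2 = \min ATC \;\Rightarrow\; \pi_{\text{LR}} = 0.
\]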

Except, of course, in the real world, where exactly the opposite has been occurring for the past few decades. Thus, as the authors of the new report from the United Nations Conference on Trade and Development (UNCTAD) have explained, there is a growing concern that

increasing market concentration in leading sectors of the global economy and the growing market and lobbying powers of dominant corporations are creating a new form of global rentier capitalism to the detriment of balanced and inclusive growth for the many.

And they’re not just talking about financial rentier incomes, which have been the focus of attention since the global meltdown provoked by Wall Street nine years ago. Their argument is that a defining feature of “hyperglobalization” is the proliferation of rent-seeking strategies, from technological innovations to mergers and acquisitions, within the non-financial corporate sector. The result is the growth of corporate rents or “surplus profits.”**


As Figure 6.1 shows, the share of surplus profits in total profits grew significantly for all firms both before and after the global financial crisis—from 4 percent during the 1995-2000 period to 19 percent in 2001-2008 and even higher, to 23 percent, in 2009-2015. For the top 100 firms (ranked by market capitalization), the share of surplus profits grew even more, from 16 percent to 30 percent and then, most recently, to 40 percent.***
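For readers who want the mechanics, here is a minimal sketch in Python of how such a share might be computed from firm-level data. The numbers are made up, and “typical” profits are proxied here by the sample’s median profit margin; UNCTAD’s actual benchmarking procedure is more involved.

```python
# Hypothetical sketch: the share of surplus profits in total profits.
# "Typical" profits are proxied by the sample's median profit margin
# applied to each firm's revenue -- a placeholder, not UNCTAD's method.
from statistics import median

firms = [  # (revenue, observed profit), illustrative numbers
    (100.0, 8.0),
    (250.0, 20.0),
    (80.0, 30.0),   # a high-margin "superstar" firm
    (500.0, 45.0),
    (60.0, 3.0),
]

typical_margin = median(profit / revenue for revenue, profit in firms)
surplus = sum(max(0.0, profit - typical_margin * revenue)
              for revenue, profit in firms)
total = sum(profit for _, profit in firms)
print(f"surplus-profit share: {surplus / total:.1%}")  # ~27% here
```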

The analysis suggests both that surplus profits for all firms have grown over time and that there is an ongoing process of bipolarization, with a widening gap between a few high-performing firms and a growing number of low-performing firms.


That conclusion is confirmed by their analysis of market concentration, which is presented in Figure 6.2 in terms of the market capitalization of the top 100 nonfinancial firms between 1995 and 2015. The red line shows the actual share of the top 100 firms relative to their hypothetical equal share, assuming that total market capitalization was distributed equally over all firms. The blue line shows the observed share of the top 100 firms relative to the observed share of the bottom 2,000 firms in the sample.

Both measures indicate that the market power of the top companies increased substantially over the 1995-2015 period. For example, in 1995 the combined share of market capitalization of the top 100 firms was already 23 times higher than the share these firms would have held had market capitalization been distributed equally across all firms. By 2015, this gap had increased nearly fourfold, to 84 times. This overall upward surge in concentration, measured by market capitalization since 1995, experienced brief interruptions in 2002-03 after the bursting of the dotcom bubble and in 2009-2010 in the aftermath of the global financial crisis, and it stabilized at high levels thereafter.****
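Both ratios are easy to reproduce. A minimal sketch in Python, with a made-up Zipf-like size distribution of 2,100 firms standing in for UNCTAD’s sample:

```python
# Two concentration measures for top firms, as described above.
# Market capitalizations follow a made-up Zipf-like distribution.
caps = sorted((1000.0 / rank for rank in range(1, 2101)), reverse=True)

top_100 = caps[:100]
total = sum(caps)

# Measure 1: actual top-100 share vs. a hypothetical equal share.
actual_share = sum(top_100) / total
equal_share = 100 / len(caps)
print(f"top-100 share is {actual_share / equal_share:.0f}x its equal share")

# Measure 2: top 100 relative to the bottom 2,000 firms.
bottom_2000 = caps[-2000:]
print(f"top-100 / bottom-2,000 ratio: {sum(top_100) / sum(bottom_2000):.1f}")
```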

So, what is causing this growth in market concentration? One reason is the nature of the underlying technologies, which involve costs of production that do not rise proportionally to the quantities produced. Instead, after initial high sunk costs (e.g., in the form of expenditures on research and development), the variable costs of producing additional units of output are negligible.***** And then, of course, growing firms can use intellectual property rights and lobbying powers to protect themselves against actual or potential competitors.
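That cost structure can be written down directly. With a large sunk cost F and a negligible constant per-unit cost c, average cost falls without limit as output grows, so the largest incumbent can always undercut a smaller entrant:

\[
AC(q) = \frac{F}{q} + c,
\qquad
\frac{dAC}{dq} = -\frac{F}{q^{2}} < 0,
\qquad
\lim_{q \to \infty} AC(q) = c.
\]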


Giant firms can also use their super-profits to merge with and acquire other firms, a process that has accelerated as both a consequence and a cause of the weakening of antitrust legislation and enforcement.

What we’re seeing, then, is a “vicious cycle of underregulation and regulatory capture, on the one hand, and further rampant growth of corporate market power on the other.”

The models of mainstream economics turn out to be a shield, hiding and protecting this strengthening of corporate rule.

What the rest of us, including the folks at UNCTAD, have been witnessing in the real world is the emergence and consolidation of global rentier capitalism.


*There’s another reason why the long run is so important for neoclassical economists. All incomes are presumed to be returns to “factors of production” (e.g., land, labor, and capital), equal to their “marginal products.” But short-run super-profits are a theoretical embarrassment. They represent a return not to any factor of production but to something else: serendipity or Fortuna. Oops! That’s another reason it’s important, within a neoclassical world, for short-run super-profits to be competed away in the long run—to eliminate the existence of returns to the decidedly non-productive factor of luck.
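The embarrassment can be stated precisely. Under constant returns to scale, Euler’s theorem guarantees that paying each factor its marginal product exactly exhausts the product, leaving nothing over:

\[
Y = \frac{\partial F}{\partial L}\,L + \frac{\partial F}{\partial K}\,K,
\]

so any positive super-profit is, by construction, a return to no factor at all.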

**UNCTAD defines surplus profits as the difference between the total of actually observed profits of all firms in the sample in a given year and the estimate of those firms’ total typical profits. Thus, they end up with a lower estimate of surplus or super-profits than if they’d used a strictly neoclassical definition, which would compare actual profits to a zero-rent (or long-run equilibrium) benchmark.

***The authors note that

these results need to be interpreted with caution. More important than the absolute size of surplus profits for firms in the database in any given sub-period, is their increase over time, in particular the surplus profits of the top 100 firms.

****The authors of the study focus particular attention on the so-called high-tech sector, in which they show “a growing predominance of ‘winner takes most’ superstar firms.”

*****Thus, as Piero Sraffa argued long ago, the standard neoclassical model of perfect competition, with U-shaped marginal and average cost curves (i.e., “diminishing returns”), is called into question by increasing returns, with declining marginal and average cost curves.


[Charts: income and wealth]

Inequality in the United States is now so obscene that it’s impossible, even for mainstream economists, to avoid the issue of surplus.

Consider the two charts at the top of the post. On the left, income inequality is illustrated by the shares of pre-tax national income going to the top 1 percent (the blue line) and the bottom 90 percent (the red line). Between 1976 and 2014 (the last year for which data are available), the share of income at the top soared, from 10.4 percent to 20.2 percent, while the share going to most everyone else dropped precipitously, from 53.6 percent to 39.7 percent.

The distribution of wealth in the United States is even more unequal, as illustrated in the chart on the right. From 1976 to 2014, the share of wealth owned by the top 1 percent (the purple line) rose dramatically, from 22.9 percent to 38.6 percent, while that of the bottom 90 percent (the green line) tumbled from 34.2 percent to only 27 percent.

The obvious explanation, at least for some of us, is surplus-value. More surplus has been squeezed out of workers, appropriated by their employers, and then distributed to those at the top. They, in turn, have managed to use their share of the growing surplus to purchase more wealth, which has generated returns that lead to even more income and wealth—while the shares of income and wealth of those at the bottom have continued to decline.

But the idea of surplus-value is anathema to mainstream economists. They literally can’t see it, because they assume (at least within free markets) workers are paid according to their productivity. Mainstream economic theory excludes any distinction between labor and labor power. Therefore, in their view, the only thing that matters is the price of labor and, in their models, workers are paid their full value. Mainstream economists assume we live in the land of freedom, equality, and just deserts. Thus, everyone gets what they deserve.
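The distinction can be put in standard Marxian notation: workers are paid the value of their labor power, v, but the labor they perform adds v + s to the product, where s is surplus-value and e the rate of exploitation:

\[
\text{value added} = v + s,
\qquad
s = \text{value added} - v,
\qquad
e = \frac{s}{v}.
\]

A theory that never separates labor from labor power collapses v + s into a single “price of labor,” and s simply disappears from view.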

Even if mainstream economists can’t see surplus-value, they’re still haunted by the idea of surplus. Their cherished models of perfect competition simply can’t generate the grotesque levels of inequality in the distribution of income and wealth we are seeing in the United States.

That’s why in recent years some of them have turned to the idea of rent-seeking behavior, which is associated with exceptions to perfect competition. They may not be able to conceptualize surplus-value but they can see—at least some of them—the existence of surplus wealth.

The latest is Mordecai Kurz, who has shown that modern information technology—the “source of most improvements in our living standards”—has also been the “cause of rising income and wealth inequality” since the 1970s.

For Kurz, it’s all about monopoly power. High-tech firms, characterized by highly concentrated ownership, have managed to use technical innovations and debt to erect barriers to entry and, once created, to restrain competition.


Thus, in his view, a small group of U.S. corporations have accumulated “surplus wealth”—defined as the difference between wealth created (measured as the market value of the firm’s ownership securities) and their capital (measured as the market value of assets employed by the firm in production)—totaling $24 trillion in 2015.
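In symbols, Kurz’s measure for firm i is simply

\[
SW_i = MV_i - K_i,
\]

where MV_i is the market value of the firm’s ownership securities and K_i the market value of the assets it employs in production; summed over U.S. corporations, SW comes to $24 trillion in 2015. On this reading, a large positive SW is the capitalized value of monopoly rents.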

Here’s Kurz’s explanation:

One part of the answer is that rising monopoly power increased corporate profits and sharply boosted stock prices, which produced gains that were enjoyed by a small population of stockholders and corporate management. . .

Since the 1980s, IT innovations have largely been software-based, giving young innovators an advantage. Additionally, “proof of concept” studies are typically inexpensive for software innovations (except in pharmaceuticals); with modest capital, IT innovators can test ideas without surrendering a major share of their stock. As a result, successful IT innovations have concentrated wealth in fewer – and often younger – hands.

In the end, Kurz wants to tell a story about wealth accumulation based on the rapid rise of individual wealth enabled by information-based innovations (together with the rapid decline of wealth created in older industries such as railroads, automobiles, and steel), which differs from Thomas Piketty’s view of wealth accumulation as taking place through a lengthy intergenerational process where the rate of return on family assets exceeds the growth rate of the economy.

The problem is, neither Kurz nor Piketty can tell a convincing story about where that surplus comes from in the first place, before it is captured by monopoly firms and transformed into the wealth of families.

Kurz, Piketty, and an increasing number of mainstream economists are concerned about obscene and still-growing levels of inequality, and thus remain haunted by the idea of a surplus. But they can’t see—or choose not to see—the surplus-value that is created in the process of extracting labor from labor power.

In other words, mainstream economists don’t see the surplus that arises, in language uniquely appropriate for Halloween, from capitalists’ “vampire thirst for the living blood of labour.”


Apologists for mainstream economics (such as Noah Smith) like to claim that things are OK because good empirical research is crowding out bad theory.

I have no doubt that the theory of mainstream economics has been bad. But is the empirical research any better?

Not, as I see it, in the academy, in the departments that are dominated by mainstream economics. But there is interesting empirical work going on elsewhere, including, of all places, in the International Monetary Fund (as I have noted before, e.g., here and here).

The latest, from Mai Dao, Mitali Das, Zsoka Koczan, and Weicheng Lian, documents two important facts: the decline in labor’s share of income—in both developed and developing economies—and the relationship between the fall in the labor share and the rise in inequality.

I demonstrate both facts for the United States in the chart above: the labor share (the red line, measured on the left) has been falling since 1970, while the share of income captured by those in the top 1 percent (the blue line, measured on the right) has been rising.

[Chart: labor shares]

Dao et al. make the same argument, both across countries and within countries over time: declining labor shares are associated with rising inequality.

And they’re clearly concerned about these facts, because inequality can fuel social tension and harm economic growth. It can also lead to a backlash against economic integration and outward-looking policies, which the IMF has a clear stake in defending:

the benefits of trade and financial integration to emerging market and developing economies—where they have fostered convergence, raised incomes, expanded access to goods and services, and lifted millions from poverty—are well documented.

But, of course, there are no facts without theories. What is missing from the IMF facts is a theory of how a falling labor share fuels inequality—and, in turn, has created such a reaction against capitalist globalization.

Let me see if I can help them. When the labor share of national income falls—the result of the forces Dao et al. document, such as outsourcing and new labor-saving technologies—the surplus appropriated from those workers rises. Then, when a share of that growing surplus is distributed to those at the top—for example, to those in the top 1 percent, via high salaries and returns on capital ownership—income inequality rises. Moreover, the ability of those at the top to capture the surplus means they are able to shape economic and political decisions that serve to keep workers’ share of national income on its downward slide.
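A minimal numerical sketch of that mechanism, in Python; the labor shares and the fraction of the surplus captured at the top are hypothetical round numbers, chosen only for illustration:

```python
# Hypothetical illustration: a falling labor share swells the surplus,
# part of which flows to the top 1 percent as salaries and returns
# on capital ownership. All magnitudes are made up.
NATIONAL_INCOME = 100.0   # index, arbitrary units
TOP_CAPTURE = 0.5         # assumed fraction of surplus going to the top

for labor_share in (0.64, 0.57):   # hypothetical before/after shares
    wages = NATIONAL_INCOME * labor_share
    surplus = NATIONAL_INCOME - wages
    top_income = TOP_CAPTURE * surplus
    print(f"labor share {labor_share:.0%}: surplus {surplus:.0f}, "
          f"of which the top captures {top_income:.1f}")
```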

The problem is that mainstream economists are not particularly interested in those facts. Or, for that matter, in the theory that can make sense of those facts.


Chico Harlan [ht: ja] describes the arrival of the first robots at Tenere Inc. in Dresser, Wisconsin:

The workers of the first shift had just finished their morning cigarettes and settled into place when one last car pulled into the factory parking lot, driving past an American flag and a “now hiring” sign. Out came two men, who opened up the trunk, and then out came four cardboard boxes labeled “fragile.”

“We’ve got the robots,” one of the men said.

They watched as a forklift hoisted the boxes into the air and followed the forklift into a building where a row of old mechanical presses shook the concrete floor. The forklift honked and carried the boxes past workers in steel-toed boots and earplugs. It rounded a bend and arrived at the other corner of the building, at the end of an assembly line.

The line was intended for 12 workers, but two were no-shows. One had just been jailed for drug possession and violating probation. Three other spots were empty because the company hadn’t found anybody to do the work. That left six people on the line jumping from spot to spot, snapping parts into place and building metal containers by hand, too busy to look up as the forklift now came to a stop beside them.

Tenere is just one of many factories and offices in which employers, in the United States and around the world, are installing robots and other forms of automation in order to boost their profits.


They’re not doing it because there’s any kind of labor shortage. If there were, wages would be rising—and they’re not. Real weekly earnings for full-time workers (the blue line in the chart) increased only 2.3 percent on an annual basis in the most recent quarter. Sure, they complain about a shortage of skilled workers, but employers clearly aren’t being compelled to raise wages to attract new workers. As a result, the wage share in the United States (the red line) continues to decline on a long-term basis, falling from 51.5 percent in 1970 to 43 percent last year (only slightly higher than it was, at 42.2 percent, in 2013).

No, they’re using robots in order to compete with other businesses in their industry, by boosting the productivity of their own workers to undercut their competition and capture additional surplus-value.

And they can do so because robots have become much more affordable:

No longer did machines require six-figure investments; they could be purchased for $30,000, or even leased at an hourly rate. As a result, a new generation of robots was winding up on the floors of small- and medium-size companies that had previously depended only on the workers who lived just beyond their doors. Companies now could pick between two versions of the American worker — humans and robots. And at Tenere Inc., where 132 jobs were unfilled on the week the robots arrived, the balance was beginning to shift.

So, where does that leave us?

The prevalent response has been to worry about mass unemployment. However, as I explained a month ago, I don’t think that’s the issue, at least at the macro level.

If workers are displaced from their jobs in one plant or sector, they can’t just remain unemployed. They have to find jobs elsewhere, often at lower wages than they earned before. That’s how capitalism works.

Much the same holds for workers who don’t lose their jobs but who, as new technologies are adopted by their employers, are deskilled and otherwise become appendages of the new machines. They can’t just quit. They remain on the job, even as their working conditions deteriorate and the value of their ability to work falls—and their employers’ profits rise.

No, the real problem is how the gains from the introduction of robots and other new technologies are being unevenly distributed.

And that’s an old problem, which was confronted by forces as diverse as the Luddites and the John L. Lewis-led United Mine Workers of America, neither of which was opposed to the use of new, labor-saving technologies.

In fact, Lewis’s argument was that machinery should replace hand work in the mines, which would serve both to ease the burden of miners’ work and to increase their wages—all under the watchful eye of their union. And mine-owners who attempted to pay workers less, without technological improvements, should be driven out of business.

Mr. Lewis called upon the miners to accept machinery, since they could not turn back the clock, but to demand a fair share of the benefits of mechanization in the form of shorter hours and increased compensation. He said that machines must be made the workingman’s ally, and that nothing was to be gained by fighting them.

The fact is, right now workers are not getting “a fair share of the benefits of mechanization,” whether in the form of shorter hours or increased compensation.

And if employers are not willing to provide those benefits, workers themselves should be given a say in what kinds of robots and other new technologies will be introduced, what their working hours will be, and how much they will be compensated.

Only then will workers be able to confidently say, “we’ve got the robots.”


New technologies—automation, robotics, artificial intelligence—have created a specter of mass unemployment. But, as critical as I am of existing economic institutions, I don’t see that as the issue, at least at the macro level. The real problem is the distribution of the value that is produced with the assistance of the new technologies—in short, the specter of growing inequality.

David Autor and Anna Salomons (pdf) are the latest to attempt to answer the question about technology and employment in their contribution to the recent ECB Forum on Central Banking. Their empirical work leads to the conclusion that while “industry-level employment robustly falls as industry productivity rises. . .country-level employment generally grows as aggregate productivity rises.”

To me, their results make sense. But for a different reason.


It is clear that, in many sectors—perhaps especially in manufacturing—the growth in output (the red line in the chart above) is due to the growth in labor productivity (the blue line) occasioned by the use of new technologies, which in turn has led to a decline in manufacturing employment (the green line).


But for the U.S. economy as a whole, especially since the end of the Great Recession, the opposite is true: the growth in hours worked has played a much more important role in explaining the growth of output than has the growth in labor productivity.
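The decomposition behind that comparison is a simple identity. Output equals hours worked times output per hour, so, in growth rates,

\[
\Delta \ln Y \;=\; \Delta \ln H \;+\; \Delta \ln\!\left(\frac{Y}{H}\right),
\]

and the question is only which term on the right dominates. In manufacturing it has been the productivity term; for the economy as a whole since 2009 it has been the hours term.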

The fact is, increases in labor productivity—which stem at least in part from labor-saving technologies—have not, at least in recent years, led to massive unemployment. (The losses in jobs that have occurred are much more a cyclical phenomenon, due to the crash of 2007-08 and the long, uneven recovery.)

But that’s not because, as Autor and Salomons (and mainstream economists generally) would have it, there are “positive spillovers” of technological change to the rest of the economy. It’s because, under capitalism, workers are forced to have the freedom to sell their ability to work to employers. There’s no other choice. If workers are displaced from their jobs in one plant or sector, they can’t just remain unemployed. They have to find jobs elsewhere, often at lower wages than they earned before. That’s how capitalism works.

Much the same holds for workers who don’t lose their jobs but who, as new technologies are adopted by their employers, are deskilled and otherwise become appendages of the new machines. They can’t just quit. They remain on the job, even as their working conditions deteriorate and the value of their ability to work falls—and their employers’ profits rise.

What happens, in other words, is the gains from the new technologies that are adopted are distributed unevenly.


This is clear if we look at the economy as a whole since the end of the Great Recession: labor productivity (the blue line in the chart above) has increased by 7.5 percent, while the wage share (the green line) has barely budged and is actually now lower than it was in 2009.


The results are even more dramatic over a long time frame—over periods when labor productivity was growing relatively quickly (from 1947 through the 1970s, and from 1980 until the most recent crash) and when productivity has been growing much more slowly (since 2009).

During the initial period (until 1980), labor productivity (the blue line in the chart) almost doubled while income shares—to the bottom 90 percent (the red line) and the top 1 percent (the green line)—remained relatively constant.

After 1980, however—during periods of first rapid and then slow growth in productivity—the situation changed dramatically: the share of income going to the bottom 90 percent declined, while the share captured by the top 1 percent soared. Even as new technologies were adopted across the economy, the vast majority of people were forced to find work, at stagnant or declining wages, while their employers and corporate executives captured a larger and larger share of the new value that was being created.

Autor and Salomons think they’ve arrived at a conclusion—concerning the “relative neutrality of productivity growth for aggregate labor demand”—that is optimistic.

The conclusions of my analysis are much more disconcerting. The broad sharing of the fruits of technological change, from the end of World War II to the late 1970s, was relatively short-lived. Since then, the conditions within which new technologies have been adopted have created a mass of increasingly desperate workers, who have either been forced to labor in more automated workplaces or been displaced and thus forced to find employment elsewhere. In both cases, their share of income has declined while the share captured by a tiny group at the top has continued to rise. That’s the “new normal” (from 1980 onward), which looks a lot like the “old normal” of capitalist growth (prior to the first Great Depression), interrupted by a relatively short period (during the three postwar decades) that is increasingly recognized as the exception.

Even more, I can make the case that things would be much better if the adoption of new technologies did in fact displace a large number of labor hours. Then, the decreasing amount of labor that needed to be performed could be spread among all workers, thus lessening the need for everyone to work as many hours as they do today.
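The arithmetic is straightforward. If the total hours of labor society needs, H, fall by a fraction δ and the remainder is spread evenly across the same N workers, each individual workweek shrinks in proportion:

\[
h_{\text{new}} = \frac{(1-\delta)\,H}{N} = (1-\delta)\,h_{\text{old}},
\]

so a hypothetical 20 percent reduction in required labor would turn a 40-hour week into a 32-hour week for everyone, with total output unchanged.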

But that would require a radically different set of economic institutions, one in which people were not forced to have the freedom to sell their ability to work to someone else. However, that’s not a world Autor and Salomons—or mainstream economists generally—can ever imagine, let alone work to create.