

It is extraordinary that the hegemonic economic theory in the world today—neoclassical economics—still lacks an adequate theory of the firm.

It beggars belief both because neoclassical economics is the predominant theory taught to hundreds of thousands of students every year and used to make sense of the world and formulate policy in countless think tanks and government agencies, and because the firm (or enterprise or corporation) is one of the central institutions of capitalism. It’s where many (but of course not all) goods and services are produced, value and surplus-value are created, and profits generated for capitalists.

And yet the neoclassical notion of the firm, even when developed by Nobel Prize-winning economists (such as Oliver Hart and Bengt Holmstrom), is not much more than an empty box—without any real history and, as it turns out, without any links to politics.

Daniel Carpenter, the Allie S. Freed Professor of Government in the Faculty of Arts and Sciences and Director of Social Sciences at the Radcliffe Institute for Advanced Study at Harvard University, certainly thinks that’s a problem in terms of making sense of how firms came to be constituted historically and what their effects are on contemporary society.

Q: The neoclassical theory of the firm does not consider political engagement by corporations. How big an omission do you think this is?

 I think it’s an immense omission. For one, we can’t even talk about the historical origins of many firms without talking about corporate charters, limited liability arrangements, zoning, public contracts and grants, and so on. To view these processes as legal and not political is a significant mistake. I’m currently writing a lot on the history of petitioning in Europe and North America, and in areas ranging from railroads, to technology-heavy industries, to extractive industries, to banking, firms (or their investors) had to bring a case before the legislature, or an agency of government, or both. They usually used petitions to do so. 

 Beyond the past and into the present, there are a range of firm activities that we can’t understand without looking at politics. Industrial organization considers regulator-firm interactions, but does not theorize the fact that now most firms have regulatory affairs and compliance offices, or the fact that firms hire not just lobbyists but lawyers to do a lot of political work for them.

 And in the future, the profitability and survival prospects of many firms in the coming years will depend heavily, in a polarized environment, on the political skills of managers. The theory of the firm was developed in an era (1950s–2000) when globalism was the rule. What might it look like if Trump and Brexit are the new norm?

Today, of course, many citizens are concerned about the corrupt links between the capitalist firms in which they work and the governments that are supposed to represent the people. In my view, that concern was one of the causes of the Brexit vote and Trump’s victory in the U.S. presidential election.

The problem is, neither the post-Brexit British government nor the Trump administration has given any indication they’re going to solve the problem of the firm. Quite the opposite. Both have tied themselves to the very same capitalist firms that have wreaked havoc on society for decades now.

Meanwhile, neoclassical economists continue to build their models based on a theory of the firm that bears no relationship to the way firms operate in the real world, manipulating market rules and political actors to their own ends.


Special mention



Tim Harford offers a short but useful piece on the medieval origins of modern banking—in the Knights Templar, the great fair of Lyon, and so on.*

The Templars dedicated themselves to the defence of Christian pilgrims to Jerusalem. The city had been captured by the first crusade in 1099 and pilgrims began to stream in, travelling thousands of miles across Europe.

Those pilgrims needed to somehow fund months of food and transport and accommodation, yet avoid carrying huge sums of cash around, because that would have made them a target for robbers.

Fortunately, the Templars had that covered. A pilgrim could leave his cash at Temple Church in London, and withdraw it in Jerusalem. Instead of carrying money, he would carry a letter of credit. The Knights Templar were the Western Union of the crusades.

But, with the loss of control over Jerusalem, the Templars were eventually disbanded in 1312.

So who would step into the banking vacuum?

If you had been at the great fair of Lyon in 1555, you could have seen the answer. Lyon’s fair was the greatest market for international trade in all Europe.

But at this particular fair, gossip was starting to spread about an Italian merchant who was there, and making a fortune.

He bought and sold nothing: all he had was a desk and an inkstand.

Day after day he sat there, receiving other merchants and signing their pieces of paper, and somehow becoming very rich.

The locals were very suspicious.

But to a new international elite of Europe’s great merchant houses, his activities were perfectly legitimate.

He was buying and selling debt, and in doing so he was creating enormous economic value.

And that’s Harford’s mistake: there is nothing about the buying and selling of debt (or, for that matter, any other financial service, from changing money to issuing letters of credit) that creates value, enormous or otherwise.

Banking often enables value to be created. Surplus-value, too. But it doesn’t create either value or surplus-value.

What bankers do is capture a portion of the surplus-value that is embodied in the goods and services that are produced, which is then distributed to them by those who actually appropriate the surplus-value. In other words, bankers (like many others, from managers to merchants) share in the booty.

Medieval bankers managed to get a cut of the surplus they did not create. And that’s exactly what bankers do today.


*Harford also notes that “by turning personal obligations into internationally tradable debts, these medieval bankers were creating their own private money, outside the control of Europe’s kings.” But he fails to mention the obvious contemporary parallel, Bitcoin, the private digital currency and payments system that was invented to finance criminal activities.


When it comes to artificial intelligence and automation, the current White House seems to want to have it both ways.

On one hand, it warns about the potentially unequalizing, “winner-take-most” effects of the economic use of artificial intelligence:

Research consistently finds that the jobs that are threatened by automation are highly concentrated among lower-paid, lower-skilled, and less-educated workers. This means that automation will continue to put downward pressure on demand for this group, putting downward pressure on wages and upward pressure on inequality. In the longer-run, there may be different or larger effects. One possibility is superstar-biased technological change, where the benefits of technology accrue to an even smaller portion of society than just highly-skilled workers. The winner-take-most nature of information technology markets means that only a few may come to dominate markets. If labor productivity increases do not translate into wage increases, then the large economic gains brought about by AI could accrue to a select few. Instead of broadly shared prosperity for workers and consumers, this might push towards reduced competition and increased wealth inequality.

But then it invokes, and repeats numerous times across the report, the usual mainstream economists’ nostrums about the “strong relationship between productivity and wages”—such that “with more AI the most plausible outcome will be a combination of higher wages and more opportunities for leisure for a wide range of workers.”

Except, of course, historically that has not been the case—certainly not in the United States.



For example, from the early 1970s to the present, workers’ wages have not kept pace with increases in productivity. Not by a long shot. As is clear from the chart above, productivity since 1973 has risen much more than workers’ compensation—72.2 percent, compared to a paltry 9.2 percent.


And while over the same period hours worked have in fact fallen, the decrease in the United States (a minuscule 5.6 percent) has been far less than the increase in productivity—and much less than in other countries, such as France (24 percent) and Germany (27.3 percent).

So, yes, whether the use of artificial intelligence leads to improvements for U.S. workers—in the form of higher wages and fewer hours worked—”depends not only on the technology itself but also on the institutions and policies that are in place.”

But the experience of the past four decades suggests it will not benefit the American working class.

And there’s nothing to suggest that trend won’t continue—unless, of course, there is a radical change in economic institutions and policies that would allow workers to have much more of a say in the technologies that are adopted and how wages and hours are set.


Special mention



Certainly not by mainstream economists—not if they continue to defend their turf and to attack the new literature on “Slavery’s Capitalism” with the vehemence they’ve recently displayed.

It makes me want to forget I ever obtained my Ph.D. in economics and the fact that I’ve spent much of my life working in and around the discipline.

A recent article in The Chronicle of Higher Education [ht: ja] highlights Edward E. Baptist’s book, The Half Has Never Been Told (which I wrote about back in 2014), and some of the outrageous ways it has been criticized by mainstream economists—first in a review in the Economist (which was so over-the-top it was subsequently retracted) and then in a group of reviews published in the Journal of Economic History (unfortunately, behind a paywall).

In my view, this is not a clash between two disciplines (as the Chronicle would have it), but rather a fundamental incompatibility between mainstream economic theory and a group of historians who have refused to adhere to the epistemological and methodological protocols established and defended—with a remarkable degree of ignorance and intolerance—by mainstream economists.

What is at stake is a particular view of slavery in relation to U.S. capitalism—as well as a way of producing economic history (of slavery, capitalism, and much else).

Baptist’s argument, in a nutshell, is that slavery was central to the development of U.S. capitalism (“not just shaping but dominating it”) and systematic torture (a “whipping machine”) was one of the principal means slaveowners used to increase the productivity of cotton-picking slaves and thus boost the surplus they were able to extract from them.

Mainstream economists hold a quite different view—that slavery was an outdated, inefficient system that had little to do with the growth of capitalism in North America, and increased productivity in cotton production was due to biological innovation (improved varieties of seeds that yielded more pickable cotton) not torture in the labor process.

They also use different frameworks of analysis: whereas Baptist relies on slave narratives and contingent historical explanations, mainstream economists fetishize quantitative methods and invoke universal (transcultural and transhistorical) modes of individual decision-making.

Those are the two major differences that separate Baptist (and other “Slavery’s Capitalism” historians) and mainstream economists.

This is how one mainstream economist, Alan L. Olmstead, begins his review:

Edward Baptist’s study of capitalism and slavery is flawed beyond repair.

Olmstead then proceeds to accuse Baptist of being careless with the numbers, of “making things up,” and “misunderstanding economic logic,” all of which leads to “a vast overstatement of cotton’s and slavery’s ‘role’ on the wider economy and on capitalist development.”

He concludes:

All in all, Baptist’s arguments on the sources of slave productivity growth and on the essentiality of slavery for the rise of capitalism have little historical foundation, raise bewildering and unanswered contradictions, selectively ignore conflicting evidence, and are error-ridden.

Baptist, for his part, has responded to Olmstead’s scathing attack (as well as critical reviews by others) in the following fashion:

Some scholars axiomatically refuse to accept the implications of the fact that brutal technologies of violence drove slave labor. They retreat into homo economicus fallacies to resist considering the question of whether in some cases violence increased, or was calibrated over time to enhance production. They evade consideration of survivors’ testimony about those changes, insisting that this data is “anecdotal”—as if the enslavers’ claims on which they build arguments are epistemologically any different.

That’s a problem for those of us who work in and around the discipline of economics: mainstream economists are simply unwilling to give up on homo economicus and doggedly refuse to examine either the economic effects of the brutal system of torture that was central to U.S. slavery or the role slave cotton played in the development of U.S. capitalism. Not to mention their arrogance in responding to the work of anyone who argues otherwise.

And that’s why the other half of the story will never be told by mainstream economists.


The extensive media coverage since Fidel Castro died has included many different voices—from those of journalists who interviewed him and wrote about him, especially in the early years, through Cold Warriors and Cuban émigrés who did battle with him to political figures whose comments have been crafted to align with contemporary constituencies and goals.* But the media have left out one important group: ordinary people who, over the years, found themselves inspired by and generally sympathetic with (even when critical of many features of) the Cuban Revolution.

I’m referring to people around the globe—students, workers, peasants, activists, and many others, throughout the Americas and across the world—who have understood the significance of the Revolution for Cuba and, as a historical example of anti-imperialism and human development, for their own attempts to enact radical political and economic change.

What we haven’t learned from recent coverage is that pre-revolutionary Cuba was under the thumb of the U.S.-backed dictatorship of Fulgencio Batista, who governed a relatively wealthy but highly unequal country in which the majority of people had no voice and suffered from high unemployment, a low level of literacy, poor health, and inadequate housing. And they were exploited in an economy dominated by large landowners, U.S. corporations, and American organized crime. The 26th of July Movement (a name that originated in the failed attack led by Fidel on the Moncada Barracks in 1953) launched an insurrection in 1956, with the landing of a small force that found its way to the Sierra Maestra Mountains, and, with the support of an army of volunteers in the countryside and “Civic Resistance” groups in the cities, succeeded in overthrowing Batista. A small revolutionary organization with widespread popular support managed to confront and ultimately defeat a typical authoritarian Washington-backed Latin American regime just 90 miles off the U.S. coast.

And while great attention has been paid to the growing tensions from early on between the new Cuban government and the United States, which sponsored a series of clandestine invasions and assassination attempts, mainstream accounts have overlooked the tremendously successful campaigns to do what had seemed impossible in Cuba and elsewhere—to eliminate illiteracy, promote health, and improve living and working conditions, especially in the countryside. In fact, one of the reasons Havana became and remained so shabby (as legions of foreign visitors who rarely venture outside the capital city never fail to describe) was the Cuban government’s focus on transforming conditions in rural areas so that, in contrast to many other countries, impoverished agricultural workers and their families would have no need to move en masse into the city.

That’s what I noticed when I traveled to Cuba in the late 1970s during the administration of Jimmy Carter, when U.S. travel restrictions were allowed to lapse. I saw nothing like the urban ghettoes I had driven through before boarding my flight in Montreal, and nowhere did I come across the poverty and inequality characteristic of rural areas across all the countries where I’d lived and worked in Latin America.


Thanks to the Revolution, Cuba has achieved enormous progress—not only in comparison to the rest of Latin America and the Third World but even (at least in terms of indicators like infant mortality) in comparison to the United States. That radical turnaround, and the ability to maintain it in the face of unrelenting U.S.-government opposition over decades, is the major reason Fidel and the Cuban Revolution have been admired around the world.

By the same token, the Cuban Revolution has not been romanticized or supported uncritically, especially as a model for left-wing movements elsewhere. For the most part, the economy has been organized around state ownership, not worker-run enterprises. And a small number of political leaders, including Fidel himself, and a single political party have managed to hold onto power, with little in the way of democratic decision-making beyond the local level—not to mention public antipathy towards and discrimination against LGBT people, the jailing of journalists and political dissidents, and so on. Economically and politically, Cuba is no paradise.

Still, for all its faults and missteps, the Cuban Revolution has long served as an example of the ability of people to struggle against the impossible and to win. Fidel was thus on the right side of history.


*Including the anti-socialist drivel offered by John McTernan, a former speech writer for Tony Blair.