Posts Tagged ‘Brexit’


It is extraordinary that the hegemonic economic theory in the world today—neoclassical economics—still lacks an adequate theory of the firm.

It beggars belief both because neoclassical economics is the predominant theory that is taught to hundreds of thousands of students every year and used to make sense of the world and formulate policy in countless think tanks and government agencies and because the firm (or enterprise or corporation) is one of the central institutions of capitalism. It’s where many (but of course not all) goods and services are produced, value and surplus-value are created, and profits generated for capitalists.

And yet the neoclassical notion of the firm, even when developed by Nobel Prize-winning economists (such as Oliver Hart and Bengt Holmstrom), is not much more than an empty box—without any real history and, as it turns out, without any links to politics.

Daniel Carpenter, the Allie S. Freed Professor of Government in the Faculty of Arts and Sciences and Director of Social Sciences at the Radcliffe Institute for Advanced Study at Harvard University, certainly thinks that’s a problem in terms of making sense of how firms came to be constituted historically and what their effects are on contemporary society.

Q: The neoclassical theory of the firm does not consider political engagement by corporations. How big an omission do you think this is?

 I think it’s an immense omission. For one, we can’t even talk about the historical origins of many firms without talking about corporate charters, limited liability arrangements, zoning, public contracts and grants, and so on. To view these processes as legal and not political is a significant mistake. I’m currently writing a lot on the history of petitioning in Europe and North America, and in areas ranging from railroads, to technology-heavy industries, to extractive industries, to banking, firms (or their investors) had to bring a case before the legislature, or an agency of government, or both. They usually used petitions to do so. 

 Beyond the past and into the present, there are a range of firm activities that we can’t understand without looking at politics. Industrial organization considers regulator-firm interactions, but does not theorize the fact that now most firms have regulatory affairs and compliance offices, or the fact that firms hire not just lobbyists but lawyers to do a lot of political work for them.

 And in the future, the profitability and survival prospects of many firms in the coming years will depend heavily, in a polarized environment, on the political skills of managers. The theory of the firm was developed in an era (1950s – 2000) when globalism was the rule. What might it look like if Trump and Brexit are the new norm?

Today, of course, many citizens are concerned about the corrupt links between the capitalist firms in which they work and the governments that are supposed to represent the people. In my view, that concern was one of the causes of the Brexit vote and Trump’s victory in the U.S. presidential election.

The problem is, neither the post-Brexit British government nor the Trump administration has given any indication they’re going to solve the problem of the firm. Quite the opposite. Both have tied themselves to the very same capitalist firms that have wreaked havoc on society for decades now.

Meanwhile, neoclassical economists continue to build their models based on a theory of the firm that bears no relationship to the way firms operate in the real world, manipulating market rules and political actors to their own ends.


Actually, robots do kill people.

A 21 year old external contractor was installing the robot together with a colleague when he was struck in the chest by the robot and pressed against a metal plate. He later died of his injuries, reports Chris Bryant, the FT’s Frankfurt correspondent.

While we certainly need to be aware of industrial accidents associated with robots, what we really need to be more concerned about is the relationship between the use of robotics and the metaphorical killing of workers via the elimination of their jobs.

Richard Baldwin [ht: ja], president of the Centre for Economic Policy Research and Editor-in-Chief of Vox (which he founded in June 2007), appears to agree:

Technological advances could now mean white-collar, office-based workers and professionals are at risk of losing their jobs

But, he argues, those who expect Brexit or the kinds of protectionist policies advocated by President Trump to bring back manufacturing jobs are sadly mistaken.

I think he’s right. Blaming international trade and immigration for the precarious plight of the working-class within advanced nations is wrongheaded.* Moreover, as Baldwin explains elsewhere, “We shouldn’t try and protect jobs; we should protect workers.”

However, the mistake Baldwin and other technological optimists make is to treat industrial robots (and their contemporary extensions, such as telepresence and telerobotics) in a purely instrumental fashion, as both inevitable and technically neutral. Just like the ubiquitous NRA bumper sticker: “Guns Don’t Kill People, People Kill People.”

As Bruno Latour (pdf) has explained, the NRA “cannot maintain that the gun is so neutral an object that it has no part in the act of killing.”

You are different with a gun in hand; the gun is different with you holding it. You are another subject because you hold the gun; the gun is another object because it has entered into a relationship with you. The gun is no longer the gun-in-the-armory or the gun-in-the-drawer or the gun-in-the-pocket, but the gun-in-your-hand, aimed at someone who is screaming. What is true of the subject, of the gunman, is as true of the object, of the gun that is held. A good citizen becomes a criminal, a bad guy becomes a worse guy; a silent gun becomes a fired gun, a new gun becomes a used gun, a sporting gun becomes a weapon.

And much the same is true of robotics. Employers are different when they have access to robots. They are another subject because they can reconfigure production by purchasing and installing robots; and robots are different objects when they enter into a relationship with employers, who stand opposed to their workers.

So, as it turns out, “it is neither people nor guns that kill” people. And, by the same token, it is neither employers nor robots that kill workers and their jobs. Responsibility for the action must be shared between the two—the employers who utilize robotics to increase productivity and raise profits, and the robots that are engineered, produced, and then sold for particular purposes, like transforming jobs and replacing workers.

So, yes, we shouldn’t try and protect jobs. Instead, we should protect workers. But the only way to protect workers is to create institutions for workers to be able to protect themselves. Leaving the European Union and electing Trump won’t do that. They are merely empty promises. Nor, as Baldwin presumes, will leaving robots in the hands of employers and expecting government programs to pick up the pieces.

It is still the case that most people are forced to have the freedom to attempt to sell their ability to work to a small group of employers, who have the option of using robots to replace them—across the globe—if and when they deem it profitable.

What that means is: robots and their employers do kill workers. Because of profits.


*And, as the United Nations Conference on Trade and Development (pdf) warns, “the increased use of robots in developed countries risks eroding the traditional labour cost advantage of developing countries.” That’s another reason to be cautious when it comes to facile predictions that the combination of globalization and robotics will be an unqualified advantage to workers in the Global South.


It’s now official: Truth is dead.

Oxford Dictionaries has selected “post-truth” as 2016’s international word of the year, after seeing a spike in frequency this year in the context of the Brexit referendum in the United Kingdom and the presidential election in the United States.*

Many of us are neither surprised nor dismayed by the realization that Big-T Truth—in relation to politics, the media, and much else—is being called into question. We’re not surprised because telling the truth was never a mainstay of political discourse or newspaper reporting. Remember the lies that served as the basis for President Lyndon B. Johnson’s 1964 order to launch retaliatory air strikes on North Vietnam and his request for a joint resolution of Congress—the Gulf of Tonkin Resolution—which gave him authorization, without a formal declaration of war by Congress, for the use of conventional military force in Southeast Asia? Or the New York Times in the run-up to the 2003 invasion of Iraq, especially Judith Miller’s now thoroughly discredited reporting about Iraq’s supposedly brimming stockpile of weapons of mass destruction?

Nor are we dismayed, since we’ve long understood that different sets of “facts” and “truths” are produced within different theoretical frameworks and that there’s no Archimedean standpoint—independent and outside of those frameworks—to decide that one or another corresponds to reality. The idea that there’s a set of bedrock facts or a single truth about reality is a holdover from positivism and other foundationalist theories of knowledge that have long been contested.

What we do need to be aware of is how those different facts and truths are constructed (the discursive and social conditions under which they are produced), and of course how they lead to different consequences (on the theories and the wider society). It’s a stance concerning knowledge that is often referred to as “partisan relativism”—relativist in the sense that validity criteria are diverse and internal to theoretical frameworks, partisan because producing knowledges always involves taking a stance, in favor of one set of facts and truths and against others.

To be clear, then, “post-truth” doesn’t mean (as is often presumed) that theoretical and empirical analysis grinds to a halt or that analysts—in whatever field, humanities, social sciences, or natural sciences—are unable to make pronouncements about the world. On the contrary. It makes discussion and debate, amongst and between those who use different theoretical frameworks, even more important—because, of course, the stakes for the world in which we live are so high.

Julia Shaw, a forensic psychologist, adopts much the same perspective:

They say that we have found ourselves in a world lost to emotion, irrationality, and a weakening grasp on reality. That lies don’t faze us, and knowledge doesn’t impress us. That we are post-truth, post-fact. But, is this actually a bad thing?

I’m a factual relativist. I abandoned the idea of facts and “the truth” some time last year. I wrote a whole science book, The Memory Illusion, almost never mentioning the terms fact and truth. Why? Because much like Santa Claus and unicorns, facts don’t actually exist. At least not in the way we commonly think of them.

We think of a fact as an irrefutable truth. According to the Oxford dictionary, a fact is “a thing that is known or proved to be true.” And where does proof come from? Science?

Well, let me tell you a secret about science; scientists don’t prove anything. What we do is collect evidence that supports or does not support our predictions. Sometimes we do things over and over again, in meaningfully different ways, and we get the same results, and then we call these findings facts. And, when we have lots and lots of replications and variations that all say the same thing, then we talk about theories or laws. Like evolution. Or gravity. But at no point have we proved anything.

Still, we need to contend with the fact that so many liberals—especially liberal politicians, pundits, and political economists—are bemoaning what they consider to be the descent into a post-truth world. They’re worried that non-liberal political candidates and voters increasingly deny facts, manipulate the truth, and prefer emotion to expertise. And so they rush to defend “the facts” and Truth.

Rune Møller Stahl and Bue Rübner Hansen, I think, get it right:

liberals’ nostalgia for factual politics seems designed to mask their own fraught relationship with the truth. The supposedly honest technocrats and managers—who enacted neoliberal measures with the same ferocity as their right-wing counterparts—relied on a certain set of facts to displace the material truths they refused to acknowledge. . .

As liberals took over facts, they pushed social conflict to the non-factual realm, to the domain of values. Instead of struggles over domination and exploitation, we got the culture wars. There, progressive values held no sway; they were sold with a sense of moral superiority then betrayed by the spinelessness of triangulation and by policies that undermined the welfare state and organized labor.

As I see it, the defeats mainstream liberals suffered under the Brexit vote and the U.S. presidential election don’t prove that voters hate facts or truths. Those events (and we can expect more to come in the years ahead) merely show that enough regular citizens are fed up with business as usual—with increasingly unconvincing liberal facts and truths, which deny the severe losses and dislocations under the existing rules and institutions—to revoke their trust in the so-called experts and, swayed by a different set of facts and truths, to throw in their lot with the only available alternatives.

The battle over facts, truths, and expertise hasn’t ended. But the idea that there’s only one—one set of facts, one truth, one group of experts—has. Which means the critique of the existing order After Truth has only just begun.


*According to Oxford Dictionaries, the first time the term post-truth was used was in a 1992 essay by the late playwright Steve Tesich in the Nation magazine. Tesich, writing about the Iran-Contra scandal and the Persian Gulf war, said that “we, as a free people, have freely decided that we want to live in some post-truth world.” The term “post-truth politics” was coined by David Roberts in a blog post for Grist on 1 April 2010, where it was defined as “a political culture in which politics (public opinion and media narratives) have become almost entirely disconnected from policy (the substance of legislation).”

Note: yes, that is Schrödinger’s cat at the top of the post.


Mark Tansey, “Coastline Measure” (1987)

The pollsters got it wrong again, just as they did with the Brexit vote and the Colombia peace vote. In each case, they incorrectly predicted one side would win—Hillary Clinton, Remain, and yes—and many of us were taken in by the apparent certainty of the results.

I certainly was. In each case, I told family members, friends, and acquaintances it was quite possible the polls were wrong. But still, as the day approached, I found myself believing the “experts.”

It still seems, when it comes to polling, we have a great deal of difficulty with uncertainty:

Berwood Yost of Franklin & Marshall College said he wants to see polling get more comfortable with uncertainty. “The incentives now favor offering a single number that looks similar to other polls instead of really trying to report on the many possible campaign elements that could affect the outcome,” Yost said. “Certainty is rewarded, it seems.”

But election results are not the only area where uncertainty remains a problematic issue. Dani Rodrik thinks mainstream economists would do a better job defending the status quo if they acknowledged their uncertainty about the effects of globalization.

This reluctance to be honest about trade has cost economists their credibility with the public. Worse still, it has fed their opponents’ narrative. Economists’ failure to provide the full picture on trade, with all of the necessary distinctions and caveats, has made it easier to tar trade, often wrongly, with all sorts of ill effects. . .

In short, had economists gone public with the caveats, uncertainties, and skepticism of the seminar room, they might have become better defenders of the world economy.

To be fair, both groups—pollsters and mainstream economists—acknowledge the existence of uncertainty. Pollsters (and especially poll-based modelers, like one of the best, Nate Silver, as I’ve discussed here and here) always say they’re recognizing and capturing uncertainty, for example, in the “error term.”


Even Silver, whose model included a much higher probability of a Donald Trump victory than most others, expressed both defensiveness about and confidence in his forecast:

Despite what you might think, we haven’t been trying to scare anyone with these updates. The goal of a probabilistic model is not to provide deterministic predictions (“Clinton will win Wisconsin”) but instead to provide an assessment of probabilities and risks. In 2012, the risks to Obama were lower than was commonly acknowledged, because of the low number of undecided voters and his unusually robust polling in swing states. In 2016, just the opposite is true: There are lots of undecideds, and Clinton’s polling leads are somewhat thin in swing states. Nonetheless, Clinton is probably going to win, and she could win by a big margin.


As for the mainstream economists, while they may acknowledge exceptions to the rule that “everyone benefits” from free markets and international trade in some of their models and seminar discussions, they acknowledge no uncertainty whatsoever when it comes to celebrating the current economic system in their textbooks and public pronouncements.

So, what’s the alternative? They (and we) need to find better ways of discussing and possibly “modeling” uncertainty. Since the margins of error, different probabilities, and exceptions to the rule are ways of hedging their bets anyway, why not just discuss the range of possible outcomes and all of what is included and excluded, said and unsaid, measurable and unmeasurable, and so forth?

The election pollsters and statisticians may claim the public demands a single projection, prediction, or forecast. By the same token, the mainstream economists are no doubt afraid of letting the barbarian critics through the gates. In both cases, the effect is to narrow the range of relevant factors and the likelihood of outcomes.

One alternative is to open up the models and develop a more robust language to talk about fundamental uncertainty. “We simply don’t know what’s going to happen.” In both cases, that would mean presenting the full range of possible outcomes (including the possibility that there can be still other possibilities, which haven’t been considered) and discussing the biases built into the models themselves (based on the assumptions that have been used to construct them). Instead of the pseudo-rigor associated with deterministic predictions, we’d have a real rigor predicated on uncertainty, including the uncertainty of the modelers themselves.
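To make the point concrete, here is a minimal sketch (with entirely hypothetical numbers) of what “presenting the full range of possible outcomes” can look like in practice: instead of reporting a single projected margin, simulate many elections in which both ordinary sampling error and a possible systematic, shared polling bias are treated as uncertain, and report the resulting distribution of outcomes.

```python
import random

def simulate_election(poll_lead=0.03, sampling_sd=0.02, systematic_sd=0.03,
                      n_sims=100_000, seed=42):
    """Simulate many elections under uncertainty about the polls themselves.

    poll_lead: the leader's average margin in the polls (hypothetical: 3 points)
    sampling_sd: ordinary sampling error in the polling average
    systematic_sd: possible correlated bias shared across all polls
    Returns the fraction of simulations in which the polling leader wins.
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_sims):
        # Each simulated election draws one shared systematic bias
        # and one sampling error, then checks who actually wins.
        bias = rng.gauss(0, systematic_sd)
        noise = rng.gauss(0, sampling_sd)
        true_margin = poll_lead + bias + noise
        if true_margin > 0:
            wins += 1
    return wins / n_sims

if __name__ == "__main__":
    p = simulate_election()
    print(f"Polling leader wins in about {p:.0%} of simulations")
```

The design choice is the one at issue in the text: once the model admits that the polls themselves may share a common bias, a 3-point lead no longer reads as a sure thing but as a probability—roughly four chances in five under these assumed error sizes—and the honest headline becomes that range, not a single number.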

Admitting that they (and therefore we) simply don’t know would be a start.

