Posts Tagged ‘epistemology’

In this post, I continue the draft of sections of my forthcoming book, “Marxian Economics: An Introduction.” The first five posts (here, here, here, here, and here) will serve as the basis for Chapter 1, Marxian Economics Today. The next six (here, here, here, here, here, and here) are for Chapter 2, Marxian Economics Versus Mainstream Economics. This post (following on a previous one) is for Chapter 3, Toward a Critique of Political Economy.

The necessary disclosure: these are merely drafts of sections of the book, some rougher or more preliminary than others. Right now, I’m just trying to get them done in some form. They will all be extensively revised and rewritten in preparing the final book manuscript.


It is difficult to fully understand the Marxian critique of political economy without some understanding of Hegel. No less an authority than Lenin wrote that “it is impossible completely to understand Marx’s Capital, and especially its first chapter, without having thoroughly studied and understood the whole of Hegel’s Logic.” Marx himself wrote “I therefore openly avowed myself the pupil of that mighty thinker, and even here and there, in the chapter on the theory of value, coquetted with the modes of expression peculiar to him.”

Those are the two major reasons for keeping Hegel in mind: because Marx, like many young German intellectuals in the 1830s and 1840s, started with Hegel; and because, many years later, Marx’s critique of political economy was still influenced by his theoretical encounter with Hegel.*

But, of course, that makes understanding the movement toward the Marxian critique of political economy a bit difficult for contemporary readers, who generally aren’t familiar with Hegel’s writings. So, in this section, I want to present a brief summary of Hegel’s philosophy. But, I caution readers, this should not be taken to be a presentation of all aspects of Hegel’s thought. We only want to examine Hegel to the extent that doing so aids our comprehension of Marx’s theoretical journey and his later critique of political economy.

In his twenties, Marx, along with other young German intellectuals (including Arnold Ruge, Bruno Bauer, and Ludwig Feuerbach), formed a loose grouping called, variously, the Young Hegelians or the Left Hegelians. In their discussions and debates, these young thinkers sought both to draw on Hegel’s philosophy and to radicalize it, aiming their attacks especially at religion and the German political system.** Later, they turned their radical critique on Hegel’s philosophy itself.

So, what was it in Hegel’s thought that was so influential for Marx and the other Young Hegelians? One area is particularly important: the theory of knowledge and, closely related, the philosophy of history.

On the first point, Hegel’s view was that the two previous traditions—of René Descartes and Immanuel Kant—got it wrong. Descartes argued that it was impossible to know things as they appear to us (phenomena); we can know only things as they are in themselves (noumena). Experience was deceptive. Hence his focus on reason, which alone could provide certainty about the world. Kant posited exactly the opposite—that it was possible to know things as they appear to us but not their essences, things as they are in themselves. Therefore, science was only capable of providing knowledge of the appearances of things, of empirical experiences and observations about nature; morality and religion operated in the unknowable realm of things in themselves.

Hegel’s great contribution was to solve the problem and affirm what both Descartes and Kant denied. For him, history was an unfolding of the mind (Absolute Spirit) coming to know itself as phenomenon, to the point of its full development, when it is aware of itself as it is, as noumenon. In other words, the consciousness of things as they appear to us leads to knowledge of the essence of things. At the end of the process, when the object has been fully “spiritualized” by successive cycles of consciousness’s experience, consciousness will fully know the object and at the same time fully recognize that the object is none other than itself. That is the end of history.

How does this historical process work? How does the mind or Absolute Spirit pass through successive stages until it reaches full awareness? That’s where the dialectic comes in. According to Hegel (especially the Phenomenology of Mind), human understanding passes through a movement that is characterized by an initial thesis (e.g., being) that passes into its opposite (e.g., nothingness), which entails a contradiction that is resolved by a third moment (e.g., becoming), which is the positive result of that opposition. For Hegel, this process of thesis-antithesis-synthesis (or, as it is sometimes referred to, abstract-negative-concrete) is both a logical process (the development of philosophical categories) and a chronological process (the development of society), which leads to greater understanding or universality (in both philosophy and in social institutions such as religion and politics), eventually leading to complete self-understanding—the end of history.

What Marx and the other Young Hegelians took from Hegel was a method and language that allowed them to challenge tradition and the existing order: a focus on history and a stress on flux, change, contradiction, movement, process, and so forth.

But they also turned their critical gaze on the more conservative dimensions of Hegel’s philosophy. For example, Feuerbach (in The Essence of Christianity, published in 1841) argued that Hegel’s Absolute Spirit was nothing more than the deceased spirit of theology, that is, it was still an inverted world consciousness. Instead, for Feuerbach, God was the outward projection of people’s inward nature. Men and women were “alienated” from their human essence in and through religion—because they cast all their human powers onto a deity, instead of assuming them as their own. The goal, then, was to change consciousness by becoming aware of that self-alienation, through critique.

Marx, in particular, considered Feuerbach’s critique to be an important step beyond Hegel. Ultimately, however, he rejected the way Feuerbach formulated the problem (as individuals separated from their human essence, outside of society) and settled his account with the eleven “Theses on Feuerbach,” the last of which has become the most famous:

The philosophers have only interpreted the world, in various ways; the point is to change it.


*Even though I insist on the idea that a basic understanding of Hegel is necessary for understanding Marx’s theoretical journey, it is also possible to overstate the case. Marx’s method is neither a straightforward application nor a simple reversal of the Hegelian dialectic. By the time he wrote Capital, Marx had criticized and moved far beyond Hegel’s philosophy.

**At the time (beginning in 1840), Germany was governed by a new king, Frederick William IV, who undermined his promise of political reform by curtailing political freedom and religious tolerance. For the Young Hegelians, this was a real step backward in terms of following the rest of Europe (especially Britain and France) in modernizing political institutions and expanding the realm of freedom. And it was key to their eventual break from Hegel, since according to Hegel’s philosophy the Prussian state represented the fulfillment of history. (The contemporary equivalent is Francis Fukuyama’s famous book The End of History and the Last Man (1992), in which he argued that “not just. . .the passing of a particular period of post-war history, but the end of history as such: That is, the end-point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.”)


Jean-Pierre Roy, “The Sultan and the Strange Loop” (2016)

The U word has once again reared its ugly head.

I’m referring, of course, to uncertainty, which at least a few of us are pleased has returned to occupy a prominent role in scientific discourse. The idea that we simply do not know is swirling around us, haunting pretty much every pronouncement by economists, virological scientists, epidemiological modelers, and the like.

How many people will contract the novel coronavirus? How many fatalities has the virus caused thus far? And how many people will eventually die because of it? Do face masks work? How many workers have been laid off? How severe will the economic meltdown be in the second quarter and for the rest of the year?

We read and hear lots of answers to those questions but, while individual forecasts and predictions are often presented as uniquely “correct,” they differ from one another and change so often we are forced to admit our knowledge is radically uncertain.

Uncertainty, it seems, erupts every time normalcy is suspended and we are forced to confront the normal workings of scientific practice. It certainly happened during the first Great Depression, when John Maynard Keynes used the idea of radical uncertainty—as against probabilistic risk—to challenge neoclassical economics and its rosy predictions of stable growth and full employment.* And it occurred again during the second Great Depression, when mainstream macroeconomics, especially the so-called dynamic stochastic general equilibrium approach, was criticized for failing to take into account “massive uncertainty,” that is, the impossibility of predicting surprises and situations in which we simply do not know what is going to happen.**


The issue of uncertainty came to the fore again after the election of Donald Trump, which came as a shock to many—even though polls showed a race that was both fairly close and highly uncertain. FiveThirtyEight’s final, pre-election forecast put Hillary Clinton’s chance of winning at 71.4 percent, which elicited quite a few criticisms and attacks, since other models were much more confident about Clinton, variously putting her chances at 92 percent to 99 percent. But, as Nate Silver explained just after the election,

one of the reasons to build a model — perhaps the most important reason — is to measure uncertainty and to account for risk. If polling were perfect, you wouldn’t need to do this. . .There was widespread complacency about Clinton’s chances in a way that wasn’t justified by a careful analysis of the data and the uncertainties surrounding it.

In my view, Silver is one of the best when it comes to admitting the enormous gap between what we claim to know and what we actually know (as I argued back in 2012), an admission that is often undermined by attempts to make the results of models seem more accurate and to conform to expectations.
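Silver’s point, that a model exists chiefly to measure uncertainty, can be illustrated with a toy simulation. This is my own sketch, not FiveThirtyEight’s actual model; the three-point polling lead and the error sizes are invented for illustration. The same lead yields very different win probabilities depending on how much systematic polling error the modeler is willing to admit:

```python
import random

def win_probability(poll_margin, error_sd, n_sims=100_000, seed=0):
    """Toy forecast: the 'true' election margin is the polling margin
    plus a normally distributed systematic polling error."""
    rng = random.Random(seed)
    wins = sum(1 for _ in range(n_sims)
               if poll_margin + rng.gauss(0, error_sd) > 0)
    return wins / n_sims

# Identical three-point lead, different assumptions about polling error:
print(win_probability(3.0, 1.5))  # small assumed error: a near-certain win
print(win_probability(3.0, 5.0))  # larger assumed error: far less certain
```

A model that understates the systematic component of polling error—the kind that moves many state polls in the same direction at once—will report the sort of 92 to 99 percent certainty Silver criticized.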

And that’s just as much the case in social sciences (including, and perhaps especially, economics) and the natural sciences as it is in weather forecasting. Many, perhaps most, practitioners and pundits operate as if science is a single set of truths and not a discourse, with all the strengths and failings that implies. What I’m referring to are all the uncertainties, not to mention indeterminisms, linguistic risks and confusions, referrals and deferences to other knowledges and discourses, embedded assumptions (e.g., in both the data-gathering and the modeling) that are attendant upon any practice of discursive production and dissemination.

As Siobhan Roberts recently argued,

Science is full of epistemic uncertainty. Circling the unknowns, inching toward truth through argument and experiment is how progress is made. But science is often expected to be a monolithic collection of all the right answers. As a result, some scientists — and the politicians, policymakers and journalists who depend on them — are reluctant to acknowledge the inherent uncertainties, worried that candor undermines credibility.

What that means, in my view, is that science is always subject to discussion and debate within and between contending positions, and therefore decisions need to be made—about facts, concepts, theories, models, and much else—all along the way.

As it turns out, acknowledging that uncertainty, and therefore openly disclosing the range of possible outcomes, does not undermine public trust in scientific facts and predictions. That was the conclusion of a study recently published in the Proceedings of the National Academy of Sciences.

In the “posttruth” era where facts are increasingly contested, a common assumption is that communicating uncertainty will reduce public trust. . .Results show that whereas people do perceive greater uncertainty when it is communicated, we observed only a small decrease in trust in numbers and trustworthiness of the source, and mostly for verbal uncertainty communication. These results could help reassure all communicators of facts and science that they can be more open and transparent about the limits of human knowledge.

Even if communicating uncertainty does decrease people’s trust in and perceived reliability of scientific facts, including numbers, that in my view is not a bad thing. It serves to challenge the usual presumption (especially these days, among liberals, progressives, and others who embrace a Superman theory of truth) that everyone can and should rely on science to make the key decisions.*** The alternative is to admit and accept that decision-making, under uncertainty, is both internal and external to scientific practice. The implication, as I see it, is that the production and communication of scientific facts as well as their subsequent use by other scientists and the general public is a contested terrain, full of uncertainty.

Last year, even before the coronavirus pandemic, Scientific American [unfortunately, behind a paywall] published a special issue titled “Truth, Lies, and Uncertainty.” The symposium covers a wide range of topics, from medicine and mathematics to statistics and paleobiology. For those of us in economics, perhaps the most relevant is the article on physics (“Virtually Reality,” by George Musser).

Musser begins by noting that “physics seems to be one of the only domains of human life where truth is clear-cut.”

The laws of physics describe hard reality. They are grounded in mathematical rigor and experimental proof. They give answers, not endless muddle. There is not one physics for you and one physics for me but a single physics for everyone and everywhere.

Or so it appears.

In fact, Musser explains, practicing physicists operate with considerable doubt and uncertainty, on everything from fundamental theories (such as quantum mechanics and string theory) to bench science (“Is a wire broken? Is the code buggy? Is the measurement a statistical fluke?”).

Consider, for example, quantum theory: if you

take quantum theory to be a representation of the world, you are led to think of it as a theory of co-existing alternative realities. Such multiple worlds or parallel universes also seem to be a consequence of cosmological theories: the same processes that gave rise to our universe should beget others as well. Additional parallel universes could exist in higher dimensions of space beyond our view. Those universes are populated with variations on our own universe. There is not a single definite reality.

Although theories that predict a multiverse are entirely objective—no observers or observer-dependent quantities appear in the basic equations—they do not eliminate the observer’s role but merely relocate it. They say that our view of reality is heavily filtered, and we have to take that into account when applying the theory. If we do not see a photon do two contradictory things at once, it does not mean the photon is not doing both. It might just mean we get to see only one of them. Likewise, in cosmology, our mere existence creates a bias in our observations. We necessarily live in a universe that can support human life, so our measurements of the cosmos might not be fully representative.

Musser’s view is that accepting uncertainty in physics actually leads to a better scientific practice, as long as physicists themselves are the ones who attempt to point out problems with their own ideas.

So, if physicists are willing to live with—and even to celebrate—uncertain knowledge, and even if the general public does lose a bit of trust when a degree of uncertainty is revealed, then it’s time for the rest of us (perhaps especially economists) to relinquish the idea of certain scientific knowledge.

Then, as Maggie Koerth recently explained in relation to the coronavirus pandemic, instead of waiting around for “absolute, unequivocal facts” to decide our fate, we can get on with the task of making the “big, serious decisions” that currently face us.


*Although, as I explained back in 2011, the idea of fundamental uncertainty was first introduced into mainstream economic discourse by Frank Knight.

**And later central bankers (such as the Bank of England’s Andy Haldane) discovered that admitting uncertainty might actually “enhance understanding and therefore authority.”

***The irony is that “the Left” used to be skeptical about and critical of much of modern science—from phrenology, craniometry, and social Darwinism to the atom bomb, sociobiology, and evolutionary psychology.



Yesterday, the winners of the 2019 so-called Nobel Prize in Economics were announced. Abhijit Banerjee, Esther Duflo, and Michael Kremer were recognized for improving “our ability to fight global poverty” and for transforming development economics into “a flourishing field of research” through their experiment-based approach.

The Royal Swedish Academy of Sciences declared:

This year’s Laureates have introduced a new approach to obtaining reliable answers about the best ways to fight global poverty. In brief, it involves dividing this issue into smaller, more manageable, questions–for example, the most effective interventions for improving educational outcomes or child health. They have shown that these smaller, more precise, questions are often best answered via carefully designed experiments among the people who are most affected.

As happens every year, mainstream economists lined up to laud the choice. Dani Rodrik declared it “a richly deserved recognition.” Richard Thaler, who won the award in 2017 (here’s a link to my analysis), extended his congratulations to Banerjee, Duflo, and Kremer and to the committee “for making a prize that seemed inevitable happen sooner rather than later.” And Paul Krugman, the 2008 Nobel laureate, referred to it as “a very heartening prize—evidence-based economics with a real social purpose.”

Nothing new there. To a one, mainstream economists always use the occasion of the Nobel Prize to applaud themselves and their shared approach to economic and social analysis—a celebration of private property, free markets, and individual incentives.

What is novel this time around is that the winners include the first woman economist to win the prize (Duflo) and only the third non-white economist (Banerjee).*

But what about the content of their work? I’ve discussed the work of Duflo and Banerjee on numerous occasions on this blog (e.g., here, here, and here).

As it turns out, I’ve written a longer commentary on the “new development economics” as part of a symposium on my book Development and Globalization: A Marxian Class Analysis, which is forthcoming in the journal Rethinking Marxism.

I begin by noting that the idea of Banerjee, Duflo, Kremer, and the other new development economists is that asking “big questions” (e.g., about whether or not foreign aid works) is less important than answering narrower ones concerning which particular development projects should be funded and how such projects should be organized. For this, they propose field experiments and randomized control trials—to design development projects such that people can be “nudged,” with the appropriate incentives, toward the kinds of behaviors and outcomes presupposed within mainstream economic theory.
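As a hypothetical sketch of what the randomized-trial method amounts to in its simplest form (the data below are invented, and real evaluations are considerably more elaborate), a trial compares mean outcomes between randomly assigned treatment and control groups:

```python
import math
import random

def diff_in_means(treated, control):
    """Difference-in-means estimate of the average treatment effect,
    with a large-sample 95% confidence interval."""
    mt = sum(treated) / len(treated)
    mc = sum(control) / len(control)
    vt = sum((x - mt) ** 2 for x in treated) / (len(treated) - 1)
    vc = sum((x - mc) ** 2 for x in control) / (len(control) - 1)
    se = math.sqrt(vt / len(treated) + vc / len(control))
    ate = mt - mc
    return ate, (ate - 1.96 * se, ate + 1.96 * se)

# Invented outcomes (say, test scores) for a randomized intervention,
# with a true effect of +2 built into the treated group.
rng = random.Random(1)
control = [rng.gauss(50, 10) for _ in range(500)]
treated = [rng.gauss(52, 10) for _ in range(500)]
ate, (lo, hi) = diff_in_means(treated, control)
print(f"estimated effect {ate:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

Everything of interest to the critique at hand happens before and after this calculation: which outcomes get measured, which interventions are deemed testable, and what the estimated “effect” is taken to mean.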

Here we are, then, in the aftermath of the Second Great Depression—in the uneven recovery from capitalism’s most severe set of crises since the great depression of the 1930s and, at the same time, a blossoming of interest in and discussion of socialism—and the best mainstream economists have to offer is a combination of big data, field experiments, and random trials. How is that an adequate response to grotesque and still-rising levels of economic inequality (as shown, e.g., by the World Inequality Lab), precarious employment for hundreds of millions of new and older workers (which has been demonstrated by the International Labour Organization), half a billion people projected to still be struggling to survive below the extreme-poverty line by 2030 (according to the World Bank), and the wage share falling in many countries (which even the International Monetary Fund acknowledges) as most of the world’s population are forced to have the freedom to sell their ability to work to a relatively small group of employers for stagnant or falling wages? Or, for that matter, to the reawakening of the rich socialist tradition, both as a critique of capitalism and as a way of imagining and enacting alternative economic and social institutions?

I go on to raise three critical issues concerning the kind of development economics that has been recognized by this year’s Nobel prize. First, the presumption that analytical techniques are neutral and the facts alone can adjudicate the debate over which development projects are successful and which are not is informed by an epistemological essentialism—in particular, a naïve empiricism—that many of us thought to have been effectively challenged and ultimately superseded within contemporary economic and social theory. Clearly, mainstream development economists ignore or reject the idea that different theories have, as both condition and consequence, different techniques of analysis and different sets of facts.

The second point is that class is missing from the analytical and policy-related work that is being conducted by mainstream development economists today, at least as a concept that is explicitly discussed and utilized in their research. One might argue that class is lurking in the background—a specter that haunts every attempt to “understand how poor people make decisions,” to design effective anti-poverty programs, to help workers acquire better skills so that they can be rewarded with higher wages, and so on. These are the classes that have been disciplined and punished by the existing set of economic and social institutions, and the worry, of course, is that those institutions have lost their legitimacy precisely because of their uneven class implications. Class tensions may thus be simmering under the surface, but that’s different from being overtly discussed and deployed—both theoretically and empirically—to make sense of the ravages of contemporary capitalism. That step remains beyond mainstream development economics.

The third problem is that the new development economists, like their colleagues in other areas of mainstream economics, take as given and homogeneous the subjectivity of both economists and economic agents. Economists (whether their mindset is that of the theoretician, engineer, or plumber) are seen as disinterested experts who consider the “economic problem” (of the “immense accumulation of commodities” by individuals and nations) as a transhistorical and transcultural phenomenon, and whose role is to tell policymakers and poor and working people what projects will and will not reach the stated goal. Economic agents, the objects of economic theory and policy, are considered to be rational decision-makers who are attempting (via their saving and spending decisions, their participation in labor markets, and much else) to obtain as many goods and services as possible. Importantly, neither economists nor agents are understood to be constituted—in multiple and changing ways—by the various and contending theories that together comprise the arena of economic discourse.

The Nobel committee has recognized the work of Banerjee, Duflo, and Kremer as already having “helped to alleviate global poverty.” My own view is that it demonstrates, once again, the poverty of mainstream economics.


*The only other woman, in the 50-year history of the Nobel Prize in Economics, was Elinor Ostrom (2009), a political scientist; the other non-white winners were Sir Arthur Lewis (1979) and Amartya Sen (1998).



John Baldessari, “Man Running/Men Carrying Box” (1988-1990)

It was Paul Samuelson who, in 1997, declared with morbid optimism that “Funeral by funeral, economics does make progress.”*

What Samuelson presumed is that, over time, wrong ideas would be killed and laid to rest and better ideas would flourish, thus creating the foundation for progress in economic thought.

That’s what I consider to be the epistemological utopianism of mainstream economic thought: using the correct scientific methods, the work that economists do gets closer and closer to the Truth—the singular, incontestable, capital-t truth. It used to be the case (for Samuelson and many others, such as fellow Nobel laureates Kenneth Arrow, Gerard Debreu, and Paul Krugman) that mathematical models represented the best way of making progress (inspired by a particular conception of nineteenth-century physics).** The current fad is to rely on randomized experiments and big data as evidence that economics is finally becoming a real, empirical science (akin to biology and medicine).

In the first case, rationalism is the reigning theory of knowledge; in the second case, it’s empiricism. However, both theories represent two sides of the same epistemological coin, defined by a radical separation between theory and reality and some sort of correspondence between them. In other words, both rationalism and empiricism are foundationalist theories of knowledge according to which the gap between theory and reality is eventually—”funeral by funeral”—closed.

It’s a utopianism that serves as both the premise and promise of mainstream economists’ practice. And we know something about the consequences of that epistemological utopianism—for example, the combination of ignorance and arrogance when it comes to the work of nonmainstream economists (who stand accused of not doing science and therefore of not contributing to the progress of economics), noneconomists (whose methods are neither mathematically nor empirically rigorous enough), and everyday economists (who either produce cultural representations that accord with the lessons of mainstream economics, in which case they can be invoked as illustrations, or whose work is dismissed and needs to be attacked and eradicated, because it runs counter to mainstream economics). Not to mention the idea that, in the midst of the worst economic crises since the first Great Depression, mainstream economists could blithely assert that their theories had done just fine; the only problem was the fact that policymakers hadn’t adequately listened to or followed the advice of mainstream economists. Finally, of course, there’s the closing-off of publishing venues (like the leading journals), research funding (especially the National Science Foundation), teaching positions (especially in research universities), and so on—all in the name of a singular scientific method and conception of truth.

As I have shown (e.g., here and here), mainstream liberals today are also obsessed with the defense of science and capital-t Truth. In their zeal to attack Donald Trump and the right-wing media’s defense of his administration’s outlandish claims about a wide variety of issues—from climate change to the Mueller investigation—they increasingly invoke and rely on an absolutist theory of knowledge. And then, of course, claim for themselves the correct side in the current debates. They, too, are guided by the utopianism of essentialist theories of knowledge.

The problems with epistemological utopianism are legion. I’ve mentioned some of the nastier consequences above. But there are other issues. For example, in their defense of absolute truth, they invoke a time—before the current “post-truth” regime—when a set of institutions (such as journalism, science, and the academy) supposedly got it right. Except they can’t ever cite an example of how those institutions successfully adjudicated the facts in play—when, supposedly, there was universal assent to the truth claims, either within the academy or the wider society—and they ignore all the times when they simply got it wrong.

Moreover, they’re willing to admit that the claims to truth are often deflected by lots of other influences—such as narratives, confirmation bias, ethics, and information overload. But the problem is always “out there,” among regular people, and not the scientists themselves (whether in economics or other disciplines). Epistemological utopians simply can’t acknowledge that, in their daily practice, mainstream economists and liberal thinkers are also engaged in story-telling, that they accept evidence that confirms their preconceived notions and assess counter evidence with a critical eye, make ethics-laden decisions based on relations of unequal power, and operate with overconfidence based on the illusion of knowledge.

There are, of course, many alternatives to the utopianism of absolutist epistemology. One of them is what I call “partisan relativism,” associated with the Marxian critique of political economy.

In fact, I (with my friend and frequent coauthor Jack Amariglio) have just published an entry on “epistemology” in the Routledge Handbook of Marxian Economics. There, we discuss many different contributions to Marxian epistemology and highlight the role that postmodernism has played in providing an alternative to and moving beyond the long history of attributing to Marx a modernist project of attempting to delimit the certainty of scientific knowledge from non-science (or ideology). Thus, we write, postmodern Marxists

frequently call attention to the “relativism” that they believe is Marx’s main epistemological message and/or is exemplified in his texts. Marx’s aleatory materialism, for postmodern Marxists, also establishes an under-determination in the realm of knowledge; a discursive whole cannot close itself. Influenced by Jacques Derrida’s conception of “deconstruction,” postmodern Marxists insist that discourse is always marked by slippages, aporia, displacements, and deferments. For them, meaning is overdetermined and uncertain. A certain knower is thus a contradiction in terms.

In addition, if scientific discourse is not the mirror of nature, then there is an “ethical” dimension to all knowledge production. Cornel West, utilizing Richard Rorty among other “pragmatist” philosophers, brings out the enduring, constitutive ethical and political aspects of how and what we know, and what we intend to do with this knowledge.

Thus, we go on to explain, relativist Marxists dispute the claims of a certain knowing subject (indeed, they challenge the very idea that knowledge begins with a knowing subject) and focus instead on how knowledge claims are internal to theoretical frameworks and the manner in which knowledges produced within different theories or discourses have specific—and often quite different—conditions and consequences in the world within which those knowledges are produced.


For many Marxist epistemologists, knowledge is active and actionable, and its existence as material image/image of the material is one requisite condition for the revolutionary socioeconomic—especially class—change that Marx vehemently proposed.

And that, in the end, is the utopian moment of Marxian epistemology—not a utopian appeal or aspiration to absolute truth, but instead a practice (one might even call it an ethics) of materialist critique. That critique operates at two different levels: it is a critique of all theoretical claims (such as those made by mainstream economists) that normalize or naturalize the existing economic and social order and a critique of capitalism itself, since from a Marxian perspective capitalist societies are based on and serve to reproduce an exploitative class structure.

It should come as no surprise, then, that the utopian horizon of Marxian epistemology is summarily rejected by mainstream economists and liberal thinkers—or that the latter’s epistemological utopianism often serves to situate them within, and ultimately to justify by treating as normal or natural, the existing set of economic and social institutions.


*“Credo of a Lucky Textbook Author,” Journal of Economic Perspectives 11 (Spring 1997): 159.

**It’s a particular conception of physics that has been disputed by many others, including Thomas Kuhn (and his theory of “scientific revolutions”), Paul Feyerabend (who argued that there are no useful and exceptionless methodological rules governing the progress of science or the growth of knowledge), Richard Rorty (who criticized the idea of knowledge as representation), and Michel Foucault (who showed that different systems of thought and knowledge—epistemes or discursive formations, in Foucault’s terminology—are governed by different sets of rules). Their criticisms of essentialist epistemologies apply as well to the more recent turn to “empirical” methods as the foundation of economic knowledge.


I have often argued—in lectures, talks, and publications—that every economic theory has a utopian dimension. Economists don’t explicitly talk about utopia but, my argument goes, they can’t do what they do without some utopian horizon.

The issue of utopia is there, at least in the background, in every area of economics—perhaps especially on the topic of control.

Consider, for example, the theory of the firm (which I have written about many times over the years), which is the focus of University of Chicago finance professor Luigi Zingales’s lecture honoring Oliver Hart, winner of the 2016 Nobel Prize for economics, at this year’s Allied Social Science Association meeting.

One of the many merits of Oliver’s contribution is to have brought back the concept of power inside economics. This is a concept pervasive in political science and sociology, and pervasive in Marxian economics, but completely absent from neoclassical economics. In fact, Oliver’s view of the firm is very reminiscent of the Marxian view, but where Marx sees exploitation, Oliver sees an efficient allocation.

Zingales is right: Hart’s neoclassical treatment of control informs a theory of the firm that stands diametrically opposed to a Marxian theory of the firm. And those contrasting theories of the firm are both conditions and consequences of different utopian horizons. Thus, Hart both envisions and looks to move toward an efficient use of control within the firm such that—through a combination of incentives and monitoring—agents (workers) can be made to work hard to fulfill the goal set by the principal (capitalists). Marxists, on the other hand, see the firm as a site of exploitation—capitalists extracting surplus-value from the workers they hire—and look to create the economic and social conditions whereby exploitation is eliminated.

In my view, those are very different utopias—the efficient allocation of resources versus the absence of exploitation—that both inform and are informed by quite different theories of the firm.

As it turns out, the issue of control—and, with it, utopia—comes up in another, quite different context. As George DeMartino and Deirdre McCloskey explain, in their rejoinder to Anne Krueger’s attack on their recent edited volume, The Oxford Handbook of Professional Economic Ethics,

When you have influence over others you take on ethical burdens. Think of your responsibilities to, say, your family or friends. And when you fail to confront those burdens openly, honestly, and courageously you are apt to make mistakes. As professional economists we have influence, and we do develop conversations about how we operate. Yet there is no serious, critical, scholarly conversation about professional economic ethics—never has been. That’s not good.

While the DeMartino and McCloskey volume includes contributions from both mainstream and heterodox economists (a point that Krueger overlooks in her review), it is still the case that the discipline of economics, dominated as it has been by mainstream economics, has never had a serious, sustained conversation about ethics.

Consider this: it is possible to get a degree in economics—at any level, undergraduate, Master’s, or doctorate—without a single reading or lecture, much less an entire course, on ethics. And yet economists do exercise a great deal of power over others: over other economists (through hiring, research funding, and publishing venues), their students (in terms of what can and cannot be said, talked about, and theorized in their courses), and the wider society (through the dissemination of particular theories of the economy as well as the policies they advocate to governments and multilateral institutions). In fact, they also exercise power over themselves, in true panopticon fashion, as they seek to adhere to and reinforce certain disciplinary protocols and procedures.

Economics is saturated with power, and thus replete with ethical moments.

Once again, the issue of control is bound up with different utopian horizons. Most economists—certainly most mainstream economists—are not comfortable with and have no use for discussions of ethics. That’s because, in their view, economists adhere to a code of objectivity and scientificity and an epistemology of absolute truth. So, there’s no room for an ethics associated with “influence over others.” That’s their utopia: a free-market of ideas in which the “truth,” of theory and policy, is revealed.

Other economists have a quite different view. They see a world of unequal power, including within the discipline of economics. And the existence of that unequal power demands a conversation about ethics in order to reveal the conditions and especially the consequences of different ways of doing economics. If there is no single, absolute truth—and thus no single standard of objectivity and scientificity—within economics, then the use of one theory instead of another has particular effects on the world within which that theorizing takes place. Here, the utopian horizon is not a free market of ideas, but instead a reimagining of the discipline of economics as an agonistic field of incommensurable discourses.

And, from a specifically Marxian perspective, the utopian moment is to create the conditions whereby the critique of political economy renders itself no longer useful. Marxists recognize that they may not be able to control the path to such an outcome but it is their goal—their ethical stance, their utopian horizon.

Liberal mainstream economists all seem to be lip-synching Bobby McFerrin these days.

Worried about automation? Be happy, write Laura Tyson and Susan Lund, since “these marvelous new technologies promise higher productivity, greater efficiency, and more safety, flexibility, and convenience.”

Worried about the different positions in current debates about economic policy? Be happy, writes Justin Wolfers, and rely on the statistics produced by government agencies and financial firms and the opinions of mainstream economists.

Me, I remain worried and I have no reason to accept mainstream economists’ advice for being happy.

Sure, new forms of automation might lead to higher productivity and much else that Tyson and Lund find so alluring. But who’s going to benefit? If we go by the last few decades, large corporations and wealthy individuals are the ones who are going to capture most of the gains from the new technologies. Everyone else, as I have written, is going to be forced to have the freedom to either search for new jobs or deal with the fundamental transformation of the jobs they manage to keep.

When it comes to separating fact from fiction, aside from the embarrassing epistemological positions liberals rely on, where are the statistics that might help us make sense of what is going on out there—numbers like the Reserve Army of Unemployed, Underemployed, and Low-wage Workers or the rate of exploitation?

You want me not to worry? Analyze what’s going to happen to workers and the distribution of income as automation increases and calculate the kinds of economic numbers other theoretical traditions have produced.

Even better, let workers have a say in what and how new technologies are introduced and change economic institutions in order to eliminate the Reserve Army and class exploitation.

Then and only then will I be happy.


Chen Wenling, “What You See Might Not Be Real” (2009)

I’ll admit, there are times when I regret the fact that I’m a relativist. Wouldn’t it be nice, I say to myself on occasion, to be able to claim—beyond a shadow of a doubt, to my students, colleagues, or readers of this blog—that something or other (neoclassical economics or capitalism or name your poison) is wrong and that the alternative (Marxian economics or socialism or what have you) is absolutely correct?

But then I read a defense of capital-T truth—such as David Roberts’s [ht: ja] attack on the alt-right and fake news and his presumption that the liberal mainstream is uniquely capable of upholding “truth, justice, and the American way”—and I thank my lucky stars that I don’t have to make such outlandish, embarrassing arguments. Fortunately, my relativism means I’m not saddled with the mainstream liberals’ delusion that they have, if not God, at least Superman on their side.

I’ve been over this epistemological terrain before (e.g., here, here, and here). But it seems, in the current conjuncture, mainstream liberals—in their zeal to attack Donald Trump and the right-wing media’s defense of his administration’s outlandish claims about a wide variety of issues, from climate change to the Mueller investigation—increasingly invoke and rely on an absolutist theory of knowledge. And then, of course, claim for themselves the correct side in the current debates.

As Roberts sees it, the United States

is experiencing a deep epistemic breach, a split not just in what we value or want, but in who we trust, how we come to know things, and what we believe we know — what we believe exists, is true, has happened and is happening.

The primary source of this breach, to make a long story short, is the US conservative movement’s rejection of the mainstream institutions devoted to gathering and disseminating knowledge (journalism, science, the academy) — the ones society has appointed as referees in matters of factual dispute.

In their place, the right has created its own parallel set of institutions, most notably its own media ecosystem.

Consider the assumptions built into those statements for a moment. Roberts believes that society has appointed a unique set of mainstream institutions—journalism, science, the academy—to serve as referees when it comes to adjudicating the facts in play. Nowhere does he discuss how, historically, those institutions came to occupy such an exalted position. Perhaps even more important, he never considers the disputes—about the facts and much else—that exist among journalists, scientists, and academics. And, finally, Roberts never mentions all the times, in recent years and over the centuries, when members of those institutions got it wrong.

What about the reporting on the weapons of mass destruction in Iraq? Or the Tuskegee Study of Untreated Syphilis in the Negro Male? Or the university professors and presidents, at Yale, Harvard, and elsewhere, who supported and helped devise the U.S. war in Vietnam?

The list could go on.

There is, in fact, good reason not to simply accept the “facts” as gathered and disseminated by mainstream institutions. Historically, we have often been misled, and even mangled and killed, by those supposed facts. And, epistemologically, the members of those institutions—not to mention others, located in different institutions—produce and disseminate alternative sets of facts.

Maybe that’s Roberts’s problem. He actually thinks facts are gathered, as if they’re just out there in the world, waiting to be plucked, harvested, or dug up like fruits and vegetables by people who have no particular interest in which facts find their way into their baskets.

Alternatively, we might see those facts as being created and manufactured, through a process of knowledge-production, which relies on concepts and theories that are set to work on the raw materials generated by still other concepts and theories. The implication is that different sets of concepts and theories lead to the production of different knowledges—different sets of facts and their discursive and social conditions of existence.

I have no doubt that many journalists, scientists, and academics “see themselves as beholden to values and standards that transcend party or faction.” But that doesn’t mean they actually operate that way, somehow above and apart from the paradigms they use and the social influences exerted on them and the institutions where they work.

As far as Roberts is concerned, only the “far right” rejects the “very idea of neutral, binding arbiters” and adheres to a “tribal epistemology.” And mainstream liberals? Well, supposedly, they have the facts on their side.

If one side rejects the epistemic authority of society’s core institutions and practices, there’s just nothing left to be done. Truth cannot speak for itself, like the voice of God from above. It can only speak through human institutions and practices.

For Roberts, it’s either epistemic authority or nihilism. Absolute truth or an “epistemic gulf” that separates an “increasingly large chunk of Americans,” who believe “a whole bunch of crazy things,” from liberal Democrats.

What Roberts can’t abide is that we “live in different worlds, with different stories and facts shaping our lives.” But, from a relativist perspective, that’s all we’ve ever had, inside and outside the institutions of journalism, science, and the academy. Throughout their entire history. Different stories and different sets of facts.

And that hasn’t stopped the conversation—the discussion and debate within and between those different, often incommensurable, stories and facts. The only time the conversation ends is when one set of stories and facts is imposed on and used to stamp out all the others. A project always carried out in the name of Truth.

Clearly, Roberts mourns the passing of a time of epistemological certainty and universal agreement that never existed.

Roberts instead should mourn the effects of a Superman theory of knowledge that got him and other mainstream liberals into trouble in the first place. In recent years, they and their cherished facts simply haven’t been persuasive to a large and perhaps growing part of the population.

And the rest of us are suffering the consequences.



I just received my copy of the Routledge Handbook of Marxian Economics, edited by David M. Brennan, David Kristjanson-Gural, Catherine P. Mulder, and Erik K. Olsen.

The handbook contains thirty-seven original essays, including two—“Epistemology” and “Postmodernism”—cowritten with my good friend and frequent collaborator Jack Amariglio.

The handbook is too expensive for most people to buy. But professors and students can ask their college or university to purchase a copy for the library.


Mark Tansey, “Coastline Measure” (1987)


I’ve been over this before.

But I continue to be amazed at the ubiquitous, facile references to science, evidence, and facts and the derision that is directed at the proposition that we live in a post-truth world. On topics as diverse as climate change, globalization, and the role of the working-class in electing Donald Trump, commentators invoke Truth, with a capital t, as an obvious, unproblematic characteristic of making statements about what is going on in the world.

To me, they’re about as silly—and dangerous—as attempting to measure the coastline using a tape measure.

This is the case even in studies, such as those conducted by Tali Sharot [ht: ja], about the supposed diminishing influence of evidence and the existence of confirmation bias.

The very first thing we need to realize is that beliefs are like fast cars, designer shoes, chocolate cupcakes and exotic holidays: they affect our well-being and happiness. So just as we aspire to fill our fridge with fresh fare and our wardrobe with nice attire, we try to fill our minds with information that makes us feel strong and right, and to avoid information that makes us confused or insecure.

In the words of Harper Lee, “people generally see what they look for and hear what they listen for.”

It’s not only in the domain of politics that people cherry-pick news; it is apparent when it comes to our health, wealth and relationships.

At one level, this makes sense to me. There’s a great deal of confirmation bias when we try to make sense of the various dimensions of our lives and the world in which we live.

But. . .

I also think people are curious about things—information, experiences, and so on—that don’t seem to fit their existing theories or discourses. And, when they do attempt to make sense of those new things, their ideas change (and, of course, as their ideas change, they see things in new ways).

Perhaps even more important, while people like Sharot acknowledge that people often “accept evidence that confirms their preconceived notions and assess counter evidence with a critical eye,” they never consider the possibility that the people who are conducting the research concerning confirmation bias are themselves subject to that same bias.

Why is it always people out there—you know, “the ones who are thinking about health, wealth, and relationships”—who cherry-pick the facts? What about the so-called scientists, including the ones who invoke the Truth; why aren’t they also subject to confirmation bias?

Sharot invokes “the way our brain works”—without ever acknowledging that she and her coinvestigators also use one theory, and ignore or reject other theories, to make sense of the brain and the diverse ways we process information. Others rely on the “scientific evidence” concerning climate change or the gains from globalization or the existence of a resentful white (but not black or Hispanic) working-class, which in their view others deny because they don’t believe the obvious “facts.”

What’s the difference?

I can pretty much guess the kind of response that will be offered (because I see it all the time, especially in economics): the distinction between everyday confirmation bias and real, Truth-based knowledge stems from the use of the “scientific method.”

The problem, of course, is there are different scientific methods, different ways of producing knowledge—whether in economics or cognitive neuroscience, political science or physics, anthropology or chemistry. All of those forms of knowledge production are just as conditioned and conditional as the way nonscientists produce (and consume and disseminate) knowledges about other aspects of the world.

As for me, I can’t wait for this period of fake interest in capital-t Truth to pass. Maybe then we can return to the much more interesting discussion of the conditionality of all forms of knowledge production.


Finally, after years of near-orgiastic celebrations of the internet of things—including, of course, Jeremy Rifkin’s extravagant claim that it would move us beyond capitalism and usher in the “democratization of economic life”—commentators are beginning to question some of its key assumptions and effects. What they have discovered is that the internet of things is, “in reality, a very queer thing, abounding in metaphysical subtleties and theological niceties.”

Nathan Heller, for example, finds that, while the gig economy can make life easier and more financially rewarding for many “creative, affluent professionals,” it often has negative effects on those who do the actual work:

A service like Uber benefits the rider, who’s saving on the taxi fare she might otherwise pay, but makes drivers’ earnings less stable. Airbnb has made travel more affordable for people who wince at the bill of a decent hotel, yet it also means that tourism spending doesn’t make its way directly to the usual armies of full-time employees: housekeepers, bellhops, cooks.

On top of that, the fact that the so-called sharing economy has become a liberal beacon (including, as Heller makes clear, among many Democratic activists and strategists) has meant the displacing of “commonweal projects that used to be the pride of progressivism” by acts of individual internet-based exchange.

Perhaps even more important (or at least more unexpected and therefore more interesting), Adam Greenfield focuses on the problematic philosophical assumptions embedded in the ideology of the internet of things.

The strongest and most explicit articulation of this ideology in the definition of a smart city has been offered by the house journal of the engineering company Siemens: “Several decades from now, cities will have countless autonomous, intelligently functioning IT systems that will have perfect knowledge of users’ habits and energy consumption, and provide optimum service … The goal of such a city is to optimally regulate and control resources by means of autonomous IT systems.”

There is a clear philosophical position, even a worldview, behind all of this: that the world is in principle perfectly knowable, its contents enumerable and their relations capable of being meaningfully encoded in a technical system, without bias or distortion. As applied to the affairs of cities, this is effectively an argument that there is one and only one correct solution to each identified need; that this solution can be arrived at algorithmically, via the operations of a technical system furnished with the proper inputs; and that this solution is something that can be encoded in public policy, without distortion. (Left unstated, but strongly implicit, is the presumption that whatever policies are arrived at in this way will be applied transparently, dispassionately and in a manner free from politics.)

As Greenfield explains, “Every aspect of this argument is questionable,” starting with the idea that everything—from users’ habits to energy consumption—is perfectly knowable.

Because that’s the promise of the internet of things (including the gig economy): that what individuals want and do and how the system itself operates can be correctly monitored and measured—and the resulting information utilized to “provide optimum service.” The presumption is there are no inherent biases in the monitoring and measuring, and no need for collective deliberation about how to solve individual and social problems.

The ideology of the internet of things is shorn of everything we’ve learned about both epistemology (that knowledges are constructed, and different standpoints participate in constructing those knowledges differently) and economic and social life (that the different ways the surplus is produced and distributed affect not only the economy but also the larger social order).

It seems the conventional ways of thinking about the internet of things are merely an extension of mainstream economists’ ways of theorizing the world of commodity exchange, allowing a definite social relation to assume the fantastic form of a relation between things.

That’s where metaphysics and theology leave off and the critique of political economy begins.