(now all together)
You’d think that, if you’re going to write about the inhumane effects of robots on our daily lives, you’d also acknowledge the long, rich history of human movements and thinking about machinery and other technological developments since at least the nineteenth century.
But that’s not what we get from Simon Chandler [ht: ja] who deplores the new artificial intelligence and robotic technologies being developed by a wide range of companies, from Toyota to Amazon. Why? Because they threaten to reduce human autonomy:
With artificial intelligence suggesting to people what to consume, when to turn the heating down, when to get out of bed, and when to do anything else, people will find themselves becoming ever more regularized and automated in their behavior. Regardless of the fact that AI is characterized by its ability to adapt, to learn from how its putative user reacts, it can adapt only so far (especially in its present form) and can perform only so many actions. This means that any person who allows AI into their home will have to adapt to its behavior; will have to begin conforming to their robot helper’s way of doing things, to its rhythms, schedules and choices. As such, they will become more formalized and systematized, losing much of their spontaneity, impulsiveness and autonomy in the process.
Because of this increased tendency toward repetition and inflexibility, the AI or robot assistant will make its “master” more repetitive and inflexible. Its master will come to divide her time and spend her day according to algorithms which, no matter how advanced, are still nowhere near as complex as the human brain. Therefore, with growing frequency, she may be reduced to a mere function of these algorithms, pressured into acting in accordance with her android butler, into adopting the stereotype it foists on her.
Because these AIs would be the product of single R&D centers, such as the Toyota Research Institute, this influence of robots on human behavior will also represent a general homogenizing and centralizing of said behavior. Instead of being the result of innumerable interactions with hundreds of people and with her own community, the AI user’s psychology and personality will be molded to a greater extent by Toyota, Google or Facebook, particularly if this user becomes more socially isolated and more reliant on robotic aids.
What Chandler seems not to understand is that technologies, once invented, take on a life of their own—or, at least, a certain degree of autonomy. And we have lots of examples of people reacting to and thinking about the consequences of those technologies, as they become relatively (and, perhaps these days, increasingly) autonomous.
I’m thinking, for example, of the machine-breaking Luddites who, as both Eric Hobsbawm and Thomas Pynchon explain, were not hostile to machines as such, but were using machine-breaking as a technique of trade unionism (when labor unions barely existed): “as a means both of putting pressure on employers and of ensuring the essential solidarity of the workers.”
There’s also Marx, who (especially in Part 4 of volume 1 of Capital) wrote a great deal about machinery—as a way of increasing relative surplus-value, in terms of its sweeping-away of handcraft workers, as a means of employing women and children, as weapons against the revolts of the working-class, and much more.
And, of course, building on and extending Marx’s analysis, Harry Braverman’s Labor and Monopoly Capital: The Degradation of Work in the Twentieth Century (pdf): on the role of scientific management as the “displacement of labor as the subjective element of the labor process and its transformation into an object” and the role of machines which “has in the capitalist system the function of divesting the mass of workers of their control over their own labor.”
More recently, we have plenty of other sources, such as AI, Robotics, and the Future of Jobs by the Pew Research Center. What is interesting about the report, which starts from the premise that automation and intelligent digital agents will permeate vast areas of our work and personal lives by 2025, is that almost half (48 percent) of the technological experts who responded to the survey
envision a future in which robots and digital agents have displaced significant numbers of both blue- and white-collar workers—with many expressing concern that this will lead to vast increases in income inequality, masses of people who are effectively unemployable, and breakdowns in the social order.
Finally, there’s Jacobin magazine’s special issue, “Ours to Master,” in which the various authors see new technologies both as today’s instruments of employer control and as the preconditions for a post-scarcity society. As Peter Frase explains,
The mainstream discourse tends toward the facile view that technology is a thing that one can be for or against; perhaps something that can be used in an ethical or unethical way. But technology in the labor process, just like capital, is not a thing but a social relation. Technologies are developed and introduced in the context of the battle between capital and labor, and they encode the victories, losses, and compromises of those struggles. When the terms of debate shift from the relations of production to a reified “technology,” it is to the benefit of the bosses.
I hope readers will find the links to these various sources useful.
My only point is that we can do much better than the humanist discussion of the inevitable engagement of humans with their uncontrollable creations (as in Chandler’s case) by examining the consequences of, and reactions to, the relatively autonomous technologies that are being invented today, within specific and quite different capitalist and noncapitalist contexts—a complex, contradictory process that will surely continue for the foreseeable future.