This book is no atlas; the term “atlas” suggests a neutral presentation. Instead, Kate Crawford’s book is a virulent critique of all AI. A better title would be “How computing forms part of the power structure of the modern world”:
AI takes the central position in society’s redemption or ruin, permitting us to ignore the systemic forces of unfettered neoliberalism, austerity politics, racial inequality, and widespread labor exploitation. (p214)
Kate Crawford’s book is the 21st-century equivalent of the Luddite attacks: hand-workers in the early 19th century attacked industrial machinery in the forlorn hope that by destroying the machines they could return to a former way of working. The machines, of course, were soon replaced. Crawford’s book is a long diatribe against mass production, the gig economy, mining, McDonald’s, the Chicago meatpacking industry, Amazon warehouses, and even container shipping – and that’s just the first two chapters. The book ends with a coda of several pages condemning “space colonization”. You might ask where AI figures in all of this. As with the Luddites blindly attacking what was in front of them, I think that Ms Crawford has simply let her often justified critique of specific implementations of AI extend into a generalised complaint about the business world and those aspects of modern life she is not happy with. Imagine substituting the word “computing” for “AI” throughout this book. Many of the arguments about the political economy in which the subject operates would still apply, and you would have a book condemning computing, since computing too could be viewed as a tool for implementing the systems of power in the modern world.
Extending the argument
Frequently throughout the book Ms Crawford complains about the low-paid labour used around AI projects. For example, on page 67, she discusses “fauxtomation”, the way humans are sometimes used to give the impression of automation, as in the Amazon Mechanical Turk system, through which humans bid to carry out tasks for payment. Crawford’s complaint is:
their labor is interchangeable with any of the thousands of other workers who compete with them for work on platforms. At any point they could be replaced by another crowdworker, or possibly by a more automated system.
Is this an argument for eliminating mindless labour? That makes sense to me. But the argument here seems almost to condemn AI because it might take away the very menial work the author has just criticised.
Many of the cases Crawford cites are genuine examples of error and iniquity. However, she extends her argument to the point at which she condemns all AI. This is seen most clearly towards the end of the book. Asking the rhetorical question of whether more attention to ethics would improve AI, she writes:
ethics is necessary but not sufficient to address the fundamental concerns raised in this book. To understand what is at stake, we must focus less on ethics and more on power. AI is invariably designed to amplify and reproduce the forms of power it has been deployed to optimize. (p224)
Is AI “invariably” designed in this way? Similarly, she writes:
Datasets in AI are never raw materials to feed algorithms: they are inherently political interventions. The entire practice of harvesting data, categorizing and labeling it, and then using it to train systems is a form of politics. (p221)
Are datasets “never” raw materials? Is the “entire” practice of harvesting data a form of politics? It would appear that all taxonomies are suspect, and hence AI is suspect too:
Classifications are technologies that produce and limit ways of knowing, and they are built into the logics of AI. (p147)
Every aspect of AI is condemned, even, paradoxically, the aim of constructing the largest possible datasets:
The collect-it-all mentality, once the remit of intelligence agencies, is not only normalized but moralized. (p220)
It does not matter that the amounts [of data] collected may vastly exceed a firm’s imaginative reach or analytic grasp … we do these things because we can. (p112)
What would the author prefer? Any selection of data is a political act, and much of the book complains about collecting subsets of data that are skewed in some way. So why not collect a larger corpus? The Snowden Papers comprise over a million documents, yet nobody complains that there are too many.
The main recommendation at the end of the book is “refusal”, an appeal to populism: “rejecting the idea that the same tools that serve capital, militaries, and police are also fit to transform schools, hospitals, cities and ecologies, as though they were value neutral [sic] calculators that can be applied everywhere” (p226). She believes that “populations … [will] choose to dismantle predictive policing”. But where do these populations get their knowledge from? I would be dismayed if populations used a book like this to condemn all AI.

There is a great contrast between this book and Cathy O’Neil’s Weapons of Math Destruction. O’Neil presents many well-argued case studies of how AI is used to draw unwarranted or biased conclusions from data, but her arguments do not question the whole basis of AI, merely the way it has been implemented as a form of social control. Crawford’s book appears to contain no positive statements about AI: my impression is that she would rather that AI and machine learning did not exist at all.