The Means of Prediction: How AI Really Works (and Who Benefits) by Maximilian Kasy (University of Chicago Press, 2025)
With the AI “revolution” coming for our jobs, our creativity, our politics, our environment, and just about anything else you’d care to think of, it is heartening to see an emerging critique of the shameless boosterism characteristic of so much discussion of the technology. Important works have exposed AI systems’ penchant for racial and gender bias and their negative impacts on student achievement and mental health, as well as highlighting the masses of low-paid, precarious human labour required to make these systems appear “intelligent.” What has been lacking in this robust critique, however, is any sort of guide to action. While Kasy’s The Means of Prediction shouldn’t be read as a “guidebook” for 21st-century luddism-against-the-computers, it does provide two invaluable contributions to the cause: an exposition of the systems of power that shape our technology, and thus a framework for how to act. That is, how to take this technology and make it into something potentially liberatory. For that, The Means of Prediction: How AI Really Works (and Who Benefits) should be on every radical’s reading list.
Kasy teaches economics at the marquee institution of the English gentry, the University of Oxford. Prior to that, he taught at its American equivalent, Harvard. Which is to say, his critique comes from the heart of the establishment. This provenance makes his argument all the more (pleasantly) surprising, given that he is open about the capitalist nature of AI systems, and of technology in general. Right there on the first page of the preface, he argues, “What has not been presented [in all the critiques of AI published thus far] is a unified framework for understanding how AI will proceed in a society that is shaped by power and inequality[…] Amid all the breathless debates about technical details, new possibilities, and social problems, I argue that the key issue that unites all the problems of AI is the choice of objectives that AI pursues, and the question of who controls these objectives.” He proceeds to provide a non-technical introduction to the technology itself, emergent debates about AI and its discontents, as well as the various “parts” of the whole, control of which will determine who benefits from the technology’s adoption.
AI systems require huge quantities of four goods: data, computing infrastructure, technical expertise, and energy. It is the author’s contention that the privatization of the Internet allowed huge digital companies to emerge and capture these resources. Data is the product of millions of internet users’ activities — typed words, uploaded photos, edited videos, comments and “likes” on social media, etc. The amount of “compute” necessary to train the models that are the guts of an AI system is prohibitively expensive for all but large companies. Likewise for technical expertise; and access to the gigawatts of electricity required to run the “server farms” that house these AI tools comes at a steep price. Ultimately, Kasy concludes, those who control these “means of prediction” control what “value functions” the AI is designed to pursue. “Consider the algorithms that select what is shown in your Facebook feed. The problem[…] is that the algorithmic decisions that make Mark Zuckerberg rich might also be decisions that undermine the democratic process, or harm teenage mental health[…] There are countless settings where algorithmic decisions are contentious — not because the algorithms are not aligned with their human owners, but because the objectives of these owners stand in conflict with the welfare of other people.”
The author provides fascinating forays into debates on “what is intelligence,” on the contrasting theories of fairness (equal treatment vs. social welfare) that shape policymaking, and the problematic emphasis on individual privacy that has dominated public discussion of remedies to digital ills. All well worth consideration, but this review will slip past those chapters for want of space, and focus on the question of explainability, for herein lies the author’s greatest contribution to progressive social movement building.
Explainability in AI systems, Kasy argues, really means different things to different actors. The controllers of these systems (i.e., Silicon Valley capitalists) have thus far conflated various meanings in order to forestall public debate and foreclose on the possibility of democratic control. The author shows that there are three distinct kinds of explainability that matter to different groups: explaining the decision function, explaining decisions, and explaining the decision problem. The first addresses the question, “How are inputs mapped into decisions by the algorithm?” This is useful to engineers and no one else. The second, “Why was a particular decision made in a particular instance?” would be valuable to those adversely affected by an AI system. The current controllers of the means of prediction focus almost exclusively on one or both of these first two concepts, to the exclusion of the one that matters most to society and to the prospect of democratic control: “What is the AI system trying to achieve, and based on what information?”
It is only in answering that final question that workers can hope to wrest control of these immensely powerful tools away from capital and turn them into tools to better society. It’s not the algorithm that is the problem; it’s the human who controls it, who decides what purpose it will be put to — enriching the few, or empowering the many. Kasy’s final chapters cover some considerations to help us build the control mechanisms we need. Add The Means of Prediction to your own activist toolkit; it’s essential reading for building a better tomorrow, starting today.

