I have just finished Carissa Véliz’s new book Prophecy, and I cannot stop thinking about it. The Oxford philosopher has written a witty, surprising, and urgently necessary account of how generative AI works: not as a truth machine, but as a fortune-teller.
AI Is Not a Truth Machine — It Is a Fortune-Teller
Large language models do not “know” anything; they predict the most probable next token, the most plausible combination of words they have seen before. They are, as Véliz puts it, built to be fortune-tellers, not truth-tellers. They colonise our lives with correlations while ignoring everything they do not know. And in doing so, they make Big Tech richer and the rest of us less safe and less free.
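To make the fortune-telling concrete, here is a deliberately toy sketch (my own illustration, not from the book, and vastly simpler than a real language model): a bigram counter that “predicts” the next word purely from how often words followed each other in its training text. It has no notion of truth, only of frequency.

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus. The model will learn nothing about
# the world, only which word tends to follow which.
corpus = (
    "the sun rises in the east . "
    "the sun rises in the east . "
    "the sun sets in the west ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    candidates = follows[word]
    if not candidates:
        return None  # outside the learned pattern: no prophecy available
    return candidates.most_common(1)[0][0]

print(predict_next("sun"))     # "rises": seen twice, beats "sets" (once)
print(predict_next("galaxy"))  # None: the unseen simply does not exist here
```

The point of the sketch is the failure mode, not the successes: anything absent from the counts is invisible to the model, which is exactly the “ignoring everything they do not know” that Véliz describes, scaled down to twenty lines.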
I love the book for exactly the reason the New York Times reviewer Jennifer Szalai highlights: it opens the reader’s mind to entirely new dimensions of what AI actually is. Yet for me, as a technologist who has worked quite a bit with AI, the single most important insight is not about artificial intelligence at all. It is about what AI reveals, and dramatically accelerates, about the society we already live in.
We Have Entered Baudrillard’s Simulacrum
We have quietly slid into what Jean Baudrillard called the simulacrum: a stage of reality in which signs, models, and classifications no longer represent the world; they precede and create it. Véliz never names Baudrillard in the passages I found most powerful, but her analysis of how statistical categories and predictive systems work lands in the same territory.
How Classifications Create the World They Claim to Describe
Here is the mechanism she lays bare (and that I have been watching with growing unease for years):
Precise and standard measures are preferred to accurate ones. What matters is that measurements play the role we want them to, more than that they are a truthful reflection of reality.
Classifications have an impact on people’s lives: people learn to fit the category in order to comply with the system. Categories tend to create the world they purport to represent. Statistical categories give rise to individual and collective identities. Those who fail to conform to the taxonomies are stigmatised and excluded, and most people end up internalising the values of the bureaucracy. And then the numbers start working better: they comfortably inhabit a world they themselves built, after punishing or disappearing whoever or whatever defied their classification.
The Quiet Death of Common Sense
This is the deeper story. We have stopped treating reality as something messy, ambiguous, and best navigated by common sense. Instead, we treat it as a more or less fixed set of classifications and categories that serve as guides through an increasingly complex life. We internalise them. They become reality. Anyone who does not fit is no longer understood; they become outliers, outsiders, problems to be managed or ignored.
The craziest part? Most people do not even notice the shift. When in doubt about what to do or how to do something, we no longer ask ourselves what common sense or lived experience would suggest. We look up the regulation, the guideline, the risk matrix, and the approved category. The classification has replaced judgment. Bureaucracy has replaced wisdom.
AI: The Ultimate Booster of the Simulacrum
AI is not the cause of this transformation. It is the ultimate booster. Where earlier bureaucratic systems were slow and clumsy, predictive algorithms are fast, invisible, and terrifyingly effective. They do not merely describe the world; they optimise it according to the categories we have already accepted. They punish deviation before it even happens. They make the simulacrum run smoothly.
The Turkey That Trusted the Pattern
Véliz’s turkey example (borrowed from Bertrand Russell’s chicken) is perfect here. The farm animal trusts the pattern — food appears every morning — right up until the day it does not. Our society is doing the same with its classifications. We have convinced ourselves that if we just refine the categories enough, standardise the measures enough, predict the probabilities enough, reality will finally behave. The numbers will work. The world will fit the model.
It already does, for those who internalise the model. Everyone else disappears from the dataset or is labelled “non-compliant”: outliers.
Why ‘Prophecy’ Is Not Just Another AI Book
This is why Prophecy is not just another AI book. It is a diagnosis of a civilisational change that most commentators are still missing. The real danger is not that the machines will become conscious. The real danger is that we have already outsourced our sense of what is real to the machines, and to the classifications they supercharge.
Time to Step Outside the Categories
I recommend Véliz’s book without reservation. Read it for the sharp history of prediction from ancient oracles to insurance actuaries to today’s chatbots. Read it for the devastating clarity on how prediction is really about power. But above all, read it for the larger story it tells almost in passing: we are living inside a self-reinforcing simulation of categories, and we are learning to love it because it feels safer than the messy, unpredictable world it replaced.
The question is no longer whether AI will change society. The question is whether we still remember what society looked like before the simulacrum took over, and whether we still dare to step outside the categories it demands we inhabit.