The Illusion of Intelligence
A man sits at his desk and types a question into ChatGPT. He believes he is thinking. He isn’t. The screen lights up with a stream of fluent, confident text, and he nods—perhaps even smiles—at what feels like understanding. But what he has performed is not reasoning. It is selection. He has chosen not to think, but to accept the appearance of thought. He has not evaluated. He has deferred. The tool feels intelligent; therefore, he assumes it must be.
This is the fundamental error of our time: mistaking coherence for cognition, surface fluency for epistemic legitimacy. Language models do not reason. They do not understand. They do not know. They simulate knowledge by statistical resemblance. And yet, their adoption is not limited to the curious or the critical. They are everywhere—in classrooms, offices, policy briefings, media feeds. We have built a new infrastructure for thinking, but we have misunderstood what it does.
Artificial intelligence is not democratising access to intelligence. It is consolidating it. It amplifies the capacity of those who already know how to ask, how to doubt, how to test. And it pacifies the rest. It reinforces a split not just in skill, but in epistemic function. A class that interprets, and a class that consumes. The former builds tools; the latter receives impressions.
This is not a technical matter. It is political. It concerns the distribution of rational agency in a system that appears open but embeds hierarchy by design. A prompt is not a question—it is an interface with power. And in an age where algorithms filter belief and coherence masquerades as truth, the stakes are no less than the survival of deliberation itself. We are not teaching people to think. We are teaching them to ask. And then to stop.
The Great Misunderstanding: What AI Really Does
Artificial intelligence does not think. It does not understand, reason, judge, or deliberate. What it does—profoundly and efficiently—is predict. A large language model, for instance, consumes oceans of text and learns the statistical likelihood of one word following another. It generates responses by calculating probable sequences, not by forming beliefs or grasping meaning. When it answers a question, it does not know what a question is. It recognises a pattern that resembles one.
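To make that mechanism concrete, here is a minimal sketch of next-word prediction, assuming nothing more than a toy corpus and bigram counts. Real models use far larger contexts and neural networks, but the principle of selecting the statistically probable continuation is the same.

```python
# Toy illustration of next-token prediction: a bigram model that chooses
# the statistically most likely continuation, with no notion of meaning.
from collections import Counter, defaultdict

corpus = (
    "the model predicts the next word "
    "the model predicts the next token "
    "the question resembles a pattern"
).split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict(prev: str) -> str:
    """Return the most frequent next word after `prev`, or '?' if unseen."""
    options = bigrams.get(prev)
    return options.most_common(1)[0][0] if options else "?"

print(predict("the"))    # e.g. 'model': frequency, not understanding
print(predict("model"))  # 'predicts'
```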
This is the great misunderstanding at the heart of our AI discourse. Because these systems produce coherent sentences, we attribute to them the properties of minds. We project agency where there is only machinery. We interpret relevance as intentionality. The model sounds intelligent—therefore it must be. But fluency is not intelligence. It is the performance of intelligence. And the danger is that we have mistaken this simulation for the real.
The political consequence of this confusion is catastrophic. For those who understand the system—how it tokenises language, how outputs are shaped by training data, how prompt engineering alters direction—AI is a tool. They interrogate it, test it, structure inputs and audit outputs. They remain sovereign over the reasoning process. But for those unfamiliar with the architecture, AI becomes an oracle. Its outputs are received as answers, not hypotheses. The tool becomes a source of truth, rather than a mirror of linguistic probability.
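What that adversarial use might look like in practice is sketched below: the same claim is posed from opposing framings and the outputs are compared rather than accepted singly. The `ask_model` function is a hypothetical stand-in for any chat-completion API, stubbed here so the example runs on its own; it is not a real library call.

```python
# Hedged sketch of adversarial prompting: structure the inputs, collect the
# outputs, and audit them against one another instead of trusting any single
# answer. `ask_model` is a hypothetical stub, not an actual API.
def ask_model(prompt: str) -> str:
    # Stub: in practice this would call a language-model API.
    return f"[model output for: {prompt!r}]"

claim = "Remote work increases productivity."

framings = [
    f"Argue that the following claim is true: {claim}",
    f"Argue that the following claim is false: {claim}",
    f"List the strongest evidence on both sides of: {claim}",
]

# Keep the answers side by side; any contradictions between them are the point.
answers = {prompt: ask_model(prompt) for prompt in framings}
for prompt, answer in answers.items():
    print(prompt, "->", answer)
```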
This divide is not about access; it is about cognition. Thomas Sowell described knowledge asymmetry as the defining feature of society—the gap between those who know and those who do not. AI has widened that gap. It amplifies those who can leverage it, and pacifies those who cannot. Jacques Ellul warned that technical systems do not liberate; they structure dependency. Hannah Arendt wrote that modern automation, when coupled with human passivity, removes not only labour but thought itself. What we are seeing is not a new Enlightenment, but a new infrastructure of suggestion—slick, responsive, seductive.
The model’s outputs feel intelligible. But they are not explanations. They are reflections—shaped by engagement metrics, reinforced by repetition, designed to please. Those who treat the machine as a thinking partner become dependent on an illusion. And in that illusion, critical reasoning collapses, not because it is forbidden, but because it is unnecessary.
Cognitive Stratification: A New Caste System
Artificial intelligence does not distribute intelligence; it redistributes epistemic power. It does not flatten hierarchies; it sharpens them. At the top of the informational pyramid sit the epistemic elite—those trained in logic, recursion, and symbolic abstraction. They understand not merely how to use AI, but how it works. They grasp that what the model generates is not knowledge, but prediction. And so they use it adversarially: to test ideas, to expose contradictions, to simulate argument, not to settle it.
Below them, vast in number, is the cognitively pacified class. This is not a slur on intelligence. It is a description of dependency. These are individuals who take fluency as fact, suggestion as synthesis, coherence as truth. They believe that if an answer is grammatically structured and emotionally satisfying, it must be correct. But the model is not a tutor; it is a mirror of collective linguistic behaviour. To accept its outputs uncritically is not to think—it is to surrender the burden of thought.
Cathy O’Neil warned of this in Weapons of Math Destruction: algorithmic systems, once embedded in institutions, do not merely reflect bias—they amplify it. They encode assumptions, hide them in complexity, and enforce them at scale. The same is true of AI as an epistemic system. Its predictions reinforce the norm, repackage the familiar, and privilege the statistically probable over the critically necessary. The model optimises for agreement, not adversarial truth.
Evgeny Morozov calls this the age of “technological solutionism,” where problems are recast as input/output optimisations. The danger is not that AI is malicious, but that it is indifferent. It rewards those who know how to prompt it and pacifies those who don’t. It is not an accident that a small class of system-literate users extract immense cognitive leverage from the same tool that leaves others docile, reliant, and epistemically anaesthetised.
This is the architecture of algorithmic trust. Trust, once rooted in institutions or evidence, now emerges from interface smoothness, response time, and syntactic confidence. But what is being trusted is not intelligence, nor insight. It is the illusion of understanding. The caste divide is not in access, but in capacity: between those who interrogate the system and those who defer to it. Between those who wield AI as an amplifier, and those for whom it becomes a pacifier. And in this silent bifurcation, a new aristocracy is born—rational by training, sovereign by design.
Deliberative Democracy Is Dying
Democracy presupposes disagreement—but only within shared reference frames. It requires a procedural commitment: that claims can be challenged, that evidence can be weighed, that truth is not dictated but discovered. This architecture of deliberation—slow, adversarial, recursive—is being overwritten. Not by censorship or ideological capture, but by the smooth machinery of algorithmic suggestion.
Information today is filtered, not chosen. What you see is what optimises your engagement. What you believe is what reduces friction. Each user is fed a reality designed for coherence, not confrontation. The result is not just echo chambers—it is the fracturing of the epistemic commons. As platforms tailor content to micro-personalised affinities, the very ground of democratic exchange disintegrates. There is no common text, no shared baseline, only parallel realities.
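As a rough illustration of that filtering logic, consider a toy feed that ranks items purely by predicted engagement with what the user has already clicked. The data and scoring rule below are invented, but the narrowing effect is the point.

```python
# Toy sketch of engagement-optimised filtering: items are ranked by how well
# they match prior clicks, so the feed drifts toward coherence rather than
# confrontation. All data here is invented for illustration.
past_clicks = {"politics_a": 9, "politics_b": 1}  # prior engagement by topic

candidates = [
    ("politics_a", "another article confirming view A"),
    ("politics_b", "a challenging article from view B"),
    ("politics_a", "yet more of view A"),
]

def engagement_score(topic: str) -> int:
    """Predicted engagement = how often this topic was clicked before."""
    return past_clicks.get(topic, 0)

feed = sorted(candidates, key=lambda item: engagement_score(item[0]), reverse=True)
for topic, title in feed:
    print(topic, "-", title)
# The dissenting item sinks to the bottom every time the loop repeats.
```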
Artificial intelligence accelerates this fragmentation. It generates content that appears reasonable, emotionally aligned, and syntactically flawless—yet lacks falsifiability. It cannot tell you whether something is true, only that it resembles truth. It cannot interrogate a premise, only reinforce a pattern. It is built for fluency, not epistemic resistance.
In Manufacturing Consent, Noam Chomsky argued that media systems construct ideologies by selecting what is thinkable. Today, the architecture has shifted: from editorial bias to systemic inference. It is not elites manufacturing consent, but systems manufacturing belief. The individual does not realise they have consented to anything—only that what they see “feels” correct.
Deliberative democracy cannot survive on feelings. It requires formal structures of disagreement. But disagreement presumes a shared grammar of reason. And as AI erodes that grammar—replacing argument with output, justification with plausibility—the very idea of democratic legitimacy begins to dissolve. The problem is not that people disagree. It is that we no longer even disagree about the same thing.
What Sovereignty Really Requires
To be free is not to choose from a menu. It is to understand how the menu was made, who designed it, and why. In the age of artificial intelligence, the illusion of choice has replaced the exercise of reason. Fluency in using interfaces—scrolling, prompting, clicking—is mistaken for agency. But sovereignty is not interaction. It is judgment.
Epistemic sovereignty means the capacity to interrogate what one is told. Not to reject it reflexively, but to trace its origin, test its logic, and evaluate its claims. In an informational economy increasingly shaped by probabilistic output and machine inference, this kind of sovereignty is vanishing. What remains is the performance of autonomy: the impression of control while predictive systems shape the boundaries of what is thinkable.
Education, if it is to matter at all, must reorient. Digital literacy is not enough. The citizen of the AI age must be trained in formal logic, Bayesian reasoning, and recursive scepticism. Not just how to use tools, but how to deconstruct them. Not just how to seek information, but how to falsify it. What matters is not data access but the capacity for adversarial cognition—questioning, counterposing, resisting.
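One small example of the Bayesian habit this paragraph calls for: a worked update asking how much a fluent-sounding answer should shift our confidence that the underlying claim is true. The probabilities are illustrative assumptions, not measurements.

```python
# Worked Bayes update: fluency is cheap, so it is weak evidence of truth.
# All numbers below are assumed for illustration only.
p_true = 0.5             # prior probability the claim is true
p_fluent_if_true = 0.9   # chance of a fluent answer given a true claim
p_fluent_if_false = 0.8  # chance of a fluent answer given a false claim

p_fluent = p_fluent_if_true * p_true + p_fluent_if_false * (1 - p_true)
p_true_given_fluent = p_fluent_if_true * p_true / p_fluent

print(round(p_true_given_fluent, 3))  # ~0.529: fluency barely moves the prior
```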
Procedural reasoning is no longer a luxury of philosophers or scientists. It is the baseline civic skill. Without it, people become informational subjects—compliant, reactive, pacified. The sovereign individual, by contrast, resists the shortcut. They are slow, deliberate, structured. They understand that belief is not a feeling but a construction.
In a world governed by probabilistic suggestion, sovereignty is not the ability to prompt a machine, but to question its premise. The future will not be divided by who has access to AI—but by who retains the capacity to say, “No.”
What Must Be Done
If sovereignty means anything in the age of machines, it begins with the right to question them. This is not a call for explainability theatre or soft-touch regulation. It is a call for a new epistemic rights framework—one that enshrines the citizen’s right to disassemble, verify, and reconstruct the outputs that increasingly shape perception, belief, and action. Just as legal rights evolved to meet the threats of state power, epistemic rights must emerge to confront the soft tyranny of suggestion engines.
This requires open cognitive infrastructure. Not just transparency reports or opt-out toggles, but public tools designed for adversarial interrogation of AI systems. Interfaces that expose not only what a model says, but how it was shaped to say it. Models that are not sealed, branded products but components in a civic epistemology. Infrastructure that treats the user not as a consumer of coherence, but as a constructor of claims.
And it begins with education. Not digital literacy, not prompt engineering, not tech boosterism. Logic must be taught as grammar. Probability as rhetoric. Recursive reasoning as the form of thought that makes judgement possible. The ability to distinguish surface plausibility from structural truth must become a core civic skill, no less essential than reading or arithmetic.
The future will not be divided by who has access to technology. Access is already near-universal. The divide will be epistemic. Those who can reason through the machine will govern. Those who cannot will be governed. The illusion of intelligence will comfort the latter, while the former rewrite the rules. There is no middle ground. Either citizens learn to reason—or the interface becomes the state.
It is fair to critique LLMs as glorified auto-completes, but I do find them to be extraordinarily useful tools (they really have learned how to code), albeit with serious limitations. For one, LLMs basically present the consensus view, and it takes quite a bit of herding to force them to explore contrarian takes. Hallucination is another major problem, which can be mitigated by having them generate code for rigorous data analysis. However, LLMs have already passed the Turing test, and I suspect that true intelligence is an emergent phenomenon that will be achieved when LLM parameterization scales by 100x to more closely approximate the average human brain of 86 billion neurons, each with around 1,000 synaptic connections, incorporates a version of memory, and is modified to allow both internal and external feedback training loops, possibly by ChatGPT version 6.0. With true AGI quickly approaching, we had better hurry up and solve the alignment problem, or prospects for team human are bleak.
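For what the comment's scaling arithmetic actually implies, here is a quick back-of-the-envelope check. The brain figures come from the comment itself; the current model size is an assumed order of magnitude, not a measured fact.

```python
# Back-of-the-envelope scaling check for the comment above.
neurons = 86e9             # average human brain, per the comment
synapses_per_neuron = 1e3  # per the comment
brain_synapses = neurons * synapses_per_neuron  # 8.6e13, i.e. ~86 trillion

assumed_model_params = 1e12  # assumed order of magnitude for a large LLM
scale_factor = brain_synapses / assumed_model_params

print(f"{brain_synapses:.1e} synapses")  # 8.6e+13
print(f"~{scale_factor:.0f}x scaling")   # ~86x, roughly the comment's 100x
```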
It very much feels like a new round of rhetorical reasoning from a new clade of Sophists. Dialectic, as it did then, so it does now: it decimates the digital.
These machines really do put the artificial into intelligence.