3  The Attention Economy and the Displacement of Public Reason

The authoritarian assault analysed in Chapter 2 is politically authored but technically conditioned. The attention-extractive platform economy — theorised by Zuboff, Williams, Citton, Wu and Hari (Citton, 2017; Hari, 2022; Williams, 2018; Wu, 2016; Zuboff, 2019) but better understood, here, in the broader register of Cohen, Véliz, Nissenbaum and Coeckelbergh (Coeckelbergh, 2022; Cohen, 2019; Nissenbaum, 2010; Véliz, 2020) — is not a passive backdrop to political anti-intellectualism but its constitutive technical infrastructure. Four sub-arguments:

  1. The platform economy is structurally hostile to the practices — slow, error-correcting, peer-mediated, institutionally framed — that constitute public reason. That hostility begins in early socialisation, not in the adult information environment.

  2. Attention extraction and surveillance extraction are not separable business models but the two faces of the same political-economic order. The weakening of privacy and the cognitive disorientation of the public sphere are co-produced.

  3. The same economy provides the discursive vector along which authoritarian-populist mobilisation operates, and the commercial mechanism through which the platforms’ owners align with the regime they amplify. The recursion is well-documented and politically consequential.

  4. The same economy provides the channel through which, in Part II, intellect reappears not as deliberative practice but as branded surface — in the only register the platforms permit to be monetised.

The argument is not technological-determinist. The platform economy is a political-economic order, not a fact of nature; it could have been built otherwise, and could yet be unbuilt or rebuilt. What is established here is the more limited but unavoidable claim that the attention economy is the medium in which the present crisis is metabolised — and that any account of political anti-intellectualism that brackets it is misspecified.


3.1 Three theses on attention

Thesis 1 — Attention is constitutively scarce, and deliberation requires it in non-fungible quantities

Following Citton (Citton, 2017) and Wolf (Wolf, 2018), the cognitive operations characteristic of public reason — sustained reading, peer contestation, the slow construction of warranted belief, the revision of one’s own commitments under the pressure of better evidence — require attentional conditions that are sociologically distributed and culturally cultivated. They cannot be conjured on demand. They cannot be reproduced under conditions of constant interruption. They are, in Wolf’s phrase, deep-reading capabilities: not native equipment but acquired neuro-cognitive infrastructure that has to be built and then maintained.

This is not a romantic claim about reading. It is a claim about the temporal conditions of inferential work: forming a justified belief on a politically consequential matter requires sustained contact with the relevant evidence, exposure to disconfirming arguments, time enough to revise. Each of those operations has a characteristic temporal grain, and that grain is incompatible with the millisecond-rate competitive bidding of attention markets. When the temporal substrate is removed, what remains is not a faster version of public reason but a different cognitive activity altogether — one that the rest of this chapter will characterise.

Thesis 2 — Platform business models are structurally engineered to extract attention as a commodity

Drawing on Zuboff’s analysis of surveillance capitalism (Zuboff, 2019), Williams’s explicit indictment of the persuasion industry (Williams, 2018), and Cohen’s structural-legal account of informational capitalism (Cohen, 2019), the chapter takes as established that the dominant platform business model rewards affective intensity, novelty, in-group reinforcement and outrage at scales and rates that are constitutively incompatible with deliberative cognition. This is not a matter of the platforms occasionally hosting bad content. It is a matter of the optimisation function. The metric the system optimises is engagement; the content that maximises engagement is the affectively saturated, hierarchically polarised, low-deliberation material that the system therefore privileges by design (Gillespie, 2018).

The point bears repeating because rebuttals to it routinely confuse the descriptive and the moral. To say the system is engineered to extract attention is not to say the engineers are wicked; it is to say that wickedness is unnecessary to the result. The business model produces the cognitive degradation as a by-product of its core operation. No conspiracy is required.

Thesis 3 — The result is not stupidity but disorientation

Picking up Harsin’s analysis of post-truth as a regime of attention rather than a regime of truth (Harsin, 2024), and Hadarics’s empirical work on authoritarianism’s biased perception of media freedom (Hadarics, 2025), the chapter argues that the platform-mediated public is not characterised by ignorance — it is over-informed — but by epistemic disorientation: the inability to coordinate justified belief across the politically relevant population. This is the political-cognitive condition under which competitive authoritarianism is electorally feasible. It is also the condition under which, as Chapter 4 develops in detail, being seen to think becomes a more reliable signal of status than the thinking itself.

The disorientation thesis carries important consequences for intervention. If the problem were stupidity, the remedy would be education in the older sense — more facts, better-credentialed sources, sharper fact-checks. The empirical record on fact-checking’s downstream electoral effects gives little ground for optimism in that register (Morelock and Narita, 2022). The disorientation thesis predicts what we observe: a public that is perfectly capable of identifying a falsehood when isolated, and yet unable to coordinate that capacity into stable, publicly shared, action-guiding beliefs. The damage is to the coordination function of the information ecosystem, not to individual cognitive capacity.


3.2 Attention from cradle to classroom: socialisation as infrastructure

A common move in attention-economy critique is to describe an adult information environment that was deliberative and is now distracted. That framing misses what is most consequential about the present moment, which is that the cohorts now entering political adulthood have never inhabited the prior environment. For them, the attention-extraction model is not a degradation of an inherited infrastructure; it is the infrastructure. This section argues that political anti-intellectualism’s deepest mechanism is not adult persuasion but the slow reshaping, across two decades of schooling-mediated socialisation, of the attentional habits on which deliberative citizenship depends.

3.2.1 From the Google effect to cognitive surrender

The point of departure is the well-established literature on cognitive offloading: the documented finding, going back to Sparrow’s so-called “Google effect” (Sparrow et al., 2011), that ready external retrieval of information predictably reduces intra-cranial encoding. The finding is robust, replicates, and is in itself neither surprising nor alarming. What has changed is the scope of the offloaded function. Where in 2011 the offloaded operation was recall, by the mid-2020s it is increasingly the inferential operation itself: not just “what was the date of the treaty” but “what should I think about this issue.”

Recent neurophysiological work, while still under peer review, points sharply in one direction. In the MIT Media Lab study reported by Hogenboom (Hogenboom, 2026; see also Kosmyna et al., 2025), fifty-four students were asked to write short essays under three conditions: with ChatGPT, with search-engine access (AI summaries disabled), and without technology. The unaided group showed widespread cortical activation of the kind associated with creative and integrative work; the search-only group showed strong activity in visual-processing regions; the ChatGPT group showed up to a 55% reduction in overall brain activation. Crucially, four months later, when the ChatGPT group was asked to write again without the tool, their neural connectivity was lower than that of students who had made the opposite switch, from unaided writing to ChatGPT. The effect, in other words, appears to persist beyond the immediate task.

A separate study at the University of Pennsylvania, examining forecasting tasks, identified a behavioural pattern the authors labelled cognitive surrender: a tendency, in users of generative AI tools, to accept the system’s output with minimal scrutiny and to allow it to override their own intuition. The same broad pattern has now been documented outside the chatbot context. A multinational trial of AI-assisted colonoscopy found that endoscopists who had worked with the AI tool for three months were, after its withdrawal, measurably worse at spotting tumours unaided than they had been before (Hogenboom, 2026, citing a study in The Lancet Gastroenterology & Hepatology). Whatever one makes of the specific neural findings, the behavioural-skill finding is straightforward and corroborates the older deskilling literature: when cognitive operations are routinely delegated, the underlying capacity attenuates.

The political reading of this evidence is not that AI tools “make people stupid.” The political reading is that whatever fraction of inferential work is delegated is not, during the time of delegation, being done by the human; and that capacities not exercised do not remain stable. For a population whose central political task is forming and revising warranted beliefs about contested public matters, the substitution of generative-AI inference for one’s own inference is not a private cognitive matter. It is an infrastructure-level change in the conditions of public reason.

3.2.2 Reading as politically consequential capacity

Reading is the most direct test case, because the relevant data covers a long enough period to register a trend and because reading captures, in a single bundle, the cognitive habits that deliberation draws on: sustained attention, hypothetical reasoning, perspective-taking, integration of new information into prior commitments (Wolf, 2018).

The headline numbers — the long-running NEA series in the United States (National Endowment for the Arts, 2025), the Reading Agency UK figures (Reading Agency, 2024), the ENlit and Eurostat data for the European Union — show a slow, broadly consistent decline in the share of adults reading literary work for pleasure, with a marked acceleration among adolescents and young adults from the early 2010s onwards. The Cyberpsychology literature has begun to document the substitution effect at the behavioural level: not a decline in screen time but a structural reorientation of screen time away from text-extended formats and towards short-form video (Cyberpsychology, Behavior, and Social Networking, 2025).

PISA 2022 recorded across-OECD declines in reading proficiency of a magnitude not previously observed in the survey’s history. Roughly one in four fifteen-year-olds tested below the threshold the OECD considers minimal for participation in an information society. The cohort assessed had received much of its primary schooling under one-to-one device deployment.

The widely noticed 2024–2025 boom in Barnes & Noble store openings, against this backdrop, is best read as a nostalgia-commerce phenomenon — bookshops as themed retail experience for a market that values the signal of reading more than its practice. The Dior tote bags emblazoned with the titles Madame Bovary, Les Liaisons Dangereuses and Les Fleurs du Mal, retailing in 2026 at around £2,400 (Cartner-Morley, 2026), are the same phenomenon legibly priced. We will return to this in Part II.

What matters here is the prior question: why has the substitution happened? The answer cannot be that short-form video is more entertaining than reading, because that has always been true of short-form competitors. The answer is structural: short-form video is engineered to capture attention at the millisecond grain; literary text is engineered to reward attention at the multi-minute grain. In any environment in which both compete on equal terms for the same scarce resource, the millisecond-grain medium wins by design. The decline in reading is not a moral failing of the younger cohorts. It is the predictable behavioural output of a choice architecture they did not design and did not consent to.

3.2.3 The pedagogical front

This is the context for the recent multi-jurisdictional pedagogical debate over school phones, AI in classrooms, and the reduction of what curriculum reformers euphemistically call “memorisation load.”1 The substantive question — whether smartphones should be excluded from primary and secondary schooling, whether calculators-then-AI should be permitted in mathematics teaching, whether children should still learn poems by heart — is genuine. But the debate has been distorted by a misframing. Both techno-enthusiast and techno-pessimist positions tend to treat the question as whether the new tools are cognitively useful. That is the wrong question. The right question is whether the tools, when introduced at given developmental stages, are compatible with the formation of the underlying capacities on which their later intelligent use depends.

Between 2024 and 2026, Australian, EU and UK regulators independently concluded that something had gone seriously wrong with what platforms do to children. The platforms appealed, complied with the parts they could not appeal, and reported record advertising revenue.

The empirical literature on cognitive offloading suggests an asymmetric pattern. Tools that offload routine retrieval after a domain has been mastered can free cognitive resources for higher-order operations. Tools that offload the same routine retrieval before the domain is mastered prevent the formation of the schemata into which higher-order operations would later be slotted. The same surface behaviour — the student looks up the answer — has opposite developmental implications depending on stage. The political consequence is that the children who are now eight or twelve years old, and whose primary cognitive tool from now on will be a generative AI assistant tuned for fluency, will be the twenty-somethings of the late 2030s. They are the cohort that democratic theorists used to call the next deliberative public.

This is also where the imagery of socialisation environment matters. Children do not first form attention habits and then, separately, form political dispositions. They form both at once, in the same ecology, by participating in it. If that ecology is the algorithmic feed — the recursive loop of short video, push notification and parasocial micro-celebrity — then the political dispositions that emerge from it are not something added on top of an attention-neutral substrate. They are the dispositions appropriate to that substrate: low tolerance for sustained complexity, high responsiveness to affective intensity, and a default trust in fluent self-presentation as a marker of competence. These dispositions are not random. They are the dispositions that competitive authoritarianism’s electoral machinery, on the platforms it inhabits, can reliably mobilise (Hadarics, 2025).

3.2.4 A pedagogical complication

Before this argument is taken in a direction it cannot bear, an important complication. Holt’s recent paper on academic freedom (Holt, 2025) makes a point that the argument here accepts and that later chapters will revisit. Both intellectualism and anti-intellectualism, Holt argues, can act as fear-generators that narrow the space of teachable disagreement. A pedagogy that insists on its own rationalism as the signature of correctness, that signals to students which conclusions are admissible, and that treats every classroom dispute as a test of cognitive virtue, produces its own form of constriction — one that the older anti-intellectual critique, however ungenerously, was sometimes registering.

The diagnostic point of this chapter is therefore narrower than a defence of the university or the classroom against distraction. It is an argument about the conditions under which public reason in any of its plausible forms — agonistic, deliberative, communitarian, pragmatist — can be sustained. Those conditions include sustained attention. They also include, on every plausible account, room for substantive disagreement that is not pre-resolved by procedural orthodoxy. Both can be lost: the first to the platform economy, the second to the over-policed defensive posture that the platform economy provokes in its more thoughtful adversaries.


3.3 The dismantling of gatekeeping

A short historical-empirical section reconstructs the dismantling, in two decades, of the institutional gatekeeping infrastructure on which twentieth-century liberal democracies depended for their information supply. The argument acknowledges, with Fraser and Mouffe (Fraser, 2022; Mouffe, 2018), the legitimate critiques of that earlier infrastructure — monopolistic in ownership, gendered and racially exclusionary in staffing and editorial focus, uncritically deferential to state security narratives in ways the post-Iraq decade catalogued. The point of the section is not to romanticise the prior arrangement. It is to show that what has replaced it is not a more democratic alternative. It is an infrastructure of engagement extraction under private monopoly control, with substantively worse properties on every dimension that matters for public reason.

3.3.1 The collapse of the institutional middle

The empirical anchors are by now familiar but worth assembling in one place. The collapse of US local newspapers (Abernathy, 2024); the financialisation and gutting of the British regional press; the Washington Post episode of late 2024 and what it inaugurated; the documented decline in literary reading in the anglophone world. The common thread is not the disappearance of journalism as such but the disappearance of the intermediate institutions: those of sufficient size to fund investigative work, of sufficient independence to publish it, and of sufficient territorial proximity to know what their readers needed to know.

That middle layer was where most of the actual work of the older public sphere happened. Its replacement is twofold. At the local level, the void has been filled by partisan content farms and by algorithmic content distribution that simulates the form of local journalism without its substance. At the national level, the void has been filled by what the platform economy permits to exist: a small number of high-circulation outlets owned by individuals or families with substantial commercial and political stakes elsewhere, and a long tail of personal-brand journalism on Substack and similar platforms.

3.3.2 The 2025 acceleration

The standard treatment dates this transformation to the 2010s and treats the current moment as its continuation. The current moment is, on the available evidence, qualitatively different. Levitsky, Way and Ziblatt’s December 2025 analysis (Levitsky et al., 2025) documents, in real time, the consolidation of US legacy media in pro-regime hands. Skydance Media’s acquisition of Paramount, with the Federal Communications Commission’s approval, gave the Ellison family control of CBS, whose subsequent rightward editorial shift was neither coincidental nor concealed. The Ellisons have moved to acquire Warner Bros. Discovery (which owns CNN) and a US version of TikTok. Fox News and X are already in the hands of openly pro-administration owners. CBS cancelled the late-night programme of a prominent administration critic and tightened editorial control over its flagship news programme; The Washington Post shifted its editorial line markedly to the right; Condé Nast gutted Teen Vogue’s political reporting (Levitsky et al., 2025).

The pattern is recognisable from the comparative literature on competitive authoritarianism (Levitsky et al., 2025). The Hungarian template under Orbán — in which control of the country’s most-read news website Origo was transferred via licensing pressure and government contracts to a politically allied private company, and similar manoeuvres extended to over five hundred Hungarian outlets — is now running, mutatis mutandis, in the United States. The point is not analogy. The point is that the combination of platform-mediated discourse and pro-regime ownership of the legacy commanding heights is a recognised authoritarian-consolidation mechanism, with documented political consequences in multiple jurisdictions.

The reflexive rebuttal — but the dissident voices are still online — misses how the mechanism works. Authoritarian-tilted attention infrastructures do not need to silence dissent. They need only to ensure that the dissenting message is reliably outcompeted, in the relevant attention markets, by the regime-aligned message. The phenomenon Levitsky, Way and Ziblatt document of widespread self-censorship by donors, law firms, universities, business leaders and individual journalists (Levitsky et al., 2025) is downstream of this: when the costs of speaking are observably high and the rewards are observably low, people stop speaking. The Murkowski formulation quoted in their analysis — the senator’s open admission that she and her colleagues are routinely afraid to use their voices — is a data point about the cumulative effect of the infrastructure described in this chapter, not an isolated political anecdote.


3.4 Privacy, surveillance and the public sphere: the dual extraction

The literature on the attention economy and the literature on surveillance capitalism are usually treated as adjacent but distinct. Treating them separately is a mistake. They describe two moments of the same political-economic process, and the political consequences are unintelligible if either is bracketed.

3.4.1 Privacy as democratic infrastructure

The argument runs as follows. Attention extraction at industrial scale requires information about the targets of the extraction: which content captures which user, at what time of day, in which emotional state, in response to which preceding stimulus. That information is personal data. The platform economy’s revenue model is therefore not merely an attention-extraction model; it is, as Cohen (Cohen, 2019) and Véliz (Véliz, 2020) have insisted, a data-extraction model in which attention is the proximate output and behavioural prediction is the deeper one. To weaken privacy is, in this register, to enable the very mechanism by which attention is captured at scale. To enable that mechanism is to cede control of the informational substrate of the public sphere. The two erosions are not adjacent; they are the same process described at different levels of abstraction.

The argument I developed in the wake of the Cambridge Analytica revelations of 2017–2018 (Moreno Muñoz, 2018) has, I think, aged well, and forms a direct antecedent to the case made here. The starting observation was mundane: liberal democracies face the challenge of protecting privacy in a sociotechnical context characterised by general-purpose access to information technologies and digital platforms of mediated interaction. The positive face of that access — new spaces for civic activism, accountability, political participation — is real and should be defended. Its counter-face, however, is the gradual attenuation of privacy beyond any defensible threshold: a process in which consumers nominally trade personal data for free services and small discounts, in which both unambiguously totalitarian states and security-prioritising democracies enthusiastically exploit the resulting infrastructures of hyper-surveillance, and in which the framework conditions for informed political deliberation are quietly dismantled. I argued then, and develop the case at greater length here, that the Cambridge Analytica episode should be read in this light: not as an isolated electoral scandal but as the public-relations crystallisation of an architecture that was already in place and remains in place. What that episode made visible was the electoral operationalisation of the dual-extraction model: targeted attention capture, calibrated by extracted behavioural data, deployed against named individuals at the moment their political deliberation was most plastic. The architecture has not been retired. It has been normalised — and the eight years since have furnished considerably more evidence for the diagnosis than against it.

Mark Zuckerberg apologised to the US Senate for Cambridge Analytica in April 2018, pledging reform. The data-extraction architecture under discussion had been in operation since around 2010. By 2026 it remained in place, and had become the industry standard.

Nissenbaum’s framework of contextual integrity (Nissenbaum, 2010) allows us to be more precise about what is wrong here. Privacy, on this account, is not a matter of secrecy in the abstract; it is a matter of information flowing in accordance with norms appropriate to its context. When a piece of behavioural data harvested in one context (say, a search query about a child’s medical symptoms) is re-deployed in another context (say, a political-ad-targeting inference about the user’s likely vote), what has been violated is not a private space but a contextual norm. The public sphere has specific contextual norms of its own — the norms of public argument, of contestable claims, of attribution and accountability — and the dual-extraction architecture systematically violates them. To defend privacy in this register is to defend the integrity of those norms, which is to say, to defend the public sphere itself.

The implications for democratic theory have been worked out forcefully by Cohen (Cohen, 2019) and Véliz (Véliz, 2020). Privacy is not a luxury good for the affluent and the paranoid. It is an infrastructural condition for the formation of the kind of politically agentic subject that liberal-democratic theory has always presupposed. A subject whose every behavioural trace is legible to commercial and state actors, whose political dispositions can be predicted in advance and pre-empted by tailored interventions, is not the deliberating citizen of either Habermas’s or Mouffe’s reconstructions of democratic politics. That subject is the predictable consumer of informational capitalism. The two theoretical lineages disagree on much else but converge on this: informational asymmetries between platform and user constitute a form of structural power, and structural power left ungoverned generates domination (Coeckelbergh, 2022; Cohen, 2019).

3.4.2 Racialised surveillance and the geography of harm

The dual-extraction model is not evenly distributed. Browne’s Dark Matters (Browne, 2015), Benjamin’s Race After Technology (Benjamin, 2019) and Eubanks’s Automating Inequality (Eubanks, 2018) have demonstrated, with case-level specificity, that the burden of algorithmic surveillance falls disproportionately on already marginalised populations. The pattern is consistent enough that it cannot be coded as a series of bugs. Surveillance technologies deployed in welfare administration, predictive policing, immigration enforcement and credit scoring systematically reproduce, and in many cases amplify, existing patterns of racial, class and gender subordination. The Dencik et al. Data Justice programme (Dencik et al., 2022) has consolidated this work into a broader normative framework in which the question is no longer “is privacy violated?” but “are the harms of informational extraction justly distributed?” For the argument of this chapter, the implication is that the political effects of attention-and-surveillance capitalism include a differential compromise of the public sphere: the populations whose participation public reason most needs are precisely those most exposed to the intimidating, sorting and predictive functions of the infrastructure.

3.4.3 Algorithmic discourse as governance: the Palantir moment

Drawing on Akbari’s digital authoritarianism framework (Akbari, 2025) and Dean’s analysis of neofeudalising communicative capitalism (Dean, 2024), the chapter then argues that the platform economy is not merely commercial but governmental: it shapes the horizon of the politically thinkable. Two extended examples make this concrete.

The first is the role of X under its current ownership as a politically aligned amplifier of the second Trump administration’s preferred narratives. The recursive loop is by now well-documented: algorithmic amplification generates electoral mandate; electoral mandate licenses regulatory withdrawal from platforms; regulatory withdrawal entrenches algorithmic amplification. The point is not that the platform owner is unusually ideological — although the ownership-change-induced moderation collapse is by this point empirically established — but that the ownership-aligned platform is the characteristic governance form of competitive authoritarianism in its platform-symbiotic phase. It is what differentiates the 2025-and-after period from the 2016 Cambridge Analytica moment: the platform is no longer being instrumentalised against its owner’s preferences; the platform’s owner, the regime, and the extractive business model are now openly aligned (Levitsky et al., 2025).

The second is the publication, in spring 2026, of the so-called Palantir manifesto: a twenty-two-point declaration by the company’s CEO Alex Karp, drawn from the trade book co-authored with Nicholas Zamiska (Karp and Zamiska, 2025), which articulates more candidly than any recent corporate document the worldview of the Silicon Valley faction now openly aligned with the administration. The manifesto’s claims — that the post-war neutralisation of Germany and Japan should be reversed; that hard power must replace what it dismisses as theatrical pluralism; that the AI weapons era is the inheritor of the atomic deterrence era; that Silicon Valley owes the United States a moral debt to be repaid in defence contracts — speak for themselves (Sérvulo González, 2026). So does the response. Yanis Varoufakis’s characterisation of the document as the legitimating ideology of technofeudalism (Varoufakis, 2026), and Mark Coeckelbergh’s characterisation of it as technofascism (Coeckelbergh, 2026b), both register what is most arresting about it: that the document is a manifesto, not a marketing exercise, issued by a private firm deeply embedded in state surveillance and military targeting infrastructure (Coeckelbergh, 2026b). As Karpf observed, the demand underlying the rhetorical fireworks is straightforward: that the state should spend exceptional sums of money on Palantir’s products (Karpf, 2026).

Palantir’s own X profile describes its mission as “software that dominates” (Coeckelbergh, 2026b). What it dominates is left to the customer.

The reception evidence reinforces the structural reading. The longer-form book that the manifesto distils (Karp and Zamiska, 2025) was published by Crown in early 2025 to warm reviews across the most prestigious anglophone outlets and a strikingly cohesive set of endorsements. The publisher’s promotional page, which is itself an artefact worth analysing for what it claims to constitute as consensus, presents the book as praised by the Financial Times, the Wall Street Journal, the Washington Post, The New York Times Magazine, the Times Literary Supplement and The Times of London, the last describing it as the manifesto inspiring the current British government (Crown Currency, 2025–2026). Niall Ferguson read the book as a stirring case for a new Manhattan Project for the AI age; Walter Isaacson read it as a rallying cry for renewed cooperation between the technology industry and government (Crown Currency, 2025–2026). The endorsement set extends across the recognisable defence-and-finance establishment (James Mattis, Anders Fogh Rasmussen, Jamie Dimon, Stanley Druckenmiller, Eric Schmidt), the popular-history wing of the public-intellectual class (Isaacson, Ferguson, David Ignatius), and the conservative-political-theory press (Quillette, First Things, La Vanguardia) (Crown Currency, 2025–2026). The pattern is recognisable enough to be diagnostic: the book was received in 2025, by reviewers credentialed in history, social theory and constitutional law in functioning liberal democracies, as a serious work of political theory.

Read against the responses Coeckelbergh, Varoufakis and Karpf published a year later to the X-platform manifesto extracted from the same book by the same author, the asymmetry is striking. The underlying argument did not change in substance between the two documents; the medium and the framing did. The book’s three hundred pages of literary citation, corporate biography and Cold-War analogy were received as serious; the same worldview compressed into twenty-two declarative bullet points was identified, by less venue-prestigious readers operating outside the legacy reviewing infrastructure, as a manifesto whose lineage runs through Carl Schmitt rather than through liberal political theory (Coeckelbergh, 2026a, 2026b). The competence asymmetry is not real — Coeckelbergh holds a chair in philosophy of media and technology at the University of Vienna and develops the technofascism framing in a peer-reviewed paper for AI & Society (Coeckelbergh, 2026a); Varoufakis was Greece’s finance minister. What is real is the venue and timing asymmetry. The disorientation thesis (§3.1) predicts it. Prestige reviewing infrastructure now operates under the conditions this chapter has documented: financial dependence on the platform-capital ecosystems whose products it reviews, editorial compression by the contraction of slow time, optimisation for engagement-amenable framings of complex material. Under those conditions, an argument dispersed across a long-form trade book legibly tagged with the markers of seriousness (philosophy citations, historical analogies, cultivated self-presentation) is harder to identify than the same argument compressed into bullet-point form by the platform’s affordances. The book got the slow, prestigious, broadly favourable reading; the manifesto got the fast, marginal, accurate one. That is not a malfunction of public reason from below.
It is a malfunction of public reason at the credentialing peak — and, as Coeckelbergh observes with some irony, the author at the centre of the case is one who studied critical theory under Habermas’s influence at Frankfurt and explicitly chose Schmitt’s intellectual register over Habermas’s (Coeckelbergh, 2026b). The intellectual formation of the parties most directly affected by the present condition — author and reviewers alike — did not insulate them from it. That fact is itself one of the most important data points the chapter has to offer.

The wider significance of the moment, however, is not the manifesto itself. It is that it landed in the middle of an active intra-sector disagreement. In the same months, more than five hundred Google executives and employees signed an open letter to Sundar Pichai opposing the company’s expanded Pentagon work (elDiario.es, 2026); Anthropic refused certain categories of military application involving lethal autonomous weapons and mass surveillance and was briefly placed by the administration on a supply-chain blacklist before being quietly readmitted (elDiario.es, 2026); and several thousand technology workers protested Immigration and Customs Enforcement contracts. The point is not that Silicon Valley is internally fractured — it is, but that has been true for two decades. The point is that the publicly visible fault line now runs between firms aligned with the dual-extraction-and-hard-power model and firms attempting, on terms whose stability remains to be seen, to articulate ethical limits within it. The question of which faction wins is not a tech-industry question. It is a question about the medium in which the public sphere of the next decade will operate.


3.5 What public reason needs (and does not get)

The chapter closes by specifying, against the empirical analysis above, what public reason in the broadly Habermas–Rawlsian register, revised by Fraser, Mouffe and Honneth (Fraser, 2022; Habermas, 2023; Mouffe, 2018), would require under contemporary conditions. The specification is normative; the assessment is empirical; and the gap between them is the chapter’s diagnostic claim.

3.5.1 Three requirements

  1. Slow time. Institutional spaces in which sustained, peer-reviewed, error-correcting inquiry can take place, insulated from the imperatives of attention extraction. These are spaces with their own temporal grain: months for an investigative report, years for a peer-reviewed empirical claim, decades for a scientific consensus. Their epistemic productivity depends on their insulation from the millisecond markets.

  2. Shared epistemic ground. Overlapping, even if contested, factual and methodological commitments across the politically relevant population. The standard worry — that “shared epistemic ground” smuggles in a substantive ideological consensus — is real but overdrawn. What is needed is not agreement on conclusions but agreement on the kinds of moves that count as inferential moves: that evidence is contestable but not arbitrary; that expertise is revisable but not interchangeable with confident assertion; that the question “how would I know if I were wrong?” is a legitimate question and not, as much of the platform-mediated discourse has re-coded it, a sign of weakness.

  3. Distributed access. The conditions under which both producing and consuming public reason are not class privileges but publicly distributed capabilities. Concretely, this means stable, well-funded, locally embedded journalism; public-service broadcasters insulated from political-administrative pressure; accessible higher education with disciplinary autonomy; libraries and archives functioning as commons; and, increasingly, an analogous infrastructure for the digital information environment.

3.5.2 Where each requirement is now denied

Slow time is being squeezed by a combination of three pressures: the platform-economic erosion of long-form journalism’s revenue base; the financialisation of academic publishing into engagement-style metrics (Diaz, 2021); and, most recently, the direct coercive defunding of research universities under the second Trump administration. The empirical record on the latter is now substantial. In its first year, the administration opened, or cited as the basis for funding-loss threats, more than 150 investigations into US colleges; six universities, including Brown, Columbia, Cornell, Northwestern and Penn, have publicly settled with the administration to restore frozen funding; the Department of Justice paused multiple investigations into the University of Virginia conditional on the institution’s “completing its planned reforms prohibiting DEI” through 2028; the administration attempted to cap research overhead reimbursement rates at 15% across NIH, NSF, DoE and DoD before federal courts blocked the policies; and the proposed FY2026 budget sought a 21% cut to federal scientific research funding (Spitalniak, 2026). Whatever one’s view of the substantive policy disputes, the infrastructural effect is to shrink the spaces in which slow time can be insulated.

Shared epistemic ground is denied by the very disorientation mechanism Thesis 3 describes. Hadarics (Hadarics, 2025) documents, across multiple jurisdictions, that authoritarian-leaning populations consistently and systematically misperceive the freedom of their own media environment, in directions consistent with the political project they support. This is not a perceptual error to be corrected by better fact-checking. It is the epistemic-political output of the dual-extraction architecture acting on populations whose formation already proceeded through that architecture. The Zolides work on anti-Fauci memes (Zolides, 2022), and the Morelock-Narita account of the QAnon-COVID nexus as a Habermasian legitimation crisis (Morelock and Narita, 2022), document one current of this. The Hasell-Chinn finding that aspirational lifestyle social-media use predicts anti-intellectualism and inaccurate beliefs even when the content is not explicitly political (Hasell and Chinn, 2023) documents the deeper one. The political effect is not localised to overtly political content; it is a property of the medium.

Distributed access is denied at multiple levels: the local-news collapse, the consolidation of national legacy media in pro-regime hands (Levitsky et al., 2025), the algorithmic re-distribution of attention away from public-service broadcasters, the steady commodification of educational signalling (Cartner-Morley, 2026). The political-economic direction is consistent. The infrastructural register in which the public sphere has historically functioned — a register in which information goods were produced under non-market and quasi-market conditions and distributed broadly — is being systematically substituted by a register in which they are produced under attention-extraction conditions and distributed to whoever the algorithm selects.

3.5.3 What can be done that is compatible with public reason

The chapter resists offering a programme. Its argument is diagnostic, and the temptation to follow diagnosis with a fluent solution is one of the things this book is at pains to identify and avoid. But the empirical material discussed in this chapter does allow some convergent observations on the shape of any plausible remedy.

First, structural intervention on the dual-extraction model is non-optional. Privacy regulation in the European model (Russo and Malgieri, 2023), algorithmic accountability requirements in the mode of Yeung’s regulation-as-governance framework (Yeung, 2017) and Binns’s algorithmic accountability primer (Binns, 2022), and contextual-integrity-based standards in the Nissenbaum tradition (Nissenbaum, 2010) are all candidates. None of them is sufficient on its own; collectively they constitute the regulatory minimum below which the public sphere cannot be reconstituted. The Mittelstadt et al. taxonomy of the ethics of algorithms (Mittelstadt et al., 2016) and the Floridi-Cowls unified-framework synthesis (Floridi and Cowls, 2019) supply the normative architecture; the work of operationalising it is political and is currently understaffed.

Second, public-interest information infrastructure must be re-built on terms that the platform economy cannot capture. Gillespie’s Custodians of the Internet (Gillespie, 2018) and Green’s Smart Enough City (Green, 2022) are useful here for what they avoid: the techno-solutionist trap of replacing one platform monopoly with another, even when the second is publicly owned. The mechanism the literature converges on is plural, decentralised, accountable infrastructure: public-service media insulated from pressure; libraries and archives as digital-and-physical commons; municipal and regional information capacity not contingent on Silicon Valley hosting; professional journalism with stable funding sources that are not the patrimonial whim of one billionaire.

Third — and this is the move the public-health-diplomacy literature makes most clearly — defending public reason in this environment requires political literacy on the part of the experts whose authority is the standing target of populist mobilisation. McKee and colleagues’ agenda for public health diplomacy in an age of populism (McKee et al., 2025) is in this respect a model that travels well beyond public health. The agenda’s nine recommendations include creating diplomacy laboratories for crisis simulation, empowering non-state actors (cities, regions, NGOs) as diplomatic agents, strengthening public engagement and active listening rather than unilateral broadcast, protecting health workers and journalists who become first targets of authoritarian-populist regimes, building alternative accountability systems in partnership with investigative media, reframing health as a diplomatic priority, diversifying funding to reduce dependence on politicised national budgets, training future leaders with explicit political literacy and negotiation skills, and reinventing multilateralism rather than ceding it (McKee et al., 2025). The transposition to the broader public-sphere agenda is straightforward: substitute public reason for public health, and the structure of the recommendations is the structure of any plausible defence.

Fourth, the cognitive infrastructure on which all of this rests must be rebuilt at the level of educational practice. The relevant working principle, drawing on the Hogenboom material discussed above, is what Vivienne Ming has called productive friction (Hogenboom, 2026): structuring AI tools so that they ask questions rather than supply answers, so that they challenge the user’s inferences rather than substitute for them. The “nemesis prompt” — the explicit instruction to a generative model to argue against the user’s own conclusions — is one specific instance. The more general principle is that cognitive tools should be designed to exercise the capacity they assist, not to substitute for it. This is the AI-era analogue of the older pedagogical principle that calculators are appropriate after, not instead of, mastery of the underlying operations.

The empirical evidence underwriting this argument is striking. Kosmyna and colleagues at the MIT Media Lab, as reported in Hogenboom (2026), documented up to a 55% reduction in neural activity among students composing short essays with LLM assistance (a finding from a preprint, not yet peer-reviewed at the time of writing). Ming, separately, recorded diminished gamma-band activity — a marker of cognitive effort, and one associated with later-life cognitive decline — in habitual chatbot users. Both findings converge on a single point: the friction that AI removes is not a cost to be minimised but the very mechanism by which the capacities it purports to support are consolidated.

3.5.4 The pedagogical complication, again

The Holt complication (2025) returns at this point with sharpened relevance. A defence of public reason can become its own form of cognitive constriction if it presents itself as the only permissible epistemic posture. The diagnosis advanced in this chapter is consistent with substantial substantive disagreement between the populations it treats as deliberative agents. What it is inconsistent with is an environment that systematically deprives those populations of the conditions under which their disagreements could be productively contested. The distinction is not subtle, but it is easy to lose under polemical pressure. The intellectual formation that is in danger is not a particular set of conclusions; it is the capacity for any set of conclusions to be formed and revised in dialogue. McDonald’s recent argument that the disciplines themselves — in their differing methods, frameworks and characteristic moves — already constitute a non-trivial viewpoint diversity (McDonald, 2026) is, in this register, a useful corrective to the assumption that public reason is a single homogeneous register that some external authority must impose.


If Part I has established that the institutional and infrastructural conditions of public reason are eroding, Part II asks where, then, intellectual life appears in the 2020s — and finds that, in its publicly visible forms, it appears as performance, brand and consumer identity, on the surface where the attention economy permits it to monetise. The next chapter takes up the case materials.


  1. For the European debate, see the wave of school-phone restrictions implemented or announced in France (2018, 2024 extension), the Netherlands (2024), Spain by autonomous community (2024–2025), the United Kingdom (DfE guidance, 2024), and Sweden’s partial reversal of one-to-one digital education in primary schools (2023–). For the AI-in-curriculum debate, the Faculty Concerned About ASU’s New AI Course Builder coverage (Inside Higher Ed Staff, 2026) is a representative current-state snapshot.↩︎