3 The Attention Economy and the Displacement of Public Reason
The authoritarian assault analysed in Chapter 2 is politically authored but technically conditioned. The attention-extractive platform economy — theorised by Zuboff, Williams, Citton, Wu and Hari (Citton, 2017; Hari, 2022; Williams, 2018; Wu, 2016; Zuboff, 2019) but better understood, here, in the broader register of Cohen, Véliz, Nissenbaum and Coeckelbergh (Coeckelbergh, 2022; Cohen, 2019; Nissenbaum, 2010; Véliz, 2020) — is not a passive backdrop to political anti-intellectualism but its constitutive technical infrastructure. Four sub-arguments:
The platform economy is structurally hostile to the practices — slow, error-correcting, peer-mediated, institutionally framed — that constitute public reason. That hostility begins in early socialisation, not in the adult information environment.
Attention extraction and surveillance extraction are not separable business models but the two faces of the same political-economic order. The weakening of privacy and the cognitive disorientation of the public sphere are co-produced.
The same economy provides the discursive vector along which authoritarian-populist mobilisation operates, and the commercial mechanism through which the platforms’ owners align with the regime they amplify. The recursion is well-documented and politically consequential.
The same economy provides the channel through which, in Part II, intellect re-appears not as deliberative practice but as branded surface — in the only register the platforms permit to be monetised.
The argument is not technological-determinist. The platform economy is a political-economic order, not a fact of nature; it could have been built otherwise, and could yet be unbuilt or rebuilt. What is established here is the more limited but unavoidable claim that the attention economy is the medium in which the present crisis is metabolised — and that any account of political anti-intellectualism that brackets it is misspecified.
3.1 Three theses on attention
Thesis 1 — Attention is constitutively scarce, and deliberation requires it in non-fungible quantities
Following Citton (Citton, 2017) and Wolf (Wolf, 2018), the cognitive operations characteristic of public reason — sustained reading, peer contestation, the slow construction of warranted belief, the revision of one’s own commitments under the pressure of better evidence — require attentional conditions that are sociologically distributed and culturally cultivated. They cannot be conjured on demand. They cannot be reproduced under conditions of constant interruption. They are, in Wolf’s phrase, deep-reading capabilities: not native equipment but acquired neuro-cognitive infrastructure that has to be built and then maintained.
This is not a romantic claim about reading. It is a claim about the temporal conditions of inferential work: forming a justified belief on a politically consequential matter requires sustained contact with the relevant evidence, exposure to disconfirming arguments, time enough to revise. Each of those operations has a characteristic temporal grain, and that grain is incompatible with the millisecond-rate competitive bidding of attention markets. When the temporal substrate is removed, what remains is not a faster version of public reason but a different cognitive activity altogether — one that the rest of this chapter will characterise.
Thesis 2 — Platform business models are structurally engineered to extract attention as a commodity
Drawing on Zuboff’s analysis of surveillance capitalism (Zuboff, 2019), Williams’s explicit indictment of the persuasion industry (Williams, 2018), and Cohen’s structural-legal account of informational capitalism (Cohen, 2019), the chapter takes as established that the dominant platform business model rewards affective intensity, novelty, in-group reinforcement and outrage at scales and rates that are constitutively incompatible with deliberative cognition. This is not a matter of the platforms occasionally hosting bad content. It is a matter of the optimisation function. The metric the system optimises is engagement; the content that maximises engagement is the affectively saturated, sharply polarised, low-deliberation material that the system therefore privileges by design (Gillespie, 2018).
The point bears repeating because rebuttals to it routinely confuse the descriptive and the moral. To say the system is engineered to extract attention is not to say the engineers are wicked; it is to say that wickedness is unnecessary to the result. The business model produces the cognitive degradation as a by-product of its core operation. No conspiracy is required.
Thesis 3 — The result is not stupidity but disorientation
Picking up Harsin’s analysis of post-truth as a regime of attention rather than a regime of truth (Harsin, 2024), and Hadarics’s empirical work on authoritarianism’s biased perception of media freedom (Hadarics, 2025), the chapter argues that the platform-mediated public is not characterised by ignorance — it is over-informed — but by epistemic disorientation: the inability to coordinate justified belief across the politically relevant population. This is the political-cognitive condition under which competitive authoritarianism is electorally feasible. It is also the condition under which, as Chapter 4 develops in detail, being seen to think becomes a more reliable signal of status than the thinking itself.
The disorientation thesis carries important consequences for intervention. If the problem were stupidity, the remedy would be education in the older sense — more facts, better-credentialed sources, sharper fact-checks. The empirical record on fact-checking’s downstream electoral effects gives little ground for optimism in that register (Morelock and Narita, 2022). The disorientation thesis predicts what we observe: a public that is perfectly capable of identifying a falsehood when isolated, and yet unable to coordinate that capacity into stable, publicly shared, action-guiding beliefs. The damage is to the coordination function of the information ecosystem, not to individual cognitive capacity.
3.3 The dismantling of gatekeeping
A short historical-empirical section reconstructs the dismantling, in two decades, of the institutional gatekeeping infrastructure on which twentieth-century liberal democracies depended for their information supply. The argument acknowledges, with Fraser and Mouffe (Fraser, 2022; Mouffe, 2018), the legitimate critiques of that earlier infrastructure — monopolistic in ownership, gendered and racially exclusionary in staffing and editorial focus, uncritically deferential to state security narratives in ways the post-Iraq decade catalogued. The point of the section is not to romanticise the prior arrangement. It is to show that what has replaced it is not a more democratic alternative. It is an infrastructure of engagement extraction under private monopoly control, with substantively worse properties on every dimension that matters for public reason.
3.3.1 The collapse of the institutional middle
The empirical anchors are by now familiar but worth assembling in one place. The collapse of US local newspapers (Abernathy, 2024); the financialisation and gutting of the British regional press; the Washington Post episode of late 2024 and what it inaugurated; the documented decline in literary reading in the anglophone world. The common thread is not the disappearance of journalism as such but the disappearance of the intermediate institutions: those of sufficient size to fund investigative work, of sufficient independence to publish it, and of sufficient territorial proximity to know what their readers needed to know.
That middle layer was where most of the actual work of the older public sphere happened. Its replacement is twofold. At the local level, the void has been filled by partisan content farms and by algorithmic content distribution that simulates the form of local journalism without its substance. At the national level, the void has been filled by what the platform economy permits to exist: a small number of high-circulation outlets owned by individuals or families with substantial commercial and political stakes elsewhere, and a long tail of personal-brand journalism on Substack and similar platforms.
3.3.2 The 2025 acceleration
The standard treatment dates this transformation to the 2010s and treats the current moment as its continuation. The current moment is, on the available evidence, qualitatively different. Levitsky, Way and Ziblatt’s December 2025 analysis (Levitsky et al., 2025) documents, in real time, the consolidation of US legacy media in pro-regime hands. Skydance Media’s acquisition of Paramount, with the Federal Communications Commission’s approval, gave the Ellison family control of CBS, whose subsequent rightward editorial shift was neither coincidental nor concealed. The Ellisons have moved to acquire Warner Bros. Discovery (which owns CNN) and a US version of TikTok. Fox News and X are already in the hands of openly pro-administration owners. CBS cancelled the late-night programme of a prominent administration critic and tightened editorial control over its flagship news programme; The Washington Post shifted its editorial line markedly to the right; Condé Nast gutted Teen Vogue’s political reporting (Levitsky et al., 2025).
The pattern is recognisable from the comparative literature on competitive authoritarianism (Levitsky et al., 2025). The Hungarian template under Orbán — in which control of the country’s most-read news website Origo was transferred via licensing pressure and government contracts to a politically allied private company, and similar manoeuvres extended to over five hundred Hungarian outlets — is now running, mutatis mutandis, in the United States. The point is not analogy. The point is that the combination of platform-mediated discourse and pro-regime ownership of the legacy commanding heights is a recognised authoritarian-consolidation mechanism, with documented political consequences in multiple jurisdictions.
The reflexive rebuttal — but the dissident voices are still online — misses how the mechanism works. Authoritarian-tilted attention infrastructures do not need to silence dissent. They need only to ensure that the dissenting message is reliably outcompeted, in the relevant attention markets, by the regime-aligned message. The phenomenon Levitsky, Way and Ziblatt document of widespread self-censorship by donors, law firms, universities, business leaders and individual journalists (Levitsky et al., 2025) is downstream of this: when the costs of speaking are observably high and the rewards are observably low, people stop speaking. The Murkowski formulation quoted in their analysis — the senator’s open admission that she and her colleagues are routinely afraid to use their voices — is a data point about the cumulative effect of the infrastructure described in this chapter, not an isolated political anecdote.
3.4 Privacy, surveillance and the public sphere: the dual extraction
The literature on the attention economy and the literature on surveillance capitalism are usually treated as adjacent but distinct. Treating them separately is a mistake. They describe two moments of the same political-economic process, and the political consequences are unintelligible if either is bracketed.
3.4.1 Privacy as democratic infrastructure
The argument runs as follows. Attention extraction at industrial scale requires information about the targets of the extraction: which content captures which user, at what time of day, in which emotional state, in response to which preceding stimulus. That information is personal data. The platform economy’s revenue model is therefore not merely an attention-extraction model; it is, as Cohen (Cohen, 2019) and Véliz (Véliz, 2020) have insisted, a data-extraction model in which attention is the proximate output and behavioural prediction is the deeper one. To weaken privacy is, in this register, to enable the very mechanism by which attention is captured at scale. To enable that mechanism is to cede control of the informational substrate of the public sphere. The two erosions are not adjacent; they are the same process described at different levels of abstraction.
The argument I developed in the wake of the Cambridge Analytica revelations of 2017–2018 (Moreno Muñoz, 2018) has, I think, aged well, and forms a direct antecedent to the case made here. The starting observation was mundane: liberal democracies face the challenge of protecting privacy in a sociotechnical context characterised by general-purpose access to information technologies and digital platforms of mediated interaction. The positive face of that access — new spaces for civic activism, accountability, political participation — is real and should be defended. Its counter-face, however, is the gradual attenuation of privacy beyond any defensible threshold: a process in which consumers nominally trade personal data for free services and small discounts, in which both unambiguously totalitarian states and security-prioritising democracies enthusiastically exploit the resulting infrastructures of hyper-surveillance, and in which the framework conditions for informed political deliberation are quietly dismantled. I argued then, and develop the case at greater length here, that the Cambridge Analytica episode should be read in this light: not as an isolated electoral scandal but as the public-relations crystallisation of an architecture that was already in place and remains in place. What that episode made visible was the electoral operationalisation of the dual-extraction model: targeted attention capture, calibrated by extracted behavioural data, deployed against named individuals at the moment their political deliberation was most plastic. The architecture has not been retired. It has been normalised — and the eight years since have furnished considerably more evidence for the diagnosis than against it.
Mark Zuckerberg apologised to the US Senate for Cambridge Analytica in April 2018, pledging reform. The data-extraction architecture under discussion had been in operation since around 2010. By 2026 it remained in place, and it remained the industry standard.
Nissenbaum’s framework of contextual integrity (Nissenbaum, 2010) allows us to be more precise about what is wrong here. Privacy, on this account, is not a matter of secrecy in the abstract; it is a matter of information flowing in accordance with norms appropriate to its context. When a piece of behavioural data harvested in one context (say, a search query about a child’s medical symptoms) is re-deployed in another context (say, a political-ad-targeting inference about the user’s likely vote), what has been violated is not a private space but a contextual norm. The public sphere has specific contextual norms of its own — the norms of public argument, of contestable claims, of attribution and accountability — and the dual-extraction architecture systematically violates them. To defend privacy in this register is to defend the integrity of those norms, which is to say, to defend the public sphere itself.
The implications for democratic theory have been worked out forcefully by Cohen (Cohen, 2019) and Véliz (Véliz, 2020). Privacy is not a luxury good for the affluent and the paranoid. It is an infrastructural condition for the formation of the kind of politically agentic subject that liberal-democratic theory has always presupposed. A subject whose every behavioural trace is legible to commercial and state actors, whose political dispositions can be predicted in advance and pre-empted by tailored interventions, is not the deliberating citizen of either Habermas’s or Mouffe’s reconstructions of democratic politics. That subject is the predictable consumer of informational capitalism. The two theoretical lineages disagree on much else but converge on this: informational asymmetries between platform and user constitute a form of structural power, and structural power left ungoverned generates domination (Coeckelbergh, 2022; Cohen, 2019).
3.4.2 Racialised surveillance and the geography of harm
The dual-extraction model is not evenly distributed. Browne’s Dark Matters (Browne, 2015), Benjamin’s Race After Technology (Benjamin, 2019) and Eubanks’s Automating Inequality (Eubanks, 2018) have demonstrated, with case-level specificity, that the burden of algorithmic surveillance falls disproportionately on already marginalised populations. The pattern is consistent enough that it cannot be coded as a series of bugs. Surveillance technologies deployed in welfare administration, predictive policing, immigration enforcement and credit scoring systematically reproduce, and in many cases amplify, existing patterns of racial, class and gender subordination. The Dencik et al. Data Justice programme (Dencik et al., 2022) has consolidated this work into a broader normative framework in which the question is no longer “is privacy violated?” but “are the harms of informational extraction justly distributed?”. For the argument of this chapter, the implication is that the political effects of attention-and-surveillance capitalism include a differential compromise of the public sphere: the populations whose participation public reason most needs are precisely those most exposed to the intimidating, sorting and predictive functions of the infrastructure.
3.4.3 Algorithmic discourse as governance: the Palantir moment
Drawing on Akbari’s digital authoritarianism framework (Akbari, 2025) and Dean’s analysis of neofeudalising communicative capitalism (Dean, 2024), the chapter then argues that the platform economy is not merely commercial but governmental: it shapes the horizon of the politically thinkable. Two extended examples make this concrete.
The first is the role of X under its current ownership as a politically aligned amplifier of the second Trump administration’s preferred narratives. The recursive loop is by now well-documented: algorithmic amplification generates electoral mandate; electoral mandate licenses regulatory withdrawal from platforms; regulatory withdrawal entrenches algorithmic amplification. The point is not that the platform owner is unusually ideological — although the ownership-change-induced moderation collapse is by this point empirically established — but that the ownership-aligned platform is the characteristic governance form of competitive authoritarianism in its platform-symbiotic phase. It is what differentiates the 2025-and-after period from the 2016 Cambridge Analytica moment: the platform is no longer being instrumentalised against its owner’s preferences; the platform’s owner, the regime, and the extractive business model are now openly aligned (Levitsky et al., 2025).
The second is the publication, in spring 2026, of the so-called Palantir manifesto: a twenty-two-point declaration by the company’s CEO Alex Karp, drawn from the trade book co-authored with Nicholas Zamiska (Karp and Zamiska, 2025), which articulates more candidly than any recent corporate document the worldview of the Silicon Valley faction now openly aligned with the administration. The manifesto’s claims — that the post-war neutralisation of Germany and Japan should be reversed; that hard power must replace what it dismisses as theatrical pluralism; that the AI weapons era is the inheritor of the atomic deterrence era; that Silicon Valley owes the United States a moral debt to be repaid in defence contracts — speak for themselves (Sérvulo González, 2026). So does the response. Yanis Varoufakis’s characterisation of the document as the legitimating ideology of technofeudalism (Varoufakis, 2026), and Mark Coeckelbergh’s characterisation of it as technofascism (Coeckelbergh, 2026b), both register what is most arresting about it: that the document is a manifesto, not a marketing exercise, issued by a private firm deeply embedded in state surveillance and military targeting infrastructure (Coeckelbergh, 2026b). As Karpf observed, the demand underlying the rhetorical fireworks is straightforward: that the state should spend exceptional sums of money on Palantir’s products (Karpf, 2026).
Palantir’s own X profile describes its mission as “software that dominates” (Coeckelbergh, 2026b). What it dominates is left to the customer.
The reception evidence reinforces the structural reading. The longer-form book that the manifesto distils (Karp and Zamiska, 2025) was published by Crown in early 2025 to warm reviews across the most prestigious anglophone outlets and a strikingly cohesive set of endorsements. The publisher’s promotional page, which is itself an artefact worth analysing for what it claims to constitute as consensus, presents the book as praised by the Financial Times, the Wall Street Journal, the Washington Post, The New York Times Magazine, the Times Literary Supplement and The Times of London, the last describing it as the manifesto inspiring the current British government (Crown Currency, 2025–2026). Niall Ferguson read the book as a stirring case for a new Manhattan Project for the AI age; Walter Isaacson read it as a rallying cry for renewed cooperation between the technology industry and government (Crown Currency, 2025–2026). The endorsement set extends across the recognisable defence-and-finance establishment (James Mattis, Anders Fogh Rasmussen, Jamie Dimon, Stanley Druckenmiller, Eric Schmidt), the popular-history wing of the public-intellectual class (Isaacson, Ferguson, David Ignatius), and the conservative-political-theory press (Quillette, First Things, La Vanguardia) (Crown Currency, 2025–2026). The pattern is recognisable enough to be diagnostic: the book was received in 2025, by reviewers credentialed in history, social theory and constitutional law in functioning liberal democracies, as a serious work of political theory.
Read against the responses Coeckelbergh, Varoufakis and Karpf published a year later to the X-platform manifesto extracted from the same book by the same author, the asymmetry is striking. The underlying argument did not change in substance between the two documents; the medium and the framing did. The book’s three hundred pages of literary citation, corporate biography and Cold-War analogy were received as serious; the same worldview compressed into twenty-two declarative bullet points was identified, by less venue-prestigious readers operating outside the legacy reviewing infrastructure, as a manifesto whose lineage runs through Carl Schmitt rather than through liberal political theory (Coeckelbergh, 2026a, 2026b). The competence asymmetry is not real — Coeckelbergh holds a chair in philosophy of media and technology at the University of Vienna and develops the technofascism framing in a peer-reviewed paper for AI & Society (Coeckelbergh, 2026a); Varoufakis was Greece’s finance minister. What is real is the venue and timing asymmetry. The disorientation thesis (§3.1) predicts it. Prestige reviewing infrastructure now operates under the conditions this chapter has documented: financial dependence on the platform-capital ecosystems whose products it reviews, editorial compression by the contraction of slow time, optimisation for engagement-amenable framings of complex material. Under those conditions, an argument dispersed across a long-form trade book legibly tagged with the markers of seriousness (philosophy citations, historical analogies, cultivated self-presentation) is harder to identify than the same argument compressed into bullet-point form by the platform’s affordances. The book got the slow, prestigious, broadly favourable reading; the manifesto got the fast, marginal, accurate one. That is not a malfunction of public reason from below.
It is a malfunction of public reason at the credentialing peak — and, as Coeckelbergh observes with some irony, the author at the centre of the case is one who studied critical theory under Habermas’s influence at Frankfurt and explicitly chose Schmitt’s intellectual register over Habermas’s (Coeckelbergh, 2026b). The intellectual formation of the parties most directly affected by the present condition — author and reviewers alike — did not insulate them from it. That fact is itself one of the most important data points the chapter has to offer.
The wider significance of the moment, however, is not the manifesto itself. It is that it landed in the middle of an active intra-sector disagreement. In the same months, more than five hundred Google executives and employees signed an open letter to Sundar Pichai opposing the company’s expanded Pentagon work (elDiario.es, 2026); Anthropic refused certain categories of military application involving lethal autonomous weapons and mass surveillance and was briefly placed by the administration on a supply-chain blacklist before being quietly readmitted (elDiario.es, 2026); and several thousand technology workers protested Immigration and Customs Enforcement contracts. The point is not that Silicon Valley is internally fractured — it is, but that has been true for two decades. The point is that the publicly visible fault line now runs between firms aligned with the dual-extraction-and-hard-power model and firms attempting, on terms whose stability remains to be seen, to articulate ethical limits within it. The question of which faction wins is not a tech-industry question. It is a question about the medium in which the public sphere of the next decade will operate.
3.5 What public reason needs (and does not get)
The chapter closes by specifying, against the empirical analysis above, what public reason in the broadly Habermas–Rawlsian register, revised by Fraser, Mouffe and Honneth (Fraser, 2022; Habermas, 2023; Mouffe, 2018), would require under contemporary conditions. The specification is normative; the assessment is empirical; and the gap between them is the chapter’s diagnostic claim.
3.5.1 Three requirements
Slow time. Institutional spaces in which sustained, peer-reviewed, error-correcting inquiry can take place insulated from the imperatives of attention extraction. These are spaces with their own temporal grain: months for an investigative report, years for a peer-reviewed empirical claim, decades for a scientific consensus. Their epistemic productivity depends on their insulation from the millisecond markets.
Shared epistemic ground. Overlapping, even if contested, factual and methodological commitments across the politically relevant population. The standard worry — that “shared epistemic ground” smuggles in a substantive ideological consensus — is real but overdrawn. What is needed is not agreement on conclusions but agreement on the kinds of moves that count as inferential moves: that evidence is contestable but not arbitrary; that expertise is revisable but not interchangeable with confident assertion; that the question “how would I know if I were wrong?” is a legitimate question and not, as much of the platform-mediated discourse has re-coded it, a sign of weakness.
Distributed access. The conditions under which both producing and consuming public reason are not class privileges but publicly distributed capabilities. Concretely, this means stable, well-funded, locally embedded journalism; public-service broadcasters insulated from political-administrative pressure; accessible higher education with disciplinary autonomy; libraries and archives functioning as commons; and, increasingly, an analogous infrastructure for the digital information environment.
3.5.2 Where each requirement is now denied
Slow time is being squeezed by a combination of three pressures: the platform-economic erosion of long-form journalism’s revenue base; the financialisation of academic publishing into engagement-style metrics (Diaz, 2021); and, most recently, the direct coercive defunding of research universities under the second Trump administration. The empirical record on the latter is now substantial. In its first year, the administration opened, or cited as grounds for funding-loss threats, more than 150 investigations into US colleges; six universities, including Brown, Columbia, Cornell, Northwestern and Penn, have publicly settled with the administration to restore frozen funding; the Department of Justice paused multiple investigations into the University of Virginia conditional on the institution’s “completing its planned reforms prohibiting DEI” through 2028; the administration attempted to cap research overhead reimbursement rates at 15% across NIH, NSF, DoE and DoD before federal courts blocked the policies; and the proposed FY2026 budget sought a 21% cut to federal scientific research funding (Spitalniak, 2026). Whatever one’s view of the substantive policy disputes, the infrastructural effect is to shrink the spaces in which slow time can be insulated.
Shared epistemic ground is denied by the very disorientation mechanism Thesis 3 describes. Hadarics (Hadarics, 2025) documents, across multiple jurisdictions, that authoritarian-leaning populations consistently and systematically misperceive the freedom of their own media environment, in directions consistent with the political project they support. This is not a perceptual error to be corrected by better fact-checking. It is the epistemic-political output of the dual-extraction architecture acting on populations whose formation already proceeded through that architecture. The Zolides work on anti-Fauci memes (Zolides, 2022), and the Morelock-Narita account of the QAnon-COVID nexus as a Habermasian legitimation crisis (Morelock and Narita, 2022), document one current of this. The Hasell-Chinn finding that aspirational lifestyle social-media use predicts anti-intellectualism and inaccurate beliefs even when the content is not explicitly political (Hasell and Chinn, 2023) documents the deeper one. The political effect is not localised to overtly political content; it is a property of the medium.
Distributed access is denied at multiple levels: the local-news collapse, the consolidation of national legacy media in pro-regime hands (Levitsky et al., 2025), the algorithmic re-distribution of attention away from public-service broadcasters, the steady commodification of educational signalling (Cartner-Morley, 2026). The political-economic direction is consistent. The infrastructural register in which the public sphere has historically functioned — a register in which information goods were produced under non-market and quasi-market conditions and distributed broadly — is being systematically substituted by a register in which they are produced under attention-extraction conditions and distributed to whoever the algorithm selects.
3.5.3 What can be done that is compatible with public reason
The chapter resists offering a programme. Its argument is diagnostic, and the temptation to follow diagnosis with a fluent solution is one of the things this book is at pains to identify and avoid. But the empirical material discussed in this chapter does allow some convergent observations on the shape of any plausible remedy.
First, structural intervention on the dual-extraction model is non-optional. Privacy regulation in the European model (Russo and Malgieri, 2023), algorithmic accountability requirements in the mode of Yeung’s regulation-as-governance framework (Yeung, 2017) and Binns’s algorithmic accountability primer (Binns, 2022), and contextual-integrity-based standards in the Nissenbaum tradition (Nissenbaum, 2010) are all candidates. None of them is sufficient on its own; collectively they constitute the regulatory minimum below which the public sphere cannot be reconstituted. The Mittelstadt et al. taxonomy of the ethics of algorithms (Mittelstadt et al., 2016) and the Floridi-Cowls unified-framework synthesis (Floridi and Cowls, 2019) supply the normative architecture; the work of operationalising it is political and is currently understaffed.
Second, public-interest information infrastructure must be re-built on terms that the platform economy cannot capture. Gillespie’s Custodians of the Internet (Gillespie, 2018) and Green’s Smart Enough City (Green, 2022) are useful here for what they avoid: the techno-solutionist trap of replacing one platform monopoly with another, even when the second is publicly owned. The mechanism the literature converges on is plural, decentralised, accountable infrastructure: public-service media insulated from pressure; libraries and archives as digital-and-physical commons; municipal and regional information capacity not contingent on Silicon Valley hosting; professional journalism with stable funding sources that are not the patrimonial whim of one billionaire.
Third — and this is the move the public-health-diplomacy literature makes most clearly — defending public reason in this environment requires political literacy on the part of the experts whose authority is the standing target of populist mobilisation. McKee and colleagues’ agenda for public health diplomacy in an age of populism (McKee et al., 2025) is in this respect a model that travels well beyond public health. The agenda’s nine recommendations include creating diplomacy laboratories for crisis simulation; empowering non-state actors (cities, regions, NGOs) as diplomatic agents; strengthening public engagement and active listening rather than unilateral broadcast; protecting health workers and journalists who become first targets of authoritarian-populist regimes; building alternative accountability systems in partnership with investigative media; reframing health as a diplomatic priority; diversifying funding to reduce dependence on politicised national budgets; training future leaders with explicit political literacy and negotiation skills; and reinventing multilateralism rather than ceding it (McKee et al., 2025). The transposition to the broader public-sphere agenda is straightforward: substitute public reason for public health, and the structure of the recommendations is the structure of any plausible defence.
Fourth, the cognitive infrastructure on which all of this rests must be rebuilt at the level of educational practice. The relevant working principle, drawing on the Hogenboom material discussed above, is what Vivienne Ming has called “productive friction” (Hogenboom, 2026): structuring AI tools so that they ask questions rather than supply answers, so that they challenge the user’s inferences rather than substitute for them. The “nemesis prompt” — the explicit instruction to a generative model to argue against the user’s own conclusions — is one specific instance. The more general principle is that cognitive tools should be designed to exercise the capacity they assist, not to substitute for it. This is the AI-era analogue of the older pedagogical principle that calculators are appropriate after, not instead of, mastery of the underlying operations.
The empirical evidence underwriting this argument is striking. Kosmyna and colleagues at the MIT Media Lab, as reported in Hogenboom (2026), documented up to a 55% reduction in neural activity among students composing short essays with LLM assistance (a finding from a preprint, not yet peer-reviewed at the time of writing). Ming, separately, recorded diminished gamma-band activity — a marker of cognitive effort, and one associated with later-life cognitive decline — in habitual chatbot users. Both findings converge on a single point: the friction that AI removes is not a cost to be minimised but the very mechanism by which the capacities it purports to support are consolidated.
3.5.4 The pedagogical complication, again
The Holt complication (Holt, 2025) returns at this point with sharpened relevance. A defence of public reason can become its own form of cognitive constriction if it presents itself as the only permissible epistemic posture. The diagnosis advanced in this chapter is consistent with substantial substantive disagreement between the populations it treats as deliberative agents. What it is inconsistent with is an environment that systematically deprives those populations of the conditions under which their disagreements could be productively contested. The distinction is not subtle, but it is easy to lose under polemical pressure. The intellectual formation that is in danger is not a particular set of conclusions; it is the capacity for any set of conclusions to be formed and revised in dialogue. McDonald’s recent argument that the disciplines themselves — in their differing methods, frameworks and characteristic moves — already constitute a non-trivial viewpoint diversity (McDonald, 2026) is, in this register, a useful corrective to the assumption that public reason is a single homogeneous register that some external authority must impose.
If Part I has established that the institutional and infrastructural conditions of public reason are eroding, Part II asks where, then, intellectual life appears in the 2020s — and finds that, in its publicly visible forms, it appears as performance, brand and consumer identity, on the surface where the attention economy permits it to monetise. The next chapter takes up the case materials.
For the European debate, see the wave of school-phone restrictions implemented or announced in France (2018, 2024 extension), the Netherlands (2024), Spain by autonomous community (2024–2025), the United Kingdom (DfE guidance, 2024), and Sweden’s partial reversal of one-to-one digital education in primary schools (2023–). For the AI-in-curriculum debate, the Faculty Concerned About ASU’s New AI Course Builder coverage (Inside Higher Ed Staff, 2026) is a representative current-state snapshot.