6  AI and the Automation of Intellectual Labour

Between roughly 2022 and 2026, generative artificial intelligence has displaced significant quantities of the routine cognitive labour through which intellectual life was historically reproduced — summarising, drafting, translating, formatting, citing, explaining — and the displacement is still in train. This is not a marginal addendum to the analyses of the preceding chapters but a third structural force, alongside political authoritarianism (Part I) and platform-economic capture (Chapter 3), that is reshaping the conditions of public reason in our period.

Three claims organise what follows. First, AI is a labour-process transformation: it changes who does what cognitive work, on what timescales, with what error-distributions, at what cost. Second, AI is an infrastructural transformation: the firms that own the foundation models are largely the same firms — or sit in the same oligopolistic structure — as those that own the attention-economic platforms, with the consequence that the political-economic incentives that produced the engagement economy now structure the knowledge-production economy. Third, AI is a cultural-symbolic transformation: the displacement of routine intellectual labour forces a renegotiation of what a human mind is for, and therefore of what counts as a desirable cognitive trait — a renegotiation that intersects directly with the performative-intellectualism analysis of Part II.

Foundation-model owners and the attention-economic incumbents.
The same firms that own the dominant foundation models also own — or sit in oligopolistic relation to — the dominant infrastructures of the engagement economy.
Alphabet controls Google Search, YouTube and Android, and develops the Gemini family of models through Google DeepMind.
Meta owns Facebook, Instagram, WhatsApp and Threads, and develops the Llama model family.
Microsoft holds an exclusive cloud and commercial partnership with OpenAI, integrates GPT-class models across Windows, Microsoft 365 and Azure, and operates one of the three hyperscale clouds on which most frontier training runs take place.
Amazon operates AWS — the largest of the three hyperscale clouds — develops its Nova and Titan model families, and holds a multi-billion-dollar partnership with Anthropic.
Apple controls iOS and macOS distribution, has begun deploying on-device models under the Apple Intelligence brand, and partners with OpenAI for off-device inference. The continuity is structural rather than coincidental: training-scale compute, large-scale data pipelines and consumer-facing distribution channels concentrate where capital is already concentrated, reproducing under the AI label the oligopolistic logic that structured the engagement economy (Bratton, 2026; Zuboff, 2019).

A clarification is in order at the outset. What follows is not a polemic against AI, and certainly not a refusal of the genuine benefits of these tools. Many of the routines they have absorbed were tedious and unequally distributed; many of the capabilities they make available were previously the privilege of expensive infrastructures. The argument is that how the technology has been built, who owns it, and under what social practices it is now being deployed are political-economic questions that have been largely answered, so far, by a small number of firms whose interests are not those of the cognitive enterprise — the collective human effort to improve the quality of thinking, of learning, and of public reasoning. The point holds even, and perhaps especially, when some of these firms make the question of public purpose explicit: Palantir’s April 2026 manifesto, taken up later in the chapter, recasts democratic deliberation itself as a “friction” to be engineered away (Coeckelbergh, 2026; Karp and Zamiska, 2025). That asymmetry, and what it costs, is the chapter’s subject.

6.1 What is automated, what is not

A descriptive distinction can be drawn — provisional, contestable, revisable — between cognitive tasks where rapidly improving automation is now a fact and tasks where, on present trajectories, automation remains partial or absent.

On the side of rapidly improving automation: summarisation of documents, translation between major languages, first-draft generation of routine prose, citation formatting and bibliographic search, code completion, standard explanation of established concepts. On the side of resistance: original empirical investigation in unfamiliar terrain; sustained interpretation that requires accountability for an inherited disciplinary archive; peer contestation grounded in tacit professional norms; pedagogical relationships under conditions of mutual recognition.

The line is not stable. Boundaries shift; resistant tasks become partly automatable; new forms of human contribution appear. What matters analytically is that the distribution of cognitive labour has shifted, and is shifting, in ways that have political-economic and cultural consequences regardless of where any particular boundary settles in 2027 or 2030.

A 2025 survey-based study of knowledge workers using generative AI in their daily tasks reported self-perceived reductions in cognitive effort across drafting, summarising and information retrieval, with critical engagement varying according to where the worker’s confidence lay: workers more confident in the AI’s output reported less critical evaluation of that output, while workers more confident in their own expertise reported more (Lee et al., 2025). Earlier productivity studies on professional writing tasks documented substantial speed gains from AI assistance, with quality assessments varying by task (Noy and Zhang, 2023).

Stadler, Bannert and Sailer (2024) found that LLM use reduces immediate cognitive load in student scientific inquiry but compromises depth of engagement. Kosmyna et al. (2025), in an EEG study of essay-writing tasks across three groups (LLM, search engine, brain-only), reported reduced inter-regional brain connectivity in the LLM group — up to 55% in some bands — consistent with reduced internal generation of content. Their “cognitive debt” argument is that early reliance on LLMs impairs later unaided performance, and that participants who had used LLMs reproduced narrower idea sets and reported weaker ownership of their texts.

In 2024–2025, several actors — Sakana AI, Autoscience Institute, FutureHouse (now spinning out a commercial subsidiary, Edison Scientific), and US national-laboratory teams at Argonne, Oak Ridge and Lawrence Berkeley — announced systems claiming to automate substantial parts of the scientific research workflow: literature review, hypothesis generation, experimental design, manuscript authoring (Beel et al., 2025; Lu et al., 2024). Independent evaluations have found the manuscripts produced to be of uneven quality — Beel, Kan and Baumgart (2025) compare them to “an unmotivated undergraduate rushing to meet a deadline” — but the trend is toward systems intended to operate with increasing autonomy.

The frontier between “automatable” and “resistant” is not stable, and is actively contested within the AI research community itself. Apple’s 2025 Illusion of Thinking study reported that frontier reasoning models exhibit a complete accuracy collapse beyond a threshold of problem complexity, with the models reducing their own reasoning effort precisely as the task gets harder (Shojaee et al., 2025). Replications and counter-replications followed within weeks, and the resulting debate is itself an instance of the contestation: Dellibarda Varela et al. (2025) re-ran two of the original benchmarks and concluded that the picture is more nuanced than either critics or defenders of the original paper had allowed. Independent evaluations of the most ambitious end-to-end “AI scientist” pipelines have similarly found their outputs uneven relative to their billing (Beel et al., 2025). Some boundaries will consolidate; others will move. The chapter takes no position on which. The political-economic and cultural-symbolic consequences traced below do not depend on settling the technical question.

The distributional point is more important than the boundary question. Even where automation works well, what it absorbs is unevenly distributed across domains, regions and class positions. The tasks most densely automated in 2026 are precisely those through which intellectual formation — the movement from novice to competent to expert — has historically taken place: writing first drafts, summarising in one’s own words, working through citations, drafting explanations. That has consequences, taken up below, that no descriptive boundary line can settle.

6.2 The shifting frontier and the renegotiation of cultural value

A complication of the period 2024–2026 has been that public expectations about what can be automated have moved faster than the technical reality. Several firms have declared the goal of end-to-end “AI scientists” capable of formulating hypotheses, running experiments, and producing publishable manuscripts with limited human involvement, restricted at most to a validation phase (Beel et al., 2025; Lu et al., 2024). Whether these systems live up to their billing is a separate question; the point here is that the claim is on the table, made by serious actors with serious capital, and absorbed by educational institutions, funding bodies and the public under their reputational halo.

This claim — and the broader claim of which it is a part, that generative AI is now extending from prose into music, image, code, mathematics, scientific discovery, and creative production generally — forces a question that the cultural commentary has mostly avoided. If the cognitive activities that have historically counted as marks of a sophisticated, desirable, interesting human mind are, one by one, being absorbed into machine work, what remains?

The Cartner-Morley piece on “intellectualism pop” — discussed in Chapter 4 — caught the moment in which “smart is sexy” became a cultural slogan. But its anchoring is now in question. Being smart was attractive, in part, because cognitive labour was scarce, costly and unevenly distributed: a person’s visible thoughtfulness signalled the investment of time and the shaping of an inner life. Once that labour can be cheaply externalised, the signal has to find a different referent.

Three responses are now visible. The first is the line attended to in Chapter 4: the displacement of intellect into signs of intellect — the highlighted novel, the bookshelf photographed in soft focus, the title-bag — read here, in the AI period, as the production of an aesthetic of unautomated thought. The signs are valuable not because they correspond to actual cognitive practice but because they correspond to its image at a moment when its practice is increasingly outsourced.

The second response is a renewed cultural premium on those forms of cognitive labour that cannot — at least not yet — be automated: embodied creative judgement, original empirical investigation, sustained interpretation grounded in disciplinary memory, pedagogical relationship under conditions of mutual recognition. These are the cognitive capacities that survive automation by definition, because their automation would constitute a different order of technological change. The empirical-economic literature converges on this prediction from a different vantage: in customer-service settings, the introduction of generative-AI assistance compresses skill differentials by raising the productivity of less-experienced workers while leaving experienced workers’ performance largely unchanged (Brynjolfsson et al., 2025), a pattern consistent with what Acemoglu’s task-based model identifies as automation’s tendency to widen the gap between capital and labour income while displacing routine cognitive tasks (Acemoglu, 2025). The implication, read against the cultural argument here, is that the cognitive capacities resistant to automation become not only scarce in supply but increasingly unevenly remunerated. They will plausibly become the genuinely scarce cognitive goods of the coming decade.
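The mechanism can be made explicit with a minimal formal sketch, given here in the canonical Cobb-Douglas case of the task-based framework in the spirit of Acemoglu and Restrepo's well-known formulation; the notation is illustrative and is not the exact specification of Acemoglu (2025):

\[
Y = \exp\!\left( \int_{N-1}^{N} \ln y(i)\, di \right),
\qquad
y(i) =
\begin{cases}
k(i), & N-1 \le i \le I \quad \text{(automated)} \\
\gamma(i)\, l(i), & I < i \le N \quad \text{(performed by labour)}
\end{cases}
\]

Output \(Y\) aggregates a unit continuum of tasks; tasks up to the automation threshold \(I\) are performed by capital \(k(i)\), those above it by labour \(l(i)\) with task productivity \(\gamma(i)\). Because Cobb-Douglas aggregation gives every task an equal expenditure share, the labour share of income is simply the measure of tasks still performed by labour,

\[
s_L = \frac{wL}{Y} = N - I,
\]

so each advance of the automation threshold \(I\) transfers income share from labour to capital one-for-one unless new labour-intensive tasks (a rising \(N\)) appear at a comparable pace. On this reading, the compression result of Brynjolfsson et al. (2025) describes movement within the shrinking labour segment: AI assistance raises \(\gamma(i)\) most for the least-experienced workers, narrowing differentials inside a segment whose aggregate income share is itself contracting.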

The third response — the harder one to think through — is a quieter erosion. If generative AI can produce passable music, passable poetry, passable scientific writing, passable creative prose, then the categorical status of a human mind as a producer of those things is altered. Not because any individual practitioner has been replaced — most have not — but because the cultural framing of what their work means is open to renegotiation. Whether this renegotiation produces a robust new appreciation of what only human minds can do, or a quieter slide toward the devaluation of intellectual labour as such, is one of the open questions of the present moment. The cultural register and the economic register are not independent: a de-eroticisation of intellectual labour — its loss of status as a marker of sophistication and desirability — runs alongside its precarisation as a labour input, despite the sustained formation that competent intellectual work has always required. Susskind and Susskind (2022) documented the early form of this convergence within the established professions; subsequent labour-market evidence on generative-AI exposure shows the pattern extending into knowledge work more broadly, with productivity gains accruing disproportionately to capital and to a narrow class of workers in complementary positions (Acemoglu, 2025; Brynjolfsson et al., 2025). The answer will not be technical. It will be political-economic and cultural-pedagogical, and open to deliberate shaping.

6.3 The political economy of foundation-model ownership

The firms that own the foundation models — Microsoft (with OpenAI), Google (with DeepMind), Anthropic, Meta, and a handful of challengers — sit in roughly the same oligopolistic structure as the platforms analysed in Chapter 3. The continuity is not a coincidence. The compute, data and talent required to train frontier models concentrate where capital is already concentrated; the distribution channels for AI products run through the same operating systems, app stores and cloud infrastructures that mediated the platform era. Recent quarterly results are read by analysts not as autonomous AI revenue but as AI-driven cloud revenue for the same incumbents (Bratton, 2026).

The deeper point, however, concerns the relation between these firms and the cognitive enterprise — the collective effort, by education systems, public research bodies, libraries, journalism and civic institutions, to improve human capacities for thinking, learning and public reasoning. The relation is, on the evidence, asymmetric in ways that should be named.

Some of these firms have built tools that practitioners in education and research find genuinely useful, and have funded education and research grants that have produced real benefit. The point is not to deny these contributions. The point is that the firms’ interest is by definition commercial, that their philanthropic and educational activity is marginal in scale relative to their core business, and that even their most visible philanthropic vehicles operate within constraints set by the broader corporate and political environment. A documented illustration of the period is the Gates Foundation’s halting of grants administered by a major consulting firm with ties to the Democratic Party in early 2025, in the wake of pressure from the Trump administration; the Ford Foundation, in the same period, began scrutinising grants that “could be criticised” as partisan (Levitsky et al., 2025). The Gates Foundation is not a direct corporate vehicle of Microsoft, but the wealth that funds it, and the public-relations halo it generates, are closely associated with Microsoft’s institutional reputation; analogous adjustments have been documented at corporate-linked foundations across the sector. Whatever one thinks of these decisions individually, they document the dependence of corporate philanthropy on political weather — the opposite of what an autonomous cognitive enterprise would require, and a structural fact that should temper any reading of these firms as principal agents of public-interest cognitive work.

A second asymmetry concerns the use of cognitive infrastructures the firms do not pay for. The political economy of training-data appropriation reproduces the broader pattern of unpriced extraction from the cultural commons that Zuboff (2019) named: books, articles, code, images, audio, the writings of millions of users, absorbed into training sets without compensation and on terms set unilaterally by the firms doing the absorbing. The works whose quality the models inherit were produced by the very cognitive enterprise from which the firms have remained distant.

A third asymmetry has become visible in the period 2025–2026, and deserves particular attention because it complicates any simple reading. The most prominent public conflict in the AI sector during this period was not between AI firms and democratic accountability, but between AI firms over the question of whether democratic accountability should constrain them at all. Palantir’s April 2026 manifesto — read by Coeckelbergh as “technofascism in plain sight” (Coeckelbergh, 2026) and by Varoufakis as “techfeudalism” (Varoufakis, 2026) — argued that Silicon Valley owes a “moral debt” to American hard power, that “the question is not whether AI weapons will be built but who will build them and for what purpose,” and that the “frictions of democratic process” are “a bug to be engineered away” (Karp and Zamiska, 2025). The same period saw Anthropic refuse to allow its models to be used for autonomous lethal weapons or mass surveillance, an internal letter from over five hundred Google executives and employees protesting an expanded Pentagon contract, and a Trump-administration attempt to blacklist Anthropic that ultimately failed (elDiario.es, 2026). The contestation matters because it shows that the political economy of foundation-model ownership is not monolithic: there are real disagreements within the sector about what these systems ought to be for, and democratic publics retain some leverage where they choose to exercise it.

OpenAI’s countermove.
Three weeks after the Palantir manifesto, Altman published Our Principles on the OpenAI website (Altman, 2026). The document inverts Karp and Zamiska’s vocabulary almost point by point: where Palantir frames democratic deliberation as “friction,” OpenAI’s first principle commits to ensuring that “key decisions about AI are made via democratic processes and with egalitarian principles, and not just made by AI labs”; where Palantir invokes a “moral debt” to American hard power, Altman invokes a mission to ensure “that AGI benefits all of humanity.” The contrast is real at the level of stated commitments. Whether it survives the structural fact that OpenAI shares with Palantir the same compute infrastructure, the same investor base, and the same oligopolistic market position is the open question — and the question on which democratic publics retain leverage if they choose to exercise it.

The point of recording the asymmetries is not to deny the existence of internal contestation. It is to clarify the structural situation: a small number of firms, sitting in oligopolistic positions inherited from the platform era, now own the infrastructure of an emerging knowledge-production economy. Their incentives are commercial; their accountability to publics is thin; the philanthropic and ethical activity through which they signal public-interest commitment is, in scale and in structure, secondary to the core business. That is the situation against which any serious response — pedagogical, regulatory, civic — must be designed.

The transition this completes is the one Hari traced in Stolen Focus: the platform economy of the 2010s organised attention as a resource to be captured and monetised, with documented costs to sustained reading, sleep, mental health and democratic deliberation (Hari, 2022). The AI economy of the 2020s extends the same logic to the production of knowledge itself. Where the attention economy degraded the conditions under which people think, the knowledge-production economy proposes to do their thinking for them, on terms set by the same firms, with similar opacity, and with comparable indifference to the long-term effects on the cognitive capacities of those who use the tools. The autonomous, informed citizen — the one whose responsible use of AI was supposed to be the technology’s democratic justification — is structurally disfavoured by an arrangement that profits from dependence rather than from autonomy. Véliz’s argument that privacy operates as democratic infrastructure rather than as a private preference (Véliz, 2024) generalises naturally to the cognitive case: where the conditions for unsupervised reading, reasoning and deliberation depend on infrastructures owned by firms whose business model is the harvesting of those activities, the formal liberty to exercise civic autonomy is hollowed out by the practical conditions under which it must be exercised.

6.4 Cognitive offloading and the displacement of formation

This section draws together a body of empirical work, much of it post-2024, on what happens to the cognitive practices through which intellectual formation has historically been reproduced when those practices are displaced to machines.

The starting observation comes from work that predates the AI period but has become newly relevant. Hari (2022) documented the collapse of sustained reading among American adults across the 2000s and 2010s, with the median attention span on a single task in office settings dropping to roughly three minutes; Wolf (2018) described, from a literacy-research vantage, the migration of the reading brain from linear depth to scanning and skimming under screen-based conditions. These findings established a baseline of weakened sustained-attention practices before generative AI became routinely available — that is, the cognitive infrastructure on which AI tools were grafted was already attenuated.

What the 2024–2026 evidence adds is the cognitive cost of grafting LLM use onto that already attenuated infrastructure. The MIT study by Kosmyna and colleagues (2025) is the most discussed: in an EEG comparison of essay-writers using ChatGPT, a search engine, or no tools, the LLM group showed reduced inter-regional brain connectivity (up to 55% in some bands) consistent with weaker internal generation of content; participants who had relied on the LLM in earlier sessions and then wrote without it showed intermediate connectivity patterns rather than recovering to the brain-only baseline. The authors framed this in terms of “cognitive debt”: LLM use defers cognitive effort in the short term but incurs long-term costs in retention, retrieval, and the sense of authorship over one’s own ideas. Critically, the LLM group reported weaker ownership of their texts and reproduced narrower n-gram patterns — that is, less independent ideational range.

The Microsoft–CMU study by Lee and colleagues (2025) reaches a convergent conclusion from a different methodology. Surveying knowledge workers using generative AI in daily tasks, the authors found self-reported reductions in cognitive effort across drafting, summarising and information retrieval; critical-thinking engagement varied with confidence in the AI’s output, with workers more confident in the AI engaging in less critical evaluation. Stadler, Bannert and Sailer (2024) documented the same pattern in student scientific inquiry: lower mental effort, lower depth of engagement.

A fourth strand concerns what is sometimes called “skill atrophy.” Frequent users of AI tools, in a range of recent studies, bypass deeper engagement with material in tasks like brainstorming and problem-solving, with measurable consequences for unaided performance (Kosmyna et al., 2025; Lee et al., 2025). The pattern is not universal and has important moderators — Kosmyna’s “Brain-to-LLM” group, which used the LLM after writing without tools, integrated AI suggestions more strategically and showed more cohesive neural signatures — but the direction of the effect is consistent: introducing AI assistance early, before underlying skills are consolidated, weakens the formation of those skills.

The consequences are not symmetrically distributed. The most privileged students continue to be exposed to friction — small seminars, individual supervision, pedagogical relationships that require sustained engagement, infrastructures that make AI assistance an option rather than a substitute. The least privileged students, in under-resourced schools and over-large classes, encounter intellectual work primarily through mediated and increasingly automated channels, with weaker scaffolding for the unaided practices through which AI assistance, when it comes, can be integrated rather than depended on. Hayles’s analyses of distributed cognition (2023) caution against a simple “humans become less intelligent” reading: cognition is and has always been distributed, and the question is not whether tools are used but how they are integrated into the practices through which human capabilities are formed. The empirical answer in 2026 is that, on present trajectories, integration is poor, and the cost falls disproportionately on those who can least absorb it (Holopainen et al., 2025).

A final consideration concerns a possible generational effect. The youngest cohorts now in formal education are the first to have encountered LLM-mediated cognitive work before they consolidated the practices of unaided cognitive work. Whether this constitutes a generational break — comparable to, but distinct from, the screen-mediated literacy break Wolf described — is too early to settle empirically; the longitudinal evidence is not yet available. What is available is suggestive enough that the question should be taken seriously rather than assumed away. Educators are already reporting, in qualitative terms, that students forget content more easily than a few years ago (Hogenboom, 2026); the converging evidence from neural, behavioural and survey studies (Kosmyna et al., 2025; Lee et al., 2025; Stadler et al., 2024) is consistent with that report. A precautionary posture — designing pedagogical practice as if cognitive debt were a real phenomenon, while waiting for the longitudinal evidence — is the rational response (Rivera-Novoa and Duarte Arias, 2025).

6.5 The premium on the unautomated sign

A pivot back to the analysis of Chapter 4 is now possible. If the displacement of routine cognitive labour is a real political-economic process, then the cultural premium attached to signs of unautomated cognition rises. This is, in part, the mechanism the mausoleum thesis tracked from a different angle: when a practice loses its embedded function, what circulates in its place is the sign of the practice as a luxury commodity. The handwritten marginalia, the dog-eared paperback, the longhand journal, the Substack essay flagged as “written by a human” — all of these acquire status precisely because they are difficult to counterfeit at scale, and precisely because their cultural meaning references a cognitive practice that is now distributed unevenly.

The analytical move here is that the mausoleum phenomenon analysed in Part II has two drivers, not one. Its first driver, taken up in Chapter 4, was the platform attention economy: the conversion of cognitive practices into engagement content for distribution. Its second driver, taken up here, is the AI economy: the absorption of routine cognitive labour into machine work. Both drivers reinforce one another. What circulates as a luxury sign in 2026 — the photographed bookshelf, the Madame Bovary tote — is a sign that responds simultaneously to platform extraction and to AI-mediated automation. It marks the human as the bearer of an interiority that is, by both forces, made scarce.

Whether the response to that scarcity ends in genuine re-cultivation of the practices the signs reference, or in their ongoing replacement by signs that displace them further, is again a political-economic and cultural-pedagogical question, not a technical one. The next section turns to what a serious response would require.

6.6 Conditions for a genuine reconstruction

A reconstructive programme adequate to the situation diagnosed above would require simultaneous work at three levels. The chapter records what each level involves and where, on the evidence available in 2026, work is — and is not — happening. The fuller elaboration of these conditions is the work of Part III; what follows is the orienting sketch.

At the infrastructural level, the question is whether public provision can keep pace with private oligopoly. The historical analogue is the public library: a piece of public infrastructure that, in the long aftermath of its mid-twentieth-century consolidation, made the products of an emerging knowledge economy available on terms that did not depend on the capacity to pay. A serious AI policy in our period would build the equivalent: public foundation-model access (with pricing transparent and use-cases audited), public computational capacity, public training-data provisioning that respects the rights of contributors, and public evaluation regimes that the firms must meet rather than self-report against. None of this is currently being delivered at meaningful scale in any major jurisdiction. The European Union’s AI Act addresses some risks of deployment but not the underlying distributional question, and independent assessments have noted that its compliance-cost structure may in fact entrench the position of firms with the regulatory infrastructure to absorb it (Bommasani et al., 2024; Kretschmer et al., 2023). The United Kingdom has, on the evidence of 2025–2026, oriented its policy toward attracting investment from the same firms whose oligopolistic position is the problem: the AI Opportunities Action Plan of January 2025 (UK Government, Department for Science, Innovation and Technology and Clifford, 2025) was followed by the September 2025 Tech Prosperity Deal with the United States (Browne, 2025), whose principal effect to date has been to channel multi-billion-dollar commitments from Microsoft, Nvidia, Google and OpenAI into UK data-centre infrastructure. The United States has actively dismantled what regulatory architecture existed: Executive Order 14148 of 20 January 2025 rescinded the Biden administration’s AI executive order; Executive Order 14179 of 23 January 2025 directed federal agencies to remove “barriers” to AI development (The White House, 2025a); the December 2025 Executive Order on the National Policy Framework for Artificial Intelligence established a litigation task force to challenge state AI laws considered inconsistent with federal policy (Christian and Waltzman, 2025; The White House, 2025b).

At the pedagogical level, the question is whether educational institutions can take the differential distribution of cognitive friction as a primary problem and address it. The empirical evidence summarised in the previous section is consistent on one point: AI assistance integrates well into the cognitive lives of those who have already consolidated unaided cognitive practices, and integrates badly into the cognitive lives of those who have not (Kosmyna et al., 2025; Lee et al., 2025; Stadler et al., 2024). The pedagogical implication is that AI exposure should be delayed, in formation, until the underlying practices are in place — not refused, but sequenced. Implementing such a sequencing requires resources that are themselves unevenly distributed, and the work of the next chapter is in part to consider what defending the conditions for such pedagogy looks like under a hostile political economy.

At the cultural level, the question is whether the practices of authentic cognitive labour can be defended as practices, rather than relegated to the order of luxury signs. The mausoleum phenomenon is the symptom, not the disease; the disease is the collapse of the conditions under which sustained reading, careful writing, original investigation and pedagogical relationship can be ordinary rather than aspirational. There are concrete infrastructures that support those conditions and that can be defended against erosion: independent bookshops and reading communities, public broadcasters with genuine commissioning authority, peer-reviewed journals not captured by oligopolistic publishers, open-access scholarship infrastructures, public science-communication platforms, libraries (again), schools, and — the test case for Part III — universities. None of these is flourishing in 2026; most are under sustained pressure. But none is irrecoverable, and several are recoverable on timescales shorter than the political horizon usually contemplated for such institutions.

Recent evidence on recoverable cultural infrastructures.
Independent bookshops show fragile resilience: 45 new openings in 2024 despite sectoral contraction, evidence that curated reading communities remain viable (Booksellers Association, 2025).
Public-interest publishing and open-access infrastructures face financial strain but exhibit active reform momentum, with strong author trust and expanding institutional memberships (PeerJ Author Survey, 2025).
The wider knowledge ecosystem — universities, libraries, public-facing cultural institutions — is under sustained pressure yet structurally recoverable, with sector reports emphasising turbulence rather than terminal decline (Publishing Futures Report, 2025).


Two conditions for grounded hope can be named. The first is that the recent evidence on cognitive offloading and on the political economy of foundation models has begun to circulate beyond the specialist literature into educational practice, into journalism, and into the explicit awareness of users — a precondition for any democratic response. The second is that the contestation within the AI sector itself, visible in the Anthropic–Palantir conflict and in the internal letters at Google and across other firms, documents that the trajectory is not closed. Where engineers, researchers and citizens have organised around the question of what these systems are for, they have on occasion changed outcomes. They have not, on the available evidence, reversed the structural direction; but they have demonstrated that the structural direction is contested, and contestation is the precondition for change.


If Parts I and II have, between them, identified three structural forces — authoritarian assault, platform-economic capture, and AI-mediated automation — that are eroding the conditions of public reason, Part III asks what reconstruction would require. The university, as the institution where formation, research and democratic deliberation have historically intersected, is the test case.