Last month I had the rare pleasure of a dialogue with mathematician and computer scientist Guillaume Lajoie as part of the opening plenary for the annual meeting of the Association of University Presses (AUP), my professional tribe.1
As a CIFAR Chair, Guillaume Lajoie is a key player in Canada’s national AI research programme, the largest such programme run by the public sector. The session was expertly marshalled by David Aycock of Baylor University Press. Feedback was good; it looks like we hit the right note, more a tour of the horizon than a deep dive into particular issues. It was a real pleasure to be in dialogue with a researcher, and we need more such sharing sessions! Guillaume is a wonderful communicator, and I enjoyed surveying some of his papers in preparation (including those that veer into philosophy).2
Of course we have come to different conclusions on some points in the AI debate, but it was great to talk it through, and to field such thoughtful questions from our audience. We ran out of time to discuss one of the questions David had prepared, so I thought I’d write up this edition from my notes on that issue: the impact of AI on the public sphere.
The impact of AI on the public sphere
Publishers are professionally inclined towards ideas of the public and the public sphere. Those of us in university press publishing have devoted our careers to the logic of the public sphere. The sad trajectory of Twitter shows the distance between reality and Jürgen Habermas’s ideal notion of the public sphere: a single space where individuals could form views in discussion, unconstrained by the power and sanction of the state, and not shaped entirely by economic interests. But at the same time, Twitter retains a hold because it partakes of the power of that ideal.
I find the thinking of Michael Warner (Publics and Counterpublics) on multiple publics perhaps more attuned to a world of highly diverse ideas of the good life, pointing to the necessity and function of spheres of choice and dialogue in modern society.
“a public can only produce a sense of belonging and activity if it is self-organized through discourse rather than through an external framework.” - Michael Warner
What lovely praise of the bookshop! But enough epistemological throat-clearing!
What will be the impact of AI on the public sphere?
The public sphere is under serious threat from many directions, and I don’t think AI is itself the main one. But its impact may bring matters to a head. Before we consider those possible dynamics, it’s worth thinking about AI as speech. What kind of speech is it?
Emily Bender will tell you – in fact she told us all in the Stochastic 🦜🦜🦜s paper3 — that speech formally requires communicative intent, and since machines don’t have intent, they can’t produce speech, they are just stochastic 🦜🦜🦜s. Speech coming from LLMs is a correlation of words in output to words in input. There is no causation, no motor, no speaker, no there there. Fooled by statistics — again!
Looked at from a slightly different angle, AI-generated speech is simply bullshit, in Harry G. Frankfurt’s sense of the word. “The bullshitter doesn't care if what they say is true or false, but cares only whether the listener is persuaded.” Or to modify that slightly for GenAI, the model only cares to make its output plausible. LLMs as mass engines of plausibility.
Much as I go along with these elegant philosophical defences of human business, the limits of these approaches were first made clear to me by a brutal Scott Aaronson blog post from March 2023, written contra Chomsky:
“In a certain sense you’re right. The language models now being adopted by millions of programmers don’t write working code; they only seem-to-write-working-code. They’re not, unfortunately, already doing millions of students’ homework for them; they’re only seeming-to-do-the-homework. Even if in a few years they help me and my colleagues do our research, they won’t _actually_ be helping, but only seeming-to-help. They won’t change civilization; they’ll only seem-to-change-it.”
Another way of putting Aaronson’s point is to remember the ideas of the contextualists — truth claims depend on context, and we will have different assessments of truth claims depending on the context, the consequences, the stakes. It seems to me pretty clear that we will come to accept truth claims from AI generated speech in more and more circumstances.
What kind of speech is LLM output?
Yes, this is dangerous. But I don’t see the threat coming from the AI-nature of the speech. The threat is that this looks like speech coming from a speaker, but in fact it is corporate speech, not individual speech. (I took this point from the wonderful Literary Theory for Robots by Dennis Yi Tenen.) The output of an LLM is a corporate product, not just in the sense of having been created to make a profit, but also in that a complex assemblage of people, finance capital, machines, interests, and a large corpus of people’s content is involved in creating that speech, confusing our natural tendency to interpret speech as coming from a person.
Plenty of people are making noise about the difference between AI speech and human speech, but we should probably worry more about the difference between individuals working in a public sphere and the speech of corporations within that space. The Citizens United decision of the US Supreme Court seems in retrospect to have been a disastrous confusion of individual freedom of speech with the rules that should constrain corporate ability to interfere in elections. In that case, capital seeks the freedom of individuals in a way that erodes and degrades the precious public sphere. AI companies seeking to avoid liability for the output of their models wish to push all the responsibility for their corporate speech product onto the individual who calls it forth at the last stage. GenAI is super-powering corporate speech, and our ability to form useful and functioning publics will suffer even further as a result, if we don’t get much smarter on this much faster.
I think we need to look very carefully at the process of speech and discourse in our media ecologies. It’s the politics and the people, not the data centres, concerning as the energy question might be.4
I’m very grateful to AUP Director Peter Berkery; outgoing AUP 2023–24 Chair Jane Bunker, Director of Cornell University Press; and Michelle Sybert of Notre Dame UP and David Aycock, Deputy Director of Baylor University Press, co-chairs of the AUP annual meeting programming committee. David also ran our session flawlessly, and formulated some great questions.
My favourite Lajoie paper was “Sources of Richness and Ineffability for Phenomenally Conscious States” (https://arxiv.org/abs/2302.06403), which proposes an information-systems account of the ineffability of consciousness. From the abstract: “In our framework, the richness of conscious experience corresponds to the amount of information in a conscious state and ineffability corresponds to the amount of information lost at different stages of processing. We describe how attractor dynamics in working memory would induce impoverished recollections of our original experiences, how the discrete symbolic nature of language is insufficient for describing the rich and high-dimensional structure of experiences, and how similarity in the cognitive function of two individuals relates to improved communicability of their experiences to each other.”
Of course you’ve read this! It foresees so much. Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜”, Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–23. Virtual Event, Canada: ACM, 2021. https://doi.org/10.1145/3442188.3445922
Sorry, I’m loving the footnotes this round. I’m not sure how much to worry about AI energy use, but I certainly don’t buy the argument that it’s all OK because a global 6% increase in energy use due to AI expansion of data centre capacity will be offset by a 10% increase in energy efficiency due to AI discoveries…