The current discourse around generative AI is steeped in speculation: how effective can large language models get? How will they affect employment and education? And are they leading to artificial general intelligence (AGI)? But beyond the discourse, the models themselves are built on speculation: drawing from a giant dataset of natural language in text, they predict the next word in a sequence. Earlier approaches to natural language generation (such as Markov models) also predicted the next word, but recent large language models (LLMs) combine more complicated algorithms, attention mechanisms, and larger datasets to conceal their predictive nature and produce far more coherent and plausible natural language. Yet AI writing detectors operate on the idea that AI writing is more predictable than human writing: humans tend to write with greater “burstiness” and “perplexity.”
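To make the prediction idea concrete, here is a minimal sketch (my illustration, not material from the talk) of a bigram Markov model: it learns which word tends to follow which from a toy corpus, samples a next word from those counts, and scores a sentence's perplexity, the predictability measure that detectors use. The corpus and all function names are illustrative assumptions.

```python
# Illustrative sketch: bigram Markov next-word prediction and a toy
# perplexity score. The corpus here is a made-up example.
import math
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count bigram transitions: how often each word follows another.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word):
    """Sample the next word in proportion to observed bigram counts."""
    counts = transitions[word]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

def perplexity(sequence):
    """Perplexity = exp of the average negative log-probability.
    Lower perplexity means the text is more predictable to the model;
    detectors treat low perplexity as a signal of machine authorship."""
    log_prob = 0.0
    for prev, nxt in zip(sequence, sequence[1:]):
        counts = transitions[prev]
        # Tiny floor probability for bigrams the model never saw.
        p = counts[nxt] / sum(counts.values()) if counts[nxt] else 1e-6
        log_prob += math.log(p)
    return math.exp(-log_prob / (len(sequence) - 1))

print(predict_next("the"))                             # e.g. "cat" or "mat"
print(perplexity("the cat sat on the mat".split()))    # low: seen in training
print(perplexity("the cat sat on the moon".split()))   # high: unseen bigram
```

An LLM does the same thing in spirit, predicting the next token from context, but with a learned neural model over a vastly larger corpus rather than raw counts, which is why its output reads as coherent prose rather than word salad.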
With the contrast between human and AI writing as a framing device, this talk traces the ways that prediction has operated in generative AI and other historical attempts to automate writing. Attendees will come away with an understanding of: the current LLMs driving generative AI writing and how they differ from earlier models; how AI models do and don’t replicate human writing; and the practical effects of generative AI in writing and pedagogy.
Annette Vee is Associate Professor of English and Director of the Composition Program at the University of Pittsburgh, where she teaches writing and digital composition. She is the author of Coding Literacy (MIT Press, 2017) and has published on computer programming, digital literacy, blockchain technologies, intellectual property, and AI-based text generators.

