Since the days of the ancient Greek historian Herodotus, credited as the author of the first great narrative history, the field of history has been a distinctly human craft. The rise of the GPT-2 language model, however, raises a new and exciting question: can a transformer-based language model write history? The answer, as my findings demonstrate, is complicated. History is both subjective and constantly changing. There are multiple schools of thought — such as cultural history, social history, environmental history, and economic history — each with its own, often conflicting, methodology for how history should be written and understood. This presents a significant challenge to a so-called “GPT-2 Historian,” as the model would need both a thorough understanding of the past and the ability to synthesize that understanding into a coherent flow of thought. Without these, a GPT-2 Historian risks contradictions and logical fallacies in its historical writing. However, my previous experience with GPT-2 has shown that it is capable of replicating elements of writing style and of occasionally generating a decently coherent flow of thought. With this in mind, I explore GPT-2’s potential to produce original historical writing after being fine-tuned on three full-length history dissertations discussing different aspects of the life and times of the 26th President of the United States, Theodore Roosevelt. Given these dissertations, how well can a “GPT-2 Historian” write its own history of the Rough Rider?
Holt, Grant, “The GPT-2 Historian: Can a language model write history” (2021). IPHS 300: Artificial Intelligence for the Humanities: Text, Image, and Sound. Paper 27.
This work is licensed under a Creative Commons Attribution 4.0 License.