Large Language Models (AI)

Artificial Intelligence, or at least what we casually refer to as AI, is with us now and it’s not going anywhere, so our approach to learning and coursework must adapt.

What we call ‘AI’ is, more specifically, a Large Language Model, or LLM: essentially, these systems are trained on vast corpora of human-produced text to identify statistical patterns in language. LLMs can generate remarkably coherent responses to queries, though they don’t truly ‘understand’ content the way humans do. This potent technology is transforming every field, and history is no exception.

Tasks such as assembling a literature review, writing an analytical essay, or deriving insights from a primary source are fundamental activities in most collegiate history classes. Because these tasks all revolve around textual corpora, LLMs are quite good at them, which means we as historians now have a powerful new tool at our disposal, one you will make use of in this course. Of course, powerful new tools bring dangers and drawbacks along with possibilities.

Philosophical Approach

Especially in the short term, the scale of change brought by LLMs is threatening, and it creates new challenges for intellectual honesty and course policy: How do we differentiate between your intellectual labor and that of the LLM? What is the point of artificially separating you from a tool that you will have in the “real world”?

These questions have forced humanists and social scientists (historians are both) to reconsider what lies at the core of the field: critically assessing what constitutes historical evidence, and making sense of that evidence through analytical reasoning. AI shifts the focus away from the products of these pursuits and toward research skills and habits of mind: now that a machine can competently produce a grammatically flawless analytical essay grounded in primary sources, the ability to critically evaluate such products matters more than ever.

Ethics

As novel as this technology may seem, many of its ethical challenges are all too familiar. Long before AI, one could submit a paper written by someone else, or neglect to credit an idea with a proper citation. In many (perhaps most) cases, we can apply the same ethical guidelines to the use of AI that we have always used. Foremost among them is transparency: when in doubt, err on the side of acknowledging the methods used to complete an assignment.

LLM Policies

This course embraces the “broad use” level of AI policy specified by the Center for Teaching and Learning:

The use of Generative AI tools, including ChatGPT, is encouraged/permitted in this course for students who wish to use them. You may choose to use AI tools to help brainstorm assignments or projects or to revise existing work you have written. However, to adhere to scholarly values, students must cite any AI-generated material that informed their work (this includes in-text citations and/or use of quotations, and in your reference list). Using an AI tool to generate content without proper attribution qualifies as academic dishonesty.

In practice, however, these guidelines are more difficult to follow than they may at first seem, because there is such a broad spectrum of AI usage. What if you write your own essay, but ask an LLM to improve the formatting and check it for sentence clarity and spelling errors? Does that require acknowledgement? What if you ask the LLM to identify the “most important” sections of a primary source?

There are no easy answers to these questions, and expectations will vary widely between courses. In this class, I take it for granted that you will use LLMs to assist with tasks such as formatting, grammatical and syntactical accuracy, and even basic brainstorming: unless an assignment specifically requests it, you do not need to offer an explicit acknowledgment. The questions you should be asking are: Did the computer shape my conceptual understanding of this source or scholarly work? Is my creative output inspired in a fundamental way by the AI? In a sense, these quandaries are not new: if you got your idea from somewhere else, you need to cite your source, and there have always been edge cases. When in doubt, err on the side of transparency.

Your instructor is bound by these same principles when preparing course material and offering feedback. I may use LLM assistance to help format slides and to reconfigure the presentation of my feedback. However, I will never feed any of your material into an LLM that has not been approved for broad use by Pitt (this avoids privacy concerns); the content and phrasing of all feedback will always come directly from me; and I will never use an LLM to determine, or even suggest, a grade.

LLMs as Assistants for Learning History

The following are some legitimate use cases (and words of caution) for AI in the field of history:

Updated on September 18, 2025