By Thomas Deans, Noah Praver, and Alexander Solod, University of Connecticut
Even those far from college writing programs are talking more about writing these days, and we all know why: AI.
Some disciplinary associations, including CCCC, MLA, and AWAC, have formed task forces on AI, posted working papers, or issued position statements (nothing yet from IWCA). Some writing centers have started creating materials to guide tutors. And all of us will be mulling over how to handle AI as we plan for the coming year.
Throughout spring 2023, a faculty writing center director (Tom), a peer tutor and senior philosophy/political science major (Noah), and a senior computer science major with lots of AI experience (Alex) met weekly to keep abreast of emerging developments in large language models (LLMs). We discussed the affordances and constraints of such natural language generation tools for learning in higher education and explored how tutors might employ AI in the writing center.
Here we will share several scenarios that not only model practical uses of AI in tutorials but also, we hope, provoke discussion. But before getting to those we’ll provide a little institutional context and some basics on LLMs.
Since the introduction of ChatGPT in November 2022, there has been an avalanche of commentary about the consequences of LLMs for writing and the teaching of writing. Waves of scholarship in academic journals are sure to follow (the spring 2023 issue of Composition Studies, for example, is devoted to thinking through AI).
To assess how students were actually using AI for writing at our large public university, we conducted a brief survey during February-April of 2023. Results showed that about 8% of respondents (254 total) reported using AI regularly and 12% occasionally; about 30% had tried it once or twice, and about half had never tried it. When we asked participants to look ahead to the near future, more than 60% reported that they were somewhat or extremely unlikely to use any AI tools during that academic year. Still, more than 70% said they were interested in learning more about how to use AI for academic coursework.
We’re mindful that student behavior may change significantly by fall 2023, especially as AI tools migrate into word processing applications. And soon we will see more advanced iterations of LLMs, including GPT-4, which some predict will usher in a new set of capabilities. Still, we see no need to panic. We’re in a transitional stage and have opportunities to adjust and experiment, even to educate rising generations about how to use AI writing tools purposefully, creatively, and ethically.
Under the Hood of Large Language Models
To grasp the capabilities and constraints of natural language generation applications, we need to understand how they work. LLMs derive their prowess from the transformer, a deep learning architecture that allows models to interpret language within its context. A basic transformer model follows four steps:
- Breaks text down into small pieces called tokens
- Encodes the tokens from text to numbers
- Uses learned probabilities to predict the next token in the sequence
- Decodes tokens back into text
Trained on extensive datasets that include Wikipedia, news articles, internet forums, novels, academic papers, and more, the model learns the probabilities for which words follow other words.
It is essential to know that the model’s use of language is purely probabilistic. It has no understanding of the material. All it is doing is selecting the next best piece of information based on learned statistical probabilities.
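To make the four steps above concrete, here is a toy sketch in Python. This is not a real LLM and none of these probabilities are learned from data; the vocabulary and probability table are fabricated purely to show the encode–predict–decode loop in miniature.

```python
# A toy illustration of the four transformer steps described above.
# NOT a real language model: the vocabulary and "learned" probabilities
# below are hand-made, just to show the mechanics in miniature.

# Steps 1-2: break text into tokens and encode them as numbers
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
inverse_vocab = {i: w for w, i in vocab.items()}

def encode(text):
    """Tokenize on whitespace and map each token to its integer id."""
    return [vocab[word] for word in text.split()]

# Step 3: a fabricated probability table mapping the last token id
# to a distribution over possible next token ids.
next_token_probs = {
    0: {1: 0.6, 4: 0.4},   # after "the": probably "cat", maybe "mat"
    1: {2: 0.9, 3: 0.1},   # after "cat": probably "sat"
    2: {3: 1.0},           # after "sat": "on"
    3: {0: 1.0},           # after "on": "the"
}

def predict_next(token_ids):
    """Pick the most probable next token given the last token seen."""
    probs = next_token_probs[token_ids[-1]]
    return max(probs, key=probs.get)

# Step 4: decode token ids back into text
def decode(token_ids):
    return " ".join(inverse_vocab[i] for i in token_ids)

def generate(prompt, n_tokens):
    ids = encode(prompt)
    for _ in range(n_tokens):
        ids.append(predict_next(ids))
    return decode(ids)

print(generate("the cat", 3))  # -> "the cat sat on the"
```

Notice that the model "understands" nothing here: it simply looks up which token is statistically most likely to come next, which is the same basic move a real LLM makes, at vastly greater scale.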
While this process does a great job at generating believable and often quite impressive text, it also brings two significant limitations: hallucinations and a lack of originality.
LLMs provide quick, coherent answers to our queries, but they often “hallucinate”–that is, generate false information (but state it confidently and deploy it in ways that sound good on the surface). Hallucinations may include citing non-existent scientific theories, making up sources, misrepresenting terms, attributing works to the wrong authors, inaccurately summarizing plots, and more. While hallucinations can be reduced by fine-tuning the model with a specialized dataset and creating better prompts, they’ll still happen. In writing centers we should teach writers that they need to scrutinize everything that tools such as ChatGPT produce.
The problem with originality is a bit more complex. Even though each LLM output is “original” in the sense that it is unique (if you try the very same prompt more than once, each response will be different), LLMs are bad at creating truly original content. They hew toward what is most probable at both the micro-level of word choice and macro-level of echoing what is in the training data. So if you ask an LLM to generate a fantasy story, it defaults to archetypal high-fantasy reminiscent of The Lord of the Rings, a canonical hero’s journey that includes magic, dragons, and elves. For academic work, responses trend toward what is typical rather than original, albeit in a smart-sounding academic register, delivered in error-free prose.
While there is more we could say about the back-end mechanics of LLMs, we hope that this quick overview demystifies their basic operations. We now want to pivot to the user end and share pragmatic strategies for generating more constructive human/machine hybrid writing.
Getting More Out of LLMs Through Prompt Engineering
Prompt engineering may not be a familiar term to the casual ChatGPT user, but it is to AI specialists. It involves strategically designing the input prompt, often in multiple steps, to shape the output that the LLM generates. Even small additions and adjustments to prompts can markedly improve what you get out of ChatGPT.
Tutors should understand that while interfaces like ChatGPT are designed to work intuitively, they produce much better results if users adopt a more deliberate workflow, one that can be broken down into four steps:
- Give the model an identity
- Be as specific as possible in your request
- Guide the model through every step of the process
- Refine your results
Most who try out these moves even a few times soon find themselves habitually employing at least those first two steps of prompt engineering.
Prompt engineering is somewhat akin to what tutors do at the beginning of a session: ask the writer about the assignment, audience, and purpose so they can narrow the context and deliver relevant advice. Just as many new tutors are often not yet aware of how important the first five minutes are to setting up a productive direction, casual AI users aren’t aware of the value of prompt engineering.
First, give the model an identity. Since LLMs are trained on vast amounts of data, there are many roles the model can take. For instance, if you want it to brainstorm ideas for an essay, have it pretend to be an expert in whatever field you’re writing about. In some of the scenarios below Noah uses a simple mode of prompt engineering by first asking ChatGPT, “Play the role of a sociology professor” before entering a query related to a sociology paper. You could also start prompts with “Act as if you are a…” and then articulate your question or task. By assigning an identity to the LLM, you sharpen context and steer its output in the direction you want.
Second, make your request specific. LLMs cannot guess your intentions, so the more you leave for the machine to interpret, the worse results may be. State what you want the output to look like (its length, tone, format, genre), its audience, and anything you want emphasized.
Finally, you may need to guide the model and refine your results. If the initial output isn’t to your liking, tell the model what’s wrong. Don’t expect it to produce an optimal answer on the first try. Instead, conceptualize the LLM as an (imperfect) third collaborator in the session and presume that in most complex situations, getting the most out of LLMs will involve more than a one-question/one-answer exchange; it will be an iterative and interactive process, much like a mature composing process.
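For readers who use LLMs programmatically, the four-step workflow maps neatly onto the multi-turn message format that chat-style LLM APIs (such as OpenAI’s) accept. The sketch below only builds the conversation structure; it calls no real model, and the role name, task, and refinement are invented examples.

```python
# A sketch of the four-step prompt-engineering workflow as the kind of
# message list chat-style LLM APIs accept. No model is called here; the
# example role, task, and refinement are hypothetical.

def build_prompt_sequence(role, task, constraints, refinement=None):
    """Assemble a conversation that (1) gives the model an identity,
    (2) makes the request specific, and (3-4) guides and refines
    across follow-up turns."""
    messages = [
        # Step 1: give the model an identity
        {"role": "system", "content": f"Play the role of {role}."},
        # Step 2: be specific -- state the task, audience, length, format
        {"role": "user", "content": f"{task} Constraints: {constraints}"},
    ]
    # Steps 3-4: after reviewing the first output (the model's reply
    # would appear here in a real exchange), guide and refine
    if refinement:
        messages.append({"role": "user", "content": refinement})
    return messages

messages = build_prompt_sequence(
    role="a sociology professor",
    task="Suggest three thesis statements for a paper on tea and culture.",
    constraints="one sentence each, academic register, for an intro course.",
    refinement="Option 2 is too broad; narrow it to tea rituals in Japan.",
)
```

The point of the structure is pedagogical: the system message does the identity-setting, the first user message does the specifying, and the later turns do the guiding and refining, just as a tutor would across a session.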
We think that it’s important for tutors to not only understand the basics of more effective LLM workflows but also teach tutees about these four steps.
As you’ll see in the scenarios below, Noah and his colleagues foregrounded consent when raising the prospect of using ChatGPT in a tutorial. They were also careful to ask if the writer’s instructor had a policy on AI; and if there was one, they respected those bounds.
We realize that alongside concerns about academic integrity and informed consent are several other ethical quandaries that may be less obvious but are no less important: how content put into LLMs is used by big tech companies–without consent or compensation–as training data; how generative AI models that scrape such data into their models often replicate the status quo or amplify inequalities more than redress them; and how the AI detectors that some institutions are adopting get things wrong too often, can be biased against non-native writers of English, and could lead to a culture of surveillance rather than growth.
In our writing center at UConn, we haven’t taken up the thorny issues of intellectual property or inequality. However, for the past two years Tom has been involved in a National Science Foundation grant focused on supporting neurodiverse graduate writers, which includes a strand exploring how students with ADHD and dyslexia might leverage LLMs, so as a staff we’ve been considering how to use AI to supply writers with one more option in their “multimedia toolkit.” Even back in August 2022, months before ChatGPT was released, our whole staff was reading and reflecting on some articles about GPT-3. Currently we’re advocating against our institution subscribing to detection services as the default way to deal with AI, and some members of the leadership team are organizing faculty development sessions on designing assignments and teaching writing across the disciplines with AI in mind.
Scenarios in the Center
During the latter part of the spring 2023 semester, Noah, while an undergraduate tutor, started to selectively introduce AI as an option during sessions when he thought it could address something germane to the writer’s concerns and complement the usual give-and-take.
The following seven snippets of actual sessions show Noah and his colleagues using ChatGPT in tandem with their usual strategies for handling situations that come up frequently in writing centers. These scenarios include:
- Extending the Writer’s Text
- Unblocking a Writer
- Generating a Thesis and Defining Terms
- Rephrasing and Supporting a Thesis Statement
- Shortening and Sharpening a Thesis Statement
- Editing Wordy Sentences Quickly
- Generating Headings and Titles
In some cases the tutors might have been more strategic with their prompt engineering or more iterative in their use of ChatGPT, but we think real-life examples by tutors who are themselves AI novices may be more instructive than idealized or expert-led models.
Extending the Writer’s Text
Noah was working with a writer on an essay about the AIDS Quilt Memorial in San Francisco. They talked about the assignment, read the current draft, and discussed potential ways to improve it, which included expanding one of her paragraphs. They concluded that another sentence or two would really round it out, but the writer wasn’t certain what to add. Noah queried the writer about trying ChatGPT, and they pasted the paragraph and told it to come up with three options for how to write the next two sentences. That led to further productive conversation and progress.
Later in the session, after talking about the conclusion paragraph, they decided to try ChatGPT again. They pasted the whole essay into ChatGPT and told it to come up with three options for the conclusion paragraph. They reviewed those, and none were perfect, but the writer took bits and pieces from each version, along with her own contributions, to craft a conclusion that she liked.
Unblocking a Writer
A writer came in to work with Noah on an essay about animal nutrition. She needed 2-3 pages on this topic, but only had two sentences written. She needed to generate more text.
Noah worked with the writer to create an outline that addressed every aspect of the prompt. However, she still kept staring at the blinking cursor on her computer screen, asking, “But what do I write now?” He asked if she would be comfortable using ChatGPT to help generate some ideas about what to write next. Eager for anything that might help, she agreed.
After starting with simple prompt engineering (“Can you play the role of an animal science researcher?”), they copied what she had already written into ChatGPT and asked, “Come up with three options for how I might continue writing the next two sentences.” With the three AI-generated options, plus the outline that they had worked on together, the writer felt much more confident in herself. After fifteen or so minutes, she had finished her first paragraph and begun her second. This was huge progress.
As you may have noticed in this and the previous scenario, Noah usually prompts ChatGPT to produce two or more outputs. This can discourage writers from passively defaulting to whatever the LLM generates on its first try; it also feeds into how he is trying to teach writers to use AI selectively, critically, and creatively–and not just in sessions with him but also later on their own.
Generating a Thesis and Defining Terms
Another tutor was assisting an undergraduate on an essay about tea and culture. The writer had many great ideas, but they were unorganized. Additionally, the essay was supposed to be thesis driven, but when asked to articulate her thesis, the writer was stumped.
They used ChatGPT in a few different ways. They copied the writer’s introduction paragraph into ChatGPT and told it to create a thesis statement from that. This helped the writer by giving her a potential first draft of a thesis statement; it also pushed forward the conversation in the tutorial. The writer ended up incorporating only a small section of the AI-generated text into her final thesis statement.
Later, the writer wanted to provide some sort of definition of culture and how it relates to tea, as this was a requirement for the assignment. However, she was having trouble writing a sentence to convey these points. They told ChatGPT to write a definition of culture and to use certain phrases such as “something lasting from generation to generation,” “something you grow up with”, and “that people around you know the culture” (these were all phrases that the writer was playing around with and wanted to include). While ChatGPT’s first response was too long, by asking ChatGPT to shorten it, they got the desired outcome.
Rephrasing and Supporting a Thesis Statement
A writer asked a tutor for assistance for her essay on FDR’s New Deal. Her draft was fairly well developed but lacked a strong thesis statement.
They decided to use ChatGPT to reword the thesis statement. They put the draft thesis in (“Little did Roosavelt know that this deal created to get the country back on its feet would be relevant over one hundred years later”) and told it to make the wording stronger and more explicit. Here’s what ChatGPT produced: “The deal initiated by Roosevelt with the aim of revitalizing the country’s economy in the past century has persisted in relevance, demonstrating the long-lasting impact of his policies on modern society.” Although this version doesn’t strengthen the thesis in terms of content or originality, it converts its language to a more academic register, which the writer liked better.
Where ChatGPT offered a more substantial lift was in generating possible supporting arguments, which she was struggling with. They put the new thesis into ChatGPT and told it to come up with seven arguments to support it. The writer went on to incorporate some of these into her paper; but even the arguments that she ultimately rejected were still helpful for her brainstorming process.
Shortening and Sharpening a Thesis Statement
Noah was working with a student on a history essay about Christopher Columbus. The writer was concerned about his essay’s organization and that his thesis statement was too long. Noah agreed that making it more concise could help the writer become clear about what he was really trying to say.
They put his three-sentence thesis statement in ChatGPT and told it to generate three options for how to shorten it to one sentence. Each version left out a small segment of what the writer wanted to say, so they took bits and pieces of each option that ChatGPT produced and assembled them to craft a thesis that the writer thought reflected his intended meaning. This helped the writer become clearer about the purpose of his essay.
In this case, as in the others, Noah prompted some meta-reflection on the process and emphasized the writer’s agency in making choices.
Editing Wordy Sentences Quickly
A writer came in for a short appointment, only 25 minutes, needing to work on a proposal for one of his business classes. Even though he booked a short session, he had a long draft. With this in mind, the two got started right away, making sure to read quickly and keeping their comments brief.
The appointment was going smoothly until they hit a road bump: a sentence that both Noah and the writer agreed was wordy and needed to be fixed. However, neither of them could come up with a fast and easy solution. Noah suggested trying ChatGPT. They pasted in the wordy sentence with a request to rewrite four versions of it. They made sure to clarify to ChatGPT that it should be clear and concise. It delivered four options, and the tutee ended up liking one. The session ended on time.
True to the transformer at the center of their design, LLMs are particularly adept at transforming text, which makes them quite effective for summarizing, translating, and editing. More often than not, ChatGPT is not better than us at writing, but simply faster.
Generating Headings and Titles
One of Noah’s fellow tutors was helping a doctoral student who was writing on critical race theory. At one point during their session, the writer admitted that she needed help thinking of a title for one of the headings in her essay.
The two pulled up ChatGPT and pasted the entire section that the writer needed a heading for and asked ChatGPT to generate three possible options. The writer liked one of these options and copied it directly into her piece.
In a parallel case outside the writing center, earlier this year Alex was preparing a scientific poster that summarized a strand of his research on AI in public health for an upcoming academic conference. He wasn’t happy with his title, so he did some quick prompt engineering (“Play the role of a computer science researcher presenting a poster at an academic conference”), put all the poster content in ChatGPT, and asked it to generate a title. It came up with one that was pithy and alliterative but still serious–just the kind of title that graduate students and professors might produce but that most undergraduates wouldn’t have the experience to create. It included a minor factual error that Alex needed to correct, but otherwise the AI-generated title was much better than his earlier one.
In her 2017 CCCC Chair’s Address, Linda Adler-Kassner emphasized that “writing is never just writing,” by which she meant that student writing is always entwined with ideology, with identity, with emotions, with its educational context, with larger social forces (she also sounded a note of caution about the potentially pernicious effects of AI-style data analytics, especially if consumed passively).
We echo Adler-Kassner in claiming that tutoring writing is never just tutoring writing. That is, when tutors work with writers on texts, they often also coach “studenting” (navigating relations with professors, seeking out resources on campus, demystifying academic culture, etc.) and introduce a range of writing-adjacent tools (library databases, translation applications, citation management tools, etc.). LLMs are the newest tool in this broader conception of writing and tutoring processes.
We’re not recommending that tutors use ChatGPT for every session, but we do think that AI will more and more be a collaborator in the composing process, making the time ripe for introducing it into tutorials in small, strategic ways.
As we do that in our center, we are–true to the writing center ethos–trying to keep the long-term development of writers in mind. We anticipate, for example, that as a tutor and writer practice prompt engineering during a session, the writer will carry forward much of what they learn about the process.
There are no doubt many ways to use AI in writing centers that we haven’t touched on here, and we look forward to hearing about those.
Tom Deans directs the writing center at the University of Connecticut, where he is also a professor in the English Department. His research interests include writing across the disciplines, community writing, prose style, and representations of writers in literary and sacred texts.
Noah Praver graduated from the University of Connecticut in May 2023 with a degree in political science and philosophy. During his time at UConn, he served as a tutor at the Writing Center and played active roles in the undergraduate philosophy and rock climbing communities.
Alexander Solod is an AI researcher specializing in human-AI collaboration strategies. His research involves the development of effective prompting techniques for Large Language Models, detection of AI-written text, and the creation of AI-powered tools for clinical and educational applications. He’s the CEO and founder of Ed-Tech Innovation and Strategy (EDTIS) Group, which focuses on helping educational systems adapt to the current AI-driven landscape.