“Are We Walking the Walk?”: Undertaking Writing Center Assessment


Writing Center Research / Tuesday, May 3rd, 2022

By Angela J. Zito

Assessment is not everyone’s favorite thing to talk about, but it is one of mine. In this post, I’ll try to convince you that writing center assessment can be a worthwhile and invigorating process that places the things you care about most—inclusivity, student learning, accessibility, tutor education—front and center. Assessment projects urge us to ask ourselves, “Are we walking the walk, as well as talking the talk?”

That is, after all, the core project of assessment: to make apparent the gap between intended and actual effects. Of particular importance—for us at UW-Madison and for all writing centers—is the endeavor to identify gaps between values and practice, between commitments and policies. Writing center assessment provides a means for identifying and, more importantly, closing (however incrementally) gaps between professed values of inclusivity and the real effects that policy and practice have on the people we engage with.

In the sections that follow, I describe our Writing Center’s ongoing development of an assessment project that aims to initiate long-term, iterative evaluation of our programs’ various enactments of inclusive, accessible, and anti-racist values.


Reviewing the Literature

Our process of developing an assessment project has been anything but linear—which, as we’ve learned from the literature on writing center assessment, is not uncommon! Even engaging with scholarship on the topic has been a recurring part of the process, not something that we “completed” before beginning in earnest.


We found that Ellen Schendel and William Macauley’s Building Writing Center Assessments that Matter offers a helpful overview of assessment research in writing center studies (as does Miriam Gofine’s review “How Are We Doing?”). In their book, Schendel and Macauley also provide accounts of their own assessment projects, which demonstrate that any center’s approach can (indeed should) be unique to its own local context and purposes.

To help us develop an assessment that would be attuned to the idiosyncrasies of our Writing Center, we adopted the three-step approach Macauley outlines in his chapter “Getting from Values to Assessable Outcomes”:

  1. Identify what your center values
  2. Identify indicators of those values
  3. Identify assessable outcomes

In this approach, we would begin by scoping out to consider our Center’s overarching mission. From there, we would focus our attention on increasingly specific aspects of our Center’s work that we believe carry out that mission. Ultimately, we would arrive at “assessable” outcomes—that is, outcomes we would be able to observe and evaluate.

Of course, each of Macauley’s three steps is a significant undertaking of its own, prompting us to seek out additional models and guidance along the way. For instance, we found Geist and Geist’s article “Drafting a Writing Center: Defining Ourselves Through Outcomes and Assessment” valuable in navigating the first step (“Identify what [our] center values”). Geist and Geist make a strong case for thinking expansively about the work of a writing center—that is, recognizing that we do (and should assess) so much more than tutoring. They propose five levels of inquiry to help a writing center define itself expansively:

  • What is our central purpose?
  • What are our responsibilities to the college at large?
  • What are our responsibilities to our broader academic community?
  • What kind of training are we offering to our tutors?
  • What kind of service and support are we offering to student writers? (5)

As we worked our way into Macauley’s second and third steps, we found that thinking about assessment methods (e.g., surveys, appointment forms, focus groups) helped us to define for ourselves what counted as “indicators” of our values being put into practice and what counted as “assessable outcomes” of the same. Neal Lerner’s “Of Numbers and Stories” and Pleasant and Trakas’s “Two Approaches to Writing Center Assessment” offered some guidance in this, helping us to identify what quantitative, qualitative, direct, and indirect methods we might use to evaluate how and how well we’re “walking the walk.”

Getting Underway

Our process began in the fall of 2020 when, like so many writing centers, we were continually discovering new aspects of our work that would need to be revised for a fully online semester. This included revising our annual instructor evaluation survey, which would need to be updated for electronic distribution and, because our services had changed so much since “the pivot,” would need to ask new questions about the effectiveness of our programs.

We assembled a committee of academic staff members to undertake the project and quickly found that these revisions would be most effective if they were informed by our overarching goals as a Center: What outcomes did we hope to achieve through our one-to-one instruction, writing groups, workshops, and outreach? What role did we want our unit to play in our institution’s responses to anti-Black, anti-Asian, and other forms of violence that year (and into the future)?

Answering these questions would require us to fully articulate those goals, and the process of articulating them would position us well to design an assessment project that homed in on one or more specific areas of our practice.

Screenshot of the UW-Madison Writing Center’s Inclusivity Statement page, which begins, “The Writing Center’s mission is to provide a welcoming, egalitarian, and accessible learning environment that is committed to affirming social justice and meeting the diverse needs of the entire UW–Madison community.”

(Re)Articulating Our Center’s Values and Goals

In their account of developing an assessment plan, Geist and Geist write, “At its core, assessment is about relationships and responsibilities. The first step in our overhaul was asking ourselves two questions. First, what were our responsibilities? And second, to whom were we responsible?” (5). These questions reflect where we found ourselves in 2021, considering our relationships and responsibilities capaciously—encompassing our undergraduate and graduate student writers, our graduate instructors, our undergraduate receptionists, and our faculty and staff partners across campus and in the community.

Fortunately, we had other projects underway at this time that helped us to articulate the overarching goals of our Writing Center. For instance, teams of career staff and graduate leaders collaborated to update the Writing Center’s Mission Statement and to draft an Accessibility Statement. These conversations informed our identification of goals that embrace responsibility to a variety of stakeholders. One such goal reads: “Through its many programs and services, the UW-Madison Writing Center aims to serve as a campus leader in advocating for and practicing inclusive, accessible, and anti-racist writing instruction and assessment.”

However, for our evaluation survey (or any assessment method) to be truly useful, we would need to hone such capacious goals into multiple measurable “outcomes.” As Macauley details in “Getting from Values to Assessable Outcomes,” the intermediate steps of an assessment project reveal the complexities of individual goals like the one stated above. Our conversations about what “indicators” we might observe in tutoring sessions that demonstrate anti-racist instruction, for example, helped us to identify at least three different categories of outcomes that we might assess: program impact (outcomes for the broader University, city, and writing center communities), professional development (outcomes for continuing education of Writing Center staff), and student learning (outcomes for student-writers who use Writing Center services). The table below illustrates how one goal might be parsed across these three categories.

Parsing One Goal into Multiple Outcomes

Tutoring Goal: Engage student writers in critical, reflective dialogue about writing broadly, and about their concerns regarding specific drafts.

  • Program Impact (outcomes for the broader University, city, and writing center communities): Student population feels heard in their experiences with writing and the Writing Center more broadly.
  • Professional Development (outcomes for continuing education of Writing Center staff): Instructors practice rhetorical listening (attending to students’ expressed goals and preferences).
  • Student Learning (outcomes for student-writers who use Writing Center services): Student writers describe their own writing process and situate challenges associated with a particular draft within larger rhetorical contexts.

This process of honing goals into outcomes led to more transparent alignment between our Writing Center’s mission and the feedback we would solicit through our evaluation surveys.

Revising the Evaluation Survey

With a clear set of goals for our Center’s work and a comprehensive view of its programming, we could have pursued any number of assessment projects by spring 2021. The instructor evaluation survey remained our priority, though, because we felt keenly our responsibility to our instructors and our students. The survey presented a “built-in” means of providing instructors with much-desired feedback on their virtual sessions and material for their teaching portfolios, and it provided students with a convenient and direct means of sharing their experiences with the instruction as well as the logistics of our online Writing Center.

The work we had done up front to narrow our goals into outcomes made the revision process much more efficient—but not necessarily simpler! We were able to identify questions that needed replacing because they would likely yield ambiguous responses. For instance, we had trouble imagining useful responses to the question “How intellectually engaging did you find your session today?” Was this question asking whether the student felt they learned something in the session? Was it asking whether the session was enjoyable? Was it asking whether the instructor’s expertise in the subject of writing and rhetoric was apparent?

The hard work of the revision process lay in identifying which, if any, of these possibilities best aligned with the outcomes we wanted for our one-to-one instruction. Ultimately, we did arrive at some new questions that we’re happy with (for now, anyway). For instance:

  • To target a student learning outcome, one question we now ask is “Did you learn about or apply concepts about writing? These might include audience, purpose, language conventions, or something else.”
  • To target a professional development outcome, we ask, “How would you describe your instructor’s collaboration with you? Forms of collaboration might include asking questions, discussing options for revision, or something else.”
  • To target a program impact outcome, we ask, “The Writing Center aims to be inclusive, accessible, and anti-racist. From your perspective, how might we better meet these goals?”

We are currently wrapping up our third round of collected responses to this revised survey, and already we have made good use of the information students provide us. Each semester since the revision, during our first staff meeting, we have reviewed some key themes in the responses and discussed together what practices we might adopt or change to better serve our students and support our colleagues.

Drafting an Assessment Research Question

Though we’ve already made good use of the evaluation survey in seeking to address gaps between our values and practices, we have not yet fully embarked upon our larger assessment project. We approach this project as a form of research, and as such we are taking care to pose a significant (and feasible!) research question. 

That means we need to narrow our focus. Our first means of narrowing the focus is to identify which overarching goal for the Center we want to learn more about. We’ve selected the goal shared above: “Serve as a campus leader in advocating for and practicing inclusive, accessible, and anti-racist writing instruction and assessment.” Our second means of narrowing the focus is to identify a specific program area that we might assess in order to learn how well we are achieving the goal. We have selected our one-to-one instruction as the program to study, in large part because we have already developed a research instrument (the survey) that will yield useful information.

This has led us to the following provisional research question: “How do Writing Center instructors implement an anti-racist mission through specific strategies in writing instruction?”

Selecting Additional Methods

Following Ellen Schendel’s advice in “Integrating Assessment into Your Center’s Other Work,” we plan to use data collection strategies and administrative procedures that we already have in place. The updated evaluation survey, for example, will not require any additional labor for us to use as part of the assessment project (outside of analyzing the data!).

We plan to select two additional methods to complement the evaluation survey responses. We’re considering the value of both qualitative and quantitative sources of information (the evaluation survey provides a bit of both). Neal Lerner has suggested that quantitative data is best suited to secure funding, whereas qualitative data “contributes to improving what you do” (111).

Pleasant and Trakas have argued recently that both kinds of evidence might serve either purpose, and go on to claim that the real decision lies in selecting direct over indirect measures. The evaluation survey is one indirect measure we’ll already be using, in that it relies on self-reporting and overall impressions. Direct assessment, in contrast, relies on artifacts produced before, during, or after the “intervention” (i.e., the tutoring session). While Pleasant and Trakas identify student writing as the ideal direct measure, we’re interested in gaining insight into perceptions and experiences of anti-racist practice, which might be better assessed through observations.


Closing Thoughts

A real challenge in these final stages of designing our assessment project is the knowledge that there are so many other goals, outcomes, research questions, and methods we might select. Geist and Geist remind us, however, that this will not be the one and only assessment project we undertake. “One of the foundational tenets of assessment pedagogy,” they write, “is that meaningful assessment is recursive in nature, and […] perpetually subject to revision” (5).

I think it’s that recursive nature of assessment that I find so worthwhile and invigorating—it’s an active process of critical self-reflection and forward-looking revision. 


Even if assessment still isn’t your favorite thing to talk about, I hope to have prompted some thought about your writing center’s values, practices, and means of identifying the gaps in between. If so, what ideas do you have for drafting your next assessment project? Perhaps your writing center has recently revised its outcomes, or perhaps you’ve used methods in the past that you wouldn’t use again… please share in the comments!

Works Cited

Geist, Joshua, and Megan Baptista Geist. “Drafting a Writing Center: Defining Ourselves Through Outcomes and Assessment.” Praxis: A Writing Center Journal, vol. 15, no. 1, 2017, pp. 4–11.

Gofine, Miriam. “How Are We Doing? A Review of Assessments within Writing Centers.” Writing Center Journal, vol. 32, no. 1, Mar. 2012, pp. 39–49. 

Lerner, Neal. “Of Numbers and Stories: Quantitative and Qualitative Assessment Research in the Writing Center.” Building Writing Center Assessments That Matter, edited by Ellen Schendel and William J. Macauley, Jr., Utah State University Press, 2012, pp. 131–139. Project MUSE.

Macauley, William J., Jr. “Getting from Values to Assessable Outcomes.” Building Writing Center Assessments That Matter, edited by Ellen Schendel and William J. Macauley, Jr., Utah State University Press, 2012, pp. 50–81. Project MUSE.

Pleasant, Scott E., and Deno P. Trakas. “Two Approaches to Writing Center Assessment.” Writing Lab Newsletter: A Journal of Writing Center Scholarship, vol. 45, no. 7–8, Mar./Apr. 2021, pp. 3–10.

Schendel, Ellen. “Integrating Assessment into Your Center’s Other Work: Not Your Typical Methods Chapter.” Building Writing Center Assessments That Matter, edited by Ellen Schendel and William J. Macauley, Jr., Utah State University Press, 2012, pp. 140–161. Project MUSE.

Angela J. Zito is Teaching Faculty with the UW-Madison Writing Center, where she also serves as Associate Director of the Madison Writing Assistance and Writing Across the Curriculum programs. Her scholarship on assessment practices has appeared in the journals To Improve the Academy and Pedagogy (forthcoming).

