Making Participation Count

By Harry Brighouse

For four years I read more syllabi than any one person ever should. With colleagues on our university’s Curriculum Committee, we vetted all new course proposals, which had to include a syllabus specifying how students would be graded. The wide variety of approaches to student participation was striking.

Most syllabi encouraged, but did not formally grade, participation. In some, participation constituted as much as 50% of the grade. A good number measured it simply as attendance, penalizing students who exceeded a specified number of absences. But few clearly detailed what counted as quality participation or how the faculty member would monitor it.

Unfamiliar with the practice, I started asking faculty why they graded participation and what they counted. The standard response was that you have to grade it, “otherwise students won’t talk.”

I was skeptical. Whereas we can provide students with a reasonable understanding of what is required when writing an essay, taking a test, setting up an experiment, or making a presentation, participation is vaguer. But let’s assume that participation is, as colleagues tended to say, speaking in class—an action that is, in principle, readily observable and gradable. A number of problems arise.

The first problem is obvious: It’s not just talking, but talking productively, that we care about. Saying things that are interesting and useful to the conversation is a sign of good participation; saying things that are off-topic is a sign of bad participation. If we’re going to grade students’ talking, we should focus on quality, not quantity.

Students need to know this. But once they do, some feel pressure to impress you with correct or pat comments. In setting expectations, it’s hard to overstate that quality includes getting things wrong—for good reason. As a recent graduate wrote to me, “One thing I’m especially grateful for: I’m more willing to risk getting things wrong in discussion and writing than I used to be because you made it clear in class that making mistakes is part of engaging rigorously with philosophy and not something to fear. That seems obvious now, but it wasn’t always.”

The second is the "zero sum" problem. By comparison, when writing an essay, one student does not diminish another student's opportunity to write her essay. But class time is finite. One student talking, however well, reduces the time available to others. We can mitigate this somewhat by continuing conversations online. But the unintended consequence remains: If it's a competition for "air time," talkative students pursuing a strong participation grade necessarily limit their classmates' opportunities to earn the same good grade.

Third are the “interaction effects.” Whereas the quality of one student’s essay does not affect the quality of another student’s submission, conversations are interactive and unpredictable. A well-intentioned conversational gambit could still provoke low-quality responses. Good moderation can only mitigate, not eliminate, the effects. If you doubt that, think about the worst 25% of department meetings you attended last year.

Finally, the quality of discussion is in very large part a function of the skill of the teacher. Even well-prepared and skilled students can participate badly if we've not cultivated our own ability to manage discussion well—to give students the right guidance for preparation, respectfully deflect irrelevancies, ask the right questions, get students to engage with one another, draw out shy speakers, and prevent the verbose from dominating. The last is especially important: If we don't manage dominant talkers, the rest cannot participate.

Given these shortcomings, grading participation on who talks and who doesn’t reminds me of William Bruce Cameron’s comment (often misattributed to Albert Einstein) that “not everything that can be counted counts, and not everything that counts can be counted.”

What else should “count”? Students participate by flicking through their text to find the relevant passage and point it out to a neighbor, by listening intently to their peers, by indicating through their body language that an idea is worth considering, or by noticing that someone else wants to say something and drawing them into the conversation.

I want my students to be doing all of these things—participation as engagement—and I try to consistently make my expectations clear. This includes what they should do and should try to avoid—so they don’t think that only talking, and any kind of talking, counts. Various strategies can encourage these broader forms of participation: getting to know your students, having them learn one another’s names, cold calling, and above all, carefully formulating discussion prompts and purposefully moderating the ensuing conversations.

When you and your students understand exactly what counts as high-quality participation and you’re honing your skills to enable all students to participate well, most of the above problems can be neutralized.

But one challenge remains, and it gives me pause about the validity of grading participation. Again by comparison, I grade papers and exams one at a time, when I can focus on each without distraction. I do not attempt to grade papers while also ensuring that my students are learning through a discussion I am moderating. Even if I were excellent at running a high-quality discussion among a small group of students—and I've got room for improvement—I don't think I have the mental capacity to simultaneously grade their involvement in a fair and consistent manner.

Given these human limitations, I've got to become a more acute observer of the other, richer forms of participation beyond discussion. And by letting my students know I'm looking for these things too, graded or ungraded, they're likely to do them more often.

What to read next: “Navigating the Need for Rigor and Engagement: How to Make Fruitful Class Discussions Happen” by Harry Brighouse



New Certificate to Strengthen Guided Pathways

Nationwide, the guided pathways movement is helping more students succeed. But as Sova co-founders Alison Kadlec and Paul Markham observe, “not nearly enough has been done to meet faculty where they are, speak to their interests, and bring them into this work as true partners.”

To bolster faculty engagement—for the strongest possible impact on student success—ACUE and Sova are developing a new certificate in guided pathways implementation. This credential will focus on best practices that ensure student learning and persistence to completion—pillars three and four of the Pathways model.

Sova supports higher education organizations across the country in efforts to scale evidence-based student success initiatives. It provides facilitation support to the Pathways Partner Collaborative, a group of leading student success-focused organizations responsible for supporting the implementation and scale of guided pathways nationwide.

“Sova brings a wealth of insights—all of which will inform our new certificate,” said Penny MacCormack, ACUE’s chief academic officer. “We’re going to ensure that faculty have a rich understanding of their role in guided pathways and are equipped with evidence-based teaching approaches.”

The guided pathways reform model has four main pillars. First, clarify programs of study to better inform students about the sequence of courses needed for a degree. Second, ensure that students choose the right program, one that fits their strengths and career aspirations. Third, help students stay on the path to complete their studies. Fourth, ensure learning at every step along the path, so that graduates are well-prepared.

To date, work across these pillars has largely been led by administrators, advisors, and student support personnel. But, as MacCormack notes, “it’s hard to overstate the influence of faculty on students’ career interests, choice of program, depth of learning, and determination to complete their studies.”