Faculty Spotlight: John Yoder

Dr. John Yoder, professor of plant sciences at UC Davis, has introduced a highly effective element into his large-lecture undergraduate courses: peer review via Google Forms. “It’s helpful for students to read others’ papers,” Yoder explains, “and it also allows faculty to ‘outsource’ some of the preliminary grading.” Setting up a Google Form based on a peer review rubric streamlines this process because all of the students’ reviews are automatically compiled into a Google Spreadsheet, which Yoder can easily export to Excel. Students access the Google Form via a link in SmartSite.
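Because every review lands in one spreadsheet, a first grading pass can be scripted. The sketch below is a minimal illustration, not Yoder’s actual workflow; the filename and column names are invented stand-ins for whatever fields the rubric-based Form collects.

    import csv
    from collections import defaultdict

    # Each row of the exported responses sheet is one completed review.
    # The filename and columns ("Paper ID", "Overall Score") are
    # hypothetical stand-ins for the fields the Form actually collects.
    scores = defaultdict(list)
    with open("peer_review_responses.csv", newline="") as f:
        for row in csv.DictReader(f):
            scores[row["Paper ID"]].append(float(row["Overall Score"]))

    # Average the reviews each paper received, for a first grading pass.
    for paper, reviews in sorted(scores.items()):
        print(f"{paper}: {sum(reviews) / len(reviews):.2f} ({len(reviews)} reviews)")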

To create the rubrics for these forms, and to ensure that the peer review process is successful, Yoder conducts a “peer review calibration” at the beginning of the quarter. He distributes three sample papers (one good, one average, and one poor) that the students use to discuss what makes a paper “good.” The class then develops a rubric and uses it to evaluate the three papers; Yoder uses the same rubric to create the Google Form. If a student’s scores are far off from those of his or her peers, that student redoes the evaluation. “Once they are calibrated,” says Yoder, “the students use the same system to grade each other’s papers.”
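That “far off from peers” check amounts to a simple outlier test. Here is one hypothetical way to flag a reviewer whose calibration scores diverge from the class consensus; the scores and the tolerance are invented, and the real cutoff would be a judgment call for the course.

    import statistics

    # Hypothetical calibration data: each reviewer's scores for the
    # good / average / poor practice papers, keyed by the last four
    # digits of the student ID.
    calibration = {
        "4821": [9, 6, 3],
        "7310": [8, 7, 2],
        "1954": [4, 9, 8],   # far off from the class consensus
    }

    THRESHOLD = 3  # assumed tolerance; the actual cutoff is a course decision

    # Class consensus score for each of the three practice papers.
    consensus = [statistics.median(s[i] for s in calibration.values())
                 for i in range(3)]

    for reviewer, scores in calibration.items():
        drift = max(abs(a - b) for a, b in zip(scores, consensus))
        if drift > THRESHOLD:
            print(f"Reviewer {reviewer} should redo the calibration (drift {drift})")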

In one of Yoder’s undergraduate courses, he takes this strategy a step beyond simply evaluating essay drafts. He divides his students into twenty-eight five-person teams, which are distributed among four discussion sections (seven teams per section). “Each team comes up with a plan for selling a product,” Yoder explains, “and then the business proposals are peer-reviewed. The teams compete against each other within the discussion section; the section picks a winner and they work together on the winning pitch, which competes against the other discussion sections’ winners. The winners get a bump up in grade, and everyone in the winning section gets a bonus point.”

Yoder also uses Google Forms to implement an intra-team review that ensures individual group members receive appropriate credit for their work. In one scenario, each student rates her teammates on a scale of 1-5: a three is for average effort, fours and fives are for those who did a bit more work, and ones and twos are for those who did less. Yoder says this scenario works pretty well, but sometimes students just give everyone a three. In another scenario, each group member is given nine “points” or “shares” to distribute among her four teammates, which forces her to give some classmates more and some less. As with the peer review, the students’ responses are synthesized and exported to a spreadsheet that the instructor and TAs review. “This system allows the TAs and instructor to go in and see if multiple team members gave one student more or fewer points,” says Yoder, “and we use this information to grade the individual students.”
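The nine-shares arithmetic is straightforward to automate once the responses are in a spreadsheet. This hypothetical sketch totals the shares each teammate receives and flags anyone far from the even-effort average; the names, allocations, and flagging threshold are all invented for illustration.

    # Hypothetical "nine shares" allocations for one five-person team:
    # each member splits 9 points among their four teammates.
    allocations = {
        "Ana": {"Ben": 3, "Cy": 2, "Dee": 2, "Eli": 2},
        "Ben": {"Ana": 2, "Cy": 2, "Dee": 2, "Eli": 3},
        "Cy":  {"Ana": 2, "Ben": 3, "Dee": 1, "Eli": 3},
        "Dee": {"Ana": 2, "Ben": 3, "Cy": 2, "Eli": 2},
        "Eli": {"Ana": 2, "Ben": 3, "Cy": 2, "Dee": 2},
    }

    totals = {}
    for giver, shares in allocations.items():
        assert sum(shares.values()) == 9, f"{giver} must allocate exactly 9 shares"
        for receiver, pts in shares.items():
            totals[receiver] = totals.get(receiver, 0) + pts

    # With five members each giving out 9 shares, an even-effort team
    # averages 9 shares received per person (45 shares / 5 students).
    for student, total in sorted(totals.items(), key=lambda kv: -kv[1]):
        flag = "" if abs(total - 9) <= 2 else "  <- review with the TA"
        print(f"{student}: {total} shares{flag}")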

The technology is not complicated and peer review is not new, but Yoder’s combination of the two has created an exceptional environment for fostering collaboration and writing skills.

Post Author: Mary Stewart

5 thoughts on “Faculty Spotlight: John Yoder”

    Mikal Saltveit

    (December 15, 2012 - 9:13 am)

    Two questions:
    What internal controls are needed to prevent the evaluations of effort from becoming personality contests?
    How does the open review process fit in with the University’s increased concern with protecting students’ identities?

    mkstewart

    (January 10, 2013 - 1:27 pm)

    Hi Mikal,

    I posed your questions to John and received the following response:

    For the second question, all the peer reviews are done anonymously, using the last 4 digits of the student ID number as the identifier. I think this doesn’t conflict with privacy concerns.

    The first question is trickier: how to get students to write honest, fair evals. There is no silver bullet here, but rather incremental steps.

    First, we spend a lot of time discussing the rubric and have practice peer review sessions on example papers to calibrate the responses. If an eval is significantly different from the norm, we have the student redo the calibration exercise. In the future I am going to spend even more time on the rubric and grading calibration, because this best illustrates the difference between good and bad writing.

    Second, the proper weight needs to be given to the reviews. The paper has three sequential graded components: an outline graded by the instructor, a draft graded by peer review, and the final version graded by the TA. The outline is 10%, the peer review 20%, and the final paper 70%. The aim is to let students know their peer reviews are significant, but not weighted so heavily that they fear getting screwed by another student.

    Third, we don’t take the student peer review scores automatically; if there is a wide divergence in the peer review scores, the TA will look over the draft and eliminate or properly weigh the outlier scores.

    Finally, the course is about scientific integrity, and we discuss the pros and cons, dangers and concerns, of peer review to a considerable extent; whether students act on this properly is, of course, an open question.

    Best,
    Mary
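    To make the weighting above concrete, the sketch below drops widely divergent peer scores in the way John describes and then applies the 10/20/70 split. The divergence cutoff and the scores are invented for illustration.

        import statistics

        def peer_component(scores, spread_cutoff=15):
            # If the reviews diverge widely, drop scores far from the
            # median before averaging (the cutoff here is an assumption).
            if max(scores) - min(scores) > spread_cutoff:
                med = statistics.median(scores)
                scores = [s for s in scores if abs(s - med) <= spread_cutoff]
            return sum(scores) / len(scores)

        # Invented example on a 100-point scale: the 40 is an outlier review.
        outline = 90
        peer = peer_component([80, 82, 40])   # -> (80 + 82) / 2 = 81.0
        final = 85

        grade = 0.10 * outline + 0.20 * peer + 0.70 * final
        print(f"paper grade: {grade:.1f}")    # 9.0 + 16.2 + 59.5 = 84.7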

    Elyse Lord

    (November 10, 2013 - 1:20 pm)

    This is fantastic; was the original presentation video recorded, and, if so, where would one find that video?
