Professor & Students

Improving peer feedback in project-based classrooms

My Role
User Research
Interaction Design


Team Members

Karen Xu

Shuqi Yang

Ginny Zhao

Lab: CMU Oh! Lab





Methods

Semi-structured Interview

Theory-driven Coding

The Challenge

Peer feedback systems in project courses offer substantial value: students learn from one another, reflect on their own work, and improve their learning without adding heavily to instructors' already busy schedules. In many project courses, students are asked to provide feedback after class, but this approach poses several challenges. Instructors lack real-time visibility into students' peer feedback processes and have no easy way to interact with feedback in a timely, visible manner. In addition, student feedback is not always timely, relevant, diverse, or sufficient. How might we improve peer feedback systems in project-based classrooms so that they are efficient and valuable to both students and instructors?


Our team created an instructor dashboard that better facilitates the peer feedback processes in design and project courses.

Our design lets instructors see all the courses they teach, upcoming and past presentations for each course, and the groups within each presentation, all in one place. While presentations are happening in class, instructors can provide and manage peer feedback in real time, and they can return to it on their own time after the presentation is over. They also have control over feedback: they can hide, star, and reply to student comments.

Research and Design Iterations

Iteration 1

In order to understand instructors' current experiences and challenges, we conducted semi-structured interviews with 10 instructors.

Participants: All instructors had experience teaching small studio classes where peer feedback sessions are essential components of the class. Six of the ten had nearly 20 years of teaching experience.


Method: Participants were asked about their teaching philosophies, their experience using peer feedback, their expectations for peer feedback, and the difficulties they encountered with existing peer feedback systems.

Analysis of our first round of interviews revealed these major findings:

Experience and Expectations 

1 Instructors hope for classes to be engaging and collaborative

2 Students learn important skills from practicing feedback

3 Instructors believe peer feedback is useful in engaging students

4 Instructors believe peer feedback can be used to assess students



Challenges

1 Students fail to give constructive feedback

2 Students give irrelevant or rude feedback

3 Students don’t participate in giving feedback; the same students always give feedback to their peers

Based on these findings, we created a rubric with 3 areas to guide our designs.





1. Feedback Quality Control (assess the quality of feedback students are giving, provide guidance for giving feedback)

2. Student Engagement (keeping track of progress and participation of students)

3. Pragmatics (provide relevant information and reduce the cognitive burden for instructors when it comes to peer feedback processes)

Using this rubric, our team created 3 instructor dashboard prototypes (clickable on Figma):

Interface 1:

Interface 2:

Interface 3:

Iteration 2

To test our designs and gather feedback, we conducted a second round of interviews with instructors.

We reached out to the same instructors, and 7/10 agreed to participate in the follow-up interview.

Method: Due to the pandemic, we interacted with participants remotely, giving instructors remote control to interact with the interfaces. Participants were shown the three interfaces one by one. For each interface, instructors were asked to first explore on their own and think aloud as they did so. They were then given different classroom scenarios and asked to talk through what they would do in those circumstances. For each interface, instructors also answered follow-up questions, such as what challenges they ran into and how they would use the interface in their teaching. The interviews closed with overall questions, such as which features they would combine from each interface and which interface they would most likely use in their teaching.


Analysis: We analyzed the interviews using theory-driven coding and affinity diagramming. Each team member took 2-3 codes and coded all 7 transcripts. This analysis surfaced three major findings:


1. Most instructors find it useful to keep track of individual students as well as the trajectory of the class as a whole. Instructors want to know how each student is doing throughout the process so they can provide guidance when necessary, and they also want to see overall class trends.

“I think the most helpful would be seeing individual students in one that I could provide coaching and feedback to them on how they’re doing with the feedback.” 

2. During class, instructors prioritize keeping the class's attention, and their own, on the students who are presenting. They prefer not to be distracted during presentations and only want to view relevant information.

3. Instructors also point out that when they frame peer feedback as practice rather than assessment, they are willing to place more trust in students. They believe such activities help students polish their feedback-giving skills.

“I mean the idea of seeing what people are saying, and especially seeing how many people reacted to that, seems pretty helpful, and the live comments I think would be especially interesting to have a sense of what that feels like.” 

“I mean I would look for the comments. I guess I’m supposed to mark them if they’re specific and objective, but I think that would be distracting for me because I’d be trying to watch the presentation and taking notes on the presentation.”

"It feels kind of good. It feels like that’s a chance to exercise stuff and sort of think about feedback in a way that would be useful.”

Using these findings, we iterated on our designs.

Our next step was to create a single prototype combining the features instructors responded to best across all three prototypes.

Our next iteration of the instructor dashboard is shown below:

Our team removed information instructors found distracting or irrelevant when viewing presentations in real time. We also combined features from the three interfaces that instructors said were especially beneficial for managing peer feedback during presentations.

We also finalized the style guide based on mock-ups created by the implementation team.

Iteration 3 (The Final Design)

Throughout the project, our team also worked with a front-end software engineer to bring this solution to life. During iteration 2, due to technical constraints, we moved forward with a hybrid desktop view. However, our team believed a full desktop version would offer much better readability and efficiency, especially because instructors will use PeerPresents in real time during class. We raised the idea of switching to a full desktop view at the next meeting, and after reaching consensus, we implemented it in the final design shown below.

We also updated the design system to a more modern, clean look, while keeping the content consistent, since users found immense value in the existing content.

Our next step is to test the visual design and usability of the new design. 

Next Steps

Our next step is to keep interacting with and listening to our users so we can continue implementing feedback and improving the dashboard's content, usability, and visual design. We have conducted think-aloud usability tests and semi-structured interviews on this new iteration of the design with 3 instructors so far, and our team plans to continue learning from a diverse range of instructors across different fields and universities.
