Assessing individuals in team projects: A case study from computer science
Paper in proceedings, 2011
In this paper we describe an ongoing action research (Kember & Gow, 1992) project to improve teaching and
learning in the course “Model-driven software development”, given by the Department of Computer Science
and Engineering. It is a project-based course, and after taking it the students should be better able to analyse
and specify software through models.
There were two drivers for the course reform. Before the reform in 2009, the software models were used in an
informal way, and it was therefore hard to validate the correctness of the system from them. The students were
instead assessed through a final written exam, even though most of the work was done in the team project.
Through contacts with industry, however, we got hold of a tool that enables testing and verification of model
behaviour, which made it possible to assess the teams by testing their models. The second driver for the reform
was John Biggs’ idea of constructive alignment (Biggs, 1996): there should be consistency between the learning
objectives, the teaching methods and the assessment methods. If the assessment methods, in particular, do
not match the learning objectives, students tend to take a surface approach to learning. Since this was a project
course, we wanted the assessment to focus on the project, so we dropped the written exam.
The question then became: How can we assign fair grades to individual members of the teams? We introduced
a variety of new assessment methods in order to better judge each student’s contribution and what they
had learned during the course. These methods comprised voluntary written exams, peer assessment (grading
and ranking of team members, and a mid-course review of another team’s report), self-assessment and an
oral group exam at the end of the course. By introducing these new assessment methods, the purpose of
assessment in the course shifted from being only summative (i.e. assigning a grade at the end of the course) to
also being formative (i.e. helping the students to learn throughout the course).
From the course evaluations we could draw several conclusions. Overall, the students were satisfied with the
new assessment package; only a few of them wanted a written exam at the end. This is encouraging, since it
was the first time the reformed course was given and many things were new to both the students and the
teachers. Most of the students found the voluntary exams helpful, but the peer assessment part turned
out to be more controversial. The mid-course review of other teams’ reports was mentioned only in positive
terms, while all the comments on peer grading/ranking were negative. The students did not mind criticizing
each other face-to-face but found it disturbing to grade each other anonymously. In general, they also found it
difficult to evaluate team members and the reports of other teams, and most of them believed that they were
doing the teacher’s job when grading/ranking their team members.
From a teacher's point of view, the new assessment package is more efficient. We are now more confident in
the grades we give. Moreover, using the new assessment methods did not take more time than using the
written exam. Finally, the work we put into assessment is now done during the course, not after it.
Action research consists of a spiral of cycles, where each cycle involves a new round of problem solving
generated by the previous one. We have so far completed only the first cycle. A key lesson from it is the
importance of making the assessment process and criteria clear to the students at the beginning of the course.
In the next cycle, we will address the following questions: What models have other teachers used for assigning
grades to individual students in team projects? Is it possible to improve the peer assessment part? How can we
give more rapid feedback on the voluntary exams? How can we make the most of the oral group exam? The
review by Segers et al. (2003) will provide us with a starting point for a more extensive exploration of the
literature in the area.
References
Biggs, J. (1996). Enhancing teaching through constructive alignment. Higher Education, 32, 347-364.
Kember, D., & Gow, L. (1992). Action research as a form of staff development in higher education. Higher Education, 23, 297-310.
Segers, M., Dochy, F., & Cascallar, E. (Eds.) (2003). Optimising new modes of assessment: In search of qualities and standards. Kluwer Academic Publishers.