Happy with my completion of the CS101 course, I embarked on my second foray into MOOC learning: the Human-Computer Interaction course at Coursera. The course started on 28th May 2012 and ran for five weeks. Again, I was very pleased with the course and saw it through to the end, even though the lectures seemed substantially longer and it was competing with my thesis writing for my time.
The course was a lot more ambitious than CS101 in terms of the assessments and the reward for participation. In the CS101 course, the criterion for receiving a certificate of completion was to achieve an average score above 80% across the assessments (which were all multiple-choice questions). In the HCI course, two different ‘tracks’ were offered:
– ‘apprentice’ track: as in the CS101 course, a certificate is issued on the basis of completing weekly multiple-choice quizzes, with an overall mark above 80%
– ‘studio’ track: awarded on the basis of achieving a certain mark in two components, the quizzes and project-based tasks, and participating in peer grading of the projects.
The HCI course was the first course at Coursera to use peer grading, so I was particularly interested to see how it went. Coursera regards peer grading as the way to make more sophisticated assessments (than multiple-choice questions) scalable. Essentially, by training students to mark each other’s assignments, a course can support thousands of students doing more sophisticated project- or essay-based work without needing to employ academics to mark it. The students would also likely get more educational value out of the course by learning vicariously from assessing others’ work – a win all round. I was also interested to see whether peer grading resonates with Ivan Illich’s concept of ‘learning webs’ (see his 1971 book, Deschooling Society).
It is an ambitious move though, and there were several issues which surfaced as a result of the peer grading process. I’m not going to go into the technical niggles which some students encountered, but am more interested in the less easily anticipated issues which came up on the forums:
- Resistance to peer grading: It’s hard to judge the scale (from a student’s viewpoint it’s very hard to tell how many people are taking a course; it could be hundreds, it could be tens of thousands), but there seemed to be a degree of resistance to the idea of peer grading. While some of this could be attributed to the technical issues, my pet theory is that it’s because peer grading is such a radically different model from ‘offline’ educational assessment, unlike anything the students have encountered in their education before. Narrated video lectures, for example, are simply a digital analogue of a teacher talking at the front of a bricks-and-mortar classroom; peer grading, on the other hand, is a very different move from having the instructor and TAs mark all the assignments, placing extra responsibility on the students themselves.
- Turning in blank assignments to get access to view the work of others: This was quite an intriguing behaviour which emerged, reported by markers in the forums seeking advice about how to handle it. The general opinion was that graders should give blank assignments a mark of zero. What intrigues me is that, despite the unfamiliar nature of the peer grading process and the fact that a zero mark would be useless in terms of assessment, some students chose to submit blank assignments, I assume in order to view the work of others and get the vicarious learning benefits. This seems to me an indication that the peer grading process does indeed offer educational value to the markers. It also raises some interesting questions about altruism, which remind me of an example from my previous academic life as a biologist. Pseudomonas aeruginosa is a type of bacterium. Amongst other things, it needs a certain amount of iron to survive, although iron is generally quite scarce in forms the bacteria can use directly. So it secretes siderophores, compounds which bind to free iron so that the bacteria can then take it up. P. aeruginosa has been used as the basis for many experiments on the evolution of co-operation, because in any population of the bacteria there is always a small proportion which do not produce their own siderophores. They ‘cheat’ instead: they avoid expending the energy and nutrients to make siderophores, but take up those released into the local environment by others. The system works while the cheaters remain a small minority; if the proportion of cheaters gets too high, it falls apart. (If you’d like to find out more about this, check out the work of Angus Buckling at the University of Oxford.)
Whether models of cooperation from evolutionary biology hold in online networks is something which has been incubating in my mind for a while (also in terms of open scholarship and reciprocity, inspired by a chapter in Martin Weller’s book The Digital Scholar), although I’m yet to think of an elegant way of experimenting with it.
- The need for assessment to be part of a conversation: Although I didn’t take part in the peer grading (alas, I didn’t have enough time to submit a project for the ‘studio’ track, so wasn’t eligible to grade others’ work), I get the impression that when assignments were presented to students to grade, they did not include any identifying information about the student who submitted the work, or any means of contacting them. The forums became a sort of unofficial backchannel, where markers posted to try to get in touch with the students whose work they were marking, on occasions where they wanted a bit more information about the project. I can see why grading would be anonymous – to prevent collusion, I guess – but this highlighted the need for more sophisticated assessments such as these project-based submissions to be part of a conversation between student and assessor, rather than the assessor simply being a human conduit for applying the assessment rubric.
One response to “HCI – Interesting issues with peer grading”
Students submitting blank assignments so they can see the work of other students? Who saw that one coming? I wonder whether these students do indeed return grades without having undertaken the task themselves, and how reliable their grades are compared to those of other students.
I’m taking the Introduction to Sociology course offered by Princeton via Coursera. One of the unanticipated outcomes was that students awarded “more than the total number of points” to their peers! (Sadly I hadn’t submitted any work and so could not benefit from this). This video from the course leader giving feedback on the midterm peer assessment is interesting viewing.