HCI – Interesting issues with peer grading

Happy with my completion of the CS101 course, my second foray into MOOC learning took the form of the Human-Computer Interaction course at Coursera. The course started on 28th May 2012 and ran for five weeks. Again, I was very pleased with the course and saw it through to the end, even though the lectures seemed substantially longer and it was competing against my thesis writing for my time.

The course was a lot more ambitious than CS101, in terms of the assessments and the reward for participation. In CS101, the criterion for receiving a certificate of completion was to achieve an average score of >80% across the assessments (which were all multiple-choice questions). In the HCI course, two different ‘tracks’ were offered:
– ‘apprentice’ track: like CS101, the certificate is issued on the basis of completing weekly multiple-choice questions (‘quizzes’), with a mark of >80% overall
– ‘studio’ track: awarded on the basis of achieving a certain mark in two components, the quizzes and project-based tasks, and participating in peer grading of the projects.

The HCI course was the first course at Coursera to use peer grading, so I was particularly interested to see how it went. Coursera regards peer grading as the way to make more sophisticated assessments (than multiple-choice questions) scalable. Essentially, by training students to mark each other’s assignments, the course can support thousands of students doing more sophisticated project- or essay-based work without needing to employ academics to mark it. The students would also likely get more educational value out of the course, by learning vicariously from assessing others’ work: a win all round. I was also interested to see whether peer grading resonates with Ivan Illich’s concept of ‘learning webs’ (see his 1971 book, Deschooling Society).
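To make the scalability argument concrete, here is a minimal sketch of how such a scheme might be organised. The allocation and aggregation choices (each student grades k submissions, never their own; scores are combined by taking the median) are my own illustrative assumptions, not Coursera’s actual algorithm.

```python
import random
import statistics

def assign_graders(students, k=3, seed=0):
    """Give every student k submissions to grade, never their own.

    A round-robin over a shuffled list: shifting by 1..k guarantees
    no self-grading (for k < class size), and because each shift is a
    permutation, every submission also receives exactly k graders.
    """
    rng = random.Random(seed)
    order = list(students)
    rng.shuffle(order)
    n = len(order)
    assignments = {s: [] for s in students}
    for offset in range(1, k + 1):
        for i, grader in enumerate(order):
            assignments[grader].append(order[(i + offset) % n])
    return assignments

def aggregate(scores):
    """Combine the peer scores for one submission; the median resists outliers."""
    return statistics.median(scores)

students = ["ana", "bo", "chen", "dia", "eve"]
tasks = assign_graders(students, k=3)
assert all(s not in graded for s, graded in tasks.items())  # no self-grading
print(tasks["ana"])           # the 3 submissions ana must grade
print(aggregate([8, 9, 2]))   # → 8; one rogue low score doesn't drag the mark down
```

The point is that the staff effort stays constant however many students enrol: only the training material and rubric need to be written once, while the grading load grows with, and is borne by, the class itself.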

It is an ambitious move, though, and several issues surfaced as a result of the peer grading process. I’m not going to go into the technical niggles which some students encountered, but am more interested in the less easily anticipated issues which came up on the forums:

  • Resistance to peer grading It’s hard to judge the scale (from the students’ viewpoint it’s very hard to tell how many people are taking a course; it could be hundreds, it could be tens of thousands), but there seemed to be a degree of resistance to the idea of peer grading. While some of this could be attributed to the technical issues, my pet theory is that it’s because peer grading is such a radically different model from ‘offline’ educational assessment, unlike anything the students have been used to before. For example, narrated video lectures are simply a digital analogue of a teacher talking at the front of a bricks-and-mortar classroom; peer grading, on the other hand, is a very different move from having the instructor and TAs mark all the assignments, placing extra responsibility on the students themselves.
  • Concerns about privacy This included concerns about identity (graders having access to photographs of people, required as part of the needfinding assessment), and about intellectual property (the projects by their nature being creative and looking for potentially commercially valuable technology solutions). The course’s Privacy Policy was explicit about the need for peers to see assessments and the compromises to privacy this entails (“The Coursera and HCI staff can only forbid but not control the distribution of your submitted work outside the confines of this web site.”; “By participating in the peer assessment process, you agree that: You will respect the privacy and intellectual property of your fellow students. Specifically, you will not keep a copy or make available to anyone else the work created by your classmates, unless you first obtain their permission.”), although I can see how this might seem less than reassuring.
  • Turning in blank assignments to get access to view the work of others This was quite an intriguing behaviour which emerged, reported by markers in the forums seeking advice about what to do about it. The general opinion was that graders should give a zero mark to blank assignments. What intrigues me is that despite the unfamiliar nature of the peer grading process, and the fact that a ‘zero’ mark would be useless in terms of assessment, some students chose to submit blank assignments, I assume to be able to view the work of others and get the vicarious learning benefits. This seems to me like an indication that the peer grading process does indeed offer educational value to the markers. It also raises some interesting questions about altruism, which remind me of an example from my previous academic life as a Biologist. Pseudomonas aeruginosa is a species of bacterium which, amongst other things, needs a certain amount of iron to survive, although iron is generally quite scarce in forms the bacteria can use directly. So it secretes siderophores, compounds which bind to free iron so that the bacteria can then take it up and use it. P. aeruginosa has been used as the basis for many experiments on the evolution of co-operation, because in any population of the bacteria there is always a small proportion which do not produce their own siderophores. They ‘cheat’ instead: they avoid expending the energy and nutrients to make siderophores, but take up those released into the local environment by others. If the proportion of cheaters gets too high, though, the system falls apart. (If you’d like to find out more about this, check out the work of Angus Buckling at the University of Oxford.)
Whether models of cooperation from evolutionary biology hold in online networks is something which has been incubating in my mind for a while (also in terms of open scholarship and reciprocity, inspired by a chapter in Martin Weller’s book The Digital Scholar), although I’ve yet to think of an elegant way of experimenting with it.
  • The need for assessment to be part of a conversation Although I didn’t take part in the peer grading (alas, I didn’t have enough time to submit a project for the ‘studio’ track, so wasn’t eligible to grade), I get the impression that when assignments were presented to students to grade, they did not include identifying information about the student who submitted the work, or how to contact them. The forums became a sort of unofficial backchannel, where markers posted to try to get in touch with the students who submitted the work, on occasions where they wanted a bit more information about the project they were marking. I can see why grading would be anonymous – to prevent collusion, I guess – but this highlighted the need for more sophisticated assessments such as these project-based submissions to be part of a conversation between student and assessor, rather than the assessor simply being a human conduit for the assessment rubric.
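The siderophore story above can be sketched as a toy public-goods game. The parameters and the replicator-style update below are illustrative assumptions of mine, not figures from any real experiment: producers pay a fixed cost, the pooled benefit is shared by everyone, and the better-paid type grows.

```python
def mean_payoffs(p_cheat, benefit=5.0, cost=1.0):
    """Payoffs when a fraction p_cheat of the population cheats.

    Producers pay `cost` to make siderophores; the resulting benefit is
    a public good, shared equally, and scales with the producer fraction.
    """
    producers = 1.0 - p_cheat
    shared = benefit * producers          # public good available to all
    return shared - cost, shared          # (producer payoff, cheater payoff)

def step(p_cheat, rate=0.1):
    """One round of replicator-style dynamics: the better-paid type grows."""
    coop_pay, cheat_pay = mean_payoffs(p_cheat)
    p = p_cheat + rate * p_cheat * (1 - p_cheat) * (cheat_pay - coop_pay)
    return min(max(p, 0.0), 1.0)

p = 0.05                                  # start with a small minority of cheaters
for _ in range(200):
    p = step(p)

coop_pay, cheat_pay = mean_payoffs(p)
print(f"final cheater fraction: {p:.2f}")                              # cheaters take over...
print(f"mean payoff: {coop_pay * (1 - p) + cheat_pay * p:.2f}")        # ...and everyone ends up worse off
```

Because cheaters always save the production cost while enjoying the same shared benefit, they spread until almost no one makes siderophores and the average payoff collapses towards zero: exactly the ‘system falls apart’ outcome, in miniature.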


CS101 – Who studies on a MOOC?

Since my goal as a MOOC student is to enhance my knowledge of Computer Science, CS101 at Coursera was my first MOOC. Actually, that isn’t strictly true; earlier in the year, I had signed up for the Learning Analytics MOOC, although I did not stick with it and only dropped in to selected online sessions during the course. I think this was due to a combination of factors, mainly that I was very busy at the time with my ‘real life’ course; by contrast, the assessments in CS101 really helped me to keep up and stay focused along the way. CS101 was certainly my first MOOC in the sense that it was the first I completed!

The course started on 23rd April 2012 and ran for six weeks. As a first taste of the Coursera platform, I was impressed; the course was well structured and organised, and the teaching materials were well thought through and engaging. I was slightly disappointed that the videos were not reusable (the lectures on computer networking would have been nice to incorporate into my online notes on Web Science, for example); to me, a crucial part of the concept of Open Educational Resources (OER) is reusability and remixability. Generally, the course materials were published under a Creative Commons Attribution-ShareAlike 3.0 licence; however, the video lectures were exempted from this and remained the copyright of Stanford University. So while the course is free, and anyone can study on it, whether it is OER is debatable.

I would guess that, being an introductory-level course and one of the first courses offered by Coursera, this was for many of the participants their first experience of a MOOC and of the Coursera platform. As a result, students were keen to introduce themselves and find out a bit about their classmates, and several forum threads sprang up for introductions. For me, with my background in e-learning research, this provided a fascinating insight into the reasons why people would choose to study on a MOOC, and into the students’ backgrounds. This is an interesting topic because while there have been suggestions that courses like this are mainly taken by students who are already educationally privileged (e.g. Anya Kamenetz, Who can learn online, and how?), I don’t think that there is a lot of real data being used to explore this.

I had intended to analyse one of the forum threads in order to address this; however, it quickly became apparent that even taking just one thread, this is a hell of a lot of data! I’m still hoping to do this analysis at some point in the future. In the meantime, I don’t agree with the idea that MOOCs such as this serve just to make the elite even smarter; while I did see highly motivated high school students and undergraduates supplementing their formal education with the CS101 course, this is too much of a generalisation in my opinion. I also saw more senior students, whose last formal study was 30+ years ago, looking to get up to date with modern programming languages; and stay-at-home mothers taking the course, sometimes with their young children. Let’s not forget too that the ‘mainstream MOOCs’ such as Coursera, Udacity and EdX are very new, and the early adopters of a technology (I use adopter to mean students here) may be more tech-savvy and inclined to experiment (see the ‘technology adoption lifecycle’; as it’s early days, mainstream MOOCs are probably in the ‘innovators’ phase right now). I would expect the demographic to shift a bit as the platforms become better known and more widely adopted across society.

It would be really interesting to catch up with students across a range of backgrounds (not just those looking to enter formal higher education, but not excluding them either) say a year after the course, to see whether the course enabled them to achieve their broader goals, and how what they learned had been used in practice. It’s an exciting time for figuring out what the mainstream MOOCs mean for opening up learning and for reconfiguring the relationship with higher education. What is needed, though, is more data about the phenomenon; the MOOC platforms are sitting on a goldmine in terms of data to answer questions such as who can learn online.
