IHT&S – the ‘p’ words: peer grading, plagiarism and patch writing

The Internet History, Technology & Security (IHT&S) course began in July 2012 and ran for seven weeks. It was an interesting contrast to the courses I had taken previously: while the topic is aligned with Computer Science, the course also had a historical and interpretive slant. Various thoughts:

A course which felt more like a course

IHT&S felt much more like taking an actual course. In the CS101 and HCI courses, my usual study pattern was simply to spend 1-2 hours on a Sunday afternoon watching the lectures and another 10 minutes answering the short multiple choice questions or doing the coding problems; this course somehow felt like a bigger commitment, and I have been giving some thought to why that might be.

  • The lecture load was similar, so this was not responsible.
  • One key difference was that the instructor (‘Dr. Chuck’) seemed to be more pro-active, even holding informal office hours in US cities he happened to be visiting during the run of the course.
  • The course was more demanding in its assignments (compared to CS101 or the ‘apprentice’ track of the HCI course – not the HCI ‘studio’ track, which is the most sophisticated assessment method and best use of peer grading I’ve seen so far). The quizzes (multiple choice questions) were longer, with questions worded in ways which required a greater degree of thought, and the course also included a short peer graded essay (200-400 words). Note that this essay was initially intended as a ‘warm up’ for a more complex peer graded midterm exam, although that was abandoned (see below). This was also the first Coursera course I had taken which included a final exam, although I’m not sure that this really affected my thinking or study pattern during the course; that is, I didn’t feel more pressure or hold the course in higher esteem because there was an exam.

First experience of peer grading

Although the HCI course included a peer graded project (for the ‘studio’ track), I hadn’t been able to take part in it due to time pressures (so I completed the ‘apprentice’ track, via quizzes alone). The original assessment plan for IHT&S had been mainly weekly quizzes, with a short peer graded essay around week 3, a peer graded midterm exam (recall the course was 7 weeks long), and a final exam. This seemed like quite an ambitious mix at the start of the course, and in practice it was modified. The first peer graded essay (200-400 words) was clearly intended as a practice run, to familiarise students with the peer grading process and the use of rubrics, as everyone got full marks for taking part regardless of how their peers had graded them.

Although the peer graded assignment was quite short at 200 to 400 words, I found it required quite a lot more time than the quizzes. Marking my peers’ assignments felt awkward at first – particularly if they had done outside reading and were using examples I couldn’t be sure were correct – but it quickly became quite an enjoyable exercise. However, I’m not sure how much I actually learned from it: in the essays assigned to me, at least, many failed to answer the actual question, repeating the lecture content without much focus, and for those which drew on examples from outside the lecture material, I couldn’t be sure how reliable that material was. Probably the most valuable aspect of doing peer grading was getting practice at delivering balanced, constructive feedback!

In light of the peer graded assignment, the instructor decided to abandon the peer graded midterm and make further peer graded assignments optional. It was not entirely clear why this change was made; the impression I got was that students had interpreted the marking criteria inconsistently (although my cynical side suspects it was to promote student retention; I suspect far fewer students completed the peer grading assignment than tend to complete the quizzes – see, for example, Tim Owens’ ‘Failing Coursera’ post and discussion; this article with Dr. Chuck also confirms a drop in the number of students completing peer grading). I had found the rubric easy to use, albeit a bit simplistic, but probably adequate for a short essay; on the discussion forums, some criticised its lack of recognition for original thought and critical thinking. Something not addressed by the rubric, and by far the most discussed and controversial topic on the forums, was how to deal with plagiarism. Although the rubric did not ask students to look for plagiarism, many took it upon themselves to Google the content, and a slew of accusations were aired (some of them false, as others pointed out).

The issue of plagiarism in this course and in the science fiction course running simultaneously attracted a lot of negative publicity for Coursera, prompting Daphne Koller to state that Coursera would consider implementing anti-plagiarism software. Personally, I’m not sure how effective this would be for an exercise like the one in the IHT&S course: if you have 5,000 (or whatever – probably at least 5,000; for the sake of argument, LOTS) students all writing 200-400 words on how Bletchley Park in WW2 is an example of the intersection of people, information and technology, you’ll probably get quite a few essays which are similar, simply because of the narrow focus and word limit (a bit like the infinite monkey theorem – except the variables are a lot more controlled here).
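
To illustrate the point, here is a minimal sketch in Python (entirely hypothetical, and not anything Coursera actually used) of the kind of naive word n-gram overlap check that simple similarity detectors rely on; two independently written answers to the same narrow prompt can still score a noticeable overlap.

```python
# Hypothetical sketch: how similar do two short essays look to a naive
# n-gram ("shingle") overlap check? Not Coursera's tooling.
import re

def shingles(text, n=3):
    """Return the set of lowercase word n-grams in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=3):
    """Jaccard similarity between the shingle sets of two texts (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

essay_a = ("Bletchley Park brought together mathematicians, linguists and "
           "engineers, combining people, information and technology to break "
           "the Enigma cipher during the Second World War.")
essay_b = ("During the Second World War, Bletchley Park combined people, "
           "information and technology: mathematicians and engineers worked "
           "together to break the Enigma cipher.")

# Two independently written answers to the same narrow prompt still overlap
# noticeably (roughly a quarter of their trigrams are shared here).
print(jaccard(essay_a, essay_b))
```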

However, let’s not forget that the HCI course used peer grading too – and there was no mention of the ‘p’ word. The peer graded assignments were a lot more demanding in the HCI course, where students effectively undertook an entire project – something which is a lot more difficult, impossible even, to cobble together from Wikipedia. I think that the important message here is to use peer grading for larger, more challenging assessments such as projects; if something can be addressed in as little as 200 words, it could probably be assessed just as well through quizzes.

Questioning the necessity of English as the MOOC lingua franca

A question raised by ‘plagiarism-gate’ is why: why would someone choose to plagiarise rather than simply write 200 words? I wonder if, in some cases at least, it is due to differing levels of essay-writing skill rather than deliberate cheating – patch writing, for example, is arguably part of a learning curve.

Given that the idea of peer grading is to bypass the instructor and place the burden of marking on the students themselves, why must all students write in English? One of the first things Dr. Chuck flagged up in his introductory lecture was the course’s international nature, encouraging students to form study groups in their locality. Students have also been encouraged to add subtitles to the lectures, translating them into different languages.

I suppose the argument against this would be that if only, say, five people spoke your language, and you all knew each other from your study group, collusion could be an issue. Realistically, though, this is probably a fairly low risk, and a field could easily be added to peer grading submissions to specify the language it is written in, so that each essay could be matched to peers who wrote in the same language – for the major languages at least. This might help with the plagiarism issue and also assuage concerns raised in the forums about markers’ English proficiency.
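
As a rough sketch of that matching idea (the field names and thresholds here are my own assumptions, not any feature Coursera offers), grouping submissions by a self-declared language and assigning graders within the same group could look something like this:

```python
# Hypothetical sketch of language-matched peer grading: group submissions by a
# self-declared 'language' field and assign each essay to peers who wrote in
# the same language, folding tiny groups into a common pool to limit collusion.
import random
from collections import defaultdict

def assign_peers(submissions, graders_per_essay=3, min_group=10, fallback="en"):
    """submissions: list of dicts like {'id': 42, 'language': 'es'}.
    Returns {essay_id: [peer essay ids whose authors grade it]}."""
    groups = defaultdict(list)
    for s in submissions:
        groups[s["language"]].append(s["id"])

    # Fold language groups that are too small into the fallback pool.
    pooled = defaultdict(list)
    for lang, ids in groups.items():
        key = lang if len(ids) >= min_group else fallback
        pooled[key].extend(ids)

    assignments = {}
    for ids in pooled.values():
        for essay in ids:
            others = [i for i in ids if i != essay]
            assignments[essay] = random.sample(
                others, min(graders_per_essay, len(others)))
    return assignments
```

Folding small language groups into a common pool also addresses the collusion worry: you would only be matched within your own language where the group is large enough for assignments to stay effectively anonymous.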
