IHT&S – the ‘p’ words: peer grading, plagiarism and patch writing

The Internet History, Technology & Security (IHT&S) course began in July 2012 and ran for seven weeks. It was quite an interesting contrast to the courses I had taken previously: while the topic is aligned with Computer Science, the course also had a historical slant and a more interpretive nature. Various thoughts:

A course which felt more like a course

IHT&S felt a lot more like taking an actual course. In the CS101 and HCI courses, my usual study pattern was simply to spend 1-2 hours on a Sunday afternoon watching the lectures and another 10 minutes answering the short multiple choice questions or doing the coding problems; this course felt like a bigger commitment somehow, and I have been giving some thought to why that might be.

  • The lecture load was similar, so that was not the cause.
  • One key difference was that the instructor (‘Dr. Chuck’) seemed to be more pro-active, even holding informal office hours in US cities he happened to be visiting during the run of the course.
  • The course was more demanding in its assignments (compared to CS101 or the ‘apprentice’ track of the HCI course – not the HCI ‘studio’ track, which is the most sophisticated assessment method and best use of peer grading I’ve seen so far); the quizzes (multiple choice questions) were longer, with questions worded in ways which required a greater degree of thought, and the course also included a short peer graded essay (200-400 words). Note that this essay was initially intended as a ‘warm up’ for a more complex peer graded midterm exam, although the midterm was abandoned (see below). This was the first Coursera course I had taken which included a final exam, although I’m not sure that this really affected my thinking or study pattern during the course; that is, I didn’t feel more pressure or hold the course in higher esteem because there was an exam.

First experience of peer grading

Although the HCI course included a peer graded project (for the ‘studio’ track), I hadn’t been able to take part in it due to time pressures (so completed the ‘apprentice’ track, via quizzes alone). The original assessment plan for IHT&S had been mainly weekly quizzes, with a short peer graded essay around week 3, a peer graded midterm exam (recall the course was seven weeks long), and a final exam. This seemed like quite an ambitious mix at the start of the course, and in practice it was modified. The first peer graded essay (200-400 words) was clearly intended as a practice run, to familiarise students with the peer grading process and the use of rubrics, as everyone got full marks for taking part regardless of how their peers had graded them.

Although the peer grading assignment was quite short at 200 to 400 words, I found it required considerably more time than the quizzes. Marking my peers’ assignments felt awkward at first – particularly if they had done outside reading and were using examples which I couldn’t be sure were correct – but it quickly became quite an enjoyable exercise. However, I’m not sure how much I actually learned from it: the essays assigned to me frequently failed to answer the actual question, instead repeating the lecture content without much focus, while for those which used examples from outside the lecture material, I couldn’t judge how reliable those examples were. Probably the most valuable aspect of doing peer grading was getting practice at delivering balanced, constructive feedback!

In light of the peer graded assignment, the instructor decided to abandon the peer graded midterm and make further peer graded assignments optional. It was not entirely clear why this change was made; the impression I got was that there had been wide variation in how consistently students interpreted the marking criteria (although my cynical side suspects it was to promote retention of students; I suspect far fewer students completed the peer grading assignment than tend to complete the quizzes – see, e.g., Tim Owens’ ‘Failing Coursera’ post and discussion; this article with Dr. Chuck also confirms a drop in the number of students completing peer grading). I had found the rubric easy to use, albeit a bit simplistic, though probably adequate for a short essay; on the discussion forums, some criticised its lack of recognition for original thought and critical thinking. One thing the rubric did not address – and by far the most discussed and controversial topic on the forums – was how to deal with plagiarism. Although the rubric did not ask students to look for plagiarism, many took it upon themselves to Google the content, and a slew of accusations was aired (some of them false, as others pointed out).

The issue of plagiarism in this course, and in the science fiction course running simultaneously, attracted a lot of negative publicity for Coursera, prompting Daphne Koller to state that Coursera would consider implementing anti-plagiarism software. Personally, I’m not sure how effective this would be for an exercise like the one in the IHT&S course; if you have 5,000 (or however many – probably at least 5,000; for the sake of argument, LOTS) students all writing 200-400 words on how Bletchley Park in WW2 is an example of the intersection of people, information and technology, you’ll probably get quite a few similar essays, simply because of the narrow focus and word limit (a bit like the infinite monkey theorem – except the variables are a lot more controlled here).
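
To make that worry concrete, here is a minimal sketch of the sort of overlap measure an anti-plagiarism tool might compute – Jaccard similarity over word trigrams. The measure and the sample essays are my own illustrative assumptions, not anything Coursera has described; the point is simply that two independently written answers to the same narrow prompt already overlap noticeably.

```python
# A toy similarity check: Jaccard similarity over word trigrams.
# The essays below are invented examples, not real submissions.

def trigrams(text: str) -> set:
    """Return the set of word trigrams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def jaccard(a: str, b: str) -> float:
    """Fraction of trigrams shared between two texts (0.0 to 1.0)."""
    ta, tb = trigrams(a), trigrams(b)
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0

essay_1 = ("Bletchley Park shows the intersection of people, "
           "information and technology during WW2.")
essay_2 = ("During WW2, Bletchley Park shows how people, "
           "information and technology intersect.")

# Two independently phrased answers still share several trigrams.
print(f"similarity: {jaccard(essay_1, essay_2):.2f}")
```

With thousands of 200-400 word essays on one prompt, a crude threshold on a score like this would presumably flag plenty of innocent pairs.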

However, let’s not forget that the HCI course used peer grading too – and there was no mention of the ‘p’ word. The peer graded assignments were a lot more demanding in the HCI course, where students effectively undertook an entire project – something which is a lot more difficult, impossible even, to cobble together from Wikipedia. I think that the important message here is to use peer grading for larger, more challenging assessments such as projects; if something can be addressed in as little as 200 words, it could probably be assessed just as well through quizzes.

Questioning the necessity of English as the MOOC lingua franca

One question raised by ‘plagiarism-gate’ is why: why would someone choose to plagiarise rather than simply write 200 words? I wonder if, in some cases at least, it is due to differing levels of essay writing skill rather than deliberate cheating – patch writing is arguably part of a learning curve, for example.

Given that the idea of using peer grading is to bypass the instructor and place the burden of marking on the students themselves, why must all students write in English? One of the first things that Dr. Chuck flagged up in his introductory lecture was the course’s international nature, encouraging students to form study groups in their locality. Students were also encouraged to add subtitles translating the lectures into different languages.

I suppose that the argument against would be that if there only happened to be, say, five people speaking your language, and you all knew each other from your study group, collusion could be an issue. However, realistically this is probably a fairly low risk, and a field could easily be added to peer grading submissions to specify which language the essay is written in, matching it to peers who wrote in the same language – for the major languages at least (a rough sketch of how this might work is below). This might help with the plagiarism issue, and would also assuage the concerns raised by some in the forums about markers’ English proficiency.
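
As a thought experiment, here is a minimal sketch of such language-based matching. Everything here – the field names, the minimum pool size, the fallback to English – is my own assumption about how it could work, not a description of Coursera’s system.

```python
import random
from collections import defaultdict

MIN_POOL = 20  # assumed threshold; smaller pools fall back to English

def assign_graders(submissions, k=3):
    """Assign each submission k peer graders who declared the same
    language. submissions: list of dicts with 'student_id' and
    'language' keys (a hypothetical extra field on the submission form).
    """
    pools = defaultdict(list)
    for sub in submissions:
        pools[sub["language"]].append(sub)
    # Fold tiny pools into English, to limit the collusion risk of
    # five classmates who all know each other grading one another.
    for lang in [l for l in pools if l != "en" and len(pools[l]) < MIN_POOL]:
        pools["en"].extend(pools.pop(lang))
    assignments = {}
    for pool in pools.values():
        ids = [s["student_id"] for s in pool]
        for sid in ids:
            others = [g for g in ids if g != sid]
            assignments[sid] = random.sample(others, min(k, len(others)))
    return assignments
```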


HCI – Interesting issues with peer grading

Happy with my completion of the CS101 course, I made my second foray into MOOC learning with the Human-Computer Interaction course at Coursera. The course started on 28th May 2012 and ran for five weeks. Again, I was very pleased with the course and saw it through to the end, even though the lectures seemed substantially longer and the course was competing with my thesis writing for my time.

The course was a lot more ambitious than CS101 in terms of the assessments and the reward for participation. In the CS101 course, the criterion for receiving a certificate of completion was to achieve an average score above 80% across the assessments (which were all multiple choice questions). In the HCI course, two different ‘tracks’ were offered:
– ‘apprentice’ track: as in the CS101 course, a certificate is issued on the basis of completing weekly multiple-choice quizzes, with an overall mark above 80%
– ‘studio’ track: awarded on the basis of achieving a certain mark in two components – the quizzes and project-based tasks – and participating in peer grading of the projects.

The HCI course was the first at Coursera to use peer grading, so I was particularly interested to see how it went. Coursera regards peer grading as the way to make more sophisticated assessments (than multiple-choice questions) scalable. Essentially, by training the students to mark each other’s assignments, a course can support thousands of students doing more sophisticated project- or essay-based work without needing to employ academics to mark it, and the students likely get more educational value out of the course by learning vicariously from assessing others’ work – a win all round. I was also interested to see whether peer grading resonates with Ivan Illich’s concept of ‘learning webs’ (see his 1971 book, Deschooling Society).
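
For what it’s worth, the mechanical core of the scalability argument is simple. Here is a minimal sketch of aggregating several peer grades per submission – taking the median so one careless grade doesn’t decide the mark. The median rule and the grader count are my own assumptions; Coursera’s actual aggregation method wasn’t disclosed to students.

```python
from statistics import median

def aggregate(peer_grades: dict) -> dict:
    """Map each submission id to its final mark, taking the median of
    the rubric scores awarded by its randomly assigned peer graders."""
    return {sub: median(scores) for sub, scores in peer_grades.items()}

# One grader awarding 3 is damped by the other four; the result is 8.
print(aggregate({"sub-001": [8, 9, 3, 8, 7]}))
```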

It is an ambitious move though, and there were several issues which surfaced as a result of the peer grading process. I’m not going to go into the technical niggles which some students encountered, but am more interested in the less easily anticipated issues which came up on the forums:

  • Resistance to peer grading The scale is hard to judge (from the students’ viewpoint it’s very difficult to tell how many people are taking a course; it could be hundreds, it could be tens of thousands), but there seemed to be a degree of resistance to the idea of peer grading. While some of this could be attributed to the technical issues, my pet theory is that it’s because peer grading is such a radically different model from ‘offline’ educational assessment, unlike anything the students have encountered in their education before. For example, narrated video lectures are simply a digital analogue of a teacher talking at the front of a bricks-and-mortar classroom; peer grading, on the other hand, is a very different move from having the instructor and TAs mark all the assignments, putting extra responsibility on the students themselves.
  • Concerns about privacy This included concerns about identity (graders having access to photographs of people, required as part of the needfinding assessment), and also about intellectual property (the projects by their nature being creative and looking for – potentially commercially valuable – technology solutions). The course’s Privacy Policy was explicit about the need for peers to see assessments and the compromises to privacy this entails (“The Coursera and HCI staff can only forbid but not control the distribution of your submitted work outside the confines of this web site.”; “By participating in the peer assessment process, you agree that: You will respect the privacy and intellectual property of your fellow students. Specifically, you will not keep a copy or make available to anyone else the work created by your classmates, unless you first obtain their permission.”), although I can see how this might seem less than reassuring.
  • Turning in blank assignments to get access to view the work of others This was quite an intriguing behaviour which emerged, reported by markers in the forums seeking advice about what to do about it. The general opinion was that graders should give a zero mark to blank assignments. What intrigues me is that despite the unfamiliar nature of the peer grading process, and the fact that a zero mark would be useless in terms of assessment, some students chose to submit blank assignments – I assume to be able to view the work of others and get the vicarious learning benefits. This seems to me an indication that the peer grading process does indeed offer educational value to the markers. It also raises some interesting questions about altruism, and reminds me of an example from my previous academic life as a biologist. Pseudomonas aeruginosa is a type of bacteria. Amongst other elements, it needs a certain amount of iron to survive, yet iron is generally quite scarce in forms the bacteria can use directly. So it secretes siderophores – compounds which bind to free iron – which the bacteria can then take up and use. P. aeruginosa has been used as the basis for many experiments on the evolution of co-operation because, in any population of the bacteria, there is always a small proportion which do not produce their own siderophores. They ‘cheat’ instead: they avoid expending the energy and nutrients to make siderophores, but take up those released into the local environment by others. A small proportion of cheats can persist, but if the proportion gets too high, the system falls apart (a toy simulation of this dynamic is sketched after this list; if you’d like to find out more, check out the work of Angus Buckling at the University of Oxford). Whether models of co-operation from evolutionary biology hold in online networks is something which has been incubating in my mind for a while (also in terms of open scholarship and reciprocity, inspired by a chapter in Martin Weller’s book The Digital Scholar), although I’m yet to think of an elegant way of experimenting with it.
  • The need for assessment to be part of a conversation Although I didn’t take part in the peer grading (alas, I didn’t have enough time to submit a project for the ‘studio’ track, so wasn’t able to do any grading myself), I get the impression that when assignments were presented to students to grade, they did not include identifying information about the student who submitted the work, or any way to contact them. The forums became a sort of unofficial backchannel where markers posted to try to get in touch with the students who submitted the work, on occasions where they wanted a bit more information about the project they were marking. I can see why grading would be anonymous – to prevent collusion, I guess – but this highlighted the need for more sophisticated assessments such as these project-based submissions to be part of a conversation between student and assessor, rather than the assessor simply being a human conduit for applying the rubric.
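
Here is the toy simulation of the producer/cheat dynamic promised above – a bare-bones replicator-style model in which everyone shares the iron bound by producers, but only producers pay the cost of making siderophores. The cost and benefit values are invented for illustration; none of this is taken from the Buckling lab’s actual experiments.

```python
# Toy model: siderophore producers vs 'cheats' in a well-mixed population.
# All parameter values are illustrative assumptions.

def step(p_cheat, cost=0.2, benefit=0.5):
    """One generation of replicator dynamics. The shared benefit (iron
    bound by siderophores) scales with the fraction of producers, but
    only producers pay the production cost."""
    shared = benefit * (1 - p_cheat)
    fit_producer = 1 + shared - cost
    fit_cheat = 1 + shared          # cheats enjoy the benefit cost-free
    mean = p_cheat * fit_cheat + (1 - p_cheat) * fit_producer
    return p_cheat * fit_cheat / mean, mean

p, mean_fitness = 0.05, None
for generation in range(60):
    p, mean_fitness = step(p)

# Cheats spread towards fixation, and as producers vanish the shared
# benefit collapses: mean fitness sinks back towards the baseline of 1.
print(f"cheat frequency: {p:.2f}, mean fitness: {mean_fitness:.2f}")
```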


CS101 – Who studies on a MOOC?

Since my goal as a MOOC student is to enhance my knowledge of Computer Science, CS101 at Coursera was my first MOOC. Actually, that isn’t strictly true; earlier in the year, I had signed up for the Learning Analytics MOOC, although I did not stick with it and only dropped in to selected online sessions during the course. I think that was due to a combination of factors, mainly that I was very busy at the time with my ‘real life’ course; in contrast, the assessments in CS101 really helped me to keep up and stay focused along the way. CS101 was certainly my first MOOC in the sense that it was the first I completed!

The course started on 23rd April 2012 and ran for six weeks. As a first taste of the Coursera platform, I was impressed; the course was well structured and organised, and the teaching materials were well thought through and engaging. I was slightly disappointed that the videos were not reusable (the lectures on computer networking would have been nice to incorporate into my online notes on Web Science, for example); to me, a crucial part of the concept of Open Educational Resources (OER) is reusability and remixability. Generally, the course materials were published under a Creative Commons Attribution-ShareAlike 3.0 license; however, the video lectures were exempted from this and remained the copyright of Stanford University. So while the course is free, and anyone can study on it, whether it is OER is debatable.

I would guess that, this being an introductory-level course and one of the first offered by Coursera, for many of the participants it was their first experience of a MOOC and of the Coursera platform. As a result, students were keen to introduce themselves and find out a bit about their classmates, and several forum threads sprang up for introductions. For me, with my background in e-learning research, this provided a fascinating insight into the reasons why people choose to study on a MOOC, and into the students’ backgrounds. This is an interesting topic because, while there have been suggestions that courses like this are mainly taken by students who are already educationally privileged (e.g. Anya Kamenetz, Who can learn online, and how?), I don’t think there is a lot of real data being used to explore the question.

I had intended to analyse one of the forum threads in order to address this; however, it quickly became apparent that even a single thread is a hell of a lot of data! I’m still hoping to do this analysis at some point in the future. In the meantime, I don’t agree with the idea that MOOCs such as this serve only to make the elite even smarter; while I did see highly motivated high school students and undergraduates supplementing their formal education with the CS101 course, that is too much of a generalisation in my opinion. I also saw more senior students, whose last formal study was 30+ years ago, looking to get up to date with modern programming languages, and stay-at-home mothers taking the course, sometimes with their young children. Let’s not forget too that the ‘mainstream MOOCs’ such as Coursera, Udacity and EdX are very new, and the early adopters of a technology (I use adopter to mean student here) may be more tech-savvy and inclined to experiment (see the ‘technology adoption lifecycle’; as it’s early days, mainstream MOOCs are probably in the ‘innovators’ phase right now). I would expect the demographic to shift as the platforms become better known and more widely adopted across society.

It would be really interesting to catch up with students across a range of backgrounds (not just those looking to enter formal higher education, but not excluding them either) say a year after the course, to see whether the course enabled them to achieve their broader goals, and how what they learned had been used in practice. It’s an exciting time for figuring out what the mainstream MOOCs mean for opening up learning and reconfiguring the relationship with higher education. What is needed, though, is more data about the phenomenon; the MOOC platforms are sitting on a goldmine in terms of data to answer questions such as who can learn online.
