Found this whilst tidying up my desktop and am posting it here in case it is of interest: last summer I was invited to create a one-minute video about my current thoughts on MOOCs for Glasgow Social Media Week, which took place in September 2012. Click here to view the video.
In my first post, I remarked upon some of the demographic trends I had observed when taking the CS101 course at Coursera, and expressed my hope that the MOOC providers would share some of their data to help the wider world understand the impact of MOOCs.
Dr. Chuck, the instructor of the Internet History, Technology & Security (IHT&S) course, took the initiative to circulate a demographic survey – and to share the results with the course participants. He encouraged us to reflect on the data and blog about it, so here we are: some answers to my first question of ‘who studies a MOOC?’. Graphs have been drawn and interpreted by me; the data is gratefully courtesy of Dr. Chuck. It comes with the following ‘health warning’ from him: “Of course the caveat is that it is not scientific, it is partial, incomplete, your results may vary, void where prohibited, etc etc etc. It is anecdotal at best but certainly interesting.” We can’t tell how representative the sample is of the course as a whole, or assume that IHT&S can be generalised to other courses, but it does provide an interesting insight and raises some interesting questions.
Note: ‘Associate degree’ denotes 2 years of undergraduate-level study; ‘Bachelors degree’ was described as 4 years of undergraduate-level study, based on the American model.
- Male students outnumber female students by 2:1. Why? How much does the gender balance depend on the course topic?
- The course was fairly popular across the whole range of ages (I’m not sure why an under-18 category was not included). The modal category is ’25 to 34 years old’; interestingly, this is the category immediately after the one into which university-level study would typically fall. Does this indicate the importance of MOOCs as a next stage in lifelong learning for the recent graduate? Is this a response to career pressures – MOOCs as a way to get ahead in the workplace?
- Most respondents already have a degree – either undergraduate or masters; relatively few have doctorates.
Students’ previous experiences of online learning
- While the course is the first MOOC that most students have taken, more than half of the respondents have taken online courses before. Are MOOCs particularly attractive to students who have previously studied online? Do they have different expectations of the MOOC environment from students who have not studied online before?
Reasons for taking the course
Note: Respondents could select multiple answers about their motivation for taking the course.
- Given that respondents could select multiple responses about their motivations for taking the course, it is in a sense more meaningful to focus on the categories which people did not select. In this case, it is notable that the lowest-response categories – ‘Supplement other college/university classes’ and ‘Decide if I want to take college/university classes on the topic’ – are the ones which relate study to formal higher education structures.
- Non-students outnumber those in formal education by approx 5:2.
Reuse and OER
- While this pair of questions could suggest that most or all teachers taking the course would consider reusing the course materials in their own teaching, these responses should be treated with caution, as there were more positive responses to the reuse question (510) than respondents who indicated that they are actually teachers (451).
So: I’m intrigued by the gender differences, and by the indication that MOOCs may be playing an important role in initiating lifelong learning in the years after formally leaving the academy. Of course, this is quite speculative, as the data here is quite limited and comes from only one course. I’d be very interested to hear others’ take on the data – please do feel free to leave a comment here.
The Internet History, Technology & Security (IHT&S) course began in July 2012, and ran for seven weeks. It was quite an interesting contrast to the courses I had taken previously, because while the topic is aligned with Computer Science, it also had a historical stance and interpretive nature. Various thoughts:
A course which felt more like a course
IHT&S felt much more like taking an actual course. In the CS101 and HCI courses, my usual study pattern was simply to spend 1-2 hours on a Sunday afternoon watching the lectures and another 10 minutes answering the short multiple choice questions or doing the coding problems; this course somehow felt like a bigger commitment, and I have been giving some thought to why that might be.
- The lecture load was similar, so this was not the reason.
- One key difference was that the instructor (‘Dr. Chuck’) seemed to be more pro-active, even holding informal office hours in US cities he happened to be visiting during the run of the course.
- The course was more demanding in its assignments (compared to the CS101 or ‘apprentice track’ of the HCI course – not the HCI ‘studio’ track, which is the most sophisticated assessment method and best use of peer grading I’ve seen so far); the quizzes (multiple choice questions) were longer, with questions worded in ways which required a greater degree of thought, and it also included a short peer graded essay (200-400 words). Note that this essay was initially intended to be a ‘warm up’ for a more complex peer graded midterm exam, although this was abandoned (see below). This was the first Coursera course I had taken which included a final exam, although I’m not sure that this really impacted my thinking or study pattern during the course; that is, I didn’t feel more pressure or hold the course in higher esteem due to there being an exam.
First experience of peer grading
Although the HCI course included a peer graded project (for the ‘studio’ track), I hadn’t been able to take part in it due to time pressures (so I completed the ‘apprentice’ track, via quizzes alone). The original assessment plan for IHT&S had been mainly weekly quizzes, with a short peer graded essay around week 3, a peer graded midterm exam (recall the course was 7 weeks long), and a final exam. This seemed like quite an ambitious mix at the start of the course, and in practice it was modified. The first peer graded essay (200-400 words) was clearly intended as a practice run, to familiarise students with the peer grading process and the use of rubrics, as everyone got full marks for taking part regardless of how their peers had graded them.
Although the peer grading assignment was quite short at 200 to 400 words, I found it required quite a lot more time than the quizzes. Marking my peers’ assignments felt awkward at first – particularly where they had done outside reading and used examples which I couldn’t be sure were correct or not – but it quickly became quite an enjoyable exercise. However, I’m not sure how much I actually learned from it: frequently the essays assigned to me had failed to answer the actual question, repeating the lecture content while lacking focus; and for those which used examples from outside the lecture material, I couldn’t be sure how reliable the material was or whether I could trust it. Probably the most valuable aspect of doing peer grading was getting practice at delivering balanced, constructive feedback!
In light of the peer graded assignment, the instructor decided to abandon the peer graded midterm and make further peer graded assignments optional. It was not entirely clear why this change was made; the impression I got was that there had been wide variation in how students interpreted the rubric, making consistent marking difficult (although my cynical side thinks it was to promote student retention; I suspect far fewer students completed the peer grading assignment than tend to complete the quizzes – e.g. see Tim Owens’ ‘Failing Coursera’ post and discussion; this article with Dr. Chuck also confirms a drop in the number of students completing peer grading). I had found the rubric easy to use, albeit a bit simplistic, but probably OK for a short essay; on the discussion forums, some criticised its lack of recognition for original thought and critical thinking. Something which was not addressed by the rubric, and which was by far the most discussed and controversial topic on the forums, was how to deal with plagiarism. Although the rubric did not ask students to look for plagiarism, many took it upon themselves to Google the content, and a slew of accusations was aired (including some false ones, as others pointed out).
The issue of plagiarism in this course and the science fiction course being run simultaneously attracted a lot of negative publicity for Coursera, prompting Daphne Koller to state that Coursera would consider implementing anti-plagiarism software. Personally, I’m not sure how effective this would be for an exercise like the one in the IHT&S course; if you have 5,000 (or however many – probably at least 5,000; for the sake of argument, LOTS) students all writing 200-400 words on how Bletchley Park in WW2 is an example of the intersection of people, information and technology, you’ll probably get quite a few similar essays, simply because of the narrow focus and word limit (a bit like the infinite monkey theorem – except the variables are much more controlled here).
However, let’s not forget that the HCI course used peer grading too – and there was no mention of the ‘p’ word. The peer graded assignments were a lot more demanding in the HCI course, where students effectively undertook an entire project – something which is a lot more difficult, impossible even, to cobble together from Wikipedia. I think that the important message here is to use peer grading for larger, more challenging assessments such as projects; if something can be addressed in as little as 200 words, it could probably be assessed just as well through quizzes.
Questioning the necessity of English as the MOOC lingua franca
A question raised by ‘plagiarism-gate’ is: why would someone choose to plagiarise rather than simply write 200 words? I wonder if, in some cases at least, it is due to differing levels of essay-writing skill rather than deliberate cheating – patchwriting, for example, is arguably part of a learning curve.
Given that the idea of peer grading is to bypass the instructor and place the burden of marking on the students themselves, why must all students write in English? One of the first things that Dr. Chuck flagged up in his introductory lecture was the course’s international nature, encouraging students to form study groups in their locality. Students have also been encouraged to add subtitles to the lectures to translate them into different languages.
I suppose the argument against this would be that if only, say, five people spoke your language, and you knew each other from your study group, collusion could be an issue. Realistically, though, this is probably a fairly low risk, and a field could easily be added to peer grading submissions to specify the language the essay is written in and match it to a peer who wrote in the same language – for the major languages at least. This might help with the plagiarism issue, and also assuage concerns raised by some in the forums about markers’ English proficiency.