Networked Life, Social Network Analysis, & a new appreciation for feedback

Two key Coursera courses which did not fall by the wayside for me this Autumn were ‘Networked Life‘ (Michael Kearns, University of Pennsylvania) and ‘Social Network Analysis‘ (Lada Adamic, University of Michigan).

Both courses held a particular significance for me. Whereas the MOOCs I had taken up until this point had been ‘just for fun’, these courses were directly related to my PhD, which I started this Autumn. The courses were incredibly helpful as introductory social network analysis courses, giving me a head start methodologically, and saving me probably a couple of hundred pounds in face-to-face training courses. The courses were also the first MOOCs which I’ve found to be challenging; graph theory gave my brain the best mathematical workout since I did A-level maths (and I really do mean that as a compliment!).

Although the courses were ostensibly very similar, in terms of the subject matter and scope, they each provided very different MOOC experiences, and I ended up with very different marks – 98.8% for Networked Life, compared to 80.3% for Social Network Analysis (just scraping a certificate)! Here I’m going to try to explore the reasons behind this.

‘Networked Life’ (NL) had a simpler course structure than ‘Social Network Analysis’ (SNA). NL was slightly shorter, at 6 weeks, compared to SNA’s 8 weeks. In NL, the material was taught via video lectures (including simulations), and assessed by multiple-choice questions. Assessments were quite strict in that you could only attempt each quiz a maximum of twice. In contrast to the other Coursera courses I’ve taken in the past, all of the NL lectures and quizzes were available right from the start of the course, i.e. there was no weekly release of material. This was nice – I enjoyed being able to get further ahead when I had spare time, while the weekly quiz deadlines still kept me from falling behind when time was tighter. The quizzes were tough, but the feedback on the first attempt was helpful and constructive, which really helped in learning from mistakes.

The main mode of material delivery in SNA was also video lectures, including simulations. There were, however, two different ‘modes’ to the course: a basic certificate could be earned by gaining a mark of over 80% via the weekly multiple-choice quizzes and final exam, while a certificate with distinction could be earned by completing this plus additional programming assignments (usually multiple-choice questions relating to interrogating a network data set in a social network analysis package, plus a peer-assessed project). The multiple-choice questions in SNA were the hardest I have encountered in a MOOC yet, mainly because no feedback was given after each attempt – not even an indication of which questions were right or wrong! This was very frustrating, and has given me a whole new level of appreciation of feedback! I could generally do quite well on the assignments, but with the pass mark fixed at 80% right from the start of the course, the lack of feedback made it really hard to push my marks much higher, and I kept bumping along right on the pass mark.

I think it helped that I had chosen to do the optional programming assignments, as I had been doing a bit better on these (getting more like 90–100%). It was a bit of a surprise when I realised that I needed to complete a project though – I’d thought that the optional programming assignments were a sort of ‘extra credit’ affair, and didn’t realise ’til a week before the project was due that doing the programming assignments put me on the programming ‘track’, meaning I would fail if I did not submit a project. If you are interested, here is the project that I submitted – if you refer to it anywhere, please use the suggested citation at the top of the PDF file. I got a bit carried away and marked 25 peer assessments! It was compelling to see what other students had chosen for their projects. I think this was the advantage of the SNA course over NL; NL was better for covering the basic concepts, but SNA offered the opportunity to think about and apply what you had learned, which was invaluable for me.

It was a bit of a mystery to me whether I would actually get a certificate (my calculations put me right on the grade boundary), and I was genuinely pretty surprised when I did. The following statistics were released at the end of the course:

“Some participation stats: 61,285 students registered, 25,151 watched at least one video, 15,391 tried at least one in-video quiz, 6,919 submitted at least one assignment, 2,417 took the final exam. 1303 earned the regular certificate. Of the 145 students submitting a final project, 107 earned the programming (i.e. ‘with distinction’) version of the certificate.”

The most surprising statistic here for me, though, is that of the 2,417 students who took the final exam – students who, I’m assuming, had completed all the weekly assignments and made it through the course to the end – only 1,410 actually got a certificate. I think there could have been a bit more leniency in terms of the pass mark; 80% seemed quite arbitrary, and was fixed at the start of a course which had not run on Coursera before. Also, it turns out that with only 145 students submitting a project, I had personally marked 17% of the projects submitted! Which made me think … is this really ‘massive’?



Filed under Uncategorized

The MOOCs that got away: A brief roundup & reflection on my drop outs this Autumn

In the past couple of months, I’ve been spoiled for choice at Coursera, as many more universities have joined the platform. After finishing the Internet History, Technology and Security course, there were several which caught my eye. I initially thought ‘hey, why not, let’s push this and see how many I can do simultaneously’, but there was another factor involved – needing to move house and starting my PhD! So compromises had to be made. Here is a whirlwind summary:

Networks: Friends, Money, and Bytes: I had signed up to this course quite a while in advance as it seemed very interesting, and if I hadn’t had two ‘essential’ courses at the same time (Networked Life and Social Network Analysis – blog post on these to follow soon), I imagine I would have stuck with it. Additional factors which led me to drop the course were: (i) it seemed very long, at 14 weeks; (ii) despite seeming like quite a lot of work, it did not offer a certificate (I’m slightly ashamed to say that this did contribute to putting me off – I know getting a certificate is not really the point!); and (iii) the course content had recently been published as a book, so I could potentially read this at some point in the future instead.

Web Intelligence and Big Data: I completed the first week of this course, before dropping out in the second. Again, had it been running at a time when I did not have higher priority MOOCs underway, I would probably have persevered. The course included programming assignments, which I don’t think I would have been able to do without further study (other students on the forum helpfully suggested that the Udacity CS101 course is a good introduction to Python, so I will aim to take this course in preparation for the next offering of Web Intelligence and Big Data).

Securing Digital Democracy: A bit of a detour from my main interests (I set out with MOOCs to learn about Computer Science), but I took this course because I thought it might fit with my broader interest in Web Science. It was relatively short (5 weeks) – small, but perfectly formed. The professor, J. Alex Halderman, had an excellent presentation style and the lectures were nicely finished. The course was assessed by multiple-choice question sets and a final peer-reviewed essay (although due to time pressures, the course topic not being a high priority for me at the moment, and safe in the knowledge that I had scored well enough on the quizzes to gain a certificate, I did not submit an essay).

Learn to Program: The Fundamentals: I completed the first week’s material, but then it slipped and I missed deadlines. I think it would have been manageable otherwise – and useful, as my lack of programming knowledge is proving a stumbling block with some other courses (such as Web Intelligence and Big Data). It was a victim of purely bad timing for me. Hoping it will prove popular and run again soon!

Reflecting on these hits and misses, I think it raises an interesting question about how you join together multiple MOOCs into a broader learning pathway. I’ve gone on a bit of a detour from my original aim of studying computer science, which is not a bad thing – the detours have been enjoyable and I don’t really have any kind of deadline I’m aiming for – but as a beginner, it is a bit tricky to tell where I ought to go next. There is also the issue of courses starting to need prerequisites, such as programming, which creates a progression between courses – but this progression is not made explicit. On a related note, Coursera have recently introduced profile pages for students, which show which courses students are taking; it would be interesting to map the network of co-studied MOOCs to see if students’ choices cluster into traditional disciplines or emergent interdisciplinary areas. As for my next MOOC, I am planning on going back to Computer Science and taking either CS101 at Udacity, or edX’s CS50 – watch this space!
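The co-studied-MOOC mapping idea could be sketched quite simply: treat courses as nodes, and weight an edge by the number of students enrolled in both courses. A minimal illustration in Python – the student/course data here is entirely made up, and in practice it would have to come from the new Coursera profile pages:

```python
from itertools import combinations
from collections import Counter

# Hypothetical data: each student's set of courses (invented for illustration)
students = {
    "s1": {"CS101", "HCI", "Networked Life"},
    "s2": {"CS101", "Networked Life", "Social Network Analysis"},
    "s3": {"HCI", "Securing Digital Democracy"},
    "s4": {"Networked Life", "Social Network Analysis"},
}

# Weight each course-pair edge by how many students took both courses
edges = Counter()
for courses in students.values():
    for pair in combinations(sorted(courses), 2):
        edges[pair] += 1

# The heaviest edges hint at clusters of commonly co-studied courses
for (a, b), weight in edges.most_common(3):
    print(f"{a} -- {b}: {weight}")
```

From a weighted edge list like this, a community-detection algorithm (e.g. via the networkx library) could then test whether the clusters correspond to traditional disciplines or something more interdisciplinary.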



MOOC thoughts from Glasgow Social Media Week

I found this whilst tidying up my desktop, and am posting it here in case it is of interest: last summer I was invited to create a one-minute video about my current thoughts on MOOCs for Glasgow Social Media Week, which took place in September 2012. Click here to view the video.




IHT&S addendum: Some answers to ‘who studies a MOOC?’

In my first post, I remarked upon some of the demographic trends I had observed when taking the CS101 course at Coursera, and expressed my hope that the MOOC providers would share some of their data to help the wider world understand the impact of MOOCs.

Dr. Chuck, the instructor on the Internet History, Technology & Security (IHT&S) course, took the initiative to circulate a demographic survey – and to share the results with the course participants. He encouraged us to reflect on the data and blog about it, so here we are: some answers to my first question of ‘who studies a MOOC?’. Graphs have been drawn and interpreted by me, data gratefully courtesy of Dr. Chuck. The data comes with the following ‘health warning’ from Dr. Chuck: “Of course the caveat is that it is not scientific, it is partial, incomplete, your results may vary, void where prohibited, etc etc etc. It is anecdotal at best but certainly interesting.” We can’t tell how representative the sample is of the course as a whole, or assume that IHT&S can be generalised to other courses, but it does provide an interesting insight and raises some interesting questions.

Student demographics

Note: ‘Associate degree’ denotes 2 years of undergraduate-level study; ‘Bachelors degree’ was described as 4 years of undergraduate-level study, based on the American model

  • Male students outnumber female by 2:1. Why? How much does gender of students depend on the course topic?
  • The course was fairly popular across the whole range of ages. I’m not sure why an under-18 category was not included. The modal category is ’25 to 34 years old’; it’s interesting that this is the category just after the one into which university-level study would typically fall. Does this indicate the importance of MOOCs as a next stage in lifelong learning for the recent graduate? Is this in response to career pressures – MOOCs as a way to get ahead in the workplace?
  • Most respondents have a degree already – either undergraduate or masters, relatively few doctorates.

Students’ previous experiences of online learning

  • While the course is the first MOOC that most students have taken, more than half of the respondents have taken online courses before. Are MOOCs particularly attractive to students who have previously studied online? Do they have different expectations of the online MOOC environment to students who have not studied online before?

Reasons for taking the course

Note: Respondents could select multiple answers about their motivation for taking the course.

  • Given that respondents could select multiple responses about their motivations for taking the course, it is in a sense more meaningful to focus on the categories which people did not select. In this case, it is notable that the lowest response categories – ‘Supplement other college/university classes’ and ‘Decide if I want to take college/university classes on the topic’ – are the ones which relate study to formal higher education structures.
  • Non-students outnumber those in formal education by approx 5:2.

Reuse and OER

  • While this pair of questions could suggest that most or all teachers taking the course would consider reusing the course materials in their own teaching, these responses should be treated with some caution, as more positive responses were gained to the reuse question (510) than respondents who indicated that they are actually teachers (451).

So: I’m intrigued by the gender differences, and the indication that MOOCs may be playing an important role in initiating lifelong learning in the years after formally leaving the academy. Of course, this is quite speculative, as the data here is quite limited and comes from only one course. I’d be very interested to hear others’ take on the data – please do feel free to leave a comment here.



IHT&S – the ‘p’ words: peer grading, plagiarism and patch writing

The Internet History, Technology & Security (IHT&S) course began in July 2012, and ran for seven weeks. It was quite an interesting contrast to the courses I had taken previously, because while the topic is aligned with Computer Science, it also had a historical stance and interpretive nature. Various thoughts:

A course which felt more like a course

IHT&S felt a lot more like I was taking an actual course; whereas in the CS101 and HCI courses, my usual study pattern was simply to spend 1-2 hrs on a Sunday afternoon watching the lectures and another 10 minutes answering the short multiple choice questions or doing the coding problems, this course felt like a bigger commitment somehow, and I have been giving some thought to why this might be.

  • The lecture load was similar, so this was not responsible.
  • One key difference was that the instructor (‘Dr. Chuck’) seemed to be more pro-active, even holding informal office hours in US cities he happened to be visiting during the run of the course.
  • The course was more demanding in its assignments (compared to the CS101 or ‘apprentice track’ of the HCI course – not the HCI ‘studio’ track, which is the most sophisticated assessment method and best use of peer grading I’ve seen so far); the quizzes (multiple choice questions) were longer, with questions worded in ways which required a greater degree of thought, and it also included a short peer graded essay (200-400 words). Note that this essay was initially intended to be a ‘warm up’ for a more complex peer graded midterm exam, although this was abandoned (see below). This was the first Coursera course I had taken which included a final exam, although I’m not sure that this really impacted my thinking or study pattern during the course; that is, I didn’t feel more pressure or hold the course in higher esteem due to there being an exam.

First experience of peer grading

Although the HCI course included a peer graded project (for the ‘studio’ track), I hadn’t been able to take part in it due to time pressures (so completed the ‘apprentice’ track, via quizzes alone). The original assessment plan for IHT&S had been mainly weekly quizzes, with a short peer graded essay in ~week 3(ish), a peer graded midterm exam (recall the course was 7 weeks long), and a final exam. This seemed like quite an ambitious mix at the start of the course, and in practice it was modified. The first peer graded essay (200-400 words) was clearly intended to be a practice run, to make students familiar with the peer grading process and the use of rubrics, as everyone got full marks for taking part, regardless of how their peers had graded them.

Although the peer grading assignment was quite short at 200 to 400 words, I found it required quite a lot more time than the quizzes. Marking my peers’ assignments felt awkward at first – particularly if they had done outside reading and were using examples which I couldn’t be sure were correct – but it quickly became quite an enjoyable exercise. However, I’m not sure how much I actually learned from it: the essays assigned to me frequently failed to answer the actual question, repeating the lecture content while lacking focus; and for those which used examples from outside the lecture material, I couldn’t be sure how reliable the material was or whether I could trust it. Probably the most valuable aspect of doing peer grading was getting practice at delivering balanced, constructive feedback!

In light of the peer graded assignment, the instructor decided to abandon the peer graded midterm and make further peer graded assignments optional. It was not entirely clear why this change was implemented; the impression I got was that there had been wide variation in how consistently students interpreted the marking criteria (although my cynical side thinks that it was to promote retention of students; I suspect far fewer students completed the peer grading assignment than tend to complete the quizzes – e.g. see Tim Owens’ ‘Failing Coursera’ post and discussion; also this article with Dr. Chuck confirms a drop in student numbers completing peer grading). I had found the rubric easy to use, albeit a bit simplistic, but probably OK for a short essay; from the discussion forums, some had criticised the lack of recognition for original thought and critical thinking. Something which was not addressed by the rubric, and was by far the most discussed and controversial topic on the forums, was how to deal with plagiarism. Although the rubric did not ask students to look for plagiarism, many took it upon themselves to Google the content, and a slew of accusations were aired (including some false ones, as others pointed out).

The issue of plagiarism in this course and the science fiction course being run simultaneously attracted a lot of negative publicity for Coursera, prompting Daphne Koller to state that Coursera would consider implementing anti-plagiarism software. Personally, I’m not sure about how effective this would be for an exercise like the one in the IHT&S course; if you have 5,000 (or whatever – probably at least 5,000! For the sake of argument, LOTS, anyway) students all writing 200-400 words on how Bletchley Park in WW2 is an example of the intersection of people, information and technology, you’ll probably get quite a few essays which are similar, just because of the focus and word limit (a bit like the infinite monkey theorem – except the variables are a lot more controlled here).
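The ‘convergent essays’ point can be illustrated with a crude word-overlap measure of the kind naive plagiarism detection relies on (to be clear, this is not any tool Coursera actually uses). The two ‘essays’ below are invented, independently-phrased answers to the same narrow prompt, yet they still share most of their vocabulary:

```python
def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the word sets of two texts (0 = disjoint, 1 = identical)."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

# Invented answers to the same narrow prompt, written 'independently'
essay1 = ("bletchley park brought together mathematicians and engineers "
          "who used early computing machinery to break enigma ciphers")
essay2 = ("at bletchley park mathematicians used computing machinery "
          "and engineering to break the enigma ciphers")

print(f"word-overlap similarity: {jaccard(essay1, essay2):.2f}")
```

A tight prompt and a 200-word limit push everyone towards the same key terms, so a high overlap score alone is weak evidence of copying.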

However, let’s not forget that the HCI course used peer grading too – and there was no mention of the ‘p’ word. The peer graded assignments were a lot more demanding in the HCI course, where students effectively undertook an entire project – something which is a lot more difficult, impossible even, to cobble together from Wikipedia. I think that the important message here is to use peer grading for larger, more challenging assessments such as projects; if something can be addressed in as little as 200 words, it could probably be assessed just as well through quizzes.

Questioning the necessity of English as the MOOC lingua franca

A question which is raised by ‘plagiarism-gate’ is why; why would someone choose to plagiarise rather than simply write 200 words? I wonder if it is due in part to different levels of essay writing skills rather than cheating (in some cases anyway) – patch writing is arguably part of a learning curve, for example.

Given that the idea of using peer grading is to bypass the instructor and place the burden of marking on the students themselves, why must all students write in English? One of the first things that Dr. Chuck flagged up in his introductory lecture was the course’s international nature, encouraging students to form study groups in their locality. Students have also been encouraged to add subtitles to the lectures to translate them into different languages.

I suppose the argument against would be that if there happened to be, say, only five people speaking your language, and you knew each other from your study group, collusion could be an issue. However, realistically this is probably a fairly low risk, and a field could easily be added to peer grading submissions to specify the language of the submission and match it to a peer who wrote in the same language – for the major languages, at least. This could help with the plagiarism issue, and also assuage the concerns raised by some in the forums about markers’ English proficiency.



HCI – Interesting issues with peer grading

Happy with my completion of the CS101 course, my second foray into MOOC learning took the form of the Human-Computer Interaction course at Coursera. The course started on 28th May 2012 and ran for five weeks. Again, I was very pleased with the course and saw it through to the end, even though the lectures seemed substantially longer and it was competing against my thesis writing for my time.

The course was a lot more ambitious than CS101, in terms of the assessments and reward for participation. In the CS101 course, the criterion for receiving a certificate of completion was to achieve a score of >80% on average across the assessments (which were all multiple-choice questions). In the HCI course, two different ‘tracks’ were offered:
– ‘apprentice’ track: like the CS101 course, certificate is issued on the basis of completing weekly multiple-choice type questions (‘quizzes’), with >80% mark overall
– ‘studio’ track: awarded on the basis of getting a certain mark in two components, the quizzes and project-based tasks, and participating in peer grading of the projects.

The HCI course was the first course at Coursera to use peer grading, so I was particularly interested to see how it went. Coursera regards peer grading as the way to make more sophisticated assessments (than multiple-choice questions) scalable. Essentially, by training the students to mark each other’s assignments, the course can support thousands of students doing more sophisticated project- or essay-based work without needing to employ academics to mark it, and the students would also likely get more educational value out of the course by learning vicariously from assessing others’ work – a win all round. I was also interested to see if peer grading resonates with Ivan Illich’s concept of ‘learning webs’ (see his 1973 book, Deschooling Society).

It is an ambitious move though, and there were several issues which surfaced as a result of the peer grading process. I’m not going to go into the technical niggles which some students encountered, but am more interested in the less easily anticipated issues which came up on the forums:

  • Resistance to peer grading It’s hard to judge the scale (it’s very hard to tell how many people are taking a course, from the students’ viewpoint; it could be hundreds, it could be tens of thousands), but there seemed to be a degree of resistance to the idea of peer grading. While some of this could be attributed to some of the technical issues, my pet theory is that it’s because peer grading is such a radically different model to ‘offline’ educational assessment, unlike anything the students have been used to in their education before. For example, narrated video lectures are simply a digital analogue of a teacher talking at the front of a bricks-and-mortar classroom; peer grading on the other hand is a very different move from having the instructor and TAs mark all the assignments, putting extra responsibility on the student themselves.
  • Concerns about privacy This included concerns about identity (graders having access to photographs of people, required as part of the needfinding assessment), and also about intellectual property (the projects by their nature being creative and looking for – potentially commercially valuable – technology solutions). The course’s Privacy Policy was explicit about the need for peers to see assessments and the compromises to privacy this entails (“The Coursera and HCI staff can only forbid but not control the distribution of your submitted work outside the confines of this web site.”; “By participating in the peer assessment process, you agree that: You will respect the privacy and intellectual property of your fellow students. Specifically, you will not keep a copy or make available to anyone else the work created by your classmates, unless you first obtain their permission.”), although I can see how this might seem less than reassuring.
  • Turning in blank assignments to get access to view the work of others This was quite an intriguing behaviour which emerged, being reported by markers in the forums seeking advice about what to do about it. The general opinion was that graders should give a zero mark to blank assignments. What intrigues me about this, though, is that despite the unfamiliar nature of the peer grading process and the fact that a ‘zero’ mark would be useless in terms of assessment, some students chose to submit blank assignments – I assume, to be able to view the work of others and get the vicarious learning benefits. This seems to me like an indication that the peer grading process does indeed offer educational value to the markers. It also raises some interesting questions about altruism. This reminds me of an example from my previous academic life as a Biologist. Pseudomonas aeruginosa is a type of bacteria. Amongst other elements, it needs a certain amount of iron to survive, although iron is generally quite scarce in forms which can be used directly by the bacteria. So it secretes siderophores, compounds which bind to free iron, which the bacteria can then take up and use. P. aeruginosa has been used as the basis for many experiments on the evolution of co-operation, because in any population of the bacteria there is always a small proportion who do not produce their own siderophores. They ‘cheat’ instead, by not expending the energy and nutrients to make siderophores, but taking up those released into the local environment by others. If the proportion of cheaters gets too high, the system falls apart. (If you’d like to find out more about this, check out the work of Angus Buckling at the University of Oxford.)
Whether models of cooperation from evolutionary biology hold in online networks is something which has been incubating in my mind for a while (also in terms of open scholarship and reciprocity, inspired by a chapter in Martin Weller’s book The Digital Scholar), although I’m yet to think of an elegant way of experimenting with it.
  • The need for assessment to be part of a conversation Although I didn’t take part in the peer grading (alas, I didn’t have enough time to submit a project for the ‘studio’ track, so wasn’t able to grade), I get the impression that when assignments were presented to students to grade, they did not include identifying information about the student who submitted the work, or how to contact them. The forums became a sort of unofficial backchannel where markers posted to try to get in touch with the students who submitted the work, on occasions where they wanted a bit more information about the project they were marking. I can see why grading would be anonymous – to prevent collusion, I guess – but this highlighted the need for more sophisticated assessments such as these project-based submissions to be part of a conversation between student and assessor, rather than the assessor simply being a human through whom the assessment rubric is applied.
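The producer/‘cheater’ dynamic from the siderophore example above can be captured in a toy ‘public goods’ model. This is only a sketch with arbitrary parameter values, not a model of any real experiment – and in a well-mixed population like this one, cheaters always out-compete producers, which is precisely why the real experiments turn on population structure and relatedness:

```python
COST = 0.1     # fitness cost of producing the public good (e.g. siderophores)
BENEFIT = 0.5  # shared benefit, scaled by the current fraction of producers

def step(frac_producers: float) -> float:
    """One generation of fitness-proportional selection on producer frequency."""
    shared = BENEFIT * frac_producers           # everyone receives the shared benefit
    fit_producer = 1.0 + shared - COST          # producers also pay the production cost
    fit_cheater = 1.0 + shared                  # cheaters free-ride on others' siderophores
    mean_fit = (frac_producers * fit_producer
                + (1 - frac_producers) * fit_cheater)
    return frac_producers * fit_producer / mean_fit

frac = 0.99  # start with almost all producers
for _ in range(200):
    frac = step(frac)

print(f"producer frequency after 200 generations: {frac:.3f}")
```

Because a cheater always does slightly better than a producer here, producer frequency only ever falls; sustaining cooperation requires something the model leaves out, such as spatial clustering of relatives.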



CS101 – Who studies on a MOOC?

Since my goal as a MOOC student is to enhance my knowledge of Computer Science, CS101 at Coursera was my first MOOC. Actually, that isn’t strictly true; earlier in the year, I had signed up for the Learning Analytics MOOC, although I did not stick with it and only dropped in to selected online sessions during the course. I think this was due to a combination of factors – mainly that I was very busy at the time with my ‘real life’ course – whereas the assessments in CS101 really helped me to keep up and stay focused along the way. CS101 was certainly my first MOOC in the sense that it was the first I completed!

The course started on 23rd April 2012 and ran for six weeks. As a first taste of the Coursera platform, I was impressed; the course was well structured and organised, and the teaching materials were well thought through and engaging. I was slightly disappointed that the videos were not reusable (the lectures on computer networking would have been nice to incorporate in my online notes on Web Science, for example); to me, a crucial part of the concept of Open Educational Resources (OER) is reusability and remixability. Generally, the course materials were published under a Creative Commons Attribution-ShareAlike 3.0 license, however the video lectures were exempted from this and remained copyright Stanford University. So while the course is free, and anyone can study on it, whether it is OER is debatable.

I would guess that, being an introductory-level course and one of the first courses offered by Coursera, this was for many of the participants their first experience of a MOOC, and of the Coursera platform. As a result, students were keen to introduce themselves and find out a bit about their classmates, and several forum threads sprang up for introductions. For me, with my background in e-learning research, this provided a fascinating insight into the reasons why people would choose to study a MOOC, and the students’ backgrounds. This is an interesting topic because while there have been suggestions that courses like this are mainly taken by students who are already educationally privileged (e.g. Anya Kamenetz, Who can learn online, and how?), I don’t think that there is a lot of real data being used to explore this.

I had intended to analyse one of the forum threads in order to address this; however, it quickly became apparent that even taking just one thread, this is a hell of a lot of data! I’m still hoping to do this analysis at some point in the future. In the meantime, I don’t agree with the idea that MOOCs such as this serve just to make the elite even smarter; while I did see highly motivated high school students and undergraduates supplementing their formal education with the CS101 course, this is too much of a generalisation in my opinion. I also saw more senior students, whose last formal study was 30+ years ago, looking to get up to date with modern programming languages, and stay-at-home mothers taking the course, sometimes with their young children. Let’s not forget too that the ‘mainstream MOOCs’ such as Coursera, Udacity and EdX are very new, and the early adopters of a technology (I use ‘adopter’ to mean students here) may be more tech-savvy and inclined to experiment (see the ‘technology adoption lifecycle’; as it’s early days, mainstream MOOCs are probably in the ‘innovators’ phase right now). I would expect the demographic to shift a bit as the platforms become more well known and more widely adopted across society.

It would be really interesting to catch up with students across a range of backgrounds (not just those looking to enter formal higher education, but not excluding them either), say a year after the course, to see if the course enabled them to achieve their broader goals, and how what they learned during the course had been used in practice. It’s an exciting time for figuring out what the mainstream MOOCs mean for opening up learning and reconfiguring the relationship with higher education. What is needed, though, is more data about the phenomenon; the MOOC platforms are sitting on a goldmine in terms of data to answer questions such as who can learn online.

