I feel really good that this year I had the opportunity to attend and participate in the 9th European Conference on Technology Enhanced Learning, EC-TEL 2014. So far it has been an interesting experience: I have had some very good conversations and sat in on some very good presentations. I wanted to collate some of my thoughts and reflections about a workshop that I attended yesterday on learning analytics for and in serious games. I should state that in my own PhD work I also make use of learning analytics, focusing on the emergence of social networks in virtual worlds. However, I am still a bit skeptical when people make sweeping statements about learning analytics and how it can supposedly determine the learning that takes place in an online/digital environment. That said, some of the talks in this workshop stated very clearly that learning analytics is not about defining the learning that occurs, but more about understanding how the learning trajectory evolves.
The first presentation was a comprehensive overview of learning analytics (LA) and educational data mining, and the current and future research trends in the area. The speaker, Christina Steiner, described assessment as one of the key factors driving LA, defining it as the gathering of information about a learner's progress towards the achievement of goals and the related competences. LA comes into play to make sense of all the data a learner generates while traversing an online environment. Learning analytics and educational data mining share some goals and have similar definitions relating to the collection and collation of data about learners; however, educational data mining places more emphasis on automated adaptation and recommendations for a personalised learning experience. LA can operate on different time scales: it can report what has happened (the past), it can monitor progress in real time, and it can serve to predict (possible?) future learning issues. The process is made up of three stages: data collection; visualisation and predictive modelling; and improving on the LA process itself. The main stakeholders are learners and teachers.
However, this provokes some thoughts and questions in me: would analytics apply to all learning processes and all learning domains? How accurate would the predictions be? Do they really predict and model learning? If a student has opened a resource, visited a site, uploaded an assignment, or completed a quiz, can we really say that that learner has learned? Would he really have learned everything we had in mind as goals?
Let’s say that I, as a teacher, think that by performing an online activity the learner will achieve a set objective. When I design and set up my LA process, I know that I will be measuring student performance against that goal. However, I believe learning is not linear. I know, and everyone knows, that learning is a complex mesh that depends very much on the many experiences the learner goes through. So how can I say that a learner is advancing (or regressing) simply by ticking the box that says the learner has gone through the activity? What sort of tools would I have to monitor the additional skills and competencies that learner has attained by going beyond that activity? Maybe the activity was boring to the learner and he skipped it, but it prompted him to search further and to investigate more deeply than the activity required. Where would that place the learner? In my books, that learner demonstrates a good critical skill set, but what about the system? Are we risking going back to standardising learning by using learning analytics to assess learning?