Measuring the “unmeasurable” – shooting oneself in the foot?

Gave a lecture on the Computer Science Education Research course here at Uppsala University this week. What I tried to convey were different aspects of measuring in the context of doing computing/engineering education research. I framed this around my efforts to shed more light on the development of professional competencies in degree programs, an area that is rife with complexity and where ways to measure are far from obvious.

While preparing for the lecture, or rather discussion seminar, I became somewhat cynical and wondered if pursuing research in complex environments, where measurements typically have a subjective touch, is just stupid. Wouldn’t it be better to look into things where one can identify clearly measurable aspects that reviewers will recognise and feel comfortable with, and thus have a better chance of getting the work published? Nah, not if that meant that what I could research would be limited to simple and uninteresting things.

Don’t get me wrong – I don’t think all measurable things are simple and uninteresting, just that there is a danger they could be. There is a risk that what is studied has to be reduced, or confined, to a level where what is measured more or less loses its meaning. My advice (easy to say, much harder to do) was to strive for a balance between the complexity of the issues being researched and the possibilities to “measure” them. I think such a balance can be achieved by searching for tools and theories suitable for addressing aspects of a complex issue – aspects that are of interest and that provide interesting insights into the issue. I didn’t use this “map” (made by Roger McDermott, Robert Gordon University, Aberdeen, UK, in his presentation of our paper “Investigation into the Personal Epistemology of Computer Science Students” at ITiCSE in Canterbury 2013), but it would have been a good illustration to inspire discussion about this.

[Slide: the “map” from Roger McDermott’s ITiCSE 2013 presentation]


Final version of FIE paper uploaded

I just uploaded the final version of the paper Aligning Quality Assurance at the Course Unit and Educational Program Levels for the ASEE/IEEE Frontiers in Education (FIE) conference. This paper is mainly about how a Database course was redesigned at Reykjavik University based on the results of a Quality Assurance process, which we have described in e.g. Quality Assurance using International Curricula and Employer Feedback, presented at the ACM Australasian Computing Education conference (ACE) in Sydney, Australia, in 2015. The interesting things, in my opinion, are that the Quality Assurance process was actually seen as relevant and positive by the staff at Reykjavik University, and the relatively detailed description of how the ACM/IEEE curricula guidelines could be used in updating and implementing a course. The first, because Quality Assurance processes have, in my experience, mostly been seen as something requiring a lot of work while providing little value from the teaching staff’s perspective. The second, because it could lower the bar for teachers to use the ACM/IEEE curricula guidelines, or similar ones, in their course planning.

Papers for Frontiers in Education, 2016

I have four paper abstracts and one panel proposal accepted for the ASEE/IEEE Frontiers in Education (FIE) conference to be held in Erie, Pennsylvania, USA, October 12-15. So far I have been notified that the panel Developments in global software education has been accepted with minor revisions, that the paper Students envisioning their future has been accepted, and that the papers A critical analysis of trends in student-centric engineering education and their implications for learning and Aligning quality assurance at the course unit and educational program levels have both been accepted with minor revisions. I am still waiting to hear about the paper A framework for writing personal learning agreements. Feels good so far 🙂