As I prepare for an upcoming SCUP North Atlantic presentation on blended learning and planning, I’m reviewing the literature on learning analytics and its influence on planning.
The article "Numbers Are Not Enough: Why e-Learning Analytics Failed to Inform an Institutional Strategic Plan" (Educational Technology & Society, 2012) describes one institution's approach to analytics: very detailed data was pulled to understand how users were interacting with the learning management system (LMS), and the university also pulled grade (student achievement) data from the system for use in the analysis. It is also worth noting that, in addition to the financial investment required to access this additional data, the institution convened a large advisory committee with senior leadership and widespread campus representation. This committee was then charged with the planning process to choose the next learning management system.
The article points out that although these analytics were ostensibly to be used in the strategic decision-making process of selecting the institution’s next LMS, that did not happen. As the authors note, instead of focusing on the “initial state” described by the analytics data, the committee focused on “ease of migration.” While that is not surprising, it would have been interesting to see whether there was a missed opportunity for someone to reframe the debate over the course of the discussion. Should someone have been tasked with periodically reminding the committee of this data? Of course, the cost and ease of migration are critical concerns when considering a large transition; however, the initial-state data could have been used to bridge to a key point that the authors go on to make.
This key point is that the analytics data supports a “transmission”-based understanding of learning rather than a community-of-practice or connectivist view of learning. Though researchers recognize the importance of peer-to-peer social connectedness online, those connections were not clearly visible in the initial data. One might draw a larger distinction between technology-enhanced and fully online courses in this case. If a course has face-to-face meetings, it would be very reasonable to expect that most of the interaction in the LMS is with content. Another consideration is that LMS data may capture time-on-task related to content better than it captures collaborative effort. For example, savvy students might compose responses to discussion questions offline, so a quick post to the discussion forum might not reflect the time actually engaged on-task.
Still, this data might also lead to questioning the pedagogical underpinnings of the learning management system. If the baseline data shows more interaction with content, and the institutional philosophy holds that cultivating engagement with the community is a priority, the institution could choose an LMS that supports that function more visibly and use the next round of data to measure the effectiveness of this solution. Of course, it could be that LMSs in general have similar functionality in that regard, and that a comprehensive professional development plan encouraging course designs that prioritize interaction and engagement with peers might be just as effective.
It is also advantageous that the committee had access to grade data: across every aspect of LMS tool use that they studied, the correlation with student achievement clearly showed that the longer a student engaged with any aspect of the course, the better that student’s grades tended to be.
Despite the limitations of the analytics derived from the system, they still provide an important baseline against which to measure the success of efforts to improve teaching and learning with technology.