Last night, as I was falling asleep, I thought about a presentation from Ted Frick that I saw in about 2006 about what basically amounts to educational AI. All at once, I was overwhelmed both by the magnitude of what his work has been attempting and by doubt that it's even possible to begin to break down all the variables that would support a predictive system of learning. And yet, my thought last night was that we're going to get there, more or less, as a society in my lifetime. Metrics and analytics will be used to create predictive and adaptive learning in a meaningful way. It's going to be a long haul, and the education industry will be resistant (sometimes for good reason). There will need to be checks and balances and a human touch built into the system, both for the analytics that measure efficacy and for the metrics that are studied to make predictions about what will help people learn and about their overall potential.
This thought process started with Carrie Saarinen’s really elegant comparison of current teaching and learning vis-a-vis learning outcomes to a spirograph. There’s a lot in the post to unpack, particularly around the current implementation of feedback and assessment in courses where a grade might be delivered on a student’s first attempt at demonstrating mastery. As she notes:
One of the problems I see with course level learning outcomes is an expectation that students will achieve mastery on the first attempt and that there are often minimal attempts at an outcome. Evaluating hundreds of courses each year, I look at thousands of different course activities, assignments, projects and quizzes; it is striking to see how frequently activities map back to multiple outcomes and few outcomes map out to multiple activities. In other words, I see courses with dozens of outcomes but only a handful of linked activities.
This is such a key issue: that students would be expected to master an outcome and would receive a summative grade based on one chance at demonstrating their newly acquired knowledge. And as she rightly points out, changing this expectation would not only have to happen at the program level; it would also have to reshape how programs are designed and how instructors see their role vis-a-vis students.
What is the connection between educational AI and individual course outcomes? Carrie's piece was inspired by Michael Feldstein's analysis of Pearson's new focus on efficacy in learning. And the example of what Pearson is trying to do, well, it feels impressive. Can a rubric change a business? It can, but it has to be embedded in the culture, and the people who need to use the rubric or checklist have to feel like it's solving a problem, or that the mission is so critical that they'll match their actions against the rubric routinely. However, the difference between efficacy in medicine and efficacy in education is that in medicine, the end result is pretty transparent. There are well-defined metrics for assessing whether a body is healthy: standards for temperature, blood pressure, BMI, very minute breakdowns of what one's blood should consist of, T cell counts, thyroid level ranges, and so on. What are our measures of success in education that will be widely adopted as universal standards? What's the educational equivalent of normal blood pressure?
Education will get there, but it will be a long haul, and my best guess is that a focus on efficacy and AI will be most helpful in content areas that map to clear right/wrong answers.