John Whitmer, who recently graduated (February 2013) from UC Davis’ School of Education CANDEL program, has been doing some fascinating work in an emerging field known as learning analytics. As John explains, “we spend substantial time, money, and effort in providing academic technology solutions—Learner Analytics enables us to understand the impact of this effort on student learning.” For example, if students don’t perform well in an online course, conventional research can’t tell us whether that is because they didn’t use the online materials or because they had difficulty learning from those materials. That’s where learning analytics comes in.
Unlike the analytics used by companies like Amazon and Google, learning analytics has no simple outcome measures. As John put it, “we are looking for very complicated things around education—at the end of the day, we want to know if it transformed your life.”
On a basic level, analytics can show us which tools and materials students do and do not interact with, and for how long, which can indicate whether those tools and materials are functioning properly. However, simply asking whether individual tools function properly is not enough, because students tend either to use the LMS extensively or not to use it at all.
For his dissertation, John looked at student use of an LMS across four categories of use—administration, assessment, engagement, and content—and found that students’ LMS use explained 25% of the variation in their grades. His next question was whether the LMS was providing tools to students who really needed help, or simply to students who were likely to succeed anyway.
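The statistic behind that 25% figure is the coefficient of determination (R²). As a rough sketch of the idea, not John’s actual analysis, the snippet below fits a least-squares line of grades on a single invented LMS-usage measure and reports how much of the grade variation the fit accounts for. All numbers are made up for illustration.

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple least-squares fit of y on x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope and intercept of the ordinary least-squares line.
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    # R^2 = 1 - (residual sum of squares / total sum of squares)
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Invented data: total LMS actions per student vs. final course grade (0-4 scale).
lms_hits = [120, 340, 80, 510, 260, 430, 150, 600]
grades = [2.0, 3.1, 1.7, 3.6, 2.8, 3.3, 2.3, 3.9]
print(round(r_squared(lms_hits, grades), 2))
```

An R² of 0.25, as in John’s finding, would mean a quarter of the grade differences between students track their LMS use; the toy data here is deliberately much more correlated than real usage logs would be.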
To answer this question, John separated students into at-risk and not-at-risk groups based on demographic characteristics. He found that at-risk students were spending more time looking at content on the LMS but receiving lower grades. Put another way, imagine two students who put in the same amount of effort, but one gets a B and the other gets an A; the implication is that the B student is spending that time less effectively.
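The comparison itself is straightforward to sketch: group students by a risk flag, then compare the group averages for time on the LMS and for grades. The records and the risk flag below are invented for illustration, not drawn from John’s data.

```python
from statistics import mean

# Invented records: (hours_on_lms, final_grade, at_risk)
students = [
    (24, 2.1, True), (30, 2.4, True), (27, 2.0, True),
    (25, 3.2, False), (22, 3.5, False), (28, 3.4, False),
]

for flag in (True, False):
    group = [(h, g) for h, g, r in students if r == flag]
    label = "at-risk" if flag else "not-at-risk"
    avg_hours = mean(h for h, _ in group)
    avg_grade = mean(g for _, g in group)
    print(f"{label}: {avg_hours:.1f} hours, GPA {avg_grade:.2f}")
```

In this toy data, as in John’s finding, the at-risk group logs at least as many hours but earns lower grades, which is the pattern that suggests the time is being spent less effectively.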
For John, these findings lead to more important questions, and he plans to continue evaluating the effectiveness of various educational technologies, especially for students who are at risk of not performing well in higher education contexts.