Learner Analytics & Direct versus Indirect Evidence

In response to our post about Using Data in Higher Education last week, Kara Moloney, assessment coordinator for UC Davis’ Center for Excellence in Teaching and Learning (CETL), had some interesting comments about learner analytics:

The data that fall into the "learner analytics" category provide "indirect" evidence of student learning, but do not provide actionable information about what students know and can do (unless the question faculty are asking is whether students are able to use the LMS). Indirect evidence is incredibly important to provide corroboration (or triangulation) with analyses of direct evidence of student learning, but it does not provide enough information to be used in isolation. There is a tendency to conflate indirect and direct evidence, which then can lead to erroneous claims based on insufficient data.

One of the key points Kara brings up is the distinction between direct and indirect evidence. Direct evidence, as defined on the CETL website, “provides concrete examples of students’ ability to perform a particular task or exhibit a particular skill.” Faculty often collect direct evidence through quizzes, exams, projects, presentations, and papers, though pre/post tests or competency interviews can also serve this purpose.

Indirect evidence, on the other hand, “provides information that allows you to make inferences about student learning.” Researchers collect this kind of evidence through surveys, focus groups, exit interviews, graduation reports, and retention data. Learner analytics offer faculty a convenient way to collect this type of evidence from their students, but it is important to remember that learner analytics are unlikely to provide direct evidence, so we should be wary of how big data is put to use in higher education.

Please feel free to share your thoughts or questions about big data, learner analytics, or direct and indirect evidence by leaving a comment on this post!
