Last week, I attended the UTA LINK Lab talk presented by Dragan Gasevic (@dgasevic) on learning analytics and research. The discussion surveyed the digital traces of learning that can be collected and measured across our various learning environments, and questioned how well we are doing some of these analytics within our institutions. Although we have a wealth of statistics, data, and information on our learners – how can we offer actionable insight, summative feedback, and information about learner progress? Our post-secondary institutions seem to want to deal only with the “R” word: Retention. Institutions often look to identify students at risk, provide information about learning success, and understand how to enhance learning – but how can we use data effectively when our metrics so often focus on single outcomes?
Photo c/o the #dalmooc edX Course Site
Instead, it is the process and context that our educational institutions need to identify when looking at learning analytics – that is, the need to understand and optimize learning (Butler & Winne, 1995). Whether we apply the community of inquiry framework’s cognitive presence, which includes triggering events, exploration, integration, and resolution (Garrison, Anderson & Archer, 2001), or the COPES (Conditions, Operations, Products, Evaluations, & Standards) model (Winne, 1997) – it is the meaningful data points for learning analytics that really need to be identified within our educational institutions. As @dgasevic said, “Learning analytics is about LEARNING!” We often assume the data collected from our courses and our systems will provide us with the answers; however, if those data are not identified in a purposeful way – why bother? What we really need to consider is: what does it mean to study and support the learning experience, and not just the end results?
Here are a few areas of learning analytics and data evaluation that need to be considered (just to name a few):
- learner agency and self-regulation
- interaction effect – external and internal conditions
- formal and informal learning communities
- instructional intervention methods
- multimodal learning
- emerging technology impact, i.e. mobile, wearable tech, etc.
Here are questions our institutions need to consider when they want to examine learning analytics:
- What data are we collecting? And why?
- How does the learner information we know contribute to the PROCESS of learning?
- Who should be part of this learning analytic research for learning?
- How can we best present and interact with the data? Can this be more immediate?
- How can we encourage and support multidisciplinary teams to study learning analytics at our institutions?
- Are we being driven by questions of need, access, and availability for the learning data collection?
- What ethical and privacy considerations should be considered when collecting data around learning?
Interested in learning more about learning analytics and data in education? Check out the in-press paper by Gasevic, Dawson, and Siemens http://bit.ly/techtrends15 – or better yet, join the 9-week Data Analytics & Learning MOOC that UTA and edX are hosting on this very topic, starting Monday, October 20th: http://linkresearchlab.org/dalmooc/ – or follow along with the conversation on Twitter at #dalmooc.
Butler, D. L., & Winne, P. H. (1995). Feedback and self-regulated learning: A theoretical synthesis. Review of Educational Research, 65(3), 245-281.
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23.
Gasevic, D., Dawson, S., & Siemens, G. (in press). Let’s not forget: Learning analytics are about learning. TechTrends. http://bit.ly/techtrends15
Winne, P. H. (1997). Experimenting to bootstrap self-regulated learning. Journal of Educational Psychology, 89(3), 397.