Those of us engaged in and stimulated by #etmooc (an online Educational Technology & Media course) and other training-teaching-learning endeavors already have plenty of evidence that the best online learning offerings can produce results at least as good as what comes out of the best face-to-face learning. Our participation in that massive open online course (MOOC), in fact, is providing us with visceral proof that online learning can be engaging, rewarding, and capable of producing tangible results if the right elements are in place and if we are properly prepared.
Focusing on failure rates of online learners drawn from a very large sample (40,000 community and technical college learners throughout Washington state, tracked over a five-year period), Xu and Jaggars have produced a paper that includes insights useful to any of us involved in training-teaching-learning. “Adaptability to Online Learning: Differences Across Types of Students and Academic Subject Areas” (published through the Community College Research Center, Teachers College, at Columbia University) opens with a well-balanced introduction that cites previous research papers comparing face-to-face and online learning; provides observations about why some students may do better than others in online learning environments, e.g., “those with more extensive exposure to technology or those who have been taught skills in terms of time-management and self-directed learning…may adapt more readily to online learning than others” (p. 1); and includes the suggestion that “insufficient time management and self-directed learning skills” could contribute to the online learning failures examined in their paper (p. 4). Reading that section alone gives us a wonderfully concise overview of the challenges we and our learners face, and it serves as a great example of the sort of resources coming out of the open movement—the subject of our latest #etmooc module.
As we move more deeply into Xu and Jaggars’ 32-page paper, we learn more about the writers’ meticulous methodology; the subjects of their study and the types of courses they were attempting to complete; and the possibility that “older students’ superior adaptability to online learning lends them a slight advantage in online courses in comparison with their younger counterparts” (pp. 17-18). They go far beyond the usual basic levels of evaluation and ponder the possibility that peers’ behavior can have positive or negative effects on the learning process: “These descriptive comparisons suggest that a given student is exposed to higher performing peers in some subject areas and lower performing peers in others and that this could affect his or her own adaptability to online courses in each subject area” (p. 21).
In reaching the conclusion that those who struggle with face-to-face learning are even more likely to struggle with and fail at online learning, Xu and Jaggars lead us to an interesting set of conclusions and recommendations that include “screening, scaffolding, early warning, and wholesale [course] improvement” (p. 25). Acknowledging the difficulties inherent within each of their four suggestions, they leave us with proposals to define online learning “as a privilege rather than a right” and delay learners’ entry into online learning “until they demonstrate that they are likely to adapt well to the online context”; to incorporate “the teaching of online learning skills into online courses…”; to build “early warning systems into online courses in order to identify and intervene with students who are having difficulty adapting”; and to “focus on improving the quality of all online courses…to ensure that their learning outcomes are equal to those of face-to-face courses” (pp. 25-26).
None of this is revolutionary, nor is it beyond our reach. Preparing learners for new learning experiences before we toss them into the deep end of the learning pool simply makes good sense. Offering them help in developing their online learning skills is something many of us already routinely do, and there are plenty of online examples at the community-college level alone for anyone who has not yet traveled this particular learning path. Building early warning systems into the process goes hand-in-hand with the increasing attention we are giving to learning analytics and learning analytics tools; even at a rudimentary level, I’ve been able to increase retention rates in online courses by noting who is falling behind on assignments and occasionally sending individual notes to check in with those learners. The result is that those learners invariably note, in their course evaluations, that they had no idea online learning could be so personal and engaging. And the suggestion that we look for ways to further improve the quality of courses to make them more responsive to learners’ needs hardly needs a response; the wicked problem we face in meeting that challenge is obtaining the resources needed so we—and our learners—will be successful rather than becoming part of another report on why learners fail.