In 2014, Activision’s Call of Duty: Advanced Warfare outsold every other video game in the U.S. CoD: Advanced Warfare features state-of-the-art gameplay, voice acting by Kevin Spacey, and award-winning visual effects. The success of CoD and similar commercial franchises seems to imply that heavy investment in technical embellishments draws customers, and even that current generations have come to expect (and are disappointed when they don’t receive) this level of sensory stimulation. Certainly these sentiments hold more than a grain of truth, but high-end technology isn’t the only pathway to success.
Another top earner of 2014 was Mojang’s Minecraft (acquired by Microsoft that year), a game that involves roaming around a 3D sandbox and interacting with textured cubes. The game lacks a formal narrative or given player objectives; instead, players can participate in a number of different ways, from collaboratively hunting monsters to making crafts and building individual cities. Minecraft, which features 1980s-era graphics and a notable dearth of voice actors (famous or otherwise), was released gradually and iteratively. Each new version increased the game’s content and its players’ ability to creatively experiment with the world. The user community is also encouraged to create mods and further tailor the game in inventive ways. In short, Minecraft—now the best-selling PC game to date—triumphed, arguably as well as CoD: Advanced Warfare, by championing players’ creativity and involvement, and without relying on technological bells and whistles.
The contrasting approaches of CoD: Advanced Warfare and Minecraft offer a useful analogy for instructional designers who want to make their learning packages more engaging. Following the CoD: Advanced Warfare model, designers can build additional bells and whistles into the learning. (These technical add-ons aren’t always effective, of course, but well-designed adaptive mechanisms or high-fidelity sensory immersion can certainly boost engagement!) Alternatively, designers can enhance engagement without bells and whistles, by creating space for local personalization, encouraging grassroots “ownership” of learning content, and facilitating active creative experimentation within the base content (à la Minecraft).
My friend David and I gave a talk about organic learning at the MODSIM 2015 conference. He coined the cute moniker “organic learning” (or o-learning) as a label for the age-old principle that “learning should be relevant to the learning audience.” (In military jargon, “organic” means something that belongs to a given local unit; in other words, it isn’t just “owned” or operated by some higher-level command.)
Frankly, when we started to write this o-learning paper, I wasn’t sure we’d have much to say. I mean, who needs to be reminded that instruction should be designed with the local learners in mind? Every instructional design textbook says that. Then I started to think about some of the required training I’ve endured (er, received), particularly the kind developed by large, bureaucratic institutions: the kind of training that’s created at a central headquarters and then pushed out to multiple distributed sites. Even when local instructors deliver that training, there’s usually not much room to personalize it to the unique needs and experiences of the audience, and the problem is exacerbated when the training is hard-coded into technologies, e.g., e-learning courses. The end result of this “one-size-fits-none” kind of training is often decreased engagement, distraction, and boredom.
Solution: Increase engagement
When confronted with the problem of unengaging, not-very-relevant training (assuming someone is willing to name that elephant in the first place), the solution is often “let’s make it more interactive!” Certainly, increasing the interactivity of online courseware can boost engagement and learning; however, “increased interactivity” is sometimes just a euphemism for “more technological bells and whistles.” While employing state-of-the-art technology is admirable (if you can afford it and do it effectively), we need to find the technological innovations that truly support learning goals and then pair them with the instructional framework needed to reap the most value from our investments.
What’s more, we need learning that has local relevance for different groups of learners, so that the meaningfulness and utility of the training/education help boost engagement. In other words, we need o-learning.
Here’s the working definition we invented (it’s still a little wonky…):
“o-Learning” means creating computer-based content that allows learners and local (organic) trainers/teachers to inject personal relevance into the material—without necessarily requiring them to design or deliver any of it. It means creating opportunities to explore, evolve, modify, and personalize the standard, SME-designed foundational content.
To say it another way, we’re talking about designing a kind of “mass customization” into learning packages. We’re talking about finding a workable middle ground between (a) rigid, impersonal, top-down-designed instructional content on one hand and (b) on the other hand, simply giving local trainers and educators the raw building blocks (and the burdensome requirement) to create their own local curricula. Option (a) typically suffers from the previously mentioned issues (e.g., lack of relevance, disengagement), and option (b) typically doesn’t work because it puts too much workload on local instructors, who in many organizations lack the resources, know-how, and/or motivation to assemble those raw materials into good-quality learning experiences.
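One hypothetical way to picture this middle ground in software terms is a centrally authored lesson with designated “hooks”: slots that local trainers may override, while the core SME content stays locked. Everything in this sketch—the lesson fields, the hook names, the `personalize` function—is invented here for illustration; it comes from no real learning platform.

```python
# Toy sketch of "mass customization" via designated local hooks.
# All names and structure are hypothetical, invented for this example.

BASE_LESSON = {
    "objective": "Recognize phishing emails",      # locked SME content
    "core_content": "Phishing emails try to ...",  # locked SME content
    "examples": ["generic-example.eml"],           # hook: locally replaceable
    "scenario_setting": "a generic office",        # hook: locally replaceable
}

LOCAL_HOOKS = {"examples", "scenario_setting"}  # slots opened up by HQ designers

def personalize(base, local_overrides):
    """Merge local overrides into the base lesson, but only in the
    slots the central designers explicitly opened as hooks."""
    lesson = dict(base)
    for slot, value in local_overrides.items():
        if slot in LOCAL_HOOKS:
            lesson[slot] = value  # local flavor allowed here
        # any other slot is silently ignored: core content stays intact
    return lesson

# A (hypothetical) Singapore office swaps in its own examples and setting;
# its attempt to rewrite the core content is rejected.
local = personalize(BASE_LESSON, {
    "examples": ["sg-bank-example.eml"],
    "scenario_setting": "a Raffles Place office",
    "core_content": "totally rewritten",  # not a hook: ignored
})
```

The design choice the sketch encodes is exactly the middle ground above: local offices get real creative latitude, but only inside boundaries the central designers drew, so the integrity of the SME-designed content is never at risk.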
Wait! That’s Adaptive Learning
The concept of o-learning sounds kind of like the idea of “adaptive learning” à la intelligent tutoring systems. I think they’re close cousins, and often overlapping, but not necessarily identical concepts.
Adaptive learning experiences attempt to vary the learning content and/or its delivery based on the characteristics or interactions of individual learners or groups of learners. Although we often think about adaptive learning in the context of adaptive computer-based systems, (good) human tutors are technically adaptive, too. The crux of adaptive learning is that different people get different experiences.
Organic learning experiences incorporate local variation, so that different “tribes” of learners receive more personally relevant learning experiences. O-learning experiences allow space for the local culture (broadly defined) to be incorporated.
So the differences are:
- “Adaptive” might not mean “local.” An adaptive learning experience might be different for each person, but that doesn’t mean that it’s different in a way that includes local relevance. For instance, if identical twins (one in New York and one in Singapore) accessed the same mathematics tutor and somehow had identical interactions, then it’s possible that they might have the same learning experiences—without consideration of their unique local environments. (This isn’t necessarily a bad thing! Maybe the local context isn’t very useful for this math tutor to consider. The point is simply that a system can be adaptive without taking local variation into account.)
- “Local” might not mean “adaptive.” Similarly, imagine that a standard training package (e.g., how to sign your timesheet) has been developed by a central headquarters and distributed to local offices all around the world. Each office puts its unique spin on the training but, after that, incorporates no other adaptive elements. Every person at a given location who subsequently takes the training receives the same information. That’s not particularly adaptive! (You could argue that adapting the “base state” of a learning package for use at different locations is a gross form of macro-adaptation, but the two seem like slightly different concepts, implementation approaches, and goals to me.)
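The two bullets can be made concrete with a deliberately toy-sized sketch: one function that is adaptive but ignores location, and one that is local but ignores the individual learner. Nothing here models a real tutoring system; the thresholds, site names, and package titles are all invented for illustration.

```python
# Toy contrast between "adaptive" and "local"; all details hypothetical.

def adaptive_not_local(learner_history):
    """Adaptive: the next problem depends on this learner's own recent
    performance (1 = correct, 0 = incorrect), with no reference to
    where the learner lives."""
    recent_correct = sum(learner_history[-3:])
    return "hard problem" if recent_correct >= 2 else "easy problem"

LOCAL_EDITIONS = {  # invented site-specific spins on one HQ package
    "new_york": "Timesheet training (NYC payroll examples)",
    "singapore": "Timesheet training (SG payroll examples)",
}

def local_not_adaptive(site, learner_history):
    """Local: every learner at a site gets that site's edition,
    regardless of their individual history."""
    return LOCAL_EDITIONS[site]  # learner_history is never consulted

# The identical twins from the example above: same interactions, same
# adaptive experience in both cities...
assert adaptive_not_local([1, 1, 0]) == adaptive_not_local([1, 1, 0])
# ...while the local package differs by city but ignores the learner.
assert local_not_adaptive("new_york", [0]) == local_not_adaptive("new_york", [1, 1, 1])
```

An o-learning package, on this framing, would aim to combine both axes: hooks for the site-level edition plus whatever per-learner adaptation the content warrants.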
In the end…
So, in contrast to my initial misgivings, our presentation ended up accomplishing two goals. First, we created a term (a “brand,” so to speak) to help raise the visibility of so-called o-learning. If you’ve read this far, then you don’t need to be convinced that “learning should be relevant to the local learners”; however, for large, bureaucratic institutions (where the instructional staff must explain their craft and defend why the design of learning content requires dedicated time and expert technique), a cute name like “o-learning” helps sell that message. Second, we helped define a subtle requirement for some instructional designers. Namely, when instructional designers who are part of a larger training/education system create instructional content, they need to design “hooks” for o-learning into their curricula. In other words, we hope that instructional designers in large institutions will take a page out of the Minecraft playbook and design learning packages that enable distributed teachers/trainers, at the grassroots level, to inject their own creativity and personalized local touches, without foisting unnecessary responsibility on these personnel, risking the integrity of the instructional content, or relying on technological bells and whistles.