Training isn’t a substitute for good design

Training isn’t the only—and often isn’t the best—risk mitigation strategy.

I teach Human-Systems Integration for Modeling and Simulation (HSI for M&S) at the local university. In one of the assignments, students are asked to identify five risks in a hypothetical system development project. After identifying the risks, they have to plan prevention and mitigation strategies for each.

This is a pretty straightforward risk management assignment; the only unusual feature is that all of the risks must involve humans in some way. Examples might include operator errors due to poor usability or insufficient staff due to unsupportable manpower requirements.

Despite the wide array of “human-centric” risks the students might select, and the correspondingly broad range of prevention and mitigation strategies, I keep seeing the same risk management solution appear: Training.

Bad usability? We’ll train the operators to be more careful.

Cybersecurity concerns? We’ll provide training to make the users more aware.

Manpower issues? We’ll train our workforce to make them more productive.

These plans may come from my graduate students, but I’m sure that similar thoughts have gone through real-world system designers’ minds. Training is commonly used as a band-aid for system design issues that should have been prevented earlier in the system’s lifecycle.

Why isn’t training the (only) answer?

Training is good, but it shouldn’t be the whole risk management strategy, and it certainly shouldn’t be listed as the main risk mitigation approach for most system design issues. Too often, I’ve seen groups adopt the idea that “training is a solution for all problems,” which is an issue for several reasons:

First, training is (essentially) the last opportunity to address systems-level problems and risks. Before reaching the training stage, system developers have many opportunities to build safeguards or improved design features into the system itself. More often than not, these built-in design features address system issues more effectively and efficiently than training personnel to work around them. For example, if a system has a poorly human-factored Graphical User Interface (GUI), the developers could spend money upfront to improve the design, or they could implement training downstream to help operators learn the non-intuitive interface, incurring repeated training costs, achieving lower effectiveness, and probably frustrating the operators in the process.

Second, focusing too myopically on training can reinforce a “blame the operator” culture. Although most failures involve some incident at the “acute” (or “tactical” or “on-the-ground”) operator/system interface, that doesn’t mean that operator errors are the primary cause of most system failures. As James Reason’s well-known “Swiss Cheese” model shows, a system failure requires multiple “holes” to line up: both acute and latent issues must align to cause a failure. Training generally affects only the acute errors; additional strategies are needed to mitigate the other latent “holes” within the system.

Third and finally, training is not a magic bullet. Think about all of the training courses you’ve experienced. How effective were most of them? Did they leave you performing flawlessly once you completed them? It’s difficult and expensive to train a workforce reliably and effectively, and it’s unreasonable to expect humans to perform without error. So, while training is important, it shouldn’t be the sole defense against risks, particularly when system design features implemented earlier in the system lifecycle could manage those risks more effectively and affordably.
