Category Archives: evaluation

Will at Work Learning: Net Promoter Score — Maybe Fine for Marketing, Stupid for Training

From Will Thalheimer…

More and more training departments are considering the use of the Net Promoter Score as a question–or the central question–on their smile sheets.

This is one of the stupidest ideas yet for smile sheets, but I understand the impetus–traditional smile sheets provide poor information. In this blog post I am going to try and put a finely-honed dagger through the heart of this idea.

via Will at Work Learning: Net Promoter Score — Maybe Fine for Marketing, Stupid for Training.

My take: something done poorly is best not done at all…and that sums up most of my feelings on the use of smiley sheets as the sole measure of “training success”. I recall my days as an MCSE/MCT for a major corporate training provider here in Canada. Microsoft Curriculum demanded a feedback form after every class. We were supposed to send them to MS Canada, but apparently even they didn’t bother looking at them in detail. However, woe betide any MCT who didn’t score highly. As for me? I was less concerned about the numerical scores. I used to tell my students, “a 5 or 6 out of 7 with some comments about what you feel needs improvement is of much more value to me than a 7 out of 7 with no comments at all.”

As time has gone on, I have fallen further away from Kirkpatrick’s model (Dan Pontefract’s comments on it notwithstanding) and I prefer to use other methods for evaluation. Will is very interested in “mythbusting” in the L&D space, and this post is another example of him taking on practices that persist in L&D, to our collective detriment.
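(A quick aside for anyone who hasn’t bumped into NPS before: it comes from a single “how likely are you to recommend us?” question scored 0–10. Scores of 9–10 count as promoters, 0–6 as detractors, and the score is simply the percentage of promoters minus the percentage of detractors. Here’s a minimal Python sketch of that arithmetic, with made-up ratings rather than anything from Will’s post, just to show how much of the smile-sheet story a single number throws away.)

```python
# A minimal sketch of the standard Net Promoter Score calculation.
# Illustration only; the ratings below are invented, not real smile-sheet data.

def net_promoter_score(ratings):
    """Return the NPS (-100 to +100) for a list of 0-10 'would you recommend?' ratings."""
    if not ratings:
        raise ValueError("no ratings supplied")
    promoters = sum(1 for r in ratings if r >= 9)   # 9-10
    detractors = sum(1 for r in ratings if r <= 6)  # 0-6 (7-8 are "passives" and ignored)
    return 100 * (promoters - detractors) / len(ratings)

# Two very different classes -- one polarized, one uniformly lukewarm --
# produce exactly the same score.
print(net_promoter_score([10, 10, 0, 0, 7, 7]))  # 0.0
print(net_promoter_score([8, 8, 7, 7, 7, 8]))    # 0.0
```

Nothing in that single number tells the trainer what to improve, which is exactly the kind of poor information Will is warning about.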

e-Learning Project Autopsies – What Quincy M.E. can teach us about e-Learning Projects

Photo credit: http://sharetv.org/shows/quincy_me

After five straight days dissecting and evaluating a select group of e-learning projects developed by external vendors, I felt a little bit like I was playing Quincy, M.E. That’s honestly what the last week has felt like, because it was long, invasive, messy, and clinical. Yes, I know I’m dating myself somewhat, but CSI didn’t really work as a comparison because there’s so much “gee whiz” science involved there. Quincy, on the other hand, relied on instinct, experience, intuition, and good, old-fashioned detective work.

I’ll preface my comments by saying that this is not an indictment of the process or the players involved; rather, it’s a reflective summary of some things I wanted to share about how these kinds of projects can work more effectively down the road.

Every so often, we have to go outside the organization to get some development work done, and we’re always asked to do some kind of review. Well, with three particularly challenging projects, our review turned into a real post-mortem, and my old TV memories from my formative years came to mind.

In no particular order, here are the things that I learned from this process, thanks to ol’ Quince.

1. If we’re doing an autopsy, we have more questions than answers.

If you’re finding that the review process is turning into an autopsy, you’ve likely missed a few key indicators of the “cause of death” of your project. To my way of thinking, there’s no such thing as “death from natural causes” on an e-learning project. What is surprising is that “foul play” could be a contributor. What’s more likely in many cases is a bit of a blanket crime I’ll call “educational malpractice”. The line between that and “foul play” is that there’s usually no malice in “malpractice”.

What does this mean for the L&D professional? You really need to have the proverbial ducks in a row when embarking on a project. While it’s just as important for in-house efforts, you’d better make sure you have communicated a very clear vision to your external developer, and you need to know what the project plan is actually going to look like. In Air Force terms, we like to avoid trying to fly the airplane while it’s still being built. For corporate folks, it may mean having to answer some very uncomfortable questions about the project, its aims, and the amount of money thrown at it…especially if you wind up either pulling the plug or getting something that doesn’t work for you, the learners, and the organization. As Nick Laycock recently pointed out (rightly), a post-mortem is likely a self-fulfilling prophecy and carries a negative connotation.

2. Simple detective work is never easy

Quincy was portrayed as someone who relied as much on deductive reasoning as he did on his microscope and scalpel. More importantly, he also spent time outside the lab asking questions and putting a big picture together.

You will likely spend a lot of time checking and re-checking decisions and outcomes. As much as you’ll rely on the “physical” evidence (I guess we can include email trails there), you’ll also need to ask questions and piece the bigger picture together. Don’t be pushed into completing findings by a certain date if you don’t have all the facts. In short, don’t give up on a review (or fail to conduct one), because there’s always something to learn.

On a more positive note, humans are notorious for failing to acknowledge what they have done correctly (with the exception of those possessing an overabundance of hubris), so why not take a look at the things that went well and strive to repeat them?

3. Some wounds are self-inflicted

Try as some might to make a murder look like a suicide, there are tell-tale signs that a good Examiner can always spot (not all of them physical).

The L&D professional has to rely on their instincts as well as the evidence, and they have to look at a lot of contributing factors in the project: everything from project methodology, through communications, to a critical evaluation of their own efforts. For all you know, a decision made at your end that seemed innocuous at the time might have been the proverbial butterfly that triggered the hurricane later. Other things that get in the way are the ones project management professionals cite time and again: lack of sponsorship, “scope creep”, lack of flexibility, poor risk assessment, poor communications, and so on. I have been witness to (and occasionally a contributor to) some of these issues on projects.

4. Not all wounds are fatal

One of Quincy’s skills was his ability to sort through multiple traumas and figure out which one was fatal. This distinction was critical when more than one wound presented itself.

Similarly, I cannot think of a project that has been error-free. In fact, there are times when I think Murphy was a project manager. Errors, on their own, aren’t necessarily bad things. They become bad when they are covered up, unrecognized, or dismissed. So, what do we do? First, let’s acknowledge that errors happen. A well-run project will probably have some risk analysis done in the planning stage, and that can serve as a guide for dealing with contingencies. Next, if errors happen, don’t hide them. Acknowledge them. Be transparent. Examine them, take a good look at the potential impact, and adjust as needed. Embrace them as chances to learn. Radical? Sure it is! But think about the gains from honesty versus the costs of deceit. I know I’d have greater respect for a vendor who openly acknowledges an error AND has a plan to address it, as opposed to one who has been providing sunshine-and-roses updates that don’t really reflect what’s going on in the background.

5. Build a library and share findings

Can you imagine if every piece of evidence gathered for a case had to stand on its own, with no linkage to similar happenings or other revelations? Unthinkable in police work or medicine, but it seems to be a fact of life for organizations that have multiple projects on the go.

Part of your project plan should include a trek through your own archive of projects, findings, and lessons learned. By starting out with this “let’s repeat success” mindset, you’re more likely to work out a stronger plan and set of goals than you would if you ignored the potential learning from previous projects. With the ubiquity of social media technologies and the growth of Personal Learning and Personal Performance networks (thanks, Mark Britz), we can pose questions, share lessons learned, and collectively improve our learning projects from concept through implementation.

Besides…we’re L&D professionals. It’s in our nature to learn. We spend a lot of time and energy promoting learning’s virtues, but every so often (and I know I’m not immune to this) we’re blinded to our own need to acknowledge certain lessons. To borrow and re-purpose a phrase, “Educator, teach thyself”.

The Politics and “Business” of learning, Part 2

Here’s the second installment of some posts to my Assessment & Evaluation learners.

Small-p politics is always such a meaty subject and one that can sometimes become polarizing. So, I’m relieved on two fronts: first, that there’s a real richness of commentary here; and second, that the polarization seems to be almost non-existent. However, there are some additional things I’d like you to consider before this phase of the discussion wraps up.

[name]’s article from the Ottawa Citizen does illustrate one potentially disturbing trend in some sectors of the public education system, and that is ‘entitlement’. While one is entitled to an education (by law, in most cases), one is not entitled to a false assessment of one’s success. (In simpler terms, “if you want it, you gotta work for it”.) Indeed, I’m rather disturbed by the implications of an education system that seems to feel a “negotiated” pass is more effective in the long run than learning from one’s failure. I see parallels in some youth sports where the philosophy is “we don’t keep score, and there’s no winner or loser.”

So consider this as you continue this discussion: What’s the impact on the learner when the assessment and evaluation framework can be rendered null through negotiation and false entitlement? What happens to them when they “really” fail at something? Or…in more practical terms, would you want your heart surgeon to be someone who had Mom & Dad go to bat for him/her when they didn’t get a passing score in Anatomy 101 and thus scraped through Med School? Or would you want the confidence of knowing there’s some real rigour behind their lengthy training?

Now, let’s extend this discussion to workplace learning and we can consider formal and informal situations. What happens to the learner or the organization where compliance is an issue, and pass rates are forced upon the educator or assessor? Or, what happens when a peer coach doesn’t like telling someone they’re wrong about an interpretation of a key skill? Can you think of situations where this could have longer-term consequences?

While the learning content provided for your major project doesn’t have immediate life-or-death implications, consider the impact of failing to meet outcomes. How do you support someone to “get there” and feel they have succeeded?

So, you’ve all hit on the nastiness of “politics” in learning. The question is: what are you going to do about it?

The Politics and “Business” of learning, Part 1

I posted the bulk of this entry to the forum for one of the two courses I’m currently teaching. The learners were sharing their observations and frustrations about politics and undue influence in supposedly objective evaluation frameworks.

So, mostly unedited, here is the first part for your perusal.

I must say that I am enjoying the discussions going on here and I wanted to add a few thoughts based on some of the recent comments. These thoughts are based on my own experiences working in a number of different learning environments. I offer them with the caveat that they’re somewhat of a blanket indictment; while I’m sure there are organizations that operate differently from those discussed here, what follows are my observations of a perceived norm across general corporate technical training vendors.


Both [name] and [name] spoke of the idea of wanting to be “liked” as a teacher/educator/instructor, and I don’t think anyone would disagree that there’s a small element of “ego” at work when you’re given the responsibility to train others. However, what one cannot lose sight of is the organizational interest in just how much learners “like” you. In organizations where training is provided ‘for profit’, customer satisfaction is huge, and rightfully so. That said, it has been my experience that because many of these organizations are inserted as ‘event fulfillment’ providers rather than as strategic partners and stakeholders in someone’s learning process, the commitment to learning is somewhat less than it would be if the learning were facilitated through an in-house resource. Training vendors, therefore, are mostly concerned with “bums in seats”, preferably repeat ones. So, high satisfaction scores on the end-of-course smiley sheet become the almighty metric for vendor, buyer, and trainer/educator.


This leaves the educator in a bit of a dilemma: Do you do everything but stand on your head to chase a perfect evaluation score that tells you nothing about what you should be improving, or do you risk the wrath of those monitoring your scores by asking your learners to be genuine? Also consider whether the educator can really say that the participant actually “learned” enough to put new skills and ideas into practice.


(As a sidebar, consider a different environment like military training. Based on my own experiences on both sides of the equation, I know there were very few instructors that I “liked”; in fact, there were a number of them that I cordially detested…but I learned something from each of them. As an instructor, and later an instructor coach/monitor, I knew that my role was not to be “liked” but to be an effective trainer/coach, a positive role model, and an inspiration to the people I was responsible for. In that environment, instructor “likes” aren’t the metric of the day. Successful performance of the trainee definitely is.)


So when we look at the “business” of training, what it means in terms of evaluation practice is that evaluation and assessment really tend not to happen through a full cycle of any kind. Most of these folks are living at Level 1 of the venerable Kirkpatrick model and are either unable or unwilling to proceed deeper because of the business model. Ultimately, the learners are the ones who lose. Because there’s such a limited awareness of other frameworks, the Linus-blanket of the smiley sheet prevails, to the detriment of all.


One of the aims of this course is to show people that there’s more to evaluation and assessment than just sticking a survey form under a learner’s nose and asking for their opinion, or giving them some multiple-choice test that doesn’t really reflect what they need to know. This discussion should really help to hammer home the fact that putting an effective framework in place AND following through with it is what will give you the full picture of learner success and its direct impact on the organization.


For some additional reading, if you can get your hands on it, I would draw your attention to Mann & Robertson (1996) for a thought-provoking discussion of the evaluation of training initiatives. For example, the survey cited in the article found that over half of the US companies surveyed (52%) used trainee satisfaction as the key metric, 17% assessed transfer of knowledge to the job, 13% examined organizational change, and 13% didn’t evaluate any element of their training initiatives.

Reference:

Mann, S. & Robertson, I. (1996). What should training evaluations evaluate? Journal of European Industrial Training, 20(9), 14-20.