Monthly Archives: March 2011

A Small Planetary Diversion

Sadly, it looks like I won’t be able to get to the Ruth Clark workshop in Toronto on the 29th. Sometimes circumstances just don’t work in one’s favour.

But I’m still hopeful to get to the mlearning workshop in Ottawa at the end of April.

I may be a little quiet for the next two weeks but rest assured I’m not going away.

The Politics and “Business” of learning, Part 2

Here’s the second installment of some posts to my Assessment & Evaluation learners.

Small-p politics is always such a meaty subject and one that can sometimes become polarizing. So, I’m relieved on two fronts: first, that there’s a real richness of commentary here; and second, that the polarization seems to be almost non-existent. However, there are some additional things I’d like you to consider before this phase of the discussion wraps up.

[name]’s article from the Ottawa Citizen does illustrate one potentially disturbing trend in some sectors of the public education system, and that is ‘entitlement’. While one is entitled to an education (by law, in most cases) one is not entitled to a false assessment of one’s success. (In simpler terms, “if you want it, you gotta work for it”.) Indeed, I’m rather disturbed by the implication of an education system that seems to feel that a “negotiated” pass is more effective in the long run than learning from one’s failure. I see parallels in some youth sports where the philosophy is “we don’t keep score, and there’s no winner or loser.”

So consider this as you continue this discussion: What’s the impact on the learner when the assessment and evaluation framework can be rendered null through negotiation and false entitlement? What happens to them when they “really” fail at something? Or…in more practical terms, would you want your heart surgeon to be someone whose Mom & Dad went to bat for them when they didn’t get a passing score in Anatomy 101, and who thus scraped through Med School? Or would you want the confidence of knowing there’s some real rigour behind their lengthy training?

Now, let’s extend this discussion to workplace learning and we can consider formal and informal situations. What happens to the learner or the organization where compliance is an issue, and pass rates are forced upon the educator or assessor? Or, what happens when a peer coach doesn’t like telling someone they’re wrong about an interpretation of a key skill? Can you think of situations where this could have longer-term consequences?

While the learning content provided for your major project doesn’t have immediate life-or-death implications, consider the impact of the failure to meet outcomes. How do you support someone to “get there” and feel they have succeeded?

So, you’ve all hit on the nastiness of “politics” in learning. The question is: what are you going to do about it?

The Politics and “Business” of learning, Part 1

I posted the bulk of this entry to the forum for one of the two courses I’m currently teaching. The learners were sharing their observations and frustrations about politics and undue influence in supposedly objective evaluation frameworks.

So, mostly unedited, here is the first part for your perusal.

I must say that I am enjoying the discussions going on here and I wanted to add a few thoughts based on some of the recent comments. These thoughts are based on my own experiences working in a number of different learning environments. I offer these thoughts with the caveat that they’re somewhat of a blanket indictment; while I’m sure there are organizations who operate differently than those discussed here, what follows are my observations of a perceived norm across general corporate technical training vendors.

Both [name] and [name] spoke of the idea of wanting to be “liked” as a teacher/educator/instructor, and I don’t think anyone would disagree that there’s a small element of “ego” at work when you’re given the responsibility to train others. However, what one cannot lose sight of is the organizational interest in just how much learners “like” you. In organizations where training is provided ‘for profit’, customer satisfaction is huge, and rightfully so. However, it has been my experience that because many of these organizations are inserted as an “event fulfillment” provider rather than a strategic partner and stakeholder in someone’s learning process, the commitment to learning is somewhat less than it would be if the learning were facilitated through an in-house resource. Training vendors, therefore, are mostly concerned with “bums in seats”, preferably repeat ones. So, high satisfaction scores on the end-of-course smiley sheet become the almighty metric for vendor, buyer, and trainer/educator.

This leaves the educator in a bit of a dilemma: Do you do everything but stand on your head to chase a perfect evaluation score that tells you nothing about what you should be improving, or do you risk the wrath of those monitoring your scores by asking your learners to be genuine? Consider, too, whether the educator can really say whether the participant actually “learned” enough to put new skills and ideas into practice.

(As a sidebar, consider a different environment like military training. Based on my own experiences on both sides of the equation, I know there were very few instructors that I “liked”, in fact, there were a number of them that I cordially detested…but I learned something from each of them. As an instructor and later an instructor coach/monitor, I knew that my role was not to be “liked”, but to be an effective trainer/coach, and to be a positive role model, and to inspire the people I was responsible for. In that environment, instructor “likes” aren’t the metric of the day. Successful performance of the trainee definitely is.)

So when we look at the “business” of training, what it means in terms of evaluation practice is that evaluation and assessment tend not to happen through a full cycle of any kind. Most of these folks are living at Level 1 of the venerable Kirkpatrick model of evaluation and are either unable to proceed deeper or unwilling because of the business model. Ultimately, the learners are the ones who lose. Because there’s such limited awareness of other frameworks, the Linus-blanket of the smiley sheet prevails to the detriment of all.

One of the aims of this course is to show people that there’s more to evaluation and assessment than just sticking a survey form under a learner’s nose and asking for their opinion, or giving them some multiple choice test that doesn’t really reflect what they need to know. This discussion should really help to hammer home the fact that putting an effective framework in place AND following through with it is what will really give you the full picture on learner success and the direct impact on the organization.

For some additional reading, if you can get your hands on it, I would draw your attention to Mann & Robertson (1996) for a thought-provoking discussion on evaluation of training initiatives. For example, the survey cited in this article says that over half of the US companies surveyed (52%) used trainee satisfaction as the key metric, 17% assessed transfer of knowledge to the job, 13% examined organizational change, and 13% didn’t evaluate any element of their training initiatives.


Mann, S. & Robertson, I. (1996). What should training evaluations evaluate? Journal of European Industrial Training, 20(9), 14-20.

S2 Q9) Best bottom-up learning implementation. Or, at least, my most memorable one. (apologies to @LnDDave)

I pondered the answer to this question for a while because it’s been some time since I did any real bottom-up learning, but I drew on one of my experiences in the Army Reserve as an example, arguably the one I am most proud of, although I won’t lay claim to the original idea, only its implementation for some of my soldiers.

In my ‘trade’ in the Army (Armoured Reconnaissance, “recce” to the Brits and Aussies/Kiwis, and ‘armored cavalry scouts’ to the Americans), Armoured Vehicle recognition was a key skill required at all levels.  At the time, we were still training to operate in a Cold War-type, conventional environment as opposed to the regional and sectarian strife going on today.

The ‘traditional’ method of AFV recognition was through slide decks. In this case, real photo slides, because PPT wasn’t widely used in field training at that time. One of the problems with this training environment is that many of the photos weren’t realistic. Many of them were like “dealer” photos. The other problem was that the photos didn’t represent what these vehicles might look like at a distance, from different angles, or half-hidden, etc. In short, success in AFV recognition in training scenarios came down to slide memorization and an ability to draw on a few memorized characteristics in case you got stuck.

On one exercise, some Regular Force folks put a few of us into a mock observation post, gave us binoculars and had us peer out to see what we could see.  The Reg Force guys (being better funded than us part-time soldiers) had some 1/76 scale models laid out in a few areas and wow, were they ever hard to spot.  It made recognition more of a challenge and at that point I had the germ of an idea.

So, long story short, a year or so later, I was teaching the on-weekends version of the Corporal’s Qualifying Course in Recce and I talked the Course Officer into letting me handle the AFV recognition portion. Fortunately, I was (and sometimes still am) an avid scale model builder and I had a very large array of 1/35 scale vehicles. But, rather than using those instead of slides, I booked the indoor range as my classroom. Through a little bit of math, I set up a simulated environment where the soldiers were looking at vehicles that appeared to be 800 m to 1100 m away. I set up some ‘terrain’, borrowed some camouflage nets and a few other tricks, and laid out a pretty challenging scenario for the students.
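(As a sidebar for the curious, the “little bit of math” was simple angular-size scaling: a 1/35-scale model placed d metres away subtends roughly the same angle as a full-size vehicle at 35 × d metres. Here is a minimal sketch of that arithmetic; the function names are just illustrative, not anything we used at the time.)

```python
# Angular-size scaling: a 1/n-scale model at d metres looks
# like the full-size vehicle would at n * d metres.

def apparent_distance(model_distance_m, scale=35):
    """Distance at which a full-size vehicle would subtend
    the same angle as the scale model."""
    return model_distance_m * scale

def model_placement(target_distance_m, scale=35):
    """Where to place the model to simulate a target distance."""
    return target_distance_m / scale

# To simulate vehicles at 800 m and 1100 m with 1/35 models,
# place them roughly 23 to 31 metres down the range:
print(round(model_placement(800), 1))   # ~22.9 m
print(round(model_placement(1100), 1))  # ~31.4 m
```

So an indoor range a few dozen metres deep is plenty to stage a convincing 1 km recognition problem.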

After a general briefing on the principles of recognition, the soldiers were taken down to the range, handed binoculars, told that there were almost 40 vehicles out there, and given 15 mins to identify them all from their ‘distant’ vantage point.

While the scores were lower than with slide memorization, the activity was a big hit with them. They felt it was far more realistic, and understood just how hard it could be to accurately identify these vehicles at a distance…because reporting a fleet of jeeps is one thing, but if what you really saw was a fleet of tanks heading in your direction, the implications are a little different. 😉

The real confirmation of that success came when an officer I knew from an infantry regiment at our Armoury happened to be in on that weekend.  He was downstairs and saw what I was doing on the range.  He asked to sit in and simultaneously asked if I would run the same training for his Anti-Armour troops and then cleared it with my CO.

So while it wasn’t e-learning at all, I like to think that I set up a good environment for learning and it wasn’t something that would have come from the top-down.

You know it’s been a productive day, when…

…when you realize your initial LrnBk Chat posts from the night before for Section Two were really well received (as the day starts)

…when you manage a course-correct with a client who was about to deliver some very disappointing e-learning to their customer and get them turned around in 90 minutes. (in the morning)

…when you find yourself unexpectedly in a sales discussion with two ex-Veeps from your former employer who sought you out to maybe build some solutions for them (over a long lunch, and you’re not even in a sales role)

…when you really catch the attention of a “Big 4” client on a new e-learning pilot (in the late afternoon)

…when you look at the time and realize that you have no synapses left to fire to participate in the weekly #lrnchat.

That, dear reader(s), is a productive day.

(now if I could just turn off my buzzing brain….)

Q7) “doing stuff” at work or “learning”? A longer post, just for @LnDDave.

When I read this question (which I meant to answer last week), I was reminded of an interview I had after getting out of my college Graphics Program about a million and a half years ago – long before I considered my part-time training work to be anything other than just that.

When the rather terse interviewer asked me what I expected out of the job, one of the things I said was that I wanted an opportunity to learn something.  His response was something along the lines of “oh, you’re not here to learn. You should know everything you need already to get started.”

Needless to say, I didn’t get the job…and thank heavens for that.

With respect to Clive’s statement, I (sorta) disagree, but let me first talk about the leaders.

In a number of environments, including some that should know better, there is a “culture of execution” among Sr. Management, and very little consideration given to what I now know is “informal learning”, or even continuing education.  What I find ironic is that if something goes wrong and someone gets hauled on the carpet, invariably one of the questions that gets asked is “well, what did you learn from this?”  I worked as a promoted-from-within Manager for a national technical training provider and I had to fight an uphill battle to get management to realize that their trainers needed time to prep for new courses as well as improve existing parts of their repertoire. It took me quite some time to get them to lower the “utilization” metric (meaning, days in the classroom) so that the trainers weren’t being forced to prep entirely on their own time.

So, I see a bit of a divide between the knowledge worker and the manager in that the knowledge worker will often be forced through circumstance to “learn” in order to “do stuff”, and is frequently left to their own, likely inefficient, devices.

For me, I know that I used to go to work to ‘do stuff’ and gave very little consideration to the learning involved, but as I’ve become more aware as a learner, I am trying to be more conscious of the things I learn along the way of ‘doing stuff’, even the painful or frustrating things. 

So, I disagree with the statement because I’m not convinced that ‘doing’ and ‘learning’ should be two separate things.

Q6) Courses, not resources: where not to do it, and Q6a) What are we doing to change?

Q6) BBC turned away from courses and toward resources. Are there organizations where this would not be effective?

I can see organizations that are heavily regulated or have strong compliance requirements remaining largely in the course model. I’m thinking of organizations where a lack of “training” may translate into a genuine risk to individuals, organizations, or the environment. So, orgs like airlines, some primary healthcare providers, or maybe even the military, although I’d love to eventually be proven wrong on all counts.

Q6a) If you are working towards this vision, what steps are you taking?

Our catalyst was the decision two years ago to partner as a reseller for a rapid e-learning development platform. It gave us some serious flexibility in asset development that wasn’t present in our previous dependence on tools like Flash. I know I am also trying to influence the decision-makers, select clients, and our account execs on how we can position these resources as a stronger service offering, one that reflects a more realistic model for how people want to learn in the workplace.

Lrntect Q1 Response

Q1) Shepherd says “As none of these [learning methods, learning media, the science of learning] is intuitive and obvious, the client cannot be expected to have this expertise. And for this reason, it is neither sufficient nor excusable for the learning architect to act as order taker.” What are some ways you avoid being an order taker?

Our first defense against order-taking is knowledge and ongoing learning. It has been my experience (personally and from observation) that if you hit a plateau in skills or execution, you can only respond by “filling orders” based on previous, apparently similar requirements. So if you don’t bother staying abreast of new developments or alternate approaches, you will be stuck in a world of “that’s the way we’ve always done it.”

I also believe that order-filling is a result of a failure to fully understand the nature of the needs of the client and/or the learner. In these situations, our desire to give the client “what they asked for” in the chase for billable services outstrips our responsibility to give them “what they really need”.

On a more aggressive stance, at what point do we decline these “McCourses” when the client cannot be swayed from their stance? Do we simply bite our tongues and do it, or realize that the relationship is not going to be a win-win and walk away? I realize this gets into a whole other topic of client influence and business development, but do we keep perpetuating bad practice for the sake of revenue?