April 20, 2024

What to evaluate?

Author: Clark

In a couple of recent articles, the notion that we should be measuring our impact on the business has been called into question. And being one who says just that, I feel obligated to respond.  So let's get clear on what I'm saying and why.  It's about what to evaluate, why, and possibly when.

In the original article, my colleague Will Thalheimer calls the claim that we should focus on business impact 'dangerous'!  To be fair (I know Will, and we had a comment exchange), he's saying that there are important metrics we should be paying attention to about what we do and how we do it. And no argument!  Of course we have to be professional in what we do.  The claim isn't that the business measure is all we need to pay attention to, and he acknowledges that later. Further, he does say we need to avoid what he calls 'vanity metrics': measures of just how efficient we are. I think we do need to look at efficiency, but only after we know we're doing something worthwhile.

The second article is a bit more off kilter. It seems to ignore the value of business metrics altogether. It talks about competencies and audience, but not about impacting the business. Again, the author raises the importance of being professional, but still seems to be in the 'if we do good design, it is good' camp, without appearing to check whether the design is addressing something real.

Why does this matter?  Partly because, empirically, what the profession measures are what Will called 'vanity' measures. I put it another way: they're efficiency metrics. How much does it cost per seat per hour? How many people are served per L&D employee?  And what do we compare these to?  Industry benchmarks. I'm not saying these aren't important, ultimately. Yes, we should be frugal with our resources. We should even, ultimately, ensure that the cost to improve isn't more than the problem costs!  But…

The big problem is that we've no idea whether that butt in that seat for that hour is doing any good for the org.  We don't know whether the competency being trained addresses a gap that's keeping the org from succeeding!  I'm saying we need to focus on the business imperatives because, right now, we aren't!

And then, yes, let's focus on whether our learning interventions are good. Are we using best practices, the least amount of content that still does the job, and so on? Then we can ask whether we're efficient. But if we only measure efficiency, we end up taking PDFs and PPTs and throwing them up on the screen, if we're lucky, with a quiz. And that is not going to have an impact.

So I'm advocating a focus on business metrics because that's part of a performance consulting process that creates meaningful impact. Not in lieu of the stuff Will and the other author are advocating, but in addition. It's all too easy to worry about good design and miss that there's no meaningful impact.

Our business partners will not be impressed if we're designing efficient, and even effective, learning that isn't actually doing anything.  Our solutions need to be targeted at a real problem and address it. That's why I'll continue to say things like “As a discipline, we must look at the metrics that really matter… not to us but to the business we serve.”  Then we also need to be professional. Will's right that we don't do enough to ensure our effectiveness, focusing only on efficiency. But it takes all of it, impact + effectiveness + efficiency, and I think it's dangerous to say otherwise.  So what say you?

