Stop Being Afraid of Knowledge Management Forecasting and Measuring

I interviewed Phillip Jones, manager of the change management practice at Access Sciences Corporation, about forecasting the benefits and measuring the results of knowledge management.

Phillip Jones will be a breakout session speaker at APQC’s 2017 Knowledge Management Conference April 27-28. You can learn more about APQC’s 2017 KM Conference here.

APQC: Why is forecasting the benefits and measuring the results of a KM program so difficult?

Phillip: KM practitioners have a hard job. Businesses generally recognize the value of knowledge and of the skills and expertise people need to do their jobs. But what is knowledge? It’s not a tangible thing, and we love measuring tangible things: things we can count and things we can see.

The impacts and results of KM lack that physicality. We have to be creative. To measure the results of KM, we have to widen our scope and be willing to embrace some uncertainty in the results. The forecasts and measurements we get may not be exact, but they are going to be much better than relying on hope and expectation (which we often do when measuring knowledge work).

APQC: What are KM practitioners’ biggest concerns about forecasting and measuring?

Phillip: The first is probably that they think they cannot forecast and measure convincingly. Because they are dealing with knowledge work and abstraction, they may not think that anyone will buy their forecasts or their reporting. I understand that concern. They are going to need to get their audience to appreciate that measurement, as Douglas Hubbard puts it in his book How to Measure Anything, is not telling you an exact outcome. Measurement is reducing the amount of uncertainty you have about the outcome.

Our practitioners need to embrace that second definition. We need to feel comfortable making a claim of likely outcomes and results, and embracing a little bit of math even if that’s not our formal training. (It’s not mine, but the resources are there to help!)

I also think fear is a big concern. I’m always nervous sharing these models, because so much of it was learned out of necessity. I don’t have the hard science and math background many of my peers do. Sometimes impostor syndrome kicks in when I work these models. But I’ve done my due diligence and I’ve worked with people who are smarter about this than I am. Most importantly, my KM and change experience validates the results after we run the models. I think most KM practitioners will have that same solid reality check.

Being able to describe why you are doing a thing and how you arrived at a conclusion (while maintaining intellectual integrity and not overstating results) makes the whole process less terrifying.

APQC: What forecasting and measurement models work in KM, and what lessons can KM take from other fields?

Phillip: This is where we go into more art than science. The model looks like this:

  1. Break down what you’re doing with good questions
  2. Get to the core principles
  3. Make good estimates, calibrated with whatever input you can gather
  4. Make a model that follows those relationships
  5. Simulate with multivariate testing
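
To make steps 3 through 5 concrete, here is a minimal sketch in Python (a language the interview does not mention; every name, range, and number in it is an illustrative assumption). One common way to read “simulate with multivariate testing” is a Monte Carlo-style run: draw each calibrated estimate from its range, combine the draws in the model, and repeat many times.

```python
# Minimal sketch: a Monte Carlo-style run over calibrated estimates (steps 3-5).
# Every name, range, and number here is a hypothetical placeholder.
import random

N_TRIALS = 10_000

def one_trial():
    """Draw each estimate from its calibrated range and combine them in the model."""
    adopters = random.uniform(200, 400)               # employees who actually use the KM tool
    hours_saved_per_week = random.uniform(0.25, 1.5)  # time saved per adopter per week
    loaded_hourly_cost = random.uniform(40, 80)       # fully loaded cost of an hour of work ($)
    working_weeks = 46                                # weeks per year, net of leave
    return adopters * hours_saved_per_week * loaded_hourly_cost * working_weeks

results = sorted(one_trial() for _ in range(N_TRIALS))
p05, p50, p95 = (results[int(N_TRIALS * p)] for p in (0.05, 0.50, 0.95))
print(f"Estimated annual benefit: ${p05:,.0f} to ${p95:,.0f} (median ${p50:,.0f})")
```

Uniform draws stand in here for whatever calibrated distributions you actually elicit; the structure is what matters: ranges in, a simple model, many trials, a range out.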

There are many fields that deal with fundamentally human behaviors that don’t lend themselves to deterministic models. For example, one of the most exciting fields for analytics and prediction is baseball. Sport is such a fundamentally human endeavor: It is psychological, inconsistent, and imperfect. Players go on hot and cold streaks, they challenge assumptions; nothing is guaranteed. But organizations that have improved their forecasting and measurement models have an edge: they can reduce the uncertainty of player and team performance. It has also come up a lot in voting patterns and election prediction, though that might not be a great example at the moment…

APQC: How can an organization determine the best model to help it anticipate and measure the results of its KM initiatives?  

Phillip: You’re going to have to be creative. You have to start with asking the right question. Whatever question you ask first is probably not the right one for finding your model.

Let’s use the baseball analogy again. Suppose you are looking to sign a blockbuster free agent. It’s going to cost you a lot of your budget to sign him, so you need to be sure it’s worth it. Your first question would be some variation of: “Is this player worth 100 million dollars over four years?” That’s the business question, but it’s not the measurement question. Worth is relative.

What makes a player worth something? Is it increased ticket sales? That’s one measure. Is it wins? That’s a different measure, and probably what most (not all) teams are worried about.

So now you look at how you measure the number of wins that player will give you over the players you already have (or could get for much cheaper). How do I know how many wins that player is worth? Is it how many runs he scores? Home runs? His batting average?

Then I have to ask if that performance will stay the same. If that player is 32 years old, the likelihood that he will still be hitting the same average in four years is different from that of a player who is 26.
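
As a toy illustration of that last link in the question chain (every number below is made up; this is not a real projection method), a hypothetical age adjustment might look like this:

```python
# Toy sketch of the chain's last question: will current performance hold?
# PEAK_AGE and ANNUAL_DECLINE are invented placeholders, not real baseball figures.

def projected_average(current_avg: float, age: int, years_ahead: int) -> float:
    """Apply a crude annual decline for each projected season past a hypothetical peak age."""
    PEAK_AGE = 29
    ANNUAL_DECLINE = 0.006  # assumed drop in batting average per season past the peak

    avg = current_avg
    for year in range(1, years_ahead + 1):
        if age + year > PEAK_AGE:
            avg -= ANNUAL_DECLINE
    return avg

# A 26-year-old and a 32-year-old hitting .300 today project very differently in four years.
for starting_age in (26, 32):
    print(starting_age, round(projected_average(0.300, starting_age, 4), 3))
```

The exact curve doesn’t matter; what matters is that “will this hold?” becomes an explicit, checkable assumption instead of an unspoken one.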

So, to bring it back to KM… If you are a practitioner you need to follow that same chain of questions. There is a business question you want to ask, something like: “Is this program going to give me a good return on investment?” That’s a great question, but you are going to have to break that down to get to the right answer.

When you’ve unpacked it, you can look at your toolkit and say: “What is going to give me the best measure of these component questions and reduce my unknowns, so I can roll them back up to the original question and answer it with more certainty than I had before?”
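
As a hypothetical example of that roll-up (every component and figure below is a placeholder, not anything from the interview), the broad ROI question can be broken into a few estimated components and simulated, so the answer comes back as “the program pays for itself in roughly this share of plausible scenarios” rather than a single point value:

```python
# Sketch: roll component estimates back up to the business question,
# "Is this program going to give me a good return on investment?"
# All components, ranges, and figures are hypothetical placeholders.
import random

N_TRIALS = 10_000

def annual_benefit():
    reuse_hours = random.uniform(500, 2_000)      # hours saved by reusing existing content
    onboarding_hours = random.uniform(200, 800)   # hours saved ramping up new hires
    hourly_cost = random.uniform(40, 80)          # loaded cost of an hour of work ($)
    return (reuse_hours + onboarding_hours) * hourly_cost

def annual_cost():
    return random.uniform(60_000, 120_000)        # platform, support, and staff time

positive = sum(annual_benefit() > annual_cost() for _ in range(N_TRIALS))
print(f"Trials where the program pays for itself: {positive / N_TRIALS:.0%}")
```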

APQC: What are the most common mistakes made when setting up a model to measure KM results?

Phillip: Assuming the model is magic. No matter how much work you do, no matter how careful you are, you’re only reducing uncertainty, not removing it entirely. People put too much faith in their models, and then confirmation bias becomes stronger. You disregard data that challenge your original projections and give outsize weight to data that support them.

Another is not calibrating your estimates. You might have hoped I had a magic bullet. I don’t. Most of this approach is based on careful estimation in some areas. You can do this pretty well if you are intellectually honest and keep bias to a minimum. But if you don’t, you’ll find yourself tweaking estimates to make the model fit what you think.

A third is giving up before you start. You might think this is just a bunch of numbers and it’s all made up. That would be dismissing something that gives you insight, even if it is not absolutely assured. These tools are made to supplement decision making where no measures existed. If you already have measures, I’m not asking you to second-guess them. But the measures we can create with this approach will give you ways to reduce that uncertainty and, hopefully, see a more complete picture.