The APQC Blog

Revealing Common Mistakes in Using Benchmarks and Analytics to Improve Performance

APQC asked process management experts about the keys to driving decisions with data, the common mistakes organizations make when using benchmarks and analytics, and why selecting the right measures is so critical. These experts will be speaking at APQC’s Process & Performance Management Conference, October 1-5.

The roundtable participants include:

  1. Joanne Gutowsky, QMS Engineer and Change Agent with CJ Systems Inc.

  2. Phillip Seawright, Vice President with TriCorps Technologies

  3. John Tesmer, Director, Open Standards Benchmarking®, APQC

What are the common mistakes organizations make in predictive analytics and benchmarking when trying to meet goals and identify improvement opportunities? 

Joanne Gutowsky: One of the greatest challenges I find when working with organizations is their lack of a holistic understanding of the measures they identify for benchmarking. Too often they fail to explore the following questions about their benchmarking measures:

  • Where did it come from?
  • Why was it used?
  • Why not something different?
  • What was the environment, culture, or rationale for creating that measure?

Often the answers reveal an opportunity for organizations to set more realistic, meaningful measures for their own purposes. Benchmarks are just that… something to work from, not copy, as no two environments are identical.

Phillip Seawright: Assuming an effort is underway, we see executives continue to ignore data and fall back on intuition in their decision making. Executives tend to rise to the top and assume they made good decisions to get there, without really knowing whether that is true. We also see a lack of storytelling skill in the person or team that creates and conveys the analytics reports. This can stem from an inability to show the thoroughness of the analysis or the testing and results of various scenarios, and even from poor visualization techniques.

John Tesmer: Picking too many measures. I see it frequently: companies select dozens of measures. Another way to look at this problem is to answer this question: sure, we can define all these measures, but should we? If your measures don’t track directly to an action, then why measure them? For example – don’t waste time collecting, tabulating, building dashboards, and tracking performance of a suite of measures when, in reality, you just need two or three measures with discrete actions associated with them.

How do smart organizations identify which data matters for improvement opportunities?

Joanne Gutowsky: Smart organizations recognize the importance of stating their scope or purpose for the data. This allows them to be smarter about what needs to be collected; where it will be best handled or managed; who should be involved in its collection, maintenance, reporting, and use; and how it will be analyzed. With these things understood, the data becomes both meaningful and value added.

Phillip Seawright: We find that business users really need to visually understand the complexity of processes, relationships, systems, and data. This understanding helps them more easily identify their data hunting grounds. Typically, we find it valuable to create two of the following types of visual ‘maps’:

  • Processes—with swim lanes and actors and key data captured at key steps.
  • Systems—identifying key processes in each system and data flows to piles of data (we want to show where data is not flowing nicely).
  • Relationships with customers—to show which platforms and people are used throughout the buyer or customer support journey. Most buyer journeys have at least two internal teams and between two and eight technology platforms.

John Tesmer: Smart organizations think not only about a measure itself but also about how the measure behaves in conjunction with other measures. To understand which data points are important, look at behaviors in your measures, such as velocity of change or historical performance patterns. Another thing to consider is that a large array of measures may be necessary to identify the root cause of an issue. This could include looking at the productivity of a process in several ways, such as its cost, effort, and transaction volume. For example, if costs are going up and transaction volume is going down, but effort is remaining static, what does that indicate?
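
As a rough illustration of what looking at measure behavior can mean in practice, here is a minimal Python sketch. The measure names, data, and threshold are hypothetical, not from APQC; it simply computes a velocity of change for a few related measures and flags the cost-up, volume-down, effort-static pattern described above:

```python
# Minimal sketch: hypothetical monthly values for three related measures.
# The data, names, and thresholds are illustrative assumptions.
measures = {
    "cost_per_transaction": [4.10, 4.25, 4.40, 4.62],
    "transaction_volume":   [9800, 9500, 9100, 8700],
    "effort_hours":         [400, 402, 399, 401],
}

def velocity(series):
    """Average period-over-period change: a simple 'velocity of change'."""
    deltas = [b - a for a, b in zip(series, series[1:])]
    return sum(deltas) / len(deltas)

trend = {name: velocity(vals) for name, vals in measures.items()}

for name, v in trend.items():
    print(f"{name}: avg change per period = {v:+.2f}")

# Interpret the measures together, not in isolation: rising cost with
# falling volume but flat effort hints at a fixed-cost or utilization issue.
flat_band = 2.0  # hypothetical tolerance for calling effort "static"
if (trend["cost_per_transaction"] > 0
        and trend["transaction_volume"] < 0
        and abs(trend["effort_hours"]) < flat_band):
    print("Pattern: cost up, volume down, effort static -> investigate utilization")
```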

In your experience, how can analysts help overcome resistance to data-driven decision making?

Joanne Gutowsky: Analysts need to recognize their roles as “change agents” and that data-driven decision making must be part of the organization’s culture, especially in older organizations where good-old-boy relationships may exist. They cannot get discouraged or take things personally (related to their work) when people resist, challenge, or even deny the facts. Resistors are often fighting the process and habits built from the previous culture, not necessarily the information. So, analysts need to stay factual, stay consistent, and stay kind. The results will speak volumes to those who are eager for change.

Phillip Seawright: We take a multi-pronged approach that includes a mix of examples, key concepts, and exercises to help address resistance.

We typically start by showing three key scenes from Moneyball to explain organizational dynamics when shifting from intuition-led to data- or metrics-led decision making. The final scene we show is where Billy Beane (played by Brad Pitt) identifies the three “flawed” players who will replace the All-Star who went to the Red Sox.

Then we explain cognitive biases and summarize key concepts from Daniel Kahneman’s book Thinking, Fast and Slow, typically starting with the bat-and-ball question to explain the System 1 and System 2 concepts.
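
The puzzle itself is short: a bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball; how much does the ball cost? The intuitive System 1 answer is 10 cents, but a quick System 2 check, with the ball’s price written as b, shows otherwise:

$$b + (b + 1.00) = 1.10 \;\Rightarrow\; 2b = 0.10 \;\Rightarrow\; b = 0.05$$

The ball costs 5 cents, and the gap between the fast answer and the worked one is exactly what makes the question a useful icebreaker.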

Finally, we conduct a short exercise in which we ask executives and their teams to identify which common cognitive biases they have experienced when making a decision and how those biases impacted their organization. This approach helps the teams apply the information, internalize the common biases, and see how they have affected their own work.

John Tesmer: The key is to understand how to manage the knowledge around the measures and the results of analysis. Most organizations prepare monthly dashboards, but few have the information management capability in place to understand what is happening and how to deal with it. So, analysts need to address and convey answers to the following questions:

  • Have you established a plan for each measure?
  • Do you know what to do when it goes high or low?
  • Do you know who to talk to?

Sometimes in the heat of the moment, when you’re fighting that fire, all you want is a playbook to help you think through the next steps.
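
Such a playbook does not have to be elaborate. As a rough sketch (the measures, thresholds, actions, and contacts below are hypothetical, not APQC’s), each dashboard measure can be paired with a target band, a next step for high and low readings, and an owner to call:

```python
# Hypothetical measure playbook: pairs each dashboard measure with a plan,
# thresholds, and an owner so "what now?" has an answer in the moment.
PLAYBOOK = {
    "cycle_time_days": {
        "target_range": (3, 5),  # acceptable band
        "if_high": "Check approval queue backlog; escalate to process owner.",
        "if_low": "Verify data capture; confirm steps aren't being skipped.",
        "owner": "ops-process-team@example.com",
    },
    "cost_per_transaction": {
        "target_range": (4.00, 4.50),
        "if_high": "Review volume trend and staffing before cutting spend.",
        "if_low": "Confirm quality measures haven't degraded.",
        "owner": "finance-analytics@example.com",
    },
}

def next_step(measure, value):
    """Return the playbook action for a measure reading."""
    plan = PLAYBOOK[measure]
    low, high = plan["target_range"]
    if value > high:
        return f"{plan['if_high']} (contact: {plan['owner']})"
    if value < low:
        return f"{plan['if_low']} (contact: {plan['owner']})"
    return "Within target range; no action needed."

# Example: a cycle time of 7.2 days is above the band, so the playbook
# answers both "what do I do?" and "who do I talk to?" in one lookup.
print(next_step("cycle_time_days", 7.2))
```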