Why Process Professionals Need to Know About Analytics and Machine Learning


If you are into process improvement, now is the time to jump into your organization’s analytics and digital transformation efforts. A recent APQC study found that 53 percent of organizations plan to invest in advanced analytics and automation over the next 12 months. For the remaining 47 percent, it is likely not a question of “if” but “when.”

What do improvement professionals need to know to join in this wave of digital transformation?

I decided to ask Eric Siegel, an analytics guru. Eric’s mission is to make the how and why of predictive analytics (aka machine learning) understandable and captivating. He is the author of the award-winning book, Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die. Eric will keynote at APQC’s Process and Performance Management Conference in Houston October 4, 2018.

The first thing Eric did for me was demystify terms. Just like people, machines can learn from experience. Process data is, in essence, codified experience: which steps are in the process, what actually happened, how long it took or cost, and what the outcomes were. “Data is not a bunch of dry facts or boring ones and zeros; it’s a long list of things that have happened in the past,” Eric said. The more experience, the more nuanced the data and the learning can be.

Machine learning, also known as predictive analytics or predictive modeling, is when a computer learns from data to make predictions about the future. These predictions aren’t perfect, but they are often much better than guessing, especially for predictions around high-volume processes with a lot of historical data, like determining which customers are likely to buy more or which employees will quit after one year.

Eric gave an example of using process data to find a pocket of your customers who are three times more likely to cancel than average. That’s super helpful for aiming your expensive customer retention offers at the group most likely to leave. Other examples include fraud detection by predicting which transactions will turn out to be fraudulent, or reducing financial credit risk by predicting which individual credit card holders will turn out to be good or bad credit risks.

The computer is extracting or drawing generalizations from experience—the “training data.” Those generalizations are called a predictive model.

Eric pointed out an interesting side benefit of collecting this data in the first place. “This data wasn’t amassed in order to do machine learning. It’s just a side effect of doing business as usual. It’s transactional residue that accumulates and, lo and behold, it turns out this stuff is really valuable because you can learn from it. You can derive these patterns to help improve the very transactional processes that have been accumulating the data in the first place.”

It seems to me that process improvement and predictive analytics (aka machine learning) have a lot in common. First, both have well-defined methods and technologies. Second, process improvement, like predictive analytics, thrives on data and facts. Third, end-to-end processes crossing departments are often the biggest opportunity for gain. Process folks are used to building alliances across organizational silos.

Eric commented that the problem with this rich, end-to-end process data is that it may be disorganized, disparate, spread around, or siloed. Eric explains, “All of these different departments and silos have collected this data for their own reasons. First, you have to discover what you’re looking for and second, you have to convince everybody that they ought to let you use it for some third purpose.”

Conveying that “higher” purpose is key to buy-in. I am convinced that process improvement and predictive analytics only work if people get excited and make them work. Building that excitement sometimes goes by the sterile phrase “building a business case.” Eric offered an easy way to get started building a back-of-the-envelope business case, which he will share at our conference.

Eric concluded our conversation with this call to action for the APQC community.

“The carrot at the end of this stick is tangible, is concrete, and just needs to be conveyed and communicated in a clear way so that you can work past inertia and all of those kinds of [organizational] hurdles. It takes some patience, it takes some meetings, and it takes some doubling back.

“You’re riding a wave that’s inevitable. Learning from data to improve large-scale operations is one of the last remaining points of differentiation from one large corporation to another as far as improving effectiveness and streamlining operations.”

Want to know more? Join us at APQC’s Process and Performance Management Conference October 4-5, 2018. Eric will give us an approach for getting started: developing a back-of-the-envelope business case; identifying the data you need and what it should look like; and only then worrying about the core analytics technology itself.

