What Not To Do When Developing Performance Measures

We can measure how prolific a baseball player is, how profitable a company may be and how effective a salesperson is at his craft. But when it comes to measuring a company’s progress in achieving organizational targets, we often make glaring errors.

Rarely do we measure the right things for the right reasons. Rather, we choose the wrong metrics to track the wrong trends, thus measuring progress with flawed data. Here are 10 common mistakes to avoid in developing performance measures.

  1. Placing credibility in customer satisfaction surveys. Surveys seldom reflect a representative sample of customers, and they are easy to game: it’s simple to send surveys only to the customers you know will give good marks, or to word the questions so that only favorable answers come back.
  2. Reliance on superstitious measures. These are “feel good” measures that give a false sense of security, while the underlying issue remains unresolved. An example is the “Say NO to Drugs” campaign, which included billboards, TV ads, sessions in schools, etc. However, the data showed these activities resulted in little decrease in drug use among youth.
  3. Measurement for measurement’s sake. Like George Mallory climbing Mount Everest, we count things because they are there, because we can and because it’s easy.
  4. Mixed metrics. This occurs when we measure something that can be either good or bad for the organization. For example, turnover. We count the departure of an outstanding performer the same as the departure of a poor performer.
  5. Relying exclusively on lagging indicators. These measures look back at what has already happened, and those results cannot be changed. Here’s an example: counting the number of heart attacks in 2002 cannot influence or predict the number of heart attacks in 2012. Cholesterol levels and dietary habits, by contrast, can predict future heart attacks, and if we improve those leading measures, we can change the number of future heart attacks.
  6. Using measures that are not tied to targets, or that are tied to arbitrary targets rather than to targets based on some kind of capability study.
  7. Annual metrics. Often seen in government, data is gathered at one point in time (usually the same point in time) every year. What we get is a snapshot of that particular point in time, but no clue about the remaining 364 days.
  8. Using measures that are unrelated to the organization’s vision. The vision answers the question, “What do we want to achieve during the current leader’s tour?” After the leader articulates the vision, a small number of strategic measures should be developed. These should then be reflected in the measures tied to lower-level plans, budgets, goals, objectives and action items.
  9. Using measures you cannot control. The test of a good measure is whether measuring it lets you develop action items that change the results. One of the Federal Highway Administration’s key measures is highway fatalities, with the goal of reducing them. But if most highway fatalities are caused by speeding and drinking, over which the Federal Highway Administration has no control, how can you hold the agency accountable for the fatality rate?
  10. Implementing measures without first predicting what behavior would occur in response. A fast-food company measured “chicken efficiency,” the percentage of cooked chicken that was sold rather than thrown out at the end of the day. Chicken efficiency was so important that it was the major determinant of franchise ratings, promotions and bonuses. The “top-performing” restaurant managers stopped cooking chicken at 6:30 p.m., so that when they closed at 11 p.m. they had no chicken to throw out (100 percent chicken efficiency). What happens if customers come in and there’s no chicken? “We cook it,” one manager replied. Don’t the customers get frustrated by the wait? “Sure, but there are plenty of other chicken eaters. And most people never come back anyway.”

Mark Graham Brown is a veteran consultant and is regarded as one of the leading experts on performance measurement, the balanced scorecard and the Baldrige model.