No Bullshit Metrics

Over the past years I’ve been asked, again and again, to build fancy metrics-gathering tools and even fancier reports. While metrics can be a great help, there are numerous things to take into consideration to minimize side effects! These considerations are even more important when you’re dealing with qualitative dimensions, especially in creative processes such as engineering.

In enterprise realms, pointy-haired bosses often try to “solve” problems by measuring stuff. Got too many support tickets? Then make a great-looking trend chart about them! Once you have the numbers, apply some arbitrary improvement goal (“cut those by 20% next month”) and there you go! Somebody just created an easy-to-cheat game: instead of analyzing the real issue (shipping an untested product, an incomplete user manual…), support engineers will probably just reject a few more tickets or use untracked channels for resolution. And there’s nobody to blame for it. It’s pretty much human nature that we like to be good at our work. Thus, many become very clever at “looking great at whatever game corporate just invented”.

The whole thing gets worse when money is involved. I’ve seen unbelievable things happen in an engineering company simply because the purchasing manager had to meet a “get an x% average discount” policy to earn his yearly bonus. The choice was: take the cheap, fitting, but non-discountable µC (microcontroller) that engineering chose, or the old, crusty, more expensive and ill-fitting chip with that whopping 20% discount. The whole project turned into a medium-sized disaster.

The good news is that you can avoid the worst side effects. It’s just a matter of:

  • what you think of your colleagues/employees (theory X & theory Y)
  • being good at games (anticipate cheating)
  • thinking a LOT about what (and how) you’re going to track!

Here’s my list of advice:

1. Don’t measure people

As soon as a metric is about competitiveness, it has a high potential to discourage all but the top-performing tier of those being monitored. Even if 5% of the not-so-good performers feel spurred on, that won’t compensate for the loss of motivation of the others. If you have individual performance issues, it’s probably a better idea to fix them with teaching, mentorship-based models, or general improvements in productivity (which will benefit the top performers too).

2. Don’t put money at play

Because even the best tend to forget what’s necessary for the success of the organization when they’re focusing on money-backed individual goals. And that can’t be good for a company in the long term.

3. Don’t track symptoms

This is the support-ticket example from the intro. The number of tickets is not an interesting dimension on its own. What you really want to figure out is why the heck your customers struggle so much with the new release! Roll up those sleeves and resolve issues at their origin!

4. Kill it once your problem’s gone

I believe the best metric is the one you don’t need. First of all, you’ll get rid of a whole lot of data-collection workload that way, because we all know metrics and reports breed like rabbits. So, if your software developers are busier with those shiny reports than with their actual work on the project, something is probably going very wrong.

Second, once your problem is solved, you should probably pay more attention to consolidating the new practice / tool / whatever you used to solve it in your organization than to monitoring a resolved issue. Permanent baby-sitting won’t help you any further! You may still want to run the metric every now and then, just to be sure you didn’t fall back into old habits.

Finally, by doing this sort of spring-cleaning you’ll be able to steer what games are being played on the shop floor. There’s nothing worse for employees than to be clueless about management priorities.

4b. Monitor the metric’s lifetime

Metrics are about finding solutions to business problems. Ideally, a metric enters a steady state once you’ve finally found the right thing to do. The time it takes to reach that steady state is a good indicator of the effectiveness of your solution.

Metrics that phase out rapidly are a good indication of how flexibly your organisation adapts to new issues.
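To make “time to steady state” a bit more concrete, here’s a minimal sketch. Everything in it is an assumption for illustration: the function name, the window size, and the tolerance band are made up, and “steady” is just a crude heuristic (the last few samples staying within a small band around their mean), not a standard statistical test.

```python
# Illustrative sketch: estimate how long a metric took to settle into a
# steady state. The heuristic and thresholds are assumptions, not a
# standard formula.

def weeks_to_steady_state(samples, window=4, tolerance=0.05):
    """Return the index (e.g. week number) at which the metric first
    settled, or None if it never did within the data."""
    for end in range(window, len(samples) + 1):
        recent = samples[end - window:end]
        mean = sum(recent) / window
        # Steady if every sample in the window is within `tolerance`
        # (relative) of the window mean.
        if all(abs(x - mean) <= tolerance * max(abs(mean), 1e-9) for x in recent):
            return end - window  # first week of the stable stretch
    return None

# Example: weekly open-ticket counts after a process change.
tickets = [120, 95, 70, 52, 41, 40, 41, 40, 39, 40]
print(weeks_to_steady_state(tickets))  # → 4
```

If the metric settles within a couple of weeks, your fix probably works and the metric is ripe for retirement; if it never settles, you’re likely still tracking a symptom.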

5. Monitor correlated dimensions

You can’t really change complex systems (like a work environment) without side effects. As a simple example, take the “fast, good, cheap” trilemma of production: you can have two of the three, but never all of them. So, if you’re going to improve one aspect, always watch out for related pitfalls.

6. KISS / Prefer comprehensible metrics

Tracking “the second derivative of the number of unit tests per employee per day” (that’s unit-test jolt, I think) has… a smell… Metrics are best when you don’t need a higher degree in mathematics to grasp their meaning. That said, a metric should still reflect the complexity of whatever it represents. Because…

7. …oversimplification is dangerous

When a metric is a very abstract representation of the actual situation, it will lead you to take the wrong measures. And that’s something you want to avoid at all costs!

No Bullshit Metrics™

So, as with all things in life, balance is essential with metrics. Tracking the right thing, the right way, for a well-defined time will lead to success, but it’s hard. I think many organisations have already gotten pretty good at choosing what they actually measure; we should start thinking much more about metric lifecycles. And I’m pretty confident that keeping them short-lived is the better choice!

A very smart person once told me that organizational issues are like babies; you should not leave them on their own, and if you get to understand them early then they generally turn into amazing solutions! I really like that analogy.

If you think I have missed something or would like to share your experience on the topic, please join the discussion below!