Friday 19 February 2010

Getting Value from the Quality Department (Part 4)

In the last three posts I've been looking at some of the issues facing Quality Teams in an Application Development and Maintenance environment, and some of the ways that these teams and their organisations can get more value from quality related activities.

In this final part I want to consider more problem areas, management and measurement. These two areas are crucially interwoven as we shall see later in the post.

Quality activities in any organisation are always going to be at risk if they take place but no-one takes any notice of them. Unfortunately, in many of the organisations that I've been involved with, quality is all too often considered as a necessary evil, and the exploits of the quality team are left to percolate in the background.

Executives and middle managers only get interested when something nasty hits the fan. These situations generate knee-jerk reactions: a review of quality activities (usually too localised), a commitment to prevention rather than cure, or changes to policy wording (but without enforcement). These tend to be short-lived and ineffective actions that fail to address the real problems, in the same way that a sticking plaster cannot fix a ruptured artery.

In almost every case where I have seen little real management commitment to quality, it turns out that managers have no realistic or measurable objectives set around quality. There are often collective objectives like "Maintain ISO 9000 compliance" or "Achieve level 3 of CMMI by quarter 3 next year", but these are fairly meaningless at the best of times. They are also Boolean objectives: "Achieved" or "Not Achieved".

More useful quality related objectives might be "Improve resolution time of quality issues by 20%" or "Participate in 50% of quality incident reviews". Of course, this makes the assumption that quality reviews take place and quality issues are identified, but crucially they put the onus of responsibility onto individual managers and bring them into direct contact with quality activities. Failure to participate will have an impact on their bonus or salary review.
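An objective like "improve resolution time by 20%" only works if the underlying measure is defined and actually calculated. As a minimal sketch (the resolution times and the 20% threshold here are hypothetical, purely for illustration), the check might look like this:

```python
from statistics import mean

# Hypothetical resolution times (in days) for quality issues,
# before and after an improvement initiative.
baseline = [10, 14, 8, 12, 16]  # mean = 12.0 days
current = [8, 10, 7, 9, 11]     # mean = 9.0 days

# Relative improvement in mean resolution time.
improvement = (mean(baseline) - mean(current)) / mean(baseline)

# The "improve resolution time by 20%" objective.
target_met = improvement >= 0.20

print(f"Improvement: {improvement:.0%}, target met: {target_met}")
```

The point is not the arithmetic, which is trivial, but that the objective becomes something a manager can be held to at review time rather than a vague aspiration.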

Of course, an organisation that has a quality department almost certainly collects lots of data. The trouble is that this is often all that happens. Data collection is of no value unless the business actually does something with the data, and when I talk of the business in this context, I'm referring to the decision makers, not just the data collection team.

In many cases the data collected is worthless even if anyone wanted to use it, either because it doesn't address any direct business requirement or because its quality is so poor that it has no value. Historically, data collection, analysis and data-based decision making have been seen as a good thing. Sadly, many organisations collect data that they think they need to collect, without understanding what it is to be used for, who will use it, or how it will be used. Vast amounts of time are spent providing numbers because the system says you must, and the same numbers are often demanded by different people, in different formats and at different times.

At one company I worked for we had three time recording systems, one electronic and two paper based (all of which required predicted clock-in and out times as well as actuals). To my knowledge only one of these was actually used to determine anything of any importance (namely overtime pay!), but that was the way things were done.

Whether they like it or not, executives and managers need good, diverse data to make good decisions. An extraordinary number of managers remain, consciously or subconsciously, oblivious to this fact. The real problem is that too many managers believe the only data of any importance is financial data, because they are measured on their ability to manage P&Ls or to meet their financial targets. What they fail to understand is that financial data alone is useless for getting to the root cause of problems and trying to resolve them. For that, they need other information which can then be used in the context of the financial data to better understand why there are issues and what their causes are.

So why are these two apparently unrelated issues of management and measurement related and what do they have to do with the quality department? In too many organisations I've seen lots of potentially useful quality data wasted because of a lack of imagination both on the part of the quality team and that of management.

Data is presented in drab and meaningless charts which really only try to demonstrate that the quality team is doing stuff. At the same time managers fail to ask the necessary questions to be able to understand how quality data can help them improve their business.

Quality managers need to initiate the dialogue with management and coach them into understanding how they can make data work for them. Think of different ways to present the data, and think of useful things to say about it. Quality data taken out of context is meaningless; for example, present audit or defect data alongside financial data to highlight potential correlations.

Encourage managers to ask the difficult questions about your improvement or quality programmes, and be prepared to lift yourselves out of the status quo. Only then will management begin to sit up and take notice.