Thursday 19 April 2012

Unused Information Holds Many Answers


Twitter can be a wonderful source of inspiration for a blog entry, especially for an old pro like me who has encountered so many "coachable moments" that sometimes I forget what I want to share. So, thanks to the Standish Group for inspiring this entry with the following tweet:
"44% of CIOs say it takes on average a day or less for their organization to reach a standard IT project decision"

This caught my eye and I responded by rhetorically asking how they measured that - probably a finger in the air. A few days later Standish came back to me with the response that "it was asked in our monthly DARTS survey of over 300 CIOs [which] had many questions on decision latency, complexity, & costs". It wasn't my intention to question how the Standish Group arrived at their data - rather, I wondered how CIOs could actually provide a measured response in the first place.

In 28 years of working in the IT industry I have never seen a project, programme or business area maintain quantitative, time-related data on its decision-making processes (except in my own projects!). If that sort of data is not available at the lowest levels of the organisation, I'm struggling to understand how a CIO can honestly answer the question on behalf of the whole business.

Standish have since tweeted lots more amazing stats based on their survey, such as "39% of CIOs say it cost on average $500 or less for their organization to reach a standard IT project decision".

But how CIOs or anyone else responds to these questions isn't really the main point of this post. I'm interested in why teams (read departments/groups/functional areas as well as projects/programmes) don't record such data, and if they do, why they don't use it to better understand the way they operate.

Most projects maintain some kind of RAID log - probably using a standard template which came from the CMMI programme or PMO - and go through the regular motions of entering data and reviewing the outstanding items so they can close them. They probably prioritise each item and assign a degree of severity. They may even record open and close dates, but they rarely, if ever, do any analysis on the data, other than monitoring the number of open and closed actions over time (which generally tells you very little at all).
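To make this concrete, here is a minimal sketch of the sort of analysis those open and close dates already support. It assumes a hypothetical CSV export of the log (raid_log.csv) with id, type, priority, opened and closed columns holding ISO dates - the file name and columns are illustrative, not from any particular PMO tool:

    import csv
    from datetime import date
    from statistics import mean, median

    # Minimal sketch: read a hypothetical RAID log export where each row has
    # id, type, priority, opened, closed (ISO dates; closed is blank if open)
    def days_to_close(path="raid_log.csv"):
        ages = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if not row["closed"]:      # still open - no latency to measure yet
                    continue
                opened = date.fromisoformat(row["opened"])
                closed = date.fromisoformat(row["closed"])
                ages.append((closed - opened).days)
        return ages

    # Assumes the log contains at least one closed item
    ages = days_to_close()
    print(f"{len(ages)} closed items: mean {mean(ages):.1f} days, "
          f"median {median(ages)} days, longest {max(ages)} days")

Even that one number - how long items actually take to close - tells you more about how a team operates than a chart of open versus closed counts.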

As a process management person I view these logs as a valuable source of insight, if you're prepared to put in some effort and ask some awkward questions. Why do some issues take weeks or even months to close? Why should it take 10 days to reach a decision? Why don't open issues get reprioritised after a certain amount of time? Are there connections between the types of issues or decisions that cause the most problems?
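Each of those questions falls out of the same data. Continuing the sketch above (same hypothetical raid_log.csv), grouping time-to-close by item type shows whether, say, decisions consistently take longer to reach than issues take to resolve:

    import csv
    from collections import defaultdict
    from datetime import date
    from statistics import median

    # Sketch: group time-to-close by item type (risk, issue, decision, ...)
    # to see which kinds of item stall longest - columns are illustrative
    def latency_by_type(path="raid_log.csv"):
        buckets = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                if row["closed"]:
                    age = (date.fromisoformat(row["closed"])
                           - date.fromisoformat(row["opened"])).days
                    buckets[row["type"]].append(age)
        return buckets

    for item_type, ages in sorted(latency_by_type().items()):
        print(f"{item_type:<12} n={len(ages):>3}  median {median(ages)} days")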

These are the kinds of questions that should be asked at departmental reviews, stakeholder reviews, and quality reviews, but they generally get ignored in favour of the familiar questions about timescales and budgets. If you ask different questions at these reviews, establish root causes and fix the problems, then issues around budgets and timescales will probably start to fade into the background.

I find it bizarre that organisations spend so much time tracking code defects (rather than getting on with the business of fixing them as they arise) but seem to ignore management defects until they have actually caused operational failures.

While management continues to highlight time and money as the only critical yardsticks by which performance is measured, quality will always be an afterthought and the entire organisation will suffer as a result.