Wednesday 16 May 2012

Are You a Slave To Your Quality Management System? (Part 2)


In my previous post I proffered some suggestions as to why many organisational Quality Management Systems end up as Quality Management Shambles. My hypothesis is that too many management systems (quality or otherwise!) are created for the wrong reasons, and generally get created without due care and attention to quality principles, systems principles, or architectural and design principles. Over time, without a strong foundation on which to build, the QMS grows chaotically and incongruously, and ceases to be able to serve the people it was intended for in the first place. Instead, the organisation becomes a slave to the QMS.

In this second part, I'm going to expand on the five questions I posed in part one, with specific regard to the principles mentioned above - Quality, System, and Architecture/Design.

Question 1 -  What is the purpose of your QMS?


I surmise that if I asked that question in a certified ISO 9000 organisation (or in many organisations successfully assessed at CMMI L2/3) I would get a different answer from each person I asked - and certainly different answers from different levels of the business. Typical responses:

  • Developers - it's because of this ISO/CMMI initiative
  • Middle Management - to ensure compliance
  • Senior Management - so that our staff know what they have to do
  • Executive Management - so that we can standardise operations and efficiencies across the business

The real reason for a QMS has most likely been forgotten or distorted over time. If you cannot articulate your reasons for having a QMS (think elevator speech here) then it's probably time for a major review. Adherence and compliance to standards may be valid reasons for a QMS but they really must not be the primary drivers. You also need to consider whether your QMS has become a substitute for training. If your induction speech for new starters includes "you need to read the quality manual to understand how we do things round here" you probably need to rethink both your QMS and your training strategy, not to mention your induction techniques!

Question 2 - Who is the intended audience?


Most managers will glibly answer this by saying that the QMS is mandated for all staff, but the truth is that the mandate (and usage) will almost certainly be biased towards the lower levels of the organisational hierarchy. In far too many organisations senior managers cannot even tell you where to locate the corporate policies, never mind explain them or abide by them (even though they are responsible for them!).

Ideally a QMS will be architected from multiple viewpoints; a developer's needs will differ from those of someone in HR or Finance, so it makes sense to organise a management system accordingly. That said, it doesn't follow that HR-related material should be restricted to HR staff. A good QMS must be transparent across the enterprise.


Question 3 - How do we intend our staff to use it?

This is linked to elements of the preceding questions. Ideally, the QMS will be a simple-to-use, easy-to-navigate supporting reference for staff. Inexperienced 'users' can see as much detail as necessary, whilst more experienced 'users' can filter the detail out so that the material acts as a memory jogger when they need guidance.

Having a good, well maintained QMS does not absolve the organisation of its responsibility to ensure that staff are 'trained' in company policy and procedure - and this should be on-going for all staff across the business, especially as changes and modifications are made.

Failure to design and architect a QMS from multiple perspectives will devalue it over time, and once it becomes 'shelfware' (or whatever the cyber equivalent is) it becomes a potential source of many other cultural problems.


Question 4 - Does the QMS reflect the way we actually work and our culture?

Just as failing to architect and design a QMS according to our basic principles will ultimately devalue it, so will failing to update it continuously to reflect changes to working practices, regulations, and culture. Most importantly, things that are wrong, inappropriate or outdated must be modified or removed as early as possible. Users do not want to be placed in a situation where they are required to demonstrate compliance with a process, policy or procedure which may be detrimental to their daily work. A mechanism for emergency fixes is critical, and a queuing system for change requests is not good enough. A waiver system must be in place to allow teams to bypass incongruous instructions, and this process must be quick and simple. [Note that a waiver is a temporary mechanism and requires appropriate governance to prevent abuse! - see my blog entry from July 2009 for more about waivers]

As a user of dozens of management systems over the years, the things that annoy me most are (in no particular order) over-complexity, difficult navigation, slow response times, and missing or inconsistent information. Which brings us on to the final question...


Question 5 - Is the QMS aligned to the system that is our organisation?

For me, this is the fundamental question. If we accept the basic premise that an organisation is a system, then it goes without saying that the QMS should map onto that system and should mirror the entities, interactions and flows that exist in the real world of the enterprise. It should reflect the internal corporate culture and use the organisation's language and terminology. Most of all, it should reflect what you do, not what you think the auditors or assessors are expecting you to do. When it comes to the QMS, too many organisations waste too much time and effort doing the wrong things.

So, there you have some quick answers to my five questions. A good QMS will be a valuable asset to everyone in the organisation. A well designed and architected QMS that aligns to the business systems, objectives and values, and is maintained accordingly, will be welcomed by the majority of staff and, most importantly, it will get used - for the right reasons.

Take back control of your QMS today, and stop being a slave to it!



Sunday 13 May 2012

Are You a Slave To Your Quality Management System? (Part 1)


When I first started working, IT businesses tended to fall into one of two categories - those that had a Quality Management System and those that didn't. Those that did tended to have a library of management standards (literally - there would be rooms stacked full of folders with thousands of pages of policies, processes and procedures) which employees were expected to obey without question. The whole purpose of the Quality Department was to maintain the Quality Management System and to ensure compliance to it. The Quality Management System not only represented thousands of trees, but the amassed knowledge, understanding and wisdom of the organisation over its lifetime.

Over time, these libraries have been (mostly) replaced by equally enormous libraries of on-line documents, much to the relief of trees and tree huggers all over the planet (but to the chagrin of printers). More and more companies have moved from the "have none" category to the "have" category thanks to the relentless movement towards ISO, CMMI, ITIL and whatever other set of standards, models and frameworks you wish to add.

What most of these companies share is that what they call a Quality Management System is anything but a system. In many cases it's actually a shambles, so for the rest of this piece when you see the acronym QMS it stands for Quality Management Shambles.

So how does something that starts off with a (hopefully) good intention end up in such a mess and what can you do about it?

Many organisations signed up to ISO 9000 because they were required to in order to do business with other organisations, especially government ones. Some did so because they saw that having a Quality Standard behind them helped differentiate them from their competitors. Some probably did so because a Quality Consultant told them it was a good idea. Whatever the underlying reason, one thing was certain - the first thing they did was to create a QMS in order to meet the requirements of the standard and then mandate that all employees followed the QMS to the letter so that the organisation could get (and subsequently keep) its certification. In many organisations, you can see that the QMS is actually organised according to the original headings of the standard, in the same way that many businesses now organise their QMS to align with CMMI Process Areas. Over time, new bits got added to correspond to new departments, regulations, legislation, management proclamations and other influences. In the good places, old bits got updated or archived, and in the very good places improvement programmes were initiated to replace the bits that weren't working, and so the ISO quality cycle was fulfilled. And the QMS became more and more shambolic.

By making the QMS meet a relatively arbitrary standard (albeit a globally recognised one), the most crucial questions and drivers were generally ignored, namely:

  • What is the real purpose of the Quality Management System?
  • Who is the intended audience?
  • How do we intend our staff to use it?
  • Does it reflect the way we actually work and our culture?
  • Is it aligned to the system that is our organisation?

In other words - do we control our Quality Management System or are we slaves to it?

If you don't have answers to these questions, then you should really start to reconsider whether your Quality Management System has any place in your business other than as a stick to beat your staff with, or a tool to demonstrate compliance to a standard that may or may not add genuine value to your business.

Next time, I'll look at the critical success factors in designing a [Quality] Management System that is fit for purpose and can be used to enhance the working environment rather than choke it.


Thursday 19 April 2012

Unused Information Holds Many Answers


Twitter can be a wonderful source of inspiration for a blog entry especially for an old pro like myself, who has encountered so many "coachable moments" that sometimes I forget what I want to share. So, thanks to the Standish Group for the inspiration for this entry with the following Tweet:
"44% of CIOs say it takes on average a day or less for their organization to reach a standard IT project decision"

This caught my eye and I responded by rhetorically asking how they measured that - probably a finger in the air. A few days later Standish came back to me with the response that "it was asked in our monthly DARTS survey of over 300 CIOs [which] had many questions on decision latency, complexity, & costs". It wasn't my intention to question how the Standish Group came by their data - rather to ask how CIOs could actually provide a measured response in the first place.

In 28 years of working in the IT industry I have never seen a project, programme or business area maintain quantitative time-related data on its decision-making processes (except in my own projects!). If that sort of data is not available at the lowest levels of the organisation, I'm struggling to understand how a CIO can honestly answer the question on behalf of the whole business.

Standish have since tweeted lots more amazing stats based on their survey such as "39% of CIOs say it cost on average $500 or less for their organization to reach a standard IT project decision".

But how CIOs or anyone else responds to these questions isn't really the main point of this post. I'm interested in why teams (read departments/groups/functional areas as well as projects/programmes) don't record such data, and if they do, why they don't use it to better understand the way they operate.

Most projects maintain some kind of RAID log - probably using a standard template which came from the CMMI programme or PMO - and go through the regular motions of entering data and reviewing the outstanding items so they can close them. They probably prioritise each item and assign a degree of severity. They may even put in open and close dates, but they rarely, if ever, do any analysis on the data, other than to monitor the number of open and closed actions over time (which generally tells you very little at all).

As a process management person I view these logs as a valuable source of input, if you're prepared to put in some effort and ask some awkward questions. Why do some issues take weeks or even months to close? Why should it take 10 days to reach a decision? Why don't open issues get reprioritised after a certain amount of time? Are there connections between the types of issue or decision that cause the most problems?
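
As an aside, extracting something useful from such a log takes very little tooling. Here's a minimal sketch in Python, assuming a hypothetical CSV export with columns id, type, opened and closed - the column names and file name are illustrative, not taken from any particular tool:

    # Sketch: how long do RAID items (Risks, Actions, Issues, Decisions)
    # stay open? Assumes a hypothetical export "raid_log.csv" with columns
    # id, type, opened, closed (ISO dates; closed is blank if still open).
    import csv
    from collections import defaultdict
    from datetime import date, datetime

    def days_open(row):
        """Days between an item being opened and closed (or today, if still open)."""
        opened = datetime.strptime(row["opened"], "%Y-%m-%d").date()
        closed = (datetime.strptime(row["closed"], "%Y-%m-%d").date()
                  if row["closed"] else date.today())
        return (closed - opened).days

    durations = defaultdict(list)
    with open("raid_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            durations[row["type"]].append(days_open(row))

    for item_type, days in sorted(durations.items()):
        days.sort()
        print(f"{item_type:10s} n={len(days):3d} "
              f"median={days[len(days) // 2]:4d} days  max={max(days):4d} days")

Even a crude report like this would let a team (or a CIO) answer the Standish question with a measurement rather than a finger in the air.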

These are the types of question that should be getting asked at departmental reviews, stakeholder reviews, and quality reviews but generally get ignored in favour of the familiar questions about timescales and budgets. If you ask different questions at these reviews, establish root causes and fix the problems, issues around budgets and timescales will probably start to fade into the background.

I find it bizarre that organisations spend so much time tracking code defects (rather than getting on with the business of fixing them as they arise) but seem to ignore management defects until they have actually caused operational failures.

While management continues to highlight time and money as the only critical yardsticks by which performance is measured, quality will always be an afterthought and the entire organisation will suffer as a result.

Friday 6 April 2012

Good Customer Service Does Not Include Obfuscation


This post is a slight departure from my normal entries and I wasn't even sure whether to put it in this blog. However, on reflection, it is quality related - or should I say, about the lack of quality, in both product and service.

I've recently had a couple of issues with my ISP (BT Total Broadband), one relating to the TV service (BT Vision), the second relating to their e-mail service (BT Yahoo Mail). In both cases I have posted comments on Twitter which have been picked up by the @BTCare customer service Tweeter, and have resulted in a series of farcical interactions and no resolution of the issues. But first let's quickly look at the problems.


The BT Vision Problem

BT Vision is BT's service offering set up to 'compete' in the Sky/Virgin TV space. The reality is that it only really offers Freeview TV channels, a limited On Demand service and a PVR box. You pays your money and you makes your choice. It's adequate for my needs but that isn't the issue here. The issue is with the design of the software which runs on the BT Vision PVR box. As you would expect, it is possible to record TV shows and series, and it's this latter task which drives me to despair. You cannot simply record a series on a specific day/time/channel - the default setting is to record the "first run and any repeats". And given the number of repeats on UK TV this becomes a real issue. Simple example - I set BT Vision to record "Hairy Bikers' Bakeation" on BBC 2 on Tuesday at 20:00. By default this will also record the same show on BBC 2 on Thursday at 19:00. Why anyone would want this is quite beyond me. To prevent the duplicate recording I have to edit the series recording settings and change the setting to "First Run Only". The bizarre UI makes this quite time consuming, but quite simply it shouldn't be necessary!

@BTCare picked up on my whinge on Twitter and I was asked to submit a problem form via the BT website which I duly did. I received no less than four telephone messages on my answer machine explaining how to change the settings (but not the defaults). This was after I had explained in no uncertain terms that the only solution to my particular problem was to redesign and rewrite the software!

The BT Yahoo Mail Problem

I use Apple Mail on my Mac to access my email from the BT Yahoo Mail service. Most of the time this works perfectly, but on occasion the mail servers throw a wobbly and reject the password. This situation can last from minutes to hours and is well documented on the BT Forums (one query has generated 56 pages of related comments). Usually the problem goes away by itself, but it is clearly a bug, and it appears to happen on other third-party mail clients (on Windows too) so it is not an Apple Mail specific problem.

The latest "fix" that @BTCare suggested is that I shouldn't have more than one mail client trying to access mail at any one time. In other words, when I'm at home I have to turn off 'push' mail on my iPhone/iPad, and when I go out I have to close Mail on my Mac. Apparently this is also a requirement of Yahoo Mail policy (although no-one seems to be able to find the policy written anywhere).

Once again, I have been invited to submit my issue to BT via their website. On this occasion I've declined as it just leads to a series of useless telephone messages.

The Morals of the Story

There are two conclusions I have come to from these (so far unresolved) issues. The first is that Twitter is far from an ideal medium to manage customer service and customer relations. It may provide a "front" to demonstrate that the business cares about its customers and can be seen to be actively managing issues, but it is nothing more than that.

The second is that deliberate obfuscation of issues can hardly be considered as good practice for dealing with real problems. With both my issues, I have been palmed off with useless responses which show no understanding of the real problem, and exhibit little willingness to even try to really deal with the problems to any degree of customer satisfaction.

Even a basic acknowledgement of my issues and a genuine response (such as "we understand your frustration and have raised a change request for consideration") would help.

BT Customer relations managers and help desk managers could do with reading John Seddon's book "Freedom From Command and Control" and learn the difference between value and failure demand!

Of course, I could show my dissatisfaction by changing my ISP. All I would achieve by doing this would be to swap one set of issues for a different set, and cost myself a huge amount of inconvenience, time and money - not least by losing my primary email address, which I have had for the past ten or more years!

In other words - cutting off my nose to spite my face! No thanks, but at the same time - BT: thanks for nothing!


Thursday 29 March 2012

A Tough Nut to Crack


A week or so ago I found a link to an article entitled "7 Reasons you shouldn't touch systems thinking" on the thinkpurpose blog. The link was posted by Bob Marshall (@flowchainsensei) on Twitter, and I tweeted back to him that I found the item "profoundly disturbing". His response was "Excellent! Care to elaborate?", to which I answered that I couldn't put my reasons into a 140 character tweet and might end up writing my own blog post about it.

The 140 character constraint of Twitter was only a partial reason for not responding to Bob's question. The main problem was that I couldn't actually articulate my reasons properly - I just knew that I was troubled by the post. I've had a bit of time to think about things now so here's my response...

Whilst the article was published under a Systems Thinking moniker I think the first line of the text really sets the context:

 "Here's seven things you'll have to put up with if you start getting curious and learning."

It then goes on to list the seven reasons - all of which pretty much lead to the same conclusion. If you do start getting curious and learning there's a very good chance that in many organisations you'll just end up being frustrated, impotent and generally unhappy. But that's the price you'll have to pay for your efforts.

So. Is it worth it? Having spent most of the last twenty years getting curious and learning, I have been thwarted by many managers and colleagues (regardless of what position I've held in the organisation). I've dared to say and try things that aren't part of received wisdom and, unsurprisingly, have usually met with brick walls and head-on collisions. Even when I've had a sympathetic ear, the discussion has often ended with something like "Of course, you're probably right, but that's not how we do things around here". And there's the rub! The system isn't generally geared up to cater for different ways of thinking, and most people don't want to listen to stuff they don't want to understand.

It's not just about Systems Thinking, it's about any paradigm that isn't already compromised by misinterpretation (wilful or otherwise) or distorted for 'political' purposes. This includes the current buzz regarding Lean and Agile in particular.

We (myself and others like me) end up playing games - chipping away at the surface of resistance in the forlorn hope that one day we might make a breakthrough. Make a difference.

And this is really why I found the article so disturbing - because it struck a chord inside me that suggested that all attempts to change the status quo will ultimately fail, which made me question my whole 'raison d'être' for a few moments.

But what the heck - at the end of the day, human nature is a tough nut to crack. But that doesn't mean you should give up trying.





Friday 23 March 2012

Lessons Learned about Lessons Learned (Part 2)

In my last post I discussed a few of the problems that I've encountered with Lessons Learned reporting (a.k.a. Project Post Mortem reviews). In this post I'm going to propose some of the things that could be done to add value to a process that is often regarded with contempt by project managers and team members, misused or misinterpreted by process improvement teams and managers, and generally misunderstood by the people who could most benefit from it. These ideas are intended to be method independent and can be adopted whether you're using CMMI, Agile, or any other model or lifecycle.

Before I go any further however, I want to pick up on some of the comments made on my previous post, namely the issues of Timeliness (@flowchainsensei) and Trust (@bruhland2000), because these have a significant bearing on any proposal for improvement of this process.

In many organisations the Lessons Learned process only kicks in at the end of the project. There are several issues with this:
  • the project cannot benefit because it is already finished
  • project team members cannot properly recall details of things that could be of value in the future, and anyway they are already busy thinking about their next project
  • funding is no longer available to do the job properly
The other problem with timeliness is that project lessons learned reviews rarely coincide with process update cycles. Valid improvement opportunities (or fixes to incorrect processes) may not be made available to projects for months. Projects that need the fixes and therefore use them before they are "official" face the prospect of being "non-compliant" unless process waivers can be used (another process that is often so bureaucratic and time-consuming that projects would prefer to risk a non-compliance than jump through hoops to get the appropriate waiver!).

The second issue regarding trust (or lack of it) is common in organisations where there is still a dominant blame culture. Whilst good facilitation of reviews can minimise some of the issues around trust, a more stable and long term way to deal with the issue is to be more explicit about what constitutes a valuable lessons learned review in the first place and focus on the processes, system interactions and organisational quagmires that need to be changed to give projects a better chance of succeeding.

So what are we really trying to achieve in a lessons learned review? Who are the beneficiaries? How can we make the process more robust and of value to the participants? And most importantly, how can we effectively use the information gathered in these (and other) reviews to ensure that the process returns more benefit than it costs, and therefore adds value to the organisation? I'm not going to try and define a one-size-fits-all process here. What I'd prefer to do is offer up some ideas for consideration that you may be able to build into an existing process set, or to discuss with senior management to bring about a mindset change if that is what is required.

Some of the issues that an organisation needs to address (and you need to be honest when you answer these questions) are:
  • What is the real purpose of the process? Is it intended to generate genuine improvement or is it in place to comply with some abstract model requirement? If you are not getting any tangible benefit from your lessons learned then you are doing this for compliance purposes and the process needs a major overhaul
  • Does the lessons learned review do anything to address systemic process or organisational issues in a timely fashion, or does it simply generate outputs which are subsequently ignored because there is no real substance to them?
  • Who is empowered to make process changes based on the lessons learned review? Can project teams make local changes to meet their specific issues? If they must wait for official changes to be released there is little incentive for them to take the time to participate in any such review - and the organisation is building a failure mechanism into its operations
  • Are successes identified by projects deemed to be "best practice" and built into the process set regardless of the specific set of circumstances which may have led to a specific success?
  • Do you have a repository of lessons learned reports and assets which consists primarily of similar findings and alternative templates and is rarely accessed by any project teams? Any genuine lesson learned should take the form of an actionable item leading to a direct improvement in the process asset library or change to company policy
  • Does your lessons learned report template contain sections like "what worked well", "what went wrong" and "what could be improved"? Focus the review away from these subjective descriptions and follow through on specifics like "which processes / interactions are causing us trouble and what can we do to resolve the problem?"
  • Have you considered adding some quantitative elements into the report? This can be difficult as scoring issues is still very subjective, but if the same people contribute by scoring the same questions at regular intervals in a project you can at least begin to visualise how your improvement efforts are working - see the sketch after this list. However, this shouldn't be used for cross-project comparisons. (I'll post an example of this type of template on my website in the downloads section in the next few days. This template has proven useful in the past and is still being used today)
  • Do you have any measures in place to assess the value of the process? If you assess the value using compliance audits or counts of reports submitted then you haven't. If you have a count of actionable items generated from lessons learned reviews you are at least on the way
  • Do process owners (or their equivalent) ever meet with project teams or do they rely on the SEPG representatives? Project managers with process issues should invite process owners to their lessons learned reviews (as non-project stakeholders) to hear the issues first hand (and not to defend their process)
  • Do you make a distinction between internal project reviews (status meetings) and lessons learned reviews? If projects have regular daily, weekly or monthly review meetings which address the purpose of the lessons learned process, why should they be compelled to perform another process review just to tick the box? Don't penalise good practice for the sake of compliance or terminology!
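
On the quantitative point above, here's a minimal sketch of the mechanism, using entirely hypothetical question names and scores: the same reviewers score the same questions (1-5) at successive checkpoints within a single project, and you watch the direction of travel rather than the absolute numbers:

    # Sketch: per-question trend across review checkpoints in one project.
    # All names and scores are illustrative. Because the same people score
    # the same questions each time, only the trend is meaningful.
    from statistics import mean

    scores = {  # checkpoint -> question -> individual reviewer scores (1-5)
        "month 1": {"requirements clarity": [2, 3, 2], "process fit": [3, 3, 4]},
        "month 3": {"requirements clarity": [3, 3, 4], "process fit": [3, 2, 3]},
        "month 6": {"requirements clarity": [4, 4, 4], "process fit": [2, 2, 3]},
    }

    questions = sorted({q for checkpoint in scores.values() for q in checkpoint})
    for question in questions:
        trend = [round(mean(scores[cp][question]), 1) for cp in scores]
        verdict = "improving" if trend[-1] > trend[0] else "worth a closer look"
        print(f"{question:22s} {trend} -> {verdict}")

Crude, certainly, but it turns "what could be improved" hand-waving into something a team can actually track between reviews - within one project only, never across projects.
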
As I wrote in the previous post, many lessons learned processes are in place to fulfil a misinterpretation of a requirement, whether it be CMMI or ISO or whatever. The process is often designed to create evidence to demonstrate that the process is being executed, but it fails to address the real objective, which is to provide feedback into the management system to try to stop bad history repeating itself. Far more effort is put into creating the outputs and monitoring compliance to the process than into addressing the real need of the organisation - which is to improve. If the effort currently wasted on Lessons Learned processes can be re-channelled into addressing the issues uncovered by the process, then the people who are required to use it may start to respect it more, because they will actually begin to reap some benefit from it.

As will the whole organisation.

Thursday 15 March 2012

Lessons Learned about Lessons Learned (Part 1)

My last post about "Best Practice" was something of a personal peeve rather than a major hurdle to successful business improvement, although lessons should be learnt from the story I told, namely:
  • your improvement focus shouldn't be about creating templates and completing documents to provide evidence for appraisals and audit
  • measures that you put in place need to be thought through from the perspective of those being measured if you want to avoid costly and wasteful dysfunctional behaviour
  • best implies that there is no room for improvement
Ultimately, I also suggested a relatively quick fix; simply stop using the term "best practice" and come up with something more appropriate.

"Lessons Learned" (learnt?) is a different kettle of fish however. Most organisations have some kind of lessons learned mechanism in place (very often associated with their "Best Practice Repository"!) and in my experience very few of these mechanisms add any real value to the organisation. They are put  in place to meet the "requirements" of CMMI, etc. and as is so often the case, fail to even to do this adequately because people fail to understand what the model is really trying to help the organisation achieve.



There are a whole bunch of issues associated with the Lessons Learned process, many of which would appear to be associated with the origins of the idea. When I first started working in Software Development over 25 years ago we never did Lessons Learned - but then again, we didn't do much of the stuff commonly considered integral to the development process today. My first memory of anything resembling the concept of a lessons learned review was when I read an article by Tom DeMarco on Project Postmortem Reviews* some time towards the end of the 1990s. This made a lot of sense back in those days - at least in the organisations I worked in, where we had small teams of developers who stayed together and worked on project after project. It made sense in projects that lasted for years, where phase postmortems could be used to identify and correct mistakes prior to starting the next phase. And in the days before I started getting involved in SW-CMM initiatives I introduced the concept and practice into several groups with some small success.

On a small scale the Project Postmortem Process defined in that paper can be very useful as long as two conditions remain in place:
  • The same teams of people are used in a "product development environment" so that they have a shared understanding of the issues which cause problems
  • There is a management environment that allows the problems to be resolved internally within the teams rather than imposing inappropriate solutions on the team without really understanding the underlying issues.
The Project Postmortem Process fails as soon as:
  • It is scaled up to become an organisational process - i.e. the focus changes from an internal review and change exercise to a management mechanism for fixing organisational issues
  • Project teams are constantly rearranged and staff treated as interchangeable resources
  • Emphasis changes from internal and potentially informal communication within a team, to a demand to codify the findings for wider use
  • Managers and process experts attempt to implement CMMI without understanding what the model really implies - i.e. that the process is part of the feedback mechanism to improve the business, not an exercise to generate paperwork to fill up a repository
  • The process is used to apportion blame
Over the years since first reading about the Project Postmortem Process I have seen numerous lessons learned systems in numerous organisations. Most of them suffer from exactly the same problems:
  • Lessons Learned reports are documented and filed away and the feedback element to improve process rarely occurs
  • Lessons Learned are purely qualitative and can easily be distorted by the strongest or most vocal members of a team
  • Not all project team members or stakeholders are invited to participate in the process so not all perspectives are represented
  • Most staff perceive the process as a waste of time (and in most cases they are right)
  • Reviews are usually focused on project failures and successes and the process improvement opportunities are not realised
  • Managers or SEPGs that do any analysis on the results attempt to implement heavy handed changes based on their interpretation of the results without understanding the underlying circumstances or, most significantly, the differences between the environmental characteristics of the originating team (project or programme, agile or waterfall, etc.)
In fact many Lessons Learned reviews and reports have become so worthless that I can often predict the findings without knowing anything about the project or people. Typically, the section on Things That Worked Well includes:
  • Excellent teamwork and communication between project team members
  • Everyone went the extra mile and worked hard to complete the project under extremely difficult circumstances
  • The pizza brought in by senior management on the nights we had to work was really tasty and much appreciated
And in the Things That Could Have Been Better you'll see:
  • Really difficult relationship with the key stakeholder made this project a major challenge
  • Customer was never available
  • Requirements were so poorly specified we had to make them up as we went along
  • Technical procurement issues led to long delays
  • The CMMI initiative caused us a huge amount of extra work which didn't add value to the project
So very often, project lessons learned reviews tell us what we already knew, and probably have known about for some time. And the only thing that future projects can learn is that they are doomed to follow a similar pattern because nothing is being done to correct the organisational issues. The most likely outcome is that new status report and requirements specification templates will be imposed on the projects.

In part 2 I'll have a go at looking at some of the things we might be able to do to make Lessons Learned a more valuable tool for the organisation, by extracting value for both projects and processes.

* Collier, DeMarco and Fearey, "A Defined Process for Project Postmortem Review", IEEE Software, Vol. 13, No. 4, July 1996, pp. 65-72.

Monday 12 March 2012

Good Practices, Recommended Practices but never Best Practice


If you've read previous posts on this blog, or if you follow me on Twitter, you may be aware that one of my current bugbears is the wilful misuse of the term "Best Practice". I have no idea where this concept originated, but it can't have been anywhere that believed in Continuous Improvement (or in any improvement, for that matter!).



My first hands-on experience of "Best Practice" was with the introduction of a new global Quality Management System being adopted by a multi-national IT Services company across its Applications Delivery group. All the corporate assets such as templates and guidelines were stored in a "Best Practice Repository" (BPR), and tailored versions of these could be uploaded by local delivery groups. As a relatively naïve practitioner I welcomed this approach initially. All staff had access to the local assets by default, and if no asset was locally available they were shown the corporate asset. Part of my job was to manage our local assets and their incorporation into the BPR. Various metrics were provided to show how assets were used, which I monitored and reported on to my local senior management.

As a relatively mature organisation (compared to other delivery centres around the world) with a large number of tried and tested assets already in use in hundreds of projects, my team uploaded most of our templates into the BPR unless we felt that the corporate standard was an improvement on our own.

This was when the first warning bells started ringing as we became plagued with questions like "should we be using the corporate or local standard?". Notices of Instruction were duly posted with appropriate advice but the real question remained unanswered by the QMS owners - the question being "how can you have multiple Best Practices?". Either it's Best Practice or it isn't.

The second warning bell came when the European management team decided to compete with the rest of the world as to who could provide the most best practices, and all European organisations found that the number of assets uploaded and downloaded in the BPR became part of the balanced scorecard for process improvement teams. A frenzy of activity saw dozens and dozens of best practices appearing over the next few years, and PMs and developers had free rein to download whatever took their fancy regardless of the quality of the materials. Many organisations simply edited the corporate standard, added their delivery centre name and logo, and renamed it to reflect their own identity. As is so often the case, the focus of improvement degenerated into wasteful template production rather than adoption of process into the mindset. What started in Europe soon filtered out across the world and the BPR became an unmoderated morass of templates and guidelines.

After several global reorganisations a new version of the global applications QMS was developed and I was a member of the design team. Some lessons from the past were learnt, and although the BPR still existed we cleaned out 90% of the contents and put in strict controls on how assets could be added which involved several levels of review (local, regional and corporate). The B still stood for Best however.

This story isn't unique, sadly. I've seen it repeated in organisations all over the world, in big companies and in small ones. As long as CMMI is interpreted as being about the production of templates and documentation the nonsense will continue. Until that changes, at least refer to your Process Asset Library as just that or, if that's a bit too difficult to understand, call it a Good Practice Repository.

Best implies that it can't be improved, and that there can only be one. Neither of these implications has a place in the world of continuous improvement.

Postscript - there are no prizes for spotting the other moral of this tale; that's right…the one about how a carelessly considered measurement can cause totally dysfunctional behaviour which could cost you dearly.


Tuesday 14 February 2012

A Dilemma Regarding Defect Tracking

In my previous post I harked back to the good old days when I wrote code for a living and used to pride myself that I could pretty much guarantee that there would be very few bugs in my production code. This was a good thing for the organisations I worked in because we really didn't have a process model for software development. Most people in those organisations had never heard of development models and the software groups tended to be structured around  functional roles - analysts and designers, who handed huge specs to programmers, who delivered completed systems to testers. There weren't even any real project managers - the role was shared between the lead analyst and a business/product manager.

So producing relatively defect free code was good because we didn't have defect tracking systems. We had lists of defects that came from the testing departments which programmers then had to track back to their own work and fix the issue long after it was initially created.

Since then, defect tracking and management systems have proliferated, and maintenance crews mop up bugs long after someone else created them. Defect prioritisation meetings soak up stakeholders' time, and product release cycles stay the same regardless of the advances in technology and the power of new hardware and software tools.

No wonder that defect management is regarded by the lean community as waste, and why new techniques have been developed to better integrate the development functions and activities to find and fix defects as they are created rather than at the end of the development cycle.

As a software engineer, process person, and business improvement guy, I heartily approve of this and wish that more organisations could understand what a difference it could make to their release cycles and the overall quality of their products.

But as a user I find myself with a dilemma, primarily with big organisations who are responsible for product portfolios of millions of lines of code. I'm thinking in particular of the operating systems and hardware providers like Apple and Microsoft, where it's clearly impossible to test for every eventuality.

I'm reminded of the legend about Rolls-Royce from the days when they built motor cars in the UK and owned the brand name for their cars. The story was that if your Rolls broke down you would have to ring a secret number and a service engineer would arrive on the scene with a spare car and take yours away, in a covered truck, to be repaired before returning it to you. Rolls-Royces simply didn't break down and, to prove it, you never saw one by the side of the road or being towed away.

Companies like Apple don't acknowledge bugs in their systems very often. If they do it's usually buried in the release of an update, and stated in very high level terms, like "fixes an issue with wireless networks and wake-up". I'm certain that Apple employs sophisticated defect tracking and management systems but as a user I don't know whether my specific problem is a known bug, a personal issue because of my system configuration, or a previously undiscovered problem. I can search the support forums and I can send feedback to Apple but these never get officially acknowledged, and many of the forums are populated by the blind leading the blind.

It would be so useful if Apple, as many smaller companies already do, could publicly provide a list of known issues, along with official workarounds where they exist. It would save us consumers hours of time trying to work out whether we can fix something or not. It would save hours of Genius Bar and Apple Support time and costs. And it would make Apple look better, as their current system makes them look like they don't care.

As a developer I don't want defect lists. As a consumer I'm desperate to see them!

 

Monday 13 February 2012

A Message to the New Breed of Software Developers

With the advent of the "App" and its associated distribution stores (Mac, iPhone, Android etc.) the act of developing software has never been more popular or more accessible. The opportunity to create the next "Angry Birds" in your bedroom or living room and become an overnight millionaire is clearly very enticing to many individuals.

Before I became involved in Quality and Process Management, and long before I became a consultant, I earned my living as a software engineer. I use the term advisedly - I wasn't just a programmer; I was a systems designer, architect, tester, requirements engineer and configuration manager, and I learnt my trade over the course of many years from some great and passionate people.

I was driven by simplicity, elegance and efficiency in both design and code. But mostly I was driven by a desire to be a brilliant engineer with acute attention to detail, and a self-motivated need to write as near perfect code as possible. It helped that in those days we were constrained by both hardware and software limitations. Compilers were command-line driven, terse and unforgiving, and IDEs were thin on the ground. Memory and disk space were grossly expensive and processors were slow. A build that today might take a minute or two would take half a day, so silly mistakes were costly and time consuming. Most people, including myself, coded away from the machine, only committing when we had desk checked everything to iron out as many problems as possible. All these things led to the disciplines of efficiency and care that we learnt back then.

Today, computer resources are plentiful and cheap. Software development tools are powerful and much easier to use. The constraints we suffered are consigned to our memories and computer museums. The only constraints that haven't changed are time and money, which are clearly still in short supply in all commercial development shops.

But despite these advances in technology those quaint old values that I shared with my colleagues, my mentors and mentees, should still be forefront in every developer's mind. Cutting corners is not acceptable. Shipping products that fail is not acceptable. Thinking of your customer simply as a cash cow is not acceptable.

I use mobile devices for many activities during the course of the day - some critical for business and some which are critical for my relaxation. I expect these devices to work without having to reboot them during the course of the day (as I do with my laptop and desktop machines). Far too many apps crash my devices and memory management is often diabolical. New versions of software reintroduce old bugs suggesting that no proper version control is in place.

I think it's great that people have the ability, imagination, enthusiasm, capacity and desire to create software, and there are a number of apps I use regularly on both my iOS devices and Macs which are a pleasure and delight to use. These are often sourced from those one-person outfits whose desire to create great software outweighs the desire to get rich quick. They also tend to be great at support, always willing to go the extra mile to help fix things when they occasionally go astray. These people understand that they have a responsibility to focus on the details and to get things as right as possible as often as possible.

So my message to all developers, commercial and freelance, whether part of a team or solo artists, is simple. Please reassess your values and your methods next time you start to work on a piece of code or a new design. The best processes in the world are worthless if you - as an individual developer - fail to actually give a toss about what you are responsible for, and fail to give true consideration to your customers and their basic needs and requirements.

Wednesday 1 February 2012

Maybe Process Management Isn't Enough

For the past five or six years I've been evangelising about moving from a Process Improvement culture to one of Process Management. I've written about it in these blogs, I've spoken about it at conferences, and I've tried to encourage and promote the adoption of the concepts and principles in the workplace. I generally find my arguments are well accepted:

  • Process Improvement activities tend to be short term, project based endeavours which peak in the run up to an appraisal or audit and then fizzle out
  • Improvement teams spend much of their time recreating processes in their own image rather than building on existing processes to actually improve them
  • Process improvement activities are often top down and more aligned to compliance than aimed at the generation of added business value
  • We think in terms of Quality Management which encompasses planning and control, assurance, compliance and improvement, but only talk about process improvement

So when I talk about Process Management I'm thinking about a truly operational activity which is a defined and managed function that works holistically across the business and is aimed at generating improved business performance at all levels.

But I'm now having my doubts about whether this goes far enough, and reading a post earlier this week has encouraged me to write this entry. Chris Taylor published an article on BPM For Real entitled "Has process lost its meaning?"

Now, I don't necessarily agree with Chris's views in this particular instance (although I totally agree that management speak and jargon have completely clouded our use of vocabulary, and I have written about that before!), but he said enough to make me step back and think about my experiences over the past few years.

  • Some organisations simply did not have effective processes in place which was preventing them from achieving the levels of performance they could have expected from their people
  • Many organisations wasted their "improvement dollars" fixing perceived issues rather than genuine failures, often because they believed their process experts rather than listening to the process users
  • Too many managers simply didn't understand what they were really doing and initiated improvement activities aimed at doing the wrong things better
  • Lots of teams were involved in Continuous Tinkering rather than focused improvements
  • Business value was not being realised because it wasn't even part of the dialogue for consideration
  • Arbitrary quality targets were set and improvement activities were channelled into meeting these
  • Reactive quality compliance had higher management priority than proactive prevention of future quality issues

Some of these are genuine process issues which could be addressed by more rigorous process management. But fixing some of these problems requires more than new or improved processes. They need better understanding of business and management realities than many managers have. They need better trained and educated managers. They need process experts who have a better understanding of core business activities and values and who operate in the real world of the people who execute the processes (commonly called workers!).

My biggest problem is that I don't have a convenient label for what this discipline really is. It could be Performance Management, but that's already been hijacked by HR in the guise of personnel reviews. In the old days, it could have come under the moniker of Quality Management, but that's been hijacked by Testing and the compliance police.

Maybe I should just invent a word - Proformance perhaps. It's got a big red line under it as I'm writing this, so it doesn't appear to exist yet. Yes, I like that...

Business Proformance Management - the act of ensuring the right things are being done properly across an organisation, with the aim of improving business value and outcomes for all stakeholders including the people doing the work.

I shall reflect on this further and let you know how I get on. I'd welcome your thoughts on Proformance Management.


Wednesday 18 January 2012

Acting Under Pressure - Free Thinking or Conditioning

I've been having some pretty restless nights since I came back from Switzerland and last night was no exception. I woke up at 04:45 clutching at some snippets of a rather bizarre dream, but sadly wasn't conscious enough to jot them down. The gist of it was that I was in a war zone with some close friends from both work and personal life and I was questioning some of the decisions that were being taken, both tactical and strategic.

In those immediate moments after waking I realised that what I had been thinking about was the difference between doing the right things and doing things right (regardless of whether they were the right things to do). As I rubbed the sleep out of my eyes, my mind started darting around all over the place. I started thinking about the Verification and Validation process areas in CMMI (as you do!); I thought about jobs I had left because I had tried to do what I thought was the right thing whilst all around me people were doing what they'd always done and still failing; and I thought about all the projects I've been involved in which might have succeeded if leaders had focused on doing the right things rather than doing the wrong things righter. It's sad to say that far too many of us have worked in organisations and projects that have resembled war zones, and that so much of our office language reflects conflict, with our war rooms, death marches, and battle plans.

Finally I started wondering how much of our behaviour changes when we're under pressure and we start to behave according to our genetic ancestry (fight or flight) or our social or workplace conditioning (command and control in most cases). I even wondered whether there is a genetic disposition that makes some people behave like leaders and others act like sheep, particularly when the going gets tough.

Many of you will have seen coverage of the terrible events in Italy this week involving the cruise ship Costa Concordia. We'd all like to think that if we ever found ourselves in such a situation we'd behave with dignity and calmness and make sure that we did the right thing in that context. The tragedy is that when we're faced with the reality of such a disaster, very few people live up to their ideals. It's much easier to do what you're told (rightly or wrongly) and to follow the majority. To do something that goes against the grain, to think differently, to take a different course of action that might go against all common sense, is much harder.

But in an organisational context or a project context the pressures that we are up against are not life threatening. We not only have the opportunity to think or react differently to everyone else - we have an obligation to do so. And managers and leaders have an equal obligation to listen and make decisions accordingly based on analysis of what is being said and not against received wisdom or ingrained ideals of what is right.

Projects and organisations fail most often because they allow themselves to be persuaded by convention without thinking about the real consequences. Far too many decisions drive the types of behaviour which lead to people focusing on doing the wrong things, and trying to get better at doing them rather than simply doing the right thing because it's different.

UPDATE : Just before I went to post this entry I discovered someone else thinking similar thoughts today as I saw this on my Twitter feed...
flowchainsensei: "Always, always the toughest decision I have to make as a coach; whether to behave as management expects OR to coach teams effectively."

I fully empathise with this dilemma. In the end my heart usually rules my head, and after toeing the line for a while I'll edge towards trying to do what's right. Sometimes I'll end up paying the price, but at least my conscience is clear, and there are always some folk around who thank me for trying!