Tuesday 14 February 2012

A Dilemma Regarding Defect Tracking

In my previous post I harked back to the good old days when I wrote code for a living and used to pride myself that I could pretty much guarantee there would be very few bugs in my production code. This was a good thing for the organisations I worked in, because we really didn't have a process model for software development. Most people in those organisations had never heard of development models, and the software groups tended to be structured around functional roles - analysts and designers, who handed huge specs to programmers, who delivered completed systems to testers. There weren't even any real project managers - the role was shared between the lead analyst and a business/product manager.

So producing relatively defect-free code was a good thing, because we didn't have defect tracking systems. What we had were lists of defects from the testing departments, which programmers then had to trace back to their own work and fix long after the problems were first introduced.

Since then, defect tracking and management systems have proliferated, and maintenance crews mop up bugs long after someone else created them. Defect prioritisation meetings soak up stakeholders' time, and product release cycles stay the same regardless of advances in technology and the power of new hardware and software tools.

No wonder, then, that the lean community regards defect management as waste, or that new techniques have been developed to better integrate development functions and activities so that defects are found and fixed as they are created, rather than at the end of the development cycle.

As a software engineer, process person, and business improvement guy, I heartily approve of this and wish that more organisations could understand what a difference it could make to their release cycles and the overall quality of their products.

But as a user I find myself with a dilemma, primarily with big organisations responsible for product portfolios running to millions of lines of code. I'm thinking in particular of operating system and hardware providers like Apple and Microsoft, where it's clearly impossible to test for every eventuality.

I'm reminded of the legend about Rolls-Royce from the days when it still built motor cars in the UK and owned the brand name. The story was that if your Rolls broke down you would ring a secret number, and a service engineer would arrive on the scene with a spare car and take yours away, in a covered truck, to be repaired before it was returned to you. Rolls-Royces simply didn't break down, and to prove it, you never saw one by the side of the road or being towed away.

Companies like Apple don't acknowledge bugs in their systems very often. When they do, the admission is usually buried in the notes for an update and stated in very high-level terms, like "fixes an issue with wireless networks and wake-up". I'm certain that Apple employs sophisticated defect tracking and management systems, but as a user I don't know whether my specific problem is a known bug, a personal issue caused by my system configuration, or a previously undiscovered problem. I can search the support forums and I can send feedback to Apple, but that feedback never gets officially acknowledged, and many of the forums are populated by the blind leading the blind.

It would be so useful if Apple, as many smaller companies already do, publicly provided a list of known issues, along with official workarounds where they exist. It would save us consumers hours of time spent working out whether we can fix something or not. It would save hours of Genius Bar and Apple Support time and cost. And it would make Apple look better, because the current approach makes the company look like it doesn't care.

As a developer I don't want defect lists. As a consumer I'm desperate to see them!

 
