Friday 23 March 2012

Lessons Learned about Lessons Learned (Part 2)

In my last post I discussed a few of the problems that I've encountered with Lessons Learned reporting (a.k.a. Project Post-Mortem reviews). In this post I'm going to propose some of the things that could be done to add value to a process that is often regarded with contempt by project managers and team members, misused or misinterpreted by process improvement teams and managers, and generally misunderstood by the people who could most benefit from it. These ideas are intended to be method-independent and can be adopted whether you're using CMMI, Agile, or any other model or lifecycle.

Before I go any further however, I want to pick up on some of the comments made on my previous post, namely the issues of Timeliness (@flowchainsensei) and Trust (@bruhland2000), because these have a significant bearing on any proposal for improvement of this process.

In many organisations the Lessons Learned process only kicks in at the end of the project. There are several issues with this:
  • the project cannot benefit because it is already finished
  • project team members cannot properly recall details of things that could be of value in the future, and anyway they are already busy thinking about their next project
  • funding is no longer available to do the job properly
The other problem with timeliness is that project lessons learned reviews rarely coincide with process update cycles. Valid improvement opportunities (or fixes to incorrect processes) may not be made available to projects for months. Projects that need the fixes and therefore use them before they are "official" face the prospect of being "non-compliant" unless process waivers can be used (another process that is often so bureaucratic and time-consuming that projects would prefer to risk a non-compliance than jump through hoops to get the appropriate waiver!).

The second issue regarding trust (or lack of it) is common in organisations where there is still a dominant blame culture. Whilst good facilitation of reviews can minimise some of the issues around trust, a more stable and long-term way to deal with the issue is to be more explicit about what constitutes a valuable lessons learned review in the first place, and to focus on the processes, system interactions and organisational quagmires that need to be changed to give projects a better chance of succeeding.

So what are we really trying to achieve in a lessons learned review? Who are the beneficiaries? How can we make the process more robust and of value to the participants? And most importantly, how can we effectively use the information gathered in these (and other) reviews to ensure that the process returns more benefit than it costs, and therefore adds value to the organisation? I'm not going to try to define a one-size-fits-all process here. What I'd prefer to do is offer up some ideas for consideration that you may be able to build into an existing process set, or to discuss with senior management to bring about a mindset change if that is what is required.

Some of the issues that an organisation needs to address (and you need to be honest when you answer these questions) are:
  • What is the real purpose of the process? Is it intended to generate genuine improvement or is it in place to comply with some abstract model requirement? If you are not getting any tangible benefit from your lessons learned then you are doing this for compliance purposes and the process needs a major overhaul
  • Does the lessons learned review do anything to address systemic process or organisational issues in a timely fashion, or does it simply generate outputs which are subsequently ignored because there is no real substance to them?
  • Who is empowered to make process changes based on the lessons learned review? Can project teams make local changes to meet their specific issues? If they must wait for official changes to be released there is little incentive for them to take the time to participate in any such review - and the organisation is building a failure mechanism into its operations
  • Are successes identified by projects deemed to be "best practice" and built into the process set regardless of the specific set of circumstances which may have led to a specific success?
  • Do you have a repository of lessons learned reports and assets which consists primarily of similar findings and alternative templates and is rarely accessed by any project teams? Any genuine lesson learned should take the form of an actionable item leading to a direct improvement in the process asset library or change to company policy
  • Does your lessons learned report template contain sections like "what worked well", "what went wrong" and "what could be improved"? Focus the review away from these subjective descriptions and follow through on specifics like "which processes / interactions are causing us trouble and what can we do to resolve the problem?"
  • Have you considered adding some quantitative elements into the report? This can be difficult as scoring issues is still very subjective, but if the same people contribute by scoring the same questions at regular intervals in a project you can at least begin to visualise how your improvement efforts are working. However, this shouldn't be used for cross-project comparisons. (I'll post an example of this type of template on my website in the downloads section in the next few days. This template has proven useful in the past and is still being used today)
  • Do you have any measures in place to assess the value of the process? If you assess the value using compliance audits or counts of reports submitted then you haven't. If you have a count of actionable items generated from lessons learned reviews you are at least on the way
  • Do process owners (or their equivalent) ever meet with project teams or do they rely on the SEPG representatives? Project managers with process issues should invite process owners to their lessons learned reviews (as non-project stakeholders) to hear the issues first hand (and not to defend their process)
  • Do you make a distinction between internal project reviews (status meetings) and lessons learned reviews? If projects have regular daily, weekly or monthly review meetings which address the purpose of the lessons learned process, why should they be compelled to perform another process review just to tick the box? Don't penalise good practice for the sake of compliance or terminology!
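To illustrate the quantitative-scoring idea mentioned above, here is a minimal sketch of how repeated scores could be tracked. The questions, scale and scores are invented for illustration; the point is simply that the same people scoring the same questions at regular intervals lets you see the direction of travel within a single project (not across projects):

```python
# Hypothetical example: the same reviewers score the same questions
# (1 = poor, 5 = good) at each in-project review interval.
# Question wording and scores below are invented for illustration.
reviews = {
    "Are handovers between teams working smoothly?": [2, 2, 3, 4],
    "Is the change-request process responsive enough?": [3, 2, 2, 2],
}

def trend(scores):
    """Compare the latest score with the first to show direction of travel."""
    delta = scores[-1] - scores[0]
    if delta > 0:
        return "improving"
    if delta < 0:
        return "worsening"
    return "flat"

for question, scores in reviews.items():
    print(f"{question} scores={scores} -> {trend(scores)}")
```

Even something this simple makes the visualisation point: the numbers themselves are subjective, but a consistent panel scoring consistent questions over time gives a usable trend line for the project's own improvement efforts.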
As I wrote in the previous post, many lessons learned processes are in place to fulfil a misinterpretation of a requirement, whether it be CMMI or ISO or whatever. The process is often designed to create evidence to demonstrate that the process is being executed, but it fails to address the real objective, which is to provide feedback into the management system to try to stop bad history repeating itself. Far more effort is put into creating the outputs and monitoring compliance to the process than into addressing the real need of the organisation - which is to improve. If the current effort wasted on Lessons Learned processes can be re-channelled into addressing the issues uncovered by the process, then the people who are required to use it may start to respect it more, because they will actually begin to reap some benefit from it.

As will the whole organisation.