Monday, June 1, 2015

Thoughts on Process Improvement

A week ago I posted a small article with thoughts on successful facilitation of test process improvement at techwell.com. In that article I focused on one of my favorite key points within process improvement - how to eat an elephant! Big bang process improvement projects rarely succeed because people can't keep up with and digest all the new activities, work products, etc. Instead I recommend an approach where you do it just like you would eat that elephant - one bite at a time.

But there are several key things to focus on when you are involved in improving a process, at least:


  • Know where you are (Current situation)
  • Know where you are going (Target situation)
  • How to eat an elephant
  • Learning
  • Ownership and commitment
  • Process first, then tools
In the following weeks I will do a bit of blogging on the parts other than the elephant :-).
  

Today: Know where you are - and know where you are going!




If you are going on a road trip and want to plot a route (or get the route into your GPS), you need two key points: the position you are at now and the target position you want to reach.
But that doesn’t just go for road trips and travelling – it goes for process improvement as well.

How can we identify what we need to improve if we don’t know how we are working today, and even more important – what works and what doesn’t? So you need to start by creating a baseline – identifying the current state of the process in your team/project/organization.

There are many ways to do this. Two of the formal ways of measuring process maturity within testing are the TMMi and TPI NEXT assessments. Both of these are formal processes for identifying, analyzing and evaluating the maturity of a given test process – no matter whether we are talking about a single project or an entire organization. With these formal methods you can either do the assessment yourself or ask someone outside the organization to do it, and the result of the assessment should be both a report on the current state and a recommended roadmap for improvement.

But you can also do it in an informal manner. The important thing here is: involve the right people from the beginning! In my current assignment we started with a brown paper exercise involving the different stakeholders in the program. Together we drew the current process using post-its and identified unknowns, questions and problems with post-its of another color.
 
We discussed every step along the way, agreeing on how the process looked at that time, and after our workshop the drawing was presented to the rest of the team to ensure that we agreed on a common picture of the as-is process. The drawing stayed on a wall for a period, allowing people to think and comment; a new color of post-its was available, making it visible what was added afterwards :-)

With the drawing you have an illustration of where you are – the current state of your organization within testing, where your strengths are and also your weaknesses.



With that knowledge it is now time to discuss where you want to go. Together, identify potential improvements, prioritize them and create an improvement backlog. Since I love illustrating things in swim lanes, I took the brown paper and created a process drawing in Visio that made the current responsibilities and flow visible. One of the first things in the backlog was then to create the to-be process drawing - the "dream target" - and to discuss which low hanging fruits you could pick to make the first visible improvements and get a sense of progress from an early point.
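To make the "low hanging fruits" part a bit more concrete, here is a minimal sketch of one way to order an improvement backlog. The items and the value/effort scores are made up for illustration, and scoring value against effort is simply my example technique here, not something prescribed by the assessments mentioned above:

```python
# Illustrative only: a tiny improvement backlog where each item gets a rough
# value score (how much it helps) and an effort score (how hard it is), both 1-5.
# Sorting by value per unit of effort puts the low hanging fruits at the top.

backlog = [
    {"item": "Agree on a common defect workflow",   "value": 4, "effort": 1},
    {"item": "Introduce lightweight test reporting", "value": 3, "effort": 2},
    {"item": "Create the to-be process drawing",     "value": 5, "effort": 3},
    {"item": "Pilot exploratory testing sessions",   "value": 4, "effort": 4},
]

# Highest value-per-effort first - the quick wins that give early visible progress.
for entry in sorted(backlog, key=lambda e: e["value"] / e["effort"], reverse=True):
    print(f'{entry["item"]:40s} value={entry["value"]} effort={entry["effort"]}')
```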

We of course agreed that this was not a fixed process goal - we learn as we go and improve the dream target as we do. It is a moving target, but at least we had a basic idea about where we were going.

At this point we didn't talk about tools; we talked a bit about what basic internal training we could do - but the main thing was to get a common picture of the journey we were going to take.



Saturday, April 18, 2015

Product risk: are we talking the talk or walking the walk?



Product risks are something we talk a lot about as testers... but do we only talk the talk, or do we also walk the walk?

Recently I visited a project where they had written in their test strategy that they were doing risk based testing. They had completed a PRA (product risk analysis) and had a beautiful table in the strategy document showing all identified product risks, weighted according to damage and chance of failure. They had also identified which test levels to test on and with what intensity to mitigate those risks… even identified test techniques to use to get the best testing done… completely by the book… I felt happy… ;-) so beautiful.

But then I started to take a look at the testing being done, the test designs and test cases, and I started to wonder. Nowhere was it visible to me that the identified test strategy was addressed or that any kind of test design techniques were used. So I asked the testers on that project: have you considered the PRA that was conducted for the system? Have you used the test techniques? Have you even looked at the risk table when you designed and implemented the tests for this system?… and sadly the answer was NO. They hadn’t had time to use test design techniques, they said, and they had forgotten about the test strategy document.

Since then I have stumbled upon this a couple of times. One of my friends, who is also a test manager, had conducted a PRA together with the business and the testers to get a picture of how the business saw the system. But when the result was presented to the testers and the test lead, the answer was: nice table, but we don’t use it anyway.

So how do we change this? How do we go from talking the talk to actually walking the walk? Or should we maybe just accept that we don’t?

I actually think that we should walk the walk; the process of identifying and classifying product risks as a foundation for a test strategy and for testing is the right thing to do. But maybe we could do it in another way? Maybe we shouldn’t just hide the results of the risk analysis in spreadsheets and tools? Maybe we should focus more on the conversation we have when we do the risk analysis – the knowledge we share – and less on the formalities?

For example, I am a great fan of Product Risk Analysis as described in TMap, but I have my own lightweight version of how to do it – I have taken a lot of the formality away and primarily focus on getting people to talk about risk: getting the right mix of people together around a whiteboard, getting them to talk about what THEY see as product risks, and even more importantly getting them to discuss both damage and chance of failure – explaining to each other why they see the risk as that high (or low).

The table that comes out of it is just like in TMap, but we have made it together; we have discussed, shared knowledge and even clarified potential misunderstandings about the scope during the workshop.

I even do the test strategy table (maybe not the test techniques… that depends on the testers), but rather than just putting it in a test strategy document, I make it visible right next to the task board. When someone starts a new task/story, we talk about how it fits into the risk picture. And when a tester starts on a new feature, we take a look at the identified product risks and break them down into more detail for the given feature, ensuring the right focus and weighting of the testing.
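To give an idea of what such a table can boil down to, here is a minimal sketch of combining damage and chance of failure into a risk class and a suggested test intensity. The 1-3 scales, the class boundaries, the intensity descriptions and the feature names are all my own illustrative assumptions, not TMap's official classification:

```python
# Illustrative only: damage and chance of failure rated on simple 1-3 scales
# (1 = low, 3 = high). Their product gives a rough risk class, which in turn
# suggests how intensively to test a given feature.

def risk_class(damage: int, chance_of_failure: int) -> str:
    """Combine damage and chance of failure (both 1-3) into a risk class."""
    score = damage * chance_of_failure
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

# Example mapping from risk class to test intensity - adjust to your context.
TEST_INTENSITY = {
    "high":   "thorough: formal test design techniques plus exploratory sessions",
    "medium": "focused: key scenarios plus negative tests",
    "low":    "light: happy-path checks",
}

# Hypothetical features with (damage, chance of failure) agreed in a risk workshop.
features = {
    "Payment handling":  (3, 2),
    "User profile page": (1, 2),
    "Order export":      (2, 3),
}

for name, (damage, chance) in features.items():
    cls = risk_class(damage, chance)
    print(f"{name:18s} risk={cls:6s} -> {TEST_INTENSITY[cls]}")
```

The exact thresholds matter less than the conversation about why a feature lands where it does - which is the whole point of the workshop.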

The main thing, in my humble opinion, is that we talk about risks to ensure that we have a common picture, and that we actually address them when we test - what form, shape or name we give it is less important.