Over the years I have read a lot of master test plans and
test strategies, and very often they contain some variation of "we
are doing risk-based testing". But when I subsequently dive into the test
project, I find it hard to see it happening in practice. There is no common thread
from what is stated in the test strategy to the tests that have been specified
and executed, and the testing is often not based on a product risk analysis.
In order to do risk-based testing, you first
need to know which risks are related to the product or system you are going to
test. That is, some kind of product risk analysis must be carried out.
But making a product risk analysis is one thing; planning,
specifying and running tests that support the risk picture you have
identified is something else. The activities carried out in the test and quality assurance
project must all support a focus on mitigating the identified product risks,
as outlined below.
That is, as a minimum:
- The entire project is familiar with the output of the product risk analysis
- When the test is planned, it must be based on the identified risk picture. That is, the intensity of each test task reflects the risk it addresses
- The test techniques used for test design must be selected based on the product risk they address.
- The prioritization of test execution must reflect the risk picture - the highest risk is tested first (see the sketch after this list)
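To make the last point concrete, here is a minimal sketch in Python of ordering test execution so that the highest risk is run first. The test case names and risk scores are made up for illustration only.

```python
# Hypothetical test cases, each tagged with the score of the product risk
# it addresses (higher score = higher risk). Names and scores are made up.
test_cases = [
    {"name": "TC-201 login lockout", "risk_score": 9},
    {"name": "TC-305 change theme colour", "risk_score": 2},
    {"name": "TC-110 payment settlement", "risk_score": 15},
]

# Execute in descending risk order: the highest risk is tested first.
for tc in sorted(test_cases, key=lambda t: t["risk_score"], reverse=True):
    print(f"Run {tc['name']} (risk score {tc['risk_score']})")
```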
Now I can imagine at least two objections:
1. My testers cannot use test techniques, so there is no value in making a product risk analysis.
2. We are agile, so we do not do product risk analysis.
Let's start with the test techniques. A product risk
analysis establishes a common picture of what the test assignment is, and it is
important whether or not the testers can use test design techniques.
You can still give them a direction for how much testing should be done, based on
some rules of thumb, e.g. letting the risk level of an area determine how deep, how formally and how broadly it is tested.
Such rules of thumb are just meant as an example of how you
can use the output from your product risk analysis to steer what to test and how
intensively; define the rules that fit your context.
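As a rough illustration, such rules of thumb could be captured as simply as the sketch below. The risk levels and the wording of each rule are placeholders, not taken from any particular analysis; substitute your own.

```python
# Hypothetical rules of thumb: how much and what kind of testing each
# product risk level should get. Levels and rules are illustrative only.
RULES_OF_THUMB = {
    "high": "Formal test design techniques, thorough negative testing, test reviewed by a peer",
    "medium": "Cover the main scenarios plus the most important error cases",
    "low": "Lightweight checklist-based or exploratory testing",
}

def guidance_for(risk_level: str) -> str:
    """Return the suggested test intensity for a given risk level."""
    return RULES_OF_THUMB.get(risk_level.lower(), "No rule defined - agree one with the team")

print(guidance_for("High"))
```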
Now the testers have a basis for their test
specification, and you also have a basis for your test plan - it must
reflect the above as well.
When the test is executed, follow up against your product
risk analysis, so you can report whether the identified product risks are
actually addressed by the test - and with what result. This of course
requires that you have some form of traceability between risks and tests.
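A minimal sketch of what such traceability and reporting could look like, assuming risks and test cases are identified by simple IDs (all IDs and results below are made up for illustration):

```python
# Hypothetical traceability between product risks and test cases.
# Risk IDs, test IDs and results are illustrative only.
risk_to_tests = {
    "RISK-01": ["TC-101", "TC-102"],
    "RISK-02": ["TC-201"],
    "RISK-03": [],  # not yet covered by any test
}

test_results = {"TC-101": "passed", "TC-102": "failed", "TC-201": "passed"}

# Report, per risk, whether it is covered and how its tests went.
for risk, tests in risk_to_tests.items():
    if not tests:
        print(f"{risk}: NOT covered by any test")
        continue
    results = [test_results.get(t, "not run") for t in tests]
    print(f"{risk}: covered by {tests} -> {results}")
```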
But we are agile....
Regarding being in an agile project: product risk
analysis still adds value! Make it part of your team's sprint planning,
or integrate it directly so that product risks are identified as part
of the clarification of features and stories - perhaps by playing risk poker,
in the same way you play planning poker. The discussion that takes place between the product owner
and the team when product risks are identified is of great value to the team as a
whole. When the team has a common picture of the risks they face, they are also
more aware of their share in mitigating them.
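As an illustration of the idea - the scales, stories and votes below are assumptions, not a prescribed method - risk poker can be as simple as the team voting on likelihood and impact per story and multiplying the two:

```python
# Hypothetical risk poker round: each team member votes likelihood and
# impact on a 1-5 scale per story; the story's risk score is the product
# of the averaged votes. Stories and votes are illustrative only.
from statistics import mean

votes = {
    "Story: export report as PDF": {"likelihood": [2, 3, 2], "impact": [4, 5, 4]},
    "Story: change profile picture": {"likelihood": [1, 2, 1], "impact": [1, 2, 2]},
}

for story, v in votes.items():
    score = mean(v["likelihood"]) * mean(v["impact"])
    print(f"{story}: risk score {score:.1f}")
```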
Final comments
Remember, whether you are in an agile or a traditional
project: it is not enough to do a product risk analysis once. It is natural to
follow up on it as more of the project is clarified - new risks may appear, and risks
can be mitigated on a continuous basis. You might also consider having
more than one level of product risk analysis: one for the
project as a whole, and a more detailed one that the tester does on individual features
to gain deeper insight at a more specific level.
And remember - risk-based testing is not just about identifying the product risks; you also need to make them known and mitigate them.