Thursday, December 13, 2012

All bugs are great bugs... or

When it comes to bugs I used to have a clear approach: "the only good bug is a dead bug", sort of... :-) I thought that all bugs were great, that we needed to focus on attacking the system as aggressively as possible, and that only our own imagination set the boundaries for what crazy tests we could do in order to find those bugs.

E.g. for our C2 system: "let's make an area over the north pole with 274 position points, then change map format between lat-long, MGRS and GEOREF, and for each map format zoom and pan like crazy. I bet that will kill it".
And it did - the system eventually crashed and I was happy: I had found a severity 1 bug, a crash.
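
(For the curious, here is a minimal sketch of that stress test in Python. The MapView interface - draw_area, set_format, zoom, pan - is my invention for illustration, not the real C2 system's API, and the little stub class is only there so the sketch runs on its own:)

```python
import random

def make_polar_area(num_points=274):
    """Generate an area of num_points positions clustered around the north pole."""
    return [(random.uniform(85.0, 90.0),       # latitude close to the pole
             random.uniform(-180.0, 180.0))    # any longitude
            for _ in range(num_points)]

class FakeMapView:
    """Stand-in for the real map component, just so the sketch is runnable."""
    def draw_area(self, points): print(f"drew area with {len(points)} points")
    def set_format(self, fmt):   pass
    def zoom(self, factor):      pass
    def pan(self, dx, dy):       pass

def stress_map(map_view, iterations=100):
    map_view.draw_area(make_polar_area())
    for _ in range(iterations):
        for fmt in ("lat-long", "MGRS", "GEOREF"):   # cycle the map formats...
            map_view.set_format(fmt)
            map_view.zoom(random.uniform(0.1, 10.0))  # ...while zooming...
            map_view.pan(random.uniform(-50, 50),     # ...and panning like crazy
                         random.uniform(-50, 50))

stress_map(FakeMapView(), iterations=3)
```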

Or for a recent system I tested within the energy sector: "let's put a lot of hardcore HTML code into all the data entry fields, and paste HUGE amounts of data into fields that should have field length limitations".
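
(Again just to illustrate the idea, a rough sketch of that hostile-input test. The submit_field hook and the toy validator are assumptions of mine for the example, not the real application's interface:)

```python
# Payloads of the kind described above: hardcore HTML and oversized data.
PAYLOADS = [
    "<script>alert('boom')</script>",   # script/markup injection
    "<b>" * 10000,                      # deeply nested tags
    "A" * 1000000,                      # far beyond any sane field length limit
]

def hammer_fields(submit_field, field_names):
    """Paste every payload into every data entry field and log any blow-ups."""
    for name in field_names:
        for payload in PAYLOADS:
            try:
                submit_field(name, payload)
            except Exception as exc:    # a crash here is the kind of bug I mean
                print(f"{name} failed: {type(exc).__name__} "
                      f"(payload length {len(payload)})")

# Toy validator standing in for the real system under test:
def toy_submit(name, value):
    if len(value) > 256:
        raise ValueError("field too long")

hammer_fields(toy_submit, ["customer_name", "meter_id"])
```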

And this is all fine - these are great and aggressive test scenarios... but what about the defects I found: how realistic are they? Would a user (normal or hacker type) ever do this? Will these defects ever be prioritized and fixed? Do my test cases add value?

And here I really have a discussion with myself :-) because:
On one hand I think they do: they give us knowledge about weaknesses in the system, scenarios that we might use as a basis for more realistic scenarios for other areas of the system. The one with the 274-point area was actually realistic - it turns out the army really does use a lot of position points when they create areas - it is just the part with the north pole and the crazy use of map formats and zoom/pan that I question.

On the other hand, some of the really aggressive and, to some extent, crazy test cases I have executed in my time are totally unrealistic. They are so far away from reality (not even the "real world" - just reality) and from the users' way of using the system that I feel I am to some extent wasting my time doing those tests - and wasting developer time getting the resulting defects fixed.

So in the future I will do one thing for sure: I will ask myself "is this in any way realistic, will a user ever do anything like that - and if one in a million does... is it worthwhile?". Of course there is always the matter of security for systems that are open to the outside world - that makes the boundaries for what is realistic a bit different, since hackers have a somewhat "crazier" imagination than most other users... but still.

What do you think - do you think we should attack all kinds of test situations, and that ALL bugs need to be addressed?

2 comments:

  1. Hi Gitte,
    it seems to me that the question is to find out which tests and which bug fixes are the most valuable/desirable.
    Is a given amount of testing effort (time, money…) that didn't find a bug effort wasted? (I don't think so, at least not necessarily.) Is a bug that is found but won't be fixed waste? (Again: maybe, maybe not.)
    How about a found bug that actually is fixed? Well, even that might be wasted effort. (E.g.: it took a long time to find the fault (the cause of the bug), it also took a long time to fix & retest - and it is related to a rarely used special case of a rarely used feature.)

    So I tend to conclude that, no, we shouldn't "attack all kinds of test situations", and not ALL bugs need to be addressed.

    Even your example of entering lots of text in an entry field & submitting it: in the case of a publicly visible web site, I'd assume that it's well worth fixing, since it could be an attack vector for hacking the system.
    Even for a purely internal application…
    Well, in the end it's about getting an answer to the question: is it worth the effort? If anybody can easily crash the system, restarting takes minutes and there are hundreds of users… you get my drift.

    1. Totally get your drift :-) It should always be a question of what adds the most value - and of course a test case adds value even when it doesn't find a bug; it is still part of getting information about the current state of quality in the application. My examples might not have been good enough to illustrate the point I am trying to make - that sometimes the scenarios and things we try out are too far outside the boundaries of "reality". Of course I should hope that a bug like that would be caught by the product owner when evaluating and prioritizing the incoming defects (if such a process exists in the project).
