Sunday, April 13, 2014

The missing tweets from #STARCanada


I normally try to tweet a bit when I participate in conferences, but at STARCanada I simply couldn’t – my phone did not get along with the wifi and couldn’t find any signal in the rooms where the track sessions and keynotes were held.

So I wrote down my tweets as they popped up, and here is my STARCanada collection of tweets.
Actually a bit funny to see them all together on one page :-)

K1 Michael Bolton – Why software drives us crazy
  • From quality assurance to quality assistance
  • Expectations vs desires
  • I keep finding myself shaving a yak!! (I think it means: a chain of activities you have to do in order to reach a goal, which becomes so involved that you end up forgetting the original goal)
  • Too often a system presents the user with its internal data structure rather than actually supporting what the user needs to do
  • Geeks drive cars with a stick shift, because they’re interested in the process of driving
  • We report: are there problems in the product that represent risks to its use?
  • Organisations don’t like hearing about problems
  • Testing is viewed as a cost center in many companies rather than as the nervous system of the organization.
  • Think of testing not as “test cases” but as learning about product through experimentation.
T5 Nancy Kelln, Are your test reports a death sentence?
  • Comparing the five stages you may go through when receiving a death sentence with the reactions to getting bad news (test reports)
  • Anger – denial – bargaining – depression – acceptance
  • Every time you have a problem, look at it and figure out how it can be a “people problem”.
  • Depression: due to the nature of testing we cannot just disconnect and move on
  • An emotional response to bad news may mean your message was heard
T9 Paul Holland, Agile test management and reporting – even in a non-agile project
  • Using bad metrics promotes bad behavior
  • Measure/compare elements that are consistent in size or composition. Test cases are not (one can take 5 minutes or 5 days). Bugs aren’t either (complexity, probability, etc.).
  • Bad metrics risk creating unhealthy competition between teams (test cases per tester, bugs per tester)
  • Bad metrics contain misleading information or give a false sense of completeness (e.g. coverage, pass rate, number of test cases)
  • Use a whiteboard for test execution
K3 Ray Arell, The art of complex system testing
  • Move from “what did you do yesterday” to “what did you LEARN yesterday”
  • The magic roundabout (see here)
  • The complex adaptive system model
  • You cannot connect the dots going forward, you can only connect them looking backwards (Steve Jobs)
  • Cognitive-edge.com – got to check that out!!
  • Don’t waste your crisis – this is where innovation happens
  • Simple systems: Mind-numbing bureaucracy
  • Complex systems: Fluffy bunnies and tree huggers
  • Complicated systems: Tyranny of the experts
  • Chaos: True catastrophe
W8 Scott Barber: How metrics programs can destroy your soul
  • Qualitative vs quantitative metrics
  • Metrics without context don’t tell you anything
  • Build me software that makes me money – the “bring me a rock” game
  • “Measurement dysfunction”: people tend to optimize for the metrics => invites “bad stuff”
  • Quantitative metrics are invitations for conversations
  • Inconsistent units: tests/test cases, size/importance of defects
  • Quality is subjective: How good is “good enough”? What does pass/fail mean?
  • What do you really want to know?