Wednesday, February 7, 2018

Testing the Crap out of Requirements Using Black-hat BDD

It is no secret that problems with requirements have a significant impact on software projects, so why do we just accept them as law and steam ahead with development? What if there were a way to test them and make them better, before a single line of code is written and possibly wasted?


This session presents an approach to testing requirements using BDD, but with a twist. Typical BDD, which we will call White-hat BDD, aims to faithfully translate given requirements into scenarios and examples using the simple Gherkin specification language. There is some art required here, since typical requirements are expressed in the formal language of shall/shall not, etc., and rarely include illustrative examples, but the goal is to match the intent of the provided requirement as closely as possible.
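
For instance, take a hypothetical requirement of my own invention, used purely for illustration: "The system shall lock a user account after three consecutive failed login attempts." A White-hat translation tries to say exactly that, just in Gherkin:

  Feature: Account lockout

    # White-hat scenario: a faithful translation of the (made-up) requirement
    # "The system shall lock a user account after three consecutive
    # failed login attempts."
    Scenario: Account is locked after three consecutive failed logins
      Given a registered user "alice" with a valid password
      When "alice" enters an incorrect password 3 times in a row
      Then the account for "alice" is locked
      And any further login attempts are rejected

The step definitions behind these lines are a separate concern; what matters here is the specification itself.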


Black-hat BDD (a term I just made up), on the other hand, aims to clearly and deliberately subvert the intention of the provided specification while still following the letter of the law, so to speak. In this way, counter-scenarios and counter-examples provide evidence of weaknesses in the requirement. Instead of allowing these weaknesses to quietly lead to misinterpretation or to fester as awkward constraints or holes in the product, this approach points them out directly and dramatically, so that they can be fixed immediately. Think of it as breaking the integrity of the requirements by exploiting their weaknesses, much the way black-hat hackers take advantage of security holes in software.
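
Sticking with the same made-up lockout requirement, a Black-hat counter-scenario honors those exact words while trampling the intent behind them:

  Feature: Account lockout (Black-hat)

    # Counter-scenario: nothing in the requirement says the account is
    # ever unlocked, or that the rightful owner is told what happened.
    Scenario: An attacker locks out the rightful owner forever
      Given a registered user "alice" with a valid password
      And an attacker who knows only the username "alice"
      When the attacker enters a wrong password 3 times in a row
      Then the account for "alice" is locked
      And "alice" can never log in again, even with the correct password
      And no notification or unlock path is required by the requirement

Nothing in that scenario contradicts the requirement as written, which is exactly the problem it is meant to expose.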


This approach can be used to address requirements that are:

  • Contradictory
  • Impractical
  • Incomplete
  • Inconsistent
  • Un-testable/Unquantifiable
  • Vague
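
To make one of these concrete, take a vague, unquantifiable requirement of my own invention: "The system shall respond to user requests quickly." A counter-scenario can satisfy the words while missing the point entirely:

  Feature: Request handling (counter-examples)

    # Counter-scenario for a vague/unquantifiable requirement:
    # "quickly" is never defined, so any response time qualifies.
    Scenario: A 30-second response still counts as "quick"
      Given the requirement only says responses must be "quick"
      When a user submits a request
      And the system responds after 30 seconds
      Then the letter of the requirement is arguably satisfied
      But the user gave up long ago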

The general approach is as follows:


  1. Start with a provided requirement (whether it is expressed in requirement-speak: shall, should, shall not, etc., or in user story form)
  2. Come up with a Black-hat BDD scenario/example that intentionally leads to 'bad' behavior or results but is still consistent with the wording of the requirement
  3. Let the requirement giver clarify the intent by straightening out the scenario/feature
  4. Go back and forth until no more glaring holes are identified (a sketch of one such round follows)
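
Here is one made-up round of that back and forth, continuing the vague "respond quickly" requirement from above:

  Feature: Response time

    # Round 1 - Black-hat counter-scenario against "respond quickly":
    Scenario: 30 seconds is still "quick" if nobody defines quick
      When a user submits a request
      Then a response arrives within 30 seconds

    # Round 2 - the requirement giver clarifies: "The system shall respond
    # to user requests within 2 seconds for 95% of requests." The scenario
    # is then straightened out into a White-hat version:
    Scenario: Responses arrive within 2 seconds for 95% of requests
      When 100 users each submit a request
      Then at least 95 of the responses arrive within 2 seconds

The numbers are invented; the real value is in forcing the requirement giver to pick them.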


The conversation between requirement tester and provider, where the requirement is successively refined and improved, is much more friendly than it might seem based on my choice of terms; it is not adversarial, but a cooperative effort to produce a better end product. It can even be a little fun.


One important warning -- don't let these counter-examples accidentally end up in the 'real' BDD suite. Find some way to clearly mark them as off-limits. One way is to use unique keywords like 'Counter-Scenario' and 'Counter-Examples', which are not legal Gherkin, or to put the word "Nefarious" in the title (see the sketch below).
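
Here are two ways that marking might look, shown as fragments of separate feature files (the keyword and tag name are just illustrations, not an established convention):

  # Option 1: a made-up keyword that is not legal Gherkin, so the file
  # fails to parse instead of silently joining the real suite:
  Counter-Scenario: An attacker locks out the rightful owner forever
    Given a registered user "alice" with a valid password
    When an attacker enters a wrong password 3 times in a row
    Then "alice" can never log in again

  # Option 2: legal Gherkin, but flagged in the title and carrying a tag
  # that the test runner is configured to exclude:
  @nefarious
  Scenario: Nefarious - an attacker locks out the rightful owner forever
    Given a registered user "alice" with a valid password
    When an attacker enters a wrong password 3 times in a row
    Then "alice" can never log in again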


For our session we practiced with examples from a made-up project, centered on the idea of tracking, predicting, and coordinating brain waves/states so that users can target an optimal state for either individual or shared experiences. The canonical example would be a group of users who want to simultaneously achieve a highly creative or productive 'flow' state, say for a collaborative work session.
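
To give a flavor of the kind of requirement we played with (the wording below is my own reconstruction, not a quote from the session materials), imagine "The system shall notify all group members when the group reaches a shared flow state":

  Feature: Shared flow notifications (counter-examples)

    # Counter-scenario: the requirement never says when to notify.
    Scenario: Notify the group about shared flow, three hours later
      Given a group of users in a collaborative work session
      When every member of the group reaches a creative "flow" state
      And the shared flow state ends after 20 minutes
      Then the system sends the notification 3 hours later
      And the requirement, as written, is still satisfied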


Here are the flip-charts from our lively discussion (see the link in the comments below):

Thanks, everyone!

-- David Snook

3 comments:

  1. Flip-charts show as broken links to me.

  2. Same here - looking from mac and windows - but maybe there is a preferred browser?

  3. Sorry! Looks like either my mail server or the receiving server scraped those photos out (too big?). Here is a link to the flip-charts:

    https://1drv.ms/f/s!AvuWNYBuKesJkvZUJ2OPmb4TKw7pqw

    By the way, I'm half-way thinking of taking on the example project that we discussed (measuring/predicting/coordinating brain states), and I even have a potential name: WaveCatcher. :)
