Black Box Testing


Focus on functional requirements.
Attempts to find:
  1. incorrect or missing functions
  2. interface errors
  3. errors in data structures or external database access
  4. performance errors
  5. initialization and termination errors.
Equivalence Partitioning

Divide the input domain into classes of data for which test cases can be generated.
Attempting to uncover classes of errors.
Based on equivalence classes for input conditions. An equivalence class represents a set of valid or invalid states.
An input condition is either a specific numeric value, range of values, a set of related values, or a Boolean condition.
Equivalence classes can be defined by:
  • If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined.
  • If an input condition specifies a Boolean or a member of a set, one valid and one invalid equivalence class are defined.
Test cases for each input-domain data item are developed and executed.
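The steps above can be sketched as follows, assuming a hypothetical module `accept_age()` whose input condition is an integer range 1..120. The range yields one valid and two invalid equivalence classes, and one representative value is tested per class (the function name, range, and representatives are illustrative, not from the source).

```python
# Equivalence partitioning sketch for a hypothetical input that must be an
# integer in the range 1..120 (e.g. an age field).

def accept_age(age: int) -> bool:
    """Hypothetical module under test: accepts ages in 1..120."""
    return 1 <= age <= 120

# One representative value per equivalence class, with the expected result.
equivalence_classes = {
    "valid: 1 <= age <= 120": (50, True),
    "invalid: age < 1": (-5, False),
    "invalid: age > 120": (200, False),
}

for name, (value, expected) in equivalence_classes.items():
    assert accept_age(value) == expected, name
```

Testing one representative per class rests on the partitioning assumption: if the module mishandles one member of a class, it likely mishandles them all.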

Boundary Value Analysis

A large number of errors tends to occur at the boundaries of the input domain.
BVA leads to selection of test cases that exercise boundary values.
BVA complements equivalence partitioning. Rather than select any element in an equivalence class, select those at the 'edge' of the class.
Examples:
  1. For a range of values bounded by a and b, test (a-1), a, (a+1), (b-1), b, (b+1).
  2. If input conditions specify a number of values n, test with (n-1), n and (n+1) input values.
  3. Apply 1 and 2 to output conditions (e.g., generate table of minimum and maximum size).
  4. If internal program data structures have boundaries (e.g., buffer size, table limits), use input data to exercise structures on boundaries.
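Example 1 above can be sketched as follows, assuming a hypothetical validator `in_range()` that accepts integers in [a, b]; the range [10, 20], the function, and its name are illustrative assumptions.

```python
# Boundary value analysis sketch: enumerate the boundary test cases
# (a-1), a, (a+1), (b-1), b, (b+1) for a range bounded by a and b.

def bva_values(a: int, b: int) -> list[int]:
    """Boundary test values for an integer range [a, b]."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

def in_range(x: int, a: int = 10, b: int = 20) -> bool:
    """Hypothetical module under test: accepts values in [a, b]."""
    return a <= x <= b

# Exercise the module at each boundary value of the range [10, 20]:
# the off-by-one values 9 and 21 should be rejected, the rest accepted.
results = {x: in_range(x) for x in bva_values(10, 20)}
```

An off-by-one defect (e.g. writing `a < x` instead of `a <= x`) is invisible to a mid-range test value but is caught immediately by the `a` and `(a-1)` cases.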
Cause Effect Graphing Techniques

Cause-effect graphing attempts to provide a concise representation of logical combinations and corresponding actions.
  1. Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
  2. A cause-effect graph is developed.
  3. The graph is converted to a decision table.
  4. Decision table rules are converted to test cases.
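Steps 3 and 4 can be sketched as follows, assuming a hypothetical login module with two causes (C1 = "username valid", C2 = "password valid") and two effects ("grant" and "error"); the module, causes, and effects are illustrative, not from the source.

```python
# Decision-table sketch: each rule maps a combination of causes
# (input conditions) to the expected effect (action), and each rule
# becomes one test case.

def login(user_ok: bool, pw_ok: bool) -> str:
    """Hypothetical module under test: grants access only if both causes hold."""
    return "grant" if user_ok and pw_ok else "error"

# Decision table derived from the cause-effect graph:
# rule = (C1, C2) -> expected effect.
decision_table = {
    (True, True): "grant",
    (True, False): "error",
    (False, True): "error",
    (False, False): "error",
}

# Step 4: every rule in the table is converted into a test case.
for causes, expected_effect in decision_table.items():
    assert login(*causes) == expected_effect, causes
```

The value of the table is that it makes logical combinations explicit: with n Boolean causes there are up to 2^n rules, and the graph helps prune combinations that cannot occur.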
Comparison Testing

In some applications, reliability is critical.
Redundant hardware and software may be used.
For redundant software, separate teams develop independent versions of the software.
Each version is tested with the same test data to ensure all provide identical output.
Run all versions in parallel with a real-time comparison of results.
Even if only one version will run in the final system, for some critical applications independent versions can be developed and comparison (back-to-back) testing applied.
When outputs of versions differ, each is investigated to determine if there is a defect.
The method does not catch errors in the specification, since every version is built from the same (possibly wrong) requirements.
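The back-to-back scheme above can be sketched as follows, assuming two hypothetical, independently written versions of the same function (here, an arithmetic mean); the functions and test data are illustrative.

```python
# Comparison (back-to-back) testing sketch: run two independent versions
# of the same module on identical test data and compare their outputs.
# A mismatch flags a candidate defect in at least one version.

def mean_v1(xs: list[float]) -> float:
    """Version from team 1."""
    return sum(xs) / len(xs)

def mean_v2(xs: list[float]) -> float:
    """Version from team 2, written independently."""
    total = 0.0
    for x in xs:
        total += x
    return total / len(xs)

test_data = [[1.0, 2.0, 3.0], [10.0], [0.5, 0.5]]
mismatches = []
for xs in test_data:
    a, b = mean_v1(xs), mean_v2(xs)
    if abs(a - b) > 1e-9:  # tolerance for floating-point noise
        mismatches.append((xs, a, b))  # each mismatch is investigated
```

Note that a defect shared by both versions (e.g. one inherited from the specification) produces identical wrong outputs and goes undetected, which is exactly the limitation stated above.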