Black-box testing focuses on functional requirements.
It attempts to find:
- incorrect or missing functions
- interface errors
- errors in data structures or external database access
- performance errors
- initialization and termination errors.
Equivalence partitioning divides the input domain into classes
of data for which test cases can be generated, with the aim of
uncovering whole classes of errors.
It is based on equivalence classes for
input conditions. An equivalence class represents a
set of valid or invalid states.
An input condition is typically a
specific numeric value, a range of values, a set of related values, or a Boolean
condition.
Equivalence classes can be defined
by:
- If an input condition specifies a range or a specific value, one valid and two invalid equivalence classes are defined.
- If an input condition specifies a Boolean or a member of a set, one valid and one invalid equivalence class are defined.
Test cases for each input-domain
data item are developed and executed.
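As a rough illustration, here is a minimal Python sketch, assuming a hypothetical function accept_age whose single input condition is the range 18 to 65; one valid and two invalid equivalence classes are defined, and one representative value is drawn from each.

# Minimal sketch of equivalence partitioning (hypothetical example).
# Input condition: age must lie in the range 18..65, so one valid and
# two invalid equivalence classes are defined.

def accept_age(age: int) -> bool:
    """Toy function under test: accepts ages in the valid range 18..65."""
    return 18 <= age <= 65

# One representative test value per equivalence class, with the expected result.
equivalence_classes = [
    ("valid: 18 <= age <= 65", 40, True),
    ("invalid: age < 18",      10, False),
    ("invalid: age > 65",      80, False),
]

for name, value, expected in equivalence_classes:
    actual = accept_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}  {name}: accept_age({value}) -> {actual}")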
Boundary Value Analysis
A large number of errors tend to occur
at the boundaries of the input domain.
BVA leads to the selection of test cases
that exercise boundary values.
BVA complements equivalence
partitioning. Rather than selecting any element in an equivalence class, select
elements at the 'edge' of the class.
Examples:
- For a range of values bounded by a and b, test (a-1), a, (a+1), (b-1), b, (b+1).
- If input conditions specify a number of values n, test with (n-1), n and (n+1) input values.
- Apply the first two guidelines to output conditions (e.g., generate a table of minimum and maximum size).
- If internal program data structures have boundaries (e.g., buffer size, table limits), use input data to exercise the structure at its boundaries.
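A minimal sketch of the first guideline, reusing the hypothetical accept_age range of 18 to 65: the test inputs are the values just below, at, and just above each boundary.

# Minimal sketch of boundary value analysis for a range bounded by a and b
# (hypothetical example reusing the 18..65 age range).

def accept_age(age: int) -> bool:
    """Toy function under test: accepts ages in the valid range 18..65."""
    return 18 <= age <= 65

def boundary_values(a: int, b: int) -> list[int]:
    """Test inputs (a-1), a, (a+1), (b-1), b, (b+1)."""
    return [a - 1, a, a + 1, b - 1, b, b + 1]

for value in boundary_values(18, 65):
    expected = 18 <= value <= 65   # oracle for this toy example
    actual = accept_age(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}  accept_age({value}) -> {actual} (expected {expected})")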
Cause-effect graphing attempts to
provide a concise representation of logical combinations of input conditions and the corresponding
actions.
- Causes (input conditions) and effects (actions) are listed for a module and an identifier is assigned to each.
- A cause-effect graph is developed.
- The graph is converted to a decision table.
- Decision table rules are converted to test cases.
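As a hedged sketch of the last two steps, assume a hypothetical withdrawal module with two causes (valid account number, sufficient balance) and two effects (approve, reject); each decision-table rule becomes one test case.

# Minimal sketch: decision table derived from a cause-effect graph
# (hypothetical withdrawal module).
# Causes:  C1 = valid account number, C2 = sufficient balance
# Effects: E1 = approve withdrawal,   E2 = reject withdrawal

def withdraw(valid_account: bool, sufficient_balance: bool) -> set[str]:
    """Toy module under test, returning the identifiers of the effects it produces."""
    return {"E1"} if valid_account and sufficient_balance else {"E2"}

# Decision table: each rule is a combination of causes and its expected effects.
decision_table = [
    # (C1,    C2),    expected effects
    ((True,  True),  {"E1"}),
    ((True,  False), {"E2"}),
    ((False, True),  {"E2"}),
    ((False, False), {"E2"}),
]

# Each decision-table rule is converted to a test case and executed.
for causes, expected in decision_table:
    actual = withdraw(*causes)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}  causes={causes} -> {actual} (expected {expected})")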
Comparison Testing
In some applications, reliability is critical.
Redundant hardware and software may
be used.
For redundant software, separate
teams develop independent versions of the software.
Each version is tested with the same test
data to ensure that all provide identical output.
Run all versions in parallel with a
real-time comparison of results.
Even if only one version will run in the
final system, for some critical applications independent versions can be developed
and checked with comparison testing, or back-to-back testing.
When the outputs of the versions differ,
each version is investigated to determine whether it contains a defect.
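A minimal sketch of back-to-back comparison, assuming two hypothetical, independently written square-root routines run against the same test data; any discrepancy is flagged for investigation.

import math

# Minimal sketch of comparison (back-to-back) testing: two independently
# developed versions of the same routine are run on the same test data and
# their outputs compared.

def sqrt_version_a(x: float) -> float:
    """Independent implementation A: Newton's method."""
    guess = x if x > 1.0 else 1.0
    for _ in range(60):
        guess = 0.5 * (guess + x / guess)
    return guess

def sqrt_version_b(x: float) -> float:
    """Independent implementation B: standard library."""
    return math.sqrt(x)

test_data = [0.25, 1.0, 2.0, 9.0, 1e6]
TOLERANCE = 1e-6

for x in test_data:
    a, b = sqrt_version_a(x), sqrt_version_b(x)
    if abs(a - b) > TOLERANCE:
        # A discrepancy: investigate each version for a possible defect.
        print(f"DIFFER on {x}: version A = {a!r}, version B = {b!r} -> investigate")
    else:
        print(f"MATCH  on {x}: {a}")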
The method does not catch errors in the
specification itself, since every version is built from the same specification.