Testing definitions

Acceptance Test: Formal tests (often performed by a customer) to determine whether or not a system has satisfied predetermined acceptance criteria. These tests are often used to enable the customer (either internal or external) to determine whether or not to accept a system.

Ad Hoc Testing: Testing carried out using no recognized test case design technique.

Alpha Testing: Testing of a software product or system conducted at the developer’s site by the customer.

Automated Testing: Software testing assisted by software tools, so that it does not require operator (tester) input, analysis, or evaluation.

Bug: Glitch, error, goof, slip, fault, blunder, boner, howler, oversight, botch, delusion, issue, problem.

Beta Testing: Testing conducted at one or more customer sites by the end-user of a delivered software product or system.

Benchmarks: Programs that provide performance comparisons for software, hardware, and systems.

Black Box Testing: A testing method in which the application under test is viewed as a black box and the internal behavior of the program is completely ignored. Testing occurs based upon the external specifications. Also known as behavioral testing, since only the external behavior of the program is evaluated and analyzed. For more details about Black Box Testing, please refer to Testing Methodologies.

Boundary Value Analysis (BVA): BVA differs from equivalence partitioning in that it focuses on “corner cases”, values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001. BVA attempts to derive such boundary values and is often used as a technique for stress, load, or volume testing. This type of validation is usually performed after positive functional validation has completed (successfully) using requirements specifications and user documentation.
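
For illustration, here is a minimal sketch of BVA in Python, using a hypothetical accept_score() function that is specified to accept integers from -100 to 1000:

```python
def accept_score(value: int) -> bool:
    """Hypothetical function under test: valid range is -100..1000 inclusive."""
    return -100 <= value <= 1000

# BVA selects values at and just outside the specified boundaries.
boundary_cases = [
    (-101, False),  # just below the lower boundary
    (-100, True),   # the lower boundary itself
    (-99,  True),   # just inside the lower boundary
    (999,  True),   # just inside the upper boundary
    (1000, True),   # the upper boundary itself
    (1001, False),  # just above the upper boundary
]

for value, expected in boundary_cases:
    assert accept_score(value) == expected, f"boundary case failed for {value}"
```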

Breadth Test: A test suite that exercises the full scope of a system from a top-down perspective, but does not test any aspect in detail.

Cause Effect Graphing:
• Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set.
• A systematic method of generating test cases representing combinations of conditions. See: testing, functional. [G. Myers]

Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions, analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards. Contrast with code audit, code review, code walkthrough. This technique can also be applied to other software and configuration items.

Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions. [G. Myers/NBS] Contrast with code audit, code inspection, code review.

Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Defect: The difference between the functional specification (including user documentation) and the actual program text (source code and data). Often reported as a problem and stored in a defect-tracking and problem-management system.
Also called a fault or a bug, a defect is an incorrect part of code that is caused by an error. An error of commission causes a defect of wrong or extra code. An error of omission results in a defect of missing code. A defect may cause one or more failures. Please refer to the article Defect Logging to learn how to log a defect.

Decision Coverage: A test coverage criteria requiring enough test cases such that each decision has a true and false result at least once, and that each statement is executed at least once. Contrast with condition coverage, multiple condition coverage, path coverage, statement coverage.
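
As a minimal sketch, the following hypothetical discount() function contains two decisions; decision coverage requires test cases that drive each decision to both a true and a false outcome:

```python
def discount(total: float, is_member: bool) -> float:
    if is_member and total > 100:   # decision 1
        return total * 0.5          # members get half price over 100
    if total > 500:                 # decision 2
        return total * 0.75         # bulk discount
    return total

# Each decision evaluates to both outcomes across these four cases,
# and every statement is executed at least once.
assert discount(200, is_member=True) == 100.0    # decision 1 true
assert discount(200, is_member=False) == 200.0   # decisions 1 and 2 both false
assert discount(600, is_member=False) == 450.0   # decision 2 true
assert discount(50, is_member=True) == 50.0      # decision 1 false (total too low)
```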

Dirty Testing: Another name for negative testing.

Dynamic Testing: Testing, based on specific test cases, by executing the test object or running programs.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value from each class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than integer is provided, this would be considered a negative test assertion or condition.
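
As a minimal sketch, assuming a hypothetical parse_age() function specified to accept only integers, one representative value is drawn from each input class:

```python
def parse_age(value):
    """Hypothetical function under test: accepts an int, rejects anything else."""
    if not isinstance(value, int):
        raise TypeError("age must be an integer")
    return value

# Integer class: a single representative serves as the positive test assertion.
assert parse_age(30) == 30

# Non-integer classes (string, float, None): negative test assertions.
for bad_input in ["thirty", 3.5, None]:
    try:
        parse_age(bad_input)
        raise AssertionError(f"expected TypeError for {bad_input!r}")
    except TypeError:
        pass
```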

Error: An error is a mistake of commission or omission that a person makes. An error causes a defect. In software development one error may cause one or more defects in requirements, designs, programs, or tests.

Error Guessing: Another common approach to black-box validation. Black-box testing is when everything other than the source code may be used for testing; this is the most common approach to testing. Error guessing is when random inputs or conditions are used for testing. Random in this case includes values produced by a computerized random number generator as well as ad hoc values or test conditions provided by the engineer.
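
As a minimal sketch, assuming a hypothetical normalize_name() function, the test mixes computer-generated random strings with ad hoc values an engineer might guess as likely troublemakers:

```python
import random
import string

def normalize_name(name: str) -> str:
    """Hypothetical function under test."""
    return " ".join(name.split()).title()

# Ad hoc guessed inputs plus a few randomly generated ones.
guessed = ["", "   ", "a" * 10_000, "O'Brien", "名前"]
randoms = ["".join(random.choices(string.printable, k=20)) for _ in range(5)]

for candidate in guessed + randoms:
    result = normalize_name(candidate)  # should never raise
    assert isinstance(result, str)
```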

Exception Testing: Identify error messages and exception-handling processes, and the conditions that trigger them.
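
A minimal sketch, assuming a hypothetical withdraw() function: the test triggers the exception-handling path and checks both the error message and the condition that triggers it:

```python
def withdraw(balance: float, amount: float) -> float:
    """Hypothetical function under test."""
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# The condition amount > balance should trigger the error message.
try:
    withdraw(balance=50.0, amount=100.0)
    raise AssertionError("expected ValueError")
except ValueError as exc:
    assert str(exc) == "insufficient funds"
```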

Exhaustive Testing (NBS): Executing the program with all possible combinations of values for program variables. Feasible only for small, simple programs.
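
As a minimal sketch of why this only works for small programs, a hypothetical three-input boolean majority() function has just 2**3 = 8 input combinations, all of which can be executed:

```python
from itertools import product

def majority(a: bool, b: bool, c: bool) -> bool:
    """Hypothetical function under test: true when at least two inputs are true."""
    return (a + b + c) >= 2

# All 8 combinations of the three boolean inputs.
for a, b, c in product([False, True], repeat=3):
    expected = [a, b, c].count(True) >= 2
    assert majority(a, b, c) == expected
```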

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Gray Box Testing: Tests involving inputs and outputs, where test design is informed by information about the code or the program operation of a kind that would normally be outside the tester's view.

High-level Tests: These tests exercise whole, complete products.

Inspection: A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems [IEEE94]. A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Integration: The process of combining software components, hardware components, or both into an overall system.

Integration Testing: Testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems. For more details about Integration Testing, please refer to Testing Methodologies.

Load Testing: Load testing verifies that a large number of concurrent clients does not break the server or client software. For example, load testing can discover deadlocks and problems with queues.
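
As a minimal load-testing sketch, assuming a hypothetical server exposing http://localhost:8000/health, many concurrent clients hit the endpoint at once; hangs or errors here can expose deadlocks or queue problems:

```python
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost:8000/health"  # hypothetical server under test
CLIENTS = 100

def one_client(_):
    # Each simulated client makes one request and returns the status code.
    with urlopen(URL, timeout=5) as response:
        return response.status

with ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    statuses = list(pool.map(one_client, range(CLIENTS)))

assert all(code == 200 for code in statuses)
print(f"{CLIENTS} concurrent clients completed successfully")
```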

Quality Assurance (QA): Software QA involves the entire software development PROCESS - monitoring and improving the process, making sure that any agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented to 'prevention'.

A set of activities designed to ensure that the development and/or maintenance processes are adequate for a system to meet its requirements/objectives. QA is interested in processes.

Re-test: Retesting means testing only a certain part of an application again, without considering how the change will affect other parts or the whole application.

Regression Testing: Testing the application after a change to a module or part of the application, to verify that the code change has not adversely affected the rest of the application.

Software Testing: The process used to help identify the correctness, completeness, security, and quality of developed computer software. Software testing is the process of executing a program or system with the intent of finding errors.

Test Bed: A test bed is an execution environment configured for software testing. It consists of specific hardware, network topology, operating system, configuration of the product under test, system software, and other applications. The test plan for a project should be developed from the test beds to be used.

UAT Testing: UAT stands for 'User Acceptance Testing'. This testing is carried out from the user's perspective and is usually done before the release.

Walkthrough: A 'walkthrough' is an informal meeting for evaluation or informational purposes. Little or no preparation is usually required.
