Sunday, May 24, 2009

Life Cycle of Testing Process

 

This article explains the different steps in the life cycle of the testing process. Each phase of the development process has a specific input and a specific output. Once a project is confirmed to start, its development can be divided into the following phases:

·  Software requirements

·  Software design

·  Implementation

·  Testing

·  Maintenance

Of the whole development process, testing consumes the largest share of time, yet most teams overlook this and the testing phase is often neglected. As a consequence, erroneous software is released. The testing team should be involved right from the requirements stage itself.

The various phases involved in testing, with regard to the software development life cycle are:

1. Requirements stage
2. Test Plan
3. Test Design
4. Design Reviews
5. Code Reviews
6. Test Cases preparation
7. Test Execution
8. Test Reports
9. Bugs Reporting
10. Reworking on patches
11. Release to production

Requirements Stage

Normally, in many companies, only developers take part in the requirements stage. Especially in product-based companies, a tester should also be involved, since a tester thinks from the user's side in a way a developer typically does not. A separate panel should be formed for each module, comprising a developer, a tester and a user, and panel meetings should be scheduled to gather everyone's views. All the requirements should be documented properly for further use; this document is called the "Software Requirements Specification".

Test Plan

No work succeeds without a good plan, and the testing process is no exception. The test plan is the most important document for testing, because it brings in a process-oriented approach. It should be prepared after the requirements of the project are confirmed, and it must contain the following information:

• The total number of features to be tested
• The testing approaches to be followed
• The testing methodologies
• The number of man-hours required
• The resources required for the whole testing process
• The testing tools to be used
• The test cases, etc.

Test Design

Test design is done based on the requirements of the project, and differs depending on whether testing is manual or automated. For automated testing, the different paths through the application must be identified first, and an end-to-end checklist has to be prepared covering all the features of the project.

The test design is represented pictorially and involves various stages, which can be summarized as follows:

• The different modules of the software are identified first.
• Next, the paths connecting all the modules are identified.

Then the design is drawn. The test design is the most critical stage, because it decides the test case preparation; the test design therefore determines the quality of the testing process.

Test Cases Preparation

Test cases should be prepared based on the following scenarios (a small example follows the list):

• Positive scenarios
• Negative scenarios
• Boundary conditions and
• Real World scenarios
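To make these scenario types concrete, here is a minimal sketch in Python using the standard unittest module. The validate_age function and its 18-to-60 rule are hypothetical examples invented for illustration, not part of any real application.

```python
# A minimal sketch: the four scenario types as executable test cases.
# validate_age and its 18-60 rule are hypothetical examples.
import unittest


def validate_age(age):
    """Accept ages 18 through 60 inclusive; reject everything else."""
    if not isinstance(age, int):
        raise TypeError("age must be an integer")
    return 18 <= age <= 60


class TestValidateAge(unittest.TestCase):
    def test_positive_scenario(self):
        self.assertTrue(validate_age(30))        # typical valid input

    def test_negative_scenario(self):
        self.assertFalse(validate_age(-5))       # clearly invalid value
        with self.assertRaises(TypeError):
            validate_age("thirty")               # wrong type entirely

    def test_boundary_conditions(self):
        self.assertFalse(validate_age(17))       # just below lower bound
        self.assertTrue(validate_age(18))        # on the lower bound
        self.assertTrue(validate_age(60))        # on the upper bound
        self.assertFalse(validate_age(61))       # just above upper bound

    def test_real_world_scenario(self):
        # A value a real user is likely to enter, not just an extreme.
        self.assertTrue(validate_age(42))


if __name__ == "__main__":
    unittest.main()
```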

 

Design Reviews

The software design is done in a systematic manner, often using UML. The tester can review the design and suggest ideas and needed modifications.

Code Reviews

Code reviews are similar to unit testing. Once the code is ready for release, the tester should be ready to unit test it, with his own unit test cases prepared. Although the developer performs unit testing, a tester must also do it: developers may overlook minute mistakes in the code that a tester can find.

Test Execution and Bugs Reporting

Once unit testing is completed and the code is released to QA, functional testing is done. A top-level test pass is done at the beginning of testing to find top-level failures. If any top-level failures occur, the bugs should be reported to the developer immediately so that a workaround can be provided.

The test reports should be documented properly and the bugs have to be reported to the developer after the testing is completed.

Release to Production

Once the bugs are fixed, another release is given to QA with the modified changes, and regression testing is executed. Once QA assures the software, it is released to production; before release, another round of top-level testing is done.

The testing process is iterative: once bugs are fixed, testing has to be done again, so in practice the testing process never truly ends.

SDLC: Software Development Life Cycle

The following are the activities of the SDLC:

1) System engineering and modeling

2) Software requirement analysis

3) Systems analysis and design

4) Code generation

5) Testing

6) Deployment and Maintenance

System Engineering and Modeling

In this phase, we identify the project's requirements and the main features proposed for the application. The development team visits the customer and studies their system, investigating the need for possible software automation. By the end of the investigation, the team writes a document that holds the specifications for the customer's system.

Software Requirement Analysis

In software requirement analysis, the requirements for the proposed system are analyzed first. To understand the nature of the program to be built, the system engineer must understand the information domain for the software, as well as the required functions, performance and interfacing. From the available information, the system engineer develops a list of actors, use cases and system-level requirements for the project. With the help of key users, the list of use cases and requirements is reviewed, refined and updated in an iterative fashion until the users are satisfied that it represents the essence of the proposed system.

Systems analysis and design

Design is the process of deciding exactly how the specifications are to be implemented. It defines specifically how the software is to be written, including an object model with properties and methods for each object, the client/server technology, the number of tiers needed for the package architecture, and a detailed database design. Analysis and design are very important in the whole development cycle: any glitch in the design can be very expensive to fix at a later stage of software development.

Code generation

The design must be translated into a machine-readable form, and the code generation step performs this task. The development phase involves the actual coding of the entire application. If the design has been worked out in detail, code generation can be accomplished without much complication. Programming tools such as compilers and interpreters for languages like C, C++ and Java are used for coding, and the right programming language is chosen with respect to the type of application.

Testing

After coding, program testing begins. Different methods are available to detect errors in the code, and some companies develop their own testing tools.

Deployment and Maintenance

Deployment and maintenance is a staged roll-out of the new application; this involves installation and initial training, and may involve hardware and network upgrades. Software will almost certainly undergo change once it is delivered to the customer, for many reasons: change could happen because of unexpected input values into the system, and changes in the system environment can directly affect the software's operation. The software should therefore be developed to accommodate changes that may happen during the post-implementation period.

Automated Testing Advantages, Disadvantages and Guidelines

This article starts with a brief introduction to automated testing, then covers the different methods of automated testing, the benefits of automated testing, and the guidelines automated testers must follow to realize those benefits.

Introduction:

"Automated Testing" is automating the manual testing process currently in use. This requires that a formalized "manual testing process", currently exists in the company or organization.

Automation is the use of strategies, tools and artifacts that augment or reduce the need for manual or human interaction in unskilled, repetitive or redundant tasks.

Minimally, such a process includes:

·  Detailed test cases, including predictable "expected results", which have been developed from Business Functional Specifications and Design documentation

·  A standalone Test Environment, including a Test Database that is restorable to a known constant, such that the test cases are able to be repeated each time there are modifications made to the application.

The following types of testing can be automated:

·  Functional - testing that operations perform as expected.

·  Regression - testing that the behavior of the system has not changed.

·  Exception or Negative - forcing error conditions in the system.

·  Stress - determining the absolute capacities of the application and operational infrastructure.

·  Performance - providing assurance that the performance of the system will be adequate for both batch runs and online transactions in relation to business projections and requirements.

·  Load - determining the points at which the capacity and performance of the system become degraded to the point that hardware or software upgrades would be required.

Benefits of Automated Testing

Reliable: Tests perform precisely the same operations each time they are run, thereby eliminating human error.

Repeatable: You can test how the software reacts under repeated execution of the same operations.
Programmable: You can program sophisticated tests that bring out hidden information from the application.

Comprehensive: You can build a suite of tests that covers every feature in your application.

Reusable: You can reuse tests on different versions of an application, even if the user interface changes.

Better Quality Software: You can run more tests in less time and with fewer resources.

Fast: Automated Tools run tests significantly faster than human users.

Cost Reduction: The number of resources required for regression testing is reduced.

These benefits can only be realized by choosing the right tools for the job and targeting the right areas of the organization to deploy them. The areas where automation fits best must be chosen.

The following areas should be automated first:

1. Highly redundant tasks or scenarios
2. Repetitive tasks that are boring or tend to cause human error
3. Well-developed and well-understood use cases or scenarios first
4. Relatively stable areas of the application, in preference to volatile ones

Automated testers must follow these guidelines to get the benefits of automation (a short example follows the list):

• Concise: As simple as possible and no simpler.

• Self-Checking: Test reports its own results; needs no human interpretation.

• Repeatable: Test can be run many times in a row without human intervention.

• Robust: Test produces same result now and forever. Tests are not affected by changes in the external environment.

• Sufficient: Tests verify all the requirements of the software being tested.

• Necessary: Everything in each test contributes to the specification of desired behavior.

• Clear: Every statement is easy to understand.

• Efficient: Tests run in a reasonable amount of time.

• Specific: Each test failure points to a specific piece of broken functionality; unit test failures provide "defect triangulation".

• Independent: Each test can be run by itself or in a suite with an arbitrary set of other tests in any order.

• Maintainable: Tests should be easy to understand, modify and extend.

• Traceable: To and from the code it tests and to and from the requirements.
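As a small illustration of several of these guidelines (self-checking, repeatable, robust and independent), here is a sketch using Python's built-in unittest. The configuration-file scenario is hypothetical.

```python
# Sketch of self-checking, repeatable, robust, independent tests.
# The configuration-file scenario is hypothetical.
import os
import tempfile
import unittest


class TestConfigFile(unittest.TestCase):
    def setUp(self):
        # Independent: every test builds its own fixture from scratch.
        self.workdir = tempfile.TemporaryDirectory()
        self.path = os.path.join(self.workdir.name, "app.cfg")
        with open(self.path, "w") as f:
            f.write("retries=3\n")

    def tearDown(self):
        # Repeatable: cleanup leaves no state behind for the next run.
        self.workdir.cleanup()

    def test_config_is_readable(self):
        # Self-checking: the assertion is the verdict; no human
        # interpretation of the output is needed.
        with open(self.path) as f:
            self.assertIn("retries=3", f.read())


if __name__ == "__main__":
    unittest.main()
```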

Disadvantages of Automation Testing

Though automation testing has many advantages, it has its own disadvantages too. Some of them are:

• Proficiency is required to write the automation test scripts.
• Debugging the test scripts is a major issue; an undetected error in a test script can have serious consequences.
• Test maintenance is costly with record/playback methods: even a minor change in the GUI means the test script has to be re-recorded or replaced with a new one.
• Maintenance of test data files is difficult if the test script tests many screens.

Some of the above disadvantages can cancel out the benefit gained from automated scripts. Still, though automation testing has its pros and cons, it is widely adopted all over the world.

Introduction to CMM

Software Quality:

Quality software should reasonably be bug-free, delivered on time and within budget. It should meet the given requirements and/or expectations, and should be maintainable.

In order to produce error free and high quality software certain standards need to be followed.

Quality Standards

ISO 9001:2000 is a quality management system certification. To achieve it, an organization must satisfy clauses 1 through 8 of ISO 9001:2000.

Six Sigma is a process improvement methodology focused on reduction in variation of the processes around the mean. Its objective is to make the process defect free.

SEI CMM is a de facto standard for assessing and improving processes related to software development, developed by the software community in 1986 with leadership from the SEI. It is a software-specific process maturity model that provides guidance for measuring software process maturity and supports process improvement programs.

SEI CMM is organized into 5 maturity levels:

1.   Initial

2.   Repeatable 

3.   Defined 

4.   Managed

5.   Optimizing

1) Initial

The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort and heroics.

2) Repeatable:

Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

3) Defined:

The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.

4) Managed:

Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.

5) Optimizing:

Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies.

Comparison of CMM and ISO

Feature                                     CMM             ISO 9000
Assesses process maturity                   Yes             No
Emphasizes continuous improvement           Yes             No
Provides self-assessments                   Yes             Yes
Applicable to projects                      Software only   No
Applicable to programs                      Software only   Yes
Applicable to organizations (operations)    Software only   Yes
Requires a quality management system        Yes             Yes

Conclusion:

CMM ensures that the process followed for developing a product yields an error-free product. A company that is process-driven is more successful than one that is people-driven; hence a company needs a good software development process to be successful.

Bug Life Cycle & Guidelines

In this tutorial you will learn about the bug life cycle and related guidelines: an introduction, the bug life cycle and the different states of a bug, descriptions of the various stages, guidelines on deciding the severity of a bug, a sample guideline for assigning priority levels during the product test phase, and guidelines on writing bug descriptions.

Introduction:

A bug can be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the Application Under Test (AUT).

Bug Life Cycle:

In the software development process, a bug has a life cycle, and it must go through this life cycle to be closed. A specific life cycle ensures that the process is standardized. The bug attains different states in the life cycle.


The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed

Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug has not yet been approved.

2. Open: After a tester has posted a bug, the tester’s lead approves that the bug is genuine and changes the state to “OPEN”.

3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to hand it back to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. This indicates that the bug has been fixed and released to the testing team.

5. Deferred: A bug in the deferred state is expected to be fixed in a later release. Many factors can move a bug to this state: its priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects it. The state of the bug is then changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same concern, one bug’s status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status changed to “TEST”, the tester retests it. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the developer has fixed it, the tester changes the status to “REOPENED”, and the bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes its status to “CLOSED”. This state means that the bug is fixed, tested and approved.
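The flow between these states can be modeled as a simple state machine. The sketch below is illustrative only; the transition table is one reasonable reading of the stages above, not the behavior of any particular bug-tracking tool.

```python
# Illustrative state machine for the bug life cycle described above.
ALLOWED_TRANSITIONS = {
    "NEW":       {"OPEN", "REJECTED", "DUPLICATE"},
    "OPEN":      {"ASSIGN"},
    "ASSIGN":    {"TEST", "DEFERRED"},
    "TEST":      {"VERIFIED", "REOPENED"},
    "VERIFIED":  {"CLOSED"},
    "REOPENED":  {"ASSIGN"},
    "DEFERRED":  {"ASSIGN"},    # picked up again in a later release
    "DUPLICATE": set(),
    "REJECTED":  set(),
    "CLOSED":    set(),
}


def move(current, target):
    """Change a bug's state, refusing transitions the cycle forbids."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"cannot move a {current} bug to {target}")
    return target


state = "NEW"
for step in ["OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"]:
    state = move(state, step)
    print(state)   # OPEN, ASSIGN, TEST, VERIFIED, CLOSED
```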

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.

Guidelines on deciding the Severity of Bug:

The severity level should indicate the impact each defect has on testing efforts or on users and administrators of the application under test. This information is used by developers and management as the basis for prioritizing work on defects.

A sample guideline for assignment of priority levels during the product test phase includes the following (a small sketch follows the list):

1.   Critical / Show Stopper — An item that prevents further testing of the product or function under test can be classified as a critical bug. No workaround is possible for such bugs. Examples include a missing menu option or a security permission required to access a function under test.

2.   Major / High — A defect that does not function as expected/designed, or that causes other functionality to fail to meet requirements, can be classified as a major bug. A workaround can be provided for such bugs. Examples include inaccurate calculations and the wrong field being updated.

3.   Average / Medium — Defects that do not conform to standards and conventions can be classified as medium bugs. Easy workarounds exist to achieve functionality objectives. Examples include matching visual and text links that lead to different end points.

4.   Minor / Low — Cosmetic defects that do not affect the functionality of the system can be classified as minor bugs.
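If the tracking tool records severity as structured data, the four levels above map naturally onto an enumeration. This is a hypothetical sketch, not a mandated schema:

```python
# Hypothetical sketch: recording the four severity levels as an
# enumeration so every reported bug carries exactly one of them.
from enum import Enum


class Severity(Enum):
    CRITICAL = 1   # blocks further testing; no workaround
    MAJOR = 2      # fails requirements; workaround exists
    MEDIUM = 3     # standards/convention violation; easy workaround
    MINOR = 4      # cosmetic only


print(Severity.CRITICAL.name)   # CRITICAL
```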

Guidelines on writing Bug Description:

A bug can be expressed as “result followed by action”: the unexpected behavior that occurs when a particular action takes place is given as the bug description.

1.   Be specific. State the expected behavior which did not occur (for example, a pop-up that did not appear) and the behavior which occurred instead.

2.   Use present tense.

3.   Don’t use unnecessary words.

4.   Don’t add exclamation points. End sentences with a period.

5.   DON’T USE ALL CAPS. Format words in upper and lower case (mixed case).

6.   Always mention the steps to reproduce the bug.

Automated Testing Best Practices

This article covers the case for automated testing: why automate the testing process, using testing effectively, reducing testing costs, replicating tests across different platforms, greater application coverage, results reporting, understanding the testing process, identifying tests requiring automation, and task automation and test set-up.

The Case for Automated Testing



Today, rigorous application testing is a critical part of virtually all software development projects. As more organizations develop mission-critical systems to support their business activities, the need for testing methods that support business objectives is greatly increased. It is necessary to ensure that these systems are reliable, built according to specification, and able to support business processes. Many internal and external factors are forcing organizations to ensure a high level of software quality and reliability.

Why Automate the Testing Process?

In the past, most software tests were performed using manual methods. This required a large staff of test personnel to perform expensive and time-consuming manual test procedures. Owing to the size and complexity of today’s advanced software applications, manual testing is no longer a viable option for most testing situations.

Using Testing Effectively

By definition, testing is a repetitive activity, and the methods employed to carry it out (manual or automated) remain repetitious throughout the development life cycle. Automation of testing processes allows machines to complete the tedious, repetitive work while human personnel perform other tasks. Automation eliminates the “think time” or “read time” required for the manual interpretation of when or where to click the mouse. An automated test executes the next operation in the test hierarchy at machine speed, allowing tests to be completed many times faster than the fastest individual could manage. Automated tests also perform load/stress testing very effectively.

Reducing Testing Costs

The cost of performing manual testing is prohibitive when compared to automated methods. The reason is that computers can execute instructions many times faster, and with fewer errors, than individuals. Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test.

Replicating testing across different platforms

Automation allows the testing organization to perform consistent and repeatable tests. When applications need to be deployed across different hardware or software platforms, standard or benchmark tests can be created and repeated on the target platforms to ensure that the new platforms operate consistently.

Greater Application Coverage

The productivity gains delivered by automated testing allow and encourage organizations to test more often and more completely. Greater application test coverage also reduces the risk of exposing users to malfunctioning or non-compliant software.

Results Reporting

Full-featured automated testing systems also produce convenient test reporting and analysis. These reports provide a standardized measure of test status and results, thus allowing more accurate interpretation of testing outcomes. Manual methods require the user to self-document test procedures and test results.

Understanding the Testing Process

The introduction of automated testing into the business environment involves far more than buying and installing an automated testing tool.

Typical Testing Steps: Most software testing projects can be divided into the following general steps:

Test Planning: This step determines what to test and when to test it.
Test Design: This step determines how the tests should be built and the level of quality required.
Test Environment Preparation: The technical environment is established during this step.
Test Construction: At this step, test scripts are generated and test cases are developed.
Test Execution: This step is where the test scripts are executed according to the test plans.
Test Evaluation: After the tests are executed, the results are compared to the expected results, and evaluations can be made about the quality of the application.

Identifying Tests Requiring Automation

Most, but not all, types of tests can be automated. Certain types, such as user-comprehension tests, tests that run only once, and tests that require constant human intervention, are usually not worth the investment required to automate. The following are examples of criteria that can be used to identify tests that are prime candidates for automation.

High path frequency – Automated testing can be used to verify the performance of application paths that are used with a high degree of frequency when the software is running in full production. Examples include: creating customer records.

Critical Business Processes – Mission-critical processes are prime candidates for automated testing. Examples include financial month-end closings, production planning, sales order entry and other core activities. Any application with a high degree of risk associated with failure is a good candidate for test automation.

Repetitive Testing – If a testing procedure can be reused many times, it is also a prime candidate for automation.

Applications with a Long Life Span – The longer an application is planned to be in production, the greater the benefits of automation.

Task Automation and Test Set-Up

In performing software testing, there are many tasks that need to be performed before or after the actual test. For example, if a test needs to be executed to create sales orders against current inventory, goods need to be in inventory. The tasks associated with placing items in inventory can be automated so that the test can run repeatedly. Additionally, highly repetitive tasks not associated with testing can be automated utilizing the same approach.

Who Should Be Testing?

There is no clear consensus in the testing community about which group within an organization should be responsible for performing the testing function. It depends on the situation prevailing in the organization.

 

Technical Terms Used in Testing World

In this tutorial you will learn about technical terms used in the testing world, from Audit and Acceptance Testing to Validation, Verification and Testing.

Audit: An independent examination of a work product or set of work products to assess compliance with specifications, standards, contractual agreements, or other criteria.

Acceptance testing: Testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the customer to determine whether or not to accept the system.

Alpha Testing: Acceptance testing performed by the customer in a controlled environment at the developer's site. The software is used by the customer in a setting approximating the target environment with the developer observing and recording errors and usage problems.

Assertion Testing: A dynamic analysis technique which inserts assertions about the relationship between program variables into the program code. The truth of the assertions is determined as the program executes.
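For example, assertion testing in Python can use the built-in assert statement to state relationships between variables that are checked as the program executes; the transfer function below is a hypothetical example.

```python
# Assertions about relationships between variables are embedded in the
# code and checked as it executes (run without python -O, which would
# strip them). transfer() is a hypothetical example.
def transfer(balance, amount):
    assert amount > 0, "amount must be positive"
    new_balance = balance - amount
    assert new_balance <= balance, "balance must not grow on a debit"
    return new_balance


print(transfer(100, 30))   # 70; both assertions hold
```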

Boundary Value: (1) A data value that corresponds to a minimum or maximum input, internal, or output value specified for a system or component. (2) A value which lies at, or just inside or just outside a specified range of valid input and output values.

Boundary Value Analysis: A selection technique in which test data are chosen to lie along "boundaries" of the input domain [or output range] classes, data structures, procedure parameters, etc. Choices often include maximum, minimum, and trivial values or parameters.
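A quick sketch of this selection technique, assuming a hypothetical input domain of 1 to 100:

```python
# Boundary value analysis for a hypothetical input domain of 1..100:
# pick values at, just inside, and just outside each boundary.
LOWER, UPPER = 1, 100
test_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]
print(test_values)   # [0, 1, 2, 99, 100, 101]
```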

Branch Coverage: A test coverage criteria which requires that for each decision point each possible branch be executed at least once.
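For instance, the following hypothetical function has one decision point, so branch coverage requires at least two test inputs, one per outcome:

```python
# One decision point, two possible branches: branch coverage needs
# both outcomes exercised at least once. classify() is hypothetical.
def classify(n):
    if n >= 0:
        return "non-negative"
    return "negative"


assert classify(5) == "non-negative"   # covers the True branch
assert classify(-1) == "negative"      # covers the False branch
```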

Bug: A fault in a program which causes the program to perform in an unintended or unanticipated manner.

Beta Testing: Acceptance testing performed by the customer in a live application of the software, at one or more end user sites, in an environment not controlled by the developer.

Boundary Value Testing: A testing technique using input values at, just below, and just above, the defined limits of an input domain; and with input values causing outputs to be at, just below, and just above, the defined limits of an output domain.

Branch Testing: Testing technique to satisfy coverage criteria which require that for each decision point, each possible branch [outcome] be executed at least once. Contrast with path testing and statement testing. See: branch coverage.

Compatibility Testing: The process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

Cause Effect Graph: A Boolean graph linking causes and effects. The graph is actually a digital-logic circuit (a combinatorial logic network) using a simpler notation than standard electronics notation.

Cause Effect Graphing: This is a Test data selection technique. The input and output domains are partitioned into classes and analysis is performed to determine which input classes cause which effect. A minimal set of inputs is chosen which will cover the entire effect set. It is a systematic method of generating test cases representing combinations of conditions.

Code Inspection: A manual [formal] testing [error detection] technique where the programmer reads source code, statement by statement, to a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

Code Review: A meeting at which software code is presented to project personnel, managers, users, customers, or other interested parties for comment or approval.

Code Walkthrough: A manual testing [error detection] technique where program [source code] logic [structure] is traced manually [mentally] by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

Coverage Analysis: Determining and assessing measures associated with the invocation of program structural elements to determine the adequacy of a test run. Coverage analysis is useful when attempting to execute each statement, branch, path, or iterative structure in a program.

Crash: The sudden and complete failure of a computer system or component.

Criticality: The degree of impact that a requirement, module, error, fault, failure, or other item has on the development or operation of a system.

Cyclomatic Complexity: The number of independent paths through a program. The cyclomatic complexity of a program is equivalent to the number of decision statements plus 1.
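A worked example of the "decision statements plus 1" rule, using a hypothetical grading function:

```python
# Worked example of the rule above. grade() is hypothetical.
def grade(score):
    if score >= 90:        # decision 1
        return "A"
    elif score >= 75:      # decision 2
        return "B"
    elif score >= 60:      # decision 3
        return "C"
    return "D"

# 3 decision statements + 1 = cyclomatic complexity 4, so there are
# 4 independent paths and at least 4 test inputs are needed, e.g.
# grade(95), grade(80), grade(65) and grade(40).
```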

Error: A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.

Error Guessing: This is a Test data selection technique. The selection criterion is to pick values that seem likely to cause errors.

Error Seeding: The process of intentionally adding known faults to those already in a computer program for the purpose of monitoring the rate of detection and removal, and estimating the number of faults remaining in the program. Contrast with mutation analysis.
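A commonly cited way to use the seeding numbers is to assume testing finds seeded and indigenous faults at roughly the same rate; under that assumption (and with made-up figures) the estimate looks like this:

```python
# Made-up figures: 10 faults seeded, 8 of them found, 40 indigenous
# faults found. If detection rates are similar for both populations:
seeded, seeded_found, indigenous_found = 10, 8, 40
estimated_total = indigenous_found * seeded / seeded_found   # 50.0
estimated_remaining = estimated_total - indigenous_found     # 10.0
print(estimated_total, estimated_remaining)
```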

Exception: An event that causes suspension of normal program execution. Types include addressing exception, data exception, operation exception, overflow exception, protection exception, and underflow exception.

Exhaustive Testing: Executing the program with all possible combinations of values for program variables. This type of testing is feasible only for small, simple programs.

Failure: The inability of a system or component to perform its required functions within specified performance requirements.

Fault: An incorrect step, process, or data definition in a computer program which causes the program to perform in an unintended or unanticipated manner.

Functional Testing: (1) Testing that ignores the internal mechanism or structure of a system or component and focuses on the outputs generated in response to selected inputs and execution conditions. (2) Testing conducted to evaluate the compliance of a system or component with specified functional requirements and corresponding predicted results.

Integration Testing: An orderly progression of testing in which software elements, hardware elements, or both are combined and tested, to evaluate their interactions, until the entire system has been integrated.

Interface Testing: Testing conducted to evaluate whether systems or components pass data and control correctly to one another.

Mutation Testing: A testing methodology in which two or more program mutations are executed using the same test cases to evaluate the ability of the test cases to detect differences in the mutations.

Operational Testing: Testing conducted to evaluate a system or component in its operational environment.

Parallel Testing: Testing a new or an altered data processing system with the same source data that is used in another system. The other system is considered as the standard of comparison.

Path Testing: Testing to satisfy coverage criteria that each logical path through the program be tested. Often paths through the program are grouped into a finite set of classes. One path from each class is then tested.

Performance Testing: Functional testing conducted to evaluate the compliance of a system or component with specified performance requirements.

Qualification Testing: Formal testing, usually conducted by the developer for the consumer, to demonstrate that the software meets its specified requirements.

Quality Assurance: (1) The planned systematic activities necessary to ensure that a component, module, or system conforms to established technical requirements. (2) All actions that are taken to ensure that a development organization delivers products that meet performance requirements and adhere to standards and procedures. (3) The policy, procedures, and systematic actions established in an enterprise for the purpose of providing and maintaining some degree of confidence in data integrity and accuracy throughout the life cycle of the data, which includes input, update, manipulation, and output. (4) The actions, planned and performed, to provide confidence that all systems and components that influence the quality of the product are working as expected individually and collectively.

Quality Control: The operational techniques and procedures used to achieve quality requirements.

Regression Testing: Rerunning test cases which a program has previously executed correctly in order to detect errors spawned by changes or corrections made during software development and maintenance.

Review: A process or meeting during which a work product or set of work products, is presented to project personnel, managers, users, customers, or other interested parties for comment or approval. Types include code review, design review, formal qualification review, requirements review, test readiness review.

Risk: A measure of the probability and severity of undesired effects.

Risk Assessment: A comprehensive evaluation of the risk and its associated impact.

Software Review: An evaluation of software elements to ascertain discrepancies from planned results and to recommend improvement. This evaluation follows a formal process. Syn: software audit. See: code audit, code inspection, code review, code walkthrough, design review, specification analysis, static analysis

Static Analysis: Analysis of a program that is performed without executing the program. The process of evaluating a system or component based on its form, structure, content, or documentation is also called static analysis.

Statement Testing: Testing to satisfy the criterion that each statement in a program be executed at least once during program testing.

Storage Testing: This is a determination of whether or not certain processing conditions use more storage [memory] than estimated.

Stress Testing: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.

Structural Testing: (1) Testing that takes into account the internal mechanism [structure] of a system or component. Types include branch testing, path testing, statement testing. (2) Testing to ensure each program statement is made to execute during testing and that each program statement performs its intended function.

System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements. Such testing may be conducted in both the development environment and the target environment.

Test: An activity in which a system or component is executed under specified conditions, the results are observed or recorded and an evaluation is made of some aspect of the system or component.

Testability: The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

Testcase: Documentation specifying inputs, predicted results, and a set of execution conditions for a test item.

Testcase Generator: A software tool that accepts as input source code, test criteria, specifications, or data structure definitions; uses these inputs to generate test input data; and, sometimes, determines expected results.

Test Design: Documentation specifying the details of the test approach for a software feature or combination of software features and identifying the associated tests.

Test Documentation: Documentation describing plans for, or results of, the testing of a system or component. Types include test case specification, test incident report, test log, test plan, test procedure, test report.

Test Driver: A software module used to invoke a module under test and, often, provide test inputs, control and monitor execution, and report test results.
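A minimal sketch of a test driver in Python; the add function stands in for whatever module is under test, and the case table is hypothetical:

```python
# Minimal test driver: it invokes the unit under test, supplies test
# inputs, and reports results. add() and the case table are hypothetical.
def add(a, b):
    return a + b                       # the module under test


def driver():
    cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]   # (inputs, expected)
    for args, expected in cases:
        actual = add(*args)
        status = "PASS" if actual == expected else "FAIL"
        print(f"add{args} -> {actual} (expected {expected}): {status}")


driver()
```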

Test Incident Report: A document reporting on any event that occurs during testing that requires further investigation.

Test Item: A software item which is the object of testing.

Test Log: A chronological record of all relevant details about the execution of a test.

Test Phase: The period of time in the software life cycle in which the components of a software product are evaluated and integrated, and the software product is evaluated to determine whether or not requirements have been satisfied.

Test Plan: Documentation specifying the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, responsibilities, required resources, and any risks requiring contingency planning. See: test design, validation protocol.

Test Procedure: A formal document developed from a test plan that presents detailed instructions for the setup, operation, and evaluation of the results for each defined test.

Test Report: A document describing the conduct and results of the testing carried out for a system or system component.

Test Result Analyzer: A software tool used to test output data reduction, formatting, and printing.

Testing: (1) The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component. (2) The process of analyzing a software item to detect the differences between existing and required conditions, i.e. bugs, and to evaluate the features of the software items.

Traceability Matrix: A matrix that records the relationship between two or more products; e.g., a matrix that records the relationship between the requirements and the design of a given software component. See: traceability, traceability analysis.
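In its simplest form a traceability matrix is just a mapping between requirement and test identifiers; the IDs below are hypothetical:

```python
# Hypothetical requirement-to-test mapping; reversing it gives the
# test-to-requirement direction mentioned in the definition.
trace = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],   # an empty list exposes an untested requirement
}
untested = [req for req, tests in trace.items() if not tests]
print(untested)   # ['REQ-003']
```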

Unit Testing: Testing of a module for typographic, syntactic, and logical errors, for correct implementation of its design, and for satisfaction of its requirements (or) Testing conducted to verify the implementation of the design for one software element; e.g., a unit or module; or a collection of software elements.

Usability: The ease with which a user can learn to operate, prepare inputs for, and interpret outputs of a system or component.

Usability Testing: Tests designed to evaluate the machine/user interface.

Validation: Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its predetermined specifications and quality attributes.

Validation, Verification and Testing: Used as an entity to define a procedure of review, analysis, and testing throughout the software life cycle to discover errors, determine functionality, and ensure the production of quality software.

Volume Testing: Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.