
Thursday, March 14, 2013

Different Types of Testing



What is Software Testing?

Software testing is the process of evaluating a software item to detect differences between the expected and actual output for a given input, and to assess the features and quality of the item. Testing assesses the quality of the product and should be carried out during the development process. In other words, software testing is a verification and validation process.

Verification

Verification is the process of making sure the product satisfies the conditions imposed at the start of the development phase. In other words, it makes sure the product behaves the way we want it to.

Validation

Validation is the process of making sure the product satisfies the specified requirements at the end of the development phase. In other words, it makes sure the product is built as per customer requirements.

Basics of software testing

There are two basic approaches to software testing: blackbox testing and whitebox testing.

Blackbox Testing

Black box testing is a testing technique that ignores the internal mechanism of the system and focuses on the output generated for any given input and execution of the system. It is also called functional testing.

Whitebox Testing

White box testing is a testing technique that takes the internal mechanism of a system into account. It is also called structural testing or glass box testing.

Black box testing is often used for validation, and white box testing is often used for verification.
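To make the distinction concrete, here is a minimal black-box style unit test in Python for a hypothetical is_leap_year function (both the function and the tests are illustrative, not taken from any particular product). The tests care only about inputs and expected outputs; a white-box test of the same function would instead be designed from its internal branches.

```python
import unittest

def is_leap_year(year: int) -> bool:
    """Hypothetical unit under test."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class BlackBoxLeapYearTest(unittest.TestCase):
    """Black-box tests: chosen from the specification alone, with no
    knowledge of how is_leap_year is implemented internally."""

    def test_typical_leap_year(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_400_year_century_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```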
Performance testing

a. Performance testing is designed to test the run-time performance of software within the context of an integrated system. It is not until all system elements are fully integrated and certified as free of defects that the true performance of a system can be ascertained.
b. Performance tests are often coupled with stress testing and often require both hardware and software instrumentation; that is, it is necessary to measure resource utilization in an exacting fashion. External instrumentation can monitor execution intervals and log events. By instrumenting the system, the tester can uncover situations that lead to degradation and possible system failure.
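As a small illustration of measuring run-time performance, the sketch below times a hypothetical operation against an assumed 2-second budget; the operation, data size, and threshold are placeholders to be replaced with your own.

```python
import time

def best_time(fn, *args, repeats=5):
    """Run fn several times and return the best wall-clock time in seconds;
    taking the minimum reduces noise from other processes."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

# Hypothetical operation under test: sorting one million integers.
data = list(range(1_000_000, 0, -1))
elapsed = best_time(sorted, data)
assert elapsed < 2.0, f"performance budget exceeded: {elapsed:.3f}s"
print(f"sorted 1M items in {elapsed:.3f}s")
```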

Security testing

If your site requires firewalls, encryption, user authentication, financial transactions, or access to databases with sensitive data, you may need to test these features, and also test your site’s overall protection against unauthorized internal or external access.
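As one minimal sketch of such a check, the test below assumes a hypothetical protected URL and verifies that an anonymous request is refused (HTTP 401 or 403) rather than served:

```python
import unittest
import urllib.error
import urllib.request

PROTECTED_URL = "https://example.com/admin/users"  # hypothetical endpoint

class UnauthorizedAccessTest(unittest.TestCase):
    def test_protected_page_rejects_anonymous_request(self):
        # A request carrying no credentials should be blocked, not served.
        try:
            response = urllib.request.urlopen(PROTECTED_URL, timeout=10)
            status = response.status
        except urllib.error.HTTPError as err:
            status = err.code
        self.assertIn(status, (401, 403), "anonymous access was not blocked")

if __name__ == "__main__":
    unittest.main()
```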

Exploratory Testing

Often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

Benefits Realization tests

With the increased focus on the value of business returns obtained from investments in information technology, this type of test or analysis is becoming more critical. The benefits realization test is a test or analysis conducted after an application is moved into production, in order to determine whether it is likely to deliver the originally projected benefits. The analysis is usually conducted by the business user or client group who requested the project, and the results are reported back to executive management.

Mutation Testing

Mutation testing is a method for determining whether a set of test data or test cases is useful, by deliberately introducing various code changes (‘bugs’) and retesting with the original test data/cases to determine whether the ‘bugs’ are detected. Proper implementation requires large computational resources.
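Here is a toy illustration of the idea with a single hand-seeded mutant; real mutation testing relies on tooling that generates many mutants automatically, which is where the heavy computational cost comes from.

```python
def discount(price, rate):
    """Price after applying a fractional discount rate."""
    return price * (1 - rate)

def discount_mutant(price, rate):
    return price * (1 + rate)   # seeded "bug": "-" mutated to "+"

def run_suite(fn):
    """Return True if fn passes every case in the test suite."""
    cases = [((100.0, 0.1), 90.0), ((200.0, 0.0), 200.0), ((50.0, 0.5), 25.0)]
    return all(abs(fn(*args) - expected) < 1e-9 for args, expected in cases)

assert run_suite(discount)             # the original passes
assert not run_suite(discount_mutant)  # the suite "kills" the mutant
print("the test suite detected the seeded mutation")
```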

Sanity testing

Typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging systems down to a crawl, or destroying databases, the software may not be in a ‘sane’ enough condition to warrant further testing in its current state.
Build Acceptance Tests 

Build Acceptance Tests should take less than 2-3 hours to complete (15 minutes is typical). These test cases simply ensure that the application can be built and installed successfully. Other related test cases ensure that the testing team received the proper Development Release Document plus other build-related information (drop point, etc.). The objective is to determine whether further testing is possible. If any Level 1 test case fails, the build is returned to developers untested.

Smoke Tests 

Smoke Tests should be automated and take less than 2-3 hours (20 minutes is typical). These test cases verify the major functionality at a high level. The objective is to determine whether further testing is possible. These test cases should emphasize breadth more than depth: all components should be touched, and every major feature should be tested briefly by the Smoke Test. If any Level 2 test case fails, the build is returned to developers untested.
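A sketch of what such a breadth-first suite might look like with Python’s unittest; the Application class below is a stand-in for the real system, whose modules you would import instead.

```python
import unittest

class Application:
    """Stand-in for the application under test (hypothetical)."""
    def start(self):
        self.running = True
    def is_running(self):
        return getattr(self, "running", False)
    def open_screen(self, name):
        return {"name": name}   # pretend the screen loaded
    def ping_database(self):
        return True             # pretend the database answered

class SmokeTest(unittest.TestCase):
    """Breadth over depth: touch every major component once, briefly."""
    def setUp(self):
        self.app = Application()
        self.app.start()
    def test_application_starts(self):
        self.assertTrue(self.app.is_running())
    def test_major_screens_load(self):
        for screen in ("login", "dashboard", "reports"):
            self.assertIsNotNone(self.app.open_screen(screen))
    def test_database_is_reachable(self):
        self.assertTrue(self.app.ping_database())

if __name__ == "__main__":
    unittest.main()
```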

Bug Regression Testing 

Every bug that was “Open” during the previous build, but marked as “Fixed, Needs Re-Testing” for the current build under test, will need to be regressed, or re-tested. Once the smoke test is completed, all resolved bugs need to be regressed. It should take between 5 minutes and 1 hour to regress most bugs.

Database Testing

Database testing is done manually in real time; it checks the data flow between the front end and the back end, observing whether operations performed on the front end take effect on the back end.
The approach is as follows:
While adding a record on the front end, check the back end to see whether the addition took effect; do the same for delete and update. Other database testing includes checking mandatory fields, checking the constraints and rules applied to tables, and sometimes checking stored procedures using a SQL query analyzer.
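A minimal sketch of the approach, using an in-memory SQLite database as a stand-in for the real back end: add a record through a front-end-style function, verify it at the back end with a direct query, then confirm that a NOT NULL constraint is enforced.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the real back end
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

def add_user(name):
    """Stand-in for the front-end 'add record' operation."""
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

# Add through the 'front end', then verify directly at the back end.
add_user("alice")
row = conn.execute("SELECT name FROM users WHERE name = ?", ("alice",)).fetchone()
assert row is not None, "record added on the front end did not reach the back end"

# Constraint check: inserting NULL into a NOT NULL column must be rejected.
try:
    conn.execute("INSERT INTO users (name) VALUES (NULL)")
    raise AssertionError("NOT NULL constraint was not enforced")
except sqlite3.IntegrityError:
    pass  # expected: the rule held

print("database checks passed")
```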

Functional Testing (or) Business functional testing

All the functions in the application should be tested against the requirements document to ensure that the product conforms to what was specified (that is, that it meets the functional requirements). This verifies that the crucial business functions are working in the application. Business functions are generally defined in the requirements document, and each business function has certain rules that cannot be broken, whether they apply to the user-interface behavior or to the data behind the application; both levels need to be verified. Business functions may span several windows or several menu options, so simply testing that all windows and menus can be used is not enough to verify the business functions. You must verify the business functions as discrete units of your testing, for example as follows (a small worked example appears after the checklist):
* Study the SRS
* Identify unit functions
* For each unit function:
* Take each input to the function
* Identify its equivalence classes
* Form test cases
* Form test cases for boundary values
* Form test cases for error guessing
* Form a unit function vs. test cases cross-reference matrix
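A small worked example, assuming a hypothetical unit function that validates an age field accepting 18 through 60 inclusive: one representative test case per equivalence class, plus cases at and just outside both boundaries.

```python
import unittest

def is_valid_age(age: int) -> bool:
    """Hypothetical unit function: accepts ages 18 through 60 inclusive."""
    return 18 <= age <= 60

class AgeFieldTest(unittest.TestCase):
    def test_equivalence_classes(self):
        # One representative from each class: too low, valid, too high.
        self.assertFalse(is_valid_age(10))
        self.assertTrue(is_valid_age(35))
        self.assertFalse(is_valid_age(75))

    def test_boundary_values(self):
        # Both edges and their immediate neighbours.
        self.assertFalse(is_valid_age(17))
        self.assertTrue(is_valid_age(18))
        self.assertTrue(is_valid_age(60))
        self.assertFalse(is_valid_age(61))

if __name__ == "__main__":
    unittest.main()
```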

User Interface Testing (or) structural testing 

It verifies whether all the objects of the user-interface design specification are met. It examines the spelling of button text, window titles, and labels; checks for consistency or duplication of accelerator-key letters; and examines the positions and alignments of window objects.

Volume Testing

Testing the application with a voluminous amount of data to see whether the application produces the anticipated results (boundary value analysis).

Stress Testing 

Testing the application’s response when there is a scarcity of system resources.

Load Testing 

It verifies the performance of the server under the stress of many clients requesting data at the same time.
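A minimal sketch of simulating many simultaneous clients with a thread pool; the URL and client count below are placeholders.

```python
import concurrent.futures
import time
import urllib.request

URL = "https://example.com/api/data"  # hypothetical server under test
CLIENTS = 50                          # number of simultaneous simulated clients

def one_request(_):
    """Fetch the URL once and return the observed latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as response:
        response.read()
    return time.perf_counter() - start

with concurrent.futures.ThreadPoolExecutor(max_workers=CLIENTS) as pool:
    latencies = sorted(pool.map(one_request, range(CLIENTS)))

print(f"median: {latencies[len(latencies) // 2]:.3f}s  worst: {latencies[-1]:.3f}s")
```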

Installation testing 

The tester should install the system to determine whether the installation process is viable, based on the installation guide.

Configuration Testing

The system should be tested to determine whether it works correctly with the appropriate software and hardware configurations.

Compatibility Testing 

The system should be tested to determine whether it is compatible with the other systems (applications) that it needs to interface with.

Documentation Testing

It is performed to verify the accuracy and completeness of user documentation:
1. This testing is done to verify whether the documented functionality matches the software’s functionality.
2. It verifies that the documentation is easy to follow, comprehensive, and well edited.
If the application under test has context-sensitive help, it must be verified as part of documentation testing.

Recovery/Error Testing

Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

Comparison Testing

Testing that compares a software product’s weaknesses and strengths to those of competing products.

Acceptance Testing

Acceptance testing, which is black box testing, will give the client the opportunity to verify the system’s functionality and usability prior to the system being moved to production. The acceptance test will be the responsibility of the client; however, it will be conducted with full support from the project team. The Test Team will work with the client to develop the acceptance criteria.

Alpha Testing

Testing of an application when development is nearing completion. Minor design changes may still be made as a result of such testing. Alpha testing is typically performed by end-users or others, not by programmers or testers.

Beta Testing

Testing when development and testing are essentially completed and final bugs and problems need to be found before the final release. Beta testing is typically done by end-users or others, not by programmers or testers.

Regression Testing

The objective of regression testing is to ensure the software remains intact. A baseline set of data and scripts will be maintained and executed to verify that changes introduced during the release have not “undone” any previous code. Expected results from the baseline are compared to the results of the software being regression tested. All discrepancies will be highlighted and accounted for before testing proceeds to the next level.
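A sketch of the baseline-comparison step, assuming a hypothetical baseline_results.json maintained from the previous release; run_scripts here is a stand-in for executing the maintained regression scripts.

```python
import json
import pathlib

BASELINE = pathlib.Path("baseline_results.json")  # hypothetical baseline file

def run_scripts():
    """Stand-in for executing the regression scripts; returns a mapping
    of test-case id to observed result."""
    return {"TC-001": "42 rows", "TC-002": "OK", "TC-003": "OK"}

actual = run_scripts()
expected = json.loads(BASELINE.read_text())

# Highlight every discrepancy before testing proceeds to the next level.
discrepancies = {tc: (expected.get(tc), got)
                 for tc, got in actual.items() if expected.get(tc) != got}
if discrepancies:
    for tc, (exp, got) in sorted(discrepancies.items()):
        print(f"{tc}: expected {exp!r}, got {got!r}")
else:
    print("all results match the baseline")
```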

Incremental Integration Testing

Continuous testing of an application as new functionality is added. This may require that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed. This type of testing may be performed by programmers or by testers.
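A small illustration of a test driver exercising a finished component against a stub for a dependency that is not built yet (all names here are hypothetical).

```python
def fetch_orders_stub(customer_id):
    """Stub standing in for the unfinished data-access layer."""
    return [{"id": 1, "total": 25.0}, {"id": 2, "total": 75.0}]

def order_summary(customer_id, fetch_orders):
    """Component under test; its data source is passed in, so a stub
    can replace the real one until integration is complete."""
    orders = fetch_orders(customer_id)
    return {"count": len(orders), "grand_total": sum(o["total"] for o in orders)}

# Test driver: run the component against the stub and check the result.
summary = order_summary(42, fetch_orders_stub)
assert summary == {"count": 2, "grand_total": 100.0}
print("component works in isolation; ready for the next integration step")
```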

Usability Testing

Testing for ‘user-friendliness’. Clearly this is subjective and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

Integration Testing

Upon completion of unit testing, integration testing, which is black box testing, will begin. The purpose is to ensure that distinct components of the application still work in accordance with customer requirements. Test sets will be developed with the express purpose of exercising the interfaces between the components. This activity is to be carried out by the Test Team. Integration testing will be deemed complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input.

System Testing

Upon completion of integration testing, the Test Team will begin system testing. During system testing, which is a black box test, the complete system is configured in a controlled environment to validate its accuracy and completeness in performing the functions as designed. The system test will simulate production in that it will occur in the “production-like” test environment and test all of the functions of the system that will be required in production. The Test Team will complete the system test. Prior to the system test, the unit and integration test results will be reviewed by SQA to ensure all problems have been resolved. It is important for higher level testing efforts to understand unresolved problems from the lower testing levels. System testing is deemed complete when actual results and expected results are either in line or differences are explainable/acceptable based on client input

Parallel/Audit Testing

Testing where the user reconciles the output of the new system to the output of the current system, to verify that the new system performs the operations correctly.
