There are different types of software testing approaches, but they all share two common goals: to ensure that the final software meets the agreed-upon criteria and is as free from bugs as possible, however idealistic complete freedom from bugs may be. Our alphabetically sorted software testing dictionary lists important testing methods, giving you a concise overview of software testing.
Acceptance Testing – Considers the product from the point of view of the customer’s needs. It is usually conducted by the end users of the solution to assess its viability.
Accessibility Testing – Seeks to confirm whether the product is usable by people with disabilities. The objective is to check that the system can cater to people with different abilities.
Ad-Hoc Testing – An informal test that is usually run only once, unless a defect is discovered. It is usually done without any formally documented test script. It is also known as random testing or monkey testing.
Agile Testing – Originating from the agile software development principles, agile testing involves tight collaboration between customers and developers.
All-Pairs Testing – This is also known as pair-wise testing and involves testing the two-way interactions between parameters. Variables are assigned different values and these values are paired together to determine what their outcomes will be.
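As a rough sketch of the idea, the set of two-way pairs that a pairwise test suite must cover can be enumerated directly. The parameters and values below are purely illustrative:

```python
from itertools import combinations, product

# Hypothetical test parameters for illustration.
params = {
    "browser": ["Chrome", "Firefox"],
    "os": ["Windows", "Linux"],
    "lang": ["en", "de"],
}

def required_pairs(params):
    """Every two-way value combination that pairwise testing must cover."""
    pairs = set()
    for (name_a, vals_a), (name_b, vals_b) in combinations(params.items(), 2):
        for va, vb in product(vals_a, vals_b):
            pairs.add(((name_a, va), (name_b, vb)))
    return pairs

def covers_all_pairs(test_cases, params):
    """Check whether a set of test cases exercises every required pair."""
    covered = set()
    names = list(params)
    for case in test_cases:
        for a, b in combinations(names, 2):
            covered.add(((a, case[a]), (b, case[b])))
    return required_pairs(params) <= covered
```

The payoff of pairwise testing is suite size: for these three two-valued parameters, the exhaustive product needs 8 test cases, but a well-chosen set of 4 already covers all 12 two-way pairs.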
Alpha Testing – The final testing at the developer’s site by potential users or the internal team before the product is released to the public.
API Testing – Testing of application programming interfaces, which facilitate communication and exchange of messages between two systems, as part of integration tests.
Automated Testing – This involves the use of specialized software programs to automatically conduct various tests.
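A minimal sketch of test automation using Python’s standard `unittest` module: a small suite is loaded and executed by a test runner with no manual intervention. The function under test is invented for the example:

```python
import unittest

def slugify(title):
    """Illustrative function under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_repeated_spaces(self):
        self.assertEqual(slugify("a    b"), "a-b")

# The runner discovers and executes the whole suite automatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In practice such suites are typically triggered by a build server on every change, which is what makes the testing "automated" rather than a one-off manual run.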
Backward Compatibility Testing – Done to verify that a new version of the product is still compatible with the older version. For example, checking that installing a new version of the software does not conflict with the previous version but follows a seamless upgrade path.
Beta Testing – Following alpha testing, a sample group of users is invited to test the product in real-world conditions.
Big Bang Integration Testing – Individually developed modules are integrated and tested as a whole.
Black Box Testing – Testers are not aware of the internal structure of the product they are testing and only focus on its functionality.
Bottom-Up and Top-Down Integration Testing – In top-down integration testing, the main modules are tested first, followed by the sub-modules; bottom-up integration testing reverses this, testing the modules at the lower level before the main module.
Branch Testing – Where there are decision points in the code, the goal is to ensure that each branch from every decision point is executed and thoroughly tested.
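A minimal sketch of what branch testing asks for, using a made-up function with a single decision point: the test suite must take each branch at least once.

```python
def shipping_cost(total):
    """Illustrative function with one decision point and two branches."""
    if total >= 50:
        return 0.0   # free-shipping branch
    else:
        return 5.0   # paid-shipping branch

# Branch testing requires inputs that execute both branches:
free = shipping_cost(75)    # takes the free-shipping branch
paid = shipping_cost(20)    # takes the paid-shipping branch
```

Tools such as coverage.py can measure this automatically when run with branch coverage enabled, reporting any decision outcome the tests never exercised.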
Browser Compatibility Testing – Tests whether the product works with all major web browsers or, at least, the targeted browser.
Comparison Testing – Compares actual results with expected results to highlight differences. It can also be used to understand the strengths and weaknesses of the software when its features are compared with that of a competitor.
Compatibility Testing – Evaluates whether the software is compatible with the target environment, e.g. is the software compatible with the relevant hardware, operating system or browser?
Component Testing – Also known as module testing; involves the individual testing of each component in an application.
Condition Coverage Testing – Involves testing the outcome of each condition (e.g. evaluating the true or false paths and the eventual outcome) in a piece of software.
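A small sketch of the difference this makes for a compound condition (the function is invented for the example): branch coverage is satisfied by one true and one false overall outcome, but condition coverage additionally requires each sub-condition to take both truth values.

```python
def can_checkout(cart_nonempty, payment_ok):
    """Illustrative compound condition."""
    # Note: Python short-circuits, so when cart_nonempty is False,
    # payment_ok is never evaluated at all.
    return cart_nonempty and payment_ok

# Inputs chosen so that each sub-condition is evaluated both True and
# False, not merely the overall outcome:
condition_cases = [
    (True, True, True),     # both conditions True
    (True, False, False),   # payment_ok evaluated False
    (False, True, False),   # cart_nonempty evaluated False
]
```

Two cases, `(True, True)` and `(True, False)`, would already give full branch coverage here; the third case is what condition coverage adds.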
Dynamic Testing – Tests how the code behaves when executed, for example, how it affects CPU and memory usage.
End-To-End Testing – Tests the flow of the software program from start to finish.
Equivalence Partitioning – The purpose of this type of test is to divide the input data into sets that can be considered the same, that is, the system will handle these sets the same way. Thus, only one condition is tested from each partition.
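A minimal sketch with a hypothetical pricing rule: the input domain falls into three partitions that the system handles the same way internally, so one representative value per partition suffices.

```python
def ticket_price(age):
    """Illustrative rule with three equivalence partitions."""
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 13:
        return 5    # child partition: 0..12
    if age < 65:
        return 10   # adult partition: 13..64
    return 7        # senior partition: 65 and over

# One representative test value from each partition:
child, adult, senior = ticket_price(8), ticket_price(40), ticket_price(70)
```

Testing age 8 is assumed to stand in for every child age; testing 9, 10 and 11 as well would add effort without adding information.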
Exploratory Testing – Places emphasis on the personal freedom of individual testers. Test design and execution are done at the same time by the user. The quality of the test thus depends on the tester’s ability to create test cases and find defects on their own.
Functional Testing – The objective of this test is to find out whether the product performs in accordance with specifications and requirements.
Fuzz Testing – Tests how the software handles invalid or random data. The test is conducted by loading the system with a massive amount of data in an attempt to make it crash, to see how well the system handles this. It can expose coding and security errors.
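A toy sketch of the idea: a hypothetical parser is bombarded with random strings, and anything other than a clean, expected rejection is recorded as a potential bug.

```python
import random
import string

def parse_version(text):
    """Illustrative function under test: parse 'major.minor' into ints."""
    major, minor = text.split(".")
    return int(major), int(minor)

def fuzz(parser, runs=1000, seed=0):
    """Feed random strings to the parser and collect unexpected crashes."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(runs):
        data = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 10))
        )
        try:
            parser(data)
        except ValueError:
            pass  # a clean rejection of malformed input is acceptable
        except Exception as exc:
            crashes.append((data, exc))  # anything else is a defect
    return crashes
```

Real fuzzers such as AFL or libFuzzer are far more sophisticated, mutating inputs based on code coverage, but the principle is the same: hostile random input, watched for crashes.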
Glass Box Testing (White Box Testing) – Focuses on testing internal code structures, instead of the program’s functionality.
Gorilla Testing – This refers to in-depth testing of a single module with both valid and invalid inputs to determine how much stress the system can take. It can be done by a team of testers and developers.
GUI (Graphical User Interface) Testing – Tests all aspects of the program’s graphical user interface.
Happy Path Testing – Tests only known, valid inputs that produce the expected output, ignoring error conditions. Exceptions are not tested here because the focus is on assessing whether the required functionality is in place.
Incremental Integration Testing – Each module is tested upon its integration, unlike the big bang integration testing.
Install/Uninstall Testing – Verifies the functionality of the install/uninstall process and confirms whether the software can be completely installed on, and removed from, the system.
Integration Testing – Tests how individual modules function after they are combined. Individually tested units of code are assembled and tested as a whole. The components of the system should be able to work together and this is validated at this stage.
Interface Testing – A type of integration test that focuses on the interface between components.
Internationalization Testing – Tests whether the product can function in different environments, that is, across multiple regions, languages and cultures.
Keyword-Driven Testing – This is a table-driven test using keywords corresponding to individual testing actions. Keywords are used for all the functions that are to be executed.
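A minimal sketch of a keyword-driven runner: test steps are rows of (keyword, arguments), and a dispatch table maps each keyword to a concrete action. All names here are invented for the example:

```python
def make_runner():
    """Build a tiny keyword-driven test runner over a shopping cart."""
    state = {"cart": []}

    def add_item(name):
        state["cart"].append(name)

    def clear_cart():
        state["cart"].clear()

    def assert_count(expected):
        assert len(state["cart"]) == expected

    # The keyword table: test authors write rows, not code.
    keywords = {
        "add_item": add_item,
        "clear_cart": clear_cart,
        "assert_count": assert_count,
    }

    def run(table):
        for keyword, *args in table:
            keywords[keyword](*args)
        return state

    return run

run = make_runner()
final = run([
    ("add_item", "book"),
    ("add_item", "pen"),
    ("assert_count", 2),
    ("clear_cart",),
    ("assert_count", 0),
])
```

The appeal is that non-programmers can author the tables, while the keyword implementations are maintained separately; frameworks such as Robot Framework build on this pattern.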
Load Testing – Testing a software system under load. That is, it is done to confirm if the system can handle a pre-defined amount of load.
Localization Testing – Tests the quality and functionality of a program’s localization, that is, how the system behaves when local settings are applied to customize it for a particular location and target market.
Negative Testing – Tests whether the software application can elegantly handle errors, invalid input, unexpected user actions and exceptions.
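A minimal sketch with a hypothetical function: the negative tests deliberately feed it invalid input and assert that it fails with a clear, expected error rather than an obscure one.

```python
def withdraw(balance, amount):
    """Illustrative function under test: must reject invalid amounts."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

def raises_value_error(fn, *args):
    """Helper: True if the call fails with the expected ValueError."""
    try:
        fn(*args)
    except ValueError:
        return True
    return False

# Negative tests exercise the invalid cases on purpose:
rejects_negative = raises_value_error(withdraw, 100, -5)
rejects_overdraft = raises_value_error(withdraw, 100, 200)
```

The happy-path test (`withdraw(100, 30)`) proves the feature works; the negative tests prove it fails safely, which is the part this technique is about.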
Non-Functional Testing – Tests aspects of the software application which are not related to its functionality, e.g. performance tests.
Pair Testing – The same feature is tested by two people at the same time and at the same place. They interact with each other and this can lead to more comprehensive and exhaustive tests. One team member can provide the input while the other analyzes the results.
Penetration Testing – This type of test attempts to find and exploit security vulnerabilities.
Performance Testing – This involves testing how the software application performs under different loads.
Recovery Testing – Involves testing if and how an application can recover after a crash.
Regression Testing – This type of test is done to check that a new change has not broken previously developed parts of the software program that were working before. It’s about re-testing what has already been tested to make sure it still works.
Risk-Based Testing – In this approach, tests are done based on risk of failure, the likelihood and impact of the failure.
Scalability Testing – Verifies that the application can scale well with increasing user workload. For example, it can be done to assess how the system responds to an increase in user traffic, data and the number of transactions. It is often done as part of performance testing.
Smoke Testing – This test is done to ensure that the most important features of the software program work and is usually not exhaustive. The purpose is to check if the application is ready for further tests. For example, an application that does not launch successfully cannot be considered ready for exhaustive tests.
Soak Testing – Tests the stability and performance of a system over an extended period of time. It is a type of endurance test done to understand the limits of the system over time. It may involve maintaining a certain level of user input for a long period, thereby revealing issues with memory allocation and database resource utilization, to mention a few.
Static Testing – The software is tested without executing the code. For example, programmers may manually check the application code without running it with the objective of identifying defects. It can also be done by finding/eliminating errors in software design documents and test cases.
System Testing – Tests whether a completely integrated system complies with specifications. The objective of this phase is to find out if the product will meet users' requirements before turning it over to them to validate.
Unit Testing – Involves a deep scrutiny of individual portions or units (the smallest testable parts) of an application. The purpose of unit testing is to find problems in the smallest components of the system before testing the system as a whole.
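A minimal sketch of a unit under test: a pure function with no dependencies, exercised in isolation. The function and its rule are invented for the example:

```python
def apply_discount(price, rate):
    """Illustrative smallest testable unit: a pure pricing function."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

# Unit tests probe this one function directly, before it is ever
# combined with carts, databases or user interfaces:
discounted = apply_discount(100, 0.2)
unchanged = apply_discount(50, 0)
```

Because the unit has no external dependencies, a failure here points at exactly one place, which is what makes finding problems at this level so much cheaper than during system testing.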
Usability Testing – Determines how easy it is to use the application from the point of view of end users.
User Acceptance Testing (UAT) – Users test how the application handles real-world scenarios and use cases. It is at the UAT stage that the user is invited to test and accept the system.
Volume Testing – Tests how the program handles large quantities of data. High-volume transactions are executed to ensure that the software will handle future growth in volume.
What other tests should be on this list?