The decision to stop testing can be made empirically, using the data obtained from the metrics suggested above. A tester can decide to stop testing:

- Based on the ratio of passed to failed test cases – there are three ways to interpret this:
  - Stop when all test cases have passed;
  - Stop when the minimum proportion of test cases that need to pass is reached;
  - Stop when the maximum proportion of test cases allowed to fail is reached.
- When all test cases available for execution during the test run have been exhausted.
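The three pass/fail-ratio interpretations above can be sketched as a simple stopping rule. This is a minimal illustration; the threshold values (`min_pass_ratio`, `max_fail_ratio`) are assumptions that would come from your own test plan, not from any standard.

```python
def should_stop(passed: int, failed: int,
                min_pass_ratio: float = 0.95,
                max_fail_ratio: float = 0.02) -> bool:
    """Return True if any of the three pass/fail-ratio criteria is met.

    Thresholds are illustrative; set them per your test plan.
    """
    total = passed + failed
    if total == 0:
        return False  # no results yet, nothing to decide on
    if failed == 0:
        return True   # criterion 1: all test cases passed
    if passed / total >= min_pass_ratio:
        return True   # criterion 2: minimum pass proportion reached
    if failed / total <= max_fail_ratio:
        return True   # criterion 3: within the allowed failure proportion
    return False

print(should_stop(100, 0))   # True: every case passed
print(should_stop(97, 3))    # True: 97% passed, above the 95% minimum
print(should_stop(80, 20))   # False: too many failures to stop
```

In practice these numbers would be pulled from your test runner's summary report rather than passed in by hand.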
Other, more advanced metrics that can inform your decision to stop testing are:

- Mean Time Between Failures (MTBF): the average operational time recorded before a system failure;
- Defect density: the number of defects recorded relative to the size of the software (e.g. defects per thousand lines of code);
- Coverage metrics: the percentage of instructions executed during tests;
- Number and perceived severity of open bugs. The latter is usually a subjective evaluation along a scale ranging from ‘Very Low’ to ‘Very High’.
A tester can decide to stop testing when the MTBF is sufficiently long, the defect density is acceptable, code coverage is deemed adequate in accordance with the test plan, and the number and severity of open bugs are both low. The general aim is to reduce the risk of catastrophic errors appearing after the product is released.
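Combining these metrics into one release-readiness check might look like the sketch below. Every threshold here (500 hours MTBF, 0.5 defects/KLOC, 90% coverage, and so on) is a made-up example; real values belong in the test plan for your specific product.

```python
from dataclasses import dataclass

@dataclass
class TestMetrics:
    mtbf_hours: float        # mean time between failures, in hours
    defects_per_kloc: float  # defect density
    coverage_pct: float      # percentage of instructions executed by tests
    open_bugs: int           # count of unresolved bugs
    max_open_severity: int   # 1 = Very Low ... 5 = Very High

def ready_to_stop(m: TestMetrics) -> bool:
    """True when every metric clears its (illustrative) threshold."""
    return (m.mtbf_hours >= 500.0          # MTBF sufficiently long
            and m.defects_per_kloc <= 0.5  # defect density acceptable
            and m.coverage_pct >= 90.0     # coverage meets the plan
            and m.open_bugs <= 10          # few open bugs
            and m.max_open_severity <= 2)  # nothing worse than 'Low'

print(ready_to_stop(TestMetrics(800.0, 0.3, 95.0, 4, 2)))  # True
print(ready_to_stop(TestMetrics(800.0, 0.3, 95.0, 4, 4)))  # False: a 'High' bug is open
```

Note that a single failing criterion blocks the stop decision, which matches the conjunctive ("and") phrasing above; a team could equally weight the metrics and score them instead.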
You can use a platform like QAppAssure, which lets you test on-cloud and on-field devices across 100+ device makes and models, integrate with Jira and CI/CD tools, and use Appium, Calabash, Espresso, UIAutomator, and XCUITest. You can run unlimited parallel tests with the free trial pack.