7 Worst Practices of Test Automation Using Selenium

If you want to be progressive, do not start with best practices. Instead, try identifying the worst practices in the industry and take small steps toward improving on them.

Below are some of the worst practices in test automation using Selenium:


1. Automating CAPTCHAs

CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans Apart. CAPTCHAs are designed to prevent automation, so they should not be automated. Instead, they can be bypassed using one of two strategies:

The first idea is to disable CAPTCHAs in your test environment.
The second idea is to add a hook to allow tests to bypass the CAPTCHA.
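The second strategy can be sketched as a server-side hook: in the test environment, a known magic token is accepted in place of a real CAPTCHA answer. This is a minimal illustration, not a real CAPTCHA provider's API; the environment variable, token, and `verify_with_provider` helper are all assumptions.

```python
import os

# Hypothetical bypass hook: TEST_CAPTCHA_TOKEN and verify_with_provider
# are illustrative names, not part of any real CAPTCHA service.
TEST_CAPTCHA_TOKEN = "test-bypass-token"

def captcha_is_valid(submitted_token: str) -> bool:
    """Accept a magic token in the test environment; otherwise verify normally."""
    if os.environ.get("APP_ENV") == "test" and submitted_token == TEST_CAPTCHA_TOKEN:
        return True  # bypass hook: only active in the test environment
    return verify_with_provider(submitted_token)

def verify_with_provider(token: str) -> bool:
    # Placeholder for a real call to the CAPTCHA provider's verification API.
    return False

# A Selenium test would simply type TEST_CAPTCHA_TOKEN into the CAPTCHA field.
os.environ["APP_ENV"] = "test"
print(captcha_is_valid(TEST_CAPTCHA_TOKEN))  # True in the test environment
```

The key point is that the hook lives behind an environment check, so production traffic is never affected.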


2. Downloading files with WebDriver

A download is usually initiated by clicking a download link, which can be automated using Selenium. But the WebDriver API does not expose download progress, so such a test cannot verify that the download actually works. Moreover, downloading a file is not an essential aspect of testing user interaction with a web platform. What can be done instead is to find the link using Selenium and pass it to an HTTP request library.
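A minimal sketch of that approach, using only the standard library: the Selenium part (locating the link and reading its `href`) is shown in comments, and the demonstration uses a `data:` URL so the example is self-contained.

```python
from urllib.request import urlopen

def download(url: str) -> bytes:
    """Fetch a file directly over HTTP instead of clicking the link in the browser."""
    with urlopen(url) as resp:
        return resp.read()

# In a real test the URL would come from Selenium, e.g. (assumed locator):
#   href = driver.find_element(By.LINK_TEXT, "Download").get_attribute("href")
#   data = download(href)

# Demonstration with a data: URL, which urlopen also handles:
data = download("data:text/plain,hello")
print(data)  # b'hello'
```

The download itself then happens outside the browser, where its success and content can be asserted directly.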


3. Checking HTTP response codes

Generally, Selenium RC acts as a proxy between the browser and the site being automated, which means all the traffic passing through Selenium can be inspected and manipulated. Its captureNetworkTraffic() method captures all the network traffic passing through Selenium, including HTTP response codes.

Selenium WebDriver, on the other hand, takes a completely different approach to browser automation: it acts more like a user, which is reflected in the way a tester writes test cases with WebDriver. Checking the status code is not a crucial part of automated functional testing.

Consider a 404 or 500 error page. A simple way to fail the test when one of these error pages is encountered is to check the title or the content (e.g. a <header> tag) after every page load. If a tester is using the page object model, this check can be included at the point where the page load is expected (e.g. in the class constructor).
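The constructor check can be sketched like this; the page class, error titles, and the fake driver stand-in are illustrative assumptions, not a real page or WebDriver instance.

```python
class ProfilePage:
    """Minimal page-object sketch (hypothetical page): fail fast on error pages."""
    ERROR_TITLES = ("404 Not Found", "500 Internal Server Error")

    def __init__(self, driver):
        self.driver = driver
        # The check lives where the page load is expected: the constructor.
        if driver.title in self.ERROR_TITLES:
            raise RuntimeError(f"Landed on an error page: {driver.title!r}")

# Stand-in for a WebDriver instance, to keep the sketch self-contained:
class FakeDriver:
    def __init__(self, title):
        self.title = title

page = ProfilePage(FakeDriver("My Profile"))   # constructs fine
try:
    ProfilePage(FakeDriver("404 Not Found"))   # raises RuntimeError
except RuntimeError as e:
    print(e)
```

Because every interaction goes through a page object, the check runs automatically at each expected page load.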

Checking the web page itself, rather than the response code, is in keeping with WebDriver's ideal practice.


4. Logging in to third-party sites with WebDriver

WebDriver is not the ideal choice for automating login to websites like Gmail and Facebook, firstly because it is against their policies and secondly because it is slow and unreliable.

The most appropriate choice is to use the APIs that these websites provide. Using an API might sound like extra work, but with the speed, reliability, and stability it offers, it is worth it. Moreover, an API is less subject to change than a web page.

Using WebDriver to log in to third-party websites is not recommended, as it makes tests longer and thus less reliable, and hence more likely to fail.
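One common pattern is to log in through an API call and then hand the resulting session cookie to WebDriver instead of driving the login form. The endpoint, cookie name, and domain below are assumptions for illustration; only the shape of the cookie dict passed to WebDriver's add_cookie() is real.

```python
# Hypothetical sketch: obtain a session via an API, then inject it into the browser.

def session_cookie_for_driver(name: str, value: str, domain: str) -> dict:
    """Build the cookie dict expected by WebDriver's add_cookie()."""
    return {"name": name, "value": value, "domain": domain, "path": "/"}

# In a real test (assumed API endpoint and cookie name):
#   resp = requests.post("https://example.com/api/login", json=credentials)
#   token = resp.cookies["session"]
#   driver.get("https://example.com")            # must visit the domain first
#   driver.add_cookie(session_cookie_for_driver("session", token, "example.com"))
#   driver.get("https://example.com/dashboard")  # already logged in

cookie = session_cookie_for_driver("session", "abc123", "example.com")
print(cookie["name"], cookie["domain"])
```

The browser never touches the login form, so the test is both faster and immune to login-page changes.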


5. Depending on a specific test order

Testers often assume that tests should run in a specific order, but in doing so they do not realize that they are making the tests dependent on one another. Tests should be able to run in any order.


6. Performance testing with WebDriver

Again, Selenium WebDriver is not an ideal choice for performance testing: not because it cannot do it, but because it is not designed for the job, so a tester is unlikely to get good results.

Yes, it may seem ideal to do performance testing from the user's perspective, but WebDriver is vulnerable to many external factors that are beyond the tester's control. For instance, consider the following:

  1. browser startup speed
  2. the speed of the HTTP server's response
  3. the response of third-party servers that host JavaScript or CSS

Variation at any of these points will affect the results. It is difficult to separate the performance of your website from the performance of external resources, and it is also hard to tell what the performance penalty is for using WebDriver in the browser, especially if you are injecting scripts.

Another argument is the time saved by performing performance and functional tests at the same time. But they have completely opposite objectives, so one may cloud the other: for example, to test loading functionality the test may need to wait, which distorts the performance measurement, and vice versa.

To improve the performance of a website, its overall performance must be analyzed independently of environment differences. Ready-made tools are available that can do this job more efficiently and provide reports as well.


7. Spidering links with WebDriver

No, Selenium is not meant to spider through links. The main reason is that WebDriver needs time to start up and can take anywhere from a few seconds to minutes per page just to fetch the pages and traverse the DOM.

One could use the curl command or a library like BeautifulSoup instead of WebDriver and save a lot of time.
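The same idea can be sketched with nothing but the standard library: fetch pages with urllib and extract links with html.parser (BeautifulSoup would do the same job with less code). The sample HTML string below is an assumption standing in for a fetched page.

```python
# Sketch of spidering without WebDriver: no browser startup, no DOM traversal.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag in a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A real spider would fetch each page first, e.g.:
#   html = urllib.request.urlopen(url).read().decode()
html = '<a href="/about">About</a> <a href="/contact">Contact</a>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # ['/about', '/contact']
```

Each page costs one HTTP request instead of a full browser page load, which is what makes this approach so much faster for crawling.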