Automated tests in Continuous Integration environment – Part 2

15 August 2011
Zbyszek Mockun
Part 1 of this post covered the importance of integrating automated tests into your continuous integration environment. In this post, I will show how to automate functional tests with Selenium and how to run them with Hudson.

Selenium

Selenium is an open source tool, primarily used to automate web application testing across platforms. We use it for functional/regression tests. Below are some tips on how to use Selenium in CI.

Consider Selenium to have two parts:
  1. IDE - a Firefox extension that allows quick development of test cases and running of test suites (only under Firefox).
  2. Remote Control (RC) - an expanded version of the IDE, which allows us to run test suites in different browsers. The functionality important for CI is that it:
    1. can be run from the command line
    2. saves reports from test suite execution
    3. supports the most popular browsers
To write your test scripts, use Selenium IDE: record your use case, update the script(s) and add them to a test suite. To run them, however, use Selenium RC.
 
Developing test scripts in Selenium IDE is very simple. Selenium supports most popular languages, but for beginners or simple scenarios I suggest using HTML. HTML scripts can later be converted to any supported language.
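
For illustration, below is a rough sketch of what a simple recorded scenario could look like after conversion to the Java client. The URL, locators and expected text are made-up examples, and a Selenium RC server is assumed to be running locally on the default port 4444.

    import com.thoughtworks.selenium.DefaultSelenium;
    import com.thoughtworks.selenium.Selenium;

    public class LoginTest {
        public static void main(String[] args) {
            // Assumes a Selenium RC server running locally on the default port 4444
            Selenium selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
            selenium.start();

            // Steps recorded in Selenium IDE, converted to Java client calls
            selenium.open("/login");
            selenium.type("id=username", "tester");
            selenium.type("id=password", "secret");
            selenium.click("id=login");
            selenium.waitForPageToLoad("30000");

            // Simple verification of the expected result
            if (!selenium.isTextPresent("Welcome, tester")) {
                System.err.println("Login test failed");
            }

            selenium.stop();
        }
    }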

Unfortunately, you can’t rely on the record-and-play feature alone; some steps are not saved, or the wrong commands are used. Selenium has a very simple way of adding new commands and extending existing ones. New commands should be added to the user-extensions.js file (writing commands is really simple). The file has to be registered in Selenium IDE (Options > Options > General tab > Selenium Core extensions field) and passed to Selenium RC via the -userExtensions <path to user-extensions.js file> parameter. Additional commands are written and shared online by Selenium users (see the Contributed User-Extensions page for examples).
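
For example, assuming illustrative file names, paths and URL, the Selenium server can be started from the command line with both the extensions file and an HTML test suite:

    java -jar selenium-server.jar -userExtensions user-extensions.js \
         -htmlSuite "*firefox" "http://example.com/" "tests/suite.html" "reports/results.html"

The -htmlSuite option takes the browser, the base URL of the tested application, the path to the test suite and the path where the result report should be written.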

Selenium reports are quite simple and clear, but they need some improvements if we want to use them frequently. Reports show which test cases failed, and clicking one shows the status of each step. There are three command execution states: passed, failed and not run. From the report you can see which command failed, but not what really happened on the tested page. A failed command is not enough data to raise a bug, so we have to rerun the test (mostly using Selenium IDE). But what if the test passes when rerun? And if the test failed on a browser other than Firefox, we have to rerun the whole suite and observe the process. Clearly, debugging, finding the cause of an issue and gathering data/logs takes a lot of time. If we run automated tests often, the aggregated time spent on debugging becomes a critical issue.

The solution is to extend Selenium reports. The built-in captureScreenshot command can automatically generate screenshots of the tested page: before each command is run, the state of the screen is captured and saved, and in the report the screenshots are linked from the corresponding commands. When a command fails, one click shows the state of the page, which makes it much easier to identify the cause of the error. It is also possible to retrace the whole test case path by clicking through the previous commands, to check whether there was any unexpected behaviour that earlier steps did not verify. Selenium reports can be extended not only with automatic screenshots; the HTML code of the tested pages can also be saved.

Selenium reports with links to screenshots/saved html code

The above report shows that Selenium wasn’t able to click the delete button. The screenshot lets us check the state of the page just before the click was attempted. The screenshot or saved HTML shows whether the page had not loaded, the Delete button was missing, its name had changed, or there was some other reason for the failure. Extended reports save time because we do not need to rerun the tests, which is very important when an issue is not 100% reproducible.
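
The same idea can also be applied when Selenium RC is driven from a client library rather than from HTML suites. Below is a minimal, hedged Java sketch of the approach, not the exact report extension described above; the helper name, file paths and locator handling are made up.

    import com.thoughtworks.selenium.Selenium;
    import com.thoughtworks.selenium.SeleniumException;
    import java.io.FileWriter;
    import java.io.IOException;

    public class FailureEvidence {

        // Clicks a locator; on failure, saves a screenshot and the page HTML for the report
        public static void clickWithEvidence(Selenium selenium, String locator, String evidenceName) {
            try {
                selenium.click(locator);
            } catch (SeleniumException e) {
                try {
                    // Screenshot of the screen at the moment of failure
                    selenium.captureScreenshot("reports/" + evidenceName + ".png");
                    // HTML source of the tested page, useful when the screenshot is not enough
                    FileWriter writer = new FileWriter("reports/" + evidenceName + ".html");
                    writer.write(selenium.getHtmlSource());
                    writer.close();
                } catch (IOException io) {
                    System.err.println("Could not save failure evidence: " + io.getMessage());
                }
                throw e; // re-throw so the test is still reported as failed
            }
        }
    }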

Another important thing is to use locators wisely. Selenium allows you to use your preferred type of locator; the default is XPath. Recording tests usually produces XPath expressions like //span2/center/span/center/div/div/div to identify elements. If there are no comments, it is very difficult for anyone other than the author to debug this, as the expression does not clearly identify the tested element. When testing a specific element, changes in other parts of the page shouldn’t cause failures or influence the test case results.

If there is a change on the page but the locators are not written wisely, all tests will fail and finding the cause will not be easy. Additionally, all the test cases will need to be fixed, not only the one designed for this specific element. It’s better to use XPath that says something more and contains only the necessary elements, for example //table[...]//div[text()='Some testing text']. It is simpler to read, and if the test case fails, finding the cause should be straightforward.
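
As a hypothetical sketch using the Java client (the table id and link text below are invented), the difference between the two styles looks like this:

    import com.thoughtworks.selenium.Selenium;

    public class LocatorStyles {
        // Hypothetical example: clicking a row action in an orders table
        static void clickDelete(Selenium selenium) {
            // Brittle, recorded locator tied to the whole page layout;
            // any change elsewhere on the page is likely to break it:
            // selenium.click("//span2/center/span/center/div/div/div");

            // Readable locator anchored only to the element under test,
            // so unrelated page changes do not affect this test case:
            selenium.click("//table[@id='orders']//div[text()='Delete']");
            selenium.waitForPageToLoad("30000");
        }
    }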

Continuous Integration tool - HUDSON

Hudson is another open source tool, one which allows us to introduce Continuous Integration into our process. Hudson uses projects and jobs to manage content. Simply put, jobs will be our collections of test suites for specific projects/applications. Selenium RC allows us to run the same test suites in different browsers, so jobs will be run individually for specific browsers. Hudson supports version control systems such as SVN or CVS and can use them to manage the automated scripts, so the newest version of the test scripts is always run. Just before each run, Hudson will check for updates and, if it finds any, integrate them.
   
Configure how the jobs run:
  • jobs are run only by a user (on request)
  • jobs are scheduled to run on a specific day and at a specific hour
  • jobs are triggered by relationships between jobs
If our application is built on the same Hudson server, the best approach is to set relationships: build after other projects are built. Setting this relationship ensures that the automated tests are always run after the application is rebuilt. When the application is built on a different server, or you only want to run the tests at certain times (e.g. at night), the schedule feature should be used.
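
If you use the schedule option, note that Hudson’s "Build periodically" trigger accepts cron-style expressions; for example, an entry such as 0 2 * * * (an illustrative value) would start the test job every night at 2 a.m.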

The Hudson server allows a distributed architecture: different machines for different projects, and tests run in parallel or in different environments. If we have one application to test in different browsers, we have two choices:
  • use one server and rebuild the application before each run (one job per browser). This approach has one big minus: the time needed to execute all the jobs becomes really long, and the application and the tests run on the same machine.
  • use a separate server for each job. In this case we may need several servers. Running the application and the tests on different machines is very important if part of our automated suite involves performance tests. The time saved is obvious, and it matters most when the test suite is quite long.
I will follow up with one more post to conclude this series; it will cover how to use the two tools described in this post together, along with some rules about timing. Watch this space closely!