The Cross-browser Testing Conundrum: Strategy and Solutions

In our previous post we discussed the challenges of ensuring that a web application behaves predictably and consistently across different browsers and browser versions, on desktops and on devices running any of several operating systems.

In this post we discuss how this problem can be resolved, or at least minimized, by adopting and adapting test approaches depending on the project context.

At Arrk Group, a number of these approaches are used to ensure that web applications are compatible across browsers and devices.

OS-Browser Matrix

A strategy to focus and limit tests

The possible combinations of browser, browser version, operating system, OS version and device for an application under test are far too many to cover completely within the time available. The task is clearly to focus on the combinations that are most popular and most used (please see the diagram). Analytics data helps the testing strategy enormously here: it makes sense to test most where usage is highest, less as usage drops, and not at all once usage falls below a certain threshold. Geography also matters, as browser popularity and preference can vary from region to region. Information from Google Analytics and/or websites that publish usage statistics (e.g. StatCounter [1], StatOwl [2]) is a good starting point. This data, coupled with any input from the customer, can be used to determine both the combinations to cover and the depth of testing for each, whether full or happy path.
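As a minimal sketch of this thresholding, the combinations can be ranked by their analytics share and assigned a test scope. The usage figures, combinations and threshold values below are hypothetical examples, not data from any real project:

```python
# Sketch: derive a test scope per OS-browser combination from analytics usage.
# The percentages and thresholds here are hypothetical illustrations.

def plan_test_scope(usage, full_threshold=15.0, happy_path_threshold=5.0):
    """Map each combination's usage share (percent) to a test scope:
    full regression above full_threshold, happy path down to
    happy_path_threshold, and no testing below that."""
    plan = {}
    for combo, share in usage.items():
        if share >= full_threshold:
            plan[combo] = "full"
        elif share >= happy_path_threshold:
            plan[combo] = "happy-path"
        else:
            plan[combo] = "skip"
    return plan

usage = {
    ("Windows", "Chrome"): 42.0,
    ("Windows", "IE"): 18.0,
    ("macOS", "Safari"): 9.0,
    ("Linux", "Firefox"): 2.5,
}
print(plan_test_scope(usage))
```

Customer input can then override individual entries, for instance forcing a low-usage combination to happy-path coverage because a key client depends on it.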

Along with this, what has worked for Arrk is a systematic distribution of different browsers and devices among the testing team during sprints in which the development team implements on one browser only. This helps find and fix usability bugs early. When application-wide regression tests are performed, however, they focus on the one or two most-used browsers, with a separate round of exploratory smoke tests on the other, non-major browsers and devices.

Test Automation for Assurance

Tests are generally automated to target functional reliability; automated checks of look and feel are rarely worth the effort and are best left to the human eye. Selenium WebDriver is a very popular functional automation tool and has evolved rapidly. WebDriver interacts with a web application much as a human user does, providing a realistic simulation of user actions on the website. The Selenium drivers for Mozilla Firefox, Google Chrome and Internet Explorer have matured considerably, and the drivers for Safari, iOS and Android are slowly catching up. WebDriver-based automation is best done first on Firefox, a good choice thanks to the plug-ins and extensions that make automation more productive and efficient. Once the test scripts work as expected on Firefox, extending them to other browsers (Chrome and Internet Explorer) is relatively straightforward.
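One way to keep the "write once on Firefox, extend to the rest" step cheap is to isolate the browser-specific part behind injected driver factories. The sketch below is illustrative, not a prescribed Arrk pattern; the URL and the title check are hypothetical, and with real Selenium the factories would be e.g. `{"firefox": webdriver.Firefox, "chrome": webdriver.Chrome, "ie": webdriver.Ie}`:

```python
# Sketch: one functional check reused across browsers via injected
# driver factories, so adding a browser means adding one dictionary entry.

def check_homepage_title(driver, url):
    """A single functional check, written once and reused per browser."""
    driver.get(url)
    return "Welcome" in driver.title

def run_across_browsers(driver_factories, check, url):
    """Run the same check on every browser; report pass/fail per browser.
    A failure on one browser only hints at a browser-specific issue."""
    results = {}
    for name, make_driver in driver_factories.items():
        driver = make_driver()
        try:
            results[name] = check(driver, url)
        finally:
            driver.quit()  # always release the browser session
    return results
```

Because the factories are plain callables, the runner itself can be exercised with a stub driver in unit tests, without launching any real browser.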

Integration with a continuous build tool (e.g. Maven, Hudson) also ought to be considered, so that the latest test scripts are built and executed against the latest application build on a daily or periodic basis. Running the tests across browsers, say Firefox, Chrome and IE, thus provides ongoing assurance of functional correctness. All failures, whether of the application or of the tests, are remediated so that the tests stay aligned with the code. Alongside genuine functional failures, a test that fails on one browser but not another typically points to a potential browser-specific implementation issue.
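A nightly CI step of this kind often just loops the suite over the target browsers. The script below is a sketch; the real invocation (shown commented out) depends on the project's build tool and on how the suite reads its browser parameter:

```shell
# Sketch of a periodic CI step: run the Selenium suite once per target browser.
run_suite() {
    # In a real job this would invoke the build tool, e.g.:
    #   mvn test -Dbrowser="$1"
    echo "running suite on $1"
}

for browser in firefox chrome ie; do
    run_suite "$browser"
done
```

Per-browser results then land in the CI dashboard, making a single-browser failure immediately visible as a likely browser-specific issue.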

Screenshots anyone?

There are cloud-based services that take screenshots of a web application across different operating systems and browsers, providing a convenient way to check a website's browser compatibility in one place. Developers can use them up front to validate a design across combinations of operating systems, mobile devices, browsers, versions, screen resolutions, and so on. These services work as follows: once a URL is supplied along with a choice of browser and OS combinations, a number of distributed machines open the website, take screenshots and upload them to a central dedicated server for review. A tester or developer can then visually check the screenshots to confirm the UI is as it should be. Examples of such services include BrowserShots [3] and BrowserStack [4].
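The same idea can be approximated in-house for the browsers one can run locally, using WebDriver's own screenshot support. The wiring below is a hypothetical sketch (with real Selenium the factories would be e.g. `webdriver.Firefox`); it opens the page in each browser and saves one image per browser for later eyeballing:

```python
# Sketch: capture one screenshot per browser for later visual review.
import os

def capture_screenshots(driver_factories, url, out_dir="screenshots"):
    """Open `url` in each browser and save a PNG named after the browser.
    Relies on WebDriver's save_screenshot(), which real drivers provide."""
    os.makedirs(out_dir, exist_ok=True)
    paths = []
    for name, make_driver in driver_factories.items():
        driver = make_driver()
        try:
            driver.get(url)
            path = os.path.join(out_dir, name + ".png")
            driver.save_screenshot(path)
            paths.append(path)
        finally:
            driver.quit()
    return paths
```

This covers only locally installed browsers, which is exactly the gap the cloud services fill with their larger OS and device matrix.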

SauceLabs [5] takes the provisioning of cloud test infrastructure even further: Selenium tests can be run on chosen combinations, with run reports, screenshots and videos.
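In practice this usually means pointing a remote WebDriver at the provider's endpoint. The sketch below is illustrative only: the endpoint format and capability names vary by provider and version, so the current SauceLabs documentation should be consulted before use.

```python
# Sketch: run Selenium against a cloud-provisioned remote browser.
# The endpoint format and capabilities are illustrative, not authoritative.

def remote_endpoint(user, access_key, host="ondemand.saucelabs.com"):
    """Build an (illustrative) remote WebDriver URL for a cloud provider."""
    return "https://%s:%s@%s/wd/hub" % (user, access_key, host)

def make_remote_driver(user, access_key, platform="Windows 10"):
    """Create a remote driver session; Selenium is imported lazily so the
    URL helper above can be used without Selenium installed."""
    from selenium import webdriver
    options = webdriver.ChromeOptions()  # Chrome assumed for this sketch
    options.set_capability("platformName", platform)
    return webdriver.Remote(
        command_executor=remote_endpoint(user, access_key),
        options=options,
    )
```

Once the session is remote, the test code itself is unchanged; only the driver construction differs from a local run.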

Bug-Bashes – throwing it into the open

Another novel approach is the 'Bug Bash'. Bug bashes complement cross-browser testing by generally surfacing issues of look and feel and layout. A bug bash, also known as 'pound the product' or 'break the app', is a time-boxed event in which a group of developers, testers, analysts and even project managers, preferably NOT from the project team, attack the application from the perspective of a normal user, ideally in an exploratory and usability-testing sense. The bug bash can be scheduled once the application is functional and stable, with some project background provided up front by the business analyst or equivalent.

Apart from the issues that will be exposed, the bug bash may bring forth a new perspective on usage of the application. Even though the number and value of bugs found may not match those of a trained tester, there may yet be some odd wins, including an unbiased, untrained perspective on the application. These fresh pairs of eyes, buoyed by the rewards that are normally up for grabs, may employ passion-filled, unconventional methods to find bugs.


Crowd-Testing – throwing it into the wild

Another approach gaining ground (with utilization of cloud infrastructure) is crowd testing [6], which differs from traditional testing methods in that the testing is carried out by many different testers in different places rather than by hired test personnel. The software is exercised on diverse, realistic platforms, which makes the testing more reliable, cost-effective and fast, and the software less likely to ship with bugs. Because the testing is remote, specific target groups can be recruited as testers through the crowd.

This method of testing is employed when software whose success is determined by user feedback has a diverse user base. An example is a shopping website, whose users vary in demographics, technical capability, geography, means of access and so on. Crowd-testing companies, given the numbers of testers generally at their disposal, allow such a site to be exercised far more than a finite in-house team could manage. Crowd testing is frequently used for gaming and mobile applications, when experts who are difficult to find in one place are required for specific types of testing, or when resources or time for internal testing are lacking.

In crowd testing, a community of testers pre-registers voluntarily to test software. The testers are generally paid per bug, depending on the type of bug and its quoted price. The crowd-sourced testers usually complement the in-house testing team. There is inherent diversity not only in environment (different bandwidths, operating systems, browser preferences, devices, etc.) but also in languages, cultures and locales. This has to be managed well, since crowd testing cannot be planned as tightly as in-house testing, and confidentiality considerations must be dealt with before opening the application up to the crowd.

Automated Visual Comparison

With tools like Beyond Compare, screenshots of, say, a page taken across two builds can be overlaid (as transparencies) and the differences found. Automated visual comparison is not easy, though, and the results are unpredictable. Because the screenshots depend on the underlying application build, the browser used, the screen resolution and the display configuration/calibration, the comparison is fraught with inaccuracies and false alarms. So while automated comparison of web pages is an exciting and seemingly promising complement to an eyeball check, even with advances in binary and perceptual diffing [7], visual comparison must be approached pragmatically.
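To make the "false alarms" point concrete, even a naive pixel diff needs a tolerance to absorb rendering noise such as anti-aliasing. The pure-Python sketch below is a hypothetical helper (images represented as grids of RGB tuples, not a real image library) that reports the fraction of pixels differing beyond a per-channel tolerance:

```python
# Sketch: naive perceptual diff over two equal-sized pixel grids.
# Each image is a list of rows of (r, g, b) tuples; `tolerance` absorbs
# small anti-aliasing or calibration differences that would otherwise
# flag every build as changed.

def diff_ratio(img_a, img_b, tolerance=8):
    """Return the fraction of pixels whose channels differ by more than
    `tolerance`. Images must have identical dimensions."""
    total = differing = 0
    for row_a, row_b in zip(img_a, img_b):
        for px_a, px_b in zip(row_a, row_b):
            total += 1
            if any(abs(a - b) > tolerance for a, b in zip(px_a, px_b)):
                differing += 1
    return differing / total if total else 0.0
```

A pragmatic use is to flag only pages whose ratio exceeds some threshold for human review, rather than failing a build on any nonzero difference.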


The above provides choices, from a strategy and tool perspective, for tackling head-on the cross-browser testing challenges created by a web application being accessed in different ways depending on the environment at the user's end.

Quite clearly, there is no single do-all solution; what must be used is a mix of several, depending on the situation the project and organization operate in.


  1. StatCounter
  2. StatOwl
  3. BrowserShots
  4. BrowserStack
  5. SauceLabs
  6. Crowdsourced Testing, Changing the Game
  7. Perceptual Testing for Safer Continuous Deployment