Oct 11, 2016

Testing Front-end in isolation

I have been working on test automation for quite a long time (since 2007). GUI testing was a huge part of that, and it was something I never really enjoyed. I had my reasons, as did many of my colleagues. Not only is it hard to get a green build in our CI pipeline, it is also nearly impossible to execute all of the tests locally before committing the code. At the moment I work at eBay/Marktplaats and we have a pretty large set of Selenium tests (around 2500). The website itself (http://marktplaats.nl) is built using Service Oriented Architecture, and we are trying to move towards microservices, but that is still a long way off. At the moment our website consists of 5 front-end applications and around 75 back-end services, plus a lot of different databases. In some cases services share the same database and in some cases they have a dedicated one. Unfortunately, a Selenium test is always implemented as an integration test, meaning that all services and databases are supposed to be up and running. With around 40 engineers in 6 teams working on the same codebase, it is hard to get a green build. People often push broken code, and since the front-end depends on every service, a small bug somewhere deep in a specific service can break the whole flow.

At some point I decided to rethink the UI testing strategy and started to look for something else. I had to find answers to a lot of questions. What do we really want to test with Selenium? Can we reduce the scope of Selenium tests, make them run 2-3 seconds on average per test, and drastically reduce the amount of flakiness in our CI? Do we really want a layout test to break due to a bug in a back-end service? Can we make Selenium tests just part of the front-end repo without maintaining separate test and stub repos? And last but not least: can we make executing tests locally easier for everyone in the team?

Structure of Selenium tests

What drives me crazy about Selenium tests is that in many cases the flow gets quite complicated, involving interactions with multiple pages and forms. In the end a test might just verify some small popup or dropdown element, but to get to the page you have to do a lot of preparation. For instance, if you would like to test your profile page with a list of your ads on Marktplaats, you have to do the following:

  • Register a user (could be achieved by invoking a method on a user-service directly)
  • Place multiple ads for this user (could be achieved with a different service)
  • Open the website
  • Log in
  • Go to user profile page
  • Verify something (e.g. layout)

If there is a bug in the user-service, or let’s say it wasn’t properly deployed to the integration environment, or a database machine went down, our test would fail. But is it a valid failure if the scope of the test was just validation of the profile page layout? For a front-end engineer, who is primarily concerned with getting CSS and HTML working, such a failure makes no sense. It doesn’t give them valuable feedback. In fact it delays the actual feedback on the UI component that this engineer is primarily interested in. And that is just one of many examples that we encounter every day. I personally think that the only valid reason for such a test to fail is if it was intended to test the integration of back-ends with the front-end. But that is a totally different story. In the case of integration testing I suggest that we focus on the functionality of the complete system without diving into the details of page rendering.

Front-end responsibility and test scope

If you think about it, a front-end in Service Oriented Architecture is mainly responsible for three things:

  • Rendering the data (HTML/CSS)
  • Interpreting user actions (JavaScript)
  • Invoking services

That’s it! Of course, when it comes to a monolithic web application it is a totally different story, but even there we can come up with a similar list if we remove business logic from the scope of Selenium test. I will come back to that later.


I like the concept of mocking 3rd parties in unit tests: it makes it much easier to test your components in isolation. At Marktplaats we mostly use Mockito in our unit tests. It has a very convenient API and works well in terms of actual mocking and assertion. Let’s see if we can make it work in a Selenium test for a Java webapp.

Imagine we have a webapp that directly invokes services over some protocol like Thrift. What we need to do is replace all those Thrift implementations of every service with something that would handle all mock invocations based on a test session. Why? There is no sense in doing all the mocking unless we can scale our tests, and for that to happen we need a proper mechanism for isolating every mock within its own test. Normally you don’t have to care about such things when you are mocking components in your unit tests. But since we have to use a browser and hammer our website from the outside, simple mocking would not work. Imagine that you want to run 20 tests in parallel for the same page with different outcomes. In one test you might want to check a successful call, and in another you might want to test how the page reflects an error thrown from the service. Those are two incompatible states, but we can make it work. The easiest thing that came to mind was this picture:

The idea is simple. Each test generates its own unique key (e.g. a UUID in Java). Then it registers a mock of a service in a so-called Mock Registry. A Mock Registry is a simple storage of mocks and keys (e.g. a Map<String, Object>) which is implemented in the test scope, meaning it can only be used within the tests and will never go to production. Once all the mocks are prepared, a test loads a browser and sets a specific cookie. That cookie is very important as it contains the previously generated key for the mocks. When a request hits our server, it extracts the key from the cookies and searches for the proper mock in the Mock Registry. That’s the high-level plan; let’s dive into some more details.

Of course we need a bit more stuff to make it all work. On the diagram above you can see that we also need to implement a Request Filter and a Mock Proxy. The Request Filter is used for extracting the key from the cookies and storing it within its thread (e.g. in a ThreadLocal). Later this key will be picked up by a Mock Proxy object in order to find the right mock. The Mock Proxy works by delegating a call from a controller to a prepared mock. You might ask: isn’t this too hacky? The following picture would look much better, wouldn’t it?

Normally I try to avoid using ThreadLocal and try not to store anything per thread. But that is only possible if the cookies or request metadata are passed to all services all the time. A Mock Proxy needs to be able to extract the cookie for each service invocation, and if it can’t find anything like that in the method arguments, we have to go for the ThreadLocal trick. Normally it makes sense to pass some request metadata to all services, since many companies implement feature switches or A/B testing: a feature switch is defined by a specific cookie and its value is passed to all the services. If you have that already, your mocking implementation will be simpler. But in many cases you don’t, so let’s see what that hack looks like.
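The per-thread key storage described above can be sketched as a tiny holder class. This is an illustrative sketch, not the actual Marktplaats code: a request filter (e.g. a servlet Filter) would call set(...) after reading the __SERVICES_MOCK_KEY__ cookie, and clear() in a finally block once the request has been handled, so the key never leaks to the next request served by the same pooled thread.

```java
import java.util.Optional;

// Per-thread storage of the mock session key. The Mock Proxy reads it
// via get() when it needs to pick the right mock from the Mock Registry.
final class CurrentMockKey {
    private static final ThreadLocal<String> KEY = new ThreadLocal<>();

    private CurrentMockKey() {
    }

    // Called by the request filter when the mock cookie is present
    static void set(String key) {
        KEY.set(key);
    }

    // Called by the Mock Proxy to locate the right mock for this request
    static Optional<String> get() {
        return Optional.ofNullable(KEY.get());
    }

    // Called when the request completes, otherwise the key would leak
    // into the next request served by the same thread
    static void clear() {
        KEY.remove();
    }
}
```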

Let’s start with the test implementation first and get a general idea of the test structure. Imagine that we want to test a search page on a classifieds website. To make it simple, let’s say that we just have a single search-service which is represented by the following Java interface:

public interface FindingService {
    List<Ad> findAds(String query, int categoryId);
}

When used in production it could be a Thrift-based implementation, but for our test we are going to replace it with a Mockito mock. For a mock registration we need 3 things: the actual mock, a key and the name of the mocked service.

public class SearchTest extends GalenTestBase {
    private String mockKey = UUID.randomUUID().toString();

    FindingService findingService = MockRegistry.registerMock(
        Mockito.mock(FindingService.class), mockKey, FindingService.class.getName());

    // ...
}

A MockRegistry can be implemented as a singleton object. It only needs two methods that allow us to store and obtain a mock based on a pair of key and name (having registerMock return the mock makes the inline field assignment above possible). We need the name of the mock since a single test might manage multiple mocks under a single key.

public enum MockRegistry {
    INSTANCE;

    // Pair is not in the JDK; e.g. org.apache.commons.lang3.tuple.Pair.
    // ConcurrentHashMap, since parallel tests register mocks concurrently.
    private final Map<Pair<String, String>, Object> mocks = new ConcurrentHashMap<>();

    public static <T> T registerMock(T mock, String sessionId, String mockName) {
        INSTANCE.mocks.put(Pair.of(sessionId, mockName), mock);
        return mock;
    }

    public static Object pickMock(String sessionId, String mockName) {
        return INSTANCE.mocks.get(Pair.of(sessionId, mockName));
    }
}

Let’s get back to our test. We have already instantiated and registered a mock. Now it is time for some action. First of all, it is important to create a browser and inject a cookie with our key.

public class SearchTest extends GalenTestBase {
    // ...
    WebDriver driver;

    @BeforeTest
    public void setupDriver() {
        driver = new FirefoxDriver();
        driver.get("http://marktplaats.nl");
        driver.manage().addCookie(new Cookie(
            "__SERVICES_MOCK_KEY__", mockKey,
            "/", COOKIE_EXPIRATION_DATE));
    }
    // ...
}

This is how the test looks. In the example below I am using Galen in order to test the search page layout.

public class SearchTest extends GalenTestBase {
    // ...
    @Test
    public void should_display_ads_when_searching_for_iphone() {
        when(findingService.findAds(anyString(), anyInt()))
            .thenReturn(asList(
                new Ad("iPhone 7", "Very nice phone", 456.45),
                new Ad("iPhone 3", "Very nice phone", 100.00)));

        driver.get("/z.html?query=iphone");
        checkLayout(driver, "/specs/search-iphone.gspec");

        verify(findingService).findAds("iphone", 0);
        verifyNoMoreInteractions(findingService);
    }
    // ...
}

As you can see, the test is pretty straightforward. Let’s see how we can make a functional test. What if we need to test that clicking a link in the search results takes us to an item page? An item page could be yet another page that takes its data from another service (e.g. AdService). This means that we again have to prepare all the mocks that will be used during the test flow.

public class SearchTest extends GalenTestBase {
    // ...
    FindingService findingService = MockRegistry.registerMock(
        Mockito.mock(FindingService.class), mockKey, FindingService.class.getName());
    AdService adService = MockRegistry.registerMock(
        Mockito.mock(AdService.class), mockKey, AdService.class.getName());
    // ...

    @Test
    public void should_take_to_item_page_when_clicking_link_in_search_results() {
        when(findingService.findAds(anyString(), anyInt()))
            .thenReturn(asList(
                new Ad(123, "iPhone 7", "Very nice phone", 456.45),
                new Ad(456, "iPhone 3", "Very nice phone", 100.00)));
        when(adService.getAdById(anyInt()))
            .thenReturn(new Ad(456, "iPhone 3", "Very nice phone", 100.00));

        driver.get("/z.html?query=iphone");
        new SearchResultsPage(driver).clickResult(1);

        AdPage adPage = new AdPage(driver);
        assertThat(adPage.getTitle(), is("iPhone 3"));

        verify(findingService).findAds("iphone", 0);
        verify(adService).getAdById(456);
        verifyNoMoreInteractions(findingService, adService);
    }
    // ...
}

As you can see, this time we also have to verify that the front-end invoked the ad-service after the test clicked the second ad.

Mocking implementation

We already have a MockRegistry, now let’s see how we can make use of it. First we will write a static method for mocked service instantiation. In Java we can easily create a proxy implementation that is generic for every service.

public class MockedService {
    @SuppressWarnings("unchecked")
    public static <T> T createService(Class<T> serviceClass) {
        return (T) Proxy.newProxyInstance(
            serviceClass.getClassLoader(),
            new Class<?>[] {serviceClass},
            new MockProxy(serviceClass.getName()));
    }
}

The actual mock orchestration logic is implemented in the MockProxy class:

class MockProxy implements InvocationHandler {
    private final String serviceName;

    MockProxy(String serviceName) {
        this.serviceName = serviceName;
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // Searching for a mock key for the current thread
        Optional<String> mockKey = CurrentMockKey.get();
        if (mockKey.isPresent()) {
            // Obtaining a mock based on the key and the service name
            Object mock = MockRegistry.pickMock(mockKey.get(), serviceName);
            if (mock != null) {
                // Delegating the current service invocation to the mock
                return method.invoke(mock, args);
            } else {
                throw new RuntimeException("Mock " + method.getName() + " in " + serviceName
                    + " service is not defined for key " + mockKey.get());
            }
        } else {
            throw new RuntimeException("You haven't provided a mock key");
        }
    }
}

It works!

All the above code should be sufficient for isolated front-end testing. The only thing left is to instantiate a webserver and replace all services with mock proxies. I am not going to cover that part in this article since it varies from application to application. E.g. in the case of the Spring Framework you would have to create a separate applicationContext.xml file in which you provide a different factory for service instantiation. In other cases you might add an argument to the main class of your app and instantiate it with different service factories depending on the context. If you want to see it in action just check out my galen-ide open-source project. I am still working on this application and it is not released yet, but I made a lot of UI bugs during its development. At some point I got tired of fixing those bugs with every change and decided to introduce isolated Selenium + Galen tests in order to verify page functionality and layout every time before I push the code.
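As a rough illustration of the "different service factories" idea, here is a minimal sketch of a startup-time switch. Everything in it is hypothetical: the mock.services system property, the GreetingService interface, and the inline handler standing in for the MockProxy lookup described above.

```java
import java.lang.reflect.Proxy;

// Hypothetical service interface; in the article's setup this would be
// FindingService, AdService, etc.
interface GreetingService {
    String greet(String name);
}

final class ServiceFactory {
    // With -Dmock.services=true the webapp is wired with dynamic proxies
    // (the article's MockedService.createService); otherwise it gets the
    // real, e.g. Thrift-based, client implementation.
    static <T> T create(Class<T> serviceClass, T realImplementation) {
        if (Boolean.getBoolean("mock.services")) {
            return serviceClass.cast(Proxy.newProxyInstance(
                serviceClass.getClassLoader(),
                new Class<?>[] {serviceClass},
                (proxy, method, args) -> {
                    // Stand-in for the MockProxy lookup logic
                    throw new IllegalStateException(
                        "No mock registered for " + method.getName());
                }));
        }
        return realImplementation;
    }
}
```

The production wiring stays untouched; only the test launcher flips the flag before the webserver starts.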

Galen-ide is a monolithic webapp, thus it doesn’t have standalone services. But I decided to refactor it in such a way that all business logic is abstracted behind simple Java interfaces. That made all my controllers pretty simple, as they just take user input and delegate the call to a service component without knowing its internal implementation. In my TestNG tests I added a BeforeSuite method which starts a webserver with all the services replaced by mocks. I execute all my tests using the mvn clean verify command, and they are also executed in Travis with every commit.

WireMocking the API

All of the above makes sense when we are talking about a monolithic webapp or a front-end that communicates with services directly. But there is another way of implementing a front-end. Here I will briefly cover a front-end that is abstracted from the services by an API layer.

This means that we simply have 2 webservers: Front-end and API. Both of them are registered in an HTTP router so that the user doesn’t notice a thing. When you open a page in the browser you will first get the HTML, JavaScript and CSS loaded from the Front-end app. Later, while the page is being rendered, it might send Ajax calls to the API under the same domain. Those are the calls we can easily mock. This time the implementation will be much easier, but the configuration is a bit trickier. The good part is that we don’t have to implement the mock orchestration logic in our front-end code. The downside is that we have to get the HTTP router configured with a standalone mock server registered in it. Luckily there is a good solution for such a mock called WireMock.
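For illustration, such a test-environment router could be sketched in nginx; the ports and the /api prefix below are assumptions for the sketch, not from any real setup:

```nginx
# Test-environment routing sketch (hypothetical ports):
# Ajax calls to the API are answered by a standalone WireMock,
# while the front-end app itself stays real.
server {
    listen 8080;

    location /api/ {
        proxy_pass http://localhost:8081;   # WireMock standalone
    }

    location / {
        proxy_pass http://localhost:3000;   # front-end webapp
    }
}
```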

Once we have all those 3 apps up and running, we can write our tests in exactly the same way as earlier in this article. A test will generate a key and prepare a mock, but this time the mock preparation will be done using the WireMock API. Here is a short example of mocking the /messages resource:

{
  "request" : {
    "urlPath" : "/messages",
    "method" : "GET",
    "cookies" : {
      "__MockKey__" : { "equalTo" : "wer893275rweit32o5u3tewqt32" }
    }
  },
  "response" : {
    "status" : 200,
    "headers" : { "Content-Type" : "application/json" },
    "body" : "{\"messages\":[{\"id\":\"34234\",\"text\":\"Hi There!\"}]}"
  }
}

As you can see, I am using a __MockKey__ cookie in that mocked call. Once a test opens a page with that cookie it will get the simulated message. You can do a lot more with WireMock, so I advise reading its documentation. One of the cool things is that you can also run it in proxy mode.


At the moment I don’t have a lot of experience with isolated front-end testing and I am still experimenting with it, but here is what I have been able to achieve so far:

1) Tests are stable. It doesn’t make a difference whether I run the tests on my local machine or execute them in Travis CI: I always get exactly the same result in every run. Since I started using this approach I have never had any flaky results. This is simply due to the fact that there are no 3rd parties involved. No databases, no services and no business logic. I am able to focus on simple GUI testing, which Selenium is really good at.

2) The front-end gets well documented. Every test represents a page in a specific state and configures all the mock invocations. That means that if I need to figure out how my page works, I can just go to the tests and see what kind of data gets sent to the services.

3) It is easy to execute tests locally. In the case of a monolithic app this doesn’t impress that much, but if we are talking about a service oriented architecture with dozens of services and databases, that’s a completely different world. Previously, if I wanted to execute Selenium tests locally for the Marktplaats website, I would need to do a lot of stuff: get all the virtual machines, sync all the GitHub repos, build all the services, execute all the database migrations, kick every service and hope for the best. Since many engineers are pushing their code to different services, you rarely get a green run on your local machine. On the other hand, with isolated front-end tests all you need to do is get the repo of your webapp and install a browser. That’s it, you don’t have to do any extra steps to get your tests up and running locally. From this moment on there is no excuse for not running UI tests before pushing code changes.

4) Fast feedback. Normally, on the integration environment it takes on average 30 seconds for a Selenium test to execute, and some tests take even longer. But it takes a lot more time to get the complete system up and running. I have had some really bad times when I had to spend 2 days just fixing everything on my local machine, and once I got my tests passing locally I couldn’t get the same result in Jenkins because someone else had introduced a bug in one of the services. With isolated tests I get an average of 2 seconds per test and around 3 to 5 seconds for the webserver to start.

5) Testing the error handling. Using this approach I am able not only to test the successful page states, but I can also simulate errors coming from the services, or even simulate a service timeout. That means I can easily cover any possible state of any page in my application, which wasn’t possible on the integration environment.
