Exploring Mobile UI Testing - Network Isolation

This post is part of a series that tries to describe my journey in the Mobile UI Automated Testing world. This time we'll explore network isolation, with practices that give us complete control over every network interaction when running tests.

When designing our internal UI testing frameworks, one of the key aspects was removing any interaction with the network. No matter what, dealing with real endpoints is bound to fail at some point, as we can't make sure that network conditions are always the same, nor can we control what the backend returns. Even though today we have several test suites that interact with our test/production environments in a "pure" E2E approach with the help of RobotFramework + Appium, we realized that this approach has some limitations.

For instance, for our use case, replicating certain scenarios against the real server would require clean accounts that need heavy, lengthy setups every time, and the same goes for returning them to their initial state. That's not a big deal if your users can have just a couple of different account configurations, but when it's dozens, or more, it becomes a real struggle. Moreover, when we interact with third-party APIs and don't control the underlying services, configuring accounts for testing might end up involving different actors, sometimes from different organizations, which, as you can imagine, slows down the entire process.

Also, what if the server is unreachable? We can't do much until the reported incident is closed. Some tests also cover cases where the backend returns a particular status code, be it a 400 or a 500, or, even worse, a nested error code that can only be reproduced under extremely specific conditions.

Replicating those scenarios is not easy, and relying on unit tests written by the developers is not always an option, because some edge cases the QA team might find are not already covered.

Achieving Network Isolation

Over the years I had the opportunity to see and try out several implementations for isolating the network in E2E UI tests, each with its pros and cons: for instance, using the phone clipboard as a communication channel between the test application and the target application (mainly on iOS), or creating fakes of the services, based on the contracts defined by their protocols/interfaces, that replaced the real implementations in the test flavor/scheme.

The former was unreliable, as some phones and OS versions did not allow such a thing (e.g. forbidding copy-paste between apps: you could write to the clipboard, but an empty string was returned when reading from it!). The latter just turned into a boilerplate nightmare that was driving testers insane, and made us realize it wasn't meeting the expectation we had set: making mobile automation accessible to newcomers with no hassle.

Another solution was creating an additional environment just for E2E testing, separate from test, staging, and production, but scalability here is, in my opinion, rather tricky. It surely works for some limited cases, but if you want to do extensive E2E testing you might need additional logic (either on the frontend or the backend) to make sure certain behaviors are forced and handled correctly when dozens of devices at a time are performing mocked requests.

At this point, we just needed a highly available solution that would replace the real endpoints, configurable at runtime, with close to zero maintenance. What if we just ran a local webserver on the device, started in the test application process, that would stop at the end of every test execution?

Local webservers and JSON mocks to the rescue

In our custom automation testing strategies, we have a simple HTTP(S) server running on the device at localhost, as part of the testing app, that serves all the network requests instead of the real endpoints. It's just a bunch of https://localhost:<PORT>/... URLs replacing the base URLs, which guarantees that, unless the test runner process crashes, our network requests will always go through.

So we'd have something like this:

[Figure: on the left, the app communicates with real endpoints; on the right, with network isolation, every request is routed to a local webserver running on the device]

As you can see above, on the left-hand side of the picture the app communicates with real endpoints. With network isolation, instead, on every setup we make sure the URLs are swapped for custom ones pointing to a local webserver running on the device itself as part of the test app. This works for every network service the app might interact with, although if you want to go the extra mile you might, for instance, end up resorting to reflection on Android when a third-party library needs to be mocked in E2E and the target application code doesn't expose enough abstraction.
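To make this more concrete, here's a minimal sketch, not our exact implementation, of what such a setup could look like in an Android instrumented test, assuming JUnit 4 and OkHttp's MockWebServer (more on it in the next article); TestAppInjector is a hypothetical stand-in for the app's dependency injection hook, and the asset layout is assumed too:

import androidx.test.platform.app.InstrumentationRegistry
import okhttp3.mockwebserver.Dispatcher
import okhttp3.mockwebserver.MockResponse
import okhttp3.mockwebserver.MockWebServer
import okhttp3.mockwebserver.RecordedRequest
import org.junit.After
import org.junit.Before

// Hypothetical stand-in for the app's injectable network configuration.
object TestAppInjector {
    var baseUrl: String = ""
    fun overrideBaseUrl(url: String) { baseUrl = url }
}

class NetworkIsolatedTest {

    private lateinit var server: MockWebServer

    @Before
    fun setUp() {
        server = MockWebServer()
        // Map every incoming request to a JSON asset instead of a real endpoint.
        server.dispatcher = object : Dispatcher() {
            override fun dispatch(request: RecordedRequest): MockResponse {
                val path = request.path?.trimStart('/')
                    ?: return MockResponse().setResponseCode(404)
                // Assumed asset layout mirroring the endpoint paths.
                val body = readMock("qa-network-mocks/$path.json")
                return MockResponse().setResponseCode(200).setBody(body)
            }
        }
        server.start() // binds to localhost on a free port
        // Swap the app's base URLs for the local server's before the test runs.
        TestAppInjector.overrideBaseUrl(server.url("/").toString())
    }

    @After
    fun tearDown() {
        // Stop the server at the end of every test execution.
        server.shutdown()
    }

    private fun readMock(assetPath: String): String =
        InstrumentationRegistry.getInstrumentation().context.assets
            .open(assetPath).bufferedReader().use { it.readText() }
}

The details will vary, but the life cycle is the point: the server starts during setup and shuts down after every test, so no request ever has to leave the device.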

Fake network responses

Before diving into the technical implementation of the local servers, we need to consider that faking network responses is not an easy task if you want it to work properly. I believe it's essential to have a single source of truth for the JSON files that will be served, one solid and versatile enough to avoid having to move thousands of lines of JSON across multiple projects every time we make changes or additions.

When I first started implementing tests on Android, I made the terrible mistake of storing the JSON files we wanted the mock server to return within the repository of the target app. This solution soon led to flooding our Android development team with thousands of lines of JSON at every pull request, and I can imagine it was quite annoying for them to browse through all those files all the time.

Another approach was possible, and a dedicated repository seemed a far better idea, as it would allow us to:

  • Avoid replicating the same JSON files multiple times, for instance once in the test case definition and two more times within the Android and iOS projects. I didn't mention this before, but I was also populating our test book with test cases that referred to the same mocks, attached as raw JSON files in our test management platform.
  • Avoid bloating the target apps' repositories and having developers find thousands of meaningless diffs between pulls. I still feel bad for whoever had to scroll, many times, through 20+ file changes that had no meaning to them.
  • Have a consistent history that allows us to version the mocks and lock their fetch to a given commit SHA. While copy-pasting a single file to multiple locations countless times is indeed a bad idea, maintaining a separate repository and following good git practices (branching, PRs, reviews, etc.) helps make sure all the mocks are relevant at the correct time.

To elaborate a bit on the last point, in both the Android and iOS apps we have a Mockfile, a plain text file that defines the following variables:

MOCKS_REPO=git@github.com:a-fancy-org/mocks-repository.git
MOCKS_TARGET_REF=main
MOCKS_DEFAULT_LOCAL_PATH="../mocks-repository-local-folder"
MOCKS_TARGET_PATH="app/src/androidTest/resources/assets/qa-network-mocks/"

MOCKS_REPO is the SSH URL of the repository; MOCKS_TARGET_REF is the default HEAD reference; MOCKS_DEFAULT_LOCAL_PATH is the default path we assume contains the network mocks, which can be changed when the script is run; and MOCKS_TARGET_PATH is the folder within the target application source where the mocks will be read from.

We then have Mockfile.lock, which is just a one-liner:

TARGET_MOCKS_SHA=<SHA of the target commit> 

TARGET_MOCKS_SHA is the SHA reference of the mocks-repository commit we want to check out when performing setup in our CI environment. This guarantees that, unless we explicitly change it, we are always going to fetch the same version no matter what. This is extremely important, as it allows us to go back to any past revision of the target application and be entirely sure that, if we run our UI test suite, it will have all the mocks it needs.

And lastly, we use a simple script that orchestrates everything: installing the mocks at the given SHA, updating the Mockfile.lock file to the latest revision of the main branch, or symlinking the local path, in case we already have the repo cloned somewhere on our machine, to allow faster iterations. In hindsight, instead of symlinks, git submodules would probably have been a better idea, at least on iOS (mostly because of how Xcode, as an IDE, handles them), but we've found our way around it by now.
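For illustration, this is roughly the logic such a script performs, sketched here as a Kotlin script with assumed names, paths, and behavior (the real one also handles the symlink fast path and the lock-file update):

// install_mocks.main.kts, a sketch with assumed names and locations.
import java.io.File

fun git(vararg args: String, dir: File) {
    val exit = ProcessBuilder("git", *args).directory(dir).inheritIO().start().waitFor()
    check(exit == 0) { "git ${args.joinToString(" ")} failed" }
}

// Parse simple KEY=VALUE files such as Mockfile and Mockfile.lock.
fun parse(file: File): Map<String, String> = file.readLines()
    .filter { '=' in it }
    .associate { line ->
        val (key, value) = line.split('=', limit = 2)
        key.trim() to value.trim().removeSurrounding("\"")
    }

val mockfile = parse(File("Mockfile"))
val lock = parse(File("Mockfile.lock"))

// Clone the mocks repository (or reuse a previous checkout)...
val checkout = File("build/qa-mocks")
if (!checkout.isDirectory) {
    git("clone", mockfile.getValue("MOCKS_REPO"), checkout.path, dir = File("."))
}
// ...and pin it to the locked commit so every CI run fetches the same mocks.
git("fetch", "origin", dir = checkout)
git("checkout", lock.getValue("TARGET_MOCKS_SHA"), dir = checkout)

// Copy the JSON mocks into the target application source tree.
val target = File(mockfile.getValue("MOCKS_TARGET_PATH"))
checkout.walkTopDown()
    .filter { it.isFile && it.extension == "json" }
    .forEach { file ->
        val dest = target.resolve(file.relativeTo(checkout))
        dest.parentFile.mkdirs()
        file.copyTo(dest, overwrite = true)
    }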

The naming dilemma

With the mocks ready and all set up, the last thing we had to deal with was file naming. In the beginning, since I was working on Android, which properly preserves the folder hierarchy when building and deploying the app on the device, I didn't pay much attention to name uniqueness. The main, and only, rule was to follow the remote path when organizing the mock files.

So, for instance, for the endpoint [base-url]/api/v2/endpoint1/something we would place its JSON mock in the api/v2/endpoint1/something folder, with a name recalling the test case it belongs to (e.g. for TEST-100, we'd have a file called test_100_1.json).

Down the road, this turned out not to be smart on Xcode, where, out of the box, compilation fails if the project contains files with the same name, even in different paths. We then started prepending the last segment of the API URL, so the name becomes something_test_100_1.json, making sure to avoid any duplicates (the _1 allows for multiple mocks in case of multiple requests to the same endpoint within the same test).
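Expressed as a tiny Kotlin helper (the function name is hypothetical, just to illustrate the convention):

// Builds a unique mock file name from the endpoint path and the test case ID.
fun mockFileName(endpointPath: String, testCaseId: String, requestIndex: Int = 1): String {
    val lastSegment = endpointPath.trimEnd('/').substringAfterLast('/')
    val caseRef = testCaseId.lowercase().replace('-', '_')
    return "${lastSegment}_${caseRef}_$requestIndex.json"
}

fun main() {
    // Prints "something_test_100_1.json", matching the example above.
    println(mockFileName("/api/v2/endpoint1/something", "TEST-100"))
}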


To wrap up, our testing solutions follow these few rules when running UI tests:

  • No real network allowed. All endpoints point to localhost by leveraging dependency injection on network configurations, plus tricky hacks on third-party libraries wherever necessary;
  • All JSON files have unique and consistently structured names;
  • All JSON files are stored in a single repository and fetched at compile time on CI. When working locally, symlinking is available, provided the tester has the repo cloned somewhere.

Next time we will start looking at how network requests are served in the Android implementation. We'll see how MockWebServer gave us a head start and what changes we introduced to adapt the mocking framework to our own needs.

Niccolò Forlini

Senior Mobile Engineer