
What IS real? Augmented Reality Testing with ARKit


AR has been one of the up-and-coming technologies of the last few years, already known as the tech behind Pokémon Go, Snapchat's animated emojis, and Instagram's 3D stickers. Let’s walk through the unique challenges it presents to QA and the testing process.

In the fourth and final part of our beginner’s guide to Augmented Reality (AR) using ARKit we’ll be taking a look at testing. This post outlines my journey from knowing nothing about AR or ARKit to creating my first working ARKit implementation. It’s intended to help others looking to write and test their first ARKit application.


AR was one of the technologies that caught my eye in 2017 but I never got to try it out first-hand. When I was presented with the opportunity to work on an internal research project that involved AR and Machine Learning I jumped at the task enthusiastically.

It's worth clarifying where this testing effort fits within the testing pyramid. If you haven't heard of it before, the testing pyramid is a guideline on the ratio of the different test levels we include in our project:

Testing Pyramid

In this post we're looking at the higher levels. While we do consider Integration Tests in the “Solutions” section by using the feature point clouds, we only performed exploratory UI testing. Unit tests would be written in the usual way, since under the hood we're still just adding Swift code.

As a QA, I naturally decided to first have a look at the test support Apple provides. The search was brief, as there isn’t any yet (other than some debug options, which we’ll cover in the “Solutions” section). That’s OK though, since testing is about one’s mindset, not the toolset. I followed this tutorial and made a simple app to get familiar with ARKit.

Together with my fellow Novodans we then got working on building the app as described earlier in this series. The end goal was a simple one: add a wireframe to an object within an ARSCNView. The object itself is recognised using Machine Learning but that is a separate topic. For this post we’ll consider the AR side of the app only.
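
To give a flavour of that end goal, here’s a minimal sketch (not the demo app’s actual code) of adding a wireframe overlay at a known position in an ARSCNView; in the real app the position would come from the Machine Learning step:

import ARKit
import SceneKit
import UIKit

func addWireframe(to sceneView: ARSCNView, at position: SCNVector3) {
    // A simple box standing in for the detected object's bounds
    let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)

    // Render only the edges so the box shows up as a wireframe
    let material = SCNMaterial()
    material.fillMode = .lines
    material.diffuse.contents = UIColor.green
    box.materials = [material]

    let node = SCNNode(geometry: box)
    node.position = position
    sceneView.scene.rootNode.addChildNode(node)
}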

Testing Challenges

All testing is done on physical devices, as the iOS Simulator has no camera and therefore can’t run an AR session.

The first thing that struck me is that there is no UI in our Demo App - at least, not in the typical sense. Once we've launched it, the only UI elements are the two buttons at the bottom of the screen:

Bananas in a basket

One of these bananas doesn't exist!

The only user inputs are touches and the only events are the addition and removal of 3D models. Accessibility is thus already an issue that we’d have to look into before considering a production version.

The second observation is that testing AR is more time-consuming than traditional testing activities. One needs to get up and manually try different scenes, objects, lighting conditions, and so on. Compared to typing values into a text field this is physically slower and much less consistent.

Consistency in particular is an area of concern. As per Apple’s Developer Documentation:

ARKit does not guarantee that the number and arrangement of raw feature points will remain stable between software releases, or even between subsequent frames in the same session.

Since feature points form the basis of the app’s view of the world it becomes clear that we simply can’t automate our efforts using XCUITest.

If you need an introduction to XCUITest head over to our Getting started with XCUITest post.

Despite our best efforts, every screenshot could potentially be different, since the view is built on constantly changing input data (our scene). And this is just for a static viewpoint; we didn’t even try a moving setup.

A testing setup attempt

Putting everything else aside, the biggest challenge is simply the lack of maturity of the field. Most developers are working with these APIs for the first time and figuring out the do’s and don’ts along the way. We’re probably a long way from solid testing frameworks, good practices, or reliable CI.


Solutions

Let’s try to address the above difficulties.

“No UI” isn’t necessarily an issue. If anything, we have fewer inputs to worry about.
Screenshot comparison on a physical device is a no-go due to the scene consistency issues mentioned above. However, ARKit provides us with the ARPointCloud. If we capture the full set of points we can use that to provide consistent input. The full set of points can be accessed using one line:

let pointCloud = sceneView.session.currentFrame?.rawFeaturePoints
This still just provides a single capture but we can then save consecutive sets of points to model the scenes we want to test. Printing a full Point Cloud gives something like this, where every float3 is a detected point in 3D space:

Feature point cloud

These correspond to the highlighted points here:

Feature points on an iPhone
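
As for saving consecutive sets of points, one straightforward approach is to hook into the ARSessionDelegate and accumulate the raw feature points of every frame. The sketch below is our own illustration (the PointCloudRecorder name is made up, and persisting the data to disk is left out):

import ARKit

final class PointCloudRecorder: NSObject, ARSessionDelegate {
    // One entry per frame: the raw feature points ARKit detected in that frame
    private(set) var recordedFrames: [[vector_float3]] = []

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let points = frame.rawFeaturePoints?.points else { return }
        recordedFrames.append(points)
    }
}

// Usage: keep a strong reference to the recorder and assign it before running the session
// let recorder = PointCloudRecorder()
// sceneView.session.delegate = recorder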

To get the points to show on the screen, you’ll need to enable them by adding the debug options to the viewDidLoad method of your view controller:

sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints]

You can also add the 3D World Origin marker to the above, which represents the three axes starting from the [0,0,0] point in the current scene, using this:

sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints, ARSCNDebugOptions.showWorldOrigin]

The Origin marker is particularly useful when experimenting with newly imported 3D models which might not get added at the expected location. This could be due to an accidental offset or scaling issue, since 3D modelling tools often use measurement units different to those used by Xcode, as mentioned in the Designing for AR part of this series.
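
For example, if an imported model was authored in centimetres it will come in 100 times too large, since SceneKit and ARKit work in metres. A rough sketch of correcting that when adding the model (the file name here is made up):

if let modelScene = SCNScene(named: "art.scnassets/banana.scn"),
   let modelNode = modelScene.rootNode.childNodes.first {
    modelNode.scale = SCNVector3(0.01, 0.01, 0.01)  // cm -> m
    modelNode.position = SCNVector3(0, 0, -0.5)     // half a metre in front of the world origin
    sceneView.scene.rootNode.addChildNode(modelNode)
}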

The stats and fps counter at the bottom of the screen can be enabled with this line:

sceneView.showsStatistics = true
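
Putting those together, a typical viewDidLoad for this kind of debugging setup might look like this (assuming an ARSCNView outlet called sceneView, as in the rest of this series):

override func viewDidLoad() {
    super.viewDidLoad()

    // Show the detected feature points and the world origin axes
    sceneView.debugOptions = [ARSCNDebugOptions.showFeaturePoints,
                              ARSCNDebugOptions.showWorldOrigin]

    // Show the fps and rendering statistics bar at the bottom of the view
    sceneView.showsStatistics = true
}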

With the above data in place we could validate our app’s behaviour by asserting that the wireframe always gets added to the same point given a specific Point Cloud. We didn’t get to implement this solution as part of our experiment, as we invested our time in building the app and learning more about AR, but watch this space for our first attempts at automated testing based on Point Clouds.
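
To make the idea concrete, here is a rough sketch of what such a test could look like. WireframePlacer, its anchorPosition(for:) method, the module name, and the sample points are all hypothetical; the real placement logic would first need to be extracted from the view controller so it can take a recorded point cloud as plain input:

import XCTest
import simd
@testable import ARDemoApp

final class WireframePlacementTests: XCTestCase {

    func testWireframeIsAnchoredConsistentlyForARecordedPointCloud() {
        // A previously recorded set of feature points (normally loaded from a fixture file)
        let recordedPoints = [
            simd_float3(0.12, -0.03, -0.41),
            simd_float3(0.10, -0.02, -0.43),
            simd_float3(0.14, -0.05, -0.40)
        ]

        // Hypothetical type wrapping the app's placement logic
        let anchor = WireframePlacer().anchorPosition(for: recordedPoints)

        // For illustration we assume the placer returns the centroid of the points,
        // so the same input must always produce the same output
        XCTAssertEqual(anchor.x,  0.12,   accuracy: 0.001)
        XCTAssertEqual(anchor.y, -0.0333, accuracy: 0.001)
        XCTAssertEqual(anchor.z, -0.4133, accuracy: 0.001)
    }
}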

Further opportunities

In this highly recommended talk (requires a free Ministry of Testing membership) BJ Aberle predicts that future AR testing will be done with game engines, with test environments built in Unity and Unreal. For me this is fascinating and makes perfect sense: the uncertainty of the real world is controlled, and at the same time we get entire 3D worlds to work with instead of just point clouds. There would probably be a lot of work involved in passing the Unity/Unreal scene as an input to ARKit, but the opportunity to define the testing standards & methodologies for a whole new class of applications is part of what makes these experiments so exciting!

From there, automated testing (both scripted and exploratory) becomes possible, which in turn allows for CI. There’s another advantage to using game engines: at companies like Novoda we are always looking to improve remote collaboration. For this project, pairing remotely on ARKit development involved one person trying things on an iPhone and either reporting back verbally or sharing their screen via QuickTime, which meant disconnecting from Xcode - not an optimal experience. Imagine entering a 3D world and interacting with the scene together with your teammates. Now, let’s take it a step further and access that world on a VR platform: the VR world is considered “reality”, and a QA will be testing the Augmented Reality objects, which are “not real”.

We are currently starting to look into this type of integration between Unity & ARKit and will report back with our findings.


In the fourth and final post of our ARKit series we looked at the testing challenges & opportunities presented by the exciting field of Augmented Reality.

The code for the demo app used can be found on Novoda’s GitHub and you can also check our ARDemoApp repo, where you can import your own models into an AR Scene without having to write a line of code.

Any comments or questions? Hit me up on Twitter @KaraviasD

