Among the almost 200 talks from Google I/O 2018, an amazing source of knowledge, I have selected the ones related to testing. Put some time aside and check out Google’s latest offerings in the Android testing space.
A great talk by Richard Gaywood and Justin Broughton about the pre-launch report and the app crawlers available in the Play Console and Firebase Test Lab. Use them to test your APK before releasing it to your clients.
How does it work?
The app crawler is not only able to find crashes and performance or security issues, but can also record videos, take screenshots, and share the logs of its execution.
The pre-launch report gives you a summary of any issues found by the app crawler and also prioritises them. It even gives you a recommendation on whether to go live or not.
If your application requires logging in, the app crawler can handle that for you. It can either log in automatically with your Google account, or you can configure the login credentials that the app crawler will use after detecting the login screen.
If your application uses deep links, they can now be tested with pre-launch reports. You can add up to three deep links to your pre-launch report configuration, and the app crawler will test them for you.
Robo Script can help you drive the behaviour of the app crawler within the areas of the application that are too complicated for it to test, such as complex text forms. A Robo Script can be recorded with the Espresso Test Recorder, which is part of Android Studio.
A tap area that is too small, a font size that is too small, contrast that is too low, or missing TalkBack annotations are just some of the accessibility flaws that pre-launch reports can detect so that the development team can improve them.
To me, pre-launch reports sound like a great way to find issues with your application at a very early stage of development and, later on, deliver high-quality applications to your customers.
More details about pre-launch report: Use pre-launch reports to identify issues
One of the most exciting announcements regarding Firebase Test Lab is the upcoming support for iOS devices. It's great news for those who work on cross-platform projects and want to stick to one cloud testing service.
You can sign up for the beta here.
Full talk: Best practices for testing your Actions (Google I/O '18)
Presented by Aylin Altiok, product manager for Actions on Google, and Nick Felker, developer programs engineer for the Google Assistant and IoT.
Before we get too far, we might want to check out another video from the conference, an intro to Actions on Google.
OK, so now we know that Actions on Google is the developer platform for Google Assistant: basically, voice commands that app developers create to be used anywhere Google Assistant is, such as on Google Home. And we know why we want our apps to integrate with Actions. The people at Google clearly think that assistants are the way forward in computing.
Right off, I enjoyed the opening of this talk "Let's talk about why testing is so important." As a tester, you've piqued my interest :)
Aylin starts off with some data from Play Store statistics: folks who give one-star reviews very often mention stability and bugs in their comments, and a large majority of users uninstall an app if they see stability issues. On the flip side, five-star reviews very often mention usability and stability.
Then we get into an example of how to build an action using DialogFlow. DialogFlow takes care of all the natural language processing and machine learning business when you are building your action. Basically, you come up with the phrases that a user might use to interact with your service, and use DialogFlow to pick out the key parts of the phrase and map them to an intent.
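To make "mapping phrases to an intent" concrete, here is a toy sketch in plain Node.js. DialogFlow does this with trained natural language models; this little keyword matcher only illustrates the concept, and the intent names and phrases are invented for the example:

```javascript
// Toy illustration of phrase-to-intent mapping. DialogFlow uses trained
// NLU models for this; a keyword matcher only shows the basic idea.
// Intent names and keyword lists here are invented for the example.
const intents = [
  { name: 'find_trails', keywords: ['trail', 'trails', 'hike'] },
  { name: 'get_weather', keywords: ['weather', 'forecast', 'rain'] },
];

function matchIntent(utterance) {
  const words = utterance.toLowerCase().split(/\W+/);
  for (const intent of intents) {
    if (intent.keywords.some((k) => words.includes(k))) {
      return intent.name;
    }
  }
  return 'fallback'; // nothing matched: hand off to a fallback intent
}

console.log(matchIntent('Find trails nearby'));      // find_trails
console.log(matchIntent('What is the weather like?')); // get_weather
```

In DialogFlow you also annotate which parts of a phrase are parameters (a location, a date), and the service extracts those for you along with the intent.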
Building an action doesn't appear that difficult with the tools they've provided. I'll have to give that a go.
Nick steps up to start talking about the part we're here for: how are we going to test our action? Once you've got your action, the first step in testing is to use the [Actions on Google Simulator](https://developers.google.com/actions/tools/simulator), which you access through the Actions Console. The tester can specify various input text or voice commands to be sent to the action, and see the translated request input and corresponding responses. The tester can continue sending text or voice commands as a contextual workflow through the app, and verify each response from the service.
While this seems pretty easy to execute, it feels like black-box manual testing. You wouldn't want to have to do this over and over, and it doesn't seem exhaustive.
Let's talk about how to make this testing more repeatable and exhaustive. Next, they introduce the Automated Testing Library for Actions on Google. The testing library is built on Node.js and works with all the existing Node testing infrastructure. The library allows us to send unstructured queries to the system (users can say anything) and retrieve the appropriate SSML (Speech Synthesis Markup Language) responses.
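Because the responses come back as SSML, assertions often want just the spoken text with the markup stripped out. Here is a small helper of my own for that (it is not part of the testing library, and the sample response is invented):

```javascript
// Hypothetical helper (not part of actions-on-google-testing): extract the
// plain spoken text from an SSML response so assertions can ignore markup.
function ssmlToText(ssml) {
  return ssml
    .replace(/<[^>]+>/g, '') // strip SSML tags like <speak> and <break/>
    .replace(/\s+/g, ' ')    // normalise any leftover whitespace
    .trim();
}

const response =
  '<speak>Found <emphasis>Glassboro Wildlife Trail</emphasis>. ' +
  '<break time="500ms"/>Anything else?</speak>';
console.log(ssmlToText(response));
// Found Glassboro Wildlife Trail. Anything else?
```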
OK, so here's the sample Node.js test script that Nick uses to demo how easy it is to test:
What this is going to do is send the user's queries to the service and parse the responses, just as we'd expect.
Let's break down what's happening and pick apart the basics:
A couple of declarations at the top. They're going to use Chai, the assertion library for Node, and Winston, the Node.js logging library. They've also got a declaration for the coordinates of a place to be used as the location we're going to query for.
Then they include the testing library, actions-on-google-testing. This is the nuts and bolts that is going to allow them to easily send queries and check the responses. They also need to include in the test script the credentials for accessing the action, which can be grabbed from the [developers console](https://console.developers.google.com/). It should look like:
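The credentials snippet did not survive in this copy of the post. As a rough sketch, the file downloaded from the console is an OAuth2 client credentials JSON, with a shape something like the following (field names assumed from the usual OAuth2 format, not taken from the talk):

```json
{
  "client_id": "your-client-id.apps.googleusercontent.com",
  "client_secret": "your-client-secret",
  "refresh_token": "your-refresh-token",
  "type": "authorized_user"
}
```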
Now they start the test, named "Find trail in Glassboro with card", and set the user's lat/long using action.setLocation.
Next they start the conversation with the service the same way a user would, by calling "trail blazer", and once they get a response, they send their first query to the service: the text version of just what a user would say, "Find trails nearby". Then they wait for the response, which will be the service asking to be granted permission to use the street address, to which they reply 'yes'.
After that they validate that the first thing the service responds with is the correct park name, and that the second thing the service responds with is a question asking if there's anything else. They also validate the title and subtitle of the cards that are returned.
That's it. Pretty straightforward. If you have any familiarity with testing with Node, none of this should be black magic. Build your test just the way the user would send questions to the service and validate the responses.
Stay tuned, we are preparing Part 2 for you!
We plan, design, and develop the world’s most desirable software products. Our team’s expertise helps brands like Sony, Motorola, Tesco, Channel4, BBC, and News Corp build fully customized Android devices or simply make their mobile experiences the best on the market. Since 2008, our full in-house teams have worked from London, Liverpool, Berlin, Barcelona, and NYC.
Let’s get in contact