
TestBash Brighton 2018

Novoda has a reputation for building the most desirable apps for Android and iOS. We believe that living and sharing a hack-and-tell culture is one way to maintain top-shelf quality.

On the 16th of March, part of the Novoda QA team - Bart, Jonathan, and Sven - went to the TestBash conference in Brighton. It was our first time attending TestBash, organised by Ministry of Testing, and we would like to share our thoughts on the talks and activities we found most interesting.

Bart Ziemba
Mobile Software Tester / QA

My favourite talk was "Communities of Practice, the Missing Piece of Your Agile Organisation" by Emily Webber. It was not strictly about testing, but rather about how testers can better interact with and learn from each other while still being part of cross-functional, Agile teams. Emily proposed that this can be achieved by building communities of practice, which are

groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.
Wenger-Trayner

Emily Webber - Communities of Practice, the Missing Piece of Your Agile Organisation

I very much liked the journey she presented: from siloed teams of project managers, designers, developers, and testers sitting separately, through cross-functional teams, to communities of practice.


At Novoda we run guilds: groups that focus on various areas of expertise. I therefore particularly enjoyed, and found useful, the part on the ways companies and employees can benefit from participating in communities of practice, and how building and running them can be improved.

Benefits for a company:

  • The possibility to try out new things in a safe environment, without the concerns that come with trying them out when building for production.
  • Members share knowledge, which reduces duplication.
  • The community creates its own practices and approaches to tackling various problems, which helps to scale and sell services to potential clients.
  • The creation of a collective knowledge base.

Benefits for employees:

  • Increased confidence, as you get support from the community.
  • Opportunities to learn from others.

How to build successful communities of practice:

  • Every member should feel a sense of community.
  • It should be a common effort of the community, not of individuals.
  • Provide a safe environment and space where members of the community can express their ideas and feelings without limitations or fear.
  • Communities should have leaders, and leaders need sufficient time to lead the community.
  • Set up common values, goals, and a mission for the community; these keep it moving forward.
  • Meet often and systematically.
  • Create a backlog of tasks you want to work on.

I hope that the learnings from Emily's talk will soon be applied to our Testing guild at Novoda, and to others too. To help with that, we are going to get Emily's book so everyone can find out how to build successful communities of practice.


Jonathan Taylor
Head of QA

I really liked the talk Rosie Hamilton gave on Discovering Logic in Testing. I found it interesting for a number of reasons. Firstly, she related everything back to game testing, which I've never done. Secondly, she presented different types of logical reasoning, the origins of various philosophies of logic, and how testers apply logic every day.

It started out a bit like a semester at university: we're going to need some basic terminology so that we're all on the same page. Questions in logic are propositions. We use logic to prove that a proposition is either affirmed or denied, that is to say, judged to be true or false. A theory that tries to answer a question is a hypothesis, and a hypothesis, based on the probability that it can be proven, can be weak or strong.

Got it? Ok.

Deductive Reasoning
The first kind of logic introduced was deductive reasoning. Testers, and everyone else for that matter, use this type of logic in problem solving every day. Conclusions drawn from this kind of reasoning are very strong, but for a deduction to hold, its premises must be true. The example Rosie gives of deductive reasoning:

  • The vending machine number for crisps is 06.
  • The cost of crisps is 50p.
  • Hypothesis: if I put in 50p and press 06, I'll get a bag of crisps.

In a tester's world:

  • I found a crashing bug on every version of Android Nougat.
  • I’ve got a Samsung S8 with Android Nougat on it.
  • Hypothesis: I’ll see the app crash on this phone.

However, it is always possible that one of the premises was not correct in the first place.
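
Expressed as an automated check, that deduction might look like the sketch below. It's a minimal illustration: the device list is made up and the launch helper is faked, rather than a real device-farm API.

```python
import pytest

# Hypothetical Nougat device pool; the names are illustrative only.
NOUGAT_DEVICES = ["Nexus 5X / 7.1.2", "Pixel / 7.1.1", "Samsung S8 / 7.0"]

def launch_app_and_check_for_crash(device: str) -> bool:
    """Stand-in for installing and launching the app on a real device.
    Faked here so the sketch runs; a real suite would drive a device farm."""
    return "/ 7." in device  # every Nougat (7.x) device reproduces the crash

# Premises: the bug crashes the app on every Nougat version, and each of
# these devices runs Nougat. Deduction: the app crashes on every device.
@pytest.mark.parametrize("device", NOUGAT_DEVICES)
def test_crash_reproduces_on_nougat(device):
    assert launch_app_and_check_for_crash(device)
```

If one of these tests passed instead of crashing, it would be the premise ("it crashes on every Nougat version") that deserved suspicion, not the logic.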

Inductive Reasoning
Inductive Reasoning is basically the opposite of deductive reasoning. From specific observations, we can make broad generalizations. And then we draw conclusions based on the data we’ve recorded.

At this point Rosie used 19th century philosopher John Stuart Mill's methods of inductive reasoning to determine the root cause of a bug found in a game called Rift. I wouldn't do them justice by recounting them in full, but she drew on the methods of agreement, difference, their joint method, and concomitant variations.

See? Sounds a lot like a Uni class. But as far as software testing goes, it boils down to this:

  • What is common to each failure.
  • Spot the difference between success and failure.
  • A mixture of both of those things.
  • The more broken it is, the more something has happened.
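
Those heuristics are mechanical enough to sketch in code. Below is my own toy illustration (not from the talk) of the first two: describe each test run as a set of attributes, intersect the failures to find what they share, then subtract anything a passing run also had.

```python
# Each test run described as a set of attributes (toy data for illustration).
failures = [
    {"nougat", "notification", "wifi"},
    {"nougat", "notification", "cellular"},
    {"nougat", "notification", "bluetooth"},
]
successes = [
    {"oreo", "notification", "wifi"},
    {"marshmallow", "notification", "cellular"},
]

# Method of agreement: what is common to each failure?
common_to_failures = set.intersection(*failures)

# Method of difference: of those, what never shows up in a success?
suspects = common_to_failures - set.union(*successes)

print(common_to_failures)  # {'notification', 'nougat'}
print(suspects)            # {'nougat'}
```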

Abductive Reasoning
Which brought us to abductive reasoning and introduced another 19th century philosopher, Charles Sanders Peirce, who thought Mill was full of it. Oooh, philosopher throw-down.

It's widely misquoted that Sherlock Holmes had amazing powers of deductive reasoning, when in fact he had abductive reasoning powers. Abductive reasoning uses a best guess, or inference to the best explanation. The example Rosie first gives:

  • Sherlock sees someone has a suntan.
  • He abduces that they have recently been away on holiday.
  • He sees that there is a band of skin around their wrist with no tan.
  • He abduces that this person usually wears a watch, but is not wearing it today.

Classic Sherlock Holmes.

An example bringing it back to software testing:

  • I've observed a bug in the Android application running Nougat when a notification is received.
  • I've observed the bug on multiple devices running Nougat.
  • I do not observe any bug in notifications when the same application is deployed on any other OS version.
  • I'd abduce that there is something in the way the app calls the Nougat notification bar.

The Logic of Testing
We were all testers in one capacity or another at this conference, but I'd be surprised if more than a few had classical training in logic and philosophy. That's what I found most interesting about this talk: it gives names and categories to the types of thought processes we use all the time to test, find bugs, find root causes, and solve problems.

Paraphrasing Rosie here:
When we find a problem interacting with an application and try to make it happen again, we are collecting data. When we generate ideas about why it is happening, we are generating hypotheses. When we prove or disprove them, we are collecting more data, and that data could cause the ideas to change. In the end, we look for the simplest explanation that satisfies the data. If the problem can't be explained: collect more data, see if the hypotheses change, and report the problem. The side effect of doing all this is gaining strong observation and reasoning skills.
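
As a purely schematic sketch (my own paraphrase, not Rosie's), that loop can even be written down in code, with toy observations and candidate explanations:

```python
def simplest_explanation(observations, hypotheses):
    """Keep only the hypotheses that satisfy every observation collected
    so far, then prefer the simplest one. None means: collect more data,
    revisit the hypotheses, report the problem."""
    surviving = [h for h in hypotheses
                 if all(h["fits"](o) for o in observations)]
    return min(surviving, key=lambda h: h["assumptions"], default=None)

# Toy data: two reproduction attempts and two candidate explanations.
observations = [{"os": "nougat", "crash": True},
                {"os": "oreo", "crash": False}]
hypotheses = [
    {"name": "the app always crashes", "assumptions": 1,
     "fits": lambda o: o["crash"]},
    {"name": "the app crashes only on Nougat", "assumptions": 2,
     "fits": lambda o: o["crash"] == (o["os"] == "nougat")},
]
print(simplest_explanation(observations, hypotheses)["name"])
# -> the app crashes only on Nougat
```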


Sven Kröll
Testing Toolsmith

I adored Matt Long's talk on programmable infrastructure and why we should test it like our production code. Thanks to the DevOps movement, teams are working closely together; unfortunately, quality engineers are often not involved in this. Programmable infrastructure is becoming a pattern used by more and more projects, and team members are becoming well acquainted with the topic. However, it seems that history repeats itself: in the early days of development, testing was not very common, which led to a lot of untested and, in turn, unmaintainable code. Programmable infrastructure appears to be following the same path. The talk tried to tackle this exact problem.

He started by pointing out the difference between DevOps and programmable infrastructure:

...DevOps is about culture, teams, and processes, while programmable infrastructure is a collection of tools and techniques for defining and managing infrastructure in code.

The Problem
Matt continued by showing us the problem he had encountered in his previous position, where the team built a broker for cloud applications and he was the person who had to test it.
He broke the problem down into two parts: a web-testing part, which is, in the end, daily routine for a seasoned tester, and an infrastructure part, which was utterly new to him.

The main things he thought could go wrong were the following (sketched as tests after the list):

  1. It doesn't even deploy.
  2. It does deploy, but it is configured wrong.
  3. It is unusable for the users.
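
Those three levels translate almost directly into checks. Here is a minimal pytest sketch, one test per level; the broker URL and endpoints are invented for illustration, and each level would of course be a whole suite in practice.

```python
import requests  # pip install requests

BROKER_URL = "https://broker.example.com"  # hypothetical deployment

def test_it_deploys_at_all():
    # Level 1: does the freshly deployed service answer at all?
    response = requests.get(f"{BROKER_URL}/health", timeout=10)
    assert response.status_code == 200

def test_it_is_configured_correctly():
    # Level 2: it deployed, but did it pick up the right configuration?
    config = requests.get(f"{BROKER_URL}/config", timeout=10).json()
    assert config["environment"] == "production"

def test_a_user_can_actually_use_it():
    # Level 3: an end-to-end, user-level journey works.
    catalog = requests.get(f"{BROKER_URL}/catalog", timeout=10)
    assert catalog.status_code == 200
    assert catalog.json()["services"], "expected at least one service listed"
```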

It became undeniable that testing an application and testing programmable infrastructure have some similarities, which brought him to his next point: Tooling.

The Tooling
His first tool of choice was a linter, which helped keep the project sane in an ever-changing context. The next step up the pyramid was unit tests, which he said you could implement with a number of different tools depending on your tech stack. However, he strongly pointed out that, whatever you do, bash scripts are always the worst option, as they tend to become overcomplicated.
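
What a unit test for infrastructure code looks like depends on the stack; as one hedged illustration, here is a plain pytest check of a small config-rendering function (entirely made up), catching mistakes before anything is deployed:

```python
import pytest

def render_nginx_upstream(name, hosts, port=8080):
    """Tiny config generator standing in for real infrastructure code."""
    if not hosts:
        raise ValueError("an upstream needs at least one host")
    servers = "\n".join(f"    server {host}:{port};" for host in hosts)
    return f"upstream {name} {{\n{servers}\n}}"

def test_upstream_lists_every_host():
    config = render_nginx_upstream("broker", ["10.0.0.1", "10.0.0.2"])
    assert "server 10.0.0.1:8080;" in config
    assert "server 10.0.0.2:8080;" in config

def test_empty_host_list_is_rejected():
    with pytest.raises(ValueError):
        render_nginx_upstream("broker", [])
```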

The integration-test tools he used provided the next level of confidence. He showed some examples of frameworks he had identified during his projects, like Serverspec, Goss, and the native test solutions built with a cloud provider's SDK.
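
As a sketch of the SDK route (assuming AWS and boto3 purely for illustration; the tag and region are made up), an integration test can ask the provider directly whether what was declared actually exists:

```python
import boto3  # pip install boto3; assumes AWS credentials are configured

def test_broker_instance_is_up():
    ec2 = boto3.client("ec2", region_name="eu-west-1")
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Name", "Values": ["broker"]}]
    )["Reservations"]
    instances = [i for r in reservations for i in r["Instances"]]
    assert instances, "expected at least one instance tagged 'broker'"
    assert all(i["State"]["Name"] == "running" for i in instances)
```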

The summary
He summarised the advantages, the disadvantages, and the things that could go south.
The good parts were that we get tests for each layer, just as when testing production code, and that it is very much doable. The more annoying parts were that testers have yet another framework to maintain, and that the many context and language switches slow things down. Lastly, he pointed out that testing infrastructure can be very slow and expensive, which can lead management to doubt whether it makes sense to continue.

All in all, I was very interested in the topic and learned a lot. I hope I will have the chance to use the techniques I've learned in one of my next projects.

UnExpo

The UnExpo was a new attempt to modernise the expos that are common at conferences. An expo is usually a place where companies try to sell their services and tools, or try to lure testers into their nets. Richard Bradshaw tried a new approach: while unconferences are becoming more and more common, unexpos are very much a new thing. He invited us, the participants, to create a stand about a topic we would like to talk about or share. Sven ran a booth where he wanted to share ideas on architecting automation solutions with other testers. Other stands were about developer/tester relationships and about people looking for jobs. Another exciting one was about the Dojo and how testers learn, plus a small game of stapling. I found the idea of an unexpo very promising, and I hope it will become another TestBash tradition.

Openspace

Every TestBash is followed by an open space on the Saturday, where some of the participants prolong the TestBash feeling and exchange views on a lot of topics. The format of an open space is, as the name suggests, less strict and up to the people who join. We started in the morning by announcing the topics, which ranged from playing test-related games like Risk Storming, Knight Rider, or Exploding Kittens, to more serious discussions of test techniques and tools. My programme was divided between playing some games about testing and a debate on pairing and how to introduce it. I also attended a discussion about automation in testing: on the one hand, how we can tackle it, and on the other, how we can enable testers to create better automation solutions. I enjoyed it very much and learned a lot at this event. It also felt very good to share some of my knowledge with less experienced folks.

Sum up

All in all, TestBash Brighton gave us lots of new ideas we would like to apply to our day-to-day work. What is more, it allowed us to interact with a great community of testers from around the world. We are looking forward to future TestBashes.
