On the 16th of March, part of the Novoda QA team - Bart, Jonathan, and Sven - went to the TestBash conference in Brighton. It was our first time attending TestBash, organised by Ministry of Testing, and we would like to share our thoughts on the talks and activities we found most interesting.
My favourite talk was called "Communities of Practice, the Missing Piece of Your Agile Organisation" by Emily Webber. It was not strictly about testing, but rather about how testers can better interact with and learn from each other while still being part of cross-functional, Agile teams. Emily proposed that this can be achieved by building communities of practice, which are
groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.
I very much liked the journey she presented: from siloed teams of project managers, designers, developers, or testers sitting separately, through cross-functional teams, to communities of practice.
At Novoda we run guilds, which are groups that focus on various areas of expertise. I therefore particularly enjoyed, and found useful, the part on how companies and employees can benefit from participating in communities of practice, and on how building and running them can be improved.
Benefits for a company:
Benefits for employees:
How to build successful communities of practice:
I hope that the lessons from Emily's talk will soon be applied to our Testing guild at Novoda, and to others too. To help with that, we are going to get Emily's book so everyone can find out how to build successful communities of practice.
I really liked the talk Rosie Hamilton gave on Discovering Logic in Testing. I found it interesting for a number of reasons. Firstly, she related everything back to game testing, which I've never done. Secondly, she presented different types of logical reasoning, the origins of various philosophies of logic, and how testers apply logic every day.
It started out a bit like a semester at university. We're going to need some basic terminology so that we're all on the same page. Questions in logic are propositions. We use logic to prove that a proposition is either affirmed or denied, that is to say, judged to be true or false. A theory that tries to answer a question is a hypothesis, and a hypothesis can be weak or strong, depending on how likely it is to be proven.
Got it? Ok.
The first kind of logic introduced was Deductive Reasoning. Testers, and everyone else for that matter, use this type of logic in problem solving every day. Conclusions drawn from this kind of reasoning are very strong, but in order to use deductive reasoning to logically solve a problem, the premises it starts from must be true. The example that Rosie gives of deductive reasoning:
In a tester's world:
However, it is possible that the initial premise was not correct in the first place.
Inductive Reasoning is basically the opposite of deductive reasoning. From specific observations, we can make broad generalizations. And then we draw conclusions based on the data we’ve recorded.
At this point Rosie used several of 19th-century philosopher John Stuart Mill's methods of inductive reasoning to determine the root cause of a bug found in a game called Rift. I wouldn't do them justice by recounting them here, but she used:
See? Sounds a lot like a Uni class. But as far as software testing goes, it boils down to this:
Which brought us to Abductive Reasoning and introduced another 19th-century philosopher, Charles Sanders Peirce, who thought Mill was full of it. Ooh, philosopher throwdown.
It's a common misconception that Sherlock Holmes had amazing powers of Deductive Reasoning, when in fact his powers were Abductive. Abductive reasoning uses a best guess, or an inference to the best explanation. The example that Rosie first gives:
Classic Sherlock Holmes.
An example bringing it back to software testing:
The Logic of Testing
We're all testers in one capacity or another at this conference, but I'd be surprised if more than a few of us had classical training in logic and philosophy. That's what I found most interesting about this talk: it gives names and categories to the thought processes we use all the time to test, find bugs, find root causes, and solve problems.
Paraphrasing Rosie here:
When we find a problem interacting with an application and try to make it happen again, we are collecting data. When we generate ideas about why it is happening, we are generating hypotheses. When we prove or disprove them, we are collecting more data, and that data may cause our ideas to change. When we settle on an explanation, it should be the simplest one that satisfies the data. If the problem can't be explained: collect more data, see whether the hypotheses change, and report the problem. The side effect of doing all this is gaining strong observation and reasoning skills.
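This loop lends itself to a tiny sketch. The example below is my own illustration, not from the talk: the bug, the observations, and the hypotheses are all made up. A hypothesis survives only if it explains every observation, and the shortest surviving description stands in for the "simplest" explanation.

```python
# Toy model of the abductive loop: collect observations, discard hypotheses
# they contradict, then prefer the simplest surviving explanation.

observations = ["crash on rotate", "only on tablets", "only since v2.1"]

# Each hypothetical cause maps to the set of observations it would explain.
hypotheses = {
    "layout change in v2.1 breaks on large screens":
        {"crash on rotate", "only on tablets", "only since v2.1"},
    "low memory on older devices":
        {"crash on rotate"},
}

def best_explanation(hypotheses, observations):
    obs = set(observations)
    # Keep only hypotheses that account for every observation so far...
    viable = {name: explains for name, explains in hypotheses.items()
              if obs <= explains}
    # ...and prefer the "simplest" one (here: the shortest description).
    return min(viable, key=len) if viable else None

print(best_explanation(hypotheses, observations))
```

If no hypothesis survives, the function returns `None` - the "collect more data" case in Rosie's loop.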
I adored Matt Long's talk on programmable infrastructure and why we should test it like our production code. Thanks to the DevOps movement, teams are working closely together; unfortunately, quality engineers are often not involved in this. Programmable infrastructure has become a pattern used by more and more projects, and team members are becoming well acquainted with the topic. However, it seems that history repeats itself. In the early days of development, testing was not a common discipline, which led to a lot of untested and, in turn, unmaintainable code. Programmable infrastructure appears to be following the same path, and the talk tried to tackle this exact problem.
He started by pointing out the differences between DevOps and Programmable Infrastructure:
...DevOps is about culture, teams, and processes, while Programmable Infrastructure is the collection of tools and techniques that fall under that umbrella.
Matt continued by showing us a problem he had encountered in his previous position, where the team built a broker for cloud applications and he was the person who had to test it.
He broke the problem down into two parts: a web testing part, which is, in the end, a daily routine for a seasoned tester, and an infrastructure part, which was completely new to him.
The main things he thought could go wrong were:
It became undeniable that testing an application and testing a programmable infrastructure have some similarities, which brought him to his next point: Tooling.
His first tool of choice was a linter, which helped keep the project sane in an ever-changing context. The next step up the pyramid was unit tests, which he said you could implement with a number of different tools depending on your tech stack. However, he strongly pointed out that whatever you do, bash scripts are always the worst option, as they tend to become overcomplicated.
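As a rough illustration of what a unit test for infrastructure code might look like - my own sketch, not from the talk, and `firewall_rule` is a hypothetical helper that renders a piece of configuration - plain assertions can check the rendering logic without touching any real cloud resources:

```python
def firewall_rule(port, source="0.0.0.0/0"):
    """Render one firewall rule as a dict, validating inputs up front."""
    if not 0 < port < 65536:
        raise ValueError(f"invalid port: {port}")
    return {"protocol": "tcp", "port": port, "source": source}

# Unit tests: fast checks that need no running infrastructure.
assert firewall_rule(443) == {"protocol": "tcp", "port": 443,
                              "source": "0.0.0.0/0"}
assert firewall_rule(22, source="10.0.0.0/8")["source"] == "10.0.0.0/8"
```

Because the logic lives in a real language rather than a bash script, the validation is easy to test and the error cases stay explicit.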
The integration test tools he used formed the next level of confidence. He showed some examples of frameworks he had identified during his projects, like Serverspec, Goss, and native test solutions built with a cloud provider's SDK.
He summarised the advantages, disadvantages and the things which could go south.
The good parts were that, as with production code, there are tests for each layer, and that it is very much doable. The more annoying parts were that testers have yet another framework to maintain, and that the many context and language switches slow things down. Lastly, he pointed out that infrastructure can be very slow and expensive, which can lead management to doubt whether it makes sense to continue.
All in all, I found the topic very interesting and learned a lot. I hope I will have the chance to use these techniques in one of my next projects.
The unexpo was a new attempt to modernise the expositions that are common at conferences. An expo is usually a place where companies try to sell their services and tools, or try to lure testers into their nets. Richard Bradshaw tried a new approach. While unconferences are becoming more and more common, unexpos are very much a new thing. He invited us, the participants, to create a stand on a topic we wanted to talk about or share. Sven ran a booth at the unexpo to share ideas on architecting automation solutions with other testers. Other stands were about developer/tester relationships and about people looking for jobs. Another exciting one was about the dojo and how testers learn, plus a small game of stapling. I found the idea of an unexpo very promising, and I hope it will become another TestBash tradition.
Every TestBash is followed by an open space, where some of the participants prolong the TestBash feeling into Saturday and exchange ideas on many topics. The format of an open space is, as the name suggests, less strict and up to the people who join. We started in the morning by announcing the topics, which ranged from playing test-related games like risk storming, knight rider, and exploding kittens to more serious discussions of testing techniques and tools. My own programme ranged from playing testing games to a debate on pairing and how to introduce it. I also attended a discussion about automation in testing: how we can tackle it, and how we can enable testers to create better automation solutions. I enjoyed it very much and learned a lot. It also felt good to share some of my own knowledge with less experienced folks.
All in all, Brighton's TestBash gave us lots of new ideas we would like to apply to our day-to-day work. What's more, it allowed us to interact with a great community of testers from around the world. We are looking forward to future TestBashes.
We plan, design, and develop the world’s most desirable software products. Our team’s expertise helps
brands like Sony, Motorola, Tesco, Channel4, BBC, and News Corp build fully customized Android devices
or simply make their mobile experiences the best on the market. Since 2008, our full in-house teams have worked
from London, Liverpool, Berlin, Barcelona, and NYC.
Let’s get in contact