Tuesday, 25 October 2016

TestBash Manchester 2016


Last week was awesome! Why? Because it was time for TestBash again, but this time in my hometown of Manchester! I was really looking forward to seeing familiar faces, meeting new ones and learning a ton about testing, especially in such familiar surroundings (my current workplace is barely a 10-minute walk from the main conference venue!).

Pre-TestBash meetup

If you’ve never been to a TestBash before, one of the best parts is the socialising before and after. Usually there is a meetup, hosted on meetup.com by Software Testing Club, on the night before the main conference day. This is your opportunity to meet fellow attendees and speakers, and even say hi to the organisers, Rosie and Richard. I fully recommend attending and meeting people you’ve never spoken to before; we’re a friendly bunch with plenty of stories to share!
At Manchester, one of the major sponsors, RentalCars, hosted the meetup in their very impressive offices in the city centre. I’m definitely a little bit jealous of their unique beach-themed cafeteria!

Main Conference day

The next day was the main conference day at the Lowry Theatre in Salford Quays. I was unfortunately a little late and just missed out on getting involved with one of the Lean Coffee sessions, but in the end it was OK because I could meet the rest of my team from work (who had fortunately been given budget to come along too!).
The talks for this TestBash had a common theme which I would summarise as “psychology and learning”. The first five talks in particular followed the psychology thread, starting with James Bach’s talk on critical and social distance.

The talks
I was really looking forward to James’ talk, partly because his previous talks inspired me to start this blog and get more involved with the community in the first place, but also because the topic is close to my heart: my career has been driven by it. When I started as a games tester, I was effectively working as offshore QA, with pretty poor and slow communication channels to developers. Since then I’ve been driven to reduce social distance and prove that I can maintain critical distance even when I become very intimate with the software I test. James’ talk covered exactly this and provided some useful language and framing to explain it. As always, I learnt so much from observing James’ style of speaking too!

Following on from James were two talks from opposite ends of a conversation: Iain Bright on the psychology of asking questions and Stephen Mounsey on listening. I took plenty of notes during these talks because I know I’d like to improve in both of these areas. I was actually on a little bit of a personal mission to hold back my excitement and listen carefully to other testers during the event, because I’ve felt I’ve talked too much before. The main takeaway from Iain’s talk was to think carefully about the questions you want to ask and why you’re asking them, while Stephen’s made me aware of how often I’m thinking about people’s words (and my own response) rather than actually listening to what they have to say before they’ve finished. There was plenty of food for thought here that I’d like to slow down and keep in mind in future.

Speaking of slowing down and keeping things in mind, I loved Kim Knup’s talk on positivity! I think every tester out there has felt negative to some degree, simply due to the nature of reporting problems. I definitely catch myself complaining a lot when things aren’t going great, so I’m going to try to take on board her ideas, such as noting down three positive things each day to train my brain to look out for them. I’ve already started trying to high-five people in the office to put a smile on my face, haha.

Just before lunch, Duncan Nisbet gave a talk on “shifting left” called “Testers be more Salmon!”. I was looking forward to this as I know Duncan from NWEWT and from the Liverpool Tester Gathering. The topic itself is something that I’ve been trying to encourage at work, and my colleague Greg Farrow has written about it on this blog before. Essentially the idea is to test earlier: asking questions about requirements and testability, and gathering information to both save time and catch bugs when it’s cheapest to do so. Duncan also made the great point that shared documentation doesn’t mean shared understanding; simply because something is documented, it doesn’t follow that everyone understands it the same way.

I feel the afternoon talks had a theme of learning running through them, starting with Helena Jeret-Mäe and Joep Schuurkes’ talk on “the 4 hour tester experiment”. This was an explanation of an experiment they’d like to try, based on Tim Ferriss’ book The 4-Hour Chef. The idea is to see whether you can train a tester in 4 hours, focusing on just the basics. I’d definitely encourage you to have a go at the challenge on their website, fourhourtester.net. They also shared their opinion that testing isn’t something you can simply teach and is much better learnt through practice, and I fully agree with this, especially the analogy about learning to drive a car!

Following Helena and Joep was Mark Winteringham’s talk on the deadly sins of acceptance criteria. To be honest, I was looking forward to Mark speaking because he gave a great talk at the Liverpool Tester Gathering on testing APIs, but I think I’m a little bored of hearing about the pitfalls of BDD (Behaviour Driven Development) now. That’s not to take anything away from Mark’s talk: he shared some pretty familiar examples of what not to do and has a great way with humour. But the negativity around BDD and acceptance scenarios feels like the negativity I’ve encountered around microservices, and I’d like to hear some well-thought-out, positive experience reports for a change. Personally, I don’t see a great deal of value in using BDD over more straightforward approaches such as TDD (Test Driven Development) and simply encouraging collaboration without relying on process to force it. I want to emphasise that Mark gave a great talk, though, and I’m sure those who actively use BDD took a lot away from it! I don’t mean to imply that his talk wasn’t well thought out; it’s just that my own feelings on the subject make me want to hear more about the benefits.

Next was Huib Schoots with his talk on the “path to awesomeness”, which was effectively a series of lists: great attributes for testers, areas to focus on for improvement and generally what he feels makes a great tester. Echoing the sentiments of Helena and Joep’s talk, he really emphasised the need to practice, practice, practice! One particular line of his that I really liked was “testing is a psychological and social problem as well as a technical one”.

Gwen Diagram followed Huib with her talk “Is live causing your test problems?”. If Duncan’s talk was about “shifting left”, then Gwen’s was about “shifting right”: she gave lots of great advice and ideas on how to “test in live”, such as caring about and learning from your production logs and monitoring, or using feature flags. Her talk was very on point for me, as I recently attended a meetup on microservices and have very much got DevOps on my mind at the moment, so I was very appreciative when she came along to chat about it at the Open Space the next day too!
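To give a flavour of the feature flag idea, here is a minimal sketch in Python. All of the flag names and users below are invented for illustration (nothing from Gwen’s actual talk): the point is simply that new code ships to production switched off, then gets enabled for a small group so it can be tested in live before everyone sees it.

```python
# Minimal feature flag sketch; flag names and users are made up.
FLAGS = {
    "new_checkout": {"enabled": True, "allowed_users": {"alice", "qa_team"}},
}

def is_enabled(flag_name, user):
    """True only if the flag exists, is switched on, and the user is allowed."""
    flag = FLAGS.get(flag_name)
    if flag is None or not flag["enabled"]:
        return False
    return user in flag["allowed_users"]

print(is_enabled("new_checkout", "alice"))  # True
print(is_enabled("new_checkout", "bob"))    # False
print(is_enabled("old_checkout", "alice"))  # False (unknown flag defaults off)
```

Real systems use proper flag services rather than a hard-coded dict, of course, but the safe-by-default behaviour (unknown or disabled flags return False) is the important bit for testing in live.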

Finishing the day was Beren Van Daele with his experience report on trying to make testing visible on a project he consulted on. Any talk that includes a slide reading “My Mistakes” is always going to be valuable; it’s important to share our mistakes and how we learnt from them, and Beren shared a lot! I loved his idea of creating a physical wall of bugs (out of insect pictures) to get people to recognise the bugs that needed fixing.

Overall the talks were excellent. I made lots of notes and ended the day with the now-familiar headache from trying to stuff so much into my brain. My colleagues seemed to enjoy it and learn a lot too, so all in all I was very happy.

99 second talks
At the end of the conference there is usually a section of 99-second talks, open for any attendee to stand up on stage and talk about anything they like. I intentionally decided not to do one this time because I wanted to focus on the main talks and not worry about what I was going to say later, as I did in Brighton. I also wanted to save the topic I had in my head for the following day at the Open Space.
Those that did do one, though, were great, especially a developer looking for a tester hug and Gem Hill’s talk on meditation and mindfulness. Not many people broke the 99-second limit either!

Post-conference meetup
As with the pre-conference meetup, there’s usually a meetup after the conference at a nearby bar or pub. For Manchester this was Craftbrew, barely a 30-second walk from the Lowry. Again, I fully recommend attending these, as they give you more time to chat to other attendees, especially as many people only attend the conference day. Everyone seemed to have enjoyed the day and was bubbling with ideas from the talks.

Open Space

For the first time, TestBash held an “Open Space” day on the Saturday after the conference. This was held at LateRooms’ offices in the centre of Manchester (also very impressive offices that I’m jealous of, especially the number of great meeting rooms!). I had never been to one of these before and was keen to try it out. If you’ve never been to one either, it’s basically a conference with no formal plan: the attendees come up with talks, workshops or discussions they’d like to offer, and everyone arranges a multi-track schedule that makes sense. I had no idea what to expect before I went, but I knew I would get something useful out of it, and I definitely did!


To give you an idea, some of the things that were on the schedule were a workshop on security testing using Dan Billing’s insecure app called Ticket Magpie, a workshop on OWASP’s ZAP tool and in-depth discussions on BDD, automation and how to help new testers.

As I said before, I had a topic in mind that I wanted to discuss with people, so I ran a discussion on “Testing in DevOps”. I explained my feelings on the topic and openly asked what people felt about it and where they thought testing was going. I got a lot of great notes, ideas and thoughts out of this discussion and I’ll definitely be writing up a post about it! I’m keen to talk about it at a DevOps meetup in future too.

I really enjoyed the Open Space. It gave me further chances to meet and chat to people I hadn’t met before, and I really enjoy having focused, in-depth discussions on topics, in a very similar way to a peer conference. I treated it as an opportunity to learn from very experienced peers and to have some of my own ideas and opinions challenged and improved. Hopefully I provided the same for others! I think I actually enjoyed this more than the main conference day in many respects, I guess because it gave more time to discuss and challenge ideas, as opposed to simply listening the whole time.

I’m absolutely looking forward to attending the next one in Brighton!


Once again, TestBash has been one of the best experiences of my life, and I really mean that. I absolutely adore the relaxed and friendly atmosphere; I used to consider myself quite shy, yet I’ve found it so easy to meet and chat to so many people. In just a short space of time I made so many new friends and picked up so many new ideas to think about. I’ve never looked forward to educational or social events like this, even though I’ve spent most of my life in education! If you’re only ever going to try one testing conference, then absolutely go to TestBash. I hope to see you at one.
Many thanks again to Rosie Sherry, Richard Bradshaw and everyone who helped organise, sponsor or make TestBash Manchester happen.

Sunday, 16 October 2016

The words “Testing” and “QA”


I’m not one to harp on about semantics. I mean, I care that we all understand each other, but I’m very conscious of how irritating it can be to constantly point out “you’re using that word wrong”. However, I do find it frustrating when people assume that language used in the community or in documentation is commonly agreed, understood and given the same meaning by everyone. Very frequently this is not the case, and the words “testing” and “QA” seem to be an example. As a tester, keeping these semantics in mind can be very useful in resolving misunderstandings that might otherwise be missed.

Why do I find it frustrating?

Ever since I was a newbie to testing and the tech industry, I’ve been trying to understand its language. I hear phrases or words and try to find out what they mean. Someone may teach me a word, I may read it in documentation or hear it explained in talks. Most of the time I learn words through the context in which they are used, which is probably the most natural way we all learn language.
So I learn words or phrases and believe I understand what they mean, and hence believe that I will be understood when I reuse them the same way. However, because we are creating new words and phrases all of the time, some of them don’t have a “standard” meaning. An obvious example of this is regional dialects or slang; in the UK we have many, many words to describe a bread roll.

So if I asked for a “Bacon Stotty” in London I would get a puzzled look, and if I asked for a “Muffin” I might get something entirely different from what I expected.
I personally see this as simply a natural development of language: we invent new words all of the time to describe things that are new to us, especially if we don’t already have an appropriate word to use. However, due to distance and culture, we may come up with different words to describe the same thing, or the same word may describe two different things.

The frustration for me is when I encounter people who don’t seem to accept this. The phrase “QA” is widely used in the tech world in many different contexts and doesn’t appear to have a commonly agreed definition. For this reason, I don’t like using the phrase, because I don’t believe I would be well understood most of the time. However, I hadn’t felt the need to write about this until now, because I hadn’t come across an example that justified my feelings on it.

Enough accuracy for enough understanding

I recently interviewed someone who was looking to start as a tester; they didn’t have any experience of testing and were keen to understand as much as they could about the role. I will add that they were impressive for someone with no experience! During the interview one of my colleagues explained the gist of how the development teams were structured and what they worked on. They described a tester on one team as performing “performance testing” and another, on a different team, as performing “QA”. I smiled wryly to myself about the use of the word “QA” but said nothing, because it was a reasonable, high-level description of how we might work in a very generalised sense. While I knew some of the words were misleading, it wasn’t the time or place to pick them apart because:
  1. I didn’t want to embarrass my colleague and I didn’t want to give the candidate a bad impression of our relationship.
  2. I didn’t want to spend the limited time we had in the interview explaining why those words weren’t right.
  3. The gist that was given felt enough to me for the candidate to understand how we worked, at least for now.

Recognising misunderstanding and addressing it

The candidate was happy with this explanation and I was happy that it was enough to at least give them some context from which to ask further questions. The interview continued, and at the end the candidate expressed that they were really happy we had an “Internal QA” role, because they knew of a similar role in their current company which involved checking products met guidelines and standards set out by governing bodies. They felt this was a role they would like to start with because it would give them an easier step into testing and more technical roles (they openly admitted they didn’t have much technical knowledge or experience but wanted to learn).
Now it became apparent that the gist we had given was not appropriate, because the candidate had understood the phrase “QA” differently to how my colleague had meant it. The candidate clearly expressed a desire for such a role, particularly because they were keen to have some guidance. They liked the sound of QA because it sounded more scripted and more guided, and therefore an easier leap into the tech world. I immediately explained to the candidate that in the context of this business we don’t have a role like that, and that the roles are really far more exploratory in nature. I then had to explain what I define as “QA” and “testing”, what the difference is, and why my colleague had used them interchangeably.
This real example drove home for me that the phrase “QA” is misused and misunderstood. It can have multiple interpretations, and in this case it could have led an interview candidate into accepting a job completely different from what they had envisioned.

What are my definitions of “QA” and “Testing”?

My definitions are:
  • “QA” or “Quality Assurance” involves checking that a piece of software conforms to a predetermined set of qualities. This can come from legal requirements, industry standards or certification. QA is generally scripted in nature.
  • “Testing” involves exploring a piece of software to discover information about it. While it may include the use of checking predetermined sets of qualities it focuses on the unknown rather than the known. Testing is generally exploratory in nature.
I do not dare suggest these are commonly agreed definitions, and I definitely do not go around correcting people on this; I have learned that doing so is pretty irritating and counterproductive. However, I use these definitions to help me identify when I, or someone else, have misunderstood these phrases. In other words, I’m aware that these words aren’t commonly understood the same way, and I choose to clarify my understanding this way when the discussion comes up.

No one is at fault here

I want to emphasise that in the interview situation I refer to, no one was at fault. I do not expect my colleague to understand all of the semantics of testing, and I do not expect a newbie tester to be aware of them either. I’m simply observing that it is useful to keep these semantics in mind, and to consider how people may be silently misunderstanding each other without realising it.
I don’t think we can really do much to prevent this divergence of meanings of words and phrases; I personally feel it is a natural flow of language. The English language is full of words that have many contextual meanings as it is. Nor can we “know” all of these meanings. All we can do is share our different meanings of words, raise each other’s awareness of them, and perhaps come to some greater consensus on definitions.

Neat ideas from some recent meetups


The last few weeks I’ve been to quite a few testing meetups, and there were some notable ideas that I really loved. Plus, I like promoting that these events exist: if you haven’t been to one, find one near you and go! (Or if there isn’t one, start one!)

Challenges of testability

I had no idea about this one until I saw a tweet about free tickets from Ash Winter. It was a free one-day, two-track conference held in Leeds on the 20th of September. I thoroughly enjoyed one of the workshops, by Clem Pickering and Mike Grimwood, on testability, which involved designing a toaster (and reminded me very much of this TED talk).

What I loved about this workshop was the visual demonstration of how people can take many different interpretations from vague requirements, the assumptions we make, and how asking questions about testability helps drive out those assumptions. They used James Bach’s testability heuristics as a tool to help people explore different kinds of testability, which generated some great discussion and ideas in the workshop. I loved its simplicity and visual impact so much that I’d like to have a go at running it at work when I can find a good time.

Using data to drive testing

The following week was the Liverpool Tester Gathering, which featured my old manager from Sony, Gaz Tynan, talking about the visual methods they use to plan and review test coverage. This was another highlight for me, as again I love the visual impact. Gaz talked through how they collect data from both exploratory tests and automated checks and map it onto a game map (similar to Google Maps). As you can imagine, many games at Sony have a 3D virtual world to explore and test, so they can represent their test data on a physical map too. He then demonstrated how they can see where their test coverage is lacking, or where they may want to explore more, through examples like heat maps of framerate drops or of where bugs were clustering together.
Seeing this visual representation of test coverage really got me thinking about how I could achieve similar results back at work. It was really inspiring, and yet more evidence for me that visual representations of ideas, problems or data are so appealing and compelling.
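To give a rough idea of the kind of aggregation behind a bug heat map, here is a tiny Python sketch. The coordinates are entirely made up (this is my guess at the general shape of the idea, not anything from Gaz’s actual tooling): bug locations from a 2D world are binned into grid cells, so cells with high counts show where bugs cluster on the map.

```python
from collections import Counter

def bug_heatmap(bug_positions, cell_size):
    """Bin 2D bug coordinates into grid cells and count bugs per cell.

    Returns a dict mapping (cell_x, cell_y) -> bug count, so dense
    cells highlight areas of the map where bugs cluster together.
    """
    counts = Counter()
    for x, y in bug_positions:
        cell = (int(x // cell_size), int(y // cell_size))
        counts[cell] += 1
    return dict(counts)

# Hypothetical bug coordinates from a game world, binned into 100-unit cells.
bugs = [(12, 34), (55, 60), (61, 72), (940, 310), (951, 305)]
print(bug_heatmap(bugs, cell_size=100))  # {(0, 0): 3, (9, 3): 2}
```

The same binning works for any positional data you log, such as framerate samples; you would just aggregate an average per cell instead of a count before colouring the map.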

Tuesday, 4 October 2016

Providing value beyond bugs


I’ve had several experiences of working on projects where a manager has proclaimed “we don’t need testers for this project, we’re not bothered about bugs”. When I started testing six years ago, I would have felt this was wrong simply because “bugs” can take many forms. Nowadays, however, I’ve come to realise that testers provide much more than “bugs”. I still find it incredibly difficult to explain this to people (particularly managers), and it has only been through bad experiences that I’ve felt justified in arguing the case.

Why people might think they don’t need testing

I have now worked with several projects where I’ve been told that testing wasn’t required, the reasons have varied:
  • “This is an internal project and we’re not bothered about embarrassing bugs”
  • “This is a quick prototype and we want you to focus on more important projects”
  • “This isn’t the software you need to test, it’s just a library/tool/software we’ve bought so we don’t need you to look at it”
  • “This project is in its early stages so there is nothing for you to test yet”
It seems that people still see testing as simply being the bugs we report; and not only that, sometimes people are thinking only of bugs in the popular sense: visually obvious and embarrassing ones. They also seem to be making decisions about risk and priority with only the information they have at the time.

Why did I think these projects would benefit from a tester?

Well, I think the main reason is that I see testing differently. I don’t see my job as simply reporting bugs, but as telling the truth about software, observing the people and processes that produce it, and trying to help those people produce better software. In other words, I see testers as people dedicated to creative and critical thinking on a project.
  • Internal projects still need to ‘work’, right? Not to mention, do we really understand all of the kinds of bugs or risks up front? Maybe the project is for internal use, but does it interact with external systems? What about the people and processes; do we not want to help them? Do we not want to track the progress of the project? Even internal projects have costs and implications if they are late or don’t fit the requirements.
  • Frequently, quick prototypes prove to be quite useful; this is generally their purpose: to rapidly learn what is useful or valuable. Testers are great at rapid learning, especially learning what customers or end users find valuable, so that information is invaluable when moving on to design the eventual fully-fledged product. Why not boost the success of your prototype by involving a tester to help focus the product on that learning, as well as to bring their skills in analysing what makes the prototype a success? Not to mention that prototypes tend to become the “finished product” very quickly, without any redesign or redevelopment; it’s easy for people to assume it’s fine to change the usage from “concept” to “ready for mass use”. Involving a tester might help avoid this easy slide and highlight the risks.
  • Also, what if the projects we are testing affect your prototype? We won’t know that if we aren’t aware of your prototype, which means there is a risk that we could break your prototype if it depends on other projects.
  • Assuming other people’s software is well tested and perfect is an all-too-easy assumption. Then there is the question of whether it’s even compatible with your own software, and the assumption that you understand what it does or exactly how it works.
  • Many bugs are caused by assumptions made at the design stage, either because of ambiguous language, cognitive biases towards information, or simply because we cannot think of everything. Does it not make sense to catch and resolve these bugs more cheaply at the design stage, rather than finding them after we have spent time building a product?
I believe that in all these cases, and in any project, you can benefit greatly from involving a human being who dedicates their focus to critically analysing each piece of information and offering feedback. It’s not just about banging on keys or finding “bugs” in the language of computers; you can discover plenty of “bugs” in the language of humans.

Changing the tone from an argument to an invitation

It’s draining to constantly have to argue to be included in meetings and projects, and this can heavily affect my motivation at times. It’s much easier when I’m invited and do not need to convince people of my value. How to achieve this, then? I think I’m still learning to become better at it, but one obvious way is simply to become extremely knowledgeable about a project, its technology and its end users. By being able to answer many questions and provide this knowledge, you naturally become an oracle people refer to, which means you get invited to more meetings.

Take advantage of your opportunities to learn

What do I mean by opportunities? Well, firstly, opportunities to become very knowledgeable can take many forms. For example, investigating bugs typically shows you the guts of a system and also tells you why a system was built a particular way (otherwise how do you know it’s a “bug”?). Try to view everything as an opportunity to learn more and you may pick up, and remember, a lot more.

Now you’re in a design meeting, how do you prove your worth?

I also mean opportunities in terms of demonstrating your value by providing critical and creative feedback. It can be very easy to squander these; I’ve repeatedly been too critical, asking the wrong question at the wrong time and earning the ire of my colleagues for wasting time or dragging out discussions. This can lead to being left out of discussions for being disruptive rather than constructive.
Another area I’m trying to improve is catching myself when I react emotionally to something; I tend to blurt out a critical question when I see something wrong. Sometimes I should consider how to word my question better, to come across as inquisitive rather than critical. Sometimes I should really ask the question later and not put people on the spot. However, as ever, such feelings can be heuristics: sometimes I am right to ask the question there and then, and sometimes the emotion draws attention to important details. The main point, though, is that you shouldn’t always let your emotions guide you.


Asking too many questions tends to make people feel they are being interrogated, and perhaps not trusted to do their jobs. For example, several times I’ve found myself invited to meetings about new projects I’m not completely up to speed with, where discussion starts with the participants assuming I already know some things. It’s too easy in these situations to criticise and say that they’re making a lot of assumptions, when really they established those details in another meeting and you’re simply not aware of it. While this is something that needs resolving, if you aggressively criticise them for it they will get defensive and you could derail the meeting. It’s being aware of these situations and being critical at the right times that allows you to provide your value; being critical all the time feels counterproductive to me.

Asking questions rather than making declarations

I’ve caught myself making declarations in the past, which come across as criticism of an individual or as statements of objective fact to argue about. I now try to ask questions rather than make emotional statements, emphasising that I’m not criticising an individual or saying something is bad. I guess, in a way, this is a kind of “safety language”, to use the RST (Rapid Software Testing) phrase. For example:

Instead of:
“This isn’t a good idea, I don’t know why you’re suggesting that?”
I might try:
“I don’t mean to criticise you, but are we sure that X is a good idea?”

Instead of:
“These requirements are really ambiguous and unclear”
I might try:
“Apologies if this has already been covered, but could we clarify the requirements here? X and Y could be misinterpreted.”


I think that to change the view of whether testers contribute more than just bugs, we have to show value in other ways. One way is to become an oracle of knowledge about the product, the business or the end users in general. This can lead to further opportunities to provide value through our critical analysis skills. However, if we are not careful, we can do this image more damage than good by being too disruptive.
I care a lot that my contribution to projects is valued and that I’m of service. I don’t want to waste people’s time, and I want people to invite me because they appreciate my skills and knowledge. However, testing tends to be viewed as negative, and sometimes I myself contribute to that view when I disrupt meetings with heavy-handed criticism. It’s all about carefully balancing my critical feedback with self-criticism, recognising opportunities and being careful not to squander them.