Monday, 19 December 2016

The temptation to split dev and test work in sprints - don’t do it!

Introduction

About three and a half years ago, I was new to sprints and scrum. Coming from the video games industry, I was used to a process where I would test code that came from developers and return bug reports. I had heard the words “sprint” and “scrum” before but I had no idea how testing fit into them, so I joined a company where I could figure that out. This is what I figured out.

What’s a sprint?

If you’re not familiar with scrum or agile, a sprint is effectively a short-term project plan where a team decides the work it can complete within a one, two or three week window. Work is “committed” (promised) to be completed in that time frame and the team tracks its progress. After each sprint, reviews and retrospectives are held to help the team find what works well and what helps them complete more work to a higher standard while still meeting their commitment. The main focus of sprint work is completing the committed work and avoiding leaving anything unfinished.

Where does testing fit?

So normally teams set up a task board with columns titled something similar to “To Do, In Progress, Done”. Sometimes people add more columns or use different names but the usage is similar. Anyone from the same background as me would be tempted to suggest adding a column between “In Progress” and “Done”, the logic being “when you’ve finished your development work, I’ll test it”. In my head, this was me trying to apply what I already knew to this new environment. We ended up with columns similar to “To Do, Build/Dev, Testing, Done”.

Bad idea

So at first, I thought things were working OK. I feel one of my strengths is learning and picking things up fast, so I got stuck in and kept up with the five developers in my team. Most of the time I was fortunate that the work arrived sequentially or wasn’t particularly time-consuming to test. This didn’t last long though, and eventually we started to fail to complete work on time. This happened either because I was testing it all at the end of a sprint, or because the pieces of work were highly dependent on each other and the problems with integration weren’t found until very late.
This meant we had to carry some work over into future sprints. Now I no longer had plenty of time at the start of a sprint to write my test plans; I was busy testing last sprint’s work and then testing this sprint’s work! I no longer had time to spend learning more automation or exploring areas newer to me, like performance testing. All of my time was consumed trying to test all of this work and I couldn’t do it. What went wrong?

A change in approach

I would love to say I quickly realised the problem and fixed it, but it took me a long time. I put this down partly to not knowing any better and partly to working with developers who didn’t know any better either. Either way, a while later I realised that the problem was that I was trying to test everything and the developers had started to rely on me for that. I’ve since realised that there is a fair bit of psychology involved in software development, and this was one of my biggest lessons.
We eventually decided to stop splitting up work between roles, mainly because we found that developers tended to treat work that was in “test” as “done” from their point of view, freeing themselves up to take on even more development work. This created a bottleneck: as the only tester, I was testing yesterday’s work while they were busy with today’s. I came to the realisation that there is little benefit to splitting the work up in this way, at least not through process. We should be working together to complete the work, not focusing on our own personal queues. I shifted from testing after development was thought to be complete, to testing earlier, even trying to “test” code as developers were writing it, pairing with them to analyse the solution.

Understanding what a “role” means

I think for me this lesson has been about realising that playing the role of “tester” does not necessarily mean I carry out all of the “testing” in a team. It does mean I am responsible for guiding, improving and facilitating good testing, but I do not have to complete it all personally. An additional part of this lesson is that I cannot rely on other people to define my role for me - as a relative newbie to testing I relied on the developers to help me figure out where I should be. While I’ve learnt from it, I also know that I may need to explain this lesson again in future because it is not immediately obvious.

So where does testing really fit?

Everywhere, in parallel and in collaboration with development. Testing is a supportive function of the team’s work, so it no longer makes sense to me to define it as another column of things to do. There is no set time frame in which it is best performed, and it doesn’t always involve a great deal of repetition in execution. It is extremely contextual.
That’s not to say you shouldn’t test alone or separately from ongoing teamwork. You absolutely must test alone as well, to allow yourself to focus and to process information. It’s just that you must choose to do this where it is appropriate.

Definition of “Done”

One of my recent approaches was to set the definition of “Done” as:

“Code deployed live, with appropriate monitoring or logging in place, and feedback gathered from the end user”

Others may have different definitions, but I liked to focus the team on getting our work into a position where we could learn from it and take action in the following sprint. For me, it meant we could actually pivot based on end user feedback or our monitoring, and measure our success, instead of finishing a sprint with no idea whether our work was useful and then planning a new sprint without knowing whether we would need to change it.

Summary

  • Avoid using columns like “Dev” and “Test” on sprint boards. They seem to lead to a separation of work where work is considered “Done” before it has been tested.
  • Instead, try to test in parallel as much as possible (but not all of the time), and try to test earlier and lower down the technology stack (such as testing API endpoints before the GUI that uses them is completed).
  • Still encourage developers to test, and carefully pick when and where to personally carry out the bulk of the testing. Try to coach the team on becoming better at testing, share your skills and knowledge and let them help you.
  • Altering the definition of “Done” helped me; it was useful to focus the team on an objective that meant we didn’t have to keep returning to work we had considered completed. In other words, make sure “done” means “done”.

Which languages are best to learn for testing?

Introduction

I’ve seen this question raised quite a lot in testing circles regarding which programming language is best to learn. Like it or not, the current trend in the industry seems to be asking much more of testers, with a view to creating more automation and having a much greater understanding of the technology they are testing.

Why learn a programming language?

What is your motivation for wanting to learn a programming language? In order to test well, you don’t need to know one. There are particular situations or contexts where programming may be useful to me as a tester, such as writing some automated checks, learning more about what the product I’m testing is actually doing under the surface, or simply saving time by creating tools to help myself. However, these situations don’t arise all of the time.
It’s also worth highlighting that to write programs, I need to understand a lot about the domain I’m working with. If I want to write an automated check, I need to test the product first to understand what is worth checking. If I want to read some code, I need to understand the context that code is used in and what its purpose is.
So even if I did have something that was worth programming, I would still need to “test” to identify it, understand it and consider whether it was worth it. Simply learning to program is not enough, which is why as a tester you can bring a lot to the design of automated checks, and why developers cannot easily test the product themselves.

Automated checks

So it seems the usual reason testers look to learn a programming language is to create automated regression suites for speed and reliability. Typically the advice tends to be that you should learn and use the same language as your back-end developers (so if they use Java to build the product you test and Java to write unit tests, then you should learn Java too), the argument being that by using the same language, you can access their support and help more easily when you need it. However, this depends upon your current relationship with your developers and their location. You may not be very close to your developers and may not benefit from their support, and this may not be something you can easily change.
You are going to have to judge for yourself which language to pick, but the biggest factors that would affect my choice are:
  • How comfortable am I writing code in this language?
  • What support can I get from the developers I work with?
  • What support can I get from other testers?
  • How well supported is the language in terms of libraries or capabilities? (for example, if you want to write Selenium checks, is there documentation on how to use Selenium in that language?).
  • Can I write programs in this language in a way that makes them easily understood by other people?
There is no easy answer to these questions, so I wouldn’t recommend any particular language. However, to help narrow your research, I would suggest considering these languages:
  • Java
  • C#
  • Python
  • Ruby
  • JavaScript
At the time of writing, these are some of the more popular languages to learn with regards to automated checks.
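
To give a flavour of what an automated check can look like, here is a minimal sketch in Python (a language I come back to below) using Selenium WebDriver. It is only an illustration: the URL, element names and expected page title are all invented, so you would substitute details from your own product.

    # A minimal automated check using Selenium in Python.
    # The URL, locators and expected title are hypothetical examples.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/login")
        driver.find_element(By.NAME, "username").send_keys("test-user")
        driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "submit").click()
        # Check that logging in takes us to the dashboard
        assert "Dashboard" in driver.title, "Expected to land on the dashboard"
    finally:
        driver.quit()

Notice how little of this is about the language itself; deciding that the login flow is worth checking, and what “landed on the dashboard” should mean, is testing work.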

Toolsmithing

Maybe you’re interested in simply being able to use a programming language to create your own tools. A “tool” in this context can be something small, like a script that rapidly repeats a certain action over and over. For example, I once created a script that compared two sets of 100 invoices with each other. It looked at each invoice total, compared the old total with the new one and saved the differences to a text file. This meant I could rapidly compare and identify obvious errors, saving my own time and helping me perform this particular action more accurately. However, it didn’t replace the testing I performed, it simply augmented it, allowing me to focus on more interesting tests.

I created tools like this in a programming language called Python. I personally love using this language because it’s very easy to read, has a lot of support in terms of libraries and documentation, and allows you to experiment with ideas very rapidly. I very much recommend Python as a starting point for building simple tools, and it can be used to write automated checks if you so wish.
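
As a rough illustration, here is a sketch of what a comparison script like the one above might look like. I am assuming, purely for the example, that each set of invoices is a CSV file of “invoice_id,total” pairs; the file names and format are invented.

    # Sketch of a small comparison tool. The file names and the
    # "invoice_id,total" CSV format are assumptions for illustration.
    import csv

    def load_totals(path):
        # Read a CSV of invoice_id,total pairs into a dictionary
        with open(path, newline="") as f:
            return {row[0]: float(row[1]) for row in csv.reader(f)}

    old = load_totals("old_invoices.csv")
    new = load_totals("new_invoices.csv")

    with open("differences.txt", "w") as report:
        for invoice_id, old_total in old.items():
            new_total = new.get(invoice_id)
            if new_total is None:
                report.write("{}: missing from new set\n".format(invoice_id))
            elif new_total != old_total:
                report.write("{}: {} -> {} (difference {:.2f})\n".format(
                    invoice_id, old_total, new_total, new_total - old_total))

Even a throwaway script like this repays its cost quickly when you need to run the same comparison over and over.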

There’s a great tutorial on getting started with Python here.

Alternatives to programming

Do you want to become a more technically capable tester? Not really keen on learning a programming language but feel like you need to learn? Well perhaps you can find value in learning other technologies and concepts. While programming is a very powerful tool, it’s not the only one that a tester can learn in order to become more technically capable.
  • Test lower - maybe you could understand more about the technologies powering the product you’re testing, and test lower and earlier? For example, many web services are built upon APIs (Application Programming Interfaces); perhaps you could learn how to test these? A good place to start interacting with APIs is trying out Postman (there’s also a small code sketch after this list).
  • A similar approach to testing lower is learning about databases and how to query them with SQL, using systems such as MySQL or PostgreSQL.
  • Research tools that can help you test or provide more information. For example, Google Chrome’s DevTools has lots of very useful features for interacting with websites, debugging problems or emulating mobile devices.
  • Talk to developers! Ask them to draw diagrams explaining how the product works technically, and ask them to explain things you don’t understand. It can be difficult at first to know what is important to remember and what can be forgotten, but simply taking an interest in their work and asking questions can even help them understand their own work better. I find there is no better test of my own understanding than having to explain myself!
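
To illustrate the point about APIs above, here is a minimal sketch in Python using the requests library. The endpoint and the fields I check are invented for the example; the idea is simply that you can observe an API directly, without going through a GUI.

    # Minimal sketch of interacting with an API directly, using the
    # requests library. The endpoint and fields are hypothetical.
    import requests

    response = requests.get("https://api.example.com/users/42", timeout=5)

    # A few simple observations a tester might start with:
    print(response.status_code)                   # did we get a 200?
    print(response.headers.get("Content-Type"))   # is it really JSON?
    data = response.json()
    print(data.get("name"))                       # does the payload look sensible?

Tools like Postman let you make the same observations without writing any code at all, which is why they are a good place to start.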

Summary

  • You don’t need to learn a programming language to be a great tester.
  • There is no one particular language that is “the best”, but there are some popular ones that are good to get started with.
  • There are other ways to become a more technical tester that don’t involve learning programming.

Tuesday, 25 October 2016

TestBash Manchester 2016

Introduction

Last week was awesome! Why? Because it was time for TestBash again, but this time in my hometown of Manchester! I was really looking forward to seeing familiar faces again, meeting new ones and learning a ton about testing, especially in very familiar surroundings (my current workplace is barely 10 minutes’ walk away from the main conference venue!).

Pre-TestBash meetup

If you’ve never been to a TestBash before, one of the best parts of it is the socialising before and after. Usually there is a meetup hosted on meetup.com by Software Testing Club on the night before the main conference day. This is your opportunity to meet fellow attendees and speakers, and even say hi to the organisers Rosie and Richard. I fully recommend that you attend this and meet people you’ve never spoken to before; we’re a friendly bunch and have plenty of stories to share!
At Manchester, one of the major sponsors, RentalCars, hosted the meetup in their very impressive offices in the centre of Manchester. I’m definitely a little bit jealous of their very unique beach-themed cafeteria!

Main Conference day

The next day was the main conference day at the Lowry Theatre in Salford Quays. I was unfortunately a little late and just missed out on getting involved with one of the Lean Coffee sessions, but in the end it was OK because I could meet the rest of my team from work (who had fortunately been given budget to come along too!).
The talks for this TestBash had a common theme which I would summarise as “psychology and learning”. The first five talks followed the psychology theme, starting with James Bach’s talk on critical and social distance.

The talks
I was really looking forward to James’ talk, partly because his previous talks inspired me to start this blog and get more involved with the community, but also because the topic is close to my heart as my career has been driven by it. When I started as a games tester, I was effectively working as an offshore QA with pretty poor and slow communication channels with developers. Since then I’ve been driven to reduce social distance and prove that I can maintain critical distance even when I become very intimate with the software I test. James’ talk pretty much covered this and provided some useful language and framing to explain it. As always, I learnt so much from observing James’ style of talking too!

Following on from James were two talks from opposite ends of a conversation - Iain Bright on the psychology of asking questions and Stephen Mounsey on listening. I took plenty of notes during these talks because I know I’d like to improve in both of these areas. I was actually on a bit of a personal mission to hold back my excitement and listen carefully to other testers during the event, because I’ve felt I’ve talked too much before. The main takeaway from Iain’s talk was to think carefully about the questions you want to ask, and why you’re asking them, while Stephen’s made me aware of how often I’m thinking about people’s words (and my own response) rather than actually listening to what they have to say before they’ve finished. There was plenty of food for thought here that I’d like to slow down and keep in mind in future.

Speaking of slowing down and keeping things in mind, I loved Kim Knup’s talk on positivity! I think every tester out there has felt negative to some degree, simply due to the nature of reporting problems. I definitely catch myself complaining a lot when things aren’t going great, so I’m going to try and take on board her ideas, such as making notes of three positive things each day to train my brain to look out for them. I’ve already started trying to high-five people in the office to put a smile on my face, haha.

Just before lunch, Duncan Nisbet gave a talk on “shifting left” called “Testers be more Salmon!”. I was looking forward to this as I know Duncan from NWEWT and from the Liverpool Tester Gathering. The topic itself is something I’ve been trying to encourage at work, and my colleague Greg Farrow has written about it before on this blog. Essentially the idea is to test earlier, asking questions about requirements and testability and gathering information, to both save time and catch bugs when it’s cheapest to do so. Duncan also made the great point that shared documentation doesn’t mean shared understanding: just because something is documented, it doesn’t mean everyone understands it the same way.

I feel the afternoon talks had a theme of learning running through them, starting with Helena Jeret-Mäe and Joep Schuurkes’ talk on “the 4 hour tester experiment”. This was an explanation of an experiment they’d like to try, based on Tim Ferriss’ book The 4-Hour Chef. The idea is to see whether you can train a tester in 4 hours, focusing on just the basics. I’d definitely encourage you to have a go at the challenge on their website fourhourtester.net. They talked a little about their opinion that testing isn’t something you can just teach, that it is much better learnt through practice, and I fully agree with this, especially the analogy about learning to drive a car!

Following Helena and Joep was Mark Winteringham’s talk on the deadly sins of acceptance criteria. To be honest, I was looking forward to Mark speaking because he gave a great talk at the Liverpool Tester Gathering on testing APIs, but I think I’m a little bored of hearing about the pitfalls of BDD (Behaviour Driven Development) now. That’s not to take anything away from Mark’s talk - he shared some familiar examples of what not to do and had a great way with humour. But the negativity around BDD and acceptance scenarios reminds me of the negativity I’ve encountered around Microservices, and I’d like to hear some well-thought-out, positive experience reports for a change. Personally, I don’t see a great deal of value in using BDD over more straightforward approaches such as TDD (Test Driven Development) combined with encouraging collaboration directly, rather than relying on process to force it. I want to emphasise that Mark gave a great talk and I’m sure others who actively use BDD took a lot away from it; it’s just that my own feelings on the subject make me want to hear more about the benefits.

Next was Huib Schoots with his talk on the “path to awesomeness” which was effectively a series of lists of great attributes for testers, areas to focus on to improve and generally just what he feels makes a great tester. Echoing the sentiments of Helena and Joep’s talk, he really emphasised the need to practice, practice, practice! One particular line he gave that I really liked was “testing is a psychological and social problem as well as a technical one”.

Gwen Diagram followed Huib with her talk “Is live causing your test problems?”. If Duncan’s talk was about “shifting left”, then Gwen’s talk was about “shifting right” - she gave lots of great advice and ideas on how to “test in live”, such as caring about and learning from your production logs and monitoring, or using feature flags. Her talk was very on point for me, as I recently attended a meetup on Microservices and I’ve very much got DevOps on my mind at the moment, so I was very appreciative when she came along to chat about it at the Open Space the next day too!

Finishing the day was Beren Van Daele with his experience report on trying to make testing visible on a project where he was a test consultant. Any talk that includes a slide reading “My Mistakes” is always going to be very valuable - it’s important to share our mistakes and how we learnt from them, and Beren shared a lot! I loved his idea of creating a physical wall of bugs (out of insect pictures) to get people to recognise the bugs that needed fixing.

Overall the talks were excellent. I made lots of notes and ended the day with the now familiar headache from trying to stuff so much into my brain. My colleagues seemed to enjoy it and learn a lot too, so all in all I was very happy.

99 second talks
So at the end of the conference, there is usually a section of 99 second talks, open to anyone attending to stand up on stage and talk about anything they like. I intentionally decided not to do one this time because I wanted to focus on the main talks and not worry about what I was going to say later, as I did in Brighton. I also wanted to save the topic I had in my head for the following day at the Open Space.
Those that did do one were great, especially a developer looking for a tester hug and Gem Hill’s talk on meditation and mindfulness. Not many people broke the 99 second limit either!

Post-conference meetup
So as with the pre-conference meetup, there’s usually a meetup after the conference at a nearby bar or pub. For Manchester this was Craftbrew, which was barely 30 seconds’ walk from the Lowry. Again, I fully recommend attending these as it gives you more time to chat to other attendees, especially as many only attend the conference day. Everyone seemed to have enjoyed the day and was bubbling with ideas from the talks.

Open Space

So for the first time, TestBash held an “Open Space” day on the Saturday after the conference. This was held at LateRooms’ offices in the centre of Manchester (also very impressive offices that I’m jealous of, especially the number of great meeting rooms!). I had never been to one of these before and I was keen to try it out. If you’ve never been to one, it’s basically a conference where there is no formal plan: all of the attendees come up with talks, workshops or discussions they’d like to offer, and everyone arranges a multi-track schedule that makes sense. I had no idea what to expect before I went, but I knew I would get something useful out of it, and I definitely did!


To give you an idea, some of the things that were on the schedule were a workshop on security testing using Dan Billing’s insecure app called Ticket Magpie, a workshop on OWASP’s ZAP tool and in-depth discussions on BDD, automation and how to help new testers.

As I said before, I had a topic in mind that I wanted to discuss more with people, so I ran a discussion on “Testing in DevOps”. I explained my feelings on the topic and openly asked what people felt about it and where they felt testing was going. I got a lot of great notes, ideas and thoughts out of this discussion and I’ll definitely be writing up a post about it! I’m very keen to talk about it at a DevOps meetup in future too.

I really enjoyed the Open Space. It gave me further chances to meet and chat to people I hadn’t met before, and I really enjoy having focused, in-depth discussions on topics in a very similar way to a peer conference. I treated it as an opportunity to learn from very experienced peers and have some of my own ideas and opinions challenged and improved. Hopefully I provided the same for others! I think I actually enjoyed this more than the main conference day in many respects, I guess because it gave more time to discuss ideas and challenge them, as opposed to simply listening the whole time.

I’m absolutely looking at attending the next one in Brighton!

Summary

Once again, TestBash has been one of the best experiences of my life, I really mean that. I absolutely adore the relaxed and friendly atmosphere; I used to consider myself quite shy, yet I’ve found it so easy to meet and chat to so many people. In just a short space of time I made so many new friends and picked up so many new ideas to think about. I’ve never looked forward to educational or social events like this, even though I’ve spent most of my life in education! If you’re only ever going to try one testing conference experience, then absolutely go to TestBash and try it out. I hope to see you at one.
Many thanks again to Rosie Sherry, Richard Bradshaw and everyone who helped organise, sponsor or make TestBash Manchester happen.

Sunday, 16 October 2016

The words “Testing” and “QA”

Introduction

I’m not one to harp on about semantics. I mean, I care that we all understand each other, but I’m very conscious of how irritating it can be to constantly point out “you’re using that word wrong”. However, I do find it frustrating when people assume that language used in the community or in documentation is commonly agreed, understood and given the same meaning by everyone. Very frequently, this is not the case, and the words “testing” and “QA” seem to be an example of this. As a tester, I find that keeping these semantics in mind can be very useful in resolving misunderstandings that might otherwise be missed.

Why do I find it frustrating?

Ever since I was a newbie to testing and the tech industry, I’ve been trying to understand the language around me. I hear phrases or words and try to find out what they mean. Someone may teach me the word, I may read it in documentation or hear it explained in talks. Most of the time I learn words through the context in which they are used, which is probably the most natural way we all learn language.
So I learn words or phrases, believe I understand what they mean and hence believe that I will be understood when I reuse them the same way. However, because we are creating new words and phrases all of the time, some of them don’t have a “standard” meaning. An obvious example of this is regional dialects or slang; in the UK we have many, many words to describe a bread roll.


So if I asked for a “Bacon Stotty” in London I would get a puzzled look, and if I asked for a “Muffin” I might get an American-style cake muffin rather than a bread roll.
I personally see this as simply a natural development of language: we invent new words all of the time to describe things that are new to us, especially if we don’t have an appropriate word already. However, due to distance and culture, we may come up with different words to describe the same thing, or the same word may describe two different things.


The frustration for me is when I encounter people who don’t seem to accept this. The phrase “QA” is widely used in the tech world in many different contexts and doesn’t appear to have a commonly agreed definition. For this reason, I don’t like using the phrase, because I don’t believe I would be well understood most of the time. However, I haven’t felt the need to write about this until now, because I hadn’t come across an example that justified my feeling on it.

Enough accuracy for enough understanding

So I recently interviewed someone who was looking to start as a tester; they didn’t have any experience of testing and were keen to understand as much as they could about the role. I will add they were impressive for someone with no experience! During the interview one of my colleagues explained the gist of how the development teams were structured and what they worked on. They described a tester on one team as performing “performance testing” and another on a different team as performing “QA”. I wryly smiled to myself about the use of the word “QA” but I didn’t say anything, because it was a reasonable, high-level description of how we might work in a very generalised sense. While I knew some of the words were misleading, it wasn’t the time or place to pick it apart because:
  1. I didn’t want to embarrass my colleague and I didn’t want to give the candidate a bad impression of our relationship.
  2. I didn’t want to spend the limited time we had in the interview explaining why those words weren’t right.
  3. The gist that was given felt enough to me for the candidate to understand how we worked, at least for now.

Recognising misunderstanding and addressing it

The candidate was happy with this explanation and I was happy that it was enough to at least give them some context to ask any further questions. The interview continued, and at the end the candidate expressed that they were really happy we had an “Internal QA” role, because they knew of a similar role in their current company which involved checking products were meeting guidelines and standards set out by governing bodies. They felt this was a role they would like to start with because it would give them an easier step into testing and more technical roles (they openly admitted they didn’t have much technical knowledge or experience but wanted to learn).
Now it had become apparent that the gist we had given was not appropriate, because the candidate had understood the phrase “QA” differently to how my colleague had meant it. The candidate clearly expressed a desire for such a role, particularly because they were keen to have some guidance. They liked the sound of QA because it sounded more scripted, more guided and therefore an easier leap into the tech world. I immediately explained to the candidate that in the context of this business we don’t have a role like that, and that really the roles are far more exploratory in nature. I then had to explain what I defined as “QA” and “testing”, what the difference was and why my colleague had used them interchangeably.
This real example hit home for me just how much the phrase “QA” is misused and misunderstood. It can have multiple interpretations, and in this case it could have led an interview candidate into accepting a job completely different to what they had envisioned.

What are my definitions of “QA” and “Testing”?

My definitions are:
  • “QA” or “Quality Assurance” involves checking that a piece of software conforms to a predetermined set of qualities. This can come from legal requirements, industry standards or certification. QA is generally scripted in nature.
  • “Testing” involves exploring a piece of software to discover information about it. While it may include the use of checking predetermined sets of qualities it focuses on the unknown rather than the known. Testing is generally exploratory in nature.
I do not dare suggest these are commonly agreed definitions, and I definitely do not go around correcting people; I have learned it’s pretty irritating and counterproductive to do that. However, I use these definitions to help me identify when I have misunderstood these phrases or someone else has. In other words, I’m aware that these words aren’t commonly understood the same way, and I choose to clarify my understanding this way when the discussion comes up.

No one is at fault here

I want to emphasise that in the interview situation I refer to, no one was at fault. I do not expect my colleague to understand all of the semantics of testing. I do not expect a newbie tester to be aware of them either. I’m simply observing that it is useful to keep in mind these semantics and how people may be silently misunderstanding each other and not realising it.
I don’t think we can really do much to prevent this divergence of meanings; I personally feel it is the natural flow of language. The English language is full of words that have many contextual meanings as it is. Nor can we “know” all of these meanings. All we can do is share our different meanings of words, raise each other’s awareness of them and perhaps come to some greater consensus on definitions.

Neat ideas from some recent meetups

Introduction

The last few weeks I’ve been to quite a few testing meetups and there were some notable ideas that I really loved. Plus, I like promoting that these events exist and if you haven’t been to one, find one near you and go! (or if there isn’t one, start one!).

Challenges of testability

I had no idea about this one until I saw a tweet about free tickets from Ash Winter. It was a free one-day, two-track conference held in Leeds on the 20th of September. I thoroughly enjoyed one of the workshops, by Clem Pickering and Mike Grimwood, which was on testability and involved designing a toaster (it reminded me very much of this TED talk).

What I loved about this workshop was the visual demonstration of how people can take many different interpretations from vague requirements, the assumptions we make, and how asking questions about testability helps drive out those assumptions. They used James Bach’s testability heuristics as a tool to help people explore different kinds of testability, which generated some great discussion and ideas in the workshop. I loved this workshop so much, in its simplicity and visual impact, that I’d like to have a go at running it at work when I can find a good time.

Using data to drive testing

The following week was the Liverpool Tester Gathering, which featured my old manager from Sony, Gaz Tynan, talking about the visual methods they use to plan and review test coverage. This was another highlight for me, as again I love the visual impact. Gaz talked through how they collect data from both exploratory tests and automated checks and map it onto a game map (similar to Google Maps). As you can imagine, many games at Sony have a 3D virtual world to explore and test, so they can represent their test data on a physical map too. He then demonstrated how they can see where their test coverage is lacking, or where they may want to explore more, through examples like heat maps of framerate drops or clusters of bugs.
Seeing this visual representation of test coverage really got me thinking about how I could achieve similar results back at work. It was really inspiring, and yet more evidence for me that visual representations of ideas, problems or data are so appealing and compelling.

Tuesday, 4 October 2016

Providing value beyond bugs

Introduction

I’ve had several experiences of working on projects where a manager has proclaimed “we don’t need testers for this project, we’re not bothered about bugs”. When I started testing six years ago, I would have felt that this was wrong simply because “bugs” can take many forms. Nowadays, however, I’ve come to realise that testers provide much more than “bugs”. I still find it incredibly difficult to explain this to people (particularly managers), and it has only been through bad experiences that I’ve felt justified in arguing the case.

Why people might think they don’t need testing

I have now worked with several projects where I’ve been told that testing wasn’t required, the reasons have varied:
  • “This is an internal project and we’re not bothered about embarrassing bugs”
  • “This is a quick prototype and we want you to focus on more important projects”
  • “This isn’t the software you need to test, it’s just a library/tool/software we’ve bought so we don’t need you to look at it”
  • “This project is in its early stages so there is nothing for you to test yet”
It seems that people still see testing as simply being the bugs we report, and not only that, but sometimes people are only thinking of bugs in the popular sense - visually obvious and embarrassing bugs. They also seem to be making decisions regarding risk and priority with the information they have at the time.

Why did I think these projects would benefit from a tester?

Well, I think the main reason is that I see testing differently. I don’t see my job as simply reporting bugs, but as telling the truth about software, observing the people and processes that produce the software, and trying to help those people to produce better software. In other words, I see testers as people dedicated to creative and critical thinking on a project.
  • Internal projects still need to ‘work’ right? Not to mention, do we really understand all of the kinds of bugs or risks up front about the project? Maybe the project is for internal use, but does it interact with external systems? What about the people and processes, do we not want to help them? Do we not want to track the progress of the project? Even internal projects have costs and implications if they are late or don’t fit the requirements.
  • Quick prototypes frequently prove to be quite useful; this is generally their purpose - to rapidly learn what is useful or valuable. Testers are great at rapid learning, especially learning what customers or end users find valuable or useful, so such information is invaluable when moving on to design the eventual fully-fledged product. Why not boost the success of your prototype by involving a tester, both to help focus the product on that learning and to bring their skills in analysing what truly makes the prototype a success? Not to mention that prototypes tend to very quickly become the “finished product” without any re-design or re-development; it’s easy for people to assume it’s ready to move from “concept” to “ready for mass use”. Involving a tester might help avoid this easy slide and highlight the risks.
  • Also, what if the projects we are testing affect your prototype? We won’t know that if we aren’t aware of your prototype, which means there is a risk that we could break it if it depends on other projects.
  • Assuming other people’s software is well tested and perfect is an all too easy assumption to make. Then there is the question of whether it’s even compatible with your own software, and the assumption that you understand what it does or how exactly it works.
  • Many bugs are caused by assumptions made at the design stage, either because of ambiguous language, cognitive biases towards information, or simply because we cannot think of everything. Does it not make sense to catch and resolve these bugs more cheaply at the design stage, rather than finding them after we have spent time building a product?
I believe that in all these cases, and in any project, you can benefit greatly from involving a human being who dedicates their focus to critically analysing each piece of information and offering feedback. It’s not just about banging on keys or finding “bugs” in the language of computers; you can discover plenty of “bugs” in the language of humans.

Changing the tone from an argument to an invitation

It’s draining to constantly have to argue to be included in meetings and projects, and this can heavily affect my motivation at times. It’s much easier when I’m invited and do not need to convince people of my value. How to do this then? I think I’m still learning to become better at this, but one obvious way is simply to become extremely knowledgeable about a project, its technology and the end users. By being able to answer many questions and provide this knowledge, you naturally become an oracle people refer to, which means you get invited to more meetings.

Take advantage of your opportunities to learn

What do I mean by opportunities? Well firstly, opportunities to become very knowledgeable can take many forms - for example, investigating bugs typically shows you the guts of a system and also provides information about why a system was built a particular way (otherwise how do you know it’s a “bug”?). Try to view everything as an opportunity to learn more, and you may pick up and remember a lot more.

Now you’re in a design meeting, how do you prove your worth?

I also mean opportunities to demonstrate your value in providing critical and creative feedback. It can be very easy to squander these; I’ve repeatedly been too critical, asking the wrong question at the wrong time and earning the ire of my colleagues for wasting time or dragging out discussions. This can lead to being left out of discussions for being disruptive rather than constructive.
Another area I’m trying to improve is catching myself when I react emotionally to something; I tend to blurt out a critical question when I see something wrong. Sometimes I should consider how to word my question to come across as more inquisitive rather than critical. Sometimes I should really ask the question later and not put people on the spot. However, as ever, such feelings can be heuristics: sometimes I am right to ask the question there and then, and sometimes the emotion draws attention to important details. The main point is that you shouldn’t always let your emotions guide you.

Trust

Asking too many questions tends to make people feel they are being interrogated, and perhaps not trusted to do their jobs. For example, several times I’ve found myself invited to meetings about new projects that I’m not completely up to speed with, where a discussion starts with the participants assuming I already know some things. It’s too easy in these situations to criticise and say that they’re making a lot of assumptions, when really they established those details in another meeting and you’re simply not aware. While this is something that needs to be resolved, if you aggressively criticise them for it, they will get defensive and you will potentially derail the meeting. Being aware of these situations and being critical at the right times is what allows you to provide your value; being critical all the time feels counter-productive to me.

Asking questions rather than making declarations

I’ve caught myself declaring things in the past, which comes off as criticism of an individual or as a statement of objective fact to argue about. Instead, I try to ask questions rather than make emotional statements, emphasising that I’m not criticising an individual or saying something is bad. I guess in a way this is a kind of “safety language”, to use the RST (Rapid Software Testing) phrase. For example:

Instead of:
“This isn’t a good idea, I don’t know why you’re suggesting that?”
I might try:
“I don’t mean to criticise you, but are we sure that X is a good idea?”

Instead of:
“These requirements are really ambiguous and unclear”
I might try:
“Apologies if this has already been covered, but could we clarify the requirements here? X and Y could be misinterpreted.”

Conclusion

I think to change the view on whether testers contribute more than just bugs, we have to show value in other ways. One way to do this is to become an oracle of knowledge about a product, the business or the end users in general. This can then lead to further opportunities where we can provide value through our critical analysis skills. However, if we are not careful, we can do more damage to this image than good by being too disruptive.
I care a lot that my contribution to projects is valued and that I’m of service. I don’t want to waste people’s time, and I want people to invite me because they appreciate my skills and knowledge. However, the nature of testing tends to be viewed as negative, and sometimes I myself contribute to this negative view when I disrupt meetings with heavy-handed criticism. It is all about carefully balancing my critical feedback with self-criticism, and recognising opportunities and being careful not to squander them.