Thursday, 18 August 2016

I love testing and this is why!

Introduction

Over the last year I’ve been on a little bit of a journey of discovery. For 5 years I had been working as a tester without ever really seeing it as a career I wanted to keep. In this 6th year of testing, I finally realised I loved it after discovering the friendly and passionate testing community. I’ve learned to appreciate my own work, justify it and learn how to become better and better at it. Suddenly, what felt like a job with no further growth exploded into a fascinating and immersive world where I could maximise my skills and help others realise theirs too. I’m taking this moment to write down why I love it so much, as a little preparation, so that I’ve thought about what I want to say whenever I talk about this in future.

Testing is endless learning

I’ve spent most of my life (up to this point) in education: I went to nursery at the age of 3 and graduated with my degree at the age of 21. On reflection, I feel I’ve always liked having that next target, be it grades for next year, progressing to university or securing full-time work. I’ve come to realise that one of the things I relish about testing is always having something to learn. I now effectively describe my testing as learning, especially thanks to attending and studying Rapid Software Testing with Michael Bolton, which really hit this point home. As an example, before I started my current job I knew very little about performance, security, API and automation testing. I can now confidently talk about each of these subjects and know where I need to improve.

When someone presents me something to test, I’m trying to learn as much as possible as soon as possible, and I love doing this: I love building on each bit of knowledge and using it to ask more and more questions. Let’s say, for example, I’m presented with an accounting service that integrates with other systems. I start reading the documentation or asking questions and learn that one of these integrations is accessed through an externally-facing API. So then I’m asking: what is this integration? What does it do? Who is it for? Why have we built it? Can I test it? What does it return as a response? Is the API secure? What happens when I try to access it without authentication? What happens when an error occurs? What is the desired performance end-to-end with this integration?
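
To make that concrete, here is the kind of first probe I might write when exploring such an API. This is only a rough sketch in Python; the URL, endpoint and token are all made up for illustration.

    # Rough sketch: probe a hypothetical externally-facing API with and
    # without authentication, using the "requests" library.
    import requests

    BASE_URL = "https://api.example.com/v1"  # hypothetical integration endpoint

    # Without credentials we would hope for a 401/403, never a 200 or a stack trace.
    anon = requests.get(f"{BASE_URL}/accounts/123", timeout=5)
    print(anon.status_code, anon.headers.get("Content-Type"))

    # With credentials, check both the status and the shape of the response.
    auth = requests.get(
        f"{BASE_URL}/accounts/123",
        headers={"Authorization": "Bearer <token>"},  # placeholder token
        timeout=5,
    )
    print(auth.status_code, auth.json())

Even a tiny probe like this starts answering several of those questions at once: is the API secure, what does an error look like, and what does a response actually contain?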

Testing is about facts, though opinions are valued too

It’s sometimes easy for people to perceive testers as picky or pedantic, always criticising or bringing bad news. If we are not careful with our expression, we can overstate a personal opinion or fail to justify why something is a serious problem. I personally enjoy trying to constantly improve my communication, maintain a high standard and keep learning how to interact better with so many different kinds of people, all the while trying to keep my own emotions in check while knowing when to listen to them.

Testing is about being involved everywhere

As a tester I find myself working at every point from the start to the finish of a software project, from trying to learn as much as possible about our customers and the business to learning all about the behaviour of a particular line of code. I get to work with so many other roles, such as development, business analysis, project management, systems administration, technical support, and sales and marketing, and to consider all the wonderful and horrible ways people might like to use the software. Testers have moved on to become business analysts or project managers, and I think it’s a very natural career step for some.

Testing is about being persistent

Like the world’s best detective, as a tester I try to leave no stone unturned and to look beyond the obvious explanations. I actually find this one of the harder parts of testing: it’s easy to accept an explanation that you don’t fully understand, because sometimes you think you do but don’t really. Sometimes it’s also difficult to keep persisting, especially when a problem resolves itself, and you don’t always have time to keep searching. However, the satisfaction in finally discovering the root of a challenging problem is so high that it makes it all worth it. I try to remind myself of the detective analogy because it helps re-energise me (it feels more exciting!) when dealing with tricky problems.

Testing is about directly and indirectly helping people

In providing quality information, I’m helping people all of the time. Helping developers learn more about what code actually does, helping product owners track the progress of a project or helping everyone become more careful with their language and aware of assumptions. I’m also indirectly helping end users of software, trying to understand their needs and understand how these match against the software. I feel caring about people is so very, very central to testing.

Testing is about balance

I like to think of myself as pragmatic and realistic: I care about being critical of my own views and giving proper weight to someone else’s point of view. So I relish the challenge of balance; I see so many things in life, even life itself, as a matter of balance. There are always pros and cons, always another perspective, another time when things might make more or less sense. The most common challenge that crops up in testing is balancing the desire to learn as much as possible against the need to be quick. Why is time important? Because to learn everything you would need all of the time in the world, which you never really have. So you’re always prioritising, trying to identify where the most important pieces of information may be hiding so you can find them as soon as possible. Everything costs time, so we are always asking ourselves whether it’s worth spending and whether we are spending it in the right places.

Testing is a field ripe for discovery

While many people have discussed, experimented with and written about testing for decades, it still feels like many of us are only just beginning to really understand it as a field. The spread of knowledge, and the encouragement to discuss how to become better at it, seems to be catching up with other fields like programming; to some extent testing still lags behind, and we are only just starting to see more conferences and meetups. As a relatively new tester I can’t speak for what it was like before, but I went a whole 5 years without even knowing there was a community who talked about testing, let alone had ideas to improve it. I like this because it feels like an exciting place to be, with so many topics to explore and so much to possibly contribute to. There still feels like a lot of work left to be done to reach out to more people, both new testers and those who may become testers in future. Not to mention that as we educate ourselves about what testing really means, we also become better at helping those other fields understand it, and at interacting with them.

Testing is great for drawing on diversity

I make use of several different skills and bits of knowledge all of the time in my testing. An obvious example would be making use of my programming background to build useful tools or to read and understand code. But I’ve also made use of my education to help me write reports and produce presentations, and I’d even credit my studies of history (such as learning how to analyse sources) for how I approach analysis. I’ve heard of testers using their background in journalism to ask better questions and gather better answers, or testers with a keen ear for audio being able to more accurately describe an audio problem. I love that testing so visibly benefits from diversity.

Testing is very psychological

I love psychology; it’s fascinating to me, and so much of my work is affected by my psychological state. Topics such as confirmation bias and social interaction come to mind as the most obvious examples, and understanding these areas helps me stay alert to some of the most easily missed issues. Consider usability testing, for example, and how you analyse the information you gather from it: you don’t just ask users a list of questions and take away their answers. You are looking out for so many psychological factors too - how are they interpreting the question? Why did they give that answer? What were they thinking when they performed that action? When they say that they want X, do they really need Y?

Testing is creative

In our pursuit of learning, it helps to be creative, because it allows us to explore much more than if we stick to the path laid out for us. In testing, being creative is highly rewarding: you are better able not only to discover issues that no one had thought of, but also to work around obstacles or make an innocent-looking problem worse.

Testing is about being a team player

Testing cannot be separated from the work done to produce software. I’m not a rock star, walking onto projects and “assuring quality” and telling people what they’ve done wrong. I’m here to support others in producing software, to work together and learn together. By working more closely with people and learning to become better at working with people, my testing becomes more effective, efficient and supportive. I personally hate working alone and love working in a team. I love “team spirit” and pushing each other to become better.

Summary

This is all I can think of right now on why I love testing; I’m sure more will crop up another time. Many thanks specifically to James Bach, Michael Bolton, Jerry Weinberg and Karen Johnson for unlocking this love and appreciation for testing, and for the ideas and analogies I use in this article. Particular thanks to Michael for this blog post, which inspired me to write this, and also to Andy Tinkham on this podcast for reminding me (through the idea of writing down what you think testing is) to write it.

Wednesday, 3 August 2016

My attempt at the 30 days of testing challenge

Introduction

So just over a month ago, the Ministry of Testing started a challenge called “30 days of testing” and I decided to have a go at tackling it. I had intended to complete it and post all of my progress here, but I didn’t make much progress in the end.
However, I’ve decided to post my progress anyway, because I really liked this initiative and the behaviour it encouraged in the community. Even though I didn’t manage more than half of the challenge, I’d like to encourage others to have a go and share their experiences anyway! If you only achieved one of the challenges, that’s still a good thing, especially if it was helping others to test.

Why didn’t I complete it?

Lame excuse time, but basically work and life in general got pretty busy in the second half of last month. I also made the mistake of trying to bundle some of the challenges together: I planned to combine the “mind map”, “5 bugs from a mobile app”, “accessibility bug” and “user experience bug” challenges into one, but then spent ages trying to find a mobile app I’d like to explore. I should have just dived into any old app, but I kept postponing it and then became too busy with work and life in general.
I was also keen on getting a photo of my team, but with illness and holidays I couldn’t find a good time to take a picture of everyone.
But basically, I just ran out of time.

What did I achieve?


Day 1 was easy, I already had in mind some books I had been meaning to buy, so this was just an easy motivator to go buy them! I started Dear Evil Tester but I didn’t finish reading it by day 30, so I didn’t successfully complete this one, but at least I got my hands on these books! I’ve enjoyed Dear Evil Tester so far and I’m keen to get started with Explore It too! I definitely recommend both!


Day 2 needs some further explanation (which is why I wanted to write this blog post!). One of the things I’m working on at the moment is writing some simple scripts to help speed up aspects of our testing at work. With a shift to a microservices architecture, there are more opportunities to test much faster at a service level, rather than always at the GUI level. So I’ve been keen to help the testers in my team get on board with this by providing scripts to quickly generate test data, allowing more time to focus on testing a particular feature rather than setting up data or environments. Funnily enough, a month after posting this tweet I am automating a bunch more stuff to try and free up more of our time on the crazy-busy project we are on at the moment.
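
For a flavour of what I mean, here is a trimmed-down sketch of that kind of test-data script. Everything here (the service address, endpoint and payload fields) is invented; the real scripts are specific to our own services.

    # Sketch of a test-data generator for a service-level API.
    # All names, endpoints and fields are hypothetical.
    import uuid
    import requests

    SERVICE_URL = "http://localhost:8080"  # made-up local microservice

    def create_test_account(name_prefix="test"):
        """Create a throwaway account via the service API and return its id."""
        payload = {
            "name": f"{name_prefix}-{uuid.uuid4().hex[:8]}",
            "currency": "GBP",
        }
        response = requests.post(f"{SERVICE_URL}/accounts", json=payload, timeout=5)
        response.raise_for_status()
        return response.json()["id"]

    if __name__ == "__main__":
        # Spin up ten accounts so a tester can start exploring straight away.
        ids = [create_test_account() for _ in range(10)]
        print("Created test accounts:", ids)

A script like this takes seconds to run, compared with minutes of clicking through a GUI to create each account by hand.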


Day 3 was one that I was looking forward to, because I had never considered listening to a podcast regularly before - well, I had listened to testing podcasts, but not on any kind of regular basis. The Let’s Talk About Tests Baby! podcast caught my eye, as the topic of telling stories in testing resonated with me; I’ve always felt communication is critical in testing, from learning a project to reporting a bug. I find myself studying the language, social interaction and psychology aspects of testing more and more lately. I liked that the podcast itself was short but thought-provoking - definitely worth a listen if you’ve got a spare 15 or so minutes!


Day 4 I decided to trade blogs with a great developer at work. I didn’t like the idea of just telling someone to read a blog; I felt I should reciprocate, and I like the idea of understanding what other people read too! So I shared Michael Bolton’s blog, particularly this post on what a tester is, because I feel it sums up how I view my own testing. In return I got Joel Spolsky’s blog, which was quite an illuminating view of how perhaps many developers feel about their work too.


Day 5 I basically thanked Rosie Hamilton for all of her hard work on the testing survey she has conducted over the last few months. I think we all make a lot of generalisations and assumptions about the state of the wider testing community, and I really value any work done to try and provide some data to consider. The more people do this kind of thing the better, because it means we can hopefully start to measure the difference we could make by constantly improving the testing community. I’d like to see the results of that improvement come out in data like this; or if not, at least we can try to learn and understand why not. I also like the theme of talking about our backgrounds in testing, because I think we all bring a variety of different backgrounds and skills to our day-to-day testing!


Day 6 I really struggled with, because I had no idea what could be described as a “crazy” test (I generally take the word “crazy” to mean something nonsensical or illogical). I decided it would be “crazy” to test weather forecasting, due to how unpredictable it can be - or so I thought! I recorded the weather AccuWeather predicted for particular hours two days ahead and checked whether it was correct. It turned out to be pretty accurate: it had predicted cloudy weather for most of the day with rain showers at 11am, 1pm and 3pm, and it did indeed rain at those specific moments. Or maybe it was just lucky that I live in Manchester and that is pretty regular weather around here, haha.


Day 7 I did a little research and found this neat little tool for tweaking the colours on websites to account for colour blindness. I found a couple of places on this blog that needed tweaking (the main one being the shades of blue used for link text, which have been ever so slightly changed so they are easier to read on a white background).


I also managed to complete days 10 and 16. For the former, I heard about Ministry of Testing’s “Masters of the Ministry” and booked myself onto Richard Bradshaw’s awesome-looking week of workshops. For the latter, I attended a non-testing event in the shape of Manchester Tech Nights, partly because I’ve attended before and was curious to see what the numbers of testers were like, and also because the head of HR at ResponseTap, Rebecca Taylor, was speaking.

What I almost achieved or technically did achieve

  • Number 14 “Step out of your comfort zone” - I’m hoping to do this soon by appearing on the Let’s Talk About Tests, Baby! podcast! I hate listening to myself, so I’m definitely not comfortable with it, but I’m looking forward to giving it a good go anyway! I think I might enjoy it though!
  • Number 19 “Find and use a new tool” - I think I technically achieved this with the colour blindness tool, but I really wanted to use this challenge to get my teeth into one of the many tools I keep seeing mentioned around the testing community.
  • Number 21 “Pair test with someone” - I probably achieved this as well, because in my current role I try to do it as much as possible anyway. In fact, I pair tested with a developer last week.
  • Number 26 “Invite a non-tester to a testing event” - I regularly plug TestBash at every meetup I attend, so technically I’ve invited a lot of them!

No more excuses, here are some I can at least achieve right here

  • Number 17 “Find and share a quote that inspires you” - “At the simplest level, people are doing testing all the time. You often run and analyze tests without even knowing it.” - Gerald Weinberg, Perfect Software and Other Illusions About Testing, 2008.
  • Number 22 “Share your favourite testing tool” - well, the tool I use the most is a pen and notepad, but my favourite testing tool is probably the programming language Python. I write a lot of tools for my own testing with it and I love how versatile it is, thanks to its popularity. I don’t think I like any tools that describe themselves as testing tools; I’ve not used one that I’ve liked - TestLink or DevTrack, for example.
  • Number 27 “Say something nice about the thing you just tested” - OK, so I’ve recently been testing a particular new application at work and I love how easy it is to test in isolation. It’s a service that consumes messages from other applications, so it’s reliant on those messages in order to exercise its logic. However, the developers helpfully included some API endpoints to simulate the messages, so where necessary it was possible to test the logic of the service in isolation quite easily (a rough sketch of what such a test might look like follows below). It has meant we can test some things a little earlier and more quickly than waiting for a fully integrated environment (although obviously we still need to test that too).
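
To give an idea of what testing against such a simulation endpoint looks like, here is a minimal sketch in Python. The service address, endpoint paths and message fields are all invented for illustration; the real service is internal.

    # Hypothetical sketch: inject a simulated upstream message via the
    # developers' simulation endpoint, then check the service processed it.
    import requests

    SERVICE = "http://localhost:9000"  # made-up address for the service under test

    # Post a message shaped like one an upstream application would send.
    message = {"type": "order-created", "orderId": "42", "amount": 9.99}
    resp = requests.post(f"{SERVICE}/test/simulate-message", json=message, timeout=5)
    assert resp.status_code == 202, resp.text

    # Query the service's own API to confirm the message was handled.
    order = requests.get(f"{SERVICE}/orders/42", timeout=5).json()
    assert order["status"] == "received"

No fully integrated environment is needed for this kind of check, which is exactly why it lets us exercise the service’s logic earlier.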

It was fun!

I’m a little annoyed that I didn’t complete the challenge, but I can of course still do some of them later. I will definitely try to keep up some of the good habits from this challenge, like commenting on blogs, listening to podcasts and occasionally testing other people’s stuff and sending them helpful bug reports.

Friday, 12 February 2016

North West Tester Gathering

Introduction

In the last few weeks I’ve attended my first ever testing meetups, in Manchester and Liverpool. Both were organised by a group called the “North West Tester Gathering”, which you can find here on meetup.com. Other than online, I had never spoken to any testers outside of the companies I’ve worked for, and I was really looking forward to it. I wanted to go for two reasons:
  • To listen to other testers’ experiences and try to learn from them: the problems they faced and the solutions they chose.
  • To talk about my own experiences, seek out fresh opinions and ideas, and discuss the challenges I face. This is not necessarily because I don’t believe I can face the challenges alone, but because I can never think of everything and I like to try out ideas that I might never have thought of myself.

Speakers

For the first meetup, in Manchester, there was only one main speaker: Richard Bishop from Trust IV, a software testing consultancy. The main topic of the talk was network virtualisation, a technology that allows you to “stub” or simulate network conditions, such as a user visiting your website through an iPhone on a 2G network. The tool demonstrated was Hewlett Packard’s HPE Network Virtualisation.
The second meetup, in Liverpool, had two speakers: Vernon Richards and Duncan Nisbet. Vernon’s talk was about the common myths in testing that we all know, and how we can tackle these myths - mainly by improving how we talk about testing in the first place! Duncan’s talk was about exploratory testing and how we probably all conduct exploratory testing already; we just don’t include it in our existing processes.


You can find videos of these talks here:
“Myth Deep Dive” by Vernon Richards:
“Exploratory Testing” by Duncan Nisbet:
{will add when it’s uploaded!}


Main Takeaways

I found all of the talks engaging and very relatable! I fully recommend watching the videos if you’re new to discussing the world of testing!
“Network Virtualisation” by Richard Bishop
  • Richard showed us some figures produced by one of the large big data companies forecasting how the technology market would look in 2016. He especially highlighted the rise of end users relying on mobile devices to interact with products. This was useful food for thought, especially as I’m involved with a project which could be viewed on mobile.
  • He also gave some very effective examples demonstrating the value of performance testing, as well as the need to validate your assumptions (which applies to any testing!). He described an interesting test where they took network speed samples before and during a major football match and found that the speed was faster during the match - against their assumption that it would be slower!
  • I’ve definitely still got a lot to learn regarding performance testing; right now it feels like a domain rich with specialist knowledge (or at least different knowledge, for example the need to understand statistical significance). I now know what “jitter” means: variation in the delay between network packets arriving!
  • Richard also provided some useful example use cases, such as Facebook’s “2G Tuesdays”, where Facebook employees are asked to use Facebook at speeds as slow as 2G, to help them understand the experience of users in more remote or developing areas of the world. I felt this was an effective example of the lengths Facebook goes to in helping its employees empathise with these customers, and therefore take the product’s performance on slow networks seriously.
“Myth Deep Dive” by Vernon Richards
  • Vernon’s talk mainly focused on talking about testing better to non-testers. A lot of the myths people believe about testing are partly caused by our own inability to talk about it.
  • There were a lot of themes I think we would all recognise, such as “the way to control testing is to count things” - that is, judging the value of testing by test cases executed or bugs reported, and how this isn’t necessarily useful.
  • I really recommend you watch the video above! But the other themes were: "Testing is just clicking a few buttons" and "Automated testing solves all your problems".
“Exploratory Testing” by Duncan Nisbet
  • Duncan’s talk focused on typical examples of where we all perform exploratory testing but simply don’t think of it as exploratory - we don’t value it because we don’t identify it.
  • He also talked about exploratory sessions being iterative: you spend time exploring, learn what you can and then repeat, designing further tests based on what you’ve learnt.
  • He also described the difference between good and bad exploratory testing as being how well the tester can explain what they did in a session. Good exploratory testing can be explained and justified; it isn’t random, and a tester should be able to easily explain what they were doing and why.

Socialising!

So other than the main talks, I was attending these meetups to meet and talk to other testers! I introduced myself and got chatting to quite a few different people; some I already knew from my days at Sony in Liverpool, others I met for the first time. It was nice to be able to share stories and experiences. I highly recommend attending meetups just for this, really: you can learn a lot from others and get some different points of view on your testing ideas.

Being brave…

At the Manchester meetup I caught up with Leigh Rathbone, who was organising the Liverpool meetup. During the course of our chat, I think my passion for testing got out, and Leigh asked if I wanted to stand up and do a lightning talk in Liverpool. I don’t take opportunities like this lightly, so I accepted. I think the process of writing these blog posts has helped prepare me a little, but I had certainly never stood up in front of 80 people, let alone people from my own profession, some of whom are massively more experienced than me and whom I have a lot of respect for.
I chose to talk about the very subject I had passionately discussed with Leigh: diagrams. In my recent work I’ve found many examples where people try to explain themselves in words, either written or spoken, and fail. Not everything is easy to explain this way - I have definitely found that right here on this blog! The point I tried to make was that some information is better explained in a diagram or chart, e.g. timelines, flowcharts, mind maps and entity relationship diagrams, to name a few. It’s worth considering this when we are trying to explain ourselves or when someone is struggling to explain something to us. I explored this theme a little in my post Test Cases - do we need them?
I also quickly recommended a book that I believe every tester should read: “Perfect Software and Other Illusions About Testing” by Gerald Weinberg. I had never read a testing book before, and I’m fairly sure a lot of testers haven’t either. I particularly like this book because it addresses the very topic Vernon was talking about: explaining what we do as testers in terms anyone can understand. I’m also very much a fan of Jerry’s writing style; his stories and anecdotes make his points so much more memorable and relatable!

Summary


  • You should attend testing meetups! Even if you’re not a tester!
  • Even if I knew something already about the topics discussed, I always had something to learn or a new way of looking at it. I’d like to think I will always learn from the talks at these meetups.
  • Richard, Vernon and Duncan are really friendly and engaging people to talk to!
  • I shouldn’t be afraid of talking in front of lots of testers, because they are friendly people, and I must have made some kind of sense, as people came to thank me and chat about diagrams! I hope this inspires other people who are nervous or unsure about talking to give it a go! Don’t listen to your brain!
  • Take opportunities with both hands when you see them - it can be very rewarding!
  • I’ve only attended two meetups so far and I’ve got so much to talk and think about!

Wednesday, 20 January 2016

Test Cases - do we need them?

Introduction

I’ve worked at three different companies now as a tester, and I’ve read, written and executed many different types and styles of test case. My time at a large corporation in particular, working with huge numbers of test cases written by many different people, gave me some really varied experience with them.
Not only that, but these three companies had different approaches to their processes and testing reflected that. I’ve worked with gigantic test suites of thousands of test cases, projects where the test cases were a single spreadsheet and projects where I tested with no test cases at all.
Which raises the question: do we need test cases? Is there such a thing as too many? Or too few?

The realisation

I once asked my testing team this question: what do you find test cases useful for? Some of the answers I got back were along these lines:

“To make sure we check everything”
“To work out what to test”
“To help us learn other areas of the system we haven’t tested before”
“To have confidence we have tested everything and not forgotten anything”

There’s a common theme here, the realisation that test cases are just a form of documentation. Documentation of what you are testing and the kinds of tests you want to run. Not only that, but as testers we use test cases to assist us in learning the system and designing our tests. In other words, we write out test cases in order to figure out what tests we want to run.

So if we’re not even designing our tests before we write them, then how can we hope to write them to a good standard? Are we even thinking about writing them to a standard? Can all tests fit any particular standard?

By having this documentation and putting a tick next to each test, testers also find confidence that they have thoroughly tested the system. So if test cases are a form of documentation….

Do we need documentation?

I think any tester would answer this with a yes. Without documentation, you are wholly reliant on memory and on what people tell you. Documentation almost always exists somewhere - even if it’s not “formal” documentation (e.g. a written document, diagram or perhaps a wiki), it might just be an email, a set of requirements or your notes observing the behaviour of the system. Technically, the code of a program is the ultimate form of documentation - it’s just that it might not be very easy to read! Documentation is a way of articulating information in a more easily understandable form, and as testers we want to understand as much as possible about the system we are testing, so having easily understandable documentation is very valuable to us.

So, we’ve established we do need documentation, and we’ve established that test cases are only one form of documentation. Maybe then the question to ask is…

Are test cases always the right kind of documentation?

Documentation is a way of articulating information, so the way we produce documentation influences how well people understand it. There is also a cost to documentation: it can never be produced in zero time, and it takes skill to create documentation that is well written and understandable to a given audience.
Just as there are good and bad authors, documentation is a creative art.

My experience at the large corporation, and of the wider testing industry, is that there appears to be a general belief that test cases can be written by anybody and therefore read and executed by anybody. Some note that there is a skill to this, but the focus is on the production of test cases rather than on their quality. In other words, the idea that you could produce good testing without test cases seems like a massive risk.

However, this desire for test cases, and the fear of lacking them, drives people to create test cases even where they may not be useful. Most test cases I’ve seen follow this format:
  • Summary
  • Preconditions
  • Steps
  • Expected Result

Now, given a very simple piece of software that takes in a value X, determines whether X > 5 and then does Y or Z accordingly, we might write some test cases that look like this:
[Image: testCases.png - the example documented as written test cases]

This is a very simplistic example, but take note of how much I’ve had to write to fit this format, and how long it takes you to read it. It probably takes me a few minutes to write these sections, fix any spelling mistakes and review it all to make sure it makes sense.
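
For comparison, the behaviour being documented is trivial to express in code. Here is a quick sketch in Python (the outcome names “Y” and “Z” are stand-ins for whatever the real actions would be):

    # Minimal sketch of the example system: take a value x, branch on x > 5.
    def process(x):
        return "Y" if x > 5 else "Z"

    # The same checks the written test cases describe, including the boundary.
    assert process(6) == "Y"   # just above the threshold
    assert process(5) == "Z"   # the boundary itself
    assert process(4) == "Z"   # below the threshold
    print("all checks passed")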

Now, consider if I documented the same example as a diagram:

[Image: decisionTree.png - the same example as a flowchart]
How long did it take me to draw this? About 30 seconds in a decent flowchart program (such as yEd). I had fewer errors to correct, and this particular example mapped very easily onto a flowchart. How long did it take you to understand the diagram? How long did it take you to read the test cases? Which was more effective at helping you understand the behaviour? How long would it take you to think of some test ideas?

Hang on, where can I document my tests?

So we’ve not yet satisfied the desire for documentation of our tests. The diagram only satisfies the need for an explanation of the system; test cases outline the specific tests we may want to run. So maybe we still need test cases, and just need a few diagrams to help explain the more complex areas? Or is there another solution?

Enter...mind maps

So I did a little digging and looking around, feeling that surely there must be better ways to design tests. Test cases can be very slow to write, slow to read and hard to keep in a consistent format. This is when I came across mind maps, described in this blog by Darren McMillan. A mind map is not too different from a brainstorm: simply start with a topic and begin branching ideas from it. Taking my example above, I might end up with this mind map (created in a great tool called XMind, found here):
[Image: testCases2.png - a mind map of test ideas for the example]
Here I’ve explored some test ideas and started to design my tests. Exploring these ideas may raise questions I don’t know the answer to right now: is storage a concern? If it is, how does the value get stored? If the underlying system uses a MySQL database, is security a concern - do I need to test for SQL injection? Does performance need to be considered? I can also show this to other people, and they can quickly understand the scope of my testing without having to read individual test cases; likewise, I can quickly observe the scope of my own testing and keep adding to it. It’s much easier to consider the overall picture of my test plan using this diagram.

But isn’t it difficult to draw diagrams and fit things in?

Yes, and I don’t think mind maps are a replacement for test cases. Instead, I think they are a tool that can be used in conjunction with test cases to help you design and document tests in a more readable format, more quickly. There are still cases where it is difficult or time-consuming to create a mind map or diagram. I envisage that you might start with a flowchart and formal documentation of a system, to understand what you are testing; then create a mind map to explore what you want to test around that system; and finally, perhaps, write test cases to formally document your tests, especially where they have complex steps.

So are there any quick, easy solutions to design and document all tests?

No, I don’t think so. By the very nature of having to design, write or draw tests, they can never be created in zero time. Some systems or tests will be complex, and you cannot run away from that complexity: drawing a diagram or writing test cases may reduce it or make it easier to understand, but sometimes it simply cannot be simplified.
In the process of thinking about and writing this blog post, I’ve come to the conclusion that the problem is not that we don’t need test cases. The problem is that we are not always using the right tool for the job, and sometimes as testers we aren’t thinking carefully about the format in which we want to write or convey our tests and documentation. Flowcharts and mind maps give us more tools for this purpose, and they are definitely not the only forms of diagram or documentation we can use!

But what if I want to collate my tests and re-visit them for regression?

I think this is the crux of why test cases become so relied upon. Why is it useful to revisit test cases? Is it because you don’t want to miss an important test that you might have forgotten? I would argue that if we are repeating an important test regularly, we’re unlikely to forget it; and the fact that something is in a test case format doesn’t mean its importance is highlighted, which means you’re either regularly running a lot of test cases “just in case” or potentially missing those tests anyway.
Wouldn’t it be better to document a system in a way that conveys information, highlighting the important areas, rather than trying to fit all such information into a test case format?

That nagging feeling…

I still feel dissatisfied with my conclusions. I still have concerns that test cases and diagrams alike require a lot of skill to write or draw in a way that others can easily understand. Every one of us writes or draws things differently, which makes it difficult to be consistent; the need to set standards, train people and conduct reviews still feels necessary.
However, in my career so far as a tester, and in my conversations with other testers, there has definitely been too great a reliance on writing test cases over other forms and styles of documentation. I do believe that as testers we can save time and improve the quality of our testing by considering other techniques and not relying solely on test cases.
But I still feel there could be a better way!

Summary


  • Test cases are not always the most appropriate format to document tests.
  • Diagrams like flowcharts can be used to document and explain systems better than a test case. Rather than using test cases to learn a system, we could use diagrams or more formal documentation of the system - if this documentation doesn’t exist already, it’s worth creating it!
  • Mind maps can be used to assist in designing and documenting tests, rather than designing the tests as you write test cases. Mind maps provide better visibility of your overall test plan.
  • Writing test cases or drawing diagrams still requires skill - diagrams are not necessarily better than test cases all of the time and not necessarily easier to create to a consistent standard.