
Thursday, 31 December 2020

A tester’s feedback on system feedback

It’s not surprising that as testers our default approach to a black box is to try to perform tests to understand it better. However, sometimes the desire or need to do this is a test in itself! Rather than advocate for more testing, sometimes I think we should advocate for more observability. More observability not only makes it easier to understand the system, bug or problem at hand, it unlocks and enables lots more testing, can make our testing faster, and directly helps the operation of the system too, making it easier for people to support in production.

In motor racing, there are roughly two main types of racing driver: those that adapt to problems with the car’s handling, and those that are really good at identifying those problems and feeding back to the engineers and mechanics so the car can simply be made easier to drive and faster. The really good drivers are good at both.

I feel many testers naturally fall into the first category when it comes to testing. For many good reasons we are quite patient and persistent at finding ways to test even when it is difficult, mundane or time-consuming, and I think our profession attracts the sort of mindset that loves to investigate, problem-solve and carefully understand problems step by step. This makes us really good at understanding systems just by observing behaviour. None of that is bad; these are our strengths and typically what we bring to most teams.

However, I feel there are times when we could be feeding back and suggesting ways to improve the system we are testing, in ways that make it easier to test. Sometimes we are asked to test software that is very difficult to understand beyond its behaviour, and we can spend a lot of time and effort testing to understand all of that behaviour.

In addition, in order to give feedback, make comments and suggest improvements on how the system gives feedback, we need a bit of technical understanding and experience of what is possible. However, I believe we can all learn a little bit more here, and it can all contribute enormously to the quality of the software we help produce.

Below are some suggestions on how we can assess the feedback a system gives us as testers, and therefore suggest ways to improve it.

Logging

The first and most obvious way to improve system feedback is to assess the quality of logging. Through logs we can make the system report not only errors but also behaviour. If we have some complex logic that data is sent through and we’re unsure which path it is taking, we can make the system log it: “processing data 1”, “data 1 was sent to x because y equalled z”.
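As a minimal sketch of what that could look like in practice, here is a toy example using Python’s standard logging module (the names and the routing rule are entirely hypothetical):

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("data-router")

    def send_to_x(record_id):
        # Stand-in for the real downstream destination.
        pass

    def route(record_id, y):
        # Log the decision path, not just the outcome, so anyone reading
        # the logs can reconstruct why each record went where it did.
        logger.info("processing data %s", record_id)
        if y == "z":
            logger.info("data %s was sent to x because y equalled z", record_id)
            send_to_x(record_id)
        else:
            logger.info("data %s was not sent to x because y was %s", record_id, y)

    route(1, "z")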

I frequently find the quality of logs on many projects to be quite poor, for a variety of reasons; they tend to be neglected. It’s not just testers who lack experience or knowledge of logging - many developers haven’t explored this much themselves either.

There are multiple aspects to this that I could write lengthy blog posts about, but to summarise, here are a few areas whose quality you can help assess:

Where are the logs?

If we have multiple servers or devices each writing logs, do I have to log in to each one separately to view them, or can we make access easier by putting them in one place (centralised logging)?

What logs are these?

There are lots of kinds of logs for different layers of software and context:

  • The operating system logs lots of low-level things like user access and system processes, but these are quite noisy and don’t tend to be useful for the average developer or tester.
  • Back-end system logs (like a Java process).
  • Front-end logs (like a website).
  • Audit logs (user access, what they accessed and when, whether they were denied access).
  • Business metrics (sometimes we can log when certain actions happened).
  • Is it easy to distinguish between the different types of logs?

UX of logs

Particularly when we send logs to centralised systems such as ELK (Kibana), there is work required to make the logs easy to read, navigate and understand.
For example:

  • Easily and accurately being able to filter for ERROR logs
  • Traceability - can we trace related events in different applications’ logs together with some common identification, such as an ID?
  • Formatting the log line data correctly (typically as JSON) so that it can be displayed correctly in Kibana - e.g. so a long stack trace with multiple lines appears as one log rather than as many separate logs (see the sketch after this list).
  • Easily being able to identify and separate different systems and environments - can we quickly distinguish between Prod/Live and test environments?
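The traceability and formatting points can be tackled together with a structured formatter. Here is a minimal sketch using only Python’s standard library (the field names and the trace_id convention are illustrative, not a fixed standard):

    import json
    import logging

    class JsonFormatter(logging.Formatter):
        # Render each record as one JSON object per line, so a multi-line
        # stack trace stays inside a single event in Kibana instead of
        # being split into many separate "logs".
        def format(self, record):
            event = {
                "timestamp": self.formatTime(record),
                "level": record.levelname,
                "message": record.getMessage(),
                # A shared identifier lets us trace one request across
                # the logs of several applications.
                "trace_id": getattr(record, "trace_id", None),
            }
            if record.exc_info:
                event["stack_trace"] = self.formatException(record.exc_info)
            return json.dumps(event)

    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logging.getLogger().addHandler(handler)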

Errors and stack traces

Do we write error logs when something has gone wrong? If we see a bug as a user, try to forget that you can “see it” and know the steps - has the system written an error log that would help us identify the problem if we didn’t know that?
When we have errors, do we also write out the stack trace that goes with them? Even when the error itself is ambiguous or vague, the stack trace can give developers clues about what the underlying problem is.
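In Python, for example, logging.exception captures both at once (the failing function here is just a hypothetical stand-in):

    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger(__name__)

    def process(data):
        # Stand-in for real business logic that can fail.
        raise ValueError("unexpected payload shape")

    try:
        process({"id": 1})
    except ValueError:
        # logger.exception writes an ERROR log *and* appends the full
        # stack trace, giving clues even when the message itself is vague.
        logger.exception("failed to process data 1")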

Monitoring and metrics

In addition to logs, we can also observe the behaviours of a system via monitoring and metrics. So rather than just rely on negative feedback like errors from logs, we can also use positive behaviours, such as counting the number of times users have visited pages, or whatever “successful” use of the system means for us. Sometimes when things go wrong we don’t get errors - but we can still observe that something happened through a drop in positive or business metrics.
Does your system have somewhere you are collecting data on what it’s doing? This could be something like Google Analytics, where you can track what users are clicking on, or it can even just be logs like those above stored in ELK/Kibana - logging each time the system processes a piece of data successfully.
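As one concrete (and entirely optional) way of doing this, here is a sketch using the Prometheus Python client; the metric name, function and port are illustrative assumptions:

    from prometheus_client import Counter, start_http_server

    # Counts successful processing events. A sudden drop in the rate of
    # this counter can reveal a problem even when no error is ever logged.
    records_processed = Counter(
        "records_processed_total",
        "Number of records the system processed successfully",
    )

    def handle(record):
        ...  # real processing would happen here
        records_processed.inc()

    start_http_server(8000)  # exposes /metrics for Prometheus to scrape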

Summary

I try to keep at the forefront of my testing the thought “what if this happened in production - could we tell that the system or user did this?”. I believe that by adding this to my assessment of bugs and quality, I’m improving the overall quality of the broader system in terms of operability and supportability. It can also massively help my own testing - I have tested very complex, wholly back-end systems with no obvious user interface, and pushing for better feedback from the system made them far easier to test as well.

Thursday, 18 August 2016

I love testing and this is why!

Introduction

Over the last year I’ve been on a little bit of a journey of discovery. For 5 years I had been working as a tester without ever really seeing it as a career I wanted to keep. In this 6th year of testing, I finally realised I loved it after discovering the friendly and passionate testing community. I’ve learned to appreciate my own work, justify it and learn how to become better and better at it. Suddenly, what felt like a job that didn’t have any further growth exploded into a fascinating and immersive world where I could maximise my skills and help others realise theirs too. I’m taking this moment to write down why I love it so much, as a little preparation so that I’ve thought about what I want to say whenever I talk about this again in future.

Testing is endless learning

I’ve spent most of my life (up to this point) in education: I went to nursery at the age of 3 and graduated with my degree at the age of 21. On reflection, I feel I’ve always liked having that next target, be it grades for next year, progressing to university or securing full-time work. I’ve come to realise one of the things I relish about testing is always having something to learn. I now effectively describe my testing as learning, especially thanks to attending and studying Rapid Software Testing with Michael Bolton, which really hit this point home. As an example, before I started my current job, I knew very little about performance, security, API and automation testing. I can now confidently talk about each of these subjects and know where I need to improve.

When someone presents me something to test, I’m trying to learn as much as possible as soon as possible, and I love doing this - I love building on each bit of knowledge and using it to ask more and more questions. Let’s say, for example, I’m presented with an accounting service that integrates with other systems. I start reading the documentation or asking questions and learn that one of these integrations is accessed through an externally-facing API. So then I’m asking: what is this integration? What does it do? Who is it for? Why have we built it? Can I test it? What does it expect as a response? Is the API secure? What happens when I try to access it without authentication? What happens when an error occurs? What is the desired performance end-to-end with this integration?

Testing is about facts, though opinions are valued too

It’s sometimes easy for people to perceive testers as picky or pedantic people, always criticising or bringing bad news. If we are not careful with our expression, we can over-state a personal opinion or fail to justify why something is a serious problem. I personally enjoy trying to constantly improve my communication, maintain a high standard and keep learning how to interact better with so many different kinds of people. All the while trying to keep in check my own emotions while at the same time knowing when to listen to them.

Testing is about being involved everywhere

As a tester I find myself working at every point from start to finish of a software project. From trying to learn as much about our customers and the business to learning all about the behaviour of a particular line of code. I get to work with so many other roles such as development, business analysis, project management, systems administration, technical support, sales and marketing and all the wonderful and horrible ways people might like to use the software. Testers have moved on to become business analysts or project managers and I think it’s a very natural career step for some.

Testing is about being persistent

Like the world’s best detective, as a tester I try to leave no stone unturned and look beyond the obvious explanations. I actually find this one of the harder parts of testing: it’s easy to accept an explanation that you don’t fully understand, because sometimes you think you do but don’t really. Sometimes it’s also difficult to keep persisting, especially when a problem resolves itself, and maybe you don’t always have time to keep searching. However, the satisfaction in finally discovering the root of a challenging problem is so high that it makes it worth it. I try to remind myself of the detective analogy because it helps re-energise me (it feels more exciting!) when dealing with tricky problems.

Testing is about directly and indirectly helping people

In providing quality information, I’m helping people all of the time. Helping developers learn more about what code actually does, helping product owners track the progress of a project or helping everyone become more careful with their language and aware of assumptions. I’m also indirectly helping end users of software, trying to understand their needs and understand how these match against the software. I feel caring about people is so very, very central to testing.

Testing is about balance

I like to think of myself as pragmatic and realistic; I care about being critical of my own views with respect to considering someone else’s point of view. So I relish the challenge of balance - I see so many things in life, even life itself, as being about balance. There are always pros and cons, always another perspective, another time when things might make more or less sense. The most common challenge that crops up in testing is balancing the desire to learn as much as possible with being quick. Why is time important? Because to learn everything you need all of the time in the world, which you never really have. So you’re always prioritising and trying to identify where the most important pieces of information may be hiding so you can find them as soon as possible. Everything costs time, and so we are always asking ourselves whether it’s worth spending the time or if we are spending it in the right places.

Testing is a field ripe for discovery

While many people have discussed, experimented and written about testing for decades, it still feels like many of us are only just beginning to really understand it as a field. The spread of knowledge, and the encouragement to discuss how to become better at it, seems to be getting closer to other fields like programming. To some extent testing still feels like it lags behind those fields - we are only just starting to see more conferences and meetups. As a relatively new tester, I can’t speak for what it was like before, but I went a whole 5 years without even knowing there was a community who talked about it, let alone had ideas to improve it. I like this because it feels like an exciting place to be, with so many topics to explore and so much to possibly contribute to. There feels like a lot of work still left to be done to reach out to more people, both new testers and those who may become testers in future. Not to mention that as we educate ourselves about what testing really means, we also become better at helping those other fields understand it, and at interacting with them.

Testing is great for drawing on diversity

I make use of several different skills and bits of knowledge all of the time in my testing. An obvious example would be making use of my programming background to build useful tools or understand and read code. But I’ve also made use of my education to help me write reports, produce presentations and I’d even credit my studies of history (such as understanding how to analyse sources) with how I approach analysis. I’ve heard of examples of testers using their background in journalism to ask better questions and gather better answers or testers with a keen ear for audio being able to more accurately describe an audio problem. I love that testing so visibly benefits from diversity.

Testing is very psychological

I love psychology - it’s fascinating to me, and so much of my work is affected by my psychological state. Topics such as confirmation bias and social interaction come to mind as the most obvious examples. Understanding these areas helps me stay alert to some of the most easily missed issues. For example, consider usability testing and how you analyse the information you gather from it. You don’t just ask users a list of questions and take away their answers; you are looking out for so many psychological factors too - how are they interpreting the question? Why did they give that answer? What were they thinking when they performed that action? When they say that they want X, do they really need Y?

Testing is creative

In our pursuit of learning, it helps to be creative, because it allows us to explore much more than if we stick to the path laid out for us. In testing, being creative is highly rewarding: you are better able not only to discover issues that no one had thought of, but also to work around issues or make an innocent-looking problem worse.

Testing is about being a team player

Testing cannot be separated from the work done to produce software. I’m not a rock star, walking onto projects and “assuring quality” and telling people what they’ve done wrong. I’m here to support others in producing software, to work together and learn together. By working more closely with people and learning to become better at working with people, my testing becomes more effective, efficient and supportive. I personally hate working alone and love working in a team. I love “team spirit” and pushing each other to become better.

Summary

This is all I can think of right now on why I love testing; I’m sure more will crop up another time. Many thanks specifically to James Bach, Michael Bolton, Jerry Weinberg and Karen Johnson for unlocking this love and appreciation for testing, and for the ideas and analogies I use in this article. Particular thanks to Michael for this blog post, which inspired me to write this, and also to Andy Tinkham on this podcast for reminding me (through the idea of writing down what I think testing is) to write this.

Sunday, 7 August 2016

Games Testers should be proud of their work, not ashamed

Introduction

A few months ago I interviewed someone for a testing role and the question of past experience came up, to which the interviewee said something along the lines of ‘well, this might be a bit sad, but I’ve done a bit of video game beta testing’. Then a week later I noticed a discussion on the Ministry of Testing’s Slack team where people were discussing how they had got into testing. A couple of people mentioned how they found it useful to start in games testing, but felt games testers get a lot of stick for what they do. This prompted me to jump in and talk about my story, as I’m quite passionate about this subject! In fact this very sentiment is where my career pretty much started after university!

Pizza, pop and candy

I will never forget this line. I had just finished university and was applying for every programming job I could find. There was a wild week (they seem to happen every year or two) where I had accrued three interviews to take place in one week, just prior to one of my brothers getting married. I had two interviews for programming roles and one for a games testing job.
The first interview I didn’t pass, the second one I seemingly impressed and got offered the job to test games and the third one? Well, bearing in mind I wanted to get a programming job to make the most of my degree, I went into it feeling glad to have the games testing job in my pocket but wanting the programming job. However, the third interview went something like this:

“So we see from your CV that you’re interested in working in the games industry?”
“That’s right, but I’m keen to learn whatever I can from any programming role I can find!”
“Pizza, pop and candy”
“Excuse me?”
“That’s what you ‘gamers’ like isn’t it?”
“Erm…”
“Yes, we hired a programmer from that games company nearby, what was the name?”
“Traveller’s Tales?”
“Yes, that’s it, but they didn’t last long. Great programmer, but not a good fit for our organisation...”

Needless to say, that was the most awkward, aggressive and stressful interview I’ve ever experienced. I honestly don’t know why they decided to waste their and my time in such an interview. But it made me suddenly feel very, very comfortable to be accepting a job in games testing and definitely removed any second thoughts! My testing career effectively started from a statement of disgust about ‘gamers’.

Learning to test

So here I was in a testing job, having barely heard the word ‘testing’ before (it seems crazy to think now that you can complete a programming degree without ever hearing the phrase ‘unit test’!). I’ve loved video games all of my life and I had seen my fair share of bugs with them, so I knew pretty well what ‘good’ looked like and what ‘bad’ looked like. I also play a lot of different types of game at varying difficulties, so I had a lot of knowledge to draw on - not to mention a degree in games design & programming. I understood what people wanted from games, how they are made and what could go wrong. I hadn’t worked a proper full-time job before, but I was pretty confident I could do a good job.

I learnt a huge deal from games testing, especially regarding the sheer variety of forms a ‘bug’ or ‘problem’ could take. Not only were there the obvious bugs that a lot of people might recognise, like crashes, hangs and freezes (yes, they had technical differences) and graphical glitches, but also problems you don’t necessarily see visually. The bulk of the testing was exploratory in nature, usually with a task to focus on a particular area - be it a collision detection test or a playthrough of the game just to make sure it could be completed.
To give you some idea of the variety of problems we would look out for:
  • Graphical - the most obvious and visual kinds of bugs that affected the look and feel of the game.
  • Functional - quite simply, bugs to do with what the game should ‘do’. This was very contextual as every game is designed differently, but there are some fairly typical consistencies, such as being able to steer a car in a racing game.
  • Difficulty - this was pretty subjective, but we had to judge whether ‘easy’ was consistent and didn’t suddenly jump in difficulty on a particular level.
  • Audio - maybe the voices in the game don’t synch up with the animations of the characters or the quality of the audio was poor (such as once hearing the ‘beeps’ of the recordings being started and ended for the voice actors!).
  • Legal - sometimes games would include trademarked or copyrighted material which they hadn’t credited. Or sometimes they could accidentally feature close similarities to real world products.
  • Health - we were trained on photosensitivity and how to identify bugs that would trigger epilepsy.
  • Technology - we were sometimes testing for bugs with particular technologies such as 3D, motion control, force feedback devices, augmented reality, virtual reality headsets and more! All of these technologies feature their own particular bugs and problems.
  • Compliance - we checked that games met the standards set out by the corporation; games that didn’t comply with these standards would be rejected.
  • Localisation - although we didn’t translate games nor could necessarily read all languages, we did check for bugs related to the subtle differences in localisation. An innocent missing language file would be enough to make a game unplayable or crash. You don’t need to be able to read German to notice the text doesn’t fit inside buttons!
  • Multiplayer/Performance - we tested for load, stress and also for exploits. Some games need to be able to handle large amounts of players and exploits can ruin people’s enjoyment of their online gaming experience.
  • Compatibility - there were some cases where games were ported to new hardware or developed to communicate with older systems. We tested to find compatibility problems in these scenarios.
  • Transactions - we tested to make sure real world payments could be completed and that people couldn’t exploit these systems.
There are probably a few I’ve forgotten, but hopefully that gives you some idea of the wide range of issues we would learn about and find!

Along with learning about the wide variety of problems, I also learned about testing techniques and strategies. Unfortunately, the company was heavily using test cases at the time. There were differing views on these, and I learned to find and report problems from my exploratory sessions more than from filling out the boring and repetitive test cases. There were also the beginnings of test automation, which was being used to help carry out load and stress tests in multiplayer games, along with experimental use in running regression test cases.
I learned two major lessons about testing in this job:
  1. I hated test cases and found them fairly useless. Occasionally they were useful to find test ideas or guide exploring a product, but they felt awkward to write and use and just generally wasteful of time. I found most of my bugs when I was performing exploratory testing.
  2. Automating tests on a product that was constantly changing, especially visually, was a waste of time. By the time we had written automated tests for the product, it had changed again and we had to re-write them. Quite a lot of game projects didn’t require testing after release, so the value of the automated tests was rarely seen. They only seemed to return some value when running large-scale performance tests.

What did I pick up that could apply to other testing jobs?

There were things that I picked up in my first job that I hadn’t formed thoughts around or hadn’t found the language to describe effectively but were important to becoming a better tester. While I gained better understanding and language later in my career, my first job gave me awareness of these issues:
  • Managing relations with developers - I quickly learned that diplomacy was important when raising bugs with developers. I picked up an analogy that I re-used in future testing interviews - “imagine an artist has created a magnificent piece of art and you come along and say ‘you missed a bit there’ - you can imagine that’s quite irritating so I look for ways to soften that blow”.
  • Costs of automation - I didn’t have the language to describe it, but I learned the hidden costs of automation and the temptation to try and find ways to use it. I was already wary about the value and how easily people could misunderstand that automation is some kind of replacement for ‘all’ testing.
  • Knowing when to stop testing - I had started to get experience of when and where I might want to stop testing. While we were generally time-boxed in my first role, I definitely saw the diminishing returns and learned which risky areas I wanted to focus my testing on.
  • Knowing when to repeat tests - I also started to form ideas on how to judge regression testing although I didn’t know at the time how else it could be done other than re-running test cases.
  • Justifying my testing - I had started to build up confidence through my experience of being able to justify my testing and write effective bug reports. While I didn’t have the best language to explain myself, I at least had experience to refer to that gave me a determination to learn better language!
  • Determination to understand more about the product, earlier - Typically in my first job, we didn’t have very good documentation to refer to (sometimes none at all!). When we did have documentation, it was much easier to quickly provide feedback on the product. So in future roles, I had a strong determination to get involved earlier in projects so I could understand more about them, much more quickly. I knew documentation was expensive but I felt that trying to access sources of information was crucial to getting my testing really started.
  • Thinking ‘outside of the box’ - I quickly learned that the ‘best bugs’ were the ones that people hadn’t even thought of. Sometimes these would be fairly expensive bugs to fix and it was valuable to find them as soon as possible.
  • Being organised, fast and clear - I took pride in my work being organised but fast and effective. I quickly understood the need to setup my tests quickly and maximise the amount of time I had available. I created my own techniques for being able to do this, while still providing clear feedback in my reports, which have served me well to this very day!
  • Being adaptable and willing to learn - My programming background helped me pick up technical or complex products very quickly and I learned that this can be very valuable. Again, it was useful being able to maximise the time I had available to test and it meant I could provide quick and clear feedback. I then looked for opportunities to learn more skills that could complement my testing efforts.
  • Understanding and empathising with the end user - If you’re in a games job, you’ve probably played a lot of video games and so you probably have a pretty good idea of what end users do, what they want and what they like. So it was easier to think like an end-user and therefore I really appreciated the value this brings when coming up with test ideas, justifying your testing and justifying your bug reports. This experience still motivates my desire to always understand more about the psychology of the end users as I highly value it in my testing.

Summary

I was a games tester for 3 years in two different companies and I will never talk about it as a negative experience. That’s not to say that everything was perfect, the games industry has a lot of problems that still affect it today. I left the industry partly because testers are treated as an end-of-cycle role and because I was keen to build my career in testing. However, I still managed to leave and secure a job I would describe as being ‘professional testing’ - I wouldn’t have been able to do this without the experience I gained from games testing. If you are currently testing games and you feel like you’re not a professional tester and have to prove something - you don’t! You’re already a professional tester!

Wednesday, 20 January 2016

Test Cases - do we need them?

Introduction

I’ve worked in three different companies now as a tester and I’ve read, written and executed a lot of different types and styles of test case. My time especially at a large corporation working with large numbers of test cases written by a large number of different people really gave me some varied experience with them.
Not only that, but these three companies had different approaches to their processes and testing reflected that. I’ve worked with gigantic test suites of thousands of test cases, projects where the test cases were a single spreadsheet and projects where I tested with no test cases at all.
Which raises the question: do we need test cases? Is there such a thing as too many? Or too few?

The realisation

I once asked my testing team this question - what do you find test cases useful for? Some of the answers I got back were something like this:

“To make sure we check everything”
“To work out what to test”
“To help us learn other areas of the system we haven’t tested before”
“To have confidence we have tested everything and not forgotten anything”

There’s a common theme here, the realisation that test cases are just a form of documentation. Documentation of what you are testing and the kinds of tests you want to run. Not only that, but as testers we use test cases to assist us in learning the system and designing our tests. In other words, we write out test cases in order to figure out what tests we want to run.

So if we’re not even designing our tests before we write them, then how can we hope to write them to a good standard? Are we even thinking about writing them to a standard? Can all tests fit any particular standard?

By having this documentation and putting a tick next to each test, testers also find confidence that they have thoroughly tested the system. So if test cases are a form of documentation….

Do we need documentation?

I think any tester would answer this with a yes. Without documentation, you are wholly reliant on memory and what people tell you. Documentation almost always exists somewhere - even if it’s not “formal” documentation (e.g. a written document, diagram or perhaps a wiki), it might be just an email, a set of requirements or your notes observing the behaviour of the system. Technically, the code of a program is the ultimate form of documentation - it’s just it might not be very easy to read! Documentation is a way of articulating information in a more easily understandable way, and as testers we want to understand as much as possible about the system we are testing. So having easily understandable documentation is very valuable to us.

So, we’ve established we do need documentation, and we’ve established that test cases are only one form of documentation. Maybe then the question to ask is…

Are test cases always the right kind of documentation?

Documentation is a way of articulating information, so the way we produce documentation influences how well people understand it. There is also a cost to documentation: it can never be produced in zero time, and it takes skill to create documentation that is well written and understandable to a given audience.
Just as there are good authors and bad authors, documentation is a creative art.

My experience at the large corporation and my experience of the wider testing industry is that there appears to be a general understanding that test cases can be written by anybody and therefore read and executed by anybody. Some note that there is a skill to this, but the focus is on the production of test cases, rather than the quality of said test cases. In other words, the idea that you could produce good testing without test cases seemed like a massive risk.

However, this desire for test cases, and fear of a lack of them, drives people to create test cases where they may not be useful. Most test cases I’ve seen follow this format:
  • Summary
  • Preconditions
  • Steps
  • Expected Result

Now, given a very simple piece of software that takes in value X, determines if X > 5 and then does Y or Z depending on this, we might write some test cases for this that look like this:
[Image: a table of example test cases for this behaviour, each written out with a summary, preconditions, steps and an expected result.]

This is a very simplistic example, but take note of how much I’ve had to write here to fit this format and how long it takes you to read it. Maybe it takes me a few minutes to write these sections, fix any spelling mistakes and review it to make sure it makes sense.
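For reference, the behaviour under test here is tiny. A minimal sketch (the function name and the return values are hypothetical):

    def handle(x):
        # The behaviour described above: take in value X and branch on X > 5.
        return "Y" if x > 5 else "Z"

    # The whole test-case table boils down to a handful of checks:
    assert handle(6) == "Y"   # just above the boundary
    assert handle(5) == "Z"   # on the boundary
    assert handle(0) == "Z"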

Now, consider if I documented the same example as a diagram:

[Image: a flowchart of the same behaviour - take in X; if X > 5, do Y; otherwise, do Z.]
How long did it take me to draw this? About 30 seconds in a decent flowchart program (such as yEd). I had fewer errors to correct and this particular example mapped very easily into a flowchart. How long did it take you to understand this diagram? How long did it take you to read the test cases? Which was most effective at helping you understand the behaviour? How long would it take you to think of some test ideas?

Hang on, where can I document my tests?

So we’ve not satisfied the desire for some documentation of our tests here. This diagram only satisfies the need for an explanation of the system. Test cases outline specific tests we may want to run. So, maybe we still need test cases and we just need a few diagrams to help explain more complex areas? Is there any other solution?

Enter...mind maps

So I did a little digging and looking around, feeling that surely there are better ways to design tests. Test cases can feel very slow to write, slow to read and hard to keep written in a particular format. This is when I came across mind maps, described in this blog by Darren McMillan. A mind map is not too different from a brainstorm - simply start with a topic and begin branching ideas from it. Taking my example above, I might end up with this mind map (created in a great tool called XMind, found here):
[Image: a mind map branching from the example into test ideas, including questions around storage, security and performance.]
Here I’ve explored some test ideas and started to design my tests. Exploring these ideas may raise questions that I don’t know the answer to right now - is storage a concern? If it is, how does it get stored? Maybe security is a concern: if the underlying system uses a MySQL database, maybe I need to test for SQL injection? Maybe performance needs to be considered? I can also show this to other people, and they can quickly understand the scope of my testing without having to read individual test cases - and I can quickly observe the scope of my own testing and keep adding to it. It’s much easier to consider the overall picture of my test plan using this diagram.

But isn’t it difficult to draw diagrams and fit things in?

Yes, I don’t think mind maps are a replacement for test cases. Instead, I think this is a tool that can be used in conjunction with test cases to help you design and document tests in a more readable format, quicker. However, there are still cases where it can be difficult or time-consuming to create a mind map or diagram. I envisage that you may start with a flowchart and formal documentation of a system first, to understand the system you are testing. You would then create a mind map to explore what you want to test around this system. Finally, you would perhaps write test cases to formally document your tests, especially where they have complex steps.

So are there any quick, easy solutions to design and document all tests?

No, I don’t think so. I think by the nature of having to design, write or draw tests, they can never be created in zero time. Some systems or tests will be complex and you cannot run away from the complexity - drawing a diagram or writing test cases may be a way of reducing or making the complexity easier to understand, but sometimes it’s not possible to simplify it.
In the process of thinking about and writing this blog post, I think I’ve come to the conclusion that the problem here is not that we don’t need test cases. The problem is that we are not always using the right tool for the job, and sometimes as testers we aren’t thinking carefully about the format in which we want to write or convey our tests and documentation. Using flowcharts and mind maps gives us more tools for this purpose, and they are definitely not the only forms of diagram or documentation we can use!

But what if I want to collate my tests and re-visit them for regression?

I think this is the crux of why test cases become relied upon so much. Why is it useful to re-visit test cases? Is it because you don’t want to miss an important test that you might have forgotten? I would argue that if we are repeating an important test regularly, we’re unlikely to forget it and simply because it is in a test case format doesn’t mean its importance is always highlighted - which means you’re either regularly running a lot of test cases “just in case” or potentially missing these tests anyway.
Wouldn’t it be better to document a system in a way that conveys information - highlighting important areas, rather than trying to fit all such information in a test case format?

That nagging feeling…

I still feel dissatisfied with my conclusions, I still have concerns that test cases and diagrams still require a lot of skill to write or draw in a way that is easily understandable by others. Every one of us will write or draw things differently and this makes it difficult to be consistent. The need to set standards, train people and conduct reviews still feels like it’s necessary.
However, I do feel that in my career so far as a tester, and in my conversations with other testers, there has been too large a reliance on writing test cases over other forms and styles of documentation. I believe that as testers we can save time and improve the quality of our testing by considering other techniques and not relying on test cases alone.
But I still feel there could be a better way!

Summary


  • Test cases are not always the most appropriate format to document tests.
  • Diagrams like flowcharts can be used to document and explain systems better than a test case. Rather than using test cases to learn a system, we could use diagrams or more formal documentation of the system - if this documentation doesn’t exist already, it’s worth creating it!
  • Mind maps can be used to assist in designing and documenting tests, rather than designing the tests as you write test cases. Mind maps provide better visibility of your overall test plan.
  • Writing test cases or drawing diagrams still requires skill - diagrams are not necessarily better than test cases all of the time and not necessarily easier to create to a consistent standard.