
Thursday, 31 December 2020

A tester’s feedback on system feedback

It’s not surprising that as testers our default approach to a black box is to try to perform tests to understand it better. However, sometimes the desire or need to do this is a test in itself! Rather than advocate for more testing, sometimes I think we should advocate for more observability. More observability not only makes the system, bug or problem easier to understand, it unlocks and enables a lot more testing, it can make our testing faster, and it directly helps the operation of the system too - making it easier for people to support in production.

In motor racing, there are roughly two types of racing driver: those who adapt to problems with the car’s handling, and those who are really good at identifying those problems and feeding them back to the engineers and mechanics so the car can simply be made easier to drive and faster. The really good drivers are good at both.

I feel many testers naturally fall into the first category when it comes to testing. For many good reasons, we are quite patient and persistent at finding ways to test even when it is difficult, mundane or time-consuming. I think our profession attracts the sort of mindset that loves to investigate, problem-solve and carefully understand problems step by step. This makes us really good at understanding systems just by observing their behaviour. None of that is bad - these are our strengths and typically what we bring to most teams.

However, there are times when we could be feeding back and suggesting ways to improve the system we are testing so that it becomes easier to test. Sometimes we are asked to test software that is very difficult to understand beyond its behaviour, and we can spend a lot of time and effort testing just to understand that behaviour.

In addition, in order to give feedback, make comments and suggest improvements on how the system provides feedback, we need a bit of technical understanding and experience of what is possible. However, I believe we can all learn a little more, and it can all contribute enormously to the quality of the software we help produce.

Below are some suggestions on how we can assess the feedback a system gives us as testers and therefore make suggestions to improve it.

Logging

The first and most obvious way to improve system feedback is to assess the quality of logging. Through logs we can make the system report not only errors but also behaviour. If we have some complex logic to send data through and we’re unsure which path it is taking, we can make the system log it: “processing data 1”, “data 1 was sent to x because y equalled z”.
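
As a rough sketch of what that kind of behavioural logging might look like (this is Python, and the routing logic, destination names and messages are all invented for illustration):

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("data-router")

def route(record_id: int, y: str) -> str:
    """Send a record to a destination, logging which path it took and why."""
    log.info("processing data %s", record_id)
    if y == "z":
        destination = "x"  # hypothetical destination system
        log.info("data %s was sent to %s because y equalled z", record_id, destination)
    else:
        destination = "default-queue"  # hypothetical fallback path
        log.info("data %s was sent to %s because y was %r", record_id, destination, y)
    return destination

route(1, "z")
```

With logs like these, a tester (or someone supporting production) can see which branch the data actually took without having to infer it from behaviour alone.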

I frequently find the quality of logs on many projects to be quite poor for a variety of reasons; they tend to be neglected. It’s not just testers who lack experience or knowledge of logging - many developers haven’t explored this much themselves either.

There are multiple aspects to this that I could write lengthy blog posts about, but to summarise, here are a few areas whose quality you can help assess:

Where are the logs?

If we have multiple servers or devices each writing logs, do we have to log in to each one separately to view them, or can we make access easier by putting them in one place (centralised logging)?

What logs are these?

There are lots of kinds of logs for different layers of software and context:

  • Operating system logs record lots of low-level things like user access and system processes, but these are quite noisy and don’t tend to be useful for the average developer or tester.
  • Back-end system logs (like a Java process).
  • Front-end logs (like a website).
  • Audit logs (user access, what they accessed and when, whether they were denied access).
  • Business metrics (sometimes we can log when certain actions happened)
  • Is it easy to distinguish between the different types of logs?

UX of logs

Particularly when we send logs to centralised systems such as ELK (Kibana), there is work required to make the logs easy to read, navigate and understand.
For example:

  • Easily and accurately being able to filter for ERROR logs.
  • Traceability - can we trace related events across different applications’ logs using some common identifier, such as a shared ID?
  • Formatting the log line data correctly (typically as JSON) so that it displays correctly in Kibana - for example, so a long multi-line stack trace appears as a single log entry rather than being split into separate logs (see the sketch after this list).
  • Easily being able to identify and separate different systems and environments - can we quickly distinguish between Prod/Live and test environments?
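
To make the JSON-formatting point a little more concrete, here is a minimal sketch of a structured log formatter in Python. The service name, environment label and trace_id field are invented for illustration, and a real project would more likely use its logging library’s own JSON encoder rather than a hand-rolled one:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit each log record as one JSON line so multi-line stack traces
    stay inside a single log entry."""
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": "payment-service",   # hypothetical service name
            "environment": "test",          # distinguish Prod/Live from test environments
            "trace_id": getattr(record, "trace_id", None),  # shared ID for traceability
            "message": record.getMessage(),
        }
        if record.exc_info:
            # The whole stack trace becomes one JSON field,
            # not a series of separate log lines.
            entry["stack_trace"] = self.formatException(record.exc_info)
        return json.dumps(entry)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("payment-service")
log.addHandler(handler)
log.setLevel(logging.INFO)

try:
    raise ValueError("example failure")
except ValueError:
    log.error("could not process payment", exc_info=True, extra={"trace_id": "abc-123"})
```

Logs shaped like this are easy to filter by level, environment and trace ID once they reach Kibana.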

Errors and stack traces

Do we write error logs when something has gone wrong? If we see a bug as a user, try to forget that we can “see it” and know the steps - has the system written an error log that would help us identify it if we didn’t know that?
When we have errors, do we also write out the stack trace that goes with them? Even when the error itself is ambiguous or vague, the stack trace can give developers clues about the underlying problem.
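
As a small illustration of why the stack trace matters (the function and the failure here are invented), compare an error log with and without it:

```python
import logging

logging.basicConfig(level=logging.ERROR)
log = logging.getLogger("importer")

def import_record(raw: str) -> None:
    int(raw)  # hypothetical parsing step that can fail

try:
    import_record("not-a-number")
except ValueError as err:
    # Vague on its own: says something failed, but not where or why.
    log.error("failed to import record: %s", err)
    # logging.exception (or exc_info=True) also writes the stack trace,
    # pointing developers at the exact line that raised the error.
    log.exception("failed to import record")
```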

Monitoring and metrics

In addition to logs, we can also observe the behaviour of a system via monitoring and metrics. So rather than just relying on negative feedback like errors from logs, we can also use positive behaviours, such as counting the number of times users have visited pages, or whatever “successful” use of the system means for us. Sometimes when things go wrong we don’t get errors, but we can still observe that something happened through the lack of, or a drop in, positive or business metrics.
Does your system have somewhere you are collecting data on what it’s doing? This could be something like Google Analytics, where you can track what users are clicking on, or it can even just be logs like those above stored in ELK/Kibana - logging each time the system processes a piece of data successfully.
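
As a minimal sketch of that last idea, again in Python: emit one structured “success” event per processed record, so a dashboard such as Kibana can count them over time; a sudden drop or silence in that count flags a problem even when no error was ever logged. The event name and fields are made up for illustration.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("order-processor")

def process_order(order_id: str) -> None:
    # ... real processing would happen here ...
    # One structured "success" event per processed order; a dashboard can
    # count these over time, and a drop or silence in the count is a signal
    # that something is wrong even if no error was ever logged.
    log.info(json.dumps({
        "event": "order_processed",  # hypothetical business metric name
        "order_id": order_id,
        "timestamp": time.time(),
    }))

for order_id in ["A-1", "A-2", "A-3"]:
    process_order(order_id)
```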

Summary

I try to keep at the forefront of my testing the thought “what if this happened in production - could we tell that the system or user did this?”. I believe that by adding this to my assessment of bugs and quality, I’m improving the overall quality of the broader system in terms of operability and supportability. It can also massively help my own testing - I have tested very complex, wholly back-end systems with no obvious user interface, and pushing for better feedback from the system made them far easier to test.

Friday, 12 February 2016

North West Tester Gathering

Introduction

In the last few weeks I’ve attended my first ever testing meetups in Manchester and Liverpool. Both of these meetups were organised by a group called the “North West Tester Gathering” and you can find it here on meetup.com. Other than online, I’ve never spoken to any testers outside of the companies I’ve worked for and I was really looking forward to it. I wanted to go for two reasons:
  • To listen to other testers’ experiences and try to learn from them, the problems they faced and the solutions they chose.
  • To talk about my own experiences and seek out fresh opinions and ideas and talk about the challenges I face. This is not necessarily because I don’t believe I can face the challenges alone, but because I believe I can never think of everything and I like to try out new ideas that I might never think of.

Speakers

For the first meetup in Manchester there was only one main speaker, Richard Bishop from a company called Trust IV - a software testing consultancy. The main topic of the talk was Network Virtualisation, a technology that allows you to “stub” or simulate network interactions, such as a user visiting your website from an iPhone on a 2G network. The tool they demonstrated this with was Hewlett Packard’s HPE Network Virtualisation.
The second meetup in Liverpool had two speakers, Vernon Richards and Duncan Nisbet. Vernon’s talk was about the common myths in testing that we all know and how we can tackle them - mainly by improving how we talk about testing in the first place! Duncan’s talk was about exploratory testing and how we probably all already conduct it; we just don’t include it in our existing processes.


You can find videos of these talks here:
“Myth Deep Dive” by Vernon Richards:
“Exploratory Testing” by Duncan Nisbet:
{will add when its uploaded!}


Main Takeaways

I found all of the talks engaging and very relatable! I fully recommend watching the videos if you’re new to discussing the world of testing!
“Network Virtualisation” by Richard Bishop
  • Richard showed us some figures produced by one of the large big data companies forecasting how the technology market would look for 2016. In it, he especially highlighted the rise of end users relying on mobile devices to interact with products. I think this was useful food for thought especially as I’m involved with a project which could be viewed via mobile.
  • He also used some very effective examples demonstrating the value of performance testing, as well as the need to validate your assumptions (which applies to any testing!). He described an interesting test where they took network speed samples before and during a major football match and found that the speed was faster during the match - against their assumption that it would be slower!
  • I’ve definitely got a lot to learn still regarding performance testing; right now it feels like a domain rich with specialist knowledge (or at least different knowledge - for example, the need to understand statistical significance). I now know what the term “jitter” means! (the variation in delay between received network packets).
  • Richard also provided some useful example use cases, such as Facebook’s “2G Tuesdays”, where Facebook employees are asked to use Facebook over a connection as slow as 2G to help them understand the experience of users in more remote or developing areas of the world. I felt this was an effective example of the lengths Facebook were going to in order to help their employees empathise with these customers and therefore take their product’s performance on slow networks seriously.
“Myth Deep Dive” by Vernon Richards
  • Vernon’s talk mainly focused on talking better about testing to non-testers. A lot of the myths people believe about testing are partly caused by our own inability to talk about testing.
  • There were a lot of themes that I think we would all recognise, such as “The way to control testing is to count things” - which is to say, judging the value of testing in terms of test cases executed or bugs reported and how this isn’t necessarily useful.
  • I really recommend you watch the video above! But the other themes were: "Testing is just clicking a few buttons" and "Automated testing solves all your problems".
“Exploratory Testing” by Duncan Nisbet
  • Duncan’s talk focused on some typical testing examples of where we all perform exploratory testing but simply don’t think about it being exploratory - we don’t value it because we don’t identify it.
  • He also talked about exploratory sessions being iterative: you spend time exploring, learn what you can, and then repeat, designing further tests based on what you’ve learnt.
  • He also talked about the difference between good and bad exploratory testing being how well the tester can explain what they did in a session. Good exploratory testing can be explained and justified; it isn’t random, and a tester should be able to easily explain what they were doing and why.

Socialising!

So other than the main talks, I was attending these meetups to meet and talk to other testers! I introduced myself and got chatting to quite a few different people. Some I already knew from my days at Sony in Liverpool, others I met for the first time. It was nice to be able to share stories and experiences. I highly recommend attending meetups just for this, really - you can learn a lot from others and get some different points of view on your testing ideas.

Being brave…

At the Manchester meetup I caught up with Leigh Rathbone, who was organising the Liverpool meetup. During the course of our chat, I think my passion for testing got out and Leigh asked if I wanted to stand up and do a lightning talk at Liverpool. I don’t take opportunities like this lightly, so I accepted. I think the process of writing these blog posts has helped prepare me a little bit but I certainly have never stood up in front of 80 people, let alone people from my profession, some of whom are massively more experienced than me and whom I have a lot of respect for.
I chose to talk about the very subject that I had passionately discussed with Leigh - diagrams. In my recent work I’ve found many examples where people have tried to explain themselves in words - either written or spoken - and failed. Not everything is easy to explain this way - I have definitely found that right here on this blog! The point I tried to make was that some information is better explained in a diagram or chart - e.g. timelines, flowcharts, mind maps and entity relationship diagrams, to name a few. It’s worth considering this when we are trying to explain ourselves, or when someone is struggling to explain something to us. I explored this theme a little in my post Test Cases - do we need them?
I also quickly recommended a book that I believe every tester should read -  “Perfect Software and other illusions about Testing” by Gerald Weinberg. I had never read a testing book before and I’m fairly sure a lot of testers haven’t. I particularly like this book because I think it addresses the very topic Vernon was talking about - explaining what we do as testers in terms that anyone can understand. I’m also very much a fan of Jerry’s writing style, his stories and anecdotes make his points so much more memorable and relatable!

Summary


  • You should attend testing meetups! Even if you’re not a tester!
  • Even if I knew something already about the topics discussed, I always had something to learn or a new way of looking at it. I’d like to think I will always learn from the talks at these meetups.
  • Richard, Vernon and Duncan are really friendly and engaging people to talk to!
  • I shouldn’t be afraid of talking in front of lots of testers, because they are friendly people - and I must have made some kind of sense, as people came to thank me and chat about diagrams! I hope this inspires other people who are nervous or unsure about talking to give it a go! Don’t listen to your brain!
  • Take opportunities with both hands when you see them - it can be very rewarding!
  • I’ve only attended two meetups so far and I’ve got so much to talk and think about!