It’s not surprising that as testers our default approach to a black box is to try to perform tests to understand it better. However, sometimes the desire or need to do this is a finding in itself! Rather than advocate for more testing, sometimes I think we should advocate for more observability. More observability not only makes the system, bug or problem we’re trying to understand easier to reason about; it unlocks and enables lots more testing, it can make our testing faster, and it directly helps the operation of the system too - making it easier for people to support in production.
In motor racing, there are roughly two main types of racing driver: those who adapt to problems with the car’s handling, and those who are really good at identifying those problems and feeding back to engineers and mechanics, to simply make the car easier to drive and faster. The really good drivers are good at both.
I feel many testers naturally fall into the first category when it comes to testing. For many good reasons we are quite patient and persistent at finding ways to test even when it is difficult, mundane or time-consuming; our profession attracts the sort of mindset that loves to investigate, problem-solve and carefully understand problems step by step. This makes us really good at understanding systems just by observing their behaviour. None of that is bad - these are our strengths and typically what we bring to most teams.
However, there are times when we could be feeding back and suggesting ways to improve the system we are testing, in ways that make it easier to test. Sometimes we are asked to test software that is very difficult to understand beyond its behaviour, and we can spend a lot of time and effort testing to understand all of that behaviour.
In addition, in order to comment on and suggest improvements to how the system gives feedback, we need a bit of technical understanding and experience of what is possible. However, I believe we can all learn a little more here, and it all compounds into the quality of the software we help produce.
Below are some areas in which we, as testers, can assess the feedback a system gives us, and therefore suggest improvements.
Logging
The first and most obvious way to improve system feedback is assessing the quality of logging. Through logs we can make the system report not just errors but also behaviour. If we have some complex logic to send data through and we’re unsure which path it is taking, we can make the system log this: “processing data 1”, “data 1 was sent to x because y equalled z”.
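For example, here’s a minimal sketch using Python’s standard logging module - the component name and routing rule are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payment-router")  # hypothetical component name

def route(record: dict) -> str:
    """Route a record downstream and log *why* that path was chosen."""
    logger.info("processing record %s", record["id"])
    if record["amount"] > 1000:
        # Log the decision and the condition that triggered it, not just the outcome.
        logger.info("record %s sent to manual-review because amount %s exceeded 1000",
                    record["id"], record["amount"])
        return "manual-review"
    logger.info("record %s sent to auto-approve", record["id"])
    return "auto-approve"

route({"id": "data-1", "amount": 2500})
```

With logging like this, the question “which path did my data take, and why?” is answered by reading the logs rather than by re-testing.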
I frequently find the quality of logs on many projects to be quite poor, for a variety of reasons. They tend to be neglected - it’s not just testers who lack experience or knowledge of logging; many developers haven’t explored this much themselves either.
There are multiple aspects to this that I could write lengthy blog posts about, but to summarise, here are a few areas whose quality you can help assess:
Where are the logs?
If we have multiple servers or devices each writing logs, do we have to log in to each one separately to view them - or can we make access easier by putting them all in one place (centralised logging)?
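As a rough sketch of the idea in Python, the standard library’s SysLogHandler can ship a logger’s output to a central host. The host name here is hypothetical, and in practice many teams use a log-shipping agent such as Filebeat or Fluentd rather than doing this in application code:

```python
import logging
from logging.handlers import SysLogHandler

# Hypothetical central log host - every server sends here instead of
# keeping its logs to itself.
central = SysLogHandler(address=("logs.example.internal", 514))
central.setFormatter(logging.Formatter("%(name)s %(levelname)s %(message)s"))

logger = logging.getLogger("orders-service")
logger.addHandler(central)
logger.setLevel(logging.INFO)

logger.info("order 42 accepted")  # now viewable in one place, not one box per server
```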
What logs are these?
There are lots of kinds of logs for different layers of software and context:
- The operating system logs lots of low-level things like user access and system processes, but these are quite noisy and don’t tend to be useful for the average developer or tester.
- Back-end system logs (like a Java process).
- Front-end logs (like a website).
- Audit logs (user access, what they accessed and when, whether they were denied access).
- Business metrics (sometimes we can log when certain actions happened).
- Is it easy to distinguish between the different types of logs? (One approach is sketched below.)
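One simple way to keep the types distinguishable, sketched here with Python’s standard logging module and invented logger names, is to give each type its own named logger so the category appears on every line and can be filtered on:

```python
import logging

logging.basicConfig(format="%(name)s %(levelname)s %(message)s", level=logging.INFO)

# Separate named loggers make the log *type* part of every line, so audit
# entries can be filtered apart from application noise. Names are illustrative.
app_log = logging.getLogger("app.orders")
audit_log = logging.getLogger("audit")
metrics_log = logging.getLogger("metrics")

audit_log.info("user alice read /orders/42 (allowed)")
app_log.info("order 42 loaded in 12ms")
metrics_log.info("orders_viewed=1")
```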
UX of logs
Particularly when we send logs to centralised systems such as the ELK stack (Elasticsearch, Logstash, Kibana), there is work required to make the logs easy to read, navigate and understand.
For example:
- Easily and accurately being able to filter for ERROR logs
- Traceability - can we trace related events across different applications’ logs using some common identifier, such as a correlation ID?
- Formatting the log line data correctly (typically as JSON) so that it displays correctly in Kibana - e.g. a long stack trace with multiple lines should appear as one log entry, not as many separate ones.
- Easily being able to identify and separate different systems and environments - can we quickly distinguish between Prod/Live and test environments? (The sketch below touches on several of these points.)
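As a rough illustration of several of these points, here is a sketch of a custom JSON formatter in Python - the field names, service name and correlation ID scheme are all assumptions for the example, not a standard:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log event, so Kibana can filter on fields
    and a multi-line stack trace stays attached to a single log entry."""
    def format(self, record: logging.LogRecord) -> str:
        event = {
            "level": record.levelname,   # enables an accurate level:ERROR filter
            "message": record.getMessage(),
            "service": record.name,      # separate different systems
            "env": "test",               # distinguish Prod/Live from test
            "correlation_id": getattr(record, "correlation_id", None),  # trace across apps
        }
        if record.exc_info:
            # formatException keeps the whole trace inside this one JSON event.
            event["stack_trace"] = self.formatException(record.exc_info)
        return json.dumps(event)

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("orders-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order accepted", extra={"correlation_id": "req-123"})
```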
Errors and stack traces
Do we write error logs when something has gone wrong? If we see a bug as a user, try to forget that you can “see it” and know the steps - has the system written an error log that would help us identify the bug if we didn’t?
When we have errors, do we also write out the stack trace that goes with them? Even when the error message itself is ambiguous or vague, the stack trace can give developers clues about the underlying problem.
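In Python, for instance, logging.exception records the message at ERROR level and automatically appends the full stack trace - a minimal sketch with an invented failure:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("orders-service")

def load_order(order_id: str) -> dict:
    raise KeyError(order_id)  # stand-in for a real failure

try:
    load_order("42")
except Exception:
    # logger.exception logs at ERROR level *and* appends the stack trace,
    # so the log alone is enough to locate the failing code path.
    logger.exception("failed to load order 42")
```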
Monitoring and metrics
In addition to logs, we can also observe the behaviour of a system via monitoring and metrics. Rather than relying only on negative feedback like errors in logs, we can also use positive behaviours, such as counting the number of times users have visited pages - or whatever “successful” use of the system means for us. Sometimes when things go wrong we don’t get errors, but we can still tell something has happened from a drop (or absence) of those positive or business metrics.
Does your system have somewhere you are collecting data on what it’s doing? This could be something like Google Analytics, where you can track what users are clicking on, or it can even just be logs like those above stored in ELK/Kibana - logging each time the system processes a piece of data successfully.
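As one illustrative option (not the only one), the sketch below uses the third-party prometheus_client library to count successful processing, so that a flat or falling counter - rather than an error - is what reveals a problem. The metric and function names are made up:

```python
from prometheus_client import Counter, start_http_server  # pip install prometheus-client

# Hypothetical metric: count every *successful* processing, so a flat line
# (rather than an error message) is what tells us something has gone wrong.
ORDERS_PROCESSED = Counter("orders_processed_total", "Orders processed successfully")

def process(order: dict) -> None:
    ...  # real processing would happen here
    ORDERS_PROCESSED.inc()

start_http_server(8000)  # exposes /metrics for a scraper such as Prometheus
process({"id": 42})
```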
Summary
I try to keep at the forefront of my testing the thought: “if this happened in production, could we tell that the system or user did this?”. I believe that by adding this to my assessment of bugs and quality, I’m improving the overall quality of the broader system in terms of operability and supportability. It can also massively help my own testing - I have tested very complex, wholly back-end systems with no obvious user interface, and pushing for better feedback from the system made them far easier to test as well.