Showing posts with label Collaboration. Show all posts

Monday, 26 August 2019

Using Bowtie Diagrams to describe test strategy

Introduction

A long time ago, in this blog post, I was introduced to the Bowtie Diagram. I love how it visualises the way we manage risks, and I feel this complements test strategy. Why? Well, surely our test strategies should account for risk and how we manage it. Whether you define testing in a specific, focused sense (like functionally testing code) or in a holistic, broader sense (like viewing code reviews as testing, or monitoring, or simply asking the question “What do end users want?”), these activities are ways of either preventing or mitigating risks. I feel that if you want to be effective in helping improve the quality of a project or product, you need to assess the potential risks and how you are going to prevent or mitigate them, and therefore also assess where your time is best placed.
What are Bowtie Diagrams?
The short version: it’s a diagram that looks like a bowtie, in which you describe a hazard and the top event (the likely harmful event), then list the threats that could trigger the event and the consequences of the event happening. For each threat and consequence, you describe a prevention or mitigation.



The long version (better and more comprehensively described) can be read here - https://www.cgerisk.com/knowledgebase/The_bowtie_method

What I love about these diagrams is that they visually describe and explain risks and how we intend to manage them. Creating them is a useful exercise in exploring risks we may not normally have thought of. In particular, I find we don’t explore the right-hand side (consequences) of these diagrams very often. Most of the time in software development we are very reactionary to consequences, and even then, we don’t typically spend much time on improving mitigation.
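To make the shape concrete, the structure of a bowtie can be sketched as a simple data model. This is purely an illustrative sketch of my own, not any standard bowtie tooling; the class names and the example content are invented.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Barrier:
    """A prevention (left side) or mitigation (right side)."""
    description: str

@dataclass
class Threat:
    description: str
    preventions: List[Barrier] = field(default_factory=list)

@dataclass
class Consequence:
    description: str
    mitigations: List[Barrier] = field(default_factory=list)

@dataclass
class Bowtie:
    hazard: str       # the thing with the potential to cause harm
    top_event: str    # the moment control over the hazard is lost
    threats: List[Threat] = field(default_factory=list)            # left side
    consequences: List[Consequence] = field(default_factory=list)  # right side

# A software-flavoured example:
diagram = Bowtie(
    hazard="Deploying new code to production",
    top_event="A defect reaches production",
    threats=[
        Threat("A code change breaks existing behaviour",
               [Barrier("Automated regression test scripts")]),
    ],
    consequences=[
        Consequence("End users hit errors",
                    [Barrier("Monitoring and alerting on error rates")]),
    ],
)
```

Walking a team through filling in both sides of such a structure is, in my experience, where the gaps on the right-hand side become obvious.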


Managing the threats
I’ve started using these diagrams to explain my recent approaches to test strategy because they neatly highlight why I’m not focusing all of my attention on automated test scripts or large manual regression test plans. I view automated test scripts as a barrier to the threat of code changes. Perhaps the majority of people out there view this as the biggest threat to quality; perhaps many people understand the word “bug” to mean something the computer has done wrong. These automated test scripts or regression test plans may well catch most of these. But are these the only threats?

I see other threats to quality and I feel we have a role to play in “testing” for these and helping prevent or mitigate them.

Managing the consequences
Any good tester knows you cannot think of or test for everything. There are holes in our test coverage; we constantly make decisions to focus our testing to make the most of precious time. We knowingly or unknowingly choose not to test some scenarios, and therefore make micro-judgements on risk all of the time. There are also limits to our knowledge and to the environments in which we test; sometimes we don’t have control over all of the possible variables. So what happens when an incident occurs and we start facing the consequences? Have we thought about how we prevent, mitigate or recover from those consequences? Do we have visibility of these events occurring, or of signs they may be about to occur?
I find this particular area is rarely talked about in such terms. Perhaps there is some notion of monitoring and alerting, and usually there are some disaster recovery plans. But are testers actively involved? Are we actively improving and re-assessing these? I typically find most projects do not consider this as part of their strategy; in most cases it seems to be an afterthought. I think much of this stems from these areas typically being the responsibility of Ops Engineers, SysAdmins, DBAs and the like, whereas as testers we have typically focused on software and application teams. As the concept of DevOps becomes ever more popular, we can start to get more involved in the operational aspects of our products, which I think can relieve a lot of the pressure to prevent problems from ever occurring.


Mapping the diagram to testing
An example of using a diagram like this within software development and testing:







I feel our preventative measures against an incident occurring are typically pretty good from a strategic view, especially lately, as it has become more and more accepted that embedding testers within development teams improves their effectiveness. Yes, sometimes we still aren’t writing unit tests or involving testers in the right ways. But overall, even with such issues, efforts are being made to improve how we deliver software to production.

But on the right-hand side, we generally suffer in organisations where DevOps has not been adopted. And when I say DevOps, I don’t mean devs who write infrastructure-as-code; I mean teams who are responsible for, and capable of, both delivering software solutions and operating and maintaining them. Usually we still see the Ops side of things separated into its own silo, with very little awareness of, or involvement in, its activities from the software development team. But Ops plays a very key role in the above diagram, because they tend to be responsible for implementing and improving the barriers or mitigations that help reduce the impact of an incident.

I feel the diagram neatly brings this element into focus and helps contribute to the wider DevOps movement towards a holistic view of software development, towards including aspects such as maintenance, operability, network security, architecture performance and resilience as qualities of the product too.

As testers I feel we can help advocate for this by:

  • Asking questions such as “if this goes wrong in production, how will we know?”
  • Requesting access to production monitoring and regularly checking for and reporting bugs in production.
  • Encouraging teams to use a TV monitor to show the current usage of production and graphs of performance and errors.
  • If you have programming/technical skills, helping the team add new monitoring checks, not dissimilar to automation checks. (e.g. Sensu checks)
  • Becoming involved with and performing OAT (Operational Acceptance Testing) where you test what happens to the product in both expected downtime (such as deploying new versions) and disaster scenarios, including testing the guides and checklists for recovery.
  • Advocating for Chaos Engineering.
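On the monitoring checks mentioned above: Sensu follows the Nagios plugin convention, where a check is just a small program whose exit code signals status (0 = OK, 1 = warning, 2 = critical). A minimal sketch, assuming a hypothetical `/health` endpoint - substitute your own service’s URL and thresholds:

```python
import urllib.error
import urllib.request

# Hypothetical endpoint -- substitute your own service's health URL.
HEALTH_URL = "http://localhost:8080/health"

def check_health(url: str, timeout: int = 5) -> int:
    """Return a Nagios/Sensu-style exit code: 0=OK, 1=warning, 2=critical."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            print(f"OK: {url} responded with {response.status}")
            return 0
    except urllib.error.HTTPError as err:
        # The service answered, but not with a success status.
        print(f"WARNING: {url} responded with {err.code}")
        return 1
    except Exception as exc:
        # No answer at all -- the service may be down.
        print(f"CRITICAL: {url} unreachable ({exc})")
        return 2
```

A scheduler or the Sensu agent would run this and exit with the returned code, e.g. `sys.exit(check_health(HEALTH_URL))` - structurally not dissimilar to an automated test, just run continuously against production.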

Monday, 19 December 2016

The temptation to split dev and test work in sprints - don’t do it!

Introduction

About 3 and a half years ago, I was new to sprints and scrum. Coming from the videogames industry, I was used to a process where I would test code that came from developers and return bug reports. I had heard the words “sprint” and “scrum” before but I had no idea how testing fit into them, so I joined a company where I could figure that out. This is what I figured out.

What’s a sprint?

If you’re not familiar with scrum or agile, then a sprint is effectively a short-term project plan where a team of people decide the work that they can complete within a 1-, 2- or 3-week window. Work is “committed” (promised) to be completed in that time frame and the team tracks their progress. After each sprint, reviews and retrospectives are held to help the team find what works well and what helps them complete more work to a higher standard while still maintaining their commitment. The main focus of sprint work is completing the work and avoiding leaving it unfinished.

Where does testing fit?

So normally teams set up a task board with columns titled something like “To Do, In Progress, Done”. Sometimes people add more columns or use different names, but the usage is similar. Anyone from the same background as me would be tempted to suggest that an additional column be added between “In Progress” and “Done”, the logic being “when you’ve finished your development work, I’ll test it”. In my head, this was trying to apply what I already knew to this new environment. We ended up with columns similar to “To Do, Build/Dev, Testing, Done”.

Bad idea

So at first I thought things were working ok. I feel one of my strengths is picking things up fast, so I got stuck in and kept up with the 5 developers in my team. Most of the time I was fortunate that the work landed sequentially or wasn’t particularly time-consuming to test. This didn’t last long though, and eventually we started to fail to complete work on time. This happened either because I was testing it all at the end of a sprint, or because the pieces of work were highly dependent on each other and the problems with integration weren’t found until very late.
This meant we had to continue some work in future sprints. Now, instead of having plenty of time to write my test plans at the start, I was busy testing last sprint’s work and then testing this sprint’s work! I no longer had time to spend learning more automation or exploring areas newer to me, like performance testing. All of my time was consumed trying to test all of this work, and I couldn’t do it. What went wrong?

A change in approach

I would love to say I quickly realised the problem and fixed it, but it took me a long time. I put this down partly to not knowing any better, and partly to working with developers who didn’t know any better either. Either way, a while later I realised that the problem was that I was trying to test everything, and the developers had started to rely on me for that. I’ve since realised that there is a fair bit of psychology involved in software development, and this was one of my biggest lessons.
We eventually decided to stop splitting up work between roles, mainly because we found that developers tended to treat work that was in “test” as “done” from their perspective, freeing themselves up to take on even more development work. This created a bottleneck: as the only tester, I was testing work from yesterday while they were busy with today’s. I came to the realisation that there is little benefit to splitting the work up in this way, at least not through process. We should be working together to complete the work, not focusing on our own personal queues. I shifted from testing after development was thought complete to testing earlier, even trying to “test” code as developers were writing it, pairing with them to analyse the solution.

Understanding what a “role” means

I think for me this lesson has been more about realising that playing the role of “tester” does not necessarily mean I carry out all of the “testing” in a team. It does mean I am responsible for guiding, improving and facilitating good testing, but I do not necessarily have to complete it all personally. An additional part of this lesson is that I cannot rely on other people to define my role for me - as a relative newbie to testing I relied on the developers to help me figure out where I should be. While I’ve learnt from it, I also know that I may need to explain this learning again in future because it is not immediately obvious.

So where does testing really fit?

Everywhere, in parallel and in collaboration with development. Testing is a supportive function of the team’s work; it no longer makes sense to me to define it as another column of things to do. It has no set time frame in which it’s best performed, and it doesn’t always have a great deal of repetition in execution. It is extremely contextual.
That’s not to say you should never test alone or separately from ongoing teamwork. You absolutely must test alone as well, to allow yourself to focus and process information. It’s just that you must choose to do this where it is appropriate.

Definition of “Done”

One of my recent approaches was to define the definition of “Done” as:

“Code deployed live, with appropriate monitoring or logging, and feedback gathered from the end user”

Others may have different definitions, but I liked to focus the team on getting our work into a position where we could learn from it and take action in the following sprint. For me, it meant we could actually pivot based on end-user feedback or our monitoring, and measure our success, instead of finishing a sprint with no idea whether our work was useful and planning a new sprint not knowing whether we would need to change it.

Summary

  • Avoid using columns like “Dev” and “Test” in sprint boards. They seem to lead to a separation where work is considered “Done” before it is tested.
  • Instead, try to test in parallel as much as possible (but not all of the time), and try to test earlier and lower down the technology stack (such as testing API endpoints before the GUI that uses them is completed).
  • Encourage developers to keep testing, and instead carefully pick when and where to personally carry out the bulk of the testing. Try to coach the team on becoming better at testing; share your skills and knowledge and let them help you.
  • Altering the definition of “Done” helped me; it was useful to focus the team on an objective that meant we didn’t have to keep returning to work we had considered completed. In other words, make sure “done” means “done”.
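The “test lower down the stack” idea can be sketched with plain standard-library Python: hit an API endpoint directly and assert on the response, long before any GUI exists. The endpoint and payload below are invented for illustration, with a stub server standing in for the real service:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# A stand-in for the real service: a hypothetical /users/1 endpoint.
class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"id": 1, "name": "Alice"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "test": exercise the endpoint directly, no GUI required.
url = f"http://127.0.0.1:{server.server_port}/users/1"
with urllib.request.urlopen(url) as response:
    assert response.status == 200
    user = json.loads(response.read())
assert user["name"] == "Alice"
server.shutdown()
```

Checks like these can run as soon as the endpoint exists, which is exactly what lets testing happen in parallel with GUI development rather than queued behind it.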

Tuesday, 10 May 2016

Mob Programming with Woody Zuill

Introduction

Today I attended a 1-day workshop organised and run by Woody Zuill who, along with his colleagues at Hunter Industries, originally developed the idea of ‘mob programming’. This workshop was part of a major conference being held in Manchester called Agile Manchester and I heard about it after attending a meetup last month called Manchester Tech Nights. I had been invited to that meetup to represent testing with a 5 minute lightning talk. It was a great experience and even better that I found out about this workshop and conference from talking to people there! After finding out, I just had to book my ticket and attend, it seemed like a great opportunity to really understand what mob programming was all about without having to travel very far.

What on earth is ‘mob programming’?

Basically, it’s the idea of taking pair programming and applying it to the whole team: the whole team works on a single piece of work at a time, with a driver and navigator rotating at timed intervals. The proposed benefit of this idea is that when we work separately we deliver both the best and worst of our work, whereas with the whole team present we support each other and produce the best of our work at all times. You can read a much better written description by Woody himself here.
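The rotation mechanic itself is simple enough to sketch; the names and the interval below are invented for illustration:

```python
from collections import deque

def rotate(team: deque) -> deque:
    """Advance the mob one rotation: the navigator becomes the driver,
    and the old driver goes to the back of the queue."""
    team.rotate(-1)
    return team

team = deque(["Ana", "Ben", "Cal", "Dee"])  # hypothetical mob members
driver, navigator = team[0], team[1]
assert (driver, navigator) == ("Ana", "Ben")

rotate(team)  # triggered on a timer, e.g. every few minutes
driver, navigator = team[0], team[1]
assert (driver, navigator) == ("Ben", "Cal")
```

In practice a kitchen timer or a small tool does the triggering; the point is that everyone cycles through both roles at a fixed cadence.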

Why was I interested in this workshop?

Barely a year ago I had only just started to explore the wider testing community and to read and listen to different experiences and ideas on testing. Through that research I came across Maaret Pyhäjärvi’s blog and her talks from previous TestBash conferences on the Ministry of Testing’s Dojo. Some of the topics that caught my interest were her recent experiences of working as the sole tester within a large group of developers and the challenges of managing this situation. Through her talks and blogs she mentioned the idea of mob programming (or mob testing) as a means both to learn very quickly and to cultivate understanding and respect across a cross-functional team. My curiosity was really piqued when I had an all too brief taste of pairing with Llewelyn Falco, as well as hearing his lightning talk on the benefits of mobbing at TestBash Brighton 2016.
I’ve started to experience the need to adapt testing to become more integrated into development processes in agile environments, and mob programming struck me as a potentially useful way to encourage the spirit of not only developers and testers working together, but the wider team too, be that product owners, designers or scrum masters. The workshop presented a great (and well-timed!) opportunity to observe how it might be done and to get a better idea of why Woody and his team back at Hunter Industries came up with this approach.

The workshop and what I learned

The workshop itself was the kind of friendly, fun and relaxed workshop I’ve come to expect after the high standards set for me back at TestBash! Woody was very honest and open in explaining that he was only there to explain what mob programming is and how it has worked for him in the past. He didn’t claim that this approach would work for everybody. The workshop was split into two parts - the first was where Woody explained the context of how he and his team developed this idea, and we took part in exercises to observe the benefits of working together as a team. The second was where we actually wrote code as a mob!
The main reasons that led to mob programming as an approach being developed were:
  • Originally Woody wanted to encourage his developers to recognise bad code. He wanted to achieve this by allowing people to figure things out and practice their craft together regularly - in the same way as any sports team.
  • He also wanted to overcome the challenge of getting access to project managers to answer questions that blocked work - involving them in the mob helped provide immediate feedback to any questions the developers had.
  • However, to achieve this, he didn’t force people to participate, everyone was invited but attending was not compulsory. This allowed people to get comfortable with the idea before they attended.
  • Hence, the only reason mobbing worked was that the team decided to do it. It was not forced, and it developed simply by trying ideas and solving problems as they arose.
I really liked how open and honest this explanation was of the context and of how the approach was developed. It was exactly what I was looking to learn from this workshop when considering how it might be useful (or not!) back at my workplace or in future.

Coding again!

Although I’ve been writing a fair bit of Python over the last 3 years, I haven’t really touched any more advanced programming than that for about 6 years (since my degree, basically!). I was relishing attending this workshop, throwing myself in with a group of experienced programmers and trying to keep up. Having read and heard Maaret’s experiences of mob programming as a tester and how it was a great way to learn, I really wanted to see this benefit for myself - not to mention, to hopefully contribute something with my skills as a tester. Having said that, I was reasonably confident that I could somewhat keep up, thanks to my degree and my recent Python experience.

It was a lot of fun! I didn’t feel too ashamed of my rustiness with Java, and I started getting back into it pretty quickly. It was awesome to see how quickly we started interacting as teams. At first I was concerned that having less experienced programmers (or people who didn’t know any code at all!) would drag the team down, but I actually felt the less experienced programmers got up to speed remarkably quickly! It’s really quite impressive (and not at all surprising when you think about it!) how quickly people pick things up when they’re thrown in the deep end but given the support to perform.

Woody was a great facilitator and I learned a fair few tips on how to conduct these sessions successfully - the main points being:
  • The hardest part is to shut up and let everyone have an equal say! Treating everyone with consideration and respect was key, so that the whole team is on the same page.
  • The key way that I think testers (and non-technical people) bring a lot of value to mobs is by focusing the team on considering each task in plain English rather than in ‘coding language’. There was an emphasis on navigators giving instructions in English rather than specifying code. Although it may be necessary to guide non-technical people through typing specific characters, it means that as a tester you can easily navigate, because you can focus on the objective at hand rather than on technically how to solve the problem.

What does this all have to do with testing?

I believe that in agile environments and with ideas like continuous development it is generally no longer very effective to conduct all (note - perhaps some is still necessary) of your testing at the end of a development cycle or separately to development. Not only does this quickly lead to ‘testing bottlenecks’ but it just doesn’t make a lot of sense to be discovering basic flaws in logic or implementation only after development has finished. We could be bringing our skills as testers much earlier in the development cycle and discovering these flaws earlier and more cheaply. If we can challenge assumptions made before the code is even written, or even as the code is written, we will surely prevent some of the major re-factoring or re-writes that sometimes occur.
I personally see mob programming as a potential tool for encouraging much better collaboration between developers and testers, as well as product owners and designers. The idea that ‘everyone is a developer’ who simply brings different skills makes a lot of sense to me, and I believe our testing can become much more effective when we understand more about the code we test.
However, as always, maintaining critical distance and always trying to be an advocate for the customer is key to our work. I strongly believe we can learn and contribute more from a technical perspective without compromising those values.

What next?

Now that I’m armed with some better knowledge of the context of how mob programming came about and some experience of what it’s like to be a part of and facilitate, I’ll carefully consider it as a tool to suggest in future. I can see that if a team is already collaborating and working well together, pairing and mobbing in an informal sense, that this approach may not be necessary. However, I’m looking to build a healthy stock of ideas and tools to try when the time arises and if nothing else it’s always useful to try these ideas and see if there is anything we like about them.
In the scenario of rapidly sharing knowledge about a particular technology or language, I think this formal approach of mob programming could be very effective, and I will be looking out for opportunities to do this.
In summary, I highly recommend this workshop if you’re interested in the subject! As I’ve said a lot in this blog post, Woody is great and I got everything out of the workshop that I wanted to.