Saturday 26 September 2015

Microservices discussion by XP Manchester


A couple of weeks ago, I was invited by some fellow programmers to attend an event on microservices organised by XP Manchester. Microservices are a hot topic in software development right now, so I wanted to go along partly out of my own interest in the subject, but mainly to think about how testing might be affected and what it might need to consider. The event was focused on programming and software architecture, but its discussion format allowed for questions, so a variety of different points were talked about through the evening.

What is a “microservice”?

The conclusion from the evening was that there is no agreed definition! However, I think we can summarise microservices as an architectural approach that breaks your system down into smaller parts. The main reason you want to do this is scalability, but it is an expensive process, and the other conclusion of the evening was that “microservices are very hard!”.

What does testing have to do with this?

I had initially gone to the event hoping to better understand some of the implications for testing. But I actually found myself taking a step back and observing the meetup and the discussion from a philosophical point of view. I found a lot of parallels with the debates on automation testing in the testing world. So while the content of the discussion had little to do directly with testing, I think there are some lessons that apply to automation testing and many other “hot topics”.

The temptation to try something new - the fallacy of “best practice”

One of the points raised was that microservices as a concept have been around for several decades, but only very recently have they become a popular subject, mainly thanks to the famous use cases of Netflix and Spotify. It seems very easy for people to look at companies like these and want to copy their models. The problem is that solutions like microservices are very expensive and complex; they are solutions to very particular problems, and too costly to be used everywhere. It is tempting to treat them as a “best practice”, which is totally inappropriate. I see the same attitude around automation testing: large companies talk about it, and everyone else decides to follow it as a best practice. Automation testing is also very expensive and is not a best practice; it is a solution to a particular problem. Like microservices, it often gets discussed as “the best thing to do”.
At the event, someone mentioned a great analogy: you wouldn’t use the same design and manufacturing methods to build a model plane as you would a real Jumbo Jet. Just because famous or bigger companies use particular solutions doesn’t mean those solutions are appropriate for your situation.

Only looking at the benefits, without considering the cost

Another point I could relate to is the belief among some people that microservices are easier and simpler: that by breaking down your monolithic code base, you break down the complexity. This is false. The complexity is still there; it has just been spread around, which makes it both easier and harder to manage in different respects. While a particular area of code is easier to manage in isolation, the overall integration of the full system is much harder to manage in terms of infrastructure, deployment and debugging.
I see the same problem in automation testing. A common view I’ve come across is that automation testing is always quicker than a human manually typing on a keyboard. Just as with microservices, this ignores the bigger picture: it focuses on the speed of executing a test rather than on what you gain and lose in the wider process. Automation testing is more than just executing a test; there is a lot of work to do before and after the execution! The cost of automation testing includes the time it takes to write the test, to analyse its results every time it’s run, and to maintain it. With a human manual tester, the cost of writing the test is massively reduced because you are not writing code; in some cases, perhaps nothing needs to be written at all! Analysing the results can also be much quicker for a human, who can run the test, notice irregularities and analyse all at the same time, something a computer cannot do. Maintenance is also far lower for manual testing, because a human can adapt to a new situation easily.

Because microservices and automation testing are both very expensive, the cost must be weighed up against the benefits; it only makes sense if the benefits come out ahead. Typically, the value in automation testing comes from repeatable activities, where over time the returns overcome the high up-front cost. For anything that isn’t repeatable, it’s difficult to really justify automation over simply carrying out the testing manually.
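That cost-versus-repeatability trade-off can be sketched as a simple break-even calculation. This is only an illustration with hypothetical numbers, not figures from the discussion:

```python
def break_even_runs(manual_cost_per_run: float,
                    automation_write_cost: float,
                    automation_cost_per_run: float):
    """Smallest number of runs at which automation becomes cheaper
    than manual testing, or None if it never does.

    automation_cost_per_run bundles execution, result analysis and
    amortised maintenance; all figures here are hypothetical hours.
    """
    saving_per_run = manual_cost_per_run - automation_cost_per_run
    if saving_per_run <= 0:
        # Automation costs as much per run as manual testing (or more),
        # so it never pays back its up-front writing cost.
        return None
    # Number of runs needed to recover the up-front writing cost.
    runs = automation_write_cost / saving_per_run
    return int(runs) if runs.is_integer() else int(runs) + 1

# e.g. 2h per manual run, 8h to write the automated test,
# 0.5h per automated run: automation pays off from the 6th run.
print(break_even_runs(2.0, 8.0, 0.5))  # → 6
```

The point the model makes is the one from the discussion: with few repetitions the up-front cost dominates, and if per-run analysis and maintenance cost as much as a manual run, automation never pays off at all.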

Additional thoughts

On a different note, I’d also like to talk a little about how XP Manchester organised the event, as it was a very successful format that I hadn’t experienced before. Everyone was asked to sit in a circle with five chairs in the middle. Four people sat on the chairs, leaving one vacant, and discussed the topic, with the discussion guided by a moderator. If someone wanted to join the discussion, they sat on the vacant fifth chair and someone else from the discussion had to leave. Meanwhile, the rest of us in the circle remained silent and listened. I felt this format was fantastic for keeping a focused discussion while allowing 30 people to be involved or to listen. It was a refreshing change from traditional lecturing approaches, or from a chaotic discussion with 30 people all talking at once; in some respects it was the best of both worlds. Credit to the guys at XP Manchester for running a great little event that produced some useful intellectual discussion!


  • Programmers face a lot of the same decision-making problems that testers do, and the lessons are very relatable.
  • Don’t be tempted to follow a “best practice” or “industry standard” without considering whether it is right for you.
  • Always consider the costs of your decisions, and treat so-called “silver bullet” solutions with suspicion: is this solution really the best? Is it really as easy as people suggest?
  • For groups of around 30 people, if you want to generate a focused, intellectual discussion for people to listen to and learn from, but don’t want to use a lecture/seminar format, consider the discussion format described above!
