Wednesday, 14 October 2015

An introduction to testing

Introduction

I’ve had a few requests for an article explaining what testing involves for a complete newcomer, so here it is! As I’ve progressed in my testing career, I’ve discovered it’s a subject that is rarely taught in academia and that very few people choose as a career path. I’d like to contribute to changing this and hopefully one day inspire others to choose testing and share the same passion for becoming a better tester!

What is testing?

Testing is the process of designing, executing and analysing tests. According to thefreedictionary.com, the definition of a “test” is:
“A procedure for critical evaluation; a means of determining the presence, quality, or truth of something; a trial”
In other words, it typically means taking something - be that a piece of software, an aircraft or a piece of chocolate - and evaluating it against some form of specification to determine some truth about it.
The result of carrying out these tests is that you gather information and learn something about the thing you are testing. Testing is all about learning as much as possible about whatever you are testing - through this you can provide information about how a system actually works, what specific code does, what bugs exist, and the overall quality.
The reason companies hire dedicated testers is simply because there is so much to learn and test! Typically, most companies operate as departments - with a programmer and a salesman focusing on totally different aspects of the company. Because of this, very quickly no one in the company knows everything about their product. As a tester or as a testing team you play a part in trying to provide this full picture - or at least provide people information on what they don’t know.

Why not document this information or automate these tests?

Absolutely! As a tester I am always promoting the use of documentation, so that the information I uncover is easy to find again.
Automation, used well, can also save a lot of time in discovering simple or basic failures, particularly for more repetitive or laborious tasks - taking the burden off manual testing so it can focus on more creative types of tests.
However, both of these are expensive activities. Each requires work to write, design and maintain, and each can only be produced once the information has been defined.
Also, automation in itself is not testing, because it cannot understand the context of what it is testing. Automation will only check what you tell it to check and will not, for example, investigate problems it notices during the test. So while it can save you some time performing repetitive checks, you still need a human to critically observe the system too.
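To make this concrete, here is a minimal sketch (in Python, with a hypothetical `apply_discount` function) of what an automated check amounts to: it verifies exactly the one expectation it was scripted with, and nothing else.

```python
def apply_discount(price, percent):
    """Hypothetical function under test."""
    return round(price * (1 - percent / 100), 2)

def check_discount():
    # The script verifies only this single scripted expectation.
    # A human running the same scenario might also notice slow
    # responses, odd log output or a confusing error message -
    # the script cannot, because it was never told to look.
    result = apply_discount(100.0, 20)
    assert result == 80.0, f"expected 80.0, got {result}"
    print("check passed")

check_discount()
```

The check will happily pass forever while unrelated problems go unnoticed, which is exactly why a human still needs to critically observe the system.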


Hence, the position of a tester exists to help guide decisions on what tests to run, what to automate and what to document. This is all very dependent on what is being tested and the non-functional requirements of the business. Some companies have requirements that demand far more automation and documentation than others. Some will have programmers carry out testing, others will have huge testing departments of over 100 testers, and others may even rely on end-users to test their products. In all of these scenarios, however, the objectives of testing are the same - to gather information about the system and learn.

How do I become a tester? How can I learn?

Regarding academia, there currently don’t seem to be any academic courses on testing at all - a quick search on UCAS in the UK shows no degree courses with testing as the subject.
For software testing, there are qualifications such as the ISTQB/ISEB which many companies recognise. However, it is not necessary to hold this qualification to become a tester, and there is a lot of dissatisfaction with it in the software industry.
From my experience, there are only two routes into software testing:
  1. Applying for a junior testing role in an organisation which is willing to hire inexperienced staff and train them.
  2. Working in a different role and switching to testing within the same company.


The main piece of advice I can give though is, if you are applying for a testing role - try to learn as much as you can about the company, what you might be testing and anything else you can think of. What will make you a good tester is your ability to learn and to understand what you still don’t know and seek this information out - testers should always be inquisitive and asking a lot of questions.

Ok, so if there aren’t many useful courses, what about online?

Yes! There are plenty of places online to read about testing practices! I’m mainly knowledgeable about software testing so I can only recommend the following resources on software testing as a start:
There are also some good places to ask questions, read further into various topics and become involved in the testing community:


I’ve also found the following video to be a fantastic introduction to testing - it articulates what you are trying to do as a tester pretty well:

Saturday, 26 September 2015

Microservices discussion by XP Manchester

Introduction

A couple of weeks ago, I was invited by some fellow programmers to attend an event on microservices organised by XP Manchester. Microservices are a hot topic right now in software development, so I wanted to go along partly from my own interest in the subject area but mainly to think about how testing may be impacted or what considerations there may need to be for testing. The event was focused on programming and software architecture, but its discussion-based format allowed for questions, so a variety of different points were talked about through the evening.

What is a “microservice”?

The conclusion from the evening was that there is no agreed definition! However, I think we can summarise microservices as an architectural approach that breaks your system down into smaller parts. The main reason you would do this is scalability, but it is an expensive process - the other conclusion of the evening was that “microservices are very hard!”.

What does testing have to do with this?

I had initially gone to the event hoping to better understand some of the implications for testing. Instead, I found myself taking a step back and observing the meet-up and the discussion from a philosophical view, and I found a lot of parallels with the debates on automation testing in the testing world. So while the content of the discussion had little to do directly with testing, I think there are lessons that apply to automation testing and many other “hot topics”.

The temptation to try something new - the fallacy of “best practice”

One of the points raised was that microservices as a concept has been around for several decades, but only very recently has it become a popular subject, mainly down to the famous use cases of Netflix and Spotify. It is very easy for people to see companies like these and want to copy their models. The problem is that solutions like microservices are expensive and complex; they are solutions to very particular problems and are too expensive to apply everywhere. It is tempting to consider them a “best practice”, which is totally inappropriate. I see the same attitude towards automation testing - large companies talk about it and everyone else decides to follow it as a best practice. Automation testing is also very expensive and is not a best practice; it is a solution to a particular problem, yet it gets discussed as “the best thing to do” in a similar vein to microservices.
At the event, someone mentioned a great analogy - you wouldn’t use the same design and manufacturing methods to build a model plane as you would a real Jumbo Jet. Just because famous or bigger companies use particular solutions, doesn’t mean these solutions are appropriate for your situation.

Only looking at the benefits, without considering the cost

Another point that I could relate to is the belief from some people that microservices are easier and simpler - that by breaking down your monolithic code base, you’re breaking down the complexity. This is false: the complexity is still there, it has just been spread around - which makes it both easier and harder to manage in different respects. While a particular area of code is easier to manage in isolation, the overall integration or full system is much harder to manage in terms of infrastructure, deployment and debugging.
I see the same problem in automation testing - a common view I’ve come across is that automation testing is always quicker than a human manually typing on a keyboard. Just like with microservices, people are ignoring the bigger picture here - focusing on the speed of executing a test rather than considering what you gain and lose in the wider process. Automation testing is more than just executing a test; there is a lot of work to do before and after the execution! The cost of automation testing is the time it takes to write the test, analyse its results every time it’s run, and maintain it. With a human manual tester, the cost of writing the test is massively reduced because you are not writing code - in some cases perhaps nothing needs to be written at all! Analysing the results can also be much quicker for a human - they can run the test, notice irregularities and analyse all at the same time, something a computer cannot do. Maintenance also costs far less for manual testing, because a human can adapt to a new situation easily.


Because microservices and automation testing are very expensive, the cost must be weighed up against the benefits. Only if the benefits outweigh this cost does it make sense. Typically, the value in automation testing comes from repeatable activities - and over time this will overcome the high costs. But for anything that isn’t repeatable, it’s difficult to really justify automation over simply carrying out the testing manually.

Additional thoughts

On a different note, I’d also like to talk a little about how the event was organised by XP Manchester as I felt it was a very successful format that I hadn’t experienced before. The format was one where everyone was asked to sit in a circle with 5 chairs in the middle of the circle. 4 people would sit on the chairs, leaving one vacant and discuss the topic (with the discussion being guided by a moderator). If someone wanted to join the discussion, they would sit on the vacant 5th chair and someone else from the discussion must leave. Meanwhile, the rest of us in the circle had to remain silent and listen. I felt this format was fantastic for keeping a focused discussion while allowing 30 people to be involved or listen to it. It was a refreshing change from traditional lecturing approaches or just a chaotic discussion with 30 people talking to each other at once. In some respects it was the best of both worlds. Credit to the guys at XP Manchester for running a great little event that produced some useful intellectual discussion!

Summary


  • There are definitely a lot of relatable problems when it comes to decision making for programmers, as there are for testers.
  • Don’t be tempted to follow a “best practice” or “industry standard” without considering whether it is right for you.
  • Always consider the costs of your decisions, always treat so called “silver bullet solutions” with suspicion - is this solution really the best? Is it really as easy as people are suggesting?
  • For groups of 30ish people, if you want to generate a focused, intellectual discussion for people to listen and learn from but don’t want to use a lecture/seminar format - then consider the format described above!

Friday, 18 September 2015

Sec-1 Penetration Testing Seminar

Introduction

Recently I was invited to a seminar by Sec-1 on Penetration Testing. It was a great introduction to the discipline and I took away quite a few really great points. I’ve never really performed Penetration Testing myself and I only have a general knowledge of it - enough to be able to identify and understand some basic problems. I’m looking to expand my knowledge of this type of testing so that I can bring much more value to my functional testing. In this blog post I will talk about the main points that I took away from the seminar.

You are only as secure as your weakest link

You may spend a lot of time securing one particular application server, but if you have just one older or less secure server on the same network, your security is only as strong as that one weak server. There may be an old server used for your office printer that is not considered a security concern, but it can be used by hackers to gain access and compromise your network.

Whitelist, don’t blacklist

If you write rules that attempt to prevent specific malicious interactions, you end up with a maintenance nightmare by continually having to maintain and update these rules for each new threat. It is much more effective to instead write your rules to expect the correct interaction.
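As an illustrative sketch (in Python, with a hypothetical order-ID format), a whitelist rule describes the one shape of input you expect, so new attack patterns are rejected by default rather than each needing a new rule:

```python
import re

# Whitelist: accept only the exact format we expect (here, a
# hypothetical order ID of two capital letters and six digits),
# instead of trying to enumerate every malicious pattern.
ORDER_ID = re.compile(r"^[A-Z]{2}\d{6}$")

def is_valid_order_id(value: str) -> bool:
    return bool(ORDER_ID.fullmatch(value))

print(is_valid_order_id("AB123456"))         # True
print(is_valid_order_id("1; DROP TABLE x"))  # False - rejected by default
```

The blacklist equivalent would need an ever-growing list of forbidden substrings, and any payload you hadn’t thought of would slip straight through.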

Keep up to date as much as possible

It can be a pain to keep libraries, servers and software up to date because the updates can break existing application functionality. But where possible it should be encouraged because these updates may contain important security updates. However, you cannot rely on the change logs for the update to tell you about any important security fixes because typically they are described as “minor bug fixes”. Companies do this because it can be considered bad publicity to admit they are regularly fixing security holes.
Keeping up to date will save time in the long run, as your systems become more secure without you needing to create your own security fixes later for systems you have not updated.

Only a small number of exploits are needed to cause major vulnerabilities

At the seminar they demonstrated a variety of techniques that could be used to get just enough information to open larger vulnerabilities. Through SQL injection, a poorly secured form could provide full access to your database - allowing hackers to potentially dump your entire database which they can then use to gain further access to your network or sell on to other malicious entities.
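As a sketch of why a single poorly secured query is enough (using Python’s built-in sqlite3 module and a made-up users table), the classic `' OR '1'='1` payload turns a lookup for one user into a dump of every row, while a parameterised query treats the same payload as harmless data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string concatenation lets the payload rewrite the query.
vulnerable = "SELECT secret FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable).fetchall())  # leaks every row

# Safer: a parameterised query treats the payload as plain data.
safe = "SELECT secret FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # no rows
```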

Attacks do not have to be direct

Even if your own system is highly secure, hackers can target totally independent systems such as targeting websites your employees visit and gathering password data. Typically a lot of people still re-use passwords and this can be an alternative way into your system. In the same vein, you are open to attack through integration partners if their own systems are not as secure as yours. Again, you are only as strong as your weakest link.

Summary

  • I found the seminar useful and I certainly learnt a lot. I can recommend Sec-1’s seminars to anyone who has only a general knowledge of penetration testing and wants to understand more.
  • Keeping software and hardware up to date has more benefits to security than it may first appear because security fixes are not always made public knowledge.
  • Penetration testing requires specialist skills and knowledge. However, I feel there is still some worth to having a better understanding as a functional tester. It allows me to pick up on potential areas that may concern security and helps me to drive quality in terms of security by challenging lax attitudes on these “minor issues”.

Wednesday, 2 September 2015

Important defects or significant information?


Introduction

As a tester I feel I am a provider of information, information that allows others to best judge quality and risk. If this is correct, should I merely report all information with no care for judging importance or priority? If I filter the information to what I think is important, surely I'm influencing the decision process of others? I feel this is, as ever, a murky gray area that has no easy answers.

Who decides importance?

Project Managers, Product Owners, Business Analysts, stakeholders - whoever determines what is worked on - decide importance. There is absolutely no question of that. As a tester I am not the one who decides what is worked on. I am not usually in a position where I am in conversation with the entire business or have sufficient knowledge of ‘the big picture’ to make these decisions, and it isn't the job I'm hired to do. Of course, there may be circumstances where these jobs are blurred (there are Project Managers who test). Still, in a typical company setup, I'm rarely hired as a tester to manage projects.


However, that doesn't mean as a tester I can’t have some knowledge of the wider project or business concerns. It vastly improves my testing ability to have this knowledge! So, I do have the information to assist others in measuring or deciding importance. I am a gatherer of information, and it is how I communicate this which is all important. In doing this, I will need to use careful language to ensure that I am not under-emphasising or over-emphasising particular parts of that information.


For example, a product owner is gathering metrics on how a system is used. They measure how often particular features are used by customers and decide how important bugs or problems affecting those features are, based on this metric. If I have found a critical bug in a feature, I have to be very careful to highlight and justify its critical nature. If I only described the bug as “There is a problem with this feature”, the product owner may dismiss it if it is a relatively low-use feature. But what if the bug corrupts the database? That surely has to be fixed? This is why careful use of language is important. A tester needs to understand the significance of the information they have gathered and convey it in a balanced way.

Informing not blocking

So if a tester helps decide significance, how far should they go in justifying significance? This is the tricky part - you must ensure you are not blocking the business, or even more importantly, you must ensure you are not seen as blocking the business. You need to balance providing useful information, which allows others to make decisions, with the language you use in describing that information. For example, if I find a defect which I believe is very significant and I feel I need to highlight, I could use the following kind of language:


“There are lots of important defects, affecting all sorts of areas. We must fix them immediately and we cannot release until they are fixed!”


This is very poor use of language and doesn't provide any useful information. Firstly, I'm declaring the importance with finality, when I am not the one to judge the importance of the defects. Secondly, I'm telling people what to do while failing to provide any justification. It also gives the impression that I am demanding the defects be fixed.
Now consider if I worded it like this:


“There are two defects that are significant. The web application server fails to start because it is missing a configuration file and the database updates have deleted the accounts table. I also have a further list of defects that I think are significant but these two I think need attention first.”


Here I am highlighting what I know are significant defects and providing information about them so that people can make their own conclusions. By highlighting the significance, I focus people’s attention on those defects. By providing summaries of the defects, I allow people to make their own judgement of whether the defects are really important. So here I am not declaring importance, but I'm suggesting a course of action backed up by the information I have. This helps promote an image of my testing as being informative, rather than demanding.

And the blocking? Surely I can’t let a defect go live?!

It’s also important, therefore, to have an attitude which allows you to accept and understand the decision that is made. Even when you have presented this information, the business may decide to accept the defect and not fix it. At this point you have done your job and should not feel responsible for the decision. Testers are not the gatekeepers for defects going live. Testers are more like spies - gathering intelligence to inform decisions made at a strategic level. Just like a spy, you may commit significant time and energy to delivering information you felt was significant - sometimes beyond what you were asked to do. But the spy doesn't act on the information, they merely deliver it. 007 is not a good spy in this respect - he tries to defeat the defect all by himself, and it may only be another henchman. We want to be team players: provide information to others and help attack the boss pulling the strings!

So I just let someone else decide, so I shouldn't care?

No, you absolutely should care! You should be passionate about quality and take pleasure in delivering clear and accurate information to the relevant people. Recognising that someone else takes the decision is not a sign that your information - your work - doesn't matter. It’s a sign that there is more to consider than one viewpoint: the business or organisation cares about the wider view. But it can only make the best decisions if the smaller views communicated up are given care and attention. The value of any group must surely be the sum of its parts, so by caring you are implicitly helping the wider group care.

Summary


  • Testers don’t decide importance, however, they can influence importance by providing information on significance.
  • The language you use shapes how others perceive significance, so your words must be carefully chosen - they should be objective, not subjective.
  • Testers should help the wider business to make informed decisions, not become the gatekeepers for defects.

Wednesday, 19 August 2015

How much testing is enough?

Introduction

Risk analysis is absolutely key to being an effective tester. I have rarely found it effective or possible to “test everything”, and so there is always an element of deciding what and how much I want to test before I have confidence in the quality of a piece of work. Even in cases where I do need vast test coverage, I still need to prioritise what I will test first. The reason I do this is because I want to provide reports on the most important defects as soon as possible, as this is where I deliver the most value.

What to test and how much to test?

So how do I answer these questions? With more questions of course! There are a variety of factors that determine how I will answer, but some general questions that help me decide are:
  • What is it? What does it do?
  • How complex is it?
  • How much do I understand about it?
  • Does it have good documentation I can refer to?
  • Does it have clear requirements?
  • How much time do I have to test? Is there a deadline?
  • What resources do I have available to me?
  • What tools do I have available to me?
  • How critical to the business is it?
  • Does it interact with or affect other critical systems?
  • Who uses it? How do they use it?
  • Is it a modification to an existing system or a brand new system?
  • If it's a modification, does the system have a history of instability?
  • What is the modification and what does it affect?
  • Are there any pre-existing defects I need to know about?
  • Are there any performance or security concerns?
  • What are the most important parts of the system? Do I have an order of priority?
There are many, many more questions I could ask. Some of these I might already know the answers to, but they still influence my decisions on what and how much to test. It’s important to realise that some of these questions impact one another; it is only with the full picture that I can effectively identify risks. For example, limited time and resources directly affect how much I can test, so I would prioritise the critical areas of the system that are new or have changed.
Asking these questions also allows other members of the team to consider them and helps them gain an insight into my work and what I’m looking for. Over time this can provoke them into providing better information and lead to more collaboration.

But surely you always prioritise critical areas of the system?

Not necessarily - there may be many critical areas, and it may not be possible for me to test all of them given the time and resources available. We may in fact consider some critical areas to be very stable and know that they have not changed. I may decide to accept the risk of not testing these areas in order to focus on areas that I know are more unstable or have been affected by change.

No testing? That’s crazy!

I’m not saying no testing - I’m suggesting that no testing can be an option, but one of many. Ideally, if there were critical areas that I felt I couldn’t comprehensively test in the time frame, I would still look to perform some testing. This can range from very basic smoke tests to time-boxed exploratory tests or some further prioritised order of tests. If the area is regularly covered during regression testing, it may already have automation scripts, and I may choose to run only those.
However, ultimately, you will be making a decision to not test something, somewhere. Therefore you must be comfortable with being able to draw this line based on your informed understanding of the risks.

What if there is no deadline?

Then I would ask the business how long is too long. The business will want the work by some ideal time, otherwise they would not ask for the work to be carried out. They will not wait indefinitely and there is always value in delivering work quickly.
Usually a business may give you no deadline simply because they do not understand enough about testing but want you to do a good job. They don’t want to give you an arbitrary deadline because they don’t know themselves how much testing is enough. It is important to start a dialogue at this point to really explore what the business wants and collaboratively come to a decision on how much testing you really want to do.

Summary


  • In order to decide what to test, you need to gather information regarding time, resources, priorities, etc.
  • Not testing specific areas is a valid option.
  • Comprehensive testing is rarely an option, particularly in an agile environment.
  • There is always a desired deadline even if it is not explicitly stated.

Sunday, 16 August 2015

The role of a tester in backlog grooming and planning


Backlog grooming

As an on-going activity, the product owner and the scrum team should be actively reviewing their backlog and ensuring the work is appropriately prioritised and contains clear information. This allows each piece of work to be more easily planned into a sprint as it can be more accurately estimated.

As a tester I actively try to be involved in this process as it is my first opportunity to assess the requirements and the information provided. It also allows me a chance to gather information required for testing, which allows me to provide more reliable estimates.

The objective is to be in a position where for any piece of work presented in planning you know exactly what the work requires. You should then have a good idea of what you will test and therefore provide reliable estimates. If this is not the case, then the work cannot be effectively planned into the sprint.

Planning and estimation

At the start of each sprint, you have a sprint planning meeting, in which the team collectively decide what work they can commit to completing by the end of the sprint. This may or may not involve an estimation process. It is your last opportunity as a tester to ensure that you have all the information you need before work starts. If the appropriate backlog grooming has been done, this should be relatively straightforward; however, there will inevitably be pieces of work that need clarification or where something has been missed.

Some typical thoughts I have in a planning meeting for a piece of work are as follows:
  • Do I understand the requirements?
  • Does everyone else understand the requirements? (and does their understanding match mine?)
  • What requirements have not been written down?
  • Are there any external factors such as legal requirements or third parties?
  • How can I test this work? Is it even testable?
  • Do I need any additional tools to test this work?
  • Do I have any dependencies in order to test this work, such as needing live data or having to work directly with a customer or third party?
  • Does this work conflict with anything else in this sprint?
  • Does this work conflict with work being done by other teams?
  • Are there any repetitive checks that I could automate for this work?
  • Do I need to consider other forms of testing such as security or performance testing?
  • Is the balance of development workload to testing workload viable?

Hopefully I will have asked most of these questions during backlog grooming, and I wouldn’t always ask all of them - it very much depends on the context of the work. But hopefully this demonstrates that you can easily think of a lot of questions, and it is important to ask them before and during planning.

Once you have the answers to all of these questions, you should have a good idea of what you are going to test. From this you can provide a more reliable estimate of how much testing you would like to do. Not only should you have a good idea of how long it would take, but also you should be better equipped to analyse risk.

Summary

  • Planning a sprint is easier with clearly defined work and when the team has prepared for the planning meeting.
  • To achieve this, backlog grooming should be used to ensure tickets are prioritised appropriately and contain enough information.
  • Backlog grooming should also be used to prepare for planning such as considering if you need testing tools or environments.