Monday, 9 November 2015

Automation in testing - writing your own tools

Introduction

A consistently hot topic in testing is “automation” and how it can be used to improve testing. In this blog post I’m going to talk about how you can quickly get real value out of automation.

Automation? So you’re going to replace me with a machine?

Nope, I’m going to suggest you augment your abilities as a tester like a cyborg! Part human, part machine. Frequently, people assume automation refers to having a machine perform all of the tests you might otherwise perform manually. But automation can also be used to help you test faster manually. Are you spending a lot of time tearing down databases and re-creating data? Are you spending a lot of time repetitively reading and comparing files? Automation can help you focus on the fun part of testing!

So where do I start?

Before I get onto some ideas for what you can automate, you need tools to create the automation! First of all, do you have any programming knowledge? If you don’t, don’t fret! (If you do, please ignore me, as I’m writing this assuming you don’t.) There are things called “scripting languages” - a form of programming language that is usually interpreted rather than compiled. What does that mean? In practice, it means they are much easier to work with, and some of them even have a more natural, English-like syntax. Scripting languages are a great way to quickly and easily put together programs that automate tasks for you, and hopefully you will find them much more accessible than compiled languages such as Java or C.

So which language is best?

The one you are most comfortable with, really. If you already know a language, there is nothing wrong with sticking with it. If you don’t know any languages, then the ones I would recommend are Python and Ruby. I personally prefer Python and will use it for the rest of this post, but Ruby is an equally good place to start.
Check them both out and decide for yourself which you like the most.
The reason I recommend Python and Ruby is that they are very commonly used and can be run on any machine, be it Windows, Mac or Linux. There are also lots of examples of free code and libraries (collections of code) available online for you to use to tackle almost any problem. I personally prefer Python simply because it is more human-readable than Ruby and many other languages.

Ok, so I’ve done a few tutorials but what do I start automating?

Think about the tests you’re running and where you spend the most time. Can any of it be handled by a script? Is there something about the setup of the test that can be automated?

I don’t know?

That’s ok - it is hard at first to know what you can and can’t do until you’ve attempted something or seen it done before. Naturally, I would recommend trying some ideas out, as this is always a good way to learn. But to give you a sense of what is possible, I’ll cover some of the tools I’ve created myself.

Data Setup

My first port of call whenever I’m considering automation-assisted testing is the data setup for the test. Sometimes a lot of tests require a lot of setup beforehand, but you don’t want to vary that setup much - you simply want an environment where you can focus on a particular area. Perhaps you’re attempting to recreate a bug 20 pages into a website or an application. Or perhaps you’re testing an invoicing system and simply want some base data to repeatedly test against. These kinds of tests often involve a lot of setting up and then restarting to try something else - you want a “clean” set of data that isn’t spoiled too much by the testing you’ve done before.

Automation can help a lot in quickly setting up the exact same “clean” data again. You may ask, “but why not just have a saved state of an environment?” - sure, that is a valid alternative strategy for this problem. However, it’s a different kind of maintenance: you are maintaining a database that you need to keep updating. A script may also need updating as the system you are testing changes, but personally I prefer maintaining scripts - I feel you can more easily write scripts that are resilient to change than keep a database in a clean state.

Some examples of tools I’ve created to help speed my testing up by creating data are:
  • A script that utilises the Selenium WebDriver library to open a browser window and create data in a web application for me. I wouldn’t normally recommend using Selenium for data setup, because it is designed for checking UI elements and is therefore quite slow. But in this particular case it was a legacy system with no alternative to the UI for creating the data, and Selenium is a useful library to know about if you need to script around a UI. This script became a time-saver simply because it could be left running while I worked on other things.
  • A script that controlled two SIP telephones and had one call the other, using the sipcmd program and, later, the PJSIP library. This was used to quickly and easily create call traffic, especially in volume (e.g. making 100 calls). During some of my testing I’ve had to simulate telephone calls, and automating this made it easy to create that volume. It also had the benefit of logging details about each call that would have been more difficult to observe manually.
  • A script that uses the requests library to interact with a REST API. This allowed very rapid data setup within seconds, and the requests library is extremely straightforward to use. I fully recommend this approach to data setup because of its speed - there is a minimal sketch of the idea just after this list.

Speeding up tests

The second area I focus on is where I spend the most time during a test. Am I manually performing something that is just checking? Can a machine do this checking instead? I once had to test a change to a billing system that produced invoices for customers. I wanted to run the invoices on the old system and the new system and compare them, to see if I could quickly identify any obvious problems between the two. Manually, this was a lot of “checking” that didn’t require much intellectual thought - I would literally be comparing one value to another. That makes it a candidate for automation, so I can focus on more intellectual tests, such as edge cases, or scenarios where we don’t currently have a customer (and which I therefore can’t test through this comparison).

Some examples of tools I’ve created for speeding up tests are:
  • A script that compares groups of CSV (comma-separated values) files. I used this to very quickly compare the results of running invoices in one version of a billing system against another. It simply compared the totals of the invoices - so it wasn’t “testing” very much, but it allowed me to identify easy or obvious billing failures very quickly whenever the totals didn’t match. This is a good example of a script that augmented my manual testing and allowed me to provide information much faster.
  • A script that uses the requests library to quickly check that every customer-facing API endpoint returns a 200 OK response. This was useful for performing a very basic check that the API appeared to be functioning, which would quickly catch any easy bugs. Minimal sketches of both ideas follow this list.

When not to automate

So this all sounds great, right? Remove all the boring, time-consuming testing and leave a machine to do it? Hold your horses! It’s not quite as straightforward as that. There are times when automation really isn’t a good idea. Here are some common pitfalls:
  • The desire for invention. Beware of becoming more interested in engineering automation than providing valuable testing. Make sure that any automation you write is going to deliver some value to your testing - don’t automate parts of your testing simply because you can.
  • Avoid creating a mirror image of the system you’re testing. This is very easy to do. Say you are testing a calculator - it’s easy to end up writing a script that enters 2+2 and then calculates the answer itself, so both the calculator and the script calculate that the answer is 4. Why is this bad? Because in a more complex example where a failure occurs, how do you know which is wrong? Has your script calculated the answer incorrectly, or is the calculator failing? Instead, your script should already know the answer - you shouldn’t be calculating it during the test (see the sketch after this list).
  • Once-only tests with little to no repetitive nature to them. By “repetitive nature”, I mean there is nothing in the one-off test that requires you to repeat something like data setup or checks - those parts can still be worth automating. But otherwise, one-off tests are nearly always quicker to perform manually, and the cost of creating an automation script won’t be recovered, because it may never be used again.

One more thing…

This post has been all about creating your own tools via scripting, but sometimes a tool may already exist! It’s always worth searching the Internet for existing tools that may help your testing; learning to script is most useful when you don’t have time to search and try different options and just want to quickly put together something small.
Existing tools can take many forms such as Python libraries, professional software products or perhaps plugins or extensions to another piece of software your company already uses.

Summary

  • Automation is useful to augment your testing, improving your speed and efficiency.
  • Scripting languages are a good place to start learning how to create your own automation tools.
  • It’s hard to know what you can script at first, so never stop asking the question “can I script this?”.
  • However, beware of creating automation simply because you can - make sure it’s valuable.
  • Even if you can’t create your own tool, have a look around and see if anyone else has.

Wednesday, 14 October 2015

An introduction to testing

Introduction

I’ve had a few requests for an article that explains what testing involves for a complete newcomer, so here it is! As I’ve progressed in my career in testing, I’ve discovered it’s a subject that isn’t widely taught in academia, and very few people ever choose it as a career path. I’d like to contribute to changing this and hopefully one day inspire others to choose it and feel the same passion to become a better tester!

What is testing?

Testing is the process of designing, executing and analysing tests. According to thefreedictionary.com, the definition of a “test” is:
“A procedure for critical evaluation; a means of determining the presence, quality, or truth of something; a trial”
In other words, it typically means taking something - be that a piece of software, an aircraft or a piece of chocolate - and evaluating some truth about it, often by comparing it to some form of specification.
The result of carrying out these tests is that you gather information and learn something about the thing you are testing. Testing is all about learning as much as possible about whatever you are testing - through this you can provide information regarding how a system actually works, what specific code does, what bugs exist and the overall quality.
The reason companies hire dedicated testers is simply that there is so much to learn and test! Typically, companies operate as departments, with a programmer and a salesperson focusing on totally different aspects of the business. Because of this, very quickly no one in the company knows everything about the product. As a tester or a testing team, you play a part in trying to provide this full picture - or at least provide people with information on what they don’t know.

Why not document this information or automate these tests?

Absolutely! As a tester I always promote the use of documentation, to ensure that information I have worked to uncover is not difficult to uncover again.
Automation used well can also save a lot of time in discovering simple or basic failures particularly for more repetitive or laborious tasks, taking the burden off manual testing to focus on more creative types of tests.
However, both of these are expensive activities. Both require work to design, write and maintain, and can only be written once the information has been discovered.
Also, automation in itself is not testing, because it cannot understand the context of what it is testing. Automation will only check what you tell it to check and will not, for example, investigate problems it notices during a test. So while it can save you some time performing repetitive checks, you still need a human to critically observe the system too.


Hence, the position of a tester exists to help guide decisions on what tests to run, what to automate and what to document. This is all very dependent on what is being tested and the non-functional requirements of the business. Some companies have requirements that mean far more automation and documentation is needed than at others. Some companies will have programmers carry out testing; others will have huge testing departments of over 100 testers; others may even rely on end-users to test their products. In all of these scenarios, however, the objectives of testing are the same - to gather information on the system and learn.

How do I become a tester? How can I learn?

Regarding academia, there currently don’t seem to be any academic courses on testing at all - a quick search on UCAS in the UK shows no degree courses with testing as a subject.
For software testing, there are qualifications such as ISTQB/ISEB which many companies recognise. However, it is not necessary to hold these qualifications to become a tester, and there is a lot of dissatisfaction with them in the software industry.
From my experience, there are only two routes into software testing:
  1. Applying for a junior testing role in an organisation which is willing to hire inexperienced staff and train them.
  2. Working in a different role and switching to testing within the same company.


The main piece of advice I can give, though, is this: if you are applying for a testing role, try to learn as much as you can about the company, what you might be testing and anything else you can think of. What will make you a good tester is your ability to learn, to understand what you still don’t know, and to seek that information out - testers should always be inquisitive and asking a lot of questions.

Ok, so if there aren’t many useful courses, what about online?

Yes! There are plenty of places online to read about testing practices! I’m mainly knowledgeable about software testing, so I can only recommend resources on software testing as a start. There are also some good places to ask questions, read further into various topics and become involved in the testing community.

I’ve also found one video in particular to be a fantastic introduction to testing - it articulates what you are trying to do as a tester pretty well.

Saturday, 26 September 2015

Microservices discussion by XP Manchester

Introduction

A couple of weeks ago, I was invited by some fellow programmers to attend an event on microservices organised by XP Manchester. Microservices are a hot topic in software development right now, so I wanted to go along - partly out of my own interest in the subject, but mainly to think about how testing may be impacted and what considerations there may need to be for testing. The event was focused on programming and software architecture, but its discussion format allowed for questions, so a variety of different points were talked about through the evening.

What is a “microservice”?

The conclusion from the evening was that there is no agreed definition! However, I think we can summarise microservices as an architectural approach that breaks your system down into smaller parts. The main reason to do this is scalability, but it is an expensive process, and the conclusion for the evening was “Microservices are very hard!”.

What does testing have to do with this?

I had initially gone to the event hoping to better understand some of the implications for testing, but I actually found myself taking a step back and observing the meet-up and the discussion from a philosophical view. I found a lot of parallels with the debates on automation in the testing world. So while the content of the discussion had little to do directly with testing, I think there are lessons that apply to test automation and many other “hot topics”.

The temptation to try something new - the fallacy of “best practice”

One of the points raised was that microservices as a concept has been around for several decades, but only very recently has it become a popular subject, mainly down to the famous use cases of Netflix and Spotify. It is very easy for people to see companies such as these and want to copy their models. The problem is that solutions like microservices are very expensive and complex; they are solutions to very particular problems, and too expensive to be used at all times. It is tempting to consider them a “best practice”, which is totally inappropriate. I see the same attitude towards test automation - large companies talk about automation, and everyone else decides to follow it as a best practice or “the best thing to do”. Automation is also very expensive; it is not a best practice but a solution to a particular problem.
At the event, someone mentioned a great analogy: you wouldn’t use the same design and manufacturing methods to build a model plane as you would a real jumbo jet. Just because famous or bigger companies use particular solutions doesn’t mean those solutions are appropriate for your situation.

Only looking at the benefits, without considering the cost

Another point I could relate to is the belief, held by some, that microservices are easier and simpler - that by breaking down your monolithic code base, you’re breaking down the complexity. This is false: the complexity is still there, it has just been spread around, which makes it both easier and harder to manage in different respects. While a particular area of code is easier to manage in isolation, the overall integration of the full system is much harder to manage in terms of infrastructure, deployment and debugging.
I see the same problem in test automation - a common view I’ve come across is that automation is always quicker than a human manually typing at a keyboard. Just as with microservices, this ignores the bigger picture, focusing on the speed of executing a test rather than considering what you gain and lose in the wider process. Automation is more than just executing a test; there is a lot of work to do before and after the execution! The cost of automation is the time it takes to write the test, analyse its results every time it’s run, and maintain it. With a human manual tester, the cost of writing the test is massively reduced because you are not writing code - in some cases perhaps nothing needs to be written at all! Analysing the results can also be much quicker for a human, who can run the test, notice irregularities and analyse all at the same time - something a computer cannot do. Maintenance is also far lower for manual testing, because a human can adapt to a new situation easily.


Because microservices and test automation are both very expensive, the cost must be weighed against the benefits; the approach only makes sense if the benefits outweigh the cost. Typically, the value in automation comes from repeatable activities, where over time the value overcomes the high initial cost. But for anything that isn’t repeatable, it’s difficult to justify automation over simply carrying out the testing manually.

Additional thoughts

On a different note, I’d also like to talk a little about how the event was organised, as I felt it was a very successful format that I hadn’t experienced before. Everyone was asked to sit in a circle, with five chairs in the middle. Four people would sit on the chairs, leaving one vacant, and discuss the topic (guided by a moderator). If someone wanted to join the discussion, they would sit on the vacant fifth chair, and someone else from the discussion had to leave. Meanwhile, the rest of us in the circle had to remain silent and listen. I felt this format was fantastic for keeping a focused discussion while allowing 30 people to be involved or listen in - a refreshing change from traditional lecturing approaches, or from a chaotic discussion with 30 people all talking at once. In some respects it was the best of both worlds. Credit to the guys at XP Manchester for running a great little event that produced some useful intellectual discussion!

Summary


  • There are definitely a lot of relatable problems when it comes to decision making for programmers, as there are for testers.
  • Don’t be tempted to follow a “best practice” or “industry standard” without considering whether it is right for you.
  • Always consider the costs of your decisions, and treat so-called “silver bullet” solutions with suspicion - is this solution really the best? Is it really as easy as people suggest?
  • For groups of around 30 people, if you want to generate a focused, intellectual discussion for people to listen to and learn from, but don’t want a lecture/seminar format, consider the format described above!

Friday, 18 September 2015

Sec-1 Penetration Testing Seminar

Introduction

Recently I was invited to a seminar on penetration testing run by Sec-1. It was a great introduction to the discipline and I took away quite a few valuable points. I’ve never really performed penetration testing myself and have only a general knowledge of it - enough to identify and understand some basic problems. I’m looking to expand my knowledge of this type of testing so that I can bring more value to my functional testing. In this blog post I will cover the main points I took away from the seminar.

You are only as secure as your weakest link

You may spend a lot of time securing one particular application server, but if you have just one older or less secure server on the same network, your security is only as strong as that one weak server. An old server that runs your office printer may not be considered a security concern, but it can be used by hackers to gain access and compromise your network.

Whitelist, don’t blacklist

If you write rules that attempt to block specific malicious interactions, you end up with a maintenance nightmare, continually having to update those rules for each new threat. It is much more effective to write your rules to accept only the correct interactions and reject everything else.

Keep up to date as much as possible

It can be a pain to keep libraries, servers and software up to date, because updates can break existing application functionality. But where possible it should be encouraged, because updates often contain important security fixes. However, you cannot rely on an update’s change log to tell you about important security fixes - typically they are described as “minor bug fixes”. Companies do this because it can be considered bad publicity to admit they are regularly fixing security holes.
Keeping up to date will save time in the long run, as your systems become more secure without you needing to create your own security fixes later for systems you have not updated.

Only a small number of exploits are needed to cause major vulnerabilities

At the seminar they demonstrated a variety of techniques that could be used to gather just enough information to open up larger vulnerabilities. Through SQL injection, a poorly secured form can provide full access to your database - allowing hackers to potentially dump the entire database, which they can then use to gain further access to your network or sell on to other malicious parties.

Attacks do not have to be direct

Even if your own system is highly secure, hackers can target totally independent systems - such as websites your employees visit - and gather password data. A lot of people still re-use passwords, and this can be an alternative way into your system. In the same vein, you are open to attack through integration partners if their systems are not as secure as yours. Again, you are only as strong as your weakest link.

Summary

  • I found the seminar useful and I certainly learnt a lot. I can recommend Sec-1’s seminars to anyone who has only a general knowledge of penetration testing and wants to understand more.
  • Keeping software and hardware up to date has more benefits to security than it may first appear because security fixes are not always made public knowledge.
  • Penetration testing requires specialist skills and knowledge. However, I feel there is still worth in a functional tester having a better understanding of it: it allows me to pick up on potential security concerns and helps me drive quality by challenging lax attitudes towards these “minor issues”.

Wednesday, 2 September 2015

Important defects or significant information?


Introduction

As a tester I feel I am a provider of information - information that allows others to best judge quality and risk. If this is correct, should I merely report all information with no attempt to judge importance or priority? If I filter the information down to what I think is important, surely I’m influencing the decision process of others? I feel this is, as ever, a murky grey area with no easy answers.

Who decides importance?

Project Managers, Product Owners, Business Analysts, stakeholders - whoever determines what is worked on - decides importance. There is absolutely no question of that. As a tester, I am not the one who decides what is worked on. I am not usually in conversation with the entire business, nor do I have sufficient knowledge of ‘the big picture’ to make these decisions - and it isn’t the job I’m hired to do. Of course, there may be circumstances where these jobs are blurred (there are Project Managers who test). Still, in a typical company setup, I’m rarely hired as a tester to manage projects.


However, that doesn’t mean that as a tester I can’t have some knowledge of the wider project or business concerns - having this knowledge vastly improves my testing! So I do have information that can assist others in measuring or deciding importance. I am a gatherer of information, and how I communicate it is all-important. I need to use careful language to ensure that I am not under-emphasising or over-emphasising particular parts of that information.


For example, suppose a product owner is gathering metrics on how a system is used. They measure how often customers use particular features and, based on this metric, decide how important bugs affecting those features are. If I have found a critical bug in a feature, I have to be very careful to highlight and justify its critical nature. If I only described it as “there is a problem with this feature”, the product owner may choose to dismiss the bug if it affects a relatively low-use feature. But what if the bug corrupts the database? Surely that has to be fixed? This is why careful use of language is important: a tester needs to understand the significance of the information they have gathered, and convey this in a balanced way.

Informing not blocking

So if a tester helps decide significance, how far should they go in justifying it? This is the tricky part: you must ensure you are not blocking the business - or, even more importantly, that you are not seen to be blocking the business. You need to balance providing useful information, which allows others to make decisions, with the language you use to describe that information. For example, if I find a defect which I believe is very significant and feel I need to highlight, I could use the following kind of language:


“There are lots of important defects, affecting all sorts of areas. We must fix them immediately and we cannot release until they are fixed!”


This is very poor use of language and doesn’t provide any useful information. Firstly, I’m declaring the importance with finality, when I am not the one to judge the importance of the defects. Secondly, I’m telling people what to do while failing to provide any justification. It also gives the impression that I am demanding the defects be fixed.
Now consider if I worded it like this:


“There are two defects that are significant. The web application server fails to start because it is missing a configuration file and the database updates have deleted the accounts table. I also have a further list of defects that I think are significant but these two I think need attention first.”


Here I am highlighting what I know to be significant defects and providing information about them so that people can draw their own conclusions. By highlighting the significance, I focus people’s attention on those defects; by providing summaries of the defects, I allow people to make their own judgement of whether the defects really are important. So I am not declaring importance - I am suggesting a course of action backed up by the information I have. This helps promote an image of my testing as informative rather than demanding.

And the blocking? Surely I can’t let a defect go live?!

It’s also important, therefore, to have an attitude that allows you to accept and understand the decision that is made. Even when you have presented this information, the business may decide to accept the defect and not fix it. At this point you have done your job and should not feel responsible for the decision. Testers are not the gatekeepers for defects going live. Testers are more like spies - gathering intelligence to inform decisions made at a strategic level. Like a spy, you may commit significant time and energy to delivering information you feel is significant - sometimes beyond what you were asked to do. But the spy doesn’t act on the information; they merely deliver it. 007 is not a good spy in this respect - he tries to defeat the defect all by himself, and it may only be another henchman. We want to be team players: provide information to others and help attack the boss pulling the strings!

So I just let someone else decide - I shouldn’t care?

No, you absolutely should care! You should be passionate about quality and take pleasure in delivering clear, accurate information to the relevant people. Recognising that someone else takes the decision is not a sign that your information - your work - doesn’t matter. It’s a sign that there is more to consider than one viewpoint: the business or organisation cares about the wider view. But it can only make the best decisions if the smaller views communicated up to it are given care and attention. The value of any group must surely be the sum of its parts, so by caring you are implicitly helping the wider group care.

Summary


  • Testers don’t decide importance, however, they can influence importance by providing information on significance.
  • The language you use shapes how others perceive significance, so your words must be carefully chosen - they should be objective, not subjective.
  • Testers should help the wider business to make informed decisions, not become the gatekeepers for defects.

Wednesday, 19 August 2015

How much testing is enough?

Introduction

Risk analysis is absolutely key to being an effective tester. I have rarely found it effective or even possible to “test everything”, so there is always an element of deciding what and how much to test before I have confidence in the quality of a piece of work. Even in cases where I do need vast test coverage, I still need to prioritise what to test first. I do this because I want to report the most important defects as soon as possible, as this is where I deliver the most value.

What to test and how much to test?

So how do I answer these questions? With more questions of course! There are a variety of factors that determine how I will answer, but some general questions that help me decide are:
  • What is it? What does it do?
  • How complex is it?
  • How much do I understand about it?
  • Does it have good documentation I can refer to?
  • Does it have clear requirements?
  • How much time do I have to test? Is there a deadline?
  • What resources do I have available to me?
  • What tools do I have available to me?
  • How critical to the business is it?
  • Does it interact with or affect other critical systems?
  • Who uses it? How do they use it?
  • Is it a modification to an existing system or a brand new system?
  • If it's a modification, does the system have a history of instability?
  • What is the modification and what does it affect?
  • Are there any pre-existing defects I need to know about?
  • Are there any performance or security concerns?
  • What are the most important parts of the system? Do I have an order of priority?
There are many, many more questions I could ask. Some of these questions I might already know the answers to, but they still influence my decisions on what and how much I test. It’s important to realise that some of these questions affect one another; only with the full picture can I effectively identify risks. For example, limited time and resources directly affect how much I can test, so I would prioritise testing the critical areas of the system that are new or have changed.
Asking these questions also allows other members of the team to consider them, and gives them insight into my work and what I’m looking for. Over time this can prompt them to provide better information and lead to more collaboration.

But surely you always prioritise critical areas of the system?

Not necessarily - there may be many critical areas, and it may not be possible to test all of them in the time and with the resources I have available. We may also consider some critical areas to be very stable and know that they have not changed. I may decide to accept the risk of not testing those areas in order to focus on areas I know are more unstable or have been affected by change.

No testing? That’s crazy!

I’m not saying no testing - I’m suggesting that no testing can be an option, but one of many. Ideally, for any critical areas I felt I couldn’t comprehensively test in the time frame, I would still look to perform some testing. This can range from very basic smoke tests to time-boxed exploratory testing or some further prioritised order of tests. If a particular area is regularly covered during regression testing, it may already have automation scripts I could run, and I may choose to run only those.
However, ultimately you will be making a decision not to test something, somewhere. You must therefore be comfortable drawing this line based on an informed understanding of the risks.

What if there is no deadline?

Then I would ask the business how long is too long. The business will want the work by some ideal time, otherwise they would not have asked for it to be carried out. They will not wait indefinitely, and there is always value in delivering work quickly.
Usually a business gives you no deadline simply because they do not understand enough about testing but want you to do a good job. They don’t want to give you an arbitrary deadline because they don’t know themselves how much testing is enough. It is important to start a dialogue at this point, to really explore what the business wants and collaboratively decide how much testing you really want to do.

Summary


  • In order to decide what to test, you need to gather information regarding time, resources, priorities, etc.
  • Not testing specific areas is a valid option.
  • Comprehensive testing is never an option in an agile environment.
  • There is always a desired deadline even if it is not explicitly stated.