Wednesday, 1 November 2017

So you can test an API, what to learn next?

Introduction

Last week I ran my first ever workshop at a conference for TestBash Manchester! It was an awesome experience, totally different to the talks I’ve done before at meetups and smaller conferences. The workshop was all about how to get started with web API testing and I targeted it at beginners who had no prior knowledge of APIs. For this post though, I’m not going to talk about the workshop, but more about what happens next. Several people have asked me about a more advanced workshop and what they could study in their own time next. I don’t have a quick or easy answer to this as I feel there is lots more to learn and it really depends on your context. However, I’m going to try and discuss some areas and ideas.

Where are we starting from?

Before I get started, I’d like to clarify where my workshop left people and what this post is assuming you’re already familiar with.
  • What APIs are.
  • What APIs look like and how they work.
  • Why they are useful to understand for testing.
  • What API documentation looks like.
  • What paths and query strings are.
  • How methods work.
  • What status codes mean.
  • The concepts of authentication and authorisation.
  • How to create requests with authentication using Basic Auth.
  • The concept of resources and IDs.
  • What headers are.
  • What request and response bodies are.
  • An introduction to JSON & XML data formats and data types.
  • Using Postman’s basic features.
  • Understanding how Postman collections could be used.
  • Awareness of how basic automation can be created using Postman’s runner.
If you feel you're lacking in any of these areas, I would spend more time understanding the basics before starting on anything more advanced.


So with that, I’ll go over some areas that could form a more advanced workshop or that you could explore in your own time.

Try testing other, more complex APIs

In my workshop, we learned to interact with a simple API that my friend Lee Goodman and I built together. This API was intentionally designed in a way that allowed attendees to learn in stages, introducing different concepts at each stage. However, this is not how an API will look in reality: you won't have a nice way to learn it in stages; it will throw everything at you at once. You typically won't be able to interact with most APIs without authentication (which will be more complex and won't be the same for every API) and they will provide varying levels of quality of documentation.


One of the aspects of APIs that I didn't cover in my workshop is that they are an abstract representation of the resources, objects, functions and capabilities of an application or system. In plain English, this means that when you learn to use an application via an API, the picture you build up in your head from the API's structure and responses is a translation of the system underneath. Just as with translating from Japanese to English, there are concepts that are not easy or even possible to express in an API. People also make mistakes in translation, or have many different ways of creating the translation. It's useful to get some experience of this by using more APIs; you may start to notice these differences, get a feel for what works well and what doesn't, and perhaps get some sense of the compromises made. You will also see where some of the language that even I use is not consistent across all APIs.


You can try out various public APIs for free, one example being Twitter's API. You can find documentation for lots of public APIs you can try here:
There is also a simpler and neater API to play with, produced by Swagger, here:

Learn about more complex forms of authentication

In the workshop I only covered the basics of how to authenticate your requests in Postman with one of the simplest forms of authentication, called Basic Auth. There are many, many more types and technologies for authentication that you could learn about; some of them are very complicated and deserve an entire workshop in themselves! I don't feel it's necessary to understand them all because you probably won't come across many of them, but it could be useful to understand the more popular (and secure) kinds of authentication such as OAuth 1.0 and OAuth 2.0.

Learn about different kinds of headers

I briefly talked about headers in the workshop, mainly in reference to the “Authorization” header (which is the one Postman creates for you when you add authentication) and the “Content-Type” header, which we used in the workshop to tell the server whether we were sending JSON or XML with our POST and PUT requests (again automatically generated by Postman). There are a couple more headers that you can experiment with when sending requests, such as the “Accept” header, which can tell the server to return responses in a different format, in the same way as the Content-Type header. This means you can do weird stuff like send JSON data but demand the response is in XML. I have accidentally killed servers in the past with typos in my headers too! You can read more about different kinds of HTTP headers here.
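To make that a bit more concrete outside Postman, here's a minimal sketch in Python using the requests library, sending a JSON body but asking for XML back via the Accept header. The URL, payload and Basic Auth credentials are all made up for illustration, and whether the server honours the Accept header depends entirely on the API.

    import requests

    # Hypothetical endpoint - substitute the API you are actually exploring.
    url = "https://api.example.com/bookings"

    headers = {
        "Content-Type": "application/json",     # what we are sending in the body
        "Accept": "application/xml",            # what we'd like back in the response
        "Authorization": "Basic dXNlcjpwYXNz",  # Basic Auth: base64 of "user:pass"
    }

    payload = {"name": "Test booking", "nights": 2}

    # Send JSON but ask for XML back - the server may honour this, or may not.
    response = requests.post(url, json=payload, headers=headers)
    print(response.status_code)
    print(response.headers.get("Content-Type"))
    print(response.text)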

Learn to use the more advanced features of Postman

Postman has a lot of neat features which can be used to augment your testing in different ways. Learning to use collections can allow you to create documentation of an API you're exploring which you can share with other testers and developers (particularly useful when a new person starts on a project: you can give them a collection to get them up and running much faster). If you're finding yourself repeating the same requests a lot, especially to create data, Postman's collection runner can allow you to create automated scripts of requests that quickly generate test data for you.


You can further extend your collections into automated checks which can be rapidly run and tell you whether the API you are testing is ready for deeper exploratory testing or whether something is significantly wrong. You can do this by using Postman's test scripts feature. While these tests are written in Javascript, it's possible to write them with little knowledge of Javascript by using the example snippets. However, it may be helpful to learn a little bit of Javascript to get the most out of these scripts. You can learn Javascript via sites like this one, which is a free 30-day coding challenge.


Combined with Postman's pre-request script functionality, you can then create more complex collections using functions such as loops and branching. In addition to these, you can also learn about environments and variables, which let you parameterise data that needs to change every time you run a request. The most common example of using environments is where you have multiple test environments with different domain names (e.g. www.live.test.com & www.stage.test.com) but you don't want to keep re-writing the requests.


These features can allow you to chain requests together, so rather than manually copying an ID from one request to use in another, Postman can run the two requests together for you. This is a good blog post explaining how to do this.
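If it helps to see the chaining idea outside Postman, here's a small sketch of the same pattern in Python with the requests library: make one request, pull the ID out of the response, and feed it into the next request. The URL, field names and response shape are assumptions for the sake of illustration.

    import requests

    base_url = "https://api.example.com"  # hypothetical API

    # First request: create a resource and capture its ID from the response body.
    create_response = requests.post(
        f"{base_url}/bookings",
        json={"name": "Chained booking", "nights": 1},
    )
    create_response.raise_for_status()
    booking_id = create_response.json()["id"]  # assumes the API returns {"id": ...}

    # Second request: reuse that ID, much like a Postman environment variable.
    get_response = requests.get(f"{base_url}/bookings/{booking_id}")
    print(get_response.status_code, get_response.json())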

Try integrating your Postman collections as part of a CI pipeline

One of the most popular topics in software development currently is DevOps and the related topics of CI (Continuous Integration) and CD (Continuous Delivery). Typically a team that works to these methodologies has a ‘deployment pipeline’ where they build their code and run unit and integration tests. If you work with a team like this, you can set up their deployment pipeline to run your Postman collections for you. This means that the collections that help you create test data or check the environment is ready for deeper exploratory testing can be run for you every time you create a new build of the codebase. The tool that allows you to do this is called Newman. Newman simply allows Postman collections to be run on the command line, which means any CI build tool, such as Jenkins, Bamboo, TeamCity or GoCD, can run it too. Here is a blog about how to do that with Jenkins.
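As a rough idea of what the pipeline step looks like, the CI job usually just invokes Newman against an exported collection (and optionally an environment file); the file names below are placeholders:

    newman run my_collection.json -e staging_environment.json

Newman exits with a non-zero code when requests or test scripts fail, which is what lets the build tool mark the build as broken.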

Have a look at any existing APIs you may work with already

This seems obvious, but ask around about any APIs you might already work with; there may be systems you didn't realise existed that you could have a look at. Or there could be third-party integrations that your team uses within your application. There may be some existing documentation or even monitoring, and it can be especially interesting to have a look at any monitoring you already have. Tools such as AppDynamics gather a lot of data, such as API requests, their speed and their responses. This can give you an insight into how people use those APIs and what problems may already be occurring.

Try out other tools

Postman isn't the only way to interact with APIs. It's a great tool, and I especially like using it to teach with because it's popular and has a nice interface which isn't as cluttered as some others. But your team may use a totally different tool, or you may need to use another tool in future, and they all have different strengths and weaknesses. So it may be useful to learn some other tools such as:
Another popular tool for interacting with APIs is JMeter, though I highlight this separately as it's actually a load testing tool. It can be used in a similar fashion to Postman collections but is designed for running many, many concurrent requests and designing performance test runs. However, I have seen it successfully used as an automated functional checking tool, and it can be integrated into build pipelines too.

Write automated checks in a scripting or programming language

There will come a point where creating automated checks using Postman becomes very complicated or unwieldy. At that point it's typically easier to write them in a scripting or programming language. Why? Postman (and also JMeter) are GUI-based, so they enforce a particular pattern and design on your tests in order to work. Sometimes what you are trying to do doesn't neatly fit their structure, or sometimes you want to integrate with more systems or perform functions they don't provide.


Which programming language? It pretty much doesn’t matter, almost every popular programming language comes with libraries for making HTTP requests and frameworks for running tests. What you decide to learn should be guided by:
  • Your level of experience with programming.
  • Who is going to write and maintain these checks.
  • What languages people use around your workplace.
  • What languages are used for the application you are testing.
If you are working with a Java application and your team is happy to share the work and help out, it may make the most sense to write your automated checks in Java using frameworks like JUnit and libraries like REST Assured. This means they can be easily incorporated into the rest of the integration tests your team already has and removes the need to find more tools to run them in your pipeline.
However, you may not be working closely with a team like this or have developers to support you. You may have decided that it will be best for you to maintain the checks yourself. In this case it's more important to choose a language you are comfortable learning. In this scenario I personally like teaching people about Python and the requests library, because Python can be easier for newbies to learn. However, there are lots of other languages, such as Ruby, Javascript, C# and more, and none of them are a bad choice. They all have the capability to create these checks, and much of what you learn will be transferable to other languages.
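To make that concrete, here's a very small sketch of what such a check might look like in Python, using the requests library with plain assertions so a runner like pytest can pick it up. The base URL, endpoint and expected fields are placeholders for whatever API you're actually testing.

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical API under test


    def test_get_booking_returns_expected_fields():
        response = requests.get(f"{BASE_URL}/bookings/1")

        # The check: right status code, JSON content type, expected fields present.
        assert response.status_code == 200
        assert "application/json" in response.headers.get("Content-Type", "")

        body = response.json()
        assert "id" in body
        assert "name" in body

A test runner would execute this alongside any other checks, and it can be wired into the same CI pipeline as the Newman runs described earlier.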

Learn how to work with mocks

Sometimes you may need to test an application that relies upon a third-party API. Maybe it's a website whose back-end hasn't been finished yet. Perhaps you are working with lots of other development teams and some of the work has been finished early. In these situations it's useful to be able to create pretend versions of these APIs so you can test as if they were there. These are referred to as ‘mocks’. You can have a play with websites such as this one to create a fake API that responds just like the application you want to fake. You can then point the application you are testing at it and begin testing against your contract.
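If you'd rather experiment locally than through a website, a mock can be as simple as a small HTTP server returning canned responses. Here's a minimal sketch using only Python's standard library; the path and response body are invented, and a real mock would mimic whatever contract the real API exposes.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer


    class MockApiHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Pretend to be the real API: always return a canned booking.
            if self.path == "/bookings/1":
                body = json.dumps({"id": 1, "name": "Mocked booking"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()


    if __name__ == "__main__":
        # Point the application under test at http://localhost:8000 instead of the real API.
        HTTPServer(("localhost", 8000), MockApiHandler).serve_forever()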

Learn about contract testing

Speaking of contracts and mocks, there are now tools that let you create these mocks in a more reliable and automated way. Tools such as Pact allow teams to run automated checks against each other's services without having to understand how to run those services. While not strictly about APIs, learning how you could automatically check an API you provide for another team (or vice versa) can be a lot more useful than creating massive Postman collections or custom mocks that fall out of date as teams update the behaviour of their services.

This video gives a helpful explanation in a conversational style about Pact and contract testing (thanks Conny!):

Thursday, 17 August 2017

Best of the BSides - A friendly security conference in Manchester


Introduction

Today I attended a great little conference in Manchester called BSides Manchester. This was a free conference about security, run by members of the security community in a similar way to TestBash. In fact the whole event was a bit of a “SecurityBash” in so many respects, which was awesome, and I recognised many familiar topics, concerns and ideas. Whether you're experienced with security or a newbie, I highly recommend this conference. I went along with no expectations, just hoping to learn as much as I could and expose my brain to new ideas; even if I didn't pick it all up immediately, it would give my brain a place to start. Not only did I actually learn quite a bit, but I also noticed a great deal of similarity to testing, so I thought I'd talk about the conference from that angle.

The similarities and parallels to testing

In no particular order:
  • The security community seem to be very keen to promote leaner and more effective ways of improving security, such as getting involved earlier and trying to be involved in discussions about new projects or approaches. This is exactly the same as with testing in general, and both communities are frustrated when they are only asked for their opinion very late in projects. Perhaps this is the biggest area we share in common, and maybe we could share our experiences and lessons with each other. Perhaps we can also be allies on this; for example, where a tester has managed to get involved in a project early, we could be advocates for involving security professionals earlier too, and vice versa.
  • Carolyn Yates gave a great talk on the bowtie method, which is very applicable to testing too and reminds me of how we use visual aids like mind maps to effectively visualise our work. She also made the point that “not all tools need be programs, sometimes they can be visual aids”, which I think as testers we can certainly appreciate too.
  • There was a great talk by Collette Weston about echo chambers - in particular the difficulty for women and other industry minorities to break into the InfoSec industry and community, and what can be done about it. I think we can all agree this is an issue across the software industry as a whole, and while I feel testing is a little bit better in this regard, it's definitely not as good as it could be. This talk also prompted a great discussion about how some companies had started trying to diversify their security personnel (including hiring people with biology degrees), and I know in testing it's well appreciated that we benefit greatly from our diverse backgrounds.
  • In two separate talks by Ian Trump and Charl Van Der Walt there were discussions of what the future might hold and how artificial intelligence and the advance of technology will shape the industry and the work of security professionals. It seems obvious, but I found it quite reassuring to know that it's not just testers who are wondering how these advances will affect their jobs. There was also discussion of the effects of automation and whether people were really considering its impact on the loss of jobs and on how humans interact with and use the automation. This echoes the concerns I've heard many testers raise and reminds me of my old blog post on this subject.
  • Naturally there were several more technical talks focusing on particular types of hacks, attacks and penetration tests. This included discussions of how to defend against these attacks too. The mindsets and techniques that security professionals use to find and report these exploits are exactly the same as those testers use to find and report bugs. I think we have a lot in common on this subject (as, well, it is a form of testing) and I think we could do more to engage with the security community and share our experience - just as much as we can learn a lot from them too! All the things we talk about in testing were present here - such as trying to turn an exploit into the most damaging problem it could cause, to justify and explain to companies why they need to fix it. I believe as testers we can also become more effective at general testing by learning about these exploits - both by helping us raise security issues earlier and by giving us more ideas for other kinds of testing. Perhaps we could share our knowledge, approaches and experience of exploratory testing with them.
  • Another common theme of the conference was that security is not really a technological problem, but a people problem. This is of course not a new revelation; there are many historical quotes and philosophical discussions on the matter, for example “a bad workman blames his tools”, “pick the right tool for the job” and so on. However, as humans we clearly find it difficult to keep these lessons in mind, and our biases make it easy to miss that we are making assumptions about our problems. As testers I feel we should be very aware of this too, as many of the challenges we face have nothing to do with the particular technologies involved. Many software bugs are caused by humans while machines simply do as they are told; the same applies to security exploits.

The differences

Of course, for all our similarities, there are also differences:
  • As part of the discussion about diversity in the industry from Collette's talk, there was also discussion about autism and a general belief that many “black hats” may have struggled at school, dropped out and only picked up hacking because they had no other options. It was pointed out that because many companies require specific levels of education (such as GCSEs), there was no way for these individuals to become security professionals. Why is this different to testing? Well, in the testing industry I don't feel we have such a specific concern with autism (though it will definitely affect the testing industry and community too!); I feel our concerns are more about increasing awareness of testing as a possible career in the first place!
  • I think this one is probably obvious, but the security community is more naturally technically focused and capable; in tandem with the above point, most people seem to join the industry because of their interest in it and in technology. As such, while there is diversity, I get the general impression that the range of backgrounds is a lot narrower than the very broad backgrounds of testers. As a result I feel testers tend to be less technically focused, with more of a balanced spread of soft skills to go with the technical ones. That said, the conference did feature plenty of talks that were more about the soft skills, although probably with a different balance compared to some testing conferences.
  • I feel that the security community is even more aware of justifying their testing and explaining the effects of the exploits they find than the average tester, because of both the ethical dimension of the testing and its very technical nature. Not only must they be very careful not to break laws or damage a company, but they also have to be very good at explaining why they think something is a significant problem and helping the company fix it. I think as testers we have a lot we can learn from this, not because we don't do a good job of it, but because our testing is a lot safer and doesn't always require as much explanation. However, I think this will change over time as we get more involved with DevOps, challenging requirements and testing in production.

You should go too!

All in all, it was a great conference, I took a lot away and enjoyed myself. It was very reassuring to see so many similarities to testing and seeing ways in which we could work together. I hope to go to some other conferences in other areas around software development like Programming, Project Management, UX, Business Analysis, Operations and Systems Administration and continue learning from them. Maybe even to begin talking about testing at their meetups and conferences and see more sharing across our disciplines.

Thursday, 6 July 2017

Some quick bites of performance testing

Introduction

I’ve recently been attempting to write some blog posts about my recent experiences with performance testing, but each time I try they end up very long-winded and feel like a mouthful to read. So this is an attempt to provide some quick, summarised points, mistakes, lessons and general tips that I've learned or relearned over the past two months.

Where to start?

Why performance test? - If you’ve been asked to perform some performance testing, find out why. If you’re thinking you might need to, think about why that is. You need this context in order to make sure the performance testing is useful to the other members of your team.
What do you mean by “performance test”? - The phrase “performance testing” encompasses a lot of different kinds of tests and information that you could find out. Do you want to try load tests, stress tests, spike tests, soak tests? Are you looking to test one component, an integration or a whole system? Be aware that people don't all understand and use these words and phrases in the same way. Someone might ask you to perform “some load tests”, but they don't mean only load tests; they may really mean “can you explore the performance of the product”. They may not ask for stress tests, and they may not be thinking of planning capacity for the future, but that doesn't mean you shouldn't raise it as an area to explore. People may be concerned about one specific component when actually they hadn't thought of load testing an integration point too.
What numbers are we using? - Are there NFRs (non-functional requirements) or functional requirements? Is the application already running in live? If so, what does the current load and performance look like? If it's a new application, what do we expect the load and performance to be? What would we expect it to be next year? In two years? What would be “too much” load? What would a spike realistically look like? Are there peaks and troughs in the load profiles?
You might not know everything right now, so start with the basics - Start with basic functionality smoke tests, move on to small load tests that check the acceptance criteria, then start exploring around that as you learn more about the system.
You can't performance test something if you don't understand how it works - The application might appear to be very fast if you're sending bad data. How do you know you're sending the correct data? What happens when you send bad data? How do you know what good or bad looks like?
Isolated, stable, “like-live” environment - The tests should be run against something that you control, anything could affect performance and you want to control as many variables as possible. You want the environment to be as close to production hardware and configuration as possible so you can rule out issues like the hardware not being powerful enough.
Understand the infrastructure and architecture of your tools and environment - Consider where you are going to run the tests from: what is going to generate the load? Think about where the environment is in respect to that. Try to make sure the load generator is on the same network and isn't accidentally throttled or blocked by proxies or load balancers (unless you're testing them). Make sure your tests aren't affected by the performance of the server generating requests or the bandwidth of the connection.
It’s ok to start with an environment that's not like-live, such as a local environment, to help design your tests - This ties in with understanding how the system works, but you can design the tests against a smaller environment while you wait for a larger environment to be built. This is useful when you're trying to figure out how to get API requests to work at all and decide what to check for in the responses, or to tweak the timings of particular scenarios where you only need to run 1 or 2 tests.
Stuff you might need that might take time to get sorted (so get the ball rolling!):
  • Access to a server to run the load generator from.
  • Access to monitoring of the servers and application logs.
  • Access to any databases.
  • An ability to restart servers and reset databases between test runs.
  • Access to an environment you can start exploring right now.
  • Documentation of how the system works.

Mistakes & Lessons

Completely random test data may not be very useful - If the test data is completely random, it means you are running a different test on every run. You can use weighted distributions instead - this is where you give a probability that a particular result will occur. For example, 90% of the time it will pick one value, and 10% of the time it will pick another. Why is this useful? It gives you control over the randomness and lets you explore different patterns that might affect performance.
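As a quick sketch of that idea, if you're generating test data with a Python script, random.choices lets you weight the outcomes so one value turns up roughly 90% of the time; the account types and the 90/10 split here are just an example.

    import random

    # 90% of generated users get a "standard" account, 10% get a "premium" one.
    account_types = ["standard", "premium"]
    weights = [0.9, 0.1]

    def pick_account_type():
        return random.choices(account_types, weights=weights, k=1)[0]

    # Build data for 1000 virtual users with a controlled mix rather than pure randomness.
    test_users = [{"user": f"user{i}", "account": pick_account_type()} for i in range(1000)]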
If you’re just designing the tests and want to try them out with a small load on a dev environment, don't guess the numbers - I did this and accidentally brought down an environment being used for UAT (user acceptance testing). I had picked a number off the top of my head and assumed it was a safe number; well, it turned out it wasn't. Always discuss what numbers to try with other people and warn people before you run any test, even if you think you're not going to stress the environment. Don't just rely on guesswork.
Not all of the data needs to be automatically generated - Be pragmatic and try to understand which parts of the data matter for performance. There may be some parts of the data that have no effect on performance. It's not always possible to know which parts, but start with pieces of data you would expect to have an effect and gradually include other parts later. Initially I started writing very complicated automation for generating a variety of data before I realised that most of it could be identical, as it wasn't expected to affect performance.
In tandem with the above point, consider how to discover information quickly - You may spend a long time writing a very complicated performance test that covers all kinds of data and scenarios, only to find that the application or the environment hasn't been configured correctly. Simpler, quicker tests can be run earlier to discover whether you are ready to performance test or to surface very obvious issues. Simply sending API requests rapidly by hand through Postman may stress the server, and that can be done in a few minutes!
Consider as many user stories as possible - Anna Baik shared this one in the testing community slack that I would never have thought of - Health Check endpoints. One of the users of the system is your internal monitoring which may regularly hit a health check endpoint. This can affect performance! What other user stories are there that you may not have considered?

General tips

Find a way to monitor your performance tests live as they run! - If you’re using a tool such as Gatling, you can configure real-time monitoring. This is extremely useful as you can quickly tell how the test is going and stop it early if it’s already killed the application. You can also do this through monitoring the application through tools such as AppDynamics or using any tools provided by cloud service providers such as AWS CloudWatch. The more information you can have to observe how the application and its hardware behaves, the better.
Treat performance tests as exploratory tests - Expect to run lots of tests and to keep changing and tweaking the tests. Be prepared to explore different questions and curiosities. Treat your first runs of your tests as opportunities to check your tools and tests actually work how you expect. Try to avoid people investing too much in the result of the first load test - you will learn a lot from it, but it won’t tell you “good to ship” first time.
No seriously, it will be more than just “one test” - Imagine if someone asked you to verify some functionality in just one test? Do you really believe you will not make any mistakes and the product will perform as expected first time? If you have that much faith, why run the performance test? If you’ve decided there is value in performance testing, then surely you’ve accepted that you will take the time to run as many tests as it takes to have some better confidence and reliable information?
Errors might be problems with your tests, not just problems with the application - Just as with automated tests, expect there to be mistakes and errors with your tests. Don’t jump too quickly to conclusions about why errors might be occurring.
Separate generating test data from your test execution - Consider what you are performance testing: does the performance test need to create data before it does something else with it? Or is it unrealistic for the load itself to create that data, meaning the data needs to pre-exist? In my case I needed to create thousands of user accounts, but the application wasn't intended to handle thousands of user accounts being created all at once. So I created a separate set of automation to build the data prior to the performance test run.
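As a rough sketch of that separate data-generation step (the endpoint, payload and numbers are placeholders), a plain script can create the accounts once, before the performance test run, rather than as part of the load itself:

    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical test environment


    def create_accounts(count):
        for i in range(count):
            response = requests.post(
                f"{BASE_URL}/users",
                json={"username": f"perf_user_{i}", "password": "not-a-real-password"},
            )
            response.raise_for_status()  # stop early if the environment rejects the data


    if __name__ == "__main__":
        # Run once before the load test so the accounts already exist.
        create_accounts(5000)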
Gradually introduce variables such as different users or different loads - For example, if you have two different types of user - an admin and a customer - try the customer load test on its own and the admin load test on its own before running them together. If there is a significant problem with one or the other, you can more easily identify it. In other words, try to limit how many tests you run at once and how many variables you play with at once.
When you run a stress test, measure throughput - This lets you measure how much data you are sending and helps you figure out whether your stress test is reaching the limits of your machine, the network or the application you're testing.

Test ideas


  • What happens when the load spikes? Does the application ever recover after the spike? How long does it take to recover?
  • What happens if we restart the servers in the middle of a load test?
  • How efficiently does the application use its hardware? If it’s in a cloud service, would it be expensive to scale?
  • What happens when we run a soak test (a load test that runs for a long time with sustained load, e.g. 12 hours or 2 days)?
  • What happens when we run with a tiny amount of load?
  • What happens when we send bad requests?
  • What do we believe to be the riskiest areas and how can we assess them?
  • What do we believe to be the safest areas and how can we assess them?