Wednesday, 22 March 2017

The other side of the interview table

Introduction

Over the past year I’ve been in the privileged position of sitting on the other side of the interview table for several interviews. I’d like to share my experience and get some ideas written down.

Reading CVs

Before an interview, you usually need to review CVs and pick out the ones that you feel warrant pursuing. Why do we pick out CVs? Because interviewing is a costly process: it takes time and focus away from our daily work, particularly in my case at a mid-sized company where we don’t tend to interview on a regular basis. We simply don’t have the time to interview everyone whose CV we receive, so we are forced to filter them down.
My general approach for this was the following:
  • I read through the CV thoroughly - everything on the CV is a small clue about the person.
  • I looked first for some sign of personality in the CV, something that told me why this person was looking for work and what motivates them.
  • I noted any skill that I thought may be relevant, not just programming skills. For example, skills with Business Analysis tools or experience on a Support team. Anything that could be valuable and bring something different to my test team.
  • Depending on the role we were looking for, I would review the years of experience.
  • I would make a note of any certifications. I personally don’t place a great deal of value on ISTQB certifications, but I considered them the same as any other training a candidate might mention.
  • I always looked for some mention that the person attends meetups, conferences or workshops, or is otherwise actively engaged with the testing community. While this doesn’t rule people out (it’s pretty rare that I see it on CVs), when people do mention it, it makes them stand out.
  • I would carefully analyse the wording chosen, especially when talking about skills or previous employment. While I wouldn’t necessarily reject a CV because of a typo, it’s pretty embarrassing when people have them in sentences such as “I have a keen eye for qaulity”.

My experience so far hasn’t included the initial CV collation and filtering; however, I have done this once or twice with sets of 10 or 12 CVs. Perhaps if I were filtering a stack of 100 CVs, I wouldn’t be as thorough in reading them and might be more arbitrary about the criteria I reject them on.

My general experience with this part of interviewing is that there is not much right and wrong here. Only you can decide what a “good” CV is and what matches your criteria for the role. I have my own personal preference for people who add a little personality to their CV, with opinions and motivations, but other people may value lists of skills or abilities more highly.

I will say though that many, many people seem to have very, very similar CVs, which makes it hard to pick a few to take forward to interview. This is why you may end up using pretty arbitrary rules for filtering and it also biases you towards those CVs that look a bit different. As an interviewee you can use this to your advantage, but as an interviewer I feel you need to be careful not to let this bias lead you too much. Sometimes a dull CV hides a gem of a candidate!

Preparing for the interview

Who is the person? What do I want to find out?
If it’s been a while, or if I’ve been busy with other work between reading the CV the first time and the date of the interview, I start by refreshing my memory of the CV. I try to think about what I liked about this person from the CV that I want to see more of, and to come up with questions that will give them the opportunity to impress in those areas. Equally, I also look for areas that I dislike and try to think of questions that explore these. Some examples I’ve had in the past:
  • A tester mentioned working closely with developers and managing the relationships with them - I’ve asked them to expand on that, what’s worked well, what hasn’t etc.
  • Some CVs have simply listed skills without describing the candidate’s level of experience or confidence with them, or how they’ve used them. So I’ve targeted questions at those skills to try to explore where the candidate really is with them. “I know Java” would usually prompt questions from me about how they’ve used it and how confident they are with it, even specific technical questions about it.
  • Some CVs have also described previous testing experience mainly in terms of “producing Test Cases and Test Plans according to the specifications”, which prompts me to probe quite a bit about the candidate’s feelings on exploratory testing and how they would handle an environment without many written test cases.
Because everyone’s CV is different, I end up with a different set of questions each time. Currently I feel this is a little inadequate, because I end up with inconsistent or biased opinions on the candidates when I’ve asked better questions of some than of others.
Interview format
Something that I’ve not had much chance to experiment with yet is scripting or planning the interview format. But there are several variables that could change and that I could experiment with:
  • How many people are going to be involved in the interview?
  • How long will the interview be?
  • Will we include a technical test?
  • How many interviews will we conduct with each candidate (e.g. 2nd stage or 3rd stage interviews)?
  • Do we ask different questions or the same questions to each candidate? Do we stick to a script?
  • Do we ask the candidate to perform homework or a task before the interview?
  • Do we ask the candidate to conduct a task (such as a presentation) during the interview?
I’ve been in various interviews with a mix of the above and I’m undecided on what does and doesn’t work. However, it’s worth considering and planning these things before the candidate walks through the door! I also feel I can improve how I learn from each interview and how I compare them. I would like to spend more time in future making sure the experience for each candidate is more consistent, and to keep better notes on them. In other words, I feel I need to plan better how I am going to decide which candidate to choose, rather than leaving it to gut feeling and all of its biases.

The interview itself

Think about your performance
Whether you are the interviewer or the interviewee, my number one rule is to think of interviews as a two-way conversation. Both parties are interviewing each other to figure out if they like each other. As the interviewer, I feel it’s important to respect this even if the candidate doesn’t, and to give them plenty of opportunities to ask questions. Not only that, but I try to keep discussions as honest, informal and friendly as possible. The more it can feel like chatting casually in a cafe or a bar, the better, because both interviewer and interviewee are going to think of better questions and answers.

With this in mind, I try to be careful not to assault the candidate with lots of questions one after another. It’s not easy to describe when it makes sense to hold off and give the candidate space; it depends on several factors:
  • The personalities of everyone in the interview.
  • The mental state of the candidate.
  • How difficult the questions being asked are.
  • How the conversation has been going (i.e. sometimes the flow is so natural that we may be chatting fairly casually and rattling through lots of questions and that’s ok).
  • How much time we have.

I’ve noticed that people very rarely ask questions after the interview, despite being told they can. While I still encourage this, I’ve taken it to mean it’s very important that the interviewee gets a chance to ask as much as they can during the interview. If possible, I try to see if I can learn from their questions rather than just from the answers they have for mine.

Multiple interviewers
All of the interviews I’ve conducted have been with other interviewers in the room, asking questions. The worst thing that can happen is tripping over each other, interrupting or awkwardly looking at each other to decide who asks the next question. This is why preparing the interview format and discussing a script or questions beforehand is important to me. You get so little time with candidates that you have to spend every minute, every second, very carefully. For this reason I absolutely hate it when an interviewer pursues a line of questioning which has already been covered, or which I don’t consider very useful.

What would a script look like? Would it be a set of strict questions, one after another, that we would follow to the letter? No, of course not; as I said earlier, it’s important to keep the interview casual and informal, letting it flow with the candidate and adapting all of the time. I would like to try scripts in future where we plan out what kinds of questions and discussions we would like to have and assign an interviewer to “lead” each one. So one person would conduct the introductions and outro and facilitate the interview, another would ask deeper questions on a topic, and so on. I would still allow each interviewer to interrupt or go off script, but the key is to try to make sure we get the most out of the interview while keeping it natural.

It’s all about opportunities, not tests
If you are thinking of including some kind of task, examination or test of the candidate to assess their skills, bear this in mind - do not look for failure. What do I mean by this? Interviews are very compromised things: there is a lot of pressure involved, and people don’t perform anywhere near the way they do when they work normally. An interview is rarely an accurate representation of what the person is like to work with. With this in mind, I try to view questions and tests as opportunities for the candidate to impress me. If the candidate misses or messes up these opportunities, I try to keep in mind that this may be due to the unusual pressure. If I view the interview as a series of opportunities to impress, then I avoid placing too much emphasis on particular parts of it and look for more well-rounded candidates. It also means people have a chance to recover: they may mess up the start of an interview, but relax and impress later. Or they may impress in their preparation but fluff their performance because they are not comfortable with interviews. I’m also open to my own questions being terrible and the candidate impressing me in a way that I didn’t expect, on something I didn’t ask them about.

Life is continuous learning and lessons
Even if you don’t hire them, make sure you always give feedback to the candidate, and if there are areas they didn’t know or understand, take the opportunity to teach them if possible. You may not be hiring them, but it can be impossible for candidates to improve themselves if they never receive feedback. I used to find it very frustrating when no-one ever told me why I didn’t get a job; even if I had done nothing wrong, it would have been helpful for my confidence to know the reasons.

Interviewing testers

So what about testers? What do we talk about and discuss, what is important for testing? My first reference for this is Dan Ashby’s excellent interview mindmap found here:
‘Nuff said! But some additional thoughts for me:
  • Discussing definitions of “testing” and why people like testing is important, because everyone has different ideas and understanding. This is as much about making the candidate feel comfortable with what they are applying for as it is about establishing that they are the right fit for us.
  • Discussing “agile” or “devops” is also an opportunity to make clear how we work. I’m not looking for people to rattle off dictionary definitions of these words; I want to understand what they think each one is and how they adapt to ideas that affect testing. It’s also a chance for me to explain what I believe they are and how the company has interpreted or implemented those ideas. The discussion and understanding is the important part, not testing the candidate on definitions.
  • In terms of technical tests or exams, I’m very sceptical. While there may be certain contexts where you are looking to hire testers with programming experience, I personally don’t view programming as a key testing skill. However, if I could design a technical test that gives a good picture of how capable a candidate is of learning technical subjects, I would try it! I value testers with the right attitude, the right approach and the ability to learn a great deal; already knowing programming is useful but not critical. The critical ability is the capacity to learn. I’ve worked with and hired great testers who knew little about programming and contributed as much value as those who knew programming, if not more.
  • I’ve experimented with tests of candidates’ testing abilities and seen different ideas, but again I’m unconvinced about how much you can judge from them. You can try to assess them on bugs they find in an application, or ask them to exercise their lateral thinking with a task such as mind-mapping a pencil. I’ve seen some interesting results from these tasks, but I’m concerned that they bias us towards candidates who are great on the spot. I suspect there are great testers who don’t perform very well in these situations but are excellent given more time and less pressure.

Summary

  • It’s rare that we are trained how to interview, so it’s worth spending time planning how you are going to learn and improve, because interviewing is an area with particular skills and considerations like any other.
  • I’ve got several areas I’d like to focus on improving or learning more about in future, particularly around planning and facilitating interviews.
  • It’s easy to feel that interviews are about asking lots of questions and testing the interviewee, based on your own experience as an interviewee. But the best interviews are the ones you make into a more natural and informal chat.
  • Opportunities to impress, not testing for failure!
  • Make sure to always take the time to give feedback, especially if you don’t hire the candidate. Tell them why you are not hiring them, so they can improve.

Monday, 19 December 2016

The temptation to split dev and test work in sprints - don’t do it!

Introduction

About 3 and a half years ago, I was new to sprints and scrum. Coming from the videogames industry, I was used to a process where I would test code that came from developers and return bug reports. I had heard the words “sprint” and “scrum” before but I had no idea how testing fit into them, so I joined a company where I could figure that out. This is what I figured out.

What’s a sprint?

If you’re not familiar with scrum or agile, a sprint is effectively a short-term project plan in which a team decides the work that they can complete within a 1-, 2- or 3-week window. Work is “committed” (promised) to be completed in that time frame and the team tracks their progress. After each sprint, reviews and retrospectives are held to help the team find what works well and what helps them complete more work to a higher standard while still meeting their commitment. The main focus of sprint work is completing the work and avoiding leaving work unfinished.

Where does testing fit?

Normally teams set up a task board with columns titled something like “To Do, In Progress, Done”. Sometimes people add more columns or use different names, but the usage is similar. Anyone from the same background as me would be tempted to suggest that an additional column be added between “In Progress” and “Done”, the logic being “when you’ve finished your development work, I’ll test it”. In my head, this was an attempt to apply what I already knew in this new environment. We ended up with columns similar to “To Do, Build/Dev, Testing, Done”.

Bad idea

At first I thought things were working OK. I feel one of my strengths is picking things up fast, so I got stuck in and kept up with the five developers in my team. Most of the time I was fortunate that the work arrived sequentially or wasn’t particularly time-consuming to test. This didn’t last long though, and eventually we started to fail to complete work on time. This happened either because I was testing it all at the end of a sprint, or because the pieces of work were highly dependent on each other and the problems with integration weren’t found until very late.
This meant we had to carry some work over into future sprints. Now I no longer had plenty of time at the start of a sprint to write my test plans; I was busy testing last sprint’s work and then this sprint’s work! I no longer had time to spend learning more automation or exploring areas newer to me, like performance testing. All of my time was consumed trying to test all of this work, and I couldn’t do it. What went wrong?

A change in approach

I would love to say I quickly realised the problem and fixed it, but it took me a long time to see the issue. I put part of this down to not knowing any better, and part to working with developers who didn’t know any better either. Either way, a while later I realised that the problem was that I was trying to test everything, and the developers had started to rely on me for that. I’ve since realised that there is a fair bit of psychology involved in software development, and this was one of my biggest lessons.
We eventually decided to stop splitting up work between roles, mainly because we found that developers tended to treat work that was in “test” as “done” from their point of view, freeing themselves up to take on even more development work. This created a bottleneck: as the only tester, I was testing work from yesterday while they were busy with today’s. I came to the realisation that there is little benefit to splitting the work up in this way, at least not through process. We should be working together to complete the work, not focusing on our own personal queues. So I shifted from testing after development was thought complete to trying to test earlier, even trying to “test” code as developers were writing it, pairing with them to analyse the solution.

Understanding what a “role” means

I think for me this lesson has been more about realising that playing the role of “tester” does not necessarily mean I carry out all of the “testing” in a team. It does mean I am responsible for guiding, improving and facilitating good testing, but I do not necessarily have to complete it all personally. An additional part of this lesson is that I cannot rely on other people to define my role for me - as a relative newbie to testing I relied on the developers to help me figure out where I should be. While I’ve learnt from it, I also know that I may need to explain this learning again in future because it is not immediately obvious.

So where does testing really fit?

Everywhere, in parallel and in collaboration with development. Testing is a supportive function of the team’s work; it no longer makes sense to me to define it as another column of things to do. It has no set time frame in which it is best performed, and it doesn’t always have a great deal of repetition in execution. It is extremely contextual.
That’s not to say you shouldn’t test alone or separately from ongoing teamwork. You absolutely must test alone as well, to allow you to focus and to process information. It’s just that you must choose to do this - where it is appropriate.

Definition of “Done”

One of my recent approaches was to set the definition of “Done” as:

“Code deployed live, with appropriate monitoring or logging and feedback has been gathered from the end user”

Others may have different definitions, but I liked to focus the team on getting our work into a position where we could learn from it and take actions in the following sprint. For me, it meant we could actually pivot based on end-user feedback or our monitoring, and measure our success, instead of finishing a sprint with no idea whether our work was useful and planning a new sprint not knowing whether we would need to change it.

Summary

  • Avoid using columns like “Dev” and “Test” on sprint boards. It seems to lead to a separation of work where work is considered “Done” before it is tested.
  • Instead, try to test in parallel as much as possible (but not all of the time), and try to test earlier and lower down the technology stack, such as testing API endpoints before the GUI that uses them is completed (a small sketch of what that might look like follows this list).
  • Encourage developers to keep testing, and instead carefully pick when and where to personally carry out the bulk of the testing. Try to coach the team on becoming better at testing, share your skills and knowledge, and let them help you.
  • Altering the definition of “Done” seemed to help me; it was useful to focus the team on an objective that meant we really didn’t have to keep returning to work we had considered completed. In other words, make sure “done” means “done”.
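
To make the “test lower down the stack” point a little more concrete, here is a minimal sketch of what checking an API endpoint directly, before the GUI that uses it exists, might look like. It assumes Python with the requests and pytest libraries, and the URL, payload and response fields are entirely hypothetical; it illustrates the idea rather than prescribing any particular tool.

```python
# Hypothetical sketch: exercising an API endpoint directly, before the GUI
# that will eventually call it has been built. The URL and fields are made up.
import requests

BASE_URL = "http://localhost:8080/api"  # assumed local test environment


def test_create_customer_returns_created_record():
    payload = {"name": "Ada Lovelace", "email": "ada@example.com"}
    response = requests.post(f"{BASE_URL}/customers", json=payload, timeout=5)

    # The service's contract can be checked long before any screen displays this data.
    assert response.status_code == 201
    body = response.json()
    assert body["name"] == payload["name"]
    assert body["email"] == payload["email"]
```

Run with pytest, a check like this gives the team feedback on the service while the GUI work is still in progress.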

Sunday, 16 August 2015

The role of a tester in backlog grooming and planning

Backlog grooming

As an on-going activity, the product owner and the scrum team should be actively reviewing their backlog and ensuring the work is appropriately prioritised and contains clear information. This allows each piece of work to be more easily planned into a sprint as it can be more accurately estimated.

As a tester I actively try to be involved in this process as it is my first opportunity to assess the requirements and the information provided. It also allows me a chance to gather information required for testing, which allows me to provide more reliable estimates.

The objective is to be in a position where, for any piece of work presented in planning, you know exactly what the work requires. You should then have a good idea of what you will test and therefore be able to provide reliable estimates. If this is not the case, the work cannot be effectively planned into the sprint.

Planning and estimation

At the start of each sprint, you have a sprint planning meeting. In this meeting the team collectively decides what work they can commit to completing by the end of the sprint. This may or may not involve an estimation process. It is your last opportunity as a tester to ensure that you have all of the information you need before work is started. If the appropriate backlog grooming has been done, this should be a relatively straightforward process; however, inevitably there will be pieces of work that need clarification or that have missed something.

Some typical thoughts I have in a planning meeting for a piece of work are as follows:
  • Do I understand the requirements?
  • Does everyone else understand the requirements? (and does their understanding match mine?)
  • What requirements have not been written down?
  • Are there any external factors such as legal requirements or third parties?
  • How can I test this work? Is it even testable?
  • Do I need any additional tools to test this work?
  • Do I have any dependencies in order to test this work, such as needing live data or having to work directly with a customer or third party?
  • Does this work conflict with anything else in this sprint?
  • Does this work conflict with work being done by other teams?
  • Are there any repetitive checks that I could automate for this work? (A small sketch of such a check appears below, before the summary.)
  • Do I need to consider other forms of testing such as security or performance testing?
  • Is the balance of development workload to testing workload viable?

Hopefully I will have asked most of these questions during backlog grooming, and I wouldn’t always ask all of them; it very much depends on the context of the work. But hopefully this demonstrates that you can easily think of a lot of questions, and that it is important to ask them before and during planning.

Once you have the answers to all of these questions, you should have a good idea of what you are going to test. From this you can provide a more reliable estimate of how much testing you would like to do. Not only should you have a good idea of how long it would take, but also you should be better equipped to analyse risk.
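
As an illustration of the “repetitive checks that I could automate” question above, here is a small sketch of the kind of check that is tedious to repeat by hand but cheap to automate. It assumes Python with pytest, and validate_discount_code (and the module it is imported from) is a made-up function standing in for whatever the piece of work actually delivers.

```python
# Hypothetical sketch: a repetitive input-validation check captured as an
# automated, parametrised test. validate_discount_code is a made-up function.
import pytest

from shop.pricing import validate_discount_code  # assumed module under test


@pytest.mark.parametrize("code, expected", [
    ("SAVE10", True),       # a known valid code is accepted
    ("save10", True),       # case should not matter
    ("EXPIRED99", False),   # expired codes are rejected
    ("", False),            # empty input is rejected
])
def test_validate_discount_code(code, expected):
    assert validate_discount_code(code) is expected
```

Spotting candidates like this during grooming or planning makes it easier to estimate the testing work and to decide what is worth automating within the sprint.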

Summary

  • Planning a sprint is easier with clearly defined work and when the team has prepared for the planning meeting.
  • To achieve this, backlog grooming should be used to ensure tickets are prioritised appropriately and contain enough information.
  • Backlog grooming should also be used to prepare for planning, such as considering whether you need testing tools or environments.