Introduction
I always feel like this is a weird thing to admit (it seems a bit morbid), but I love the Canadian documentary series ‘Air Crash Investigation’ (also known as ‘Mayday’). I think I like it because it centres on the process of investigation, depicting crash investigators who leave no stone unturned and make no assumptions about causation. There are a lot of common themes with software testing, not to mention the fascinating psychological analysis of humans under pressure. I recently watched an episode that highlighted such an analysis, particularly with regard to the psychology of humans interacting with machines.
Learning from tragedy
The episode in question focused on the crash of Asiana Airlines Flight 214, which resulted in the loss of three lives and injuries to 187 people. The USA’s NTSB (National Transportation Safety Board) investigation determined that the crash was caused by the pilots’ poor mental model of the automation that helps them fly the plane. The documentary used a great term for this: ‘automation confusion’. In short, the pilots had been trained to rely so heavily on the automation that, when presented with an unexpected situation, they couldn’t react in time and didn’t notice the signs around them that something was wrong. The automation couldn’t help them either, because the actions they took in this situation were unexpected and unique. The NTSB’s report raised concerns that the automated systems on planes are becoming so complex that it isn’t possible for pilots to fully understand and predict the behaviour of the aircraft in all situations. The documentary ended on a note of how the aviation industry was discussing ‘going back to basics’ and training pilots to fly manually, so that over-reliance on automation could be avoided in future.
What does this have to do with testing?
I found this fascinating because I recognise a lot of parallels with current discussions in the testing community about automation, and the concerns about over-reliance on automated tests. The concern that humans find it difficult to keep up with and understand complex automated systems also reminds me a lot of the relationship between programmers and their code. Is it possible for any programmer to understand 100% of their code, the libraries they use, the languages they use? Do they understand every possible permutation, no matter the situation or user input? Will the automation always save the human when things go wrong? How does the automation know what ‘wrong’ is without being told?
I think we deal with ‘automation confusion’ all the time in software development, and as testers it serves us well to be aware of this problem.
A concern for the future?
As we appear to be moving towards ever greater automation in software development, I think we should be looking out for this problem. With DevOps and the ideas of continuous delivery and continuous deployment becoming ever more popular, we are building more and more automated systems to help us in our day-to-day work. But for each automated system we build to make things faster and easier, we also hide complexity from ourselves: to automate a system or a process, we potentially base that automation on a simplistic mental model of the thing it replaces (the sketch after the list below makes this concrete).
There are also two sides to this:
- The creators of the automation crafting a potentially simplistic system to replace a more complex manual system.
- The users of the automation having a simplistic mental model of how the automated system works and how smart it is.
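To make the first of those sides concrete, here is a minimal sketch of what a simplistic mental model can look like once it is encoded in automation. Everything in it is hypothetical - the shell scripts, the health endpoint and the deploy function are assumptions for illustration, not a real system - but each comment marks complexity that the automation quietly hides from its users:

```python
# A purely illustrative sketch (all names and endpoints are hypothetical):
# a deployment script that encodes a happy-path mental model of a release.
# Each comment marks complexity the automation quietly hides from its users.

import subprocess
import urllib.request

SERVICE_URL = "http://localhost:8080/health"  # assumed health endpoint


def deploy(version: str) -> None:
    # Assumes the build always succeeds - no handling of flaky tests,
    # partial artifacts, or dependency changes.
    subprocess.run(["./build.sh", version], check=True)

    # Assumes the old version can be replaced instantly - no draining of
    # in-flight requests, no database migrations, no rollback plan.
    subprocess.run(["./restart.sh", version], check=True)

    # A single health check stands in for "the release worked" - a
    # simplistic model of what 'working' means in production.
    with urllib.request.urlopen(SERVICE_URL, timeout=5) as response:
        if response.status != 200:
            raise RuntimeError(f"Deploy of {version} failed health check")


if __name__ == "__main__":
    deploy("1.2.3")
```

None of those assumptions are visible to someone who simply runs the script and watches it pass, which is exactly how the second side of the problem - an over-trusting user - develops.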
How testing can help
My feeling on this is that it justifies raising concerns about the eager adoption of automation in all kinds of areas. I think testing can help discover these poor mental models and improve not only the quality of these automated systems, but also the way people use them. I think we can do this by:
- Challenging the decision to automate a system - why are we doing it? Do we understand the effects automating it will have? Are we automating the right things?
- Testing these automated systems with a focus on usage - could we create user stories based on the scenario of users over-relying on the automation to handle unexpected situations? (A sketch of such a test follows this list.)
- We could therefore focus on understanding the psychology of end users and how their behaviour changes when a manual task is replaced with an automated one. Perhaps in the same way that people have an unreasonable belief in the existence of ‘perfect software’, they also consistently believe automated systems are smarter than they really are.
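As a hedged illustration of that second bullet, here is what a test focused on over-reliance might look like against the hypothetical deploy sketch above. The module name and structure are assumptions; the point is that the test exercises the failure path a trusting user assumes the automation handles:

```python
# A hypothetical test probing the deploy sketch's failure path:
# does the automation fail loudly when its happy-path assumptions break,
# or does it let an over-trusting user believe everything worked?

import unittest
from unittest import mock

import deploy_script  # hypothetical module containing the deploy() sketch


class DeployFailurePathTest(unittest.TestCase):
    def test_failed_health_check_raises(self):
        # Simulate a build and restart that appear to succeed...
        fake_response = mock.MagicMock(status=500)
        fake_response.__enter__.return_value = fake_response

        with mock.patch("deploy_script.subprocess.run"), \
             mock.patch("deploy_script.urllib.request.urlopen",
                        return_value=fake_response):
            # ...but a service that is not actually healthy afterwards.
            # The automation should raise here rather than report success.
            with self.assertRaises(RuntimeError):
                deploy_script.deploy("1.2.3")


if __name__ == "__main__":
    unittest.main()
```

A test like this doesn’t prove the automation is smart; it documents exactly where its model of ‘working’ ends, which is precisely the information an over-reliant user is missing.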