
    You may be familiar with the term “think outside the box”. What does it mean to you when you hear it? Many people may not realize the importance of this skill. It’s especially useful to us as testers, and it’s something we can learn, practice and improve.

    Some of the biggest discoveries and inventions throughout history are the result of someone thinking outside the box. Ada Lovelace came up with what many people consider to be the first computer program, back in 1843. Dr. Sue Black had the idea to use social media to save Bletchley Park in the early 2000s (I highly recommend her book, Saving Bletchley Park, which tells the amazing story). Stephanie Kwolek realized the utility of a solution that was usually thrown away and invented Kevlar.

    We all do this kind of thinking at times, that’s for sure. But if it produces such great results, why don’t we do it all the time?

    Edward de Bono came up with the term “lateral thinking” back in 1967. He defines it as solving problems through an indirect and creative approach, using reasoning that is not immediately obvious and involving ideas that may not be obtainable by using only traditional step-by-step logic. We software professionals tend to use “vertical thinking”, which we see as logical problem solving – a conscious, step-by-step approach, relying on data and facts.

    In his landmark book Thinking, Fast and Slow, the late Daniel Kahneman explains that our brains work in two different systems: automatic mode and complex mode. For practical reasons, we spend a lot of time in automatic mode. What’s the capital of England? How much is two plus two? We have these facts to hand and the answers don’t take any brain power. But if I asked you, “What is 17 x 24?”, you’d probably reach for a piece of paper or your phone. Your brain needs to move into complex mode to answer that type of question.

    Our brains are wired for automatic mode. Back when saber-toothed tigers roamed the earth, humans didn’t spend a lot of time analyzing how they should respond when they saw one – they ran. Today, we still learn from our experiences. When we first learn a skill, we may have to focus intensely. With practice, the skill may become automatic. Think about how you drove a car in your first driving lesson, and how you drive now.

    Our brains take shortcuts based on experience so that they can get into that comfortable automatic thinking mode. Unfortunately, these shortcuts can lead to a distortion of reality.

    We all have cognitive biases – unconscious ones that we don’t think about because our brains are in automatic mode. There’s nothing inherently wrong with automatic mode – we couldn’t get through the day without it. And we should recognize that these biases can get in the way of software quality. Let’s look at a few common ones.

    Conformity bias

    Conformity bias is what happens when we adjust our behavior or thinking to match that of other people or a group standard. I guarantee you will enjoy this video that illustrates conformity bias. Solomon E. Asch conducted studies showing that people tend to go along with others’ answers, even when those answers are clearly wrong. So it’s important that we QA folks – Question Askers, that is – help our teams rely on facts and data.

    Confirmation bias

    We humans have an ingrained habit of seeing what we expect to see. When I give a group of people a sequence of numbers – 2, 4, 6 – and ask them what rule produces this series, I hear an automatic-thinking answer right away: “Even numbers”. Rarely does anyone ask me questions to discover the actual rule (numbers that are two apart, going up or down), or ask me for more examples so they can figure it out. This is another bias that shows how important our investigative skills are.
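
    To make that exercise concrete, here’s a minimal sketch in Python of how the rule-discovery game plays out; the function names and example triples are my own, invented purely for illustration. It shows why probes that only confirm the “even numbers” guess can never distinguish it from the actual rule, while probes designed to break our own hypothesis reveal it right away.

```python
def hidden_rule(triple):
    """The rule the facilitator actually has in mind: each number is two
    apart from the previous one, going up or down."""
    return all(abs(b - a) == 2 for a, b in zip(triple, triple[1:]))

def even_numbers_hypothesis(triple):
    """The automatic-mode guess most people blurt out: 'even numbers'."""
    return all(n % 2 == 0 for n in triple)

# Confirmation-bias probing: only offer triples that already fit your guess.
confirming_probes = [(2, 4, 6), (10, 12, 14), (20, 22, 24)]

# Falsification probing: deliberately offer triples your guess would reject,
# or triples that fit your guess but might break the hidden rule.
disconfirming_probes = [(3, 5, 7), (7, 5, 3), (2, 4, 8)]

for probe in confirming_probes + disconfirming_probes:
    print(probe,
          "| hidden rule:", hidden_rule(probe),
          "| 'even numbers' guess:", even_numbers_hypothesis(probe))

# The confirming probes satisfy both checks, so they teach us nothing.
# (3, 5, 7) and (7, 5, 3) satisfy the hidden rule but not the guess, and
# (2, 4, 8) is all even yet fails the hidden rule; only probes chosen to
# challenge our own theory reveal the real rule.
```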

    I’ve found this to be an especially dangerous bias that can get in the way of things like exploratory testing. When I’m testing a minor change to a user interface, I already know what the change is and how it should look and behave. Many times I’ve tested that change while failing to notice that something else on the same screen got totally broken. Welcome to the human race, Lisa!

    Anchoring bias

    We tend to rely too heavily on the first piece of information we see. This is why salespeople start negotiations with a high price – the buyer will settle for a higher price than they would have if the negotiation had started with a fair one. If you see a t-shirt you want priced at $100, and then see the same shirt somewhere else for $20, you’ll think it’s super cheap – but it might really be a $10 t-shirt. Whatever reference point we start with influences our adjustments, even as we get more information.

    This bias gets us in trouble when we’re planning our work. Consider a Scrum team being asked to estimate how long a new feature will take to deliver. They have an estimating meeting. The first story comes up, and the senior developer says it’s a one- or two-point story. The other developers join in and agree.

    The lone tester in the room has worked on a similar feature before at a previous job and sees a lot of potential risks and unknowns. She says it’s at least an eight-point story. But everyone heard the low numbers first, and it’s hard to influence the group. This is why we use practices like planning poker, where everyone reveals their estimate at the same time. Unfortunately, anchoring bias combined with conformity bias can still lead the team to an unreasonably low estimate.

    Availability bias

    We’ve talked about automatic thinking and how we humans like to make mental shortcuts. We tend to base our beliefs on readily available facts. As a result, we may overestimate the likelihood of a shark attack on a Florida beach, thanks to sensational news reports. Once, I bought a yellow car, and I suddenly saw yellow cars everywhere, even though that’s not a popular color.

    This bias isn’t well understood, as it’s difficult to study. Yet it’s clear that it can impact our product’s quality. For example, we remember some risk scenarios better than others, but just because they’re easy to recall doesn’t mean they’re the most important ones. It’s important to use brainstorming techniques to flush out as many hidden assumptions and inconsistencies as we can when we start planning a new product or feature.

    So many biases affect software quality

    We’ve just looked at the tip of the cognitive bias iceberg. Some other examples of biases that can bite us:

    • Loss aversion: We experience losses twice as intensely as gains
    • Sunk cost fallacy: We believe past investments justify further expenditures
    • IKEA effect: We value things we build more (have you deleted any flaky automated tests lately?)
    • Functional fixedness: We use an object only in the “traditional” way

    Getting outside the box

    So how do we get outside the box? You’ll hear people say that we just need to be aware of our biases. Unfortunately, since they are unconscious, knowing that we have them doesn’t prevent us from succumbing to them.

    In my experience, a diverse group of people collaborating has the best chance of counteracting unconscious biases. Psychological safety is a prerequisite to this. Each person must feel safe to contribute, to ask questions, to point out problems. I’ve found that when I pair up with a developer, designer, or other non-tester for a testing activity, we notice a lot more together than I would by myself. Testing as well as programming with an ensemble of different specialists is especially powerful. Not only do we have more eyes on the problems, but we probably have someone present who can answer each of our questions.

    I’ve also found that it’s essential to use some visual means of communication along with talking together. Whether it’s a physical or virtual whiteboard, a mind map, or sticky notes – we communicate better when we have visuals. Our brains work better when we use our hands and write as well as speak to each other. Visual models such as the test automation pyramid, the agile testing quadrants, and the holistic testing model help us think laterally. Using personas helps us consider more perspectives.

    And keep asking good questions!

    Want to learn more about Cognitive Bias and Testing? Watch the webinar recording now!


    About the author
    Lisa Crispin

    An independent consultant, author and speaker based in Vermont, USA. Together with Janet Gregory, she co-authored Holistic Testing: Weave Quality Into Your Product; Agile Testing Condensed: A Brief Introduction; More Agile Testing: Learning Journeys for the Whole Team; and Agile Testing: A Practical Guide for Testers and Agile Teams, as well as the LiveLessons “Agile Testing Essentials” video course. She and Janet co-founded a training company offering two live courses worldwide: “Holistic Testing: Strategies for agile teams” and “Holistic Testing for Continuous Delivery”.
    Lisa uses her long experience working as a tester on high-performing agile teams to help organizations assess and improve their quality and testing practices, and succeed with continuous delivery. She’s active in the DORA community of practice. Please visit https://lisacrispin.com, https://agiletester.ca, https://agiletestingfellow.com and https://linkedin.com/in/lisacrispin/ for details and contact information.
