By James A. Whittaker
And now for the last plague in this series. I hope you enjoyed them (the posts ...not the plagues!)
Imagine playing a video game blindfolded, or even just with the heads-up display turned off. You cannot monitor your character's health; your targeting system is gone. There is no look-ahead radar and no advance warning of any kind. In gaming, the inability to access information about the campaign world is debilitating and a good way to get your character killed.
There are many aspects of testing software that fall into this invisible spectrum. Software itself is invisible. We see it only through the UI, while much of what happens does so under the covers and out of our line of sight. It's not like building a car, where you can clearly see missing pieces and many engineers can look at the car and get exactly the same view of it. There is no arguing over whether the car has a bumper installed; it is in plain sight for everyone involved to see. Not so with software, which exists as magnetic fluctuations on storage media. That's not a helpful visual.
Software testing is much like playing a game while blindfolded. We can't see bugs; we can't see coverage; we can't see code changes. This information, so valuable to us as testers, is hidden in useless static reports. If someone outfitted us with an actual blindfold, we might not even notice.
This blindness concerning our product and its behavior creates some very real problems for the software tester. Which parts of the software have enjoyed the most unit testing? Which parts have changed from one build to the next? Which parts have existing bugs posted against them? What part of the software does a specific test case cover? Which parts have been tested thoroughly and which parts have received no attention whatsoever?
Our folk remedy for the blindness plague has always been to measure code coverage, API/method coverage or UI coverage. We pick the things we can see the best and measure them, but do they really tell us anything? We’ve been doing it this way for years not because it is insightful, but simply because it is all our blindness will allow us to do. We’re interacting with our application under test a great deal, but we must rely on other, less concrete senses for any feedback about our effort.
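For what it's worth, gathering the raw numbers is the easy part. Here is a minimal sketch of collecting line coverage, assuming a Python code base with the third-party coverage.py package installed; the "myapp" package and the "tests" directory are hypothetical:

```
# Minimal sketch: measure line coverage for a test run with coverage.py.
# Assumes "pip install coverage"; the "myapp" package and "tests" directory
# are hypothetical.
import unittest

import coverage

cov = coverage.Coverage(source=["myapp"])   # restrict measurement to our own code
cov.start()

# Run the (hypothetical) test suite while measurement is active.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner(verbosity=1).run(suite)

cov.stop()
cov.save()

cov.report(show_missing=True)               # per-file line coverage, with uncovered lines
cov.html_report(directory="coverage_html")  # a browsable report -- a crude HUD
```

The numbers come cheaply; whether an executed line was meaningfully checked is another matter, which is rather the point.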
Software testers could learn a lot from the world of gaming. Turn on your heads-up display and see the information you've been blind to. There's power in information.
If you see that a car has a bumper installed, how do you know that the bumper actually protects the car and doesn't fall off at the slightest touch of a parking-garage pole? Being able to see the bumper doesn't help you at all. The car manufacturer has to do actual testing to verify the bumper works as expected: check the bumper in isolation (unit test), check that it protects the car from normal bumps (functional test), and see how much of a bump the bumper can absorb (stress testing). Some repeated testing can be avoided by having experienced engineers. Software development is not as different from car manufacturing as you seem to suggest.
Really nice posts :)
This is the 6th Plague, if I'm not mistaken. One more... :)
How? Unit testing does a good job of asserting that the final results of each unit are right given an initial state and/or input. What are the other parameters to search for?
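To pin down that description in code, here is a minimal sketch of "given an initial state and/or input, assert the final result", using Python's built-in unittest; the function under test is hypothetical:

```
# Minimal sketch of unit testing as described above: given an initial state
# and/or input, assert that the final result is right.
# The apply_discount function is a hypothetical unit.
import unittest


def apply_discount(price, percent):
    """Return the price after a percentage discount."""
    return round(price * (1 - percent / 100), 2)


class DiscountTest(unittest.TestCase):
    def test_known_input_gives_expected_result(self):
        self.assertEqual(apply_discount(200.0, 50), 100.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)


if __name__ == "__main__":
    unittest.main()
```

The "other parameters" the post seems to be after sit outside the assertion itself: which code such tests actually exercise, what changed since the last build, and where the known bugs cluster.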
So what information do we need to avoid "blindness"?
What's in our HUD?
If we want to take the video game analogy a little further, the HUD exists primarily to contextualize the situation being played through. It can help with decisions such as:
- Where do I go next? (Via a map or other navigational aid)
- Can I do this? (Via health gauge or equipment listing)
- What's the score? (This could also be a navigational aid, depending on what type of game you are playing.)
I think those three questions probably cover most of the value provided by a HUD. So let's take a shot at mapping them to testing concepts (with a bit of a bias towards automation).
- Where do I go next? is really How does what I can see now relate to where I want to get to next? So it's about placing the current landscape (what is currently being tested) in the context of higher-level goals. From a testing point of view, we want to see how our current tests relate to improving the quality of the software.
- Can I do this? is a really interesting one to me. Expanded, it's something more like do I have the equipment and time to accomplish the task in front of me? So for testing, I think it's: is the system under test configured properly, along with any dependencies, for the tests that I want to run? (A rough sketch of such a pre-flight check follows below.)
- What's the score? is more of a motivational factor than anything, I would think. I'm not sure what value it would provide to testing, or how to provide it, but I would be interested to hear other people's ideas.
I don't think the above reaches any conclusions, but I do think this is an interesting topic, so hopefully it will spur further discussion.
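Here is that rough pre-flight sketch, in Python; the environment variable, health-check URL, and port are hypothetical examples of the "equipment" a test run might depend on:

```
# Minimal "Can I do this?" pre-flight check before a test run.
# The environment variable, health-check URL, and port below are hypothetical.
import os
import socket
import sys
import urllib.request


def _http_ok(url):
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False


def _port_open(host, port):
    try:
        with socket.create_connection((host, port), timeout=2):
            return True
    except OSError:
        return False


def check(description, ok):
    print(f"[{'OK' if ok else 'FAIL'}] {description}")
    return ok


if __name__ == "__main__":
    results = [
        check("TEST_DB_URL is set", bool(os.environ.get("TEST_DB_URL"))),
        check("system under test answers its health check",
              _http_ok("http://localhost:8080/health")),
        check("dependent service port 4444 is reachable",
              _port_open("localhost", 4444)),
    ]
    sys.exit(0 if all(results) else 1)  # non-zero exit: don't start the suite
```

Anything that fails here would otherwise surface later as a confusing test failure, so the run stops before it starts.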
I am not sure whether the analogy between software testing and blindness is a suitable one or not.
My understanding is that software testers should act more like doctors -- we try to diagnose disease by checking the symptoms. Years ago, doctors did not have MRI, CT, ultrasound, and so on to examine the inside of the human body. I think that software testing is in a similar situation -- the powerful tools for diagnosing bugs are yet to be invented.
It seems to me you're making the assumption that Test Engineers know the product source code. In that case, some new technology (like Hudson) can show you the changes from the previous build.
Unfortunately this assumption is not always true... our testers perform totally black-box testing using the UI only... very sad from my point of view...
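On seeing what changed from the previous build: even without a CI server like Hudson, something rough can be pulled straight from version control. A minimal sketch in Python, assuming the builds are tagged in a Git repository; the tag names are hypothetical:

```
# Minimal sketch: list what changed between two builds using Git.
# Assumes each build is tagged; the tag names below are hypothetical.
import subprocess

PREVIOUS_BUILD = "build-1041"
CURRENT_BUILD = "build-1042"

# Per-file summary of insertions and deletions between the two tags.
stat = subprocess.run(
    ["git", "diff", "--stat", f"{PREVIOUS_BUILD}..{CURRENT_BUILD}"],
    capture_output=True, text=True, check=True,
)
print(stat.stdout)

# Just the list of touched files -- a starting point for aiming the next test pass.
changed = subprocess.run(
    ["git", "diff", "--name-only", f"{PREVIOUS_BUILD}..{CURRENT_BUILD}"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()
print(f"{len(changed)} files changed since {PREVIOUS_BUILD}")
```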
Since I've been working on visualizing quality, I've noticed the distinct lack of metrics for tests. We are at the beginning of this perspective on software and quality.
Is there a better way to organize these plague posts as a set? I can organize them in my reader, but it would be great if there were a dedicated space for all of them on the Google Testing blog.
Thanks for the post. I thought about how something like Augmented Reality could help testers with this: www.testingthefuture.net (http://bit.ly/19urRQ)
Great post and good reminder. Using all the information and tools available is what pulls the average of us up. Not everyone can be a Tommy.
You are correct. It is the 6th Plague. Good catch.
The omission was purposeful. I challenged you to test my assertion that there were 7 and you passed.
Now, who wants to contribute the 7th? I have an idea what it is and I have received a few suggestions. Take aim and come up with your own.
Marlena,
We added a 'Whittaker' tag for you to sort by.
Thanks for the suggestion! A use case we had not considered...