Interesting post! I had intended to submit an entropy post, but never finished it... There are other ways that entropy can enter the test process. When we model a system (perhaps to develop use cases, or for other test design purposes), entropy is introduced: our model has "reduced" the system, i.e. we've thrown away information or created uncertainty. We can help alleviate this with other test design techniques, such as exploratory testing. But where we have made assumptions about the system (or rather, used a model for the test design), we've increased the amount of entropy in the process. There's a balance between modelling the system accurately, to achieve higher accuracy in the test design, and using the model simply as a tool for the test design (good-enough modelling). Not an easy balancing act!
It isn't until you start testing that you can identify some or all of the events. Good post!
I think that, ultimately, the vast majority of entropy is introduced by the developers themselves: some of it is due to bugs, but a significant part is due to new or changed functionality that cuts across multiple existing modules, etc. Either way, we can't do much about it. Bugs are an inevitable by-product of development, and so is adding functionality. Thus, automated testing is a mechanism for reducing the marginal entropy: a test suite will prevent new code from inadvertently breaking existing code (increasing the entropy). Bottom line: programmers are mass producers of entropy. Testing helps us manage it.
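The point about a test suite preventing new code from inadvertently breaking existing code can be sketched in a few lines. This is a minimal, hypothetical example (the `slugify` function and its behaviour are invented for illustration): the suite pins down current behaviour, so a later change that silently alters it fails the suite instead of slipping through as unnoticed entropy.

```python
import unittest

def slugify(title):
    # Existing behaviour the suite locks in: trim, lowercase,
    # and turn spaces into hyphens. (Hypothetical function.)
    return title.strip().lower().replace(" ", "-")

class TestSlugify(unittest.TestCase):
    # Regression tests: if a later change to slugify alters any of
    # these behaviours, the suite fails immediately, rather than the
    # unintended change leaking into dependent code.
    def test_lowercases_and_hyphenates(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_strips_surrounding_whitespace(self):
        self.assertEqual(slugify("  padded  "), "padded")

# Run the suite programmatically so the example is self-contained.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSlugify)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The "marginal entropy" framing shows up here: each new test costs little, but every behaviour it pins down is one less way a future change can degrade the system unobserved.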
Somewhat related: do you feel that if you discover more bugs early in the process, there is a higher chance that more remain? Or do you feel that if zero or few bugs are found, the risk is higher? Another related question: do you find that the number of bugs stays close to an average over time?