Good enough software testing
Good enough. Perfection. Supreme quality. Delivering quality at speed. Reliably delivering software.
These are familiar expressions. We throw these “vibes at the universe”, hoping they will somehow sharpen the software we are building. Most of them are vague.
Take for example:
“Automating all the testing”
Or the politically charged:
“100% coverage”
These often end in frustration and disappointment. They also signal a probable misunderstanding of the difference between testing and checking activities. Read: confusing the act of willful learning and experimenting with numbed, mechanical interaction.
Keep in mind: these sayings are not absurd. They may work in a tiny, contained context, and there are activities and artifacts behind them that are neither incorrect nor far-fetched.
The usual counter-argument is that the problem space is never that small or one-sided. There is entropy and chaos facing the non-isolated systems we build and work in.
On the other side of the debate, “meaningful testing” or “deep testing” can sound just as meaningless as “automate all the testing” to some.
The underlying problem is one of realism and context. I also think both sides of the debate are part of the problem.
The divide
Most proponents of “automate all testing” or “AI testing” will usually sell their ideas by omitting one or more types of constraints:
- Time constraints
- Human constraints, particularly agreement and communication, …
- Software constraints, particularly testability, reproducibility, repeatability, …
- Ethical constraints
- …
Most constraints get swept under the rug to sell test engineering or automation ideas. As far as anyone cares, the disclaimer is right there in the package, in microscopic print:
Dear user, the theories may work in a small enough test target. Really small.
This drags the people on the “meaningful testing” side of the debate into extensive debating.
The debating becomes a problem in itself: the first crowd is not interested in debating, and the second crowd gets lost in righteous banter.
Like in real life and in many political environments, the microcosm of software testing ends up suffering:
- There is too much tolerance towards the intolerant;
- There’s too much intolerance towards the tolerant;
- Extremism reigns in schools of testing at opposite poles;
The divide grows bigger and more confusing over time.
Keeping our sanity
Those walking out of the debate altogether have a hard time keeping up. Adaptation is key.
The best way we can adapt is to find out for ourselves how much testing is (seemingly) good enough. We can only figure that out by testing. Testing a LOT!
Some testers are afraid of testing a LOT, because they’re afraid of displaying failure. There’s no way around it though. You can only test a LOT by pure trial and error.
We can think of it as a curve showing how well our testing meets the needs of the project (a rough sketch follows below). We can only draw that curve by actually testing. Plus, we need to cut away every small bit of distraction that takes time away from testing.
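As a rough illustration, that curve might look like a diminishing-returns plot. To be clear: the shape, the numbers, and the “good enough” threshold below are assumptions made up for illustration; only real testing on a real project reveals the actual curve.

```python
# Hypothetical sketch of the "good enough" curve. The exponential shape
# and the 0.9 threshold are illustrative assumptions, not measured data.
import numpy as np
import matplotlib.pyplot as plt

effort = np.linspace(0, 10, 100)     # time actually spent testing
fit = 1 - np.exp(-0.5 * effort)      # assumed: diminishing returns

plt.plot(effort, fit)
plt.axhline(0.9, linestyle="--", label='"good enough" for this project')
plt.xlabel("testing effort")
plt.ylabel("how well testing meets the project's needs")
plt.legend()
plt.show()
```

The point of drawing it is not precision; it is noticing when extra effort stops buying meaningful confidence.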
Improving the debate
This entire argument needs an upgrade. It starts with splitting concerns and asking questions:
- What are we actually automating?
- What is automation not solving?
- What are the best effort approaches that will support different contexts?
- What kind of feedback loops can we set in place to test the approaches? (see the sketch after this list)
- …
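On that last question: a feedback loop can start very small. Here is a minimal, hypothetical sketch, assuming we track, per release, how many defects our testing caught versus how many escaped to production. The names and numbers are invented for illustration, not a real dataset or tool.

```python
# Minimal, hypothetical feedback loop: per release, compare defects our
# testing caught against defects that escaped to production.
# All names and numbers here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Release:
    name: str
    defects_caught: int    # found by our testing before release
    defects_escaped: int   # reported from production afterwards

def catch_rate(release: Release) -> float:
    total = release.defects_caught + release.defects_escaped
    return release.defects_caught / total if total else 1.0

releases = [
    Release("1.0", defects_caught=18, defects_escaped=6),
    Release("1.1", defects_caught=22, defects_escaped=3),
]

for r in releases:
    print(f"{r.name}: caught {catch_rate(r):.0%} of known defects")
```

A dropping catch rate is one early signal that the current approach deserves a second look.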
Answering these can help us plan testing efforts and articulate all sorts of constraints. I like to think of this as “Testing the Testing itself”. I hope to write more about this soon, but for now I’d like to leave readers with a few thoughts.
To test the testing itself, we can’t stop at asking questions.
To answer them, we have the learnings and guiding principles we inherit from others and from past experience. We also have to perform analysis and gather information.
The act of questioning and then hunting for answers demands decisions and actions. This is where we, as Testers, often face the most pain:
- We don’t have enough ownership or power over the decisions that impact our work;
- Or we lack the resources to put our decisions into action;
- Or we fail to make our actions observable;
- …
These pains contrast with what the industry sometimes expects from us. Most experienced Testers will tell us that we don’t assure quality; that expectation is a trap. There’s actually very little we control in the grand scheme of things.
We need to be honest with ourselves: what is preventing us from being better Testers/Test Engineers?
We can’t waste time or focus on “work done for the sake of showing work”. Some folks caution us: beware of the time put into lengthy scripted test cases and test plans.
“Testing the Testing itself” takes dedicated work, focus and time.
And if we’re honest: we have a lot of work ahead. We are still far from good enough software testing.
If you read this far, thank you. Feel free to reach out to me with comments, ideas, grammar errors, and suggestions via any of my social media. Until next time, stay safe, take care! If you are up for it, you can also buy me a coffee ☕