
Minor Gripe

2019-11-25 -- Testing priorities

Chris Ertel

Introduction

Software testing philosophy is a softball subject for blog posts, and after a month of fighting with a NAS build I’m in the mood for an easy write-up.

This is an adaptation of a thread from work where I reflected a bit on testing practices: why we test things and, most importantly, which things we as professional developers should emphasize and put extra effort into.

Fuckhueg disclaimer: All of the following assumes that the software being worked on is not library code being produced to serve as a fundamental reusable component for other folks. It similarly assumes the software isn’t being developed under a regulatory regime (say, United States DoD or FDA) where testing values and procedures are spelled out for you. Either of those domains likely requires a different mindset about testing.

When do we test?

We test our software to verify that it does what customers ask of it (acceptance testing), that it holds up when fed bad input or run under bad conditions (reliability testing), that it runs quickly and cheaply enough (performance testing), that its modules and services actually work together (integration testing), and that its individual functions behave the way we wrote them to (unit testing).

Why do we test?

In short, we do testing to convince ourselves and our business friends that our software does what we say it does.

Note that convincing ourselves of this is quite, quite different from proving it to ourselves. If we actually cared about correctness, we’d do something like write a formal specification for the system, prove that our implementation satisfies it, and exhaustively verify whatever the proofs don’t cover.

This would guarantee the creation of artifacts that would be extremely well-characterized, reliable to some arbitrary degree, inflexible to business demands, exceedingly slow to produce, and expensive to commission. In short, “a museum piece from the very start”.

In most cases, the above is too much. We would be better served by just convincing instead of proving, and that means we can maybe pick our battles a bit more: we just need to figure out which kinds of testing are convincing enough and focus on those.

What do we test?

If we rank the aforementioned types of tests by what our business friends care about, it looks something like:

  1. Acceptance testing (since eventually customers get angry if the software can’t do what they ask it to under any circumstances)
  2. [some gap]
  3. Reliability testing (customers that start using software to remove a pain get upset if the software doesn’t reliably remove the pain)
  4. Performance testing (if SaaS, business folks want to cut costs which means fewer servers and cheaper deployments; if not SaaS, customers will make the same call for their personal machines)
  5. [huuuuuge gap]
  6. Integration testing (business people don’t care about how many stages are in the sausage factory–customers never ask)
  7. Unit testing (business people really don’t care about the bolts holding the conveyor belts in the sausage factory together)

Let’s look at how we developers probably rank the same things:

  1. Unit testing (since it’s straightforward, usually, and the functions are like right there)
  2. [minor gap]
  3. Performance testing (since it’s usually not too hard to run a function or module through its paces while wrapped in a timing block)
  4. Reliability testing (it’s always kinda fun to try and feed our code things it doesn’t expect; there’s a quick sketch of this and the timing-block idea right after this list)
  5. [moderately-sized gap]
  6. Integration testing (it’s a pain in the ass putting all the modules together and getting services to communicate)
  7. [moderately-sized gap]
  8. Acceptance testing (ugh checklists are boring to write and clicking stuff isn’t fun at all and if we change anything we’re gonna have to redo all this work and it takes so long to run)
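
As an aside, the “timing block” and “feed it garbage” styles of test really are about as cheap as they sound. Here’s a minimal sketch in Python/pytest; the import_invoices function, the input shapes, and the one-second budget are all invented for illustration, so substitute whatever you actually care about:

  import time

  # Stand-in for whatever function or module you actually want to exercise.
  def import_invoices(rows):
      return [row for row in rows if row]

  def test_import_invoices_is_fast_enough():
      rows = ["invoice"] * 100_000
      start = time.perf_counter()
      import_invoices(rows)
      elapsed = time.perf_counter() - start
      # The one-second budget is made up; pick a number the business has signed off on.
      assert elapsed < 1.0, f"import took {elapsed:.2f}s against a 1.0s budget"

  def test_import_invoices_survives_garbage():
      # Reliability flavor: feed it things it does not expect and make sure it
      # fails loudly (an exception is fine) rather than corrupting anything.
      for junk in (None, [], [None], ["", "\x00" * 10, object()]):
          try:
              import_invoices(junk)
          except (TypeError, ValueError):
              pass

Neither of these proves anything, of course, but as covered above that was never the goal: they’re cheap ways of convincing ourselves.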

The only thing both developers and business people agree on, it seems, is that integration testing sucks rocks to do.

(Notice, incidentally, that in neither case did we talk about what actually constitutes effective testing–merely what the two groups care about more. Software defect rate is predicated more on the size of the system and its lines of code than on any particular testing focus, I believe.)

So, with two very different views of how testing should be done, what should we as professionals do?

What to test

I submit that, from what I’ve seen in my career and elsewhere, defect-free, reliable, performant software that doesn’t meet the business requirements does not matter to the business. Any testing beyond the bare minimum required to convince the business folks that the software is ready (because, ultimately, they pay us and not vice versa) is internal overhead and should be streamlined where possible.

Given the above ranking, the first thing that should be tested (manually, if need be) is acceptance of the software’s implementation of the workflows customers pay money for. If that isn’t being tested and verified, every other form of testing is just academic.

I’ll stake out an even stronger claim, to make sure that the window is properly ajar:

Any testing code written before complete verification of all customer-facing revenue-generating processes is professionally negligent.

What does that mean, exactly? What would that look like?

The process I’ve seen (and used) is something like:

  1. Identify all the customer interactions and workflows with the software.
  2. Tag as “essential” every workflow that involves the transfer of money to your employer.
  3. Tag as “non-essential” every workflow that does not culminate in the transfer of money to your employer.
  4. Tag as “dangerous” every workflow that could result in the wrong amount of money leaving your employer.
  5. Write a manual QA runbook that brings a tester (human, not software!) through all of the “essential” and “dangerous” workflows with GO/NO GO standards.
  6. Prior to each public release of the software, run through this runbook. After deployment to a live environment (if applicable), do the same thing.
  7. Note how colossal a pain in the ass step 6 is, so start writing acceptance tests that pretend to be a user (there’s a rough sketch of one right after this list); you may find that the runbook for step 6 provides excellent pseudo-code. Switch over to using these as soon as is reasonable.
  8. Ask the business if you need to test the “non-essential” workflows. They’ll probably say that can wait.
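
To make step 7 a little more concrete, here’s a rough sketch of what a single runbook entry can turn into once it’s automated. This one happens to use Playwright driven from Python, and every specific in it (the staging URL, the selectors, the “order a widget” workflow, the price) is invented for illustration; the shape is what matters, namely that the test walks the same GO/NO GO checklist a human tester would.

  # Runbook entry being automated (hypothetical):
  #   "As a logged-in customer, order a basic widget. GO if the confirmation
  #    page shows an order number and the correct total; NO GO otherwise."
  from playwright.sync_api import sync_playwright

  BASE_URL = "https://staging.example.com"  # made-up environment

  def test_customer_can_order_a_widget():
      with sync_playwright() as p:
          browser = p.chromium.launch(headless=True)
          page = browser.new_page()

          # Log in as a known QA customer.
          page.goto(f"{BASE_URL}/login")
          page.fill("#email", "qa-customer@example.com")
          page.fill("#password", "not-a-real-password")
          page.click("button[type=submit]")

          # Walk the money-making workflow: add a widget, check out.
          page.goto(f"{BASE_URL}/widgets/basic")
          page.click("#add-to-cart")
          page.click("#checkout")

          # GO/NO GO criteria, straight from the runbook.
          page.wait_for_selector("#order-confirmation")
          assert "Order #" in page.inner_text("#order-confirmation")
          assert "$19.99" in page.inner_text("#order-total")

          browser.close()

The nice part is that the runbook entry, the comments, and the assertions all end up saying the same thing, which makes it easy to show the business folks that the “essential” workflows really are covered.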

Once that’s done, we have a basic sanity check that the business will be appeased. Then–arguably, only then–we can test everything else, in decreasing order of importance to ourselves.

Why don’t we (industry) test this way?

Good question. I blame it on the confluence of a few factors, many of which are under our control and several of which are not:

There are probably dozens more reasons, but those are enough to mull over for now, and they suggest a conclusion: our industry gravitates towards the rabbit’s foot of unit testing and test coverage because it’s easier than changing our practices to allow for a proper culture of quality engineering. How can we fix these factors? That’s a whole different blog post.

Conclusion

You may be shaking your head at this point. “This doesn’t sound like the good test-driven development practices I’ve heard Uncle Bob talk about.”

And you’re right! I think that this advice flies in the face of everything I’d come to accept about how Professional Software People Writing Professional Software Should Professionally Test.

But here’s the key word there: professional. It is unprofessional to get paid to waste your employer’s money by solving problems they don’t have without giving them the opportunity to intervene. It is unprofessional to deprioritize the core business needs that generate revenue in favor of some aesthetic of quality.

tl;dr: Be professional and test the needs–the real, cash-generating needs–of the business before working on anything else.


Tags: practices testing
