MVP scope, sanity, and schedule

Introduction

This is a quick little ramble about the scope of minimum viable products (MVPs), the schedule for creating and delivering them, and finally how the interplay of those things lands on developers.

What’s in an MVP

The entire purpose of an MVP is to create the simplest, cheapest possible expression of a business idea, to test whether the market wants the thing you’re making.

Ways to succeed:

  • Your MVP should be a clear, incremental step towards a business feature or idea–ideally testing only one thing.
  • Your MVP should be instrumented/monitored to figure out if it is actually creating results for the money funnel (see the sketch after this list).
  • Your MVP should be cheap enough that you don’t care if you need to replace it or throw it away. There’s no point in running an experiment if you’re not gonna try new things when it doesn’t pan out.
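
To make that instrumentation point concrete, here’s a minimal sketch in Python. Everything in it is hypothetical–the event names, the track helper, and the funnel steps are stand-ins for whatever your analytics stack actually provides:

```python
# Minimal sketch of instrumenting an MVP around a single hypothesis.
# All names here (track, checkout_viewed, etc.) are hypothetical
# stand-ins for whatever analytics/event pipeline your org uses.
import json
import time


def track(event: str, **props) -> None:
    """Record one funnel event. Appending JSON lines to a file stands in
    for shipping the event to your real analytics pipeline."""
    record = {"event": event, "ts": time.time(), **props}
    with open("mvp_events.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")


# The MVP's thesis: "users who see the new checkout will complete it."
# So we instrument exactly two events--exposure and conversion--and
# deliberately nothing else.
def show_new_checkout(user_id: str) -> None:
    track("checkout_viewed", user_id=user_id)


def complete_checkout(user_id: str, amount_cents: int) -> None:
    track("checkout_completed", user_id=user_id, amount_cents=amount_cents)
```

The number you care about afterwards is just completions divided by views; anything fancier probably belongs to a later experiment.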

Ways to screw up:

  • Create an MVP so expensive that people get invested in it (see: the sunk cost fallacy). This will create secondary pathologies in your org.
  • Create an MVP with too many moving parts and features.
  • Fail to identify the performance metrics of the MVP.
  • Having identified those metrics, fail to measure them.
  • Identify and measure metrics that have nothing to do with the core experiment of the MVP.

Arguably, if you aren’t actually testing a value proposition, you’re making a prototype and not an MVP. I’m unsure if the distinction is worth working into the daily lexicon of your org, but I suspect it may well be.

Scoping an MVP

There are three parts to an MVP, right there in the name:

  • Minimal–it must not contain irrelevant details.
  • Viable–it has to be something customers care about.
  • Product–it has to actually exist and function, otherwise it’d be the MVPD or MVPP or MVWF (pitch deck, PowerPoint, or wireframe respectively).

Note, dear reader, that if you are an engineer, the only part of that you really have control over is the “product” bit…“minimal” is the job of your product folks (hopefully with input from y’all!) and “viable” is entirely in the hands of the users (a sad fact bemoaned by both product and engineering).

Anyways, the scope of the MVP should contain:

  • Exactly what is required to implement the business hypothesis you’re testing.
  • Exactly what is required to get feedback on it.
  • Nothing else.

Things that commonly get included in scope but ideally shouldn’t be:

  • Aesthetically-pleasing designs.
  • Bugfixes to unrelated functionality.
  • Face-lifts to other parts of the product.

It’s been said that you should be a little embarrassed about the state of an MVP–and that’s true. A good MVP follows the same sort of rule of thumb, I’ve found, as courting somebody:

If they’ve decided to like you or give you a chance, you probably can’t do anything wrong. If they’ve decided not to, you probably can’t do anything right.

That being the case, we build the cheapest, simplest MVPs we can and see if we connect with our users on a fundamental level–and if we don’t, it ain’t gonna matter how many hours we spend making a perfect design or days we spend shaving time off a database query.

Scheduling an MVP

So, having decided to embark on making an MVP of a particular scope, your org needs to schedule time to work on it.

In theory, this should just be the time required to implement the ideas of the MVP modulo any time constraints imposed by the domain you’re testing. If there’s a big trade show or something coming up, you might want to have the MVP available for testing. If there’s a personnel change anticipated soon, you might want that MVP done before that person leaves (if, say, they’re the only dev that can implement it quickly or whatever).

My hunch is that it probably shouldn’t take more than a month of calendar time to create an MVP good enough to get customer feedback on.

(If you point at some kind of special-sauce AI service or robot thing, I’ll just answer that you are testing the business value to the customer and not the implementation–so mechanical-turk it or hide the operator or whatever, so that the user gets a functionally-equivalent version of what the engineers will eventually scale.)
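
As a sketch of what “hide the operator” can look like–every name below is hypothetical, and a real version would run the operator loop on a separate thread or service–the MVP exposes the same interface the eventual automated service would, while a human supplies the answers:

```python
# Hypothetical "wizard of Oz" routing for an MVP: the caller sees one
# interface, but behind it a human operator answers instead of the
# not-yet-built AI service. All names here are illustrative.
import queue

# Requests waiting for a human operator to answer.
operator_inbox: "queue.Queue[tuple[str, queue.Queue]]" = queue.Queue()


def answer_question(question: str) -> str:
    """The interface the product calls today and the real service
    implements later. For the MVP, a human supplies the answer."""
    reply_box: "queue.Queue[str]" = queue.Queue()
    operator_inbox.put((question, reply_box))
    return reply_box.get()  # blocks until the operator responds


def operator_loop() -> None:
    """Run on the operator's console: show the question, send the reply."""
    while True:
        question, reply_box = operator_inbox.get()
        reply_box.put(input(f"Customer asks: {question}\nYour answer: "))
```

The user exercises the real value proposition while the expensive implementation stays unbuilt–which is exactly what the experiment needs.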

Unfortunately, even after taking those things into account, something can still go wrong.

Changing scope, the enemy of scheduling

There is great temptation, especially in orgs that have some heavy political investment in product design and “quality”, to refuse to ship an MVP until it meets the standards of the org.

This seems to manifest as:

  • needing extended design periods, delaying implementation
  • inserting additional pipeline stages (and thus latency) between biz person with idea and engineer shipping idea
  • adding A/B testing of details that are irrelevant to the MVP’s thesis (and hence the creation of those details)

All of these erode the predictability of the ship time for the MVP. Further, they increase the expense of the MVP (in terms of engineering and design man-hours, and also in friction as people argue over things like dialog color and whatnot).

This undermines the intent of opting for an MVP instead of a prototype.

The cost of changing scope on a constant schedule

If you ask for more features or more flexibility in your MVP process, you need more time.

If you ask for reliable and predictable delivery schedules, you need to stamp out variability due to changing scope.

There is no way around this.

Well, sorta.

You can ask your developers to take up the slack by working extra hours and putting aside the cognitive dissonance (often non-trivial) of observing a changing situation while being required to assert that the schedule predictability is not affected.

Doing this will antagonize your engineers, not least because seemingly simple variations/additions can absolutely balloon the implementation complexity of an MVP. Given the current state of the art in computing, we can’t–for example–just add a text-box to “search for the right thing” without a pretty good idea of what the business means by the right thing. Similarly–from a previous gig–we can’t just “add a sparkline” without considering where that data comes from, where it is stored, what process summarizes it, and how we handle cases where it hasn’t been collected.
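
To make the sparkline example concrete, here’s a minimal sketch of the decisions hiding inside that one-liner request. Everything here–the fetch function, the hourly bucket size, the carry-forward gap policy–is a hypothetical stand-in for a choice somebody actually has to make:

```python
# Hypothetical sketch of the decisions hiding inside "just add a sparkline".
# The fetch function, bucket size, and gap policy are all assumptions
# that somebody has to actually choose.
from datetime import datetime, timedelta
from typing import Optional


def hourly_counts(start: datetime, hours: int) -> list[Optional[int]]:
    """Where does the data come from, and where is it stored? Stubbed
    here with fake data; in reality this is a query against wherever
    the events live (decisions #1 and #2)."""
    return [None if h % 7 == 3 else h * 2 for h in range(hours)]


def sparkline_series(start: datetime, hours: int) -> list[int]:
    """Decision #3: how to summarize. Decision #4: what to do when an
    hour has no collected data. Here we carry the last value forward."""
    series, last = [], 0
    for count in hourly_counts(start, hours):
        if count is not None:
            last = count
        series.append(last)
    return series


print(sparkline_series(datetime.now() - timedelta(hours=24), 24))
```

Four decisions for one tiny chart–and each one is a conversation with the business, not a line of code.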

In healthy orgs, this is inconvenient. In unhealthy orgs, this is indistinguishable from normal sandbagging and can trigger secondary and tertiary political problems, perhaps leading to irreversible personnel changes.

Conclusion

MVPs are meant to be concise, cheap tests of an idea, delivered on time. Changing their scope makes them more expensive and puts any promised timelines at risk.

You can ignore this, but then your developers will pay the price.