Delivering enterprise software with incomplete requirements


How do Delivery Smart enterprise teams deal with missing requirements and still deliver on time and on budget?

Lately I’ve been doing a lot of work with Big Data teams, those who supply analytics, business intelligence (BI), and data warehouse information to the C-suite for both key operational and strategic decisions.  Since these teams are often on the “caboose” end of the project change train, they struggle to begin their work: they are waiting on upstream application, UI, or transactional database teams to supply information on what’s changed before they can begin writing or revising reports.  These Big Data engagements are some of the most challenging I’ve ever had to contend with in all my years of helping enterprise teams deliver better and faster.

I advocate the use of proven Delivery Smart patterns, rather than prescriptive steps or methodology, because enterprise agile leaders face so many exception cases. In this article, we’ll focus on patterns to deal with missing requirements.

The Problem

I own real estate investment properties, and frequently call upon my favorite general contractor (GC) to estimate rehab or repair costs.  Sometimes I need these in a hurry, to know whether or not I should bid on a hot property.  “Fernando, I need a rough estimate to remodel a 2,000 square foot, 3 bedroom, 2 bath house.  What will that cost, and how long would it take?”  His answer is always the same:  “It depends.  Is this a paint/flooring/fixtures job, or is it a gut rehab?  Are we moving interior walls?  Any of them load-bearing?  Do we need a new roof or AC?  How’s the foundation?  Any electrical or plumbing issues that we know about?  Is it septic and well-water, or city utilities? Do we have to pull city permits, or is this outside city limits?”

Globally distributed enterprise teams face challenges similar to those of general contractors when it comes to requirements definition for their projects. In less complex, single product-focused companies, delivery teams always have context for their work, because they know that Job #1 is to make the product better.  With enterprise teams, the reasons why projects are being requested and the true business intent may be buried in obscure business case documents that are never seen by IT teams.  It may be a capital investment, or a cost-control exercise.  It may be architectural efficiency, or go-to-market expediency (the two are frequently at odds).

Worse, commitments to deliver a project are often made on the basis of a SWAG by one or two senior developers in the prior fiscal period before the project commences. Many companies pretend that this means IT has agreed to a fixed scope/fixed price contract (large companies often budget in terms of hours instead of dollars when they bill an internal business unit, to allow Accounting the ability to adjust cost data, and identify potential future savings).  But, in fact, this is not a fixed scope/fixed price contract;  it is a variable scope/fixed price contract. Why?  Because the requirements are not nearly fleshed out enough to estimate, and the team is often not fully assembled by the time the endeavor is accepted by IT.  The details of what the Customer wants may trickle in over the course of the project, leaving the final bill way out of whack with the original estimates.

It would be like Fernando accepting a contract to “remodel a house” for me, without asking all of his usual questions.  As the project progresses, I realize market demand is shifting, and I ask him to reconfigure the house from a 3/2 to a 4/2, to add a deck, make it 18-inch tile flooring instead of laminate, re-roof, and, oh, by the way, to replace the AC and water heaters with much more expensive ones, based on recent city code changes.  Except… I’m not going to pay him extra, because he agreed to the “fixed price” contract!  Small businesses eschew such deals, but big companies make these sorts of “bad deals” with their own IT departments every day.  IT knows that their track record on such promises is poor, and Customers lose faith in IT for making promises it can’t keep.

The Waterfall Solution (?)

One way to avoid this pain is to simply make the Customer give you all the requirements up front, so that your team can accurately estimate them.  There are only two problems with this waterfall approach:  getting all of the requirements, and accurately estimating them.

Fifty years of research have confirmed that we never get all the requirements up front (an approach colloquially referred to as Big Up-Front Design, or BUFD) for software projects, not because Customers are evil, but because Customers are an integral part of the design process, and cannot contribute their feedback before they experience working software results. Accurate estimation in enterprise environments is as challenging as predicting financial trends is for economists. In both cases, we are subject to high causal density (HCD), a concept covered in detail in my book Delivery Smart.  HCD is a term coined by Jim Manzi, author of Uncontrolled, and refers to the uncountable number of interdependent variables that affect or are affected by other people and systems.  Bent Flyvbjerg’s 2015 book Megaprojects chronicles similar issues with BUFD for all kinds of projects, both software and physical.

Even in real estate construction projects, to which software building is frequently (and inaccurately) compared, perfect knowledge of all requirements and variables is rarely available, which is one reason that even small repair and remodeling jobs frequently run grossly over budget.

I recently had Fernando replace the roof on a duplex I own.  Even though he asked all of his usual questions, physically inspected the property, and gave me a quote, he warned me that there were a few “unknown unknowns.”  For example, there could be rotting wood underneath the rusted aluminum chimney flashing.  There could be rotten decking underneath the curled-up shingles.  If these things obtained, I had two options:  1) ignore them, or 2) fix them properly.  Option 1 would be a fixed price, but would void any warranty Fernando offered on his work, since we couldn’t be sure if the problem was pre-existing or not.  Option 2 may incur marginal additional cost, but it would allow Fernando to warranty the work, ensure that the roof was good for the next 20 years, and give me and the tenants peace of mind that there would be no leaks (and future damage).  I chose option 2.  Notice that this was a “rough estimate” with time and materials (T&M) billing for overages.

For all these reasons, and many more detailed in thousands of other books and articles, waterfall is not a solution at all.

The Wish List Grooming Pattern solution

The Delivery Smart solution is to start with a fully-groomed backlog, something which is easier said than done in enterprise environments.  In order to achieve a fully-groomed backlog at the outset of a project, something which is critical to setting stakeholder expectations and baselines, we have to use one or more of the following three sub-patterns:

  • Extrapolated Forecast Pattern
  • Requirement Placeholder Pattern
  • Requirement Do Nothing Pattern

All of these are detailed below, but let us start with what we mean by a “groomed wish list.”

Delivery Smart teams use the product backlog concept from scrum. We call it The Wish List.  The Wish List captures everything that the Customer could possibly want, whether an enhancement to existing systems, brand new features, or bug fixes.  Any delta (change) from the status quo goes into the Wish List. Calling it a Wish List subtly highlights the notion that this is what Customers wish for, not necessarily what IT teams are committing to.

To groom the Wish List, we ensure that each item in it has (1) a mutually exclusive priority order (sometimes called stack ranking), and (2) a relative effort estimate (usually in delivery points, similar to story points). Having a relative effort estimate in delivery points presumes that the team has enough information to do a high-level, directionally accurate, though not necessarily precise, estimate.  We use this variation of the Fibonacci Scale for estimates: 1, 2, 3, 5, 8, 13, 20, 100, and ?.  The question mark (?) means that the team does not have enough information to estimate, and should not pretend that they do.
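The scale above is small enough to capture in a few lines. Here is a minimal sketch in Python; the helper name is illustrative, not part of any real tool:

```python
# The 9-point Delivery Smart scale from the text: eight numeric values
# plus "?" for "not enough information to estimate."
DELIVERY_POINTS = {1, 2, 3, 5, 8, 13, 20, 100}
UNKNOWN = "?"

def is_valid_estimate(value):
    """True only for a value on the scale, or an honest '?'."""
    return value == UNKNOWN or value in DELIVERY_POINTS

print(is_valid_estimate(8))    # True
print(is_valid_estimate(4))    # False: 4 is not on the scale
print(is_valid_estimate("?"))  # True
```

A tooling check like this keeps teams from quietly inventing in-between values (a 4 or a 50) instead of either rounding to the scale or admitting a “?”.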

Sidebar: Delivery Smart Points

Why not use the entire Fibonacci sequence?  You can.  I like this modification for two reasons.  First, it’s smaller, so there’s less haggling at the high end.  Second, because the range of available values is more limited, it has the subtle effect of reducing variance across teams.  Now, that does not mean that Team A’s 8-point story is the same size or effort as Team B’s 8-point story.  But the more you limit degrees of freedom in estimation, the lower the variance.  Statisticians have long recognized this, and it’s one reason that the 7-point semantic differential scale is sometimes used over a 5-point Likert scale: to give respondents more choice.  In practice, I’ve found that the 9-point Delivery Smart variation on ol’ Fibonacci gives teams all the freedom they need, while reducing upper-bound confusion and time wasted arguing the meaning of a 144-point story versus a 233-point story (ack!).

There are other ways to estimate effort in an agile fashion, including simply counting the number of requirements/use cases/user stories committed and delivered, assuming a roughly equal value for all.  But for estimates that deliver greater accuracy, more flexible metrics, and are more in keeping with the current zeitgeist of agile, use points.

Relative Estimates

For example, my contractor Fernando might estimate replacing an old ceiling fan with a new one as 2 points, interior painting as 8 points, re-roofing the house as 20 points, and building an addition as 100 points. Delivery Smart teams would break down any 100-point item into smaller chunks:

  • Build Addition = 100 points:
    • CAD drawings = 8 points
    • Pull permits = 5 points
    • Grade & level land = 13 points
    • Run plumbing lines = 20 points
    • Pour foundation = 13 points
    • Framing = 8 points
    • Dry-in = 5 points
    • Siding = 5 points
    • Roof = 5 points
    • Knock out interior connecting wall = 8 points
    • Electrical = 13 points
    • Finish-out = 8 points
    • Clean up = 5 points

Note that the sum of all the “Build addition” delivery items does NOT need to total exactly 100.  Seeing a Wish List item pointed as a 100 is simply a red flag that the team needs to think more about what this item entails, break it down into its component parts, then prioritize and estimate them.
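To make the point concrete, here is the breakdown above summed in Python (the point values are copied from the list; the variable name is illustrative):

```python
# Sum the "Build Addition" breakdown to show it need not equal the
# original 100-point red-flag estimate.
build_addition = {
    "CAD drawings": 8, "Pull permits": 5, "Grade & level land": 13,
    "Run plumbing lines": 20, "Pour foundation": 13, "Framing": 8,
    "Dry-in": 5, "Siding": 5, "Roof": 5,
    "Knock out interior connecting wall": 8, "Electrical": 13,
    "Finish-out": 8, "Clean up": 5,
}

total = sum(build_addition.values())
print(total)  # 116 -- larger than the 100-point flag, and that's fine
```

The 100-point flag did its job: it triggered the breakdown, and the team now has thirteen estimable, prioritizable chunks instead of one vague lump.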

If the team assigns a “?” to an item, it highlights to their Project Manager the need to get more details on this requirement, whether from an upstream IT team, or from a Business customer.

Extrapolated Forecast Pattern

If your team has a huge number of stories, say, 200 or more, as is common with many enterprise teams, then you may want to extrapolate your initial grooming to provide management with a “quick and dirty” estimate which, while far from perfect, may be much better than a single “ivory tower expert” estimating your teams’ effort from afar.

To accomplish this, start by estimating a significant portion of your backlog.  Each team can decide what “significant” means, but statistically, it would mean that you have enough representative sample stories sized and prioritized that you can then multiply that number across the remaining number of unsized stories.
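The arithmetic behind this extrapolation is simple. A sketch in Python, where the sample figures and backlog size are invented for illustration:

```python
# Extrapolated Forecast Pattern sketch: size a representative sample of
# stories, then project the sample average across the unsized remainder.
sized_sample = [3, 5, 8, 2, 13, 5, 8, 20, 3, 5]  # points for estimated stories
unsized_count = 190                               # stories not yet estimated

avg_points = sum(sized_sample) / len(sized_sample)
forecast = sum(sized_sample) + avg_points * unsized_count
print(round(forecast))  # 1440 total delivery points, quick and dirty
```

In practice your sample would be far larger than ten stories, as the next section discusses; the mechanics stay the same.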

What is a “representative” number?

This depends on your team’s particular backlog. To achieve a statistically sound 95% confidence interval across our 9-point scale, knowing that the size of requirements can vary significantly (averaging 3 standard deviations, in our experience), you would need to formally estimate at least 95 requirements (user stories) for a 99% likelihood that your extrapolation would be representative. If the stories regularly vary by 4 standard deviations or more, then you’ll need to estimate about 150 requirements to achieve that same level of confidence.  Consult statistics literature for more information on statistical power and sample size to arrive at your own criteria.

Or you can Blink it, to reference Malcolm Gladwell’s book.  If your team is experienced, knows their domain, and has a pretty good feel for the type and difficulty of their requirements, then you can eyeball the list, noting the outliers (Gladwell again!), and estimate whatever number gives your team a warm fuzzy that yes, they do indeed have a good feel for the outstanding work in the backlog.  My only caution here is that, at some point, your PM is likely to have to explain and justify their estimation rationale to upper management.  If you’re not comfortable walking into a room with a Fortune 500 CIO, her VPs, and your boss and telling them “We eyeballed it,” you’re better off going with the statistical power method.

Predicting tomorrow’s weather

Either way, you should emerge with a number which represents the total amount of delivery points in your team’s backlog.  For each project, you should then estimate an ideal burndown velocity:

Total Points in each Project’s PBL / Number of sprints Team has left in this Fiscal Year = Ideal Average (Burndown) Velocity per Sprint

You can present these figures in all sorts of CIO-friendly graphs and charts, which will make you extremely popular among the management teams, and may even get you a fancy Aeron chair, if you play your cards right. But you can also present these charts to your own delivery team, and track sprint-over-sprint progress against this initial estimate. Further, you can project progress through the use of handy-dandy Excel trend lines.
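The formula above, plus a simple linear projection of the kind an Excel trend line would give you, can be sketched in a few lines. All figures below are invented for illustration:

```python
# Ideal burndown velocity: total backlog points divided by the sprints
# remaining in the fiscal year.
total_points = 1440      # groomed + extrapolated backlog
sprints_left = 18        # sprints remaining in the fiscal year

ideal_velocity = total_points / sprints_left
print(round(ideal_velocity))  # 80 points per sprint

# Linear projection from actual sprint-over-sprint burndown so far.
actual_burned = [62, 75, 71]                 # points delivered per sprint
avg_actual = sum(actual_burned) / len(actual_burned)
remaining = total_points - sum(actual_burned)
sprints_needed = remaining / avg_actual
print(round(sprints_needed, 1))  # 17.8 more sprints at the current pace
```

Comparing `sprints_needed` against `sprints_left` each sprint is exactly the sprint-over-sprint tracking described above, in numeric rather than chart form.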

This method is not in every case provably better or worse than the ivory tower expert estimates mentioned above.  But in dozens of projects over many years and companies, I’ve found it more useful for the following reasons:

  1. They are created (at least the “estimate sample” is) by using the wide-band Delphi process of successful agile teams.  Having more people involved, talking through the requirements in detail, and arriving at a consensus of complexity from the team who will actually deliver the work is significantly more accurate, and infinitely better for team morale and IT/Business trust-building than the ivory tower method.
  2. Done correctly, the extrapolation can be shown to be statistically sound, and thus much more scientific than ivory tower estimates, even if such estimates were done for all 200 requirements (which they rarely are).
  3. It introduces all stakeholders to the concept of relative estimation right at the start of the project, and divorces effort and complexity estimates from dollars, avoiding the “price per feature” predictions that turn sprints into death marches. The ivory tower estimates are usually estimated in absolute measures, such as hours, dollars, and calendar delivery dates, which allows Business stakeholders to do quick conversions from budget to feature cost.  It also means that they’ll be off by orders of magnitude.  By focusing on an abstract like delivery points for effort and time estimation, project budget accounting becomes almost stupidly simple:  your budget burns down at a constant rate per day/week/quarter, etc., relative to the size of your team.  Time and materials.  No muss, no fuss.

What about missing upstream requirements?

I always encourage teams to start without “finalized” requirements.  I don’t care two whits about “finalized” requirements, because they never are.  Never.  Not in 20+ years of corporate, mid-market, and small business consulting (or in other ventures, for that matter).  So, how does a team account for “known unknowns,” such as upstream dependencies, whimsical exec mandates, market shifts, etc.?

Requirement Placeholder Pattern

Just as developers “stub out” code by putting in start/finish placeholders, then fleshing it out later with actual logic, you can do the same on the requirements side.  If you have a vague idea of a requirement, but don’t know the details, stub it out in your product backlog.  For example, suppose an ivory tower estimate was made during last year’s budgeting exercise, giving you a “vision” of the project and a few high-level objectives, but little technical detail.  With your team, stub out placeholders: drill down on the epics it would take to turn that vision into reality, break those epics into sprint-sized bites, put delivery point estimates on them, and order them (by priority, order of execution, cost of delay, etc.).

If you suspect there will be more requirements coming, but they haven’t trickled down to your team yet, use an epic to stub out a large placeholder requirement. Then have your team conduct some thought experiments to determine the best, worst, and most likely scenarios (not the details of the requirement, but the amount of effort it’s likely to involve).  If they are stumped as to how to begin, pull some examples from past sprints, past projects, or past jobs to get the conversation started.  Very likely, someone on your team has a pretty good idea of what’s coming down, or knows who does.  Seek that person out, and invite them to your brainstorming session.
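Once the team has best, worst, and most likely figures, one reasonable way to collapse them into a single placeholder number is the classic three-point (PERT) weighting. This is my suggestion, not a pattern the text prescribes, and the figures are invented:

```python
# Requirement Placeholder sketch: blend best/worst/most-likely delivery
# point scenarios from the team's thought experiment into one stub
# estimate using the three-point (PERT) weighted average.
best, likely, worst = 20, 45, 100   # delivery points from the discussion

pert_estimate = (best + 4 * likely + worst) / 6
print(round(pert_estimate))  # 50
```

The placeholder epic then carries 50 points in the backlog until real requirements trickle down and the stub is re-groomed.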

Requirement Do Nothing Pattern

Alternatively, do nothing. Present your project burndown forecast based on what you know now.  Stipulate loudly, clearly, and in bright red letters, that, if circumstances change, so will your predictions.  No one, not even the most irascible COO, can argue with that line, because they’ve used it so many times themselves!  As new requirements are added to the backlog (a common occurrence with agile teams), project managers can note the size and impact they had on the team, and use them as reference points for future brainstorming and thought experiments on future projects, or even on later stages of the current project.

With both the Placeholder and Do Nothing patterns, we are looking for estimates based on empirical data, rather than half-baked technical specs.  This is one of the reasons we use an agile, iterative approach.  When we have enough data points, we can apply quantitative analysis to our predictions; until then, we can use qualitative analysis such as the best/worst case thought experiment.  Even ignoring missing requirements is a sound data-based approach (we can’t predict what hasn’t been asked yet).

The next time you’re asked to lead a project with significant missing requirements, employ the Placeholder and Do Nothing sub-patterns as part of the Wish List Grooming pattern to give your team and stakeholders rapid traction and accountability.

About the author

Curtis Guilbot helps improve leaders and organizations. His new book is Delivery Smart: How Fortune 500 companies get 10x gains from enterprise IT teams. Curtis marries creativity (he’s an accomplished actor and musician) with 20 years of demonstrated results for global Fortune 500 leaders and entrepreneurs. For free resources, tools, insights, and case studies, visit
