12 Work Breakdown Structure Mistakes on Scale-Up Implementation Projects
Twelve specific WBS mistakes individual contributors make on scale-up implementation projects — each a documented contributor to silent expansion, with a detective signal to catch each one early.
Twelve places implementation work hides until it's too late to fund
A WBS that looks complete to the team that wrote it is usually missing the work nobody on that team owns.
Individual contributors at scale-ups writing WBSs for implementation projects make a recurring set of mistakes. The twelve below are listed in the order they typically appear in retrospect — earliest-discovered first, latest-discovered last. Each is paired with a detective signal: something visible in the first 30–60 days that flags the mistake while it's still cheap to fix.
Implementation projects are particularly vulnerable to the expansion pattern because the WBS is usually written by the team building the system, not by the team adopting it. Adoption work is structurally invisible to the build team and consistently underestimated. Most of the twelve mistakes below are versions of this same blind spot.
| Mistake | Detective signal (visible in 30–60 days) | Approximate cost when missed |
|---|---|---|
| 1. WBS only covers build, not adoption | Adoption tasks not appearing in any sprint | 10–25% of total project effort |
| 2. Training scope listed as one task | No named training owner; no curriculum draft by week 6 | 2–4 weeks of unscheduled work |
| 3. Data migration as a single line item | No data inventory document by week 4 | 3–8 weeks of late-stage rework |
| 4. Integration work bundled with build | Integrations not separately tracked in standups | 20–40% of integration effort missed |
| 5. Testing scope undecomposed | No test plan with named owners and dated cycles | 1–3 weeks of compressed test phase |
| 6. Change management absent | No comms plan; no champion network | Post-launch adoption stalls |
| 7. Documentation deferred | Docs assigned to 'whoever is free at the end' | Operational handover slips by 2–4 weeks |
| 8. Vendor scope not decomposed | Vendor SOW pasted in as one WBS branch | Disputes over deliverables in month 3 |
| 9. Cutover and rollback not in WBS | No cutover runbook by week 8 | Launch day improvised |
| 10. Stabilization period missing | No work scheduled for first 30 days post-launch | Team disbanded into the next project; bugs unresolved |
| 11. PM and program overhead unscoped | Status reporting, governance, sponsor management not in WBS | PM burns 30–50% of capacity invisibly |
| 12. Decommissioning of legacy system absent | Legacy system still in use 3 months after go-live | Double licensing costs; user confusion |
How to use the table
The table is structured for a specific use: you've inherited a WBS, and you have 30–60 days before the consequences of any missing scope start to bite. Walk down the table. For each row, look for the detective signal in your project. If the signal is present, mark the row as a known gap and start working a fix. If the signal isn't present, move on.
In most retrospectives, four to seven of the twelve rows light up on a given project. The team that scores fewer than three is either unusually disciplined or hasn't run this audit before — re-read the table, slowly. The team that scores nine or more has a structural problem with how WBSs get written; the corrective is at the template level, not at the project level.
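The walk down the table can be encoded as a small checklist script. This is a minimal sketch: the signal strings are copied from the table, the scoring thresholds follow the retrospective heuristics above, and the names (`SIGNALS`, `audit`) are illustrative, not from any real tool.

```python
# Checklist sketch of the audit: one entry per row of the table.
SIGNALS = {
    1: "Adoption tasks not appearing in any sprint",
    2: "No named training owner; no curriculum draft by week 6",
    3: "No data inventory document by week 4",
    4: "Integrations not separately tracked in standups",
    5: "No test plan with named owners and dated cycles",
    6: "No comms plan; no champion network",
    7: "Docs assigned to 'whoever is free at the end'",
    8: "Vendor SOW pasted in as one WBS branch",
    9: "No cutover runbook by week 8",
    10: "No work scheduled for first 30 days post-launch",
    11: "Status reporting, governance, sponsor management not in WBS",
    12: "Legacy system still in use 3 months after go-live",
}

def audit(signals_present: set[int]) -> str:
    """Score one walk down the table: which detective signals fired."""
    gaps = sorted(signals_present)
    n = len(gaps)
    if n < 3:
        verdict = "unusually disciplined, or re-read the table, slowly"
    elif n >= 9:
        verdict = "structural problem: fix the WBS template, not this project"
    else:
        verdict = "typical project: mark each gap and start working a fix"
    return f"{n}/12 signals present {gaps}: {verdict}"

print(audit({1, 3, 5, 9, 11}))
```

Running this with five signals present reports a typical project and lists the gap rows to work in order.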
- Week 4, first audit: Walk through all twelve rows. Identify which detective signals are present. Write the findings as a one-page memo to yourself.
- Week 5, sponsor conversation: Bring the top three findings to the sponsor. Frame them as risks, not WBS errors. Ask for sponsor sign-off on adding the missing branches to the project.
- Weeks 6–8, fold into the WBS: For each accepted finding, add the work to the WBS, the schedule, and the budget. Update the scope statement to match. Re-baseline if the deltas are significant.
- Week 12, re-audit: Walk the twelve rows again. Some early gaps will have closed; new ones surface as the project's understanding deepens.
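The week-12 re-audit amounts to a diff of two walks down the table. A minimal sketch, with illustrative names (no real tool implied):

```python
def reaudit(week4: set[int], week12: set[int]) -> dict[str, list[int]]:
    """Compare two audits: which gaps closed, which are new, which persist."""
    return {
        "closed": sorted(week4 - week12),      # signals that no longer fire
        "new": sorted(week12 - week4),         # surfaced as understanding deepened
        "persistent": sorted(week4 & week12),  # still open: escalate these
    }

print(reaudit({1, 3, 5}, {3, 9, 10}))
# {'closed': [1, 5], 'new': [9, 10], 'persistent': [3]}
```

The persistent set is the one to escalate: a gap that survives eight weeks of attention is usually a sign the work has no owner, which is the original blind spot restated.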
Why these twelve, in this order
The ordering reflects the sequence in which these mistakes are discovered in retrospectives, not their severity. Mistakes 1–3 are the ones most often noticed first because their absence is visible in the first weeks. Mistakes 10–12 are noticed last because their consequences only appear in the post-launch window, when the team is already dispersing.
This asymmetry shapes how you respond. Early-discovered mistakes can be fixed by adding work to the WBS and re-baselining. Late-discovered mistakes can rarely be fixed at all; the corrective is to capture the lesson and apply it to the next project's WBS template. If your retrospective only catches mistakes 1–4, you're catching the cheap ones; the expensive ones (8–12) require a different kind of looking, one that examines the post-launch period explicitly.
For the template that prevents most of these structurally, see the executive WBS template; for the per-project diagnostic, see the WBS health assessment; for the upstream scope work, see the scope statement health assessment.