Reading the Estimation Pattern in Last Year's Enterprise Campaign Slips
Enterprise marketing executives have a year of slipped campaigns to learn from. A moderate retrospective playbook for reading the estimation pattern and changing the gate.
Last year's slip pattern is this year's gate criteria
Enterprise marketing teams don't slip randomly. They slip in patterns. Read the pattern, and the next gate writes itself.
Once a year, an enterprise marketing executive has the data they need to fix the estimation pattern at greenlight, and almost no organization uses it. The data is the previous year's campaigns: which ones slipped, by how much, and in what specific phase. Patterns repeat. Reading them takes an afternoon. The output is a sharper greenlight gate for the year ahead.
This playbook is the moderate version of that retrospective. It's structured around four diagnostic questions — each one applied to the previous twelve months of campaign launches. The output is concrete enough to change the next quarterly greenlight conversation.
- Step 1: Pull the launch table. Every campaign launched in the previous twelve months. Columns: original committed date, actual ship date, slip in days, primary reason given for slip.
- Step 2: Cluster the slip reasons. Group similar reasons together. Most organizations will find five or six recurring patterns: legal review delay, asset rework, regional approval, partner delay, reprioritization, scope creep.
- Step 3: Calculate phase contribution. For each campaign, identify which project phase absorbed the slip: concept, production, review, or launch readiness. Find the phase that contributed the most slip overall.
- Step 4: Translate findings to gate criteria. Each top recurring slip cause becomes a question on next year's greenlight gate. The most-affected phase becomes a mid-campaign checkpoint.
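Steps 1 through 3 amount to a small aggregation over the launch table. A minimal sketch in Python, using an illustrative table (the campaign names, reasons, and dates are hypothetical; any spreadsheet export with the same columns would work):

```python
from collections import defaultdict
from datetime import date

# Step 1: the launch table. Each row is one campaign with its original
# committed date, actual ship date, the primary slip reason given, and
# the phase that absorbed the slip. All rows here are made-up examples.
launches = [
    {"campaign": "Q1 brand push", "committed": date(2024, 2, 1),
     "shipped": date(2024, 2, 22), "reason": "legal review delay",
     "phase": "review"},
    {"campaign": "Spring partner promo", "committed": date(2024, 4, 15),
     "shipped": date(2024, 5, 20), "reason": "partner delay",
     "phase": "production"},
    {"campaign": "Global refresh", "committed": date(2024, 6, 1),
     "shipped": date(2024, 6, 29), "reason": "regional approval",
     "phase": "review"},
    {"campaign": "Fall launch", "committed": date(2024, 9, 10),
     "shipped": date(2024, 9, 10), "reason": None, "phase": None},
]

# Step 2: cluster total slip days by reason.
slip_by_reason = defaultdict(int)
# Step 3: attribute the same slip days to the phase that absorbed them.
slip_by_phase = defaultdict(int)

for row in launches:
    slip = (row["shipped"] - row["committed"]).days
    if slip > 0:  # on-time campaigns contribute nothing
        slip_by_reason[row["reason"]] += slip
        slip_by_phase[row["phase"]] += slip

# Rank causes by total days of slip contributed; the top entries
# become the gate questions in step 4.
ranked = sorted(slip_by_reason.items(), key=lambda kv: -kv[1])
for reason, days in ranked:
    print(f"{reason}: {days} days")
```

Ranking by total days of slip, rather than by count of slipped campaigns, keeps one six-week partner delay from being outvoted by several two-day reprioritizations.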
What you'll find
Most enterprise creative organizations, when they run this retrospective, find three or four patterns:
- Legal and compliance review is the single largest source of slip on campaigns in regulated industries. The slip is rarely measured against the legal team's actual SLA; it's measured against the marketing team's hopeful estimate of legal turnaround.
- Regional approval cycles are the second-largest source on global campaigns. Regions are usually pulled in too late, and the resulting late-stage revision loops cost more than an early pre-brief would have.
- Partner asset delivery is the most variable source. When it slips, it slips by weeks, not days, and the marketing team has limited leverage.
- Late scope additions are the most preventable. They appear on roughly half of slipped campaigns, and they are almost always introduced after greenlight by stakeholders who weren't in the greenlight meeting.
The specific reasons matter less than the pattern: the same three or four causes account for the majority of slip, year after year. A gate that doesn't ask about them is letting them through.
Translating findings into gate criteria
For each of the top recurring slip causes, write one question for next year's greenlight gate.
For legal review: what is the campaign's legal review timeline based on — the legal team's stated SLA or the marketing team's estimate? If the answer is the marketing team's estimate, refuse to set a launch date until the legal team has committed in writing.
For regional approval: which regions have been pre-briefed, and which will see the campaign at final review? Any region in the second category is a slip risk.
For partner asset delivery: what is the contractual delivery date, and what is the worst-case if the partner misses? If the worst-case isn't documented, the campaign isn't ready for greenlight.
For late scope additions: which stakeholders are not in this room, and what's the protocol for adding scope after this meeting? If the protocol is informal, scope will be added informally.
“We did the retrospective once. The next year's greenlight gates included three new questions, all derived from the patterns. Average campaign slip dropped by 40%. We didn't get smarter — we got more specific about what to ask before signing.”
Retrospective output
- Launch table for the past twelve months with original-vs-actual dates and slip reasons
- Top three to five recurring slip causes, ranked by total days of slip contributed
- Phase-level analysis showing which project phase absorbs the most slip
- One specific gate question added for each top recurring cause
- A standing review of the same retrospective at the same time next year, comparing year-over-year
Why annual is the right cadence
The retrospective only works at the annual cadence. Quarterly is too short: with only a handful of launches, slip patterns are mostly noise, and you'll over-correct on a one-off. Multi-year is too long: by the time you've collected three years of data, the team has changed, the partner mix has changed, and the patterns no longer apply.
Annual gives you twelve to twenty campaigns of data, enough to see the pattern through the noise, and recent enough that the team running the next year's campaigns is the same team whose slips you're studying. The retrospective is for them, not about them — that distinction is what makes it actionable rather than blame-finding.
For the preventive question to ask at your next greenlight, see the preventive greenlight question; for the same pattern viewed in a different industry, see the hardware checklist.