Vizually
Article · Project Lifecycle · 5 min read

Spotting Go/No-Go Problems Early on Mid-Size Hardware and Construction Projects

On hardware and construction projects, the data that proves a go decision wrong is usually present by week six. A hands-on detective playbook for delivery managers who want to find it.

Vizually Team · Initiation & Chartering

The signals that prove the gate was wrong are visible in week six

By month three, every hardware project team knows whether the gate decision was correct. By month six, they're trying to find a way to say so without saying so.
Vizually editorial

On mid-size hardware and construction projects, a wrong go decision is rarely an information failure. The data needed to call it is usually present within six weeks of the gate. The failure is detection — nobody whose job is to spot the estimation pattern is also looking at the right signals at the right time.

This playbook is for the delivery manager who has just cleared a gate they suspect was wrong. It's structured around three signal classes that, taken together, give you defensible evidence to escalate within 60 days — early enough to recover, late enough to have real data.

Three signal classes, watched in parallel

The signals fall into three categories. The detective discipline is to watch all three, weekly, for the first eight weeks after the gate. Any single one in isolation is noise; two or more in combination is data.

Signal class 1: lead-time slippage on long-lead items. Hardware projects depend on components ordered against lead times stated at the gate. By week 4, you should have supplier confirmation that the stated lead times still hold. If even one major component has slipped by 10% or more, that's a class-1 signal. By week 8, if two or more components have slipped, the gate's schedule is already wrong and the team is just absorbing the slippage informally.
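
The class-1 red line can be expressed as a simple check. As a minimal sketch, with the 10% threshold taken from the rule above and all function names and inputs invented for illustration:

```python
# Illustrative sketch of the class-1 check; names are hypothetical.

def lead_time_slip(planned_weeks: float, confirmed_weeks: float) -> float:
    """Fractional slippage of a confirmed lead time against the
    lead time stated at the gate."""
    return (confirmed_weeks - planned_weeks) / planned_weeks

def class_1_signal(components, threshold=0.10):
    """True if any major component's lead time has slipped by the
    threshold (10%) or more. `components` is a list of
    (planned_weeks, confirmed_weeks) pairs."""
    return any(lead_time_slip(p, c) >= threshold for p, c in components)
```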

Signal class 2: integration discovery rate. Every hardware project has discoveries during integration — interfaces that need adjustment, specifications that turn out to be ambiguous, tolerances that need rework. The gate assumed some rate of discovery; track the actual rate against it. If the discovery rate in the first 30 days of integration is 50% higher than the gate assumed, the integration phase will overrun materially. This is the signal teams most often dismiss because individual discoveries feel manageable.
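
The class-2 rule above can be sketched the same way. The names and the 30-day window framing are illustrative, not from any real tracker:

```python
def class_2_signal(discoveries_first_30_days: int,
                   gate_assumed_discoveries: int) -> bool:
    """True if the integration discovery count over the first 30 days
    runs 50% or more above what the gate assumed for that window."""
    return discoveries_first_30_days >= 1.5 * gate_assumed_discoveries
```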

Signal class 3: regulatory and inspection responsiveness. The gate assumed inspector and reviewer turnaround times. Track actual turnaround. If the first two interactions with regulators or inspectors take longer than the gate assumed, every subsequent interaction will too. This signal is the single best predictor of regulatory-driven slip — and it's available within four weeks of the first submission.
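
A sketch of the class-3 check, under the same caveat that the names are invented for illustration:

```python
def class_3_signal(cycle_days, gate_assumed_days) -> bool:
    """True if the first two regulator/inspector interactions both
    took longer than the gate assumed. `cycle_days` lists elapsed
    days per interaction, oldest first."""
    first_two = cycle_days[:2]
    return len(first_two) == 2 and all(d > gate_assumed_days for d in first_two)
```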

  1. Week 1 post-gate
    Set up the dashboard
    Three numbers, one page. Component lead times (planned vs actual), integration discoveries per week, and regulator response time per cycle.
  2. Week 4
    First read
    Sit with each number. Ask: 'is this trending in line with the gate's assumptions?' Document the answer.
  3. Week 6
    Pattern check
    Look for two or more signal classes showing slippage. If yes, prepare a structured escalation. If no, continue weekly tracking.
  4. Week 8
    Decision point
    If the patterns held, escalate to steering with a recommendation: continue with revised commitments, pause for re-scoping, or no-go. Bring the data, not the conclusion.
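
The week-6 pattern check (one class alone is noise, two or more together is data) can be sketched as a small rule; the function and the returned labels are illustrative:

```python
def pattern_check(signals: dict):
    """`signals` maps each signal-class name to True if its red line
    has been crossed. Two or more crossed together is data; one in
    isolation is noise."""
    fired = sorted(name for name, crossed in signals.items() if crossed)
    if len(fired) >= 2:
        return "prepare structured escalation", fired
    return "continue weekly tracking", fired
```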

Why this works on hardware specifically

Hardware and construction projects have a property that software projects often lack: meaningful physical signals available in the first 30–60 days. Components are ordered or they aren't. Inspectors respond on day three or day twelve. Integration discoveries either spike or stay flat. The signals are concrete, and they're early enough to act on.

What makes them hard to read is that, individually, each signal feels absorbable. A two-week supplier delay isn't an emergency. A handful of integration discoveries is normal. A slow first inspection is just a bad first inspection. Detective work is the discipline of treating two or three of these together as data rather than as separate inconveniences.

The team running the project is structurally bad at this work — they're absorbed in execution, and they have a stake in believing the gate decision was right. The delivery manager's role is to do the work nobody on the team has time to do, and to do it weekly while the data still matters.

Detective dashboard structure

  • Long-lead component status: planned lead time, current confirmed lead time, percentage variance
  • Integration discoveries per week: count, with brief category labels
  • Regulatory cycle time: each interaction's elapsed days, vs the gate's assumption
  • Trend lines for each, week over week
  • A 'red line' threshold for each — the value above which it counts as a class signal
  • A weekly 15-minute review with the project manager to read the dashboard
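
One way to hold those items, sketched as a plain Python tracker; the class, field names, and thresholds below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DashboardRow:
    """One tracked number on the one-page dashboard: its gate
    assumption, its red-line threshold, and the weekly history."""
    name: str
    gate_assumption: float
    red_line: float          # at or above this, it counts as a class signal
    weekly_values: list = field(default_factory=list)

    def record(self, value: float) -> None:
        self.weekly_values.append(value)

    def crossed(self) -> bool:
        """Has the latest reading crossed the red line?"""
        return bool(self.weekly_values) and self.weekly_values[-1] >= self.red_line

    def trend(self) -> list:
        """Week-over-week deltas, for the trend-line column."""
        v = self.weekly_values
        return [b - a for a, b in zip(v, v[1:])]
```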

Escalation discipline

The hardest part of detective work is the escalation. By week 8, you have data. The team has invested. Steering has signed off. Saying 'the gate was wrong' carries political cost. The discipline that makes the escalation tractable is structural: bring the data, not the conclusion.

A structured escalation looks like this: a one-page summary of the three signal classes, with the gate's original assumptions on one side and the actual data on the other. A two-paragraph interpretation written in cautious language. Three options framed for the steering committee — continue with revised commitments (cost: schedule and budget reset), pause for re-scoping (cost: program delay), or no-go (cost: sunk investment; gain: the freed opportunity). The committee picks; you don't.
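
As a sketch of how that one-pager might be rendered as plain text (the row and option values used in the example are made up for illustration):

```python
def escalation_one_pager(rows, options):
    """rows: (signal name, gate assumption, actual) triples.
    options: (option, cost) pairs framed for the steering committee.
    Returns the one-page summary as plain text."""
    lines = [f"{'Signal':<30}{'Gate assumed':>14}{'Actual':>10}"]
    for name, assumed, actual in rows:
        lines.append(f"{name:<30}{assumed:>14}{actual:>10}")
    lines.append("")
    lines.append("Options (the committee picks; you don't):")
    for option, cost in options:
        lines.append(f"  - {option} (cost: {cost})")
    return "\n".join(lines)
```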

Framing the escalation as data plus options, not as a recommendation, is what makes it possible to say 'the gate was wrong' without it landing as personal criticism of the original decision. The data is the criticism, dressed in numbers; the options give the committee a face-saving path forward.

When the data doesn't support escalation

The playbook also fails gracefully. If you watch the three signal classes for eight weeks and they all stay within the gate's assumptions, you have evidence that the gate was correct — which is also useful. The dashboard doesn't go away after week 8; it becomes a continuing health check, lighter, run monthly, that gives the steering committee confidence that the project is on track. That confidence is itself valuable, because it makes future detective signals more credible when they appear.

For the prevention side of this work, see the hardware go/no-go checklist. For the executive view of why these signals get missed, see five executive go/no-go mistakes. The patterns translate across project types; the signals are different, but the discipline is the same.


Related reading

  • The hardware go/no-go checklist
  • The campaign retrospective worksheet
  • Five executive go/no-go mistakes