A User Stories Retrospective Checklist for New PMs on Startup Campaigns
A focused retrospective checklist for new PMs at startups — looking back at the user stories of a just-completed campaign to identify where silent disagreement crept in, and what to change for the next one.
An hour of retrospective work that sharpens every campaign that follows
Most startup campaigns end without anyone going back to the user stories. The teams that do — even briefly — write better stories on the next campaign.
New PMs who've just finished their first or second startup campaign rarely go back to the original user stories afterward. The campaign launched, the team moved on, and the stories sit in a folder somewhere. The retrospective takes an hour and pays for itself across every subsequent campaign by surfacing the exact patterns of silent disagreement the team falls into.
This checklist is for the new PM who has just wrapped a campaign and has 60 minutes to spend on learning. It's structured as eight questions to ask of the original stories, with a note on what each question typically reveals.
Eight questions to ask of the original stories, in order
- For each story, did the team's actual output match the original story? If not, did the story evolve formally (with a documented update) or informally (with no update)? Informal evolution is silent disagreement.
- Which stories had the largest gap between original and actual? Cluster the stories by gap size. The biggest gaps often have a common root cause — a specific stakeholder, a specific topic, a specific kind of ambiguity.
- Were the largest gaps in the 'what' or the 'who'? Gaps in 'what' are scope disagreements; gaps in 'who' are audience disagreements. The two have different correctives.
- Did the 'what it's not' fields catch real disagreements during the project? If yes, the format is working — keep it. If the 'what it's not' fields were sparse or unused, the format isn't being applied; the corrective is process, not template.
- Which open questions were resolved, which were resolved late, and which were never resolved? Late or never-resolved open questions are usually where the campaign drifted from the original stories.
- Were any stories silently dropped? Stories that started in the original list and didn't appear in the final campaign — without an explicit drop decision — are the most diagnostic data point.
- Did stakeholders revisit the stories during the campaign, or only at sign-off and launch? Stories that aren't revisited drift; stories that are revisited stay aligned. The cadence is itself a structural choice.
- What's the one change that would have prevented the largest gap? Constrain the answer to one change, not many. The constraint forces prioritization.
How to spend the hour
- Hour 1: walk through the stories. 60 minutes, alone or with one other team member. Walk each story against questions 1-7. Take notes.
- End of hour 1: answer question 8. 5 minutes. Constrain to one change.
- Next campaign: apply the change. The change is the only difference from the previous campaign. Track whether it works.
- After the next campaign: re-run the retrospective. Run the same eight questions. The patterns will have shifted.
What this typically reveals
New PMs running this retrospective for the first time usually find one or two recurring patterns. The most common: the same stakeholder is the source of most silent disagreements (because their language is consistently more abstract than the team's, or because they don't read the stories carefully). The corrective is structural — that stakeholder gets a different kind of conversation, not a better story format.
The second-most-common pattern: 'what it's not' fields stayed blank for exactly the stories that turned out to be the source of scope expansion. The corrective is process — make 'what it's not' a required field, not a recommended one.
The third-most-common pattern: stories were never re-read after sign-off. By launch, the team was operating from memory of the stories, not from the stories. The corrective is cadence — add a mid-campaign re-read to the team's process.
For the template that the retrospective is reflecting on, see the new-PM campaign template; for the parallel work on implementation projects, see the delivery lead implementation template; for the executive-level intervention, see the executive playbook on user stories.