Launch Readiness Runbook for Calm Releases
Every launch starts with the same question: are we ready? This runbook gives me the answer.
Context
In the fast-paced world of product development, launches are a constant. But for a long time, ours were anything but smooth. Too many rushed releases led to broken funnels, frustrated customers, and exhausted teams. It was a recurring nightmare: a Product Marketing Manager (PMM) would announce, "We go live Friday!", triggering a frantic scramble in engineering, only for the growth team to discover broken forms, stale dashboards, and critical bugs hours after launch. The post-launch period was often a chaotic rush to fix issues, leading to missed targets and a loss of trust.
This unsustainable cycle highlighted a critical need for a more structured approach. We needed a way to ensure that every aspect of a launch was thoroughly vetted and ready before going live. That's why I developed a comprehensive launch-readiness scorecard. This scorecard covers five critical pillars: Brand, Web, Data, Automations, and Operations. The rule is simple: if any pillar fails to meet its readiness criteria, we pause the launch. This runbook has transformed our launch weeks from frantic, reactive events into calm, well-orchestrated processes. And yes, we've proudly delayed launches twice because the scorecard clearly indicated "no." It was absolutely worth it to prevent customer-impacting bugs and maintain our team's sanity.
Stack I leaned on
- Linear project with stage gates and required checklist tasks: We use Linear to manage our launch projects. Each launch is a project with clearly defined stage gates and a comprehensive list of required checklist tasks. This ensures that all necessary steps are completed before moving to the next stage.
- Notion scorecard template auto-calculating readiness per pillar: Our launch readiness scorecard lives in Notion. It's a dynamic template that automatically calculates the readiness score for each pillar (Brand, Web, Data, Automations, Operations) based on the completion of checklist items. This provides a real-time view of our launch readiness.
- Metabase dashboards for pre-launch KPIs and burn-down charts: We use Metabase to create dashboards that track our pre-launch Key Performance Indicators (KPIs) and burn-down charts. These dashboards provide critical insights into our progress and help us identify any potential bottlenecks.
- PagerDuty/Slack for launch guard duty and incident escalation: During launch week, we have a dedicated launch guard team. PagerDuty is used for on-call rotations and incident escalation, while Slack serves as our primary communication channel for real-time updates and coordination.
- n8n automations to remind owners when deliverables slip: To ensure accountability and keep things on track, we use n8n to automate reminders. If a deliverable is nearing its deadline or has slipped, n8n automatically pings the owner, ensuring that no task falls through the cracks.
- Statuspage clone to communicate readiness to execs: For executive-level communication, we maintain a Statuspage clone. This provides a high-level, transparent overview of our launch readiness, allowing executives to quickly grasp the status without getting bogged down in details.
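The slip-reminder rule that the n8n workflow encodes can be sketched as a small function. This is a hypothetical sketch: the three-day warning window, field names, and handles are assumptions, not the actual workflow config.

```python
from datetime import date, timedelta

# Hypothetical sketch of the slip-reminder rule the n8n workflow encodes:
# ping the owner when a deliverable is incomplete and due within 3 days
# (or already overdue). Field names and the window are assumptions.

def owners_to_ping(deliverables, today, window_days=3):
    """Return handles of owners whose deliverables are at risk."""
    cutoff = today + timedelta(days=window_days)
    return [
        d["owner"]
        for d in deliverables
        if d["status"] != "done" and d["due"] <= cutoff
    ]

deliverables = [
    {"owner": "@sofia", "due": date(2024, 5, 10), "status": "done"},
    {"owner": "@leo", "due": date(2024, 5, 11), "status": "open"},   # due soon
    {"owner": "@mila", "due": date(2024, 5, 30), "status": "open"},  # not yet
]

print(owners_to_ping(deliverables, today=date(2024, 5, 9)))  # ['@leo']
```

In the real workflow, the list of at-risk owners feeds the Slack ping step, so nobody has to chase deliverables manually.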
Readiness Dimensions
- Brand & Messaging
- Creative approved, legal reviewed, translations in place.
- Launch narrative documented (problem, proof, CTA).
- Web & Product Surfaces
- Pages built with Lighthouse ≥ 90.
- Feature flags staged, rollback plan documented.
- Data & Analytics
- Tracking plan PRs merged, QA screenshots attached.
- KPI dashboards seeded with test data, alerts tuned.
- Automations & Enablement
- Lifecycle emails/plays QA'd, CRM fields ready, CSM enablement done.
- Operations & Support
- Support macros published, billing/pricing toggles tested, runbooks ready.
Each dimension owns a checklist and risk score (Green/Amber/Red). Red blocks launch; Amber requires exec sign-off.
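The gate rule above can be sketched as a tiny function. Pillar names come from the scorecard; the sign-off flag and return labels are illustrative.

```python
# Minimal sketch of the gate rule: Red blocks the launch outright, Amber
# passes only with exec sign-off, all-Green passes unconditionally.

def gate_decision(pillar_scores, exec_signoff=False):
    """pillar_scores maps pillar name -> 'Green' | 'Amber' | 'Red'."""
    if any(s == "Red" for s in pillar_scores.values()):
        return "block"
    if any(s == "Amber" for s in pillar_scores.values()):
        return "go" if exec_signoff else "needs-exec-signoff"
    return "go"

scores = {"Brand": "Green", "Web": "Amber", "Data": "Green",
          "Automations": "Green", "Ops": "Green"}
print(gate_decision(scores))                     # needs-exec-signoff
print(gate_decision(scores, exec_signoff=True))  # go
```

The point of keeping the rule this simple is that there is no room to argue on launch day: one Red anywhere stops everything.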
Weekly Cadence
- T-28 days: Kickoff, assign owners, populate scorecard baseline.
- T-21: Mid-sprint review; update risks, ensure dependencies unblocked.
- T-14: Dry run of product demo + content, finalize pricing/legal.
- T-7: "Gate review" meeting: scorecard must be ≥ 80% green to proceed.
- T-2: Fire drill + smoke tests; launch guard roster confirmed.
- T+1: Post-launch standup, confirm metrics, release recap email.
- T+7: Retro with action items + playbook updates.
Playbook
- Populate scorecard: owners fill status, attach evidence (screenshots, links).
- Risk scoring: 1-5 scale for impact × likelihood; auto-calc risk heatmap.
- Run smoke tests: API, forms, payments, integrations; record Loom evidence.
- Execute fire drill: simulate major incident (e.g., lead form outage) 48h prior.
- Staff launch guard: assign IC, comms lead, resolver; share on-call doc.
- Launch & monitor: use live dashboards + Slack bot to publish updates hourly.
- Post-launch plan: backlog of follow-ups, adoption metrics, retro schedule.
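The risk-scoring step above can be sketched as follows. Impact and likelihood are each on a 1-5 scale and multiplied; the heatmap bucket boundaries are assumptions, chosen so that scores of 9 or more (the pause threshold used in the risk matrix) land in red.

```python
# Sketch of the playbook's risk scoring: impact x likelihood, bucketed
# for the heatmap. Bucket boundaries are assumptions, aligned with the
# "pause at >= 9" rule from the risk matrix.

def risk_score(impact, likelihood):
    """Both inputs are on a 1-5 scale."""
    return impact * likelihood

def heat(score):
    if score >= 9:
        return "red"    # launch pauses until mitigated
    if score >= 5:
        return "amber"  # mitigation plan required
    return "green"

for name, impact, likelihood in [
    ("Tracking PR pending", 4, 3),
    ("Support understaffed", 4, 2),
]:
    s = risk_score(impact, likelihood)
    print(f"{name}: {s} -> {heat(s)}")
```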
The Benefits of a Launch Readiness Runbook
- Reduced launch risk: By systematically checking all aspects of a launch, we significantly reduce the risk of critical bugs and post-launch incidents.
- Improved cross-functional collaboration: The runbook forces teams to work together and communicate effectively, breaking down silos and fostering a shared sense of ownership.
- Increased transparency: The scorecard and dashboards provide a clear, real-time view of launch readiness to all stakeholders, from individual contributors to executives.
- Empowered decision-making: With clear data and a structured process, we can make informed go/no-go decisions, even if it means delaying a launch.
- Faster incident response: By having a dedicated launch guard and clear escalation paths, we can respond to and resolve any post-launch incidents much faster.
- Continuous improvement: The post-launch retro ensures that we learn from every launch and continuously improve our processes.
Roles & Responsibilities
- Launch Captain (PMM): owns scorecard, facilitation, exec updates.
- Tech Lead: ensures feature flags, rollback scripts, observability, and runbooks exist.
- Growth Lead: verifies funnels, attribution, campaigns, and experiments are ready.
- Data Lead: signs off on tracking plan, DBT tests, and KPI dashboards.
- Ops/Sales Enablement: trains CSMs/AEs, updates pricing/billing, ensures support macros exist.
Each role has a backup and comes to the go/no-go meeting prepared to defend their status.
Tooling Automations
- Linear automation: tickets cannot close until their attached checklist tasks are marked complete (via custom script).
- Slack bot: `/launch status` returns readiness score, blockers, guard roster.
- PagerDuty schedule: `launch-guard` service rotates IC/comms/resolver during the launch window.
- Metabase: auto-refresh dashboards pinned in #launch-control every hour.
- n8n: sends D-7, D-3, D-1 reminders plus collects retro notes via form.
Automation removes the need for manual follow-up.
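The payload behind the `/launch status` command can be sketched as a plain formatter. This is a hypothetical sketch: the real bot would read the Notion scorecard and PagerDuty APIs, and the pillar data and handles below are illustrative.

```python
# Hypothetical sketch of the /launch status payload. Inputs are plain
# dicts so the assembly logic is visible; the Slack wiring is omitted.

def launch_status(pillars, guard_roster):
    """pillars maps name -> {'readiness': int, 'blockers': [str]}."""
    overall = sum(p["readiness"] for p in pillars.values()) // len(pillars)
    blockers = [b for p in pillars.values() for b in p["blockers"]]
    lines = [f"Overall readiness: {overall}%"]
    lines += [f"Blocker: {b}" for b in blockers] or ["No blockers"]
    lines.append("Guard roster: " + ", ".join(guard_roster))
    return "\n".join(lines)

pillars = {
    "Web": {"readiness": 85, "blockers": ["Lighthouse 88 on /pricing"]},
    "Data": {"readiness": 70, "blockers": ["Tracking plan PR open"]},
}
print(launch_status(pillars, ["@ana (IC)", "@joao (comms)"]))
```

Returning one compact string keeps the command useful from a phone mid-launch: score, blockers, and who is on guard, nothing else.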
Scorecard Snapshot (Notion)
| Pillar | Owner | Readiness % | Top Risks | Evidence |
|--------|-------|-------------|-----------|----------|
| Brand | @sofia | 90% | Localization pending for DE | Figma link |
| Web | @marina | 85% | Lighthouse 88 on /pricing | Chromatic run 142 |
| Data | @leo | 70% | Tracking plan PR #812 open | dbt docs |
| Automations | @mila | 100% | None | Resend test suite |
| Ops | @carlos | 95% | Billing team on-call gap | PagerDuty schedule |
Numbers roll up into an overall readiness score; anything <85% triggers exec review.
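The roll-up and the exec-review trigger can be sketched as below. Equal pillar weights and the per-pillar check are assumptions, since the exact Notion formula isn't shown here.

```python
# Sketch of the exec-review trigger: overall score is the (assumed
# equal-weight) average of pillar percentages; either an overall score
# or any single pillar below 85% flags the launch for exec review.

def exec_review_needed(pillars, threshold=85):
    overall = sum(pillars.values()) / len(pillars)
    low = [name for name, pct in pillars.items() if pct < threshold]
    return overall < threshold or bool(low), overall, low

# Numbers from the scorecard snapshot above.
pillars = {"Brand": 90, "Web": 85, "Data": 70, "Automations": 100, "Ops": 95}
flagged, overall, low = exec_review_needed(pillars)
print(f"overall={overall:.0f}%, flagged={flagged}, low pillars={low}")
# overall=88%, flagged=True, low pillars=['Data']
```

With the snapshot numbers the overall score clears 85%, but Data at 70% still forces the review, which is exactly the behavior you want from a gate.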
Sample Checklist Items
- Brand: âFinal hero copy approved by legal,â âMedia kit zipped + distributed.â
- Web: âForms pass QA on top 5 browsers,â âFeature flag rollback doc linked.â
- Data: âSegment events merged,â âMetabase chart verifies sample data.â
- Automations: âPlaybooks tested in staging,â âCRM fields live with defaults.â
- Ops: âSupport macros published,â âBilling toggles simulated in sandbox.â
Each checklist item requires a link or screenshot; empty fields fail the gate.
Risk Matrix
| Risk | Likelihood | Impact | Mitigation | Owner |
|------|------------|--------|------------|-------|
| Tracking PR pending | Medium | High | Block launch, add reviewer mob | Data lead |
| Support understaffed | Low | High | Pull SDRs for weekend coverage | Ops |
| Billing toggle unknown | Medium | Medium | Simulate in sandbox + record Loom proof | Engineering |
| Messaging legal approval | High | Medium | Daily check-ins with counsel | PMM |
We update the matrix at every gate review. If impact × likelihood ≥ 9, the launch pauses until mitigation is in place.
Fire Drill Template
- Trigger: e.g., lead form 500 error.
- Participants: dev, ops, support, comms.
- Timeline: 30-minute simulation.
- Objectives: Validate detection, comms, rollback.
- Output: Notion doc with gaps (missing scripts, unclear on-call, etc.).
Running drills forced us to write missing runbooks before launch day.
Metrics & Telemetry
- Strategic launch delays: We've strategically delayed 2 launches due to readiness issues, preventing critical customer-impacting bugs.
- Reduced critical incidents: Critical incidents after launch have been reduced by 80%.
- Exceeded adoption forecasts: Post-launch adoption has exceeded forecasts by 14%, indicating successful and smooth rollouts.
- High internal satisfaction: Internal satisfaction with the launch process is consistently at 9/10.
- Eliminated ownership ambiguity: We've had zero "unknown owner" issues during launches.
- Faster launch recaps: The average time to publish a launch recap has been reduced from 24 hours to 4 hours.
We review metrics quarterly to keep leadership bought in.
Post-Launch Evaluation
- Adoption review: compare actual vs. forecast KPIs within 24 hours; annotate dashboards with context.
- Incident recap: even if none occurred, confirm guard roster logs and update runbooks with lessons learned.
- Customer feedback: compile support tickets, social mentions, NPS comments in a Notion database.
- Debt backlog: capture hacks or shortcuts, assign owners in Linear with due dates.
- Retro: start/stop/continue meeting with all owners. Publish notes + decisions; update templates immediately.
This closes the loop so each launch makes the next easier.
Communication Plan
- Launch control Slack channel pinned with runbook, guard roster, dashboards.
- Executive digest auto-posted daily with readiness score and blockers.
- Statuspage clone showing color-coded pillars for stakeholders outside Slack.
- Go/No-Go meeting the day before launch where each owner says "Ship" or "Block" with rationale.
Transparency keeps surprises low.
Lessons Learned
- The scorecard isn't bureaucracy; it's a structured conversation.
- Celebrate when someone raises a red flag early: it's courage, not friction.
- Evidence matters; no checklist item counts without proof.
- Assign backups for every owner; vacations shouldn't derail readiness.
- Keep retros short but mandatory; feed improvements back into template.
FAQ
- Do we ever skip the scorecard? Only for hotfixes/incident comms. Anything customer-facing goes through the runbook.
- What if a pillar is Amber but execs want to launch? Execs can override, but we log the decision and its risk in Notion so accountability is preserved.
- How do you handle dependencies on other teams? We embed their tasks into our Linear project and give them visibility into the same dashboard; no off-the-books work.
What I'm building next
I'm sharing my Notion template (scorecard + fire-drill + on-call roster) and a Linear workflow that locks "Launch" tickets until all five pillars submit evidence. Want it? Leave me your email.
Want me to help you replicate this module? Drop me a note and we'll build it together.