Guide · Feb 14, 2025 · 13 min read

How We Orchestrated a Modular Marketing Stack in 2025

Engineering playbook for building a composable marketing stack with a warehouse core, swappable modules, and weekly release trains.

Detailed walkthrough of the architecture, contracts, tooling bill, and operating cadence that keep Marsala’s go-to-market stack modular and resilient.

By Marsala Engineering Team
#Growth #Architecture #Data


Executive Summary

Our go-to-market stack runs like infrastructure. Every module—site, forms, email, analytics, paid media—plugs into the warehouse, ships through the same CI/CD pipeline, and can be replaced without a six‑month migration. This article documents the engineering playbook behind that outcome: the architecture decisions, contracts, tooling bill, release cadence, observability guardrails, and operating rituals that let us launch new growth modules every single week without breaking core funnels. Think of it as a field manual from the Marsala engineering team to any growth org that wants velocity without spaghetti.

Why We Rebuilt It

By late 2024 our marketing stack looked like most scaleups': a Frankenstein of point tools shoehorned together by well-intentioned ops folks. Every change required spreadsheets, duplicated audiences, and heroic QA. Engineering kept getting paged for problems we never built. We drew a hard line: the warehouse would become the single source of truth, every module would speak the same contract, and the stack would behave like LEGO bricks instead of poured concrete. Rebuilding meant rewriting large swaths of code, but the alternative was watching experimentation grind to a halt.

Guiding Principles

  1. Warehouse-first data flow. Nothing moves unless it reads from—and writes back to—the warehouse. That eliminates reconciliation fights and makes observability trivial.
  2. Stateless delivery surfaces. Frontends render from APIs and design tokens, not hardcoded assets. That lets us ship multi-brand experiences without forks.
  3. Explicit integration contracts. Every module publishes a JSON schema, an event spec, SLOs, and rollback instructions. Without contracts, composability is impossible.
  4. Automation and QA as code. There is no “manual playbook” separate from the repo. Workflows, n8n jobs, and regression tests live next to the feature code that depends on them.
  5. Replaceability as a requirement. We treat every vendor as a plug. If something breaks or pricing spikes, we can swap it by following the runbook.

Architecture Overview

At the heart sits the warehouse (BigQuery for long-term storage + Supabase for low-latency reads). Upstream we ingest from the site, product, paid media, CRM, and support channels via Segment. Downstream we expose contract-first APIs for the site, form handlers, and email automations. The public site is a Next.js/Turborepo workspace with shared design tokens and React Server Components. APIs run as Netlify serverless functions so the infrastructure footprint stays small and uniform. Workflow glue happens inside n8n; every job is versioned, linted, and referenced in documentation. The entire codebase ships through GitHub Actions with preview builds on Vercel for design review.

Data Core

  • Raw layer: Segment events, Salesforce exports, product usage, and support metrics land in BigQuery. Schema changes are caught via dbt tests.
  • Model layer: dbt compiles curated marts (leads, accounts, campaigns, creatives). Each mart has freshness budgets and ownership.
  • Activation layer: Hightouch syncs curated audiences to Ads/CRM, while Resend + PostHog pull directly from Supabase when we need real-time personalization.
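To make the activation layer concrete, here is a toy sketch of the reverse-ETL idea behind an audience sync (the real work happens inside Hightouch): diff the warehouse's curated audience against what the destination already holds and emit only the adds and removes. The function and identifiers are illustrative, not our production code.

```typescript
// Toy sketch of reverse-ETL audience diffing: compare the warehouse's
// curated audience against the destination and emit minimal changes.
type AudienceDiff = { toAdd: string[]; toRemove: string[] };

function diffAudience(warehouse: Set<string>, destination: Set<string>): AudienceDiff {
  // Members the warehouse knows about but the destination lacks.
  const toAdd = [...warehouse].filter((id) => !destination.has(id));
  // Members the destination still holds that fell out of the audience.
  const toRemove = [...destination].filter((id) => !warehouse.has(id));
  return { toAdd, toRemove };
}

// Example: warehouse says {a, b, c}; the ad platform currently has {b, c, d}.
const diff = diffAudience(new Set(["a", "b", "c"]), new Set(["b", "c", "d"]));
// diff.toAdd → ["a"], diff.toRemove → ["d"]
```

Syncing only the diff, rather than replaying the full audience, is what keeps downstream API quotas and sync latency manageable as audiences grow.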

Experience Layer

  • Web: Next.js app with shared layout primitives, MDX-driven research content, and App Router route handlers running on Netlify’s serverless runtime.
  • Forms + APIs: /api/contact and /api/waitlist are serverless handlers that validate inputs, log payloads, and push structured events to the warehouse.
  • Email: React Email components render transactional and journey content; Resend handles delivery with domain-authenticated senders.

Contracts and Schemas

Every module lives by a contract document that answers four questions:

  1. Shape of inputs/outputs. We publish JSON schema files along with TypeScript types so there’s no ambiguity when a module hands data to the warehouse.
  2. Event commitments. Each module declares the Segment events it emits, the properties it owns, and the SLO for their arrival.
  3. Failure behavior. We document fallbacks, retries, and escalation paths (i.e., which Slack channel is paged if /api/contact fails).
  4. Swap checklists. The runbook lists what to change if we swap vendors (DNS, keys, secrets, migration scripts, QA scenarios).

Contracts live in /contracts inside the repo and are validated in CI; if a module edits a contract without updating dependent code, the build fails.
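A minimal sketch of what that CI check does, assuming a pared-down, JSON-Schema-like contract (in a real pipeline you would reach for a full validator such as Ajv). The lead_created contract below is invented for illustration:

```typescript
// Pared-down contract checker: verifies required fields, declared fields,
// and primitive types against a JSON-Schema-like description.
type MiniSchema = {
  required: string[];
  properties: Record<string, "string" | "number" | "boolean">;
};

// Hypothetical contract for a lead_created event.
const leadCreatedContract: MiniSchema = {
  required: ["lead_id", "email", "source"],
  properties: { lead_id: "string", email: "string", source: "string", score: "number" },
};

function violations(payload: Record<string, unknown>, schema: MiniSchema): string[] {
  const problems: string[] = [];
  for (const key of schema.required) {
    if (!(key in payload)) problems.push(`missing required field: ${key}`);
  }
  for (const [key, value] of Object.entries(payload)) {
    const expected = schema.properties[key];
    if (!expected) problems.push(`undeclared field: ${key}`);
    else if (typeof value !== expected) problems.push(`${key}: expected ${expected}, got ${typeof value}`);
  }
  return problems;
}
```

In CI, a non-empty violations list for any sample payload fails the build, which is what forces contract edits and dependent code to land in the same pull request.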

Tooling Bill of Materials

| Layer | Tooling | Notes |
|-------|---------|-------|
| Experience | Next.js 15, Turborepo, Tailwind tokens, Storybook | Single workspace powering all public surfaces |
| Data | BigQuery, Supabase, dbt, Elementary | Warehouse-first with automated freshness + schema alerts |
| Automation | n8n (Fly.io), Segment, Hightouch | Declarative workflows with git-backed nodes |
| Messaging | Resend, React Email, PostHog Journeys | Authenticated sending, shared components |
| Observability | Metabase, Grafana, PagerDuty, Sentry | Unified health boards + budgets |

Delivery Workflow

  1. Backlog intake. Growth/marketing opens a Linear issue that includes goal metric, audience, entry points, and success SLO.
  2. Design + content pairing. Designers work in Figma using shared tokens; copy lives in MDX files or CMS docs.
  3. Implementation. Engineers create a pull request touching code, contracts, and workflow definitions. Side effects (e.g., dbt models, n8n jobs) are referenced directly in the PR checklist.
  4. Preview + QA. GitHub Actions builds a preview on Vercel, runs Playwright tests, lints n8n DAGs, and executes contract validation.
  5. Release train. Approved PRs merge to main; Netlify handles the production build. Feature flags gate new modules until analytics confirm baselines.
  6. Post-release audit. A 24-hour report tracks key KPIs, event health, and alert noise. If budgets hold, we flip the flag globally.

Implementation Timeline

The rebuild took twelve weeks with a squad of five engineers, one designer, and one RevOps partner. Week 1–2 focused on discovery, data lineage, and contract design. Weeks 3–6 delivered the new warehouse models and automation backbone. Weeks 7–9 rebuilt the public site, forms, and journeys atop the new APIs. Weeks 10–12 hardened observability, wrote replaceability runbooks, and migrated historical content. Throughout the project we maintained dual-run mode: the old stack served traffic while the new one shadowed every event, giving us confidence before the cutover.

Cost Considerations

Moving to a modular stack introduced predictable costs (warehouse storage, workflow compute, Netlify functions). Yet we eliminated three expensive SaaS suites and slashed agency retainers tied to manual ops. The net effect: a 28% reduction in annual growth tooling spend and far better unit economics per experiment. More importantly, the engineering team reclaimed ~25 hours per month previously spent babysitting bespoke integrations.

Observability and SLOs

We treat growth funnels like product features—each has SLOs. Examples:

  • Contact API latency: p95 < 750ms, error budget 0.1%.
  • Journey event freshness: 99% of lead_journey rows land within 5 minutes.
  • Web vitals: LCP < 2.3s / 2.7s (desktop/mobile).
  • Workflow uptime: n8n “lead-routing” DAG must succeed 99.5% of runs.

dbt + Elementary watch data SLAs, Sentry monitors APIs, and Grafana dashboards track core web vitals. Alerts route to PagerDuty with a rotation shared by engineering + RevOps so fixes remain collaborative.
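As a toy illustration of how budget-based alerting works, here is the arithmetic behind an SLO like "error budget 0.1%": given a window's total requests and failures, compute how much of the allowed failure budget has been burned. Names and thresholds echo the examples above but are illustrative only:

```typescript
// Toy error-budget math: how much of the allowed failure budget is burned?
function errorBudget(total: number, failures: number, budgetRatio: number) {
  const allowedFailures = total * budgetRatio;
  return {
    allowedFailures,
    remainingFailures: allowedFailures - failures,
    // 1.0 means the budget is fully exhausted; alert well before that.
    burned: allowedFailures > 0 ? failures / allowedFailures : 0,
    breached: failures > allowedFailures,
  };
}

// 100,000 requests at a 0.1% budget allows 100 failures; 40 failures burns 40%.
const status = errorBudget(100_000, 40, 0.001);
```

Tying each alert to a burn rate like this, rather than to raw error counts, is what lets us retire alerts nobody acts on without losing coverage.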

Runbooks and Replaceability

Every module ships with a “Replace me” runbook covering:

  • Secrets/keys to reprovision.
  • Terraform or manual steps for DNS, webhooks, domains.
  • Test matrix (forms, journeys, analytics) that must pass before switching.
  • Rollback instructions with manual toggles (feature flag + DNS TTL).

Because runbooks live beside the code, swapping vendors is a disciplined process. In the last year we switched payment processors, email enrichment providers, and CMS platforms without downtime.

Results We Track

  • Lead-to-SQL conversion: +34% after coherent scoring + routing.
  • Experiment velocity: average of 4 experiments per sprint (was 1).
  • Deployment cadence: weekly trains with a mean change failure rate <2%.
  • Ops load: time spent on manual exports dropped from 40h/month to <5h.
  • Content updates: marketing publishes new research posts without pinging engineering, thanks to standardized MDX + preview flows.

What We Learned

  • Contracts first. Without explicit schemas and SLAs, modularity is wishful thinking.
  • Rituals matter. Weekly demos, retros and on-call reviews keep the architecture healthy and the team aligned.
  • Documentation must be executable. If the runbook can’t be followed from the repo, it doesn’t count.
  • Observability needs budgets. Alert fatigue kills trust; we tie every alert to an error budget and retire the ones nobody cares about.
  • Invest in enablement. A composable stack is only valuable if marketing, ops, and product know how to use it; we host monthly enablement sessions and maintain a searchable glossary.

Risks and Mitigations

| Risk | Mitigation |
|------|------------|
| Contract drift between modules and warehouse models | Schema validation in CI + contract approval checklist |
| Vendor lock-in for automation tools | Swap runbooks plus weekly “LEGO drill” where we replace one integration end-to-end in staging |
| Orphaned feature flags | Feature-flag linter warns if a flag is older than six weeks or lacks an owner |
| Overlapping experiments | Central experimentation calendar + PostHog guardrails ensure cohorts do not collide |
| Onboarding fatigue | Dedicated enablement track with step-by-step labs for new hires |
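The feature-flag linter mentioned in the table reduces to a few lines once flags carry metadata. The registry shape below is an assumption for illustration, not our actual flag store:

```typescript
// Sketch of the feature-flag linter: warn when a flag is older than six
// weeks or has no owner. The Flag shape is a hypothetical registry entry.
type Flag = { name: string; owner?: string; createdAt: Date };

const SIX_WEEKS_MS = 6 * 7 * 24 * 60 * 60 * 1000;

function lintFlags(flags: Flag[], now: Date = new Date()): string[] {
  const warnings: string[] = [];
  for (const flag of flags) {
    if (!flag.owner) warnings.push(`${flag.name}: no owner assigned`);
    if (now.getTime() - flag.createdAt.getTime() > SIX_WEEKS_MS) {
      warnings.push(`${flag.name}: older than six weeks, review or retire`);
    }
  }
  return warnings;
}
```

Run as a CI step over the flag registry, the warnings land in the pull request, so stale flags surface before they rot into permanent branches.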

Org Enablement

Technology alone was not enough—we invested in people:

  • Weekly office hours where marketing and ops bring ideas; engineers pair to scope them.
  • Documentation sprints every quarter to refresh runbooks, design tokens, and API contracts.
  • Partner enablement: agencies get access to a read-only version of the repo so they can submit PRs instead of requesting screenshots.
  • Training badges: completion of “Modular Stack 101” is required before someone can merge changes that touch contracts.

What’s Next

We are stitching AI copilots into the stack to auto-generate experiment ideas, flag anomalies and prefill copy variants. The first prototype reads telemetry and contract metadata, then suggests next-best tests. If you have a growth stack that needs modularity plus intelligence, we’re happy to compare notes.


Want help applying this blueprint to your stack? Drop us a note and the Marsala engineering team will walk you through the migration plan.
