April 12, 2026 · Architecture

Building an Awards Platform from Zero: Pulse Awards Architecture

A full-stack awards platform with submissions, payments, judging dashboard, and 9 automated emails, built on Astro + Sanity + Stripe + Resend in six weeks on an association budget.

Every awards platform on the market is either a bloated enterprise tool no team under 50 employees can afford or a glorified Google Form. The African American Marketing Association needed something in the middle: a full-stack awards platform with submission intake, Stripe payments, a judging dashboard with scoring rubrics, and nine automated emails across the lifecycle — launched in time for the 2026 submission window, on an association budget.

What we shipped is Pulse Awards: 11 public pages on Astro 5, a Sanity CMS backbone with 13 schemas, Stripe Checkout for submission fees, a 7-section admin dashboard, and Resend for every transactional email. One person built the whole thing in six weeks. Here's how the architecture worked.

The constraints shaped the stack

Three constraints drove every decision:

  1. Non-technical staff. The AAMA team is marketers, not engineers. They needed to manage submissions, judges, and content without opening a code editor.
  2. Fast launch. Submissions opened in early 2026. The build had to be production-ready in weeks, not months.
  3. Association budget. No $5K/month SaaS. Total monthly run cost had to sit under $100 for the first year.

These constraints ruled out the obvious options. Enterprise awards platforms (Award Force, OpenWater, Submittable) run $10K+ per cycle and require a sales call. Off-the-shelf form builders (Typeform, JotForm) can't handle judging workflows. WordPress plugins require hosting and maintenance overhead the AAMA team wouldn't absorb. What fit the gap was a custom build on a thin stack: Astro for the public site, Sanity for the data layer, Stripe for payments, Resend for email, Cloudflare Workers for SSR.

The stack decision tree

  • Astro 5, not Next.js. Astro's islands architecture meant the 90% of the site that was static (marketing pages, rules, judge bios) shipped with zero JavaScript. Only the submission flow and judge dashboard were React islands. Small bundles, cheap hosting.
  • Sanity, not Airtable. Airtable is seductive because it ships with a dashboard, but its content API is tuned for internal tools. Sanity gave us a proper CMS with type safety, a studio for editors, and GROQ queries that handled the judge dashboard's joins without needing a separate backend.
  • Stripe Checkout, not Payment Links. Payment Links would have been faster but can't attach metadata to a submission record. Stripe Checkout with a webhook into Sanity let us create submission and payment records in the same transaction.
  • Resend, not SendGrid. Resend's templating is React-based, which meant the transactional emails lived in the same codebase as the site. No separate template editor, no divergence between what the designer built and what the emails looked like.
  • Cloudflare Workers SSR, not Vercel. Equivalent performance at ~$5/month on the association's traffic curve instead of ~$60/month. Predictable deploy-time config without cold-start variance.
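The Checkout-plus-webhook flow above hinges on one capability Payment Links lack: attaching metadata that round-trips through Stripe and comes back on the `checkout.session.completed` webhook. A minimal sketch of building those session params (the `submissionId` metadata key, helper name, and URLs are illustrative, not the project's actual code; the object shape follows Stripe's `checkout.sessions.create` parameters):

```typescript
// Hypothetical input: the Sanity submission the entrant just created.
interface SubmissionCheckout {
  submissionId: string;  // Sanity _id of the Submission document
  categorySlug: string;
  amountCents: number;
}

// Build Checkout Session params that tie the payment to the submission.
// The metadata survives the redirect to Stripe and is echoed back on the
// checkout.session.completed event, so the webhook handler can create the
// payment record against the right Sanity document.
function buildCheckoutParams(entry: SubmissionCheckout) {
  return {
    mode: "payment" as const,
    line_items: [
      {
        price_data: {
          currency: "usd",
          unit_amount: entry.amountCents,
          product_data: { name: `Awards entry: ${entry.categorySlug}` },
        },
        quantity: 1,
      },
    ],
    // The part Payment Links can't do: arbitrary per-session metadata.
    metadata: {
      submissionId: entry.submissionId,
      categorySlug: entry.categorySlug,
    },
    success_url: "https://example.com/submit/thanks?session_id={CHECKOUT_SESSION_ID}",
    cancel_url: "https://example.com/submit/payment",
  };
}
```

The webhook handler reads `session.metadata.submissionId` off the event and patches the Submission document in the same step it records the payment.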

None of these choices were 'the best tool.' They were the best fit for this shape of problem on this budget. A different constraint set (higher budget, technical staff, 100K concurrent users) would yield different answers. This is the kind of stack decision the Architecture Review package exists to make before anyone writes code.

The 13 Sanity schemas

The content model is where most awards platforms go wrong. Too few schemas and you end up with gigantic documents that mix submission, judge, and results data. Too many and editors can't find anything. Pulse Awards settled on 13:

  • Submission — the entry: company, category, creative, contact, payment link
  • Category, Judge, JudgeAssignment, Score, Round — the judging model
  • Page, FAQ, Timeline, Sponsor, Winner — marketing content
  • Setting — global config (current cycle, nav, footer)
  • EmailLog — record of every Resend email sent, for debugging and compliance

The key insight: scoring is its own schema. Not a field on Submission, not a field on Judge. A separate document per judge per submission per criterion. This lets you model conflicts (judge A assigned, judge A never scored), filter dashboards (show me every score above 8 for the 'innovation' criterion), and audit the judging process after the fact.
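In Sanity terms, "a separate document per judge per submission per criterion" looks roughly like the schema below. This is a sketch with illustrative field names, not the project's actual schema; the point is that every relation is a reference, so conflicts and audits fall out of plain queries.

```typescript
// Hypothetical Score schema: one document per judge, per submission,
// per criterion. References (rather than embedded fields) are what make
// "judge A was assigned but never scored" a simple count query.
const scoreSchema = {
  name: "score",
  type: "document",
  fields: [
    { name: "judge", type: "reference", to: [{ type: "judge" }] },
    { name: "submission", type: "reference", to: [{ type: "submission" }] },
    { name: "criterion", type: "string" }, // e.g. "innovation", "craft"
    { name: "value", type: "number" },     // rubric score
    { name: "round", type: "reference", to: [{ type: "round" }] },
  ],
};
```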

The judge dashboard pattern

The admin side has seven sections: Submissions, Judges, Assignments, Scoring, Results, Content, Settings. Each is a thin UI on top of a GROQ query. The hardest part was anonymization. Judges should see the submission content and creative work, but not the company name or contact info — bias prevention is a core requirement of any credible awards program.

Sanity's field-level permissions don't cover this case directly, so I solved it with a projected view: the judge dashboard queries a subset of Submission fields that explicitly excludes identifying data. The raw document is protected by Sanity's dataset permissions; the projection is what gets rendered. No company names, no contact info, no branding. The judge sees the work and nothing else. If you're building a judging system where impartiality matters, don't try to hide fields in the UI — exclude them at the query layer.
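"Exclude at the query layer" means the GROQ projection itself is the allowlist. A sketch of what that query might look like (field names are illustrative; the real query selects whatever the project's Submission schema calls these fields):

```typescript
// Hypothetical judge-dashboard query: an explicit projection that names
// only the non-identifying fields. Company, contact, and branding fields
// simply never appear in the result, regardless of what the UI renders.
const judgeViewQuery = `*[_type == "submission" && references($categoryId)]{
  _id,
  entryTitle,
  narrative,
  creativeAssets
}`;
```

Because the projection is an allowlist rather than a blocklist, adding a new identifying field to the schema later cannot accidentally leak it to judges.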

Email orchestration: 9 templates, predictable behavior

Every transactional email in the system is a React Email component rendered through Resend. The templates: submission received, payment confirmed, judge assignment, scoring reminder, scoring confirmation, advancement notification, results published, winner announcement, post-event follow-up.

Each email gets tagged in Resend with the template name and a reference to the Submission or Judge ID. This gives us an audit trail — which is critical when an entrant emails saying 'I never got my confirmation' and you need to prove you sent it. The three-send cap pattern is also important: if retry logic loops, we cap total sends per recipient per template at 3 to prevent runaway billing or inbox flooding.
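The three-send cap is a small check against the EmailLog before every send. A minimal sketch, with an in-memory array standing in for the Sanity EmailLog documents and hypothetical helper names:

```typescript
// One EmailLog entry per send; the real system stores these in Sanity.
type EmailLogEntry = { to: string; template: string };

const MAX_SENDS = 3;

// Consult the log before sending: a recipient gets each template at most
// MAX_SENDS times, so looping retry logic can't flood inboxes or billing.
function canSend(log: EmailLogEntry[], to: string, template: string): boolean {
  const prior = log.filter((e) => e.to === to && e.template === template).length;
  return prior < MAX_SENDS;
}

function recordSend(log: EmailLogEntry[], to: string, template: string): void {
  log.push({ to, template });
}
```

The cap is per recipient per template, so a scoring reminder that legitimately goes out three times doesn't block the winner announcement to the same address.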

What I'd do differently

  • Judging workflow state machine. I modeled each score as an independent document, which is flexible but meant the UI has to compute 'has this judge finished their assignment' from a count query. A proper state machine on JudgeAssignment (assigned → in-progress → submitted → conflict-flagged) would make the dashboard cleaner.
  • Sanity-backed feature flags. We hardcoded deadline-based feature flags. Moving those into a Sanity Setting document would let the AAMA team push the cycle forward without a code deploy.
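The JudgeAssignment state machine described above can be sketched as a transition table plus an enforcement helper (state names come from the post; the helper is a hypothetical illustration):

```typescript
type AssignmentState = "assigned" | "in-progress" | "submitted" | "conflict-flagged";

// Legal transitions only; "submitted" and "conflict-flagged" are terminal.
const transitions: Record<AssignmentState, AssignmentState[]> = {
  "assigned": ["in-progress", "conflict-flagged"],
  "in-progress": ["submitted", "conflict-flagged"],
  "submitted": [],
  "conflict-flagged": [],
};

// Reject anything not in the table, so the dashboard can trust the state
// field instead of recomputing progress from score counts.
function advance(current: AssignmentState, next: AssignmentState): AssignmentState {
  if (!transitions[current].includes(next)) {
    throw new Error(`Illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```

With this in place, "has this judge finished their assignment" becomes a single field read on JudgeAssignment instead of a count query over Score documents.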

Where this pattern generalizes

Pulse Awards is specific, but the pattern — Astro plus Sanity plus Stripe plus Resend on Cloudflare, thin islands for interactive UI, scoring as a separate schema — applies to any multi-stage workflow tool: grant applications, fellowship programs, portfolio reviews, pitch competitions. The stack is cheap, the content model is flexible, and the whole system is editable by a marketing team without a developer. This is what architect-level thinking looks like: not the biggest or cleverest stack, but the one that survives contact with a small team and a real deadline. If you're weighing stack decisions against a constraint set like this, that's exactly what the Architecture Review is for. For the opposite call — when WordPress beats the modern stack — see Why I Stopped Using WordPress for Everything.

Architecture · Astro · Sanity · Stripe · Case Study