
Build Log #013: We Sent 150 Emails Before Building the Digest

We launched a golf tee time alert service with real users watching real courses. Then the scheduler woke up and decided every matching tee time deserved its own email. Here's the story of a six-hour scrape cycle, an inbox massacre, and three bugs that taught us to build the boring stuff first.

This is Build Log #013. We're building BirdiePing — a service that monitors golf course booking systems and alerts you when tee times matching your preferences open up. This entry covers the first 24 hours with real users, which went exactly as well as you'd expect.

The Dream

The idea was simple: you tell us what course, what day, how many players, and we watch the booking page for you. When a slot opens up, you get a clean email with the details and a button to book it. No more refreshing ForeUP at 5 AM hoping someone cancels their Saturday foursome.

We'd cracked two booking platform APIs — ForeUP (which covers thousands of courses with a hilariously named Api-Key: no_limits header) and Chronogolf/Lightspeed (which runs all of SLC's municipal courses through a marketplace API with a Cloudflare cookie that expires every 30 minutes). We had a FastAPI backend, a scheduler loop, a landing page, and actual users signing up organically. Things were looking good.

Then we turned on the scheduler.

Bug #1: Scraping the Entire Planet

Our course discovery scripts had found 4,448 courses across 40+ states. That's a beautiful number to put on a landing page. It's a terrible number to put in a scrape loop.

The scheduler was dutifully hitting every single one of those 4,448 courses every cycle, with a 5-second delay between requests to be polite. Quick math: 4,448 × 5 seconds = 6.2 hours per cycle. Our "every 30 minutes" scheduler was running for six hours, finishing, immediately starting again, and scraping courses that literally zero humans had asked about.

The fix was embarrassingly obvious: only scrape courses that have active watches. Query the watches table, get the distinct course IDs, scrape those. With four real users watching maybe a dozen courses, the full cycle dropped from 6+ hours to 40 seconds. We also cut the inter-course delay from 5 seconds to 1 second because when you're scanning 12 courses instead of 4,448, you don't need to be that polite.
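The query itself is a one-liner. Here's a minimal sketch using an in-memory SQLite table; the table and column names are assumptions (the real schema lives in the backend), but the shape of the fix is the point: only fetch courses someone is actually watching.

```python
import sqlite3

# Illustrative schema: one row per watch, with an active flag.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE tee_time_watches (id INTEGER, course_id INTEGER, active INTEGER)"
)
conn.executemany(
    "INSERT INTO tee_time_watches VALUES (?, ?, ?)",
    [(1, 101, 1), (2, 101, 1), (3, 202, 1), (4, 303, 0)],  # course 303: no active watch
)

def courses_to_scrape(conn):
    """Distinct course IDs that have at least one active watch."""
    rows = conn.execute(
        "SELECT DISTINCT course_id FROM tee_time_watches WHERE active = 1"
    ).fetchall()
    return sorted(r[0] for r in rows)

targets = courses_to_scrape(conn)  # just the watched courses, not all 4,448
```

With a watches-driven list, adding your 4,449th discovered course costs nothing until someone actually watches it.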

Bug #2: The Inbox Massacre

This is the one we're not proud of.

The alert system was working exactly as coded. It found a matching tee time, it sent an email. Found another one, sent another email. The problem: a single course on a single day might have twenty or thirty matching slots. And we were scanning seven days ahead. Per watch. Per user.

One of our first real users — someone who'd signed up organically, not a test account — got 150+ individual emails in about ten minutes. Each one saying "Hey! There's a 7:20 AM slot at Stonebridge!" followed immediately by "Hey! There's a 7:30 AM slot at Stonebridge!" and then "Hey! 7:40 AM!" and on and on until Resend's rate limiter kicked in at 5 emails per second and started queuing the rest for delayed delivery.

Which meant the emails didn't even arrive in a burst you could batch-delete. They trickled in over the next hour. One by one. Like a very polite denial-of-service attack on someone's inbox.

The solution: digest emails. One email per user, per course, per 30-minute cycle. Groups all matching tee times by date, renders them as a clean HTML layout with time pills, spot counts, green fees, and booking buttons. Cancellation alerts get highlighted with a 🚨 because those are the ones you actually want to jump on. Added a 30-minute cooldown per watch+course combination so even if the scheduler runs twice, you don't get dupes.
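The grouping logic is simple once you commit to it. A sketch of the digest builder, with illustrative field names (not the production schema): collapse every per-slot match into one digest per user-and-course pair, and skip any pair that got a digest inside the cooldown window.

```python
from collections import defaultdict
from datetime import datetime, timedelta

COOLDOWN = timedelta(minutes=30)

def build_digests(matches, last_sent, now):
    """Collapse per-slot matches into one digest per (user, course).
    `matches`: list of dicts with user_id / course_id / date / time keys.
    `last_sent`: maps (user_id, course_id) -> datetime of the last digest."""
    grouped = defaultdict(list)
    for m in matches:
        grouped[(m["user_id"], m["course_id"])].append(m)

    digests = []
    for key, slots in grouped.items():
        if key in last_sent and now - last_sent[key] < COOLDOWN:
            continue  # this watch+course got a digest recently; no dupes
        last_sent[key] = now
        slots.sort(key=lambda s: (s["date"], s["time"]))
        digests.append({"user_id": key[0], "course_id": key[1], "slots": slots})
    return digests

# 150 matching slots for one user at one course -> exactly one email
matches = [
    {"user_id": 1, "course_id": 7, "date": "2025-06-01",
     "time": f"{7 + i // 10:02d}:{(i % 10) * 6:02d}"}
    for i in range(150)
]
sent = {}
first = build_digests(matches, sent, datetime(2025, 6, 1, 9, 0))
again = build_digests(matches, sent, datetime(2025, 6, 1, 9, 10))  # inside cooldown
```

The same 150 matches that produced the inbox massacre now produce one digest, and a scheduler that fires twice inside 30 minutes produces zero.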

We also restructured the alert queue to process once after all seven days of data are scraped for a course, instead of firing after each individual date. That alone cut alert volume by 7x before the digest logic even kicks in.

Lesson: Always build the digest first. It feels like a nice-to-have. It's not. It's the difference between a useful service and spam.

Bug #3: Nine Holes of Confusion

A user set up a watch for 18-hole tee times. They started getting alerts for 9-hole slots. This one was subtle.

The scraper was calling the ForeUP API with holes=all to get everything, which is fine — cast a wide net, filter later. Except we weren't filtering later. The API returns a holes field on each slot (9, 18, or "9/18" for flex times), and we were just... throwing it away. The column didn't even exist in our scraped_tee_times table.

The tipoff was the price. The alert showed a $27 green fee. The course's 18-hole rate is $55. That's not a discount; that's half, give or take a dollar. Nine holes, half price. Once you see it, you can't unsee it.

Fix: added a holes column to both scraped_tee_times and tee_time_watches, defaulted all existing watches to 18 (safe assumption for most golfers), and added filtering logic to _slot_matches_watch(). The digest emails now show a "9h" badge on 9-hole slots so the distinction is clear even if you're watching for both.
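The holes check is the interesting part because of the flex format. A sketch of just that check, with illustrative field names; the real _slot_matches_watch() also filters on date, time window, players, and price.

```python
def slot_matches_watch(slot, watch):
    """Holes filter only. The ForeUP API returns 9, 18, or "9/18" for
    flex slots; a flex slot satisfies either preference."""
    slot_holes = str(slot["holes"])  # normalize: the field may be int or str
    if slot_holes == "9/18":
        return True
    return int(slot_holes) == watch["holes"]

# The original bug, in one line: without this check, a 9-hole slot
# sailed straight through to an 18-hole watch.
nine = {"holes": 9}
flex = {"holes": "9/18"}
full = {"holes": 18}
watch_18 = {"holes": 18}
```

Normalizing to a string first matters: the API mixes integer and string values for the same field, and `int("9/18")` raises.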

The Accidental Architecture Review

Fixing these three bugs forced us to actually think about the data flow, and it wasn't great. The original design was "scrape → match → email" with no intermediate state, no batching, and no concept of a "cycle." Now it's a staged fan-out:

  1. Scheduler kicks off a cycle
  2. Query all courses with active watches
  3. Scrape each course (7 days of availability per course)
  4. After all dates for a course are scraped, match against all watches for that course
  5. Group matches into digests per user
  6. Send one email per user per course per cycle
  7. Record cooldown timestamps to prevent dupes
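The seven steps above can be sketched as one function. Scraping and email sending are injected as callables so the sketch stays self-contained; every name here is illustrative, not BirdiePing's actual code.

```python
def run_cycle(watches, scrape_course, send_digest, cooldowns, now, cooldown_s=1800):
    """One scheduler cycle. scrape_course(course_id) returns a slot list;
    send_digest(user_id, course_id, slots) delivers one email."""
    # Steps 1-2: only courses with active watches.
    courses = sorted({w["course_id"] for w in watches if w["active"]})
    for course_id in courses:
        slots = scrape_course(course_id)  # Step 3: all 7 days in one call.
        # Step 4: match only after the whole course is scraped, never per-date.
        per_user = {}
        for w in watches:
            if not w["active"] or w["course_id"] != course_id:
                continue
            hits = [s for s in slots if str(s["holes"]) in ("9/18", str(w["holes"]))]
            if hits:
                per_user.setdefault(w["user_id"], []).extend(hits)
        # Steps 5-7: one digest per user per course, gated by the cooldown.
        for user_id, hits in per_user.items():
            key = (user_id, course_id)
            if key in cooldowns and now - cooldowns[key] < cooldown_s:
                continue
            send_digest(user_id, course_id, hits)
            cooldowns[key] = now

# Two back-to-back cycles: the second is swallowed by the cooldown.
watches = [{"user_id": 1, "course_id": 101, "active": True, "holes": 18}]
emails = []
scrape = lambda cid: [{"holes": 18}, {"holes": 9}]
record = lambda u, c, s: emails.append((u, c, len(s)))
cooldowns = {}
run_cycle(watches, scrape, record, cooldowns, now=1000)
run_cycle(watches, scrape, record, cooldowns, now=1100)
```

The key structural change is that matching lives outside the scrape loop for each course: no alert can fire until the full seven-day picture exists.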

We also added priority polling for paid tiers. Birdie Club members (the top tier at $29.99/month) get their watched courses scraped every 15 seconds instead of every 60. When someone cancels a prime Saturday morning tee time, 15 seconds vs 60 seconds is the difference between booking it and watching someone else book it.
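One way to wire that in, sketched under assumptions (only "Birdie Club" is a real tier name from this post; the free tier name and the exact scheduling mechanics are illustrative): each course polls at the interval of its fastest-tier watcher, so a single Birdie Club member upgrades the whole course.

```python
# Intervals in seconds. Tier names other than birdie_club are assumptions.
POLL_INTERVAL = {"free": 60, "birdie_club": 15}

def course_is_due(watcher_tiers, last_scraped, now):
    """True when the course's fastest watcher's interval has elapsed."""
    interval = min(POLL_INTERVAL[t] for t in watcher_tiers)
    return now - last_scraped >= interval
```

A nice side effect: priority polling is per-course, not per-user, so two Birdie Club members watching the same course cost one scrape, not two.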

The Bigger Picture

By the end of the day, we'd also shipped: a Group Rally feature (invite your buddies to confirm before booking — because group texts are where tee times go to die), date range watches for premium users, a contact form, admin notifications on signups, case-insensitive email handling (two users had signed up with different capitalizations of the same email and gotten duplicate accounts), and proper terms/privacy pages so we look like a real company.
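The email fix is one function applied at both signup and lookup. A minimal sketch:

```python
def normalize_email(raw: str) -> str:
    """Lowercase and trim before storing or querying an account.
    RFC 5321 technically allows a case-sensitive local part, but no
    mainstream provider distinguishes case, and duplicate accounts
    are the worse failure mode."""
    return raw.strip().lower()
```

Applying it on both the write path and the read path is what matters; normalizing only at signup still leaves old mixed-case rows unfindable, which is why we also backfilled existing accounts.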

Oh, and the DNS had two SPF records, which is invalid and was probably hurting deliverability. Merged those. Set up DKIM. DMARC in monitor mode. The usual "we forgot about email infrastructure until the emails stopped arriving" fire drill.
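For anyone hitting the same thing: RFC 7208 says a domain must publish at most one SPF record, and receivers treat two v=spf1 TXT records as a permanent error, not a union. The fix is to merge the mechanisms into a single record. The include hosts below are illustrative, not our actual DNS:

```
; before (invalid: two v=spf1 records at the same name is a permerror)
birdieping.com.  TXT  "v=spf1 include:providerA.example ~all"
birdieping.com.  TXT  "v=spf1 include:providerB.example ~all"

; after (one record, mechanisms merged)
birdieping.com.  TXT  "v=spf1 include:providerA.example include:providerB.example ~all"
```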

Six real users by end of day. Not six thousand. Six. But six users who signed up unprompted, created watches for real courses, and — after the digest fix — actually got useful alerts. One of them set up three watches within an hour of signing up. That felt like something.

What We Learned

Three things, all of which we should have known:

  1. Build the digest before the individual alert. Your first implementation will always be "send one message per event." Skip it. Go straight to batching. Your users' inboxes will thank you.
  2. Don't scrape what nobody asked for. A big number on a marketing page ("4,400+ courses!") is not a database query you should run every 30 minutes. Scrape on demand. Everything else is vanity infrastructure.
  3. Carry every field through the pipeline. If the API gives you a holes field, store the holes field. Don't decide at scrape time what you'll need at match time. You'll be wrong.

None of this was hard to fix. All of it was hard to catch. That's the pattern with most production bugs — the solution is obvious once you know the problem exists. The challenge is finding out the problem exists before your users do.

We found out after. Lesson learned.


BirdiePing is live at birdieping.com. If you're tired of refreshing booking pages at 5 AM, we're building this for you. Build Log #013, and we're just getting started.