Issue
Bug · Shipped · Swamp CLI
Assignee: keeb

#278 discord-bot double-sends sign_up notifications

Opened by keeb · 5/7/2026 · Shipped 5/7/2026

Symptom

Even with the discord-bot scaled to 1 node, sign-up notifications are sometimes posted to the Discord channel twice. Other event types (extension_published, release_published, badge_awarded) are not observed to duplicate.

Why dedup doesn't catch it

discord_event_queue is keyed on _id: event.insert_id, and DiscordConsumer.publish() swallows code-11000 duplicate-key errors (services/telemetry/lib/consumers/discord.ts:42-53).

But insert_id is generated fresh on every /ingest call by the telemetry server:

// services/telemetry/lib/schema.ts:189
insert_id: crypto.randomUUID(),

So it dedupes a re-insert of the same telemetry doc, not two telemetry docs that describe the same logical sign-up. Once the discord-bot has processed and deleteOne'd the queue doc, a re-publish with the same insert_id succeeds (no dup error) and a second embed is sent.
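The failure mode can be reproduced in miniature with an in-memory stand-in for the queue (a sketch with illustrative names, not the real discord_event_queue collection; a Mongo insertOne on a duplicate _id throws code 11000, mimicked here with a Map):

```typescript
// Minimal model of the queue, keyed by _id like discord_event_queue.
const queue = new Map<string, { event: string }>();

function publish(insertId: string, event: string): "inserted" | "duplicate" {
  if (queue.has(insertId)) {
    // Stands in for the swallowed code-11000 dup-key error --
    // the only dedup in play.
    return "duplicate";
  }
  queue.set(insertId, { event });
  return "inserted";
}

// The bot drains a queue doc and deletes it after posting the embed.
function drain(insertId: string): void {
  queue.delete(insertId);
}

// While the queue doc exists, dedup works:
const id = "uuid-stored-on-the-events-doc";
publish(id, "sign_up"); // "inserted"
publish(id, "sign_up"); // "duplicate" -- swallowed, no second embed

// But once the bot has drained it, a re-publish with the very same
// insert_id sails through and produces a second embed:
drain(id);
const rePublish = publish(id, "sign_up"); // "inserted" again -> duplicate post
```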

The mechanism that re-inserts

services/telemetry/lib/watcher.ts:dispatchBatch runs:

  1. mark batch processing
  2. call DiscordConsumer.publish() -> inserts into discord_event_queue
  3. set consumer_status.discord = "delivered"
  4. await onBatchDelivered
  5. deleteMany
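The ordering above can be sketched with an injectable crash point between step 2 and step 5 (all names here are stand-ins, not the real watcher.ts internals):

```typescript
// Toy model: an events collection, a discord queue, and dispatchBatch
// with a crash window between publish and deleteMany.
type EventsDoc = {
  _id: string;
  status: "pending" | "processing";
  consumer_status: Record<string, string>;
};

const events = new Map<string, EventsDoc>();
const discordQueue = new Set<string>();

function dispatchBatch(id: string, crashBeforeDelete = false): void {
  const doc = events.get(id);
  if (!doc) return;
  doc.status = "processing";                 // 1. mark batch processing
  discordQueue.add(id);                      // 2. DiscordConsumer.publish()
  doc.consumer_status.discord = "delivered"; // 3. set consumer_status.discord
  // 4. await onBatchDelivered (elided)
  if (crashBeforeDelete) return;             // <- deploy/OOM/SIGTERM lands here
  events.delete(id);                         // 5. deleteMany
}

events.set("e1", { _id: "e1", status: "pending", consumer_status: {} });
dispatchBatch("e1", /* crashBeforeDelete */ true);
// "e1" is now stuck in "processing" even though discord already has it.
```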

If the watcher dies between steps 2 and 5 (deploy, OOM, SIGTERM), the events-collection doc is stuck in processing even though discord already received it. On next startup:

  • recoverOrphaned (watcher.ts:245-263) resets every processing doc older than 5 min back to pending without consulting consumer_status.
  • The next processPending dispatches to registry.names (watcher.ts:113) — all consumers, not just the ones that still need to run.
  • DiscordConsumer.publish() runs again. Since the bot already drained and deleted the original queue doc, insertOne(_id: insert_id) succeeds with no dup error. A new queue doc lands, the bot polls it, posts again -> duplicate.
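The blind reset is the crux: a minimal stand-in for that behavior (the real code is at watcher.ts:245-263; these names are illustrative) shows that consumer_status.discord = "delivered" never enters the decision:

```typescript
// Illustrative: recoverOrphaned resets stale "processing" docs to
// "pending" by age alone, ignoring per-consumer delivery status.
type Doc = {
  status: "pending" | "processing";
  updatedAt: number; // ms epoch
  consumer_status: Record<string, string>;
};

function recoverOrphaned(docs: Doc[], now: number, staleMs = 5 * 60_000): number {
  let reset = 0;
  for (const d of docs) {
    if (d.status === "processing" && now - d.updatedAt > staleMs) {
      d.status = "pending"; // consumer_status.discord === "delivered" is never consulted
      reset++;
    }
  }
  return reset;
}

const stuck: Doc = {
  status: "processing",
  updatedAt: 0,
  consumer_status: { discord: "delivered" },
};
recoverOrphaned([stuck], 6 * 60_000);
// stuck.status is now "pending" despite discord already being delivered,
// so the next processPending will re-dispatch it to every consumer.
```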

retryFailed has the same shape: if publish() partially succeeds and then throws on a transient mongo error mid-loop, the whole batch is re-dispatched and any event the bot has already drained gets re-inserted.

Why specifically sign_up

Of the four discord-relevant events, sign_up is the only one that fires on routine traffic. extension_published / release_published / badge_awarded are bursty/rare. Sign-ups are statistically the events most likely to be mid-pipeline when a deploy or watcher restart happens, so they're the ones that hit this race in practice.

Suggested fixes (one or more)

  • Stable dedup key for sign_up. track("sign_up", user.id, ...) only fires once per user creation in lib/auth.ts:464. Key the queue by event + distinct_id (e.g. _id: "sign_up:" + distinct_id) so a re-publish hits 11000 even after the bot has drained the original. Smallest, most targeted fix.
  • Consumer-status-aware dispatch. processPending and recoverOrphaned should skip consumers already marked delivered for a doc, so re-running the pipeline after a crash doesn't re-call discord at all. Fixes the class of bug, not just sign_up.
  • Ack-then-delete. Mark the events-collection doc delivered (or delete it) immediately after the consumer succeeds, instead of relying on the deleteMany at the end of the batch.
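The first option could look roughly like this (the helper name and call shape are assumptions, not the actual schema code; the real key would be assigned in services/telemetry/lib/schema.ts or at the publish site):

```typescript
// Hypothetical: derive a stable queue _id for sign_up so the same
// logical event always maps to the same key, instead of a fresh
// crypto.randomUUID() per /ingest call.
function queueId(event: string, distinctId: string, insertId: string): string {
  // sign_up fires once per user creation (lib/auth.ts:464), so
  // event + distinct_id is stable across re-publishes; other event
  // types keep the per-call insert_id.
  return event === "sign_up" ? `sign_up:${distinctId}` : insertId;
}

// Two publishes of the same logical sign-up now collide on _id, so the
// existing code-11000 swallow in DiscordConsumer.publish() dedupes them:
queueId("sign_up", "user-42", "uuid-a"); // "sign_up:user-42"
queueId("sign_up", "user-42", "uuid-b"); // "sign_up:user-42" (same key)
queueId("badge_awarded", "user-42", "uuid-c"); // "uuid-c" (unchanged)
```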

Reproduction (theoretical)

  1. Send a sign_up event to /ingest.
  2. Once DiscordConsumer.publish() has inserted into discord_event_queue but before watcher's deleteMany, kill the telemetry watcher.
  3. Wait for the bot to drain the queue doc and delete it.
  4. Wait >5 min and restart the watcher; recoverOrphaned resets the events doc to pending, processPending re-dispatches to all consumers, discord queue is re-populated, bot posts again.

Environment

  • swamp-club main, observed in production with discord-bot scaled to 1 node.
Bog Flow

Shipped

5/7/2026, 4:47:33 PM


Sludge Pulse

keeb assigned keeb · 5/7/2026, 3:45:38 PM
