Deep-dives, tutorials, and stories from the team building Mario Koala AI.
Building a Stripe → Slack billing alert pipeline in 20 lines
Step-by-step tutorial: connect Stripe webhooks, filter for failed charges, and post a rich Slack message — no servers needed.
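For a sense of what the "filter for failed charges" step in that tutorial looks like, here is a minimal Go sketch. The struct fields follow the shape of Stripe's `charge.failed` webhook payload, but `slackText`, the message format, and the emoji are illustrative assumptions, not the tutorial's actual code.

```go
// Sketch of the filter-and-notify step: parse a Stripe webhook event,
// keep only failed charges, and build the Slack message text.
package main

import (
	"encoding/json"
	"fmt"
)

type stripeEvent struct {
	Type string `json:"type"`
	Data struct {
		Object struct {
			Amount         int64  `json:"amount"` // smallest currency unit (cents)
			Currency       string `json:"currency"`
			FailureMessage string `json:"failure_message"`
		} `json:"object"`
	} `json:"data"`
}

// slackText returns the message to post, or "" if the event should be dropped.
func slackText(raw []byte) string {
	var e stripeEvent
	if err := json.Unmarshal(raw, &e); err != nil || e.Type != "charge.failed" {
		return "" // not a failed charge: filter it out
	}
	o := e.Data.Object
	return fmt.Sprintf(":rotating_light: Charge failed: %d %s - %s",
		o.Amount, o.Currency, o.FailureMessage)
}

func main() {
	payload := []byte(`{"type":"charge.failed","data":{"object":{"amount":4200,"currency":"usd","failure_message":"card_declined"}}}`)
	fmt.Println(slackText(payload)) // prints ":rotating_light: Charge failed: 4200 usd - card_declined"
}
```

Verifying the `Stripe-Signature` header before trusting the payload is the step most 20-line examples skip; don't.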
When we set our end-to-end latency target for webhook triggers at under 50ms, the team wasn't sure it was achievable without sacrificing durability. Here's how we got there — and what we had to give up along the way.
// dispatch.go — hot path, no allocations
func (d *Dispatcher) Dispatch(ctx context.Context, e Event) error {
	msg := pool.Get().(*kafka.Message)
	defer pool.Put(msg)
	msg.Topic = e.PipelineID
	msg.Value = e.Payload
	msg.Headers = []kafka.Header{
		{Key: "mk-run-id", Value: e.RunID[:]},
		{Key: "mk-ts", Value: nowBytes()},
	}
	return d.producer.Produce(ctx, msg) // p99 latency: 23ms (measured @ 50k rps)
}
Today we're shipping our biggest release yet — parallel step execution, a declarative YAML pipeline format, and 40 new connectors including Notion, Linear, and Airtable.
We use pglogical to stream database changes into pipelines. Here's everything that went wrong, and the monitoring we built to catch it earlier next time.
The platform team at Driftboard had 12 separate bash scripts and two Zapier zaps just to notify Slack when a deploy finished. Here's how they consolidated everything.
Schedule a nightly job that fetches paginated API data, deduplicates rows with an upsert, and sends a summary email if the sync fails — all in one pipeline definition.
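That blurb packs three steps into one definition; in a declarative YAML format like the one mentioned above, it might look roughly like this. Every field name here is an assumption for illustration, not the shipped schema.

```yaml
# Illustrative pipeline definition; field names are assumptions,
# not the actual schema.
name: nightly-crm-sync
schedule: "0 2 * * *"        # 02:00 UTC every night
steps:
  - id: fetch
    type: http.paginate
    url: https://api.example.com/contacts
    page_param: cursor
  - id: load
    type: postgres.upsert
    table: contacts
    conflict_keys: [email]   # deduplicate rows on email
    input: fetch.rows
on_failure:
  - type: email.send
    to: ops@example.com
    subject: "Nightly sync failed"
```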
At-least-once delivery means every pipeline step must be safe to replay. We'll show you the idempotency key design we use internally and how you can apply it in your own steps.
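One way to make a step replay-safe can be sketched in a few lines of Go. The key layout and the store interface here are assumptions for illustration, not necessarily the internal design the post describes; the in-memory map stands in for a durable KV store.

```go
// Sketch of an idempotency guard: derive a deterministic key from the
// run ID, step ID, and input payload, then skip the side effect if
// that key has already been seen.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sync"
)

func idempotencyKey(runID, stepID string, payload []byte) string {
	h := sha256.New()
	h.Write([]byte(runID))
	h.Write([]byte{0}) // separator so "ab"+"c" != "a"+"bc"
	h.Write([]byte(stepID))
	h.Write([]byte{0})
	h.Write(payload)
	return hex.EncodeToString(h.Sum(nil))
}

type dedupeStore struct {
	mu   sync.Mutex
	seen map[string]bool
}

// RunOnce executes fn only the first time key is observed.
func (s *dedupeStore) RunOnce(key string, fn func() error) error {
	s.mu.Lock()
	if s.seen[key] {
		s.mu.Unlock()
		return nil // duplicate delivery: safely skipped
	}
	s.seen[key] = true
	s.mu.Unlock()
	return fn()
}

func main() {
	store := &dedupeStore{seen: map[string]bool{}}
	key := idempotencyKey("run-42", "post-slack", []byte(`{"ok":true}`))
	calls := 0
	for i := 0; i < 3; i++ { // simulate at-least-once redelivery
		store.RunOnce(key, func() error { calls++; return nil })
	}
	fmt.Println("side effect ran", calls, "time(s)") // prints "side effect ran 1 time(s)"
}
```

Note the design choice this sketch glosses over: it marks the key as seen before running the side effect, so a step that fails mid-flight won't be retried. A production design would record the key in the same transaction as the step's effect, or only after it succeeds.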