Why Webhooks Fail in Serverless (and how to fix it)
Serverless architectures (Lambda, Cloud Run, Vercel Functions) are a great fit for many workloads, but they are poorly suited to receiving 'push' webhooks. Cold starts and strict provider timeouts lead to lost events and frustrated developers.
The Serverless Problem: Timeout Error
# Provider (Stripe/GitHub) waits 10s
# Your Lambda takes 12s to spin up (Cold Start)
# Result: 504 Gateway Timeout = Lost Webhook
Why do cold starts kill webhooks?
Most webhook providers (Stripe, GitHub, Shopify) have strict timeout windows—usually between 3 and 10 seconds. If your serverless function is 'cold,' the time taken to provision infrastructure and initialize your runtime often exceeds this window. The provider sees a timeout, marks the delivery as failed, and starts a retry backoff.
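To see why, here is a sketch of a typical push-style handler. The Stripe SDK and the simplified event shape are just illustrative stand-ins for whatever heavy setup your own function does at module scope:
// Everything at module scope runs during the cold start, before the incoming
// webhook can be answered. Heavy imports and client construction here are
// what push a cold invocation past the provider's 3-10 second window.
import Stripe from 'stripe'; // example of a large dependency loaded at init
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY ?? '');
export const handler = async (event: { body: string; headers: Record<string, string> }) => {
  // On a cold start, seconds may already have elapsed before this line runs,
  // and the provider's delivery timeout has been ticking the whole time.
  const webhookEvent = stripe.webhooks.constructEvent(
    event.body,
    event.headers['stripe-signature'],
    process.env.STRIPE_WEBHOOK_SECRET ?? '',
  );
  // ... handle webhookEvent: write to the database, enqueue work, etc. ...
  return { statusCode: 200, body: 'ok' };
};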
How does the 'Pull' pattern solve this?
By introducing FetchHook as a buffer, webhook delivery becomes asynchronous. FetchHook (highly available, always 'hot') accepts the webhook in milliseconds. Your serverless function can then 'pull' the data on its own schedule, or even better, run on a CRON schedule and process webhooks in batches, completely avoiding the cold-start pressure of real-time delivery.
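On AWS, for example, you could drive that batch consumer from a one-minute EventBridge schedule instead of an HTTP endpoint. Here is a minimal CDK sketch; the construct names, handler path, and schedule are illustrative, not a prescribed setup:
import { Duration } from 'aws-cdk-lib';
import { Rule, Schedule } from 'aws-cdk-lib/aws-events';
import { LambdaFunction } from 'aws-cdk-lib/aws-events-targets';
import { Runtime } from 'aws-cdk-lib/aws-lambda';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Construct } from 'constructs';
export function addWebhookBatchProcessor(scope: Construct): void {
  // The batch consumer: the handler shown in the next section.
  const processor = new NodejsFunction(scope, 'WebhookBatchProcessor', {
    entry: 'lambda/process-webhooks.ts', // illustrative path to your handler
    runtime: Runtime.NODEJS_20_X,
    timeout: Duration.seconds(60), // generous, since no provider is waiting on the response
  });
  // Invoke it once a minute; each run drains whatever arrived since the last one.
  new Rule(scope, 'WebhookPullSchedule', {
    schedule: Schedule.rate(Duration.minutes(1)),
    targets: [new LambdaFunction(processor)],
  });
}
Because nothing upstream is blocked on the response, both the schedule and the function timeout are entirely under your control.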
The Solution: Batch Processing in Serverless
import { userAPI } from './lib/fastapi-client';
export const handler = async () => {
  // Pull all events that arrived while we were "cold"
  const { events } = await userAPI.fetchWebhooks("source_123", "fh_xxx");
  for (const event of events) {
    // processEvent is your own business logic for a single webhook payload
    await processEvent(event);
  }
};
Is this more expensive?
Actually, it's often cheaper. Instead of spinning up a Lambda for every single incoming webhook (which can be thousands during a burst), you spin up one Lambda every minute to process the entire 'Stash.' A burst of 10,000 webhooks means 10,000 push-model invocations, but at most 60 one-minute batch runs per hour in the pull model. That significantly cuts the number of invocations and, by amortizing initialization across each batch, the total execution time.