Why Your Database Is Not a Message Queue (And the Disaster That Follows When You Treat It Like One)

February 14, 2026

I've seen this pattern kill production systems more times than I can count.

A team needs asynchronous processing. Instead of introducing a proper message queue, someone adds a status column to an existing table. "PENDING", "PROCESSING", "DONE". Workers poll the table every few seconds. It works. For a while.

Then it doesn't.

The Seductive Simplicity

I get it. The appeal is obvious. Your database is already there. Your team knows SQL. Adding a table is trivial compared to operating RabbitMQ or Kafka. No new infrastructure, no new failure modes, no new monitoring.

Here's what a typical implementation looks like:

```sql
CREATE TABLE task_queue (
  id SERIAL PRIMARY KEY,
  payload JSONB NOT NULL,
  status VARCHAR(20) DEFAULT 'PENDING',
  created_at TIMESTAMP DEFAULT NOW(),
  updated_at TIMESTAMP DEFAULT NOW()
);

-- Worker picks up tasks
UPDATE task_queue
SET status = 'PROCESSING', updated_at = NOW()
WHERE id = (
  SELECT id FROM task_queue
  WHERE status = 'PENDING'
  ORDER BY created_at
  LIMIT 1
  FOR UPDATE SKIP LOCKED
)
RETURNING *;
```
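Note that even this minimal version quietly needs more: without an index, the polling query scans the whole table once DONE rows accumulate. On PostgreSQL, a partial index covering only the PENDING rows is the usual fix (a sketch matching the schema above):

```sql
-- Partial index so the poll only touches PENDING rows,
-- not the ever-growing history of completed tasks.
CREATE INDEX task_queue_pending_idx
ON task_queue (created_at)
WHERE status = 'PENDING';
```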

This is the gateway drug of distributed systems antipatterns.

The Five Ways This Destroys You

1. Polling Is Wasteful and Unpredictable

Message queues push messages to consumers. Database-as-queue requires consumers to poll. Every worker runs a query every N seconds, regardless of whether there's work to do.

At low throughput, you're wasting database connections and CPU cycles on empty queries. At high throughput, your polling interval introduces artificial latency. You either waste resources or delay processing. There is no sweet spot.
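On PostgreSQL you can reduce (not eliminate) empty polling with LISTEN/NOTIFY: producers signal on insert and workers block on a channel instead of a timer. A sketch, assuming the task_queue table above (the channel name is arbitrary):

```sql
-- Producers fire a notification on every insert...
CREATE FUNCTION notify_new_task() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('task_queue_channel', NEW.id::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER task_queue_notify
AFTER INSERT ON task_queue
FOR EACH ROW EXECUTE FUNCTION notify_new_task();

-- ...and each worker runs LISTEN task_queue_channel, then blocks
-- until a notification arrives instead of polling on a timer.
```

Even then, workers still need the locking claim query after waking, plus a fallback poll for notifications missed during disconnects — which is rather the point: you end up rebuilding queue machinery piece by piece.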

2. Row-Level Locking Becomes a Bottleneck

When multiple workers compete for the next task, they contend on the same rows. Even with FOR UPDATE SKIP LOCKED (a PostgreSQL feature many databases don't have), you're creating lock contention at the database level.

I once debugged a system where 12 workers polling a task table caused the database CPU to spike to 90%. The actual business logic consumed 3% of total compute. 97% of the work was fighting over who gets to do the work.
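A common mitigation is claiming a batch of rows per poll instead of one, which amortizes the contention across fewer queries (a sketch against the table above; the batch size of 10 is arbitrary):

```sql
-- Claim up to 10 tasks in one round trip instead of
-- fighting over rows one at a time.
UPDATE task_queue
SET status = 'PROCESSING', updated_at = NOW()
WHERE id IN (
  SELECT id FROM task_queue
  WHERE status = 'PENDING'
  ORDER BY created_at
  LIMIT 10
  FOR UPDATE SKIP LOCKED
)
RETURNING *;
```

This helps, but it only dilutes the contention — it doesn't remove the fundamental coupling of queue coordination to database locks.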

3. Dead Messages Accumulate Silently

What happens when a worker crashes mid-processing? The row is stuck in "PROCESSING" forever. You need a reaper process that detects stale messages and resets them. But how do you distinguish between "stuck" and "legitimately slow"?

Dedicated message queues solve this with visibility timeouts, dead letter queues, and automatic redelivery. With a database, you're reimplementing all of this — poorly.
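A homegrown reaper ends up looking something like this sketch. Note that it assumes a column the original schema doesn't have (an attempts counter), and the 10-minute staleness threshold is a guess that will inevitably misclassify some legitimately slow tasks:

```sql
-- Hypothetical reaper; assumes the schema was extended with
-- attempts INT DEFAULT 0. Rows past 3 attempts become dead letters.
UPDATE task_queue
SET status     = CASE WHEN attempts >= 3 THEN 'DEAD' ELSE 'PENDING' END,
    attempts   = attempts + 1,
    updated_at = NOW()
WHERE status = 'PROCESSING'
  AND updated_at < NOW() - INTERVAL '10 minutes';
```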

4. You Can't Scale the Queue Independently

Your queue traffic and your application traffic share the same database. When your queue gets busy, your application queries slow down. When your application is under load, your queue processing suffers.

This coupling is invisible until it's catastrophic. I've seen an e-commerce platform go down on Black Friday because their order processing queue overwhelmed the same database serving their product catalog.

5. No Backpressure Mechanism

A proper message queue lets consumers control their consumption rate. When a consumer is overwhelmed, unprocessed messages stay in the queue. The queue handles the buffering.

With a database queue, there's no backpressure. Producers keep inserting rows, the table grows unbounded, indexes bloat, and eventually your database performance degrades across the board.

When It Gets Truly Dangerous

The worst part isn't the initial failure — it's the cascading failure pattern:

Queue table grows → Index performance degrades →
Worker queries slow down → Processing falls behind →
Table grows faster → Database connections saturate →
Application queries time out → Full system outage

This cascade is particularly insidious because each stage looks like a different problem. You'll chase index optimization, connection pool tuning, and query performance before realizing the architecture itself is the problem.

The Right Tool for the Job

Here's a decision framework I use:

Use a database queue when ALL of these are true:

- Throughput is low — tens of tasks per minute, not thousands per second
- You run a handful of workers, so lock contention is negligible
- Seconds of polling latency are acceptable
- The queue and the application can fail together, because they share a database anyway

Use a dedicated message queue when ANY of these are true:

- Throughput is high or growing
- You need fan-out, routing, or multiple consumer groups
- Queue volume must scale independently of application traffic
- You need dead-letter handling, redelivery, and backpressure out of the box

For most production systems, that means you need a real queue.

The Transactional Outbox: A Legitimate Middle Ground

There is one pattern where databases and queues work beautifully together — the Transactional Outbox Pattern.

Instead of using the database as the queue, you use it as a staging area:

```sql
BEGIN;
  INSERT INTO orders (user_id, total) VALUES (42, 99.99);
  INSERT INTO outbox (aggregate_id, event_type, payload)
  VALUES (42, 'ORDER_CREATED', '{"user_id": 42, "total": 99.99}');
COMMIT;
```

A separate process (or CDC pipeline) reads the outbox table and publishes events to an actual message queue. This gives you transactional consistency AND proper async processing.
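A minimal polling relay over the outbox might look like this sketch. It assumes a published_at column the outbox insert above doesn't show, and $1 stands for the list of ids just handed to the broker:

```sql
-- Read unpublished events in commit order...
SELECT id, event_type, payload
FROM outbox
WHERE published_at IS NULL
ORDER BY id
LIMIT 100;

-- ...publish them to the broker, then mark them as done:
UPDATE outbox SET published_at = NOW() WHERE id = ANY($1);
```

Because the publish and the mark can't be atomic across the broker and the database, consumers must tolerate duplicates — the outbox gives you at-least-once delivery, not exactly-once.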

This is how companies like Uber handle distributed transactions at scale — not by avoiding queues, but by using databases for what they're good at (ACID transactions) and queues for what they're good at (async message delivery).

The Uncomfortable Truth

Every team that uses a database as a message queue tells themselves the same story: "Our scale is small enough that it doesn't matter." And they're right — until they're suddenly wrong.

The cost of introducing a message queue early is a few days of infrastructure setup. The cost of migrating away from a database queue under production pressure is weeks of careful surgery on a live system.

I've done both. Trust me — pay the cost upfront.


The patterns discussed here draw from real-world architectures at companies like Uber, Airbnb, and Meta, where asynchronous processing at scale has been battle-tested across billions of operations daily.
