Event-driven architectures built on message queues and streaming platforms follow well-established patterns for common use cases. The choice between a queue-based system (Service Bus) and a streaming platform (Kafka) is an architectural decision with significant operational implications.

Queues vs streams

A queue (Azure Service Bus, AWS SQS, RabbitMQ) provides at-least-once delivery with acknowledgement-based removal: a message is delivered to one consumer, the consumer acknowledges it, and the queue deletes it; after consumption the message is gone. A stream (Kafka, Azure Event Hubs) is an immutable, append-only log: messages are retained for a configured period regardless of consumption, and multiple consumers can read the same message independently, each tracking its own position. Use queues for command dispatching (one consumer per message); use streams for event broadcasting (multiple consumers, replay, and long-term retention).
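The semantic difference can be made concrete with a minimal in-memory sketch (illustrative classes only, not any broker's actual API): a queue's read is destructive, while a stream's read is offset-based and repeatable.

```python
from collections import deque

class Queue:
    """Destructive read: each message is seen by exactly one consumer."""
    def __init__(self):
        self._messages = deque()

    def send(self, msg):
        self._messages.append(msg)

    def receive(self):
        # Receiving removes the message; no other consumer will see it.
        return self._messages.popleft() if self._messages else None

class Stream:
    """Append-only log: consumers track their own offsets and can replay."""
    def __init__(self):
        self._log = []

    def append(self, msg):
        self._log.append(msg)

    def read(self, offset):
        # Reads never mutate the log; any consumer can re-read from any offset.
        return self._log[offset:]
```

Two stream consumers calling `read(0)` each get the full history; two queue consumers calling `receive()` split the messages between them.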

The competing consumers pattern

Queue-based competing consumers: multiple consumer instances compete to process messages from a shared queue. The queue distributes messages across available consumers, providing horizontal scaling without coordination. Each message is processed by exactly one consumer, though with at-least-once delivery the handler must be idempotent to tolerate redeliveries. This is the natural pattern for background job processing, email sending, and any workload where messages represent discrete units of work.
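A minimal sketch of the pattern using worker threads and a shared thread-safe queue (the function name and worker count are illustrative): workers pull from the same queue, so each message is handled exactly once, by whichever worker grabs it first.

```python
import queue
import threading

def run_competing_consumers(messages, worker_count=3):
    """Distribute messages across worker_count competing consumers."""
    work = queue.Queue()
    for m in messages:
        work.put(m)

    processed = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                # Each get() removes the message, so only one worker sees it.
                msg = work.get_nowait()
            except queue.Empty:
                return
            with lock:
                processed.append(msg)  # stand-in for the real unit of work

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return processed
```

Scaling out is just raising `worker_count` (or, with a real broker, adding consumer instances); no coordination between workers is needed because the queue itself serializes hand-off.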

Dead letter queues

Dead letter queues capture messages that cannot be processed after the maximum retry count. Every production queue should have a dead letter queue: subscribe to it, alert on dead letter queue growth, and have a process for investigating and replaying dead-lettered messages. Messages in the dead letter queue represent failed business operations that need human attention or automated remediation.
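The retry-then-dead-letter flow can be sketched as follows (plain lists stand in for broker queues; `MAX_DELIVERIES` and the message shape are assumptions for illustration). Capturing the failure reason alongside the payload is what makes later investigation and replay practical.

```python
MAX_DELIVERIES = 3

def consume(main_queue, dead_letter_queue, handler):
    """Drain main_queue, dead-lettering messages that keep failing.

    Messages are (delivery_count, payload) tuples.
    """
    while main_queue:
        delivery_count, payload = main_queue.pop(0)
        try:
            handler(payload)
        except Exception as exc:
            delivery_count += 1
            if delivery_count >= MAX_DELIVERIES:
                # Record why it failed so an operator or automated
                # remediation job can investigate and replay it.
                dead_letter_queue.append({"payload": payload, "reason": str(exc)})
            else:
                # Redeliver: put it back for another attempt.
                main_queue.append((delivery_count, payload))
```

Real brokers track the delivery count for you (e.g. Service Bus's delivery count against `MaxDeliveryCount`); the point of the sketch is that the poison message leaves the main queue instead of blocking it forever.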

Transactional outbox pattern

The dual-write problem: in a service that both updates a database and publishes an event, a failure between the database commit and the event publish produces an inconsistent state. The transactional outbox pattern solves this: write the event to an outbox table in the same database transaction as the business update, and have a separate process read the outbox table and publish events. The event cannot be lost because it is committed with the business data; and because the outbox reader may publish the same event more than once, publishing must be idempotent (or downstream consumers must deduplicate).