Architecture / Under the hood

This section explains how the system fits together so engineers can operate, extend, and debug it.

High-level architecture

flowchart TB
  subgraph ClientSide[Client Side]
    P[Producer]:::c
    C[Consumer]:::c
  end

  subgraph Service[Iron Messages Service]
    API[API layer]:::s
    Store[(Message store)]:::s
  end

  P -->|POST /v1/messages| API
  C -->|GET /v1/messages:poll| API
  C -->|POST /v1/messages/{id}:ack| API
  API <--> Store

  classDef c fill:#eef,stroke:#99f;
  classDef s fill:#efe,stroke:#9f9;

Key components

  • API layer: validates requests, enforces auth and limits, and persists message state.
  • Message store: durably persists each message and its lifecycle state (queued, claimed, acked).

Core data flow

sequenceDiagram
  participant Producer
  participant API
  participant Store
  participant Consumer

  Producer->>API: POST /v1/messages (Idempotency-Key)
  API->>Store: write message (dedupe)
  API-->>Producer: 200 {id,status}

  Consumer->>API: GET /v1/messages:poll
  API->>Store: claim message (visibility timeout)
  API-->>Consumer: {messages:[...]}

  Consumer->>API: POST /v1/messages/{id}:ack
  API->>Store: mark acked
  API-->>Consumer: {status:"acked"}
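The sequence above can be sketched from a client's perspective. This is a minimal illustration, not official client code: the paths come from the diagram, and the `http` callable is a hypothetical injected transport so the flow can be exercised without a live service.

```python
import uuid

BASE = "/v1/messages"

def publish(http, body):
    # Retry-safe publish: the Idempotency-Key lets the service dedupe retries.
    headers = {"Idempotency-Key": str(uuid.uuid4())}
    return http("POST", BASE, headers=headers, json=body)

def poll(http):
    # Claim a batch; claimed messages stay invisible until acked or timed out.
    return http("GET", f"{BASE}:poll")["messages"]

def ack(http, message_id):
    # Acknowledge so the message is not redelivered after the visibility timeout.
    return http("POST", f"{BASE}/{message_id}:ack")
```

Injecting the transport keeps the example runnable against a stub; a real client would pass a thin wrapper over its HTTP library of choice.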

Runtime and infrastructure assumptions

  • Stateless API instances behind a load balancer
  • Durable backing store for message state
  • Time is monotonic enough to implement visibility timeouts safely
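The visibility-timeout mechanism can be sketched with an in-memory store (illustrative only; the real store is durable). A claim stamps a deadline from a monotonic clock, and an unacked message becomes claimable again once that deadline passes:

```python
import time

class Store:
    """Toy message store: queued -> claimed -> acked, with redelivery."""

    def __init__(self, visibility_timeout=30.0, clock=time.monotonic):
        self.visibility_timeout = visibility_timeout
        self.clock = clock    # injectable for tests
        self.messages = {}    # id -> {"body", "status", "deadline"}

    def put(self, msg_id, body):
        self.messages[msg_id] = {"body": body, "status": "queued", "deadline": 0.0}

    def claim(self):
        now = self.clock()
        for msg_id, m in self.messages.items():
            # Claimable: queued, or claimed but past its visibility deadline.
            if m["status"] == "queued" or (m["status"] == "claimed" and now >= m["deadline"]):
                m["status"] = "claimed"
                m["deadline"] = now + self.visibility_timeout
                return msg_id, m["body"]
        return None

    def ack(self, msg_id):
        # Acked messages are never redelivered.
        self.messages[msg_id]["status"] = "acked"
```

Note that a monotonic clock is what makes the deadline comparison safe: wall-clock adjustments would otherwise shrink or stretch the visibility window.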

Operational model

  • Scale API horizontally.
  • Scale consumers independently.
  • Expect partial failures; design consumers to retry and dedupe.
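Because delivery is at-least-once, a consumer can see the same message twice (for example, after a crash before ack). One way to sketch a dedupe-aware consumer loop, assuming a `handle` callback and a `processed` set that stands in for durable dedupe storage (both hypothetical names):

```python
def consume(messages, handle, processed):
    """Process each message at most once, then report ids to ack.

    `processed` stands in for a durable set of already-handled ids.
    """
    acked = []
    for msg in messages:
        if msg["id"] not in processed:
            handle(msg)              # side effects should be idempotent or guarded
            processed.add(msg["id"])
        acked.append(msg["id"])      # ack duplicates too, so they stop redelivering
    return acked
```

Acking a duplicate is deliberate: the work was already done, and leaving it unacked would only cause another redelivery after the visibility timeout.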

Failure modes (expected behavior)

  • If an API instance dies mid-request, clients may see a timeout; retrying the publish with the same Idempotency-Key is safe because the service dedupes on it.
  • If a consumer dies before ack, the message may be delivered again after the visibility timeout.
