# queued

Fast zero-configuration single-binary simple queue service.

## Quick start

Currently, queued supports Linux only.

queued requires persistent storage, and it's preferred to provide a block device directly (e.g. /dev/my_block_device), to bypass the file system. Alternatively, a standard file can be used too (e.g. /var/lib/queued/data). In either case, the entire device/file will be used, and it must have a size that's a multiple of 1024 bytes.
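
For a file-backed setup, a data file of a valid size (a multiple of 1024 bytes) can be preallocated up front. A minimal example using coreutils, with the path from above:

```
# 1G with truncate means 1 GiB (a power of 1024, so a multiple of 1024 bytes).
truncate -s 1G /var/lib/queued/data
```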

### Install

```
# Ensure you have Rust installed.
cargo install queued

# --format WILL DESTROY EXISTING CONTENTS. Only run this on the first run.
queued --device /dev/my_block_device --format
```

### Run

```
queued --device /dev/my_block_device
```

### Call

```
🌐 POST localhost:3333/push
{
  "contents": "Hello, world!"
}
✅ 200 OK
{
  "index": 190234
}

🌐 POST localhost:3333/poll
{
  "visibility_timeout_secs": 30
}
✅ 200 OK
{
  "message": {
    "contents": "Hello, world!",
    "created": "2023-01-03T12:00:00Z",
    "index": 190234,
    "poll_count": 1,
    "poll_tag": "f914659685fcea9d60"
  }
}

🌐 POST localhost:3333/delete
{
  "index": 190234,
  "poll_tag": "f914659685fcea9d60"
}
✅ 200 OK
{}
```
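
The same flow with curl, as a minimal sketch (it assumes the server from the Run step is listening on localhost:3333 as in the transcripts above, and that the JSON Content-Type header is accepted; the index and poll tag values are illustrative):

```
curl -s -X POST localhost:3333/push \
  -H 'Content-Type: application/json' \
  -d '{"contents": "Hello, world!"}'

curl -s -X POST localhost:3333/poll \
  -H 'Content-Type: application/json' \
  -d '{"visibility_timeout_secs": 30}'

# Use the "index" and "poll_tag" from the poll response.
curl -s -X POST localhost:3333/delete \
  -H 'Content-Type: application/json' \
  -d '{"index": 190234, "poll_tag": "f914659685fcea9d60"}'
```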

## Performance

On a machine with an Intel Core i5-12400 CPU, Samsung 970 EVO Plus 1TB NVMe SSD, and Linux 5.17 kernel, queued manages around 70,000 operations (push, poll, or delete) per second with 4,096 concurrent clients.

## Safety

At the API layer, only a successful response (i.e. 2xx) means that the request has been successfully persisted to disk. Assume any interrupted or failed requests did not safely get stored, and retry as appropriate. Changes are strongly consistent and immediately visible to all other callers.
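
A client can lean on this guarantee with a simple retry wrapper. A minimal bash sketch (not part of queued; host, port, and backoff policy are assumptions) that treats a connection failure or any non-success HTTP status as "not persisted":

```
# Only a 2xx response means the push was persisted; retry everything else.
push_with_retry() {
  local body="$1"
  local attempt
  for attempt in 1 2 3 4 5; do
    if curl -sf -X POST localhost:3333/push \
        -H 'Content-Type: application/json' \
        -d "$body" > /dev/null; then
      return 0
    fi
    sleep "$attempt"  # simple linear backoff between attempts
  done
  return 1
}

push_with_retry '{"contents": "Hello, world!"}'
```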

Internally, queued records a hash of persisted data (including metadata and data of messages), to verify integrity when starting the server. It's recommended to use error-detecting-and-correcting durable storage when running in production, like any other stateful workload.

To back up the data, stop the process and take a copy of the contents of the file/device. Compression can reduce bandwidth (when transferring) and storage usage.
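
For example, the whole procedure might look like the following sketch (the systemd unit name is an assumption; adapt to however you run queued):

```
# Stop the server first so the data is quiescent.
sudo systemctl stop queued

# Stream the device contents through zstd to cut transfer and storage costs.
sudo dd if=/dev/my_block_device bs=1M | zstd > queued-backup-$(date +%F).zst

sudo systemctl start queued
```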

## Management

GET /healthz returns the current build version.

GET /metrics returns metrics in the Prometheus format:

```
# HELP queued_available Amount of messages currently in the queue, including both past and future visibility timestamps.
# TYPE queued_available gauge
queued_available 0 1672977507603

# HELP queued_empty_poll Total number of poll requests that failed due to no message being available.
# TYPE queued_empty_poll counter
queued_empty_poll 8192 1672977507603

# HELP queued_io_sync Total number of fsync and fdatasync syscalls.
# TYPE queued_io_sync counter
queued_io_sync 147600 1672977507603

# HELP queued_io_sync_background_loops Total number of delayed sync background loop iterations.
# TYPE queued_io_sync_background_loops counter
queued_io_sync_background_loops 722814 1672977507603

# HELP queued_io_sync_delayed Total number of requested syncs that were delayed until a later time.
# TYPE queued_io_sync_delayed counter
queued_io_sync_delayed 2868573 1672977507603

# HELP queued_io_sync_lock_hold_us Total microseconds spent holding I/O sync lock.
# TYPE queued_io_sync_lock_hold_us counter
queued_io_sync_lock_hold_us 61899209 1672977507603

# HELP queued_io_sync_lock_holds Total number of I/O sync lock acquisitions.
# TYPE queued_io_sync_lock_holds counter
queued_io_sync_lock_holds 3722814 1672977507603

# HELP queued_io_sync_longest_delay_us Total number of microseconds spent waiting for a sync by one or more delayed syncs.
# TYPE queued_io_sync_longest_delay_us counter
queued_io_sync_longest_delay_us 31622905 1672977507603

# HELP queued_io_sync_shortest_delay_us Total number of microseconds spent waiting after a final delayed sync before the actual sync.
# TYPE queued_io_sync_shortest_delay_us counter
queued_io_sync_shortest_delay_us 22888216 1672977507603

# HELP queued_io_sync_triggered_by_bytes Total number of syncs that were triggered due to too many written bytes from delayed syncs.
# TYPE queued_io_sync_triggered_by_bytes counter
queued_io_sync_triggered_by_bytes 0 1672977507603

# HELP queued_io_sync_triggered_by_time Total number of syncs that were triggered due to too much time since last sync.
# TYPE queued_io_sync_triggered_by_time counter
queued_io_sync_triggered_by_time 147600 1672977507603

# HELP queued_io_sync_us Total number of microseconds spent in fsync and fdatasync syscalls.
# TYPE queued_io_sync_us counter
queued_io_sync_us 8005024654 1672977507603

# HELP queued_io_write_bytes Total number of bytes written.
# TYPE queued_io_write_bytes counter
queued_io_write_bytes 263888890 1672977507603

# HELP queued_io_write Total number of write syscalls.
# TYPE queued_io_write counter
queued_io_write 3000000 1672977507603

# HELP queued_io_write_us Total number of microseconds spent in write syscalls.
# TYPE queued_io_write_us counter
queued_io_write_us 54866298628 1672977507603

# HELP queued_missing_delete Total number of delete requests that failed due to the requested message not being found.
# TYPE queued_missing_delete counter
queued_missing_delete 0 1672977507603

# HELP queued_successful_delete Total number of delete requests that did delete a message successfully.
# TYPE queued_successful_delete counter
queued_successful_delete 1000000 1672977507603

# HELP queued_successful_poll Total number of poll requests that did poll a message successfully.
# TYPE queued_successful_poll counter
queued_successful_poll 1000000 1672977507603

# HELP queued_successful_push Total number of push requests that did push a message successfully.
# TYPE queued_successful_push counter
queued_successful_push 1000000 1672977507603

# HELP queued_suspended_delete Total number of delete requests while the endpoint was suspended.
# TYPE queued_suspended_delete counter
queued_suspended_delete 0 1672977507603

# HELP queued_suspended_poll Total number of poll requests while the endpoint was suspended.
# TYPE queued_suspended_poll counter
queued_suspended_poll 0 1672977507603

# HELP queued_suspended_push Total number of push requests while the endpoint was suspended.
# TYPE queued_suspended_push counter
queued_suspended_push 0 1672977507603

# HELP queued_vacant How many more messages that can currently be pushed into the queue.
# TYPE queued_vacant gauge
queued_vacant 1000000 1672977507603
```
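
Both endpoints can be spot-checked with curl (assuming the default localhost:3333 address used throughout):

```
# Current build version.
curl -s localhost:3333/healthz

# Watch queue depth without reading the whole metrics page.
curl -s localhost:3333/metrics | grep '^queued_available '
```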

POST /suspend can suspend specific API endpoints, useful for temporary debugging or emergency intervention without stopping the server. It takes a request body like:

```json
{ "delete": true, "poll": true, "push": false }
```

Set a property to true to disable that endpoint, and false to re-enable it. Disabled endpoints will return 503 Service Unavailable. Use GET /suspend to get the currently suspended endpoints.
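
For example, to pause message consumption during an incident while still accepting new pushes (values follow the semantics above; host and port as elsewhere in this document):

```
# Suspend poll and delete, keep push enabled.
curl -s -X POST localhost:3333/suspend \
  -H 'Content-Type: application/json' \
  -d '{"delete": true, "poll": true, "push": false}'

# Check which endpoints are currently suspended.
curl -s localhost:3333/suspend

# Re-enable everything.
curl -s -X POST localhost:3333/suspend \
  -H 'Content-Type: application/json' \
  -d '{"delete": false, "poll": false, "push": false}'
```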

## Important details

## Development

queued is a standard Rust project, and does not require any special build tools or system libraries.

There are calls to `pread` and `pwrite`, so it won't build for targets without those.

As the design and functionality is quite simple, I/O tends to become the bottleneck at scale (at smaller throughputs, the performance is more than enough). This is important to know when profiling and optimising. For example, CPU flamegraphs may suggest that the write syscall is the dominant cost (e.g. kernel and file system locks), but if queued is compiled with the unsafe_fsync_none feature, performance can increase dramatically, indicating that the CPU flamegraphs were missing I/O from the picture; off-CPU flamegraphs may be more useful. This is expected from the nature of queue service workloads: a very high rate of small writes to non-contiguous areas. Queues are expected to have high throughput, and every operation (push, poll, delete) is a write; message contents are usually small; and because messages get deleted, updated, and retried at varying durations, the storage layout tends towards high fragmentation without algorithmic rebalancing or frequent defragmentation.
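
For example, when profiling from a source checkout, a sync-free build can be compared against a normal build under the same synthetic workload (an illustrative invocation; as the feature name says, never deploy a binary built this way):

```
# Profiling-only build that skips fsync/fdatasync (the unsafe_fsync_none
# feature mentioned above). The throughput gap between this binary and a
# normal build approximates how much wall time is really spent in I/O.
cargo build --release --features unsafe_fsync_none
```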

Clients in example-client can help with running synthetic workloads for stress testing, performance tuning, and profiling.

As I/O becomes the main focus of optimisation, keep in mind:

- write syscall data is immediately visible to all read syscalls, in all threads and processes.
- write syscalls can be reordered, unless fdatasync/fsync is used, which acts as both a barrier and a cache flusher. This means that a fast sequence of write (1: create) -> read (2: inspect) -> write (3: update) can actually cause 1 to clobber 3. Ideally there would be two different APIs for creating a barrier and flushing the cache.