Pivotal RabbitMQ v3.x

Persistence Configuration

The RabbitMQ persistence layer is intended to give good results in the majority of situations without any configuration. However, some tuning can be useful for particular workloads. This page explains how you can configure the persistence layer. You are advised to read it in full before taking any action.

How persistence works

First, some background: both persistent and transient messages can be written to disk. Persistent messages will be written to disk as soon as they reach the queue, while transient messages will be written to disk only so that they can be evicted from memory while under memory pressure. Persistent messages are also kept in memory when possible and only evicted from memory under memory pressure. The “persistence layer” refers to the mechanism used to store messages of both types to disk.

On this page we say “queue” to refer to an unmirrored queue or a queue master or a queue slave. Queue mirroring happens “above” persistence.

The persistence layer has two components: the queue index and the message store. The queue index is responsible for maintaining knowledge about where a given message is in a queue, along with whether it has been delivered and acknowledged. There is therefore one queue index per queue.

The message store is a key-value store for messages, shared among all queues in the server. Messages (the body, and any properties and/or headers) can either be stored directly in the queue index, or written to the message store. There are technically two message stores (one for transient and one for persistent messages) but they are usually considered together as “the message store”.

Memory costs

Under memory pressure, the persistence layer tries to write as much out to disk as possible, and remove as much as possible from memory. There are some things however which must remain in memory:

  • Each queue maintains some metadata for each unacknowledged message. The message itself can be removed from memory if its destination is the message store.
  • The message store needs an index. The default message store index uses a small amount of memory for every message in the store.
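
To put that second point in perspective, here is a back-of-the-envelope sketch; the per-entry cost of roughly 50 bytes is an illustrative assumption, not a documented figure:

```python
# Rough model of default message store index memory usage. The 50-byte
# per-entry cost is an assumption for illustration only.
BYTES_PER_INDEX_ENTRY = 50

def index_memory_bytes(messages_in_store, bytes_per_entry=BYTES_PER_INDEX_ENTRY):
    """Approximate memory pinned by index entries for messages in the store."""
    return messages_in_store * bytes_per_entry

# Under this assumption, ten million messages in the store keep around
# 500 MB of index entries in memory even when the messages themselves
# have been paged out to disk.
print(index_memory_bytes(10_000_000))  # 500000000
```

The point of the model is that the index cost scales with the number of messages in the store, regardless of how many have been paged out.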

Messages in the queue index

There are advantages and disadvantages to writing messages to the queue index.

Advantages:

  • Messages can be written to disk in one operation rather than two; for tiny messages this can be a substantial gain.
  • Messages that are written to the queue index do not require an entry in the message store index and thus do not have a memory cost when paged out.

Disadvantages:

  • The queue index keeps blocks of a fixed number of records in memory; if non-tiny messages are written to the queue index then memory use can be substantial.
  • If a message is routed to multiple queues by an exchange, the message will need to be written to multiple queue indices. If such a message is written to the message store, only one copy needs to be written.
  • Unacknowledged messages whose destination is the queue index are always kept in memory.

The intent is for very small messages to be stored in the queue index as an optimisation, and for all other messages to be written to the message store. This is controlled by the configuration item queue_index_embed_msgs_below. By default, messages with a serialised size of less than 4096 bytes (including properties and headers) are stored in the queue index.
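
The threshold can be set in the configuration file. A minimal sketch in the classic Erlang-term format used by RabbitMQ 3.x, using an illustrative value of 1024 bytes rather than the default 4096:

```erlang
%% rabbitmq.config - embed only messages serialised to fewer than
%% 1024 bytes in the queue index (1024 is an illustrative value;
%% the default is 4096).
[
  {rabbit, [
    {queue_index_embed_msgs_below, 1024}
  ]}
].
```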

Each queue index needs to keep at least one segment file in memory when reading messages from disk. A segment file contains records for 16,384 messages. Therefore, be cautious if increasing queue_index_embed_msgs_below; even a small increase can lead to a large amount of memory being used.
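
The interaction between the embedding threshold and segment files can be sketched with some simple arithmetic; this is a back-of-the-envelope model that ignores per-record overhead:

```python
# Each queue index segment file holds records for 16,384 messages. When
# messages are embedded in the index, a segment read into memory costs
# roughly records_per_segment * average_embedded_size bytes (ignoring
# per-record overhead, so real usage is somewhat higher).
RECORDS_PER_SEGMENT = 16_384

def segment_memory_bytes(avg_embedded_size):
    """Approximate memory for one in-memory segment of embedded messages."""
    return RECORDS_PER_SEGMENT * avg_embedded_size

# With ~4 KiB messages embedded (around the default threshold), a single
# in-memory segment accounts for about 64 MiB:
print(segment_memory_bytes(4096) // (1024 * 1024))  # 64
```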

Accidentally limited persister performance

It is possible for persistence to underperform because the persister is limited in the number of file handles or async threads it has to work with. In both cases this can happen when you have a large number of queues which need to access the disk simultaneously.

Too few file handles

The RabbitMQ server is typically limited in the number of file handles it can open (on Unix, anyway). Every running network connection requires one file handle, and the rest are available for queues to use. If there are more disk-accessing queues than file handles after network connections have been taken into account, then the disk-accessing queues will share the file handles among themselves; each gets to use a file handle for a while before it is taken back and given to another queue.

This prevents the server from crashing due to there being too many disk-accessing queues, but it can become expensive. The management plugin can show I/O statistics for each node in the cluster; as well as showing rates of reads, writes, seeks and so on it will also show a rate of “reopens” - the rate at which file handles are recycled in this way. A busy server with too few file handles might be doing hundreds of reopens per second - in which case its performance is likely to increase notably if given more file handles.
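
Raising the limit is done outside RabbitMQ itself. A sketch for Unix-like systems, where 65536 is an illustrative value that should be sized to your expected connection count plus the number of disk-accessing queues:

```shell
# Raise the open-file limit in the shell (or service definition) that
# launches the server, before starting it. 65536 is illustrative.
ulimit -n 65536

# A running node's limit and current usage appear in the
# file_descriptors section of the status output:
rabbitmqctl status | grep -A 4 file_descriptors
```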

Too few async threads

The Erlang virtual machine creates a pool of async threads to handle long-running file I/O operations. These are shared among all queues. Every active file I/O operation uses one async thread while it is occurring. Having too few async threads can therefore hurt performance.

Note that the situation with async threads is not exactly analogous to the situation with file handles. If a queue executes a number of I/O operations in sequence it will perform best if it holds onto a file handle for all the operations; otherwise we may flush and seek too much and use additional CPU orchestrating it. However, queues do not benefit from holding an async thread across a sequence of operations (in fact they cannot do so).

Therefore there should ideally be enough file handles for all the queues that are executing streams of I/O operations, and enough async threads for the number of simultaneous I/O operations your storage layer can plausibly execute.

It’s less obvious when a lack of async threads is causing performance problems. (It’s also less likely in general; check for other things first!) Typical symptoms of too few async threads include the number of I/O operations per second dropping to zero (as reported by the management plugin) for brief periods when the server should be busy with persistence, while the reported time per I/O operation increases.

The number of async threads is configured with the +A argument to the Erlang virtual machine, and is typically set through the environment variable RABBITMQ_SERVER_ERL_ARGS. The default value is +A 30. It is a good idea to experiment with several different values before settling on a change.
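
For example, in rabbitmq-env.conf; the value 128 below is just an illustrative starting point for experimentation, not a recommendation:

```shell
# rabbitmq-env.conf - raise the async thread pool from the default of 30.
RABBITMQ_SERVER_ERL_ARGS="+A 128"
```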

Alternate message store index implementations

As mentioned above, each message which is written to the message store uses a small amount of memory for its index entry. The message store index is pluggable in RabbitMQ, and other implementations are available as plugins which can remove this limitation. (The reason we do not ship any with the server is that they all use native code.) Note that such plugins typically make the message store run more slowly.
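
The index implementation is selected via the msg_store_index_module setting. A sketch in the classic configuration format, where the module name is a placeholder to be replaced with whatever module your installed plugin provides:

```erlang
%% rabbitmq.config - use an alternate message store index implementation.
%% my_msg_store_index is a placeholder module name; substitute the module
%% shipped by the plugin you have installed and enabled.
[
  {rabbit, [
    {msg_store_index_module, my_msg_store_index}
  ]}
].
```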