April 2013


This whitepaper provides performance benchmark information for the R16.0 release of Isode's M-Vault directory server. R16.0 standardises on the transactional in-memory database introduced in R15.2, which had performance improvements as a primary goal. This paper compares R16.0 M-Vault performance to R15.1, which used the older on-disk database. R15.1 performance, with comparisons to other directory servers, is described in the whitepaper [M-Vault Performance].


Database Architecture

M-Vault uses an in-memory transactional database, which replaces the older on-disk database. The change is driven by several observations:

  1. When the original M-Vault database was designed, large directory servers needed to keep data on disk, as memory was too expensive to allow all data to be held in memory. Memory prices have dropped significantly, and systems running directory servers can be expected to have sufficient memory to hold all of the data.
  2. To obtain top performance, M-Vault 15.1 (in common with other directory servers) caches all data in memory. This approach leads to multiple layers of in-memory caching, which incurs performance overhead.
  3. An on-disk database limits write performance, particularly because of the need to maintain multiple on-disk indexes, and also as a result of database housekeeping activities such as checkpointing.
  4. The database layer contained substantial complexity that was not actually needed.

The basic approach taken in the new database is to store data on disk in a simple robust format, and to load it into an optimized memory structure on directory server startup. Changes are written out to transaction logs (so that updates are transactional, and never lost). From time to time, these transaction logs are merged into the core database.
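The write path described above (apply the change in memory, append it to a transaction log, and periodically merge the logs into a new snapshot) can be sketched as follows. This is an illustrative model only, not Isode's implementation; the file names and JSON format are invented for the example.

```python
import json
import os

class MemoryDB:
    """Toy model of a transactional in-memory store with a write-ahead log."""

    def __init__(self, snapshot="snapshot.json", log="txn.log"):
        self.snapshot_file, self.log_file = snapshot, log
        self.data = {}
        # Startup: load the last snapshot, then replay any pending log records.
        if os.path.exists(snapshot):
            with open(snapshot) as f:
                self.data = json.load(f)
        if os.path.exists(log):
            with open(log) as f:
                for line in f:
                    key, value = json.loads(line)
                    self.data[key] = value

    def modify(self, key, value):
        # Append to the log (and force it to disk) before applying the change,
        # so a committed update is never lost even on an immediate crash.
        with open(self.log_file, "a") as f:
            f.write(json.dumps([key, value]) + "\n")
            f.flush()
            os.fsync(f.fileno())
        self.data[key] = value

    def snapshot(self):
        # Merge the logged changes into a new snapshot, then truncate the log.
        tmp = self.snapshot_file + ".tmp"
        with open(tmp, "w") as f:
            json.dump(self.data, f)
        os.replace(tmp, self.snapshot_file)   # atomic rename on POSIX
        open(self.log_file, "w").close()
```

The key property is visible in `modify`: durability comes from the sequential log append, not from updating on-disk indexes, which is why this design favours write performance.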

The primary downside of this approach is that data must be loaded on startup, so startup is not instantaneous. The measurements below look carefully at this.

An alternative approach to a memory database architecture would be to use a memory-mapped file. This was not chosen for a number of reasons:

  • Although it gives immediate startup, in practice there is a “ramp up” time, as data gets paged in.
  • It is difficult to get good write performance with this architecture.
  • The approach is not resilient to errors. Isode abandoned use of this approach in another product because of this.

The approach taken by Isode gives a number of advantages in addition to the high performance noted in this paper.

  • The database related code is significantly simplified, which will improve product supportability and resilience.
  • The simple on-disk format makes direct database analysis straightforward.
  • Database backup can simply back up the files, unlike the on-disk database, which needed a special backup procedure.

Test Setup

The basic test setup is simply a test client connecting to an M-Vault server. In order to obtain maximum read/search performance, significant care is needed in the configuration of the client. We also needed a test tool that would run efficiently on a system somewhat less powerful than the server under test (some open source tools would need much faster machines to generate sufficient load). For these reasons, we used Isode-developed test clients.

The server used for tests is:

  • Model : HP Proliant DL585
  • CPU : 4 x Quad-Core AMD Opteron Processor 8356 (2.3GHz)
  • Disk : HP 146GB 10K SAS 2.5" Single Port
  • Memory: 64 GByte
  • OS: CentOS 5.5

This is a fairly fast server machine, about 5 years old. It is a reasonable server to do benchmarking on, although on a new machine we would anticipate significantly faster performance.

For the tests, the M-Vault server was configured as follows:

  • Configured with equality indexes on 'common name', 'surname' and 'mail'. This is appropriate for a simple directory setup. In a complex directory deployment, more indexes will be needed. This would impact performance, in particular write performance.
  • Default access controls in all cases (except where noted). Access control checks can have significant impact on directory performance. The Isode default access control is appropriate for many types of directory service.
  • Audit logging disabled (except where noted). Audit logging has a significant effect on directory performance. Most tests were done without audit logging, as this gives a useful measure of directory performance. Measurements are also given to show the impact of audit logging.
  • For R15.1: Unlimited entry cache. We would recommend this for a large deployment on appropriate hardware.
  • For R16.0: 16 load threads on startup.
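The equality indexes configured above can be modelled as a map from a normalised attribute value to the set of matching entries. The sketch below illustrates the idea and why each extra index adds cost to every write; it is a simplification, not M-Vault's actual data structure.

```python
from collections import defaultdict

class EqualityIndex:
    """Maps normalised attribute values to the DNs of matching entries."""

    def __init__(self):
        self.index = defaultdict(set)

    def add(self, dn, value):
        self.index[value.lower()].add(dn)       # case-insensitive matching

    def remove(self, dn, value):
        self.index[value.lower()].discard(dn)   # must be maintained on every modify

    def search(self, value):
        # An indexed equality search is a single lookup rather than a full scan.
        return self.index.get(value.lower(), set())

# One index per indexed attribute ('common name', 'surname', 'mail' above);
# every modify must update each affected index, which is why adding indexes
# in a complex deployment reduces write performance.
idx = EqualityIndex()
idx.add("cn=Narida Valcourt-Zywiel,o=Test,c=BA", "Valcourt-Zywiel")
```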

The tests were conducted on directory servers containing 1 million and 10 million entries in a flat Directory Information Tree. The data is systematically generated. Example entry:

dn: cn=Narida Valcourt-Zywiel,o=Test,c=BA
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: top
objectClass: person
cn: Narida Valcourt-Zywiel
sn: Valcourt-Zywiel
mail: N.Valcourt.Zywiel@test.co.uk
userPassword:: MTIzNA==
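Entries of this shape can be generated systematically. A minimal sketch follows; the name lists and naming scheme here are invented for illustration and are not Isode's actual generator.

```python
import base64

# Hypothetical name pools; a real generator would use much larger lists
# to produce millions of distinct entries.
FIRST = ["Narida", "Tomas", "Eline"]
LAST = ["Valcourt-Zywiel", "Okafor", "Lindqvist"]

def make_entry(i):
    """Build one LDIF person entry, numbered systematically."""
    first = FIRST[i % len(FIRST)]
    last = LAST[(i // len(FIRST)) % len(LAST)]
    cn = f"{first} {last}"
    mail = f"{first[0]}.{last.replace('-', '.')}@test.co.uk"
    # The '::' form in LDIF means the value is base64-encoded.
    pw = base64.b64encode(b"1234").decode()
    return "\n".join([
        f"dn: cn={cn},o=Test,c=BA",
        "objectClass: inetOrgPerson",
        "objectClass: organizationalPerson",
        "objectClass: top",
        "objectClass: person",
        f"cn: {cn}",
        f"sn: {last}",
        f"mail: {mail}",
        f"userPassword:: {pw}",
    ])
```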

Search and Modify Performance

Read tests were done with a test client that maintains 64 open connections to the directory, and sends multiple (asynchronous) queries down each connection. Queries are made randomly across all of the entries. The client operates so that it provides maximum load on the server, but with load flow controlled to prevent over-loading the server.
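The client behaviour described above (many connections, multiple asynchronous queries in flight per connection, with flow control) can be sketched using Python's asyncio. This is a structural illustration only, not Isode's test client: the LDAP search is stubbed out, and the pipeline depth and per-connection query count are assumed values.

```python
import asyncio
import random

CONNECTIONS = 64           # as in the test setup above
PIPELINE_DEPTH = 8         # assumed in-flight queries per connection
ENTRIES = 1_000_000
QUERIES_PER_CONN = 100     # kept small so the sketch runs quickly

async def fake_search(entry_id):
    """Stand-in for an asynchronous LDAP search round trip."""
    await asyncio.sleep(0)  # yield to the event loop, as a real query would
    return entry_id

async def connection_worker(results):
    # Flow control: at most PIPELINE_DEPTH queries outstanding on this
    # connection, so the server stays fully loaded but is never overwhelmed.
    sem = asyncio.Semaphore(PIPELINE_DEPTH)

    async def one_query():
        try:
            # Queries are made randomly across all of the entries.
            results.append(await fake_search(random.randrange(ENTRIES)))
        finally:
            sem.release()

    tasks = []
    for _ in range(QUERIES_PER_CONN):
        await sem.acquire()                      # wait if the pipeline is full
        tasks.append(asyncio.create_task(one_query()))
    await asyncio.gather(*tasks)

async def run_load():
    results = []
    await asyncio.gather(*(connection_worker(results)
                           for _ in range(CONNECTIONS)))
    return len(results)
```

The semaphore is what provides the "load flow controlled" behaviour: the client only issues a new query once an earlier one on the same connection has completed.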

Modification tests modify the value of one attribute within an entry.

Tests were made with 1 million entries and 10 million entries on R16.0 (transaction memory database) and R15.1 (on disk database). All results are in operations per second.

                                                  1 million entries   10 million entries
  Max search rate (no attributes returned)
  Max search rate (all attributes returned)
  Max search rate (no access control, no attrs)
  Max search rate (with audit logging, no attrs)
  Max modify rate
  Search during concurrent search/modify
  Modify during concurrent search/modify

It can be seen that:

  • The new database gives about a 20% improvement in search performance over R15.1.
  • Modify rate is significantly faster.
  • There is moderate degradation of R16.0 performance at 10 million entries, but broadly performance holds up well.
  • Returning attributes and standard access control does not significantly affect performance.
  • Audit logging is performance limiting, and should be used with care.

The first tests tested a single operation. A real server will have a mix of search and modify operations. We made tests of concurrent search and modify on the 1 million entry server. The ratio you get will depend on the applied load (i.e., you can increase search rate at the expense of modify rate and vice versa). These tests were conducted with modify load around half of the maximum. It can be seen that this is achieved with good search throughput.

Startup and Snapshot

The next set of measurements looks at the startup time of the R16.0 M-Vault. These are compared with the R15.1 ramp-up time needed to reach maximum performance.

                        R16.0 Startup   R16.0 Startup (new hardware)   R15.1 Ramp Up Time
  1 million entries       51 seconds            15 seconds                 140 seconds
  10 million entries     560 seconds                —                    2,400 seconds

It takes a moderate time to load M-Vault, but it is useful to note:

  • This loading time is significantly less than the ramp-up time in R15.1, and this is an important operational improvement for a heavily loaded server.
  • Many M-Vault servers have a fraction of a million entries, and for them the startup time will have negligible impact.
  • A large directory will typically be provided by a number of servers, and so one server taking time to start up will not be an operational issue.

Startup time is CPU intensive. The tests were done on a machine that is about five years old. We repeated the load on a new machine (with insufficient memory to do the 10 million entry tests). This machine has 2 x Quad-Core Intel Core i7 860 (2.80GHz). A large directory server is likely to be deployed on a machine with at least this level of CPU specification.

The following table shows the stable M-Vault process size in gigabytes:

                  1 million entries   10 million entries
  Process Size

It can be seen that the memory database gives a somewhat smaller process size, and much better linearity of growth. The cost is about 2 kBytes of process size per directory entry. To obtain good performance, it is important to have more physical memory than the process size.
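The 2 kBytes/entry figure gives a direct sizing rule of thumb. The small calculation below is a rough guide only; a real process also carries fixed overheads beyond the per-entry cost.

```python
KBYTE = 1024

def estimated_process_size_gb(entries, bytes_per_entry=2 * KBYTE):
    """Rough process-size estimate from the ~2 kByte/entry figure above."""
    return entries * bytes_per_entry / KBYTE**3

# 10 million entries at 2 kBytes each is roughly 19 GB of process size,
# so the 64 GB test server holds the whole database comfortably in memory.
size_10m = estimated_process_size_gb(10_000_000)
```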

When M-Vault makes changes, these are written into a transaction log. From time to time the transaction logs will be merged and a new snapshot created. At startup time, transaction logs load at about half the speed of the database (as the logs must be processed sequentially), so it is important to do this snapshot update at intervals appropriate to the update rate. M-Vault will create snapshots with configurable timing, which is by default once per day. By default it will also create a snapshot on shutdown (which is faster than the normal snapshot writing process).
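Because log records replay at about half the rate at which snapshot data loads, the startup cost of deferring a snapshot can be estimated. The arithmetic below is illustrative only, using the 10 million entry startup figure from the table above and an assumed modification rate; it treats the replay rate as comparable per record to the per-entry load rate.

```python
# From the startup table above: 10 million entries load in 560 seconds.
SNAPSHOT_LOAD_RATE = 10_000_000 / 560       # entries per second
LOG_REPLAY_RATE = SNAPSHOT_LOAD_RATE / 2    # logs load at about half that speed

def startup_seconds(entries, pending_log_records):
    """Estimated startup: load the snapshot, then replay the pending log."""
    return (entries / SNAPSHOT_LOAD_RATE
            + pending_log_records / LOG_REPLAY_RATE)

# A day of updates at an assumed 5 modifications/second (432,000 records)
# adds under a minute to startup, which the default daily snapshot reclaims.
extra = (startup_seconds(10_000_000, 5 * 86_400)
         - startup_seconds(10_000_000, 0))
```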

Many servers have snapshotting and garbage collection activities, and these sometimes have significant performance impact. The following tests were run while applying transaction logs representing 462,000 modifications to a 10 million entry directory. The snapshot process took 504 seconds (8.4 minutes). Performance tests were run during the snapshot:

                                             Normal Operation   During Snapshot
  Max search rate (no attributes returned)        120,000           107,000
  Max modify rate                                  25,800            16,000
  Search during concurrent search/modify           86,000            85,000
  Modify during concurrent search/modify           11,600            10,500

It can be seen that although performance is degraded during the snapshot process, high performance is still achieved.

We also measured the time taken to load 10 million entries direct to database, which can be an important bootstrap operation. For R16.0 it loaded in 22 minutes, as opposed to 40 minutes for R15.1.


Conclusions

This paper has provided benchmarks for M-Vault R16.0 using an in-memory transactional database. The new database gives significant improvements over the previous release, and delivers high performance for both search and modify operations.