This paper gives performance benchmarks for Isode's M-Switch X.400, a high-performance X.400 Message Transfer Agent. M-Switch X.400 is deployed by Isode customers in a number of solution areas:
- As a special-purpose gateway, to integrate with other services, for example AFTN in aviation markets and ACP127 in military markets, using one of two X.400 Gateway APIs.
- As a MIXER gateway to convert between X.400 and Internet Mail.
- As a "Backbone MTA", whose role is primarily to connect together other X.400 MTAs, acting as a P1 switch, providing high performance switching and robust message routing.
- As a "Local MTA" (or "Departmental MTA") used to provide X.400 support to end users by use of User Agents. In this situation, M-Switch X.400 will often be used in conjunction with M-Store X.400 to provide mailbox storage for X.400 P7 User Agents.
- As a "Border MTA", to provide connection between different X.400 domains using capabilities such as authorization and anti-virus.
The benchmarks reinforce our belief that M-Switch X.400 is substantially faster than any other X.400 MTA.
X.400 P7 can be used in many ways, and it would be impractical and confusing to test all combinations. This section summarizes the common usage models on which the testing here is based.
- Full message fetch. All clients we are aware of operate by fetching complete messages, and not by selective component access.
- List based on message status. Although P7 permits many ways to select messages, most clients select messages based on message status and this is the focus of testing here.
- Empty mailbox. Many deployments work by the client fetching messages, and then deleting them soon afterward, so that the mailbox is kept small or empty.
- Large mailbox as archive. A common approach is for clients to fetch new messages and then leave them in the In Box (or Out Box). Messages will be deleted by the administrator, typically after an archive period. Messages in the archive will primarily be used for recovery purposes.
- Periodic fetch, where a client will connect from time to time, list new messages and fetch them all.
- Auto-alert with immediate fetch, where a client remains bound with auto-alert set. When an auto-alert is received, the message is immediately fetched.
The tests made are intended to cover all of these elements. More testing is done on "Large Mailbox", as this is more demanding than "Empty Mailbox".
Testing was done on the following hardware:
- Dual 2.2 GHz Opteron.
- 4 GByte Memory
- 135 GByte SCSI RAID (0+1 configuration) with write-back cache
- Red Hat Linux
The core tests were done on a message store with the following configuration:
- 100 Mailboxes
- 11,000 messages in each mailbox, with 1,000 of status new and 10,000 of status processed.
- Total 1,100,000 messages stored.
- Messages 10 kBytes each.
This message store configuration is intended as a realistic test of the large mailbox model. The M-Store X.400 Server and the associated M-Vault X.500 index server were run on the same server. For tests involving the M-Switch X.400 MTA, this was also operating on the same server.
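As a rough sanity check, the storage footprint implied by this configuration can be computed directly. The mailbox and message counts below come from the text; the resulting ~11 GByte figure is derived here, not measured:

```python
# Back-of-envelope check of the benchmark store configuration described
# above. All input figures come from the text; the totals are derived.

mailboxes = 100
new_per_mailbox = 1_000         # messages of status "new"
processed_per_mailbox = 10_000  # messages of status "processed"
message_size_kb = 10

per_mailbox = new_per_mailbox + processed_per_mailbox  # 11,000
total_messages = mailboxes * per_mailbox               # 1,100,000
total_storage_gb = total_messages * message_size_kb / 1_000_000

print(per_mailbox, total_messages, round(total_storage_gb, 1))
# 11000 1100000 11.0
```

This confirms the stated total of 1,100,000 messages, occupying roughly 11 GBytes of message content on the 135 GByte RAID array.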
The first set of tests looks at each of the core P3 and P7 operations, and provides measurements for each, using the setup described above.
Message Delivery from MTA
| Number of Mailboxes delivered to | Delivery Rate (messages/sec) |
| --- | --- |

Message delivery rate is independent of the number of mailboxes delivered to.
| List operation | Rate (messages/sec) |
| --- | --- |
| List all (uncached) | 2,248 |
| List unread (uncached) | 2,762 |
| List all (cached) | 3,459 |
The list test measured the rate at which message sequence numbers are returned by the list command. Listing all messages would be important when recovering messages from an archive. M-Store X.400 builds a cache that will improve performance of the list operation. The last test shows the effect of this cache.
The rate for the summarize operation measures the number of messages per second that are analyzed in order to produce the summarize result.
| Fetch operation | Rate (messages/sec) |
| --- | --- |
| Fetch Processed Messages (1 client) | 485 |
| Fetch Processed Messages (10 clients) | 1,188 |
| Fetch Unread Messages (1 client) | 155 |
| Fetch Unread Messages (10 clients) | 294 |
Message fetching tests were performed with one client and with 10 clients. It can be seen that a single client can use about 50% of the maximum store capacity. Fetching a processed message interacts only with the core M-Store X.400 process. When an unread message is fetched, the index server must also be updated to reflect the change in message status; this update reduces overall fetch performance.
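The "about 50%" observation can be made concrete by dividing the single-client rate by the 10-client aggregate rate for each message status. The rates below are taken from the fetch table; only the ratios are computed here:

```python
# Share of the 10-client aggregate fetch rate achieved by one client.
# Rates (messages/sec) are from the fetch table above; ratios are derived.

rates = {
    "processed": {"1 client": 485, "10 clients": 1188},
    "unread":    {"1 client": 155, "10 clients": 294},
}

for status, r in rates.items():
    share = r["1 client"] / r["10 clients"]
    print(f"{status}: single client achieves {share:.0%} of the 10-client rate")
```

The ratios come out at roughly 41% for processed messages and 53% for unread messages, consistent with the "about 50%" summary in the text.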
| Submission operation | Rate (messages/sec) |
| --- | --- |
| Message Submission (1 client) | 500 |
| Message Submission (10 clients) | 405 |
| Message Submission, with store on submission (1 client) | 188 |
| Message Submission, with store on submission (10 clients) | 166 |
Message submission is done using a P7 client to M-Store X.400, which in turn submits to M-Switch X.400 using P3. Tests were done with multiple clients, and it can be seen that additional clients slightly reduce overall submission rate. Tests were also performed using store on submission, where M-Store X.400 files a message copy in the Out Box that can be accessed with P7.
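The cost of store on submission can also be expressed as a per-message time. The single-client rates below are from the submission table; the per-message figures are derived, illustrative arithmetic only:

```python
# Per-message submission cost implied by the single-client rates above.
# Rates are from the submission table; the millisecond figures are derived.

plain_rate = 500  # messages/sec, plain submission
store_rate = 188  # messages/sec, with store on submission

plain_ms = 1000 / plain_rate       # 2.0 ms per message
store_ms = 1000 / store_rate       # ~5.3 ms per message
overhead_ms = store_ms - plain_ms  # ~3.3 ms to file the Out Box copy

print(round(plain_ms, 1), round(store_ms, 1), round(overhead_ms, 1))
# 2.0 5.3 3.3
```

On this data, filing the Out Box copy roughly triples the per-message submission cost for a single client.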
The second set of tests looks at operation combinations that reflect the primary models of message handling (periodic fetch and auto-alert) for empty and large mailboxes. For large mailboxes, the configuration described above was used. The operation sequences for the two models are:
- Periodic Fetch: Deliver; List Unread; Fetch; Delete.
- Auto-Alert: Deliver; Auto-Alert; Fetch; Delete.
Performance for the empty mailbox is better, as would be expected, and is similar for both models. For large mailboxes, the auto-alert model gives significantly better performance, comparable to empty-mailbox performance.
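The two client-side sequences above can be sketched as loops. The client object and its method names (`list`, `fetch`, `delete`, `alerts`) are hypothetical stand-ins used to make the sequences concrete; they are not an Isode API:

```python
# Sketch of the two message-handling models tested above. The client
# object and all of its methods are hypothetical stand-ins for a real
# P7 client library, not Isode interfaces.

def periodic_fetch_round(client):
    """One Periodic Fetch round: List Unread; Fetch; Delete each message."""
    fetched = []
    for seq in client.list(status="unread"):  # list new messages only
        fetched.append(client.fetch(seq))     # full-message fetch
        client.delete(seq)                    # keep the mailbox small
    return fetched

def auto_alert_loop(client):
    """Auto-Alert model: remain bound and fetch each message as alerted."""
    for seq in client.alerts():               # yields on each auto-alert
        client.fetch(seq)                     # immediate fetch
        client.delete(seq)
```

The sketch makes the performance difference for large mailboxes plausible: the auto-alert model never needs to list against the full mailbox, while each periodic-fetch round begins with a list operation whose cost grows with mailbox size.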
All of the above tests were done with 10 kByte messages. M-Store X.400 is designed so that performance is not significantly affected by message size. One fetch test was repeated using 200 kByte messages.
| Fetch operation | Message size | Rate (messages/sec) |
| --- | --- | --- |
| Fetch New Messages (1 client) | 10 kByte | 155 |
| Fetch New Messages (1 client) | 200 kByte | 153 |
This indicates that performance is reasonably independent of message size.
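Expressed as data throughput rather than message rate, the same figures show how much more data the larger-message test moves. The message rates are from the table above; the byte rates are derived here:

```python
# Effective data throughput implied by the fetch-rate figures above.
# Message rates come from the table; the MByte/s values are derived.

def effective_mbyte_per_sec(size_kb, msgs_per_sec):
    """Convert a message rate at a given message size to MByte/s."""
    return msgs_per_sec * size_kb / 1000

for size_kb, rate in [(10, 155), (200, 153)]:
    print(f"{size_kb} kByte: {rate} msg/s "
          f"= {effective_mbyte_per_sec(size_kb, rate):.2f} MByte/s")
```

At 200 kByte the store moves roughly twenty times the data volume at nearly the same message rate, which supports the conclusion that performance is dominated by per-message costs rather than byte throughput at these sizes.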
Sizing and Scaling
The tests have been performed on a reasonably large system, and the numbers will easily meet the requirements of many deployments. This section first considers how the system will scale as various parameters increase:
- Number of mailboxes. There is very little dependency on the number of mailboxes, and it is anticipated that the server will scale well to support thousands of mailboxes, without significant performance impact.
- Mailbox size. The large mailbox model tested mailboxes of 10,000 messages. Moderate increase of this size is not expected to have significant impact. Some tests were done for a mailbox of 1,000,000 messages. This is usable, but would not be generally recommended for the current product version.
- Total number of messages. There are scaling implications for the total number of messages, particularly for the index server. It is anticipated that use for up to 10,000,000 messages in total would not give significant degradation in performance.
Where a very high performance system is needed, a number of options are available:
- Use a higher specification server, and in particular higher performance disks.
- Split load across multiple servers.
- Use of multiple file systems.
- Operate M-Store X.400 and the index server (M-Vault X.500) on separate servers.