This whitepaper is the second of a set of papers reporting on measurements made of MMHS (Military Message Handling Systems) operating over HF Radio and Satellite. This paper looks at operation over Satellite networks, and compares the performance of STANAG 4406 Annex E which is designed for constrained bandwidth networks with STANAG 4406 Annex A, which is intended for high speed networks.

The paper shows that use of STANAG 4406 Annex E gives substantial performance improvements.


MMHS Tactical Protocol Architectures

The diagram below shows the protocol layers for MMHS over HF Radio and Satellite. Details on these protocols and deployment configurations are given in the Isode whitepaper [Military Messaging over HF Radio and Satellite using STANAG 4406 Annex E].

Protocol layers - MMHS over HF Radio and Satellite

Encoding and compression of the Message Format (top box) is described in [Measuring MMHS Performance over HF Radio and Satellite: STANAG 4406 Annex E Encoding and Compression], and the data from that whitepaper is used in the throughput analysis of this whitepaper. This paper looks at the performance of the protocols used for Satellite communication.

Protocol Comparison

This paper presents analysis of the ACP 142 protocol stack used by STANAG 4406 Annex E, and the "full stack" STANAG 4406 Annex A protocols. As both approaches can be used, and Annex A is widely deployed over IP, this comparison is useful. It shows the performance gains that can be achieved by moving from Annex A to Annex E.

Measurement Approach

The initial plan for this paper was to make measurements over a Satellite link. We noted that Satellite networks have the following characteristics:

  1. Reliability of transmission is very high (unlike HF Radio). A typical bit error rate is 10⁻⁷, which gives an IP packet loss rate of around 0.05%. Packet loss on the Satellite will not have a significant effect on performance.
  2. Latency of a typical link is around 0.5 seconds. This has some impact on performance, which is discussed later. However, it does not have a major dynamic effect on the overall performance of the protocols concerned.
  3. The protocol exchanges are relatively straightforward and well defined.

Because of this, we concluded that a broadly theoretical analysis would give useful results. While it would be desirable to back this analysis with end-to-end measurements over a real Satellite link, we believe that the numbers presented here are broadly accurate. The protocol overheads used in the analysis were obtained by transferring real messages over these protocol stacks and measuring them with the Wireshark protocol analysis tool.

Throughput Analysis

For both STANAG 4406 Annex E (with ACP 142) and STANAG 4406 Annex A stacks, analysis goes down to the IP level, including the overhead of the IP packet headers. Overheads of any modem or other supporting protocols are not considered, but these would be expected to be similar for both approaches.

ACP 142 (used by STANAG 4406 Annex E)

ACP 142 provides reliable multicast of data. The analysis here looks at unicast transmission, which gives a direct comparison to Annex A. Multicast is discussed separately.

ACP 142 is connectionless, and so there is no per connection overhead. A short message will be transferred in four packets, which together carry 208 bytes of protocol overhead, comprising:

  • ACP 142: 96 bytes (46%)
  • UDP: 32 bytes (15%)
  • IPv4: 80 bytes (39%)

Longer messages will need additional data packets. Each additional data packet has a 44-byte overhead comprising:

  • ACP 142: 16 bytes (36%)
  • UDP: 8 bytes (18%)
  • IPv4: 20 bytes (45%)

The basic data transfer efficiency depends on the MTU (Maximum Transmission Unit) size, which is determined by the underlying network. The overhead for two common MTU sizes is analyzed in the table below:

MTU Size | Scope                                       | Overhead
1500     | Typical value for local network environment | 3%
500      | Typical value for wide area network         | 9%
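The steady-state figures in the table above follow directly from the 44-byte per-packet overhead. The sketch below (an illustrative calculation, not part of any standard; the function name is ours) shows the arithmetic:

```python
# Sketch: steady-state ACP 142 per-packet overhead as a fraction of the MTU.
# Assumes each full-size data packet carries the 44 bytes of ACP 142 + UDP +
# IPv4 headers measured above.

ACP142_PACKET_OVERHEAD = 44  # bytes: ACP 142 (16) + UDP (8) + IPv4 (20)

def data_overhead_percent(mtu: int) -> float:
    """Header bytes as a percentage of each MTU-sized packet."""
    return 100.0 * ACP142_PACKET_OVERHEAD / mtu

for mtu in (1500, 500):
    print(mtu, round(data_overhead_percent(mtu)))  # -> 3 and 9 (percent)
```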

STANAG 4406 Annex A (Full Stack)

STANAG 4406 Annex A uses the standard X.400 protocol stack over TCP/IP, referred to as "full stack" communication. The following numbers are based on protocol measurements.

There is a per connection overhead of 1137 bytes comprising:

  • IPv4: 300 bytes (26%)
  • TCP: 316 bytes (28%)
  • OSI Stack: 233 bytes (20%)
  • X.400 including RTS: 288 bytes (25%)

There is a per message overhead of 325 bytes comprising:

  • IPv4: 120 bytes (37%)
  • TCP: 120 bytes (37%)
  • OSI Stack: 55 bytes (17%)
  • X.400 including RTS: 30 bytes (9%)

For data transfer (of large messages) there is an overhead split as follows:

  • IPv4: 49.4%
  • TCP: 49.4%
  • OSI Stack: 1.2%

The basic data transfer efficiency depends on the MTU (Maximum Transmission Unit) size, which is determined by the underlying network. The overhead for two common MTU sizes is analyzed in the table below:

MTU Size | Scope                                       | Overhead
1500     | Typical value for local network environment | 6%
500      | Typical value for wide area network         | 20%

Overhead Comparison

The following table uses the protocol stack data to give the on-wire data volume relative to the size of the message being transferred. 100% means that the data on the wire has the same volume as the data provided, and so the protocol overhead and the compression have "cancelled out".

Message Size       | Annex A: 1 msg/conn | Annex A: 10 msg/conn | Annex A: perm conn | Annex E: no comp | Annex E: 30% comp | Annex E: 50% comp | Annex E: 70% comp
300 byte           | 587%                | 246%                 | 208%               | 169%             | 139%              | 119%              | 99%
500 byte           | 392%                | 188%                 | 165%               | 142%             | 112%              | 92%               | 72%
1 kByte            | 266%                | 164%                 | 153%               | 130%             | 100%              | 80%               | 60%
1 kByte (MTU=1500) | 246%                | 144%                 | 133%               | 121%             | 91%               | 71%               | 51%
10 kByte           | 135%                | 124%                 | 123%               | 111%             | 81%               | 61%               | 41%
100 kByte          | 121%                | 120%                 | 120%               | 109%             | 79%               | 59%               | 39%
1 MByte            | 120%                | 120%                 | 120%               | 109%             | 79%               | 59%               | 39%
1 MByte (MTU=1500) | 106%                | 106%                 | 106%               | 103%             | 73%               | 53%               | 33%
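The table entries can be approximated from the per-connection, per-message and per-packet overheads given earlier. The sketch below is our own model, not taken from either standard: it assumes Annex E costs 208 bytes of base overhead plus 44 bytes per additional data packet (payload per packet being MTU minus 44), and that for small messages Annex A costs the 1137-byte connection overhead amortised over the messages sharing the connection, plus 325 bytes per message. It reproduces the small-message table entries to within rounding:

```python
from math import ceil

def annex_e_wire_percent(size: int, comp: float = 0.0, mtu: int = 500) -> float:
    """On-wire volume as a percentage of message size for STANAG 4406 Annex E."""
    payload = mtu - 44                     # data bytes per packet after headers
    data = size * (1.0 - comp)             # message size after compression
    packets = max(1, ceil(data / payload))
    wire = data + 208 + (packets - 1) * 44 # base overhead + extra data packets
    return 100.0 * wire / size

def annex_a_wire_percent(size: int, msgs_per_conn: int = 1) -> float:
    """Small-message approximation for Annex A; ignores per-segment TCP cost."""
    wire = size + 325 + 1137 / msgs_per_conn
    return 100.0 * wire / size

print(round(annex_a_wire_percent(300, 1)))    # -> 587
print(round(annex_e_wire_percent(300)))       # -> 169
print(round(annex_e_wire_percent(300, 0.7)))  # -> 99
print(round(annex_e_wire_percent(1000)))      # -> 130
```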

The protocol overheads depend on a number of factors, which are reflected in the table above:

  • Message Size. Calculations are made for messages of varying sizes. 300 bytes is the practical minimum size for a STANAG 4406 message.
  • Connection re-use. Annex A has an overhead when establishing a connection. Overhead is shown for:
    • A new connection for each message, which would be typical for a lightly loaded system.
    • Sharing of connections, averaging 10 messages per connection. Some connection sharing will likely happen on more heavily loaded systems. The level of connection sharing will depend on load pattern and MTA design.
    • Permanent connections. This will effectively eliminate the per connection overhead. Some MTAs such as Isode’s M-Switch support this feature.
  • MTU Size. Most of the calculations are done for an MTU value of 500, which is a likely value for a satellite system. Two lines are shown for an MTU of 1500, to show the effect of a larger MTU.
  • Compression. STANAG 4406 Annex E offers compression, and analysis is provided in the whitepaper [Measuring MMHS Performance over HF Radio and Satellite: STANAG 4406 Annex E Encoding and Compression]. For small messages, a compression of 30-40% would be expected. For larger messages, the nature of the data being carried will determine the achievable compression. For compressed binary formats such as JPEG, almost no compression will be achieved. For formats such as Word or Text, compression of up to 80% may be achieved.

Looking at the results, a number of points can be made:

  • For all options, Annex E gives better throughput.
  • The level of improvement varies very widely.
  • For small messages, with typical compression of 30-40%, the level of improvement will always be significant.
  • Increasing MTU size from 500 to 1500 has a measurable effect, which is most significant for larger messages.
  • Sharing connections with Annex A gives a significant improvement for small messages, and should be considered for traffic patterns with large numbers of small messages. Permanent associations are typically considered for improving latency, as discussed in the Isode whitepaper [Sending FLASH Messages Quickly: Techniques for Low Latency Message Switching and Precedence Handling].
  • For large messages, the performance improvements are dominated by the effectiveness of data compression. STANAG 4406 Annex E allows any agreed compression algorithm to be used, and it will be worth considering choice of compression algorithm appropriate to the data being transferred.

Benefits of STANAG 4406 Annex A

Given that Annex E always gives better throughput than Annex A, it is useful to understand the benefits that Annex A (full stack) offers, and why it does not make sense to always use Annex E. Benefits of Annex A are:

  • Peer Authentication. The most significant benefit of Annex A is that it provides MTA to MTA authentication, which accounts for a significant part of the protocol overhead. Annex E relies on network level authentication, and so can only be used over a trusted network infrastructure that does not require application level authentication. This constrains where Annex E can be used.
  • Knowledge that data is getting through. Annex A starts by establishing a connection, whereas Annex E is connectionless, and data is just “sent out”. This connectionless behavior can make error diagnosis harder, and leads to a situation where data gets sent out (consuming network bandwidth) but does not get received because the receiving system is not available or not working. This is a wasteful situation.
  • "Fair" behavior. When an IP network is overloaded, it drops packets. TCP is a "fair" application, and will reduce its rate of sending in light of packet loss. Thus TCP based applications will slow down and each take their share of capacity reduction. This behavior is important for robust operation of the Internet. ACP 142 is rate based, and will not generally change rate in light of packet loss. This means that when ACP 142 is sharing with TCP, packet loss will cause the TCP applications to slow down, while the ACP 142 application keeps going at the same rate. This behavior may be desirable in some situations and undesirable in others.
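The contrast in the last point can be illustrated with a toy simulation (our own illustration, not ACP 142 or TCP code): a TCP-like sender applies additive increase and halves its rate on loss, while a rate-based sender keeps its configured rate:

```python
# Toy model: reaction of a TCP-like AIMD sender and a fixed-rate (ACP
# 142-style) sender to packet loss events on a congested link.

def aimd_rate(rate: float, loss: bool) -> float:
    # TCP-style: halve the rate on loss, otherwise additive increase of 1 unit.
    return rate / 2 if loss else rate + 1

def fixed_rate(rate: float, loss: bool) -> float:
    # Rate-based sender: keeps its configured rate regardless of loss.
    return rate

tcp, acp = 10.0, 10.0
for loss in (False, True, False, True):
    tcp = aimd_rate(tcp, loss)
    acp = fixed_rate(acp, loss)
print(tcp, acp)  # the TCP-like sender has backed off; the rate-based one has not
```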

These points need to be considered in making a choice to use Annex A or Annex E.


Latency

The time to transfer a message will depend on:

  • Speed of the underlying network.
  • Volume of data being transferred.
  • Round trip delays for protocol handshaking.

The first two will depend on traffic and system. Note that a reduction in protocol overhead will reduce transfer time, as well as increasing the number of messages that can be transferred over a link of fixed capacity.

ACP 142 operates by transferring the message, with an ack coming back after the message has been sent onward. This means that the latency of transfer of a message is essentially the time taken to transfer the data in the message, plus the end to end latency (typically 0.5 seconds).

Use of Annex A (full stack) will introduce a number of round trip times:

  • Three round trip times (6 times end to end latency) to set up connection.
  • Two round trips (4 times end to end latency) to transfer a message.

This is not a major overhead (typically a few seconds for each message), but it may be desirable for some deployments to use STANAG 4406 Annex E in order to avoid this additional delay.
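The round-trip arithmetic above can be sketched as follows, assuming the typical 0.5 second one-way latency (the function names are illustrative):

```python
# Sketch of the latency arithmetic for a satellite link.

ONE_WAY = 0.5  # seconds; typical one-way latency of a satellite hop

def annex_e_latency(transfer_time: float) -> float:
    # Connectionless ACP 142: data transfer time plus one end-to-end delay.
    return transfer_time + ONE_WAY

def annex_a_latency(transfer_time: float, new_connection: bool = True) -> float:
    # 3 round trips (6 one-way delays) to set up the connection, plus
    # 2 round trips (4 one-way delays) to transfer the message.
    setup = 6 * ONE_WAY if new_connection else 0.0
    return transfer_time + setup + 4 * ONE_WAY

extra = annex_a_latency(1.0) - annex_e_latency(1.0)
print(extra)  # -> 4.5 seconds of additional delay for a new Annex A connection
```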


Multicast

Satellite is a broadcast medium, and ACP 142 can take advantage of this, provided that the Satellite IP infrastructure supports multicast. ACP 142 multicast adds minimal overhead for additional destinations. This means that where message distribution over satellite sends a significant amount of traffic to multiple destinations, ACP 142 and STANAG 4406 Annex E offer significant performance gains.


EMCON

EMCON (Emission Control), or Radio Silence, is important for some deployments. This is where a system receives data, but does not transmit. This will usually be done to hide location, but may also be due to the physical characteristics of the link (e.g., a Submarine receiving signals underwater). For Satellite systems, EMCON can be achieved at the radio level, and this may be offered at the IP level. Where EMCON is available at the IP level, ACP 142 can take advantage of this. As this is not possible with Annex A, STANAG 4406 Annex E must be used if EMCON is needed.

Analysis & Conclusions

This paper has looked at the performance advantages of using STANAG 4406 Annex E over a satellite. In all cases it gives a performance benefit, and if EMCON is needed it is the only option. For configurations that can take advantage of multicast, the performance gains will be significant. Performance gains depend significantly on the traffic pattern: there is substantial benefit for small messages, and for large messages where effective compression can be achieved.