Summary

In a previous whitepaper [Distributed Directory in support of large scale PKI] we looked at the directory requirements for supporting a large distributed PKI, setting out the reasons for building such a PKI and the requirements on the associated distributed directory. That white paper takes a "top level" view, focusing particularly on the relationship between departments and on what needs to be supported in between. Departments are modeled as having a single directory server, which is quite simplistic.

This paper takes a departmental view, and looks at what a department will realistically need to do in order to provide a directory service that will integrate into the complete system. While this whitepaper is written in a generic manner, the models set out are written in light of the requirements of US Government departments that need to conform to Homeland Security Presidential Directive 12 (HSPD-12) and that will interconnect using the Federal Bridge as part of the US Federal PKI.


What Each Department Needs To Provide: Top Level View

In the example structure shown in the companion white paper, each department operates a DSA (Directory System Agent), which works with other departments, and in particular with the DSA operated centrally by the government in support of the central Certification Authorities (CAs) – the Federal CAs DSA.

From the central perspective, in order to make things work, each department needs to provide a directory service that:

  1. Enables the department to participate in the complete system,
  2. Provides information externally about the department, to be accessed from other departments.

The information that needs to be provided through this directory service is straightforward.

  1. Information for the departmental CAs, in particular Certificate Revocation Lists (CRLs) and Authority Revocation Lists (ARLs), which will be needed by other departments in support of PKI based verification.
  2. Certificates for each user in the department, so that these can be retrieved from other departments.

Although functionally straightforward from an external viewpoint, it will generally be a significant internal problem to organize and provide this data in the manner needed externally. This paper considers what is needed to achieve this.
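
As an illustration of the external view this implies, the following sketch shows how a relying party in another department might retrieve a user's certificate and a departmental CA's revocation lists over LDAP. It is written with the Python ldap3 library; the host name, base DNs and entry names are hypothetical and used purely for illustration.

    from ldap3 import Server, Connection, SUBTREE

    # Hypothetical external DSA and DIT names, for illustration only.
    server = Server('dsa.department.example.gov')
    conn = Connection(server, auto_bind=True)   # anonymous read access assumed

    # Retrieve a user's certificate, e.g. for signature verification.
    conn.search('ou=People,o=Department,c=US', '(cn=Jane Smith)',
                search_scope=SUBTREE, attributes=['userCertificate;binary'])
    user_entry = conn.entries[0] if conn.entries else None

    # Retrieve the departmental CA's CRL and ARL for revocation checking.
    conn.search('ou=CAs,o=Department,c=US', '(cn=Department Root CA)',
                search_scope=SUBTREE,
                attributes=['certificateRevocationList;binary',
                            'authorityRevocationList;binary'])
    ca_entry = conn.entries[0] if conn.entries else None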

Benefits and Goals

The starting position of this paper is that there is a strong external requirement on the department to provide a service that will work across the whole government. A clear design goal of the discussions here is to meet these external requirements.

In order to meet this external goal, the department will very likely need to bring together data in a coherent form that will also be useful internally, because at present such data may not be available online, or may only be available in an ad hoc and partial manner. Building the external service will therefore often lead to significant internal benefits for the department.

Having this security information for the whole department available in one place may be valuable for internal services.

Although the external service only requires certificate information to be associated with each user, there may be benefits in managing additional information such as telephone numbers and email addresses. This additional information might be made available as an external directory service, or may be restricted to internal use. This white paper considers how this can be done.

Three Options to Provide the Service

The previous white paper set out what is needed to be provided by a department from an external perspective. This white paper presents three broad options to achieve this. For most departments, only the third option will be viable, and most of the remainder of this paper explores this third option.

Option 1: X.500 Directory

The previous white paper showed how provision of the distributed directory requires a number of capabilities for working between directory servers. This means that such a system can be provided either by using proprietary, single vendor approaches, or by using X.500. This paper is written on the assumption that the central approach is X.500 oriented, as with the US Federal Bridge.

The most elegant way for a department to operate as part of such a system is for the department to operate its directory services using X.500. In practice, this architecturally pure approach is unlikely to be adopted.

Option 2: LDAP Directory

A second approach is for the department to operate an LDAP directory. There are several issues here:

  1. Although it can work, it is not as good as using X.500, for reasons which are made clear in the other white paper.
  2. Although departments will often operate LDAP directories, the necessary data for the external service will reside in many directories. This means that although relevant data may be available in an LDAP directory, there is no single LDAP directory that can be used to provide the external service.
  3. The LDAP directories will contain information that the department will not wish to expose externally. This could include information on a user that should not be published, or users that should not appear externally.

Use of a departmental LDAP directory for internal and external services is unlikely to be a viable option.

Option 3: Distributed Approach

This section outlines a third option, which we see as the realistic approach in most situations.

Overall Architecture

A practical solution to overcome the difficulties of the first two approaches is to use a hybrid (two tier) approach, described below.

The basic model is that the department operates an X.500 External DSA, such as Isode's M-Vault X.500. This provides the optimal external service for participating in the government wide system, and gives a clean interface that does not expose internal complexity or inappropriate internal data.

The model here is simple, and meets external needs, as described in the previous white paper, using standard external protocols. The key to making this work is the interaction between Internal and External DSAs. There are a number of choices, dependent on details of internal directories.

Moving Data from Internal to External

There are various options for getting data from the internal DSAs which hold the master data to the External DSA. These are set out below, organized by scenario.

Scenario 1: Internal data in correct structure, and all internal data to be published externally.

In this scenario, the internal data can either be dynamically accessed by the External DSA or copied to the External DSA.

Dynamic access can be supported by the external DSA "chaining" the query to the internal DSA. There are two options:

1: Use X.500 DSP (Directory System Protocol). This is a good architecture, particularly where security is a concern, but requires the internal DSA to support X.500.

2: LDAP Chaining. This requires choice of an external DSA, such as M-Vault X.500, that supports LDAP chaining.

Direct replication of data can be used, as effectively data is just being copied. This may be preferable to chaining for performance and robustness reasons. Two approaches to direct replication are:

1: Use X.500 DISP (Directory Information Shadowing Protocol). Use of this open standard replication is a good approach, but depends on the internal DSA supporting X.500.

2: Use ad hoc LDAP replication. There are a number of options here, which are straightforward to automate. The simplest is to dump data in LDIF (LDAP Data Interchange Format) from the Internal DSA and load it into the External DSA.
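
A minimal sketch of such an ad hoc copy is shown below, using the Python ldap3 library. It reads the user subtree from the Internal DSA and reloads it into the External DSA, which is equivalent in effect to an LDIF dump and reload; the host names, base DN and credentials are hypothetical, and a production script would also need error handling and incremental updates.

    from ldap3 import Server, Connection, SUBTREE, ALL_ATTRIBUTES

    # Hypothetical DSAs and credentials, for illustration only.
    internal = Connection(Server('internal-dsa.department.example.gov'),
                          user='cn=sync,o=Department', password='secret',
                          auto_bind=True)
    external = Connection(Server('external-dsa.department.example.gov'),
                          user='cn=admin,o=Department', password='secret',
                          auto_bind=True)

    # Dump the user subtree from the Internal DSA.
    internal.search('ou=People,o=Department', '(objectClass=*)',
                    search_scope=SUBTREE, attributes=ALL_ATTRIBUTES)

    # Reload each entry into the External DSA, copying attributes unchanged.
    for entry in internal.entries:
        external.add(entry.entry_dn, attributes=entry.entry_attributes_as_dict)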

Scenario 2: Internal data in correct structure, but not all internal data to be published externally.

Dynamic access of internal data by the External DSA is not applicable (n/a) in this scenario; the internal data must instead be copied to the External DSA.

In this situation, some data transformation is needed, but it is a simple filtering. Two options to achieve this are:

1: Filtered X.500 DISP. This gives the benefits of DISP noted above. Where the requirement is to remove internal information from entries, this is supported as a standard DISP function (attribute filtering). If there is a need to remove selected entries, Isode’s M-Vault X.500 also provides entry filtering. This is described in the Isode white paper [Replication for Tactical Directory].

2: Use a directory synchronization product, as discussed below.
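
Whichever option is chosen, the effect of the attribute filtering can be pictured with the sketch below, which uses the Python ldap3 library to strip a set of internal-only attributes as part of an ad hoc copy; the attribute names and DSA details are hypothetical.

    from ldap3 import Server, Connection, SUBTREE, ALL_ATTRIBUTES

    # Attributes that must not be published externally (hypothetical list).
    INTERNAL_ONLY = {'employeeNumber', 'roomNumber', 'internalProjectCode'}

    internal = Connection(Server('internal-dsa.department.example.gov'),
                          user='cn=sync,o=Department', password='secret',
                          auto_bind=True)
    external = Connection(Server('external-dsa.department.example.gov'),
                          user='cn=admin,o=Department', password='secret',
                          auto_bind=True)

    internal.search('ou=People,o=Department', '(objectClass=*)',
                    search_scope=SUBTREE, attributes=ALL_ATTRIBUTES)

    for entry in internal.entries:
        # Attribute filtering: drop internal-only attributes from each entry.
        attrs = {name: values
                 for name, values in entry.entry_attributes_as_dict.items()
                 if name not in INTERNAL_ONLY}
        external.add(entry.entry_dn, attributes=attrs)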

Scenario 3: Internal data not in correct structure.

Dynamic access of internal data by the External DSA is not applicable (n/a) in this scenario; the internal data must instead be transformed and copied to the External DSA.

This situation may arise for a number of reasons. A common one will be where data is held in Active Directory, and naming structures are different. This special situation is discussed in more detail in the Isode whitepaper [Using Active Directory as part of a distributed directory].

The solution here is to use a directory synchronization product (meta directory) to transform data from internal to external representation. There are a number of products on the market to do this.
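
The following sketch illustrates the kind of transformation such a product performs, using the Python ldap3 library with hypothetical naming contexts: entries are read from an Active Directory style internal structure, renamed into the external DIT structure, and only the attributes needed externally are copied. A real deployment would normally use a dedicated synchronization product rather than a script of this kind.

    from ldap3 import Server, Connection, SUBTREE

    # Hypothetical internal (Active Directory style) and external naming contexts.
    INTERNAL_BASE = 'CN=Users,DC=dept,DC=example,DC=gov'
    EXTERNAL_BASE = 'ou=People,o=Department,c=US'
    PUBLISHED_ATTRS = ['cn', 'sn', 'userCertificate;binary']

    internal = Connection(Server('ad.dept.example.gov'),
                          user='sync@dept.example.gov', password='secret',
                          auto_bind=True)
    external = Connection(Server('external-dsa.department.example.gov'),
                          user='cn=admin,o=Department', password='secret',
                          auto_bind=True)

    internal.search(INTERNAL_BASE, '(objectClass=person)',
                    search_scope=SUBTREE, attributes=PUBLISHED_ATTRS)

    for entry in internal.entries:
        attrs = {k: v for k, v in entry.entry_attributes_as_dict.items() if v}
        attrs['objectClass'] = ['person', 'organizationalPerson', 'inetOrgPerson']
        # Rename: build the external DN from the cn value under the external base
        # (special characters in the cn would need escaping in practice).
        new_dn = 'cn=%s,%s' % (entry.cn.value, EXTERNAL_BASE)
        external.add(new_dn, attributes=attrs)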

 

It is quite likely that a department will use a mixture of these strategies for handling different data in the department.

Handling Data Not in an Internal Directory

Some special consideration arises where data needed for the external directory (external service) is not held in an internal directory. The simplest situation is where data is simply not systematically managed internally in a directory. In this case, there is a simple solution: the data can be managed and published directly in the External DSA.

A more complex scenario arises when the object names are managed in an internal directory, but the user’s certificate (and possibly other data desired to be in the external directory) is not in the internal directory. This could arise because a certificate is not issued for internal purposes, or because the certificate issued for internal purposes is different to the one needed externally. A common situation where this may arise is where Active Directory is used internally and there is name transformation between internal and external directory. In this situation, a different certificate will be needed for the external DSA. This is described in more detail in the Isode white paper [Using Active Directory as part of a distributed directory].

Where this situation arises, it makes sense to use directory synchronization to derive names in the External DSA from an Internal DSA. The certificates for external use should be published directly in the External DSA, as this is where they are needed.
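
A sketch of this split, with the Python ldap3 library and hypothetical names: the entry itself has already been created by directory synchronization from the Internal DSA, and the externally issued certificate is then written directly into the External DSA.

    from ldap3 import Server, Connection, MODIFY_REPLACE

    external = Connection(Server('external-dsa.department.example.gov'),
                          user='cn=admin,o=Department', password='secret',
                          auto_bind=True)

    # Entry created by directory synchronization; only its name came from
    # the Internal DSA.
    dn = 'cn=Jane Smith,ou=People,o=Department,c=US'

    # Publish the externally issued certificate directly into the External DSA.
    with open('jane-smith-external.der', 'rb') as f:
        cert = f.read()
    external.modify(dn, {'userCertificate;binary': [(MODIFY_REPLACE, [cert])]})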

Special Considerations for CRLs and Other CA Data

The relationship between Internal DSAs and External DSAs described above is appropriate for data on users. The External DSA also needs to hold, or have access to, data relating to the departmental CAs, and in particular their CRLs. This data is particularly important for the service being provided. It is recommended that this data be held in the External DSA and not accessed by chaining. This can be achieved by the CAs publishing directly to the External DSA, or by this data being replicated from an Internal DSA.
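
For example, a departmental CA could push each newly issued CRL and ARL directly into the External DSA along the lines of the sketch below (Python ldap3; the entry names, credentials and file paths are hypothetical).

    from ldap3 import Server, Connection, MODIFY_REPLACE

    external = Connection(Server('external-dsa.department.example.gov'),
                          user='cn=ca-publisher,o=Department', password='secret',
                          auto_bind=True)

    ca_dn = 'cn=Department Root CA,ou=CAs,o=Department,c=US'

    # Replace the published CRL and ARL with the newly issued ones.
    with open('department-root.crl', 'rb') as f:
        crl = f.read()
    with open('department-root.arl', 'rb') as f:
        arl = f.read()
    external.modify(ca_dn, {
        'certificateRevocationList;binary': [(MODIFY_REPLACE, [crl])],
        'authorityRevocationList;binary': [(MODIFY_REPLACE, [arl])],
    })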

Internal Service Provision

It was noted in Section 3 that the information being assembled in the External DSA may be useful and desirable for services internal to the department. The simplest scenario is that this data is used directly. This can be achieved very simply by replicating data in the External DSA to other copies that can be used in different locations. Where chaining to Internal DSAs is used, this can also be done, with the replicated DSAs holding the external data.

A more complex scenario arises where internal provision can benefit from holding additional data to that which is in the External DSA. The following approach is recommended to achieve this:

In this model, the collation of data for this new internal service uses the model described above for the External DSA. The key to this approach is that bringing together data from the various Internal DSAs is done in a manner that achieves the extended goals for the integrated internal service. This will result in a Central Internal DSA holding the information for the internal service. It is likely that this information will be replicated for local availability, performance and reliability reasons.

Data can then be transferred to the External DSA using filtered X.500 DISP, as described in the Isode white paper [Replication for Tactical Directory]. This uses two techniques:

  1. Where information associated with specific entries needs to be removed, the attributes can be removed using standard X.500 attribute filtering.
  2. Where only selected entries are to be published externally, this is managed by use of a 'Publish To' attribute. This attribute will give delegated internal control as to which entries are made available externally.
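
The entry selection technique can be pictured with the following sketch (Python ldap3), which copies to the External DSA only those entries carrying a hypothetical 'publishTo' attribute marking them for external publication. In the M-Vault case this selection is performed by filtered DISP itself rather than by a script; the sketch is only an illustration of the concept.

    from ldap3 import Server, Connection, SUBTREE, ALL_ATTRIBUTES

    internal = Connection(Server('central-internal-dsa.department.example.gov'),
                          user='cn=sync,o=Department', password='secret',
                          auto_bind=True)
    external = Connection(Server('external-dsa.department.example.gov'),
                          user='cn=admin,o=Department', password='secret',
                          auto_bind=True)

    # Select only the entries marked for external publication
    # ('publishTo' is a hypothetical attribute name).
    internal.search('ou=People,o=Department', '(publishTo=external)',
                    search_scope=SUBTREE, attributes=ALL_ATTRIBUTES)

    for entry in internal.entries:
        attrs = {k: v for k, v in entry.entry_attributes_as_dict.items()
                 if k != 'publishTo'}   # do not publish the control attribute
        external.add(entry.entry_dn, attributes=attrs)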

Conclusions

This white paper has shown how a government department can make use of an External DSA to participate in a government wide directory in support of a PKI system, and has set out various approaches to dealing with different configuration situations and different types of data.