Overview

1. About DDF

Distributed Data Framework (DDF) is a free and open-source common data layer that abstracts services and business logic from the underlying data structures to enable rapid integration of new data sources.

Licensed under LGPL, DDF is an interoperability platform that provides secure and scalable discovery and retrieval from a wide array of disparate sources.

DDF is:

  • a flexible and modular integration framework.

  • built to "unzip and run" while having the ability to scale to large enterprise systems.

  • primarily focused on data integration, enabling clients to insert, query, and transform information from disparate data sources via the DDF Catalog.

1.1. Applications

DDF is composed of several modular applications that can be installed or uninstalled as needed.

Admin Application

Enhances administrative capabilities when installing and managing DDF. It contains various services and interfaces that allow administrators more control over their systems.

Catalog Application

Provides a framework for storing, searching, processing, and transforming information. Clients typically perform local and/or federated query, create, read, update, and delete (QCRUD) operations against the Catalog. At the core of the Catalog functionality is the Catalog Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.

Platform Application

The core application of the distribution. The Platform application provides the fundamental building blocks that the distribution needs to run.

Security Application

Provides authentication, authorization, and auditing services for the DDF. It is both a framework that developers and integrators can extend and a reference implementation that meets security requirements.

Solr Catalog Application

Includes the Solr Catalog Provider, an implementation of the Catalog Provider using Apache Solr as a data store.

Spatial Application

Provides OGC services, such as CSW, WCS, WFS, and KML.

Search UI

Allows a user to search for records in the local Catalog (provider) and federated sources. Results of the search are returned and displayed on a globe or map, providing a visual representation of where the records were found.

2. Documentation Guide

The DDF documentation is organized by audience.

Core Concepts

This introduction section is intended to give a high level overview of the concepts and capabilities of DDF.

Administrators

Managing | Administrators will be installing, maintaining, and supporting existing applications. Use this section to prepare, install, configure, run, and monitor a DDF.

Users

Using | Users interact with the system to search data stores. Use this section to navigate the various user interfaces available in DDF.

Integrators

Integrating | Integrators will use the existing applications to support their external frameworks. This section will provide details for finding, accessing and using the components of DDF.

Developers

Developing | Developers will build or extend the functionality of the applications. 

2.1. Documentation Conventions

The following conventions are used within this documentation:

2.1.1. Customizable Values

Many values used in descriptions are customizable and should be changed for specific use cases. These values are denoted by < >, and by [[ ]] when within XML syntax. When using a real value, the placeholder characters should be omitted.

2.1.2. Code Values

Java objects, lines of code, or file properties are denoted with the Monospace font style. Example: ddf.catalog.CatalogFramework

Some hyperlinks (e.g., /admin) within the documentation assume a locally running installation of DDF. Change the hostname when accessing a remote host.

2.2. Support

Questions about DDF should be posted to the ddf-users forum or ddf-developers forum, where they will be responded to quickly by a member of the DDF team.

2.2.1. Documentation Updates

The most current DDF documentation is available at DDF Documentation.

Core Concepts

This introduction section is intended to give a high level overview of the concepts and capabilities of DDF.

DDF provides the capability to search the Catalog for metadata. There are a number of different types of searches that can be performed on the Catalog, and these searches are accessed using one of several interfaces. This section provides a very high level overview of introductory concepts of searching with DDF. These concepts are expanded upon in later sections.

3.1. Search Types

There are four basic types of metadata search. Additionally, any of the types can be combined to create a compound search.

A text search is used when searching for textual information. It searches all textual metadata fields by default, although it is possible to refine searches to a text search on a single attribute. It is similar to a Google search over the metadata contained in the Catalog. Text searches may use wildcards, logical operators, and approximate matches.

A spatial search is used for Area of Interest (AOI) searches. Polygon and point radius searches are supported.

A temporal search finds information from a specific time range. Two types of temporal searches are supported: relative and absolute. Relative searches contain an offset from the current time, while absolute searches contain a start and an end timestamp. Temporal searches can use the created or modified date attributes.
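
For instance, the boundary timestamps of a relative search covering the last 24 hours correspond to an offset computed from the current time; this sketch assumes GNU date on a *NIX system:

```shell
# Compute a "last 24 hours" relative window as two absolute timestamps (GNU date assumed).
START=$(date -u -d '24 hours ago' +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
echo "Searching from $START to $END"
```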

A datatype search is used to search for metadata based on the datatype of the resource. Wildcards (*) can be used in both the datatype and version fields. Metadata that matches any of the datatypes (and associated versions if specified) will be returned. If a version is not specified, then all metadata records for the specified datatype(s) regardless of version will be returned.
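
As a sketch of how a compound search might be issued, the example below queries a locally running DDF through its OpenSearch endpoint, combining a wildcard text term with an absolute temporal range. The URL and parameter names are assumptions based on common OpenSearch conventions; verify them against the endpoint documentation for your installation:

```shell
# Hypothetical compound (text + temporal) query against a local DDF OpenSearch endpoint.
# -k skips certificate verification; acceptable only against the default demo certificates.
curl -k "https://localhost:8993/services/catalog/query?q=ship*&dtstart=2016-01-01T00:00:00Z&dtend=2016-12-31T23:59:59Z"
```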

4. Metadata

In DDF, resources are the data products, files, reports, or documents of interest to users of the system.

Metadata is information about those resources, organized into a schema to make search possible. The Catalog stores this metadata and allows access to it. Metacards are single instances of metadata, representing a single resource, in the Catalog. Metacards follow one of several schemas to ensure reliable, accurate, and complete metadata. Essentially, Metacards function as containers of metadata.

5. Ingest

Ingest is the process of bringing data products, metadata, or both into the catalog to enable search, sharing, and discovery. Ingested files are transformed into a neutral format that can be searched against as well as migrated to other formats and systems. See Ingesting Data for the various methods of ingesting data.
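
For example, one common ingest method is an HTTP POST of a file to a running DDF. The endpoint URL below assumes a default local installation and is illustrative only; see Ingesting Data for the supported methods:

```shell
# Sketch: ingest a local XML metacard via HTTP POST (endpoint URL is an assumption).
# -k skips certificate verification; acceptable only against the default demo certificates.
curl -k -X POST -H "Content-Type: application/xml" \
     -d @metacard.xml \
     "https://localhost:8993/services/catalog"
```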

5.1. Populating Metacards During Ingest

Upon ingest, a transformer will read the metadata from the ingested file and populate the fields of a metacard. Exactly how this is accomplished depends on the origin of the data, but most fields (except id) are imported directly.

6. Content

The Catalog Framework can interface with Storage Providers to store resources in specific types of storage, e.g., a file system, relational database, or XML database. A file system implementation is provided by default.

Storage providers act as a proxy between the Catalog Framework and the mechanism storing the content. Storage providers expose the storage mechanism to the Catalog Framework. Storage plugins provide pluggable functionality that can be executed either immediately before or immediately after content has been stored or updated.

Storage providers provide the capability to the Catalog Framework to create, read, update, and delete content in the content repository.

7. Catalog Framework

The Catalog Framework wires all the Catalog components together.

It is responsible for routing Catalog requests and responses to the appropriate source, destination, federated system, etc. 

Endpoints send Catalog requests to the Catalog Framework. The Catalog Framework then invokes Catalog Plugins, Transformers, and Resource Components as needed before sending requests to the intended destination, such as one or more Sources.

The Catalog Framework decouples clients from service implementations and provides integration points for Catalog Plugins and convenience methods for Endpoint developers.

8. Federation

Federation is the ability of the DDF to query other data sources, including other DDFs. By default, the DDF is able to federate using OpenSearch and CSW protocols. The minimum configuration necessary to configure those federations is to supply a query address.

Federation enables constructing dynamic networks of data sources that can be queried individually, or aggregated into specific configuration to enable a wider range of accessibility for data and data products.

Federation provides the capability to extend the DDF enterprise to include Remote Sources, which may include other instances of DDF. The Catalog handles all aspects of federated queries as they are sent to the Catalog Provider and Remote Sources, as they are processed, and as the query results are returned.

Queries can be scoped to include only the local Catalog Provider (and any Connected Sources), only specific Federated Sources, or the entire enterprise (which includes all local and Remote Sources). If the query is to be federated, the Catalog Framework passes the query to a Federation Strategy, which is responsible for querying each federated source that is specified.

The Catalog Framework is also responsible for receiving the query results from each federated source and returning them to the client in the order specified by the particular federation strategy used. After the federation strategy handles the results, the Catalog returns them to the client through the Endpoint.

Query results returned from a federated query are a list of metacards. The source ID in each metacard identifies the Source from which the metacard originated.
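
As an illustration of the minimum configuration described above, registering a remote DDF as a federated source amounts to little more than supplying its query address. The file and property names below are illustrative stand-ins, not real configuration keys; in practice sources are configured through the Admin Console:

```shell
# Illustrative only: property names are stand-ins; use the Admin Console in practice.
cat > remote-source.config <<'EOF'
shortname=Remote-DDF
endpointUrl=https://remote-host:8993/services/catalog/query
EOF
```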

9. Events and Subscriptions

DDF can be configured to receive metacards whenever metadata is created, updated, or deleted in any federated sources. Creations, updates, and deletions are collectively called Events, and the process of registering to receive them is called Subscription.

The behavior of these subscriptions is consistent, but the method of configuring them is specific to the Endpoint used.

10. Registry

The Registry Application serves as an index of registry nodes and their information, including service bindings, configurations and supplemental details.

Each registry has the capability to serve as an index of information about a network of registries which, in turn, can be used to connect across a network of DDFs and other data sources. Registries communicate with each other through the CSW endpoint and each registry node is converted into a registry metacard to be stored in the catalog. When a registry is subscribed to or published from, it sends the details of one or more nodes to another registry.

10.1. Identity Node

The Registry is initially comprised of a single registry node, referred to as the identity, which represents the registry’s primary configuration.

10.2. Subscription

Subscribing to a registry is the act of retrieving its information, specifically its identity information and any other registries it knows about. By default, subscriptions are configured to check for updates every 30 seconds.

10.3. Publication

Publishing is the act of sending a registry’s information to another registry. Once publication has occurred, any updates to the local registry will be pushed out to the registries that have been published to.

11. Endpoints

Endpoints expose the Catalog Framework to clients using protocols and formats that the clients understand.

Endpoint interface formats encompass a variety of protocols, including (but not limited to):

  • SOAP Web services

  • RESTful services

  • JMS

  • JSON

  • OpenSearch

The endpoint may transform a client request into a compatible Catalog format and then transform the response into a compatible client format. Endpoints may use Transformers to perform these transformations. This allows an endpoint to interact with Source(s) that have different interfaces. For example, an OpenSearch Endpoint can send a query to the Catalog Framework, which could then query a federated source that has no OpenSearch interface.

Endpoints are meant to be the only client-accessible components in the Catalog.


Managing

Administrators will be installing, maintaining, and supporting existing applications. Use this section to prepare, install, configure, run, and monitor a DDF.

12. Installing

Set up a complete, secure instance of DDF. For simplified steps suited to a testing, development, or demonstration installation, see the DDF Quick Start.

Important

Although DDF can be installed by any user, it is recommended for security reasons to have a non-root user execute the DDF installation.

Note

Hardening guidance assumes a Standard installation.

Adding other components does not have any security/hardening implications.

12.1. Installation Prerequisites

These are the system/environment requirements to configure prior to an installation.

Hardware Requirements
  • At least 4096MB of memory for DDF.

Java Requirements
  • Supported platforms are *NIX (Unix/Linux/OSX), Solaris, and Windows.

  • JDK8 must be installed.

  • The JAVA_HOME environment variable must be set to the location where the JDK is installed.

    1. Install/Upgrade to Java 8 J2SE 8 SDK

      1. The recommended version is 8u60 or later.

      2. Java version must contain only number values.

    2. Install/Upgrade to JDK8.

    3. Set the JAVA_HOME environment variable to the location where the JDK is installed.

Setting JAVA_HOME variable
Warning
*NIX

Unlink JAVA_HOME if it is already linked to a previous version of the JRE:

unlink JAVA_HOME

Replace <JAVA_VERSION> with the version and build number installed.

  1. Open a terminal window (*NIX) or command prompt (Windows) with administrator privileges.

  2. Determine Java Installation Directory (This varies between operating system versions).

    Find Java Path in *NIX
    which java
    Find Java Path in Windows

    The path to the JDK can vary between versions of Windows, so manually verify the path under:

    C:\Program Files\Java\jdk<M.m.p_build>
  3. Copy path of Java installation to clipboard. (example: /usr/java/<JAVA_VERSION>)

  4. Set JAVA_HOME by replacing <PATH_TO_JAVA> with the copied path in this command:

    Setting JAVA_HOME on *NIX
    JAVA_HOME=<PATH_TO_JAVA><JAVA_VERSION>
    export JAVA_HOME
    Setting JAVA_HOME on Windows
    set JAVA_HOME=<PATH_TO_JAVA><JAVA_VERSION>
    setx JAVA_HOME "<PATH_TO_JAVA><JAVA_VERSION>" /m
    Adding JAVA_HOME to PATH Environment Variable on Windows
    setx PATH "%PATH%;%JAVA_HOME%\bin" /m
  5. Restart Terminal (shell) or Command Prompt.

    • Verify that the JAVA_HOME was set correctly.

*NIX
echo $JAVA_HOME
Windows
echo %JAVA_HOME%
Note
File Descriptor Limit on Linux
  • For Linux systems, increase the file descriptor limit by editing /etc/sysctl.conf to include:

fs.file-max = 6815744
  • For the change to take effect, a restart is required.

*NIX Restart Command
init 6
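
On most Linux distributions, the new limit can also be applied without a full restart by reloading /etc/sysctl.conf (root privileges required):

```shell
# Apply changes from /etc/sysctl.conf immediately (root required).
sysctl -p
# Confirm the active limit:
sysctl fs.file-max
```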

12.2. Installing With the DDF Distribution Zip

Warning
Check System Time

Prior to installing DDF, ensure the system time is accurate to prevent federation issues.

To install the DDF distribution zip, perform the following:

  1. Download the DDF zip file.

  2. After the prerequisites have been met, change the current directory to the desired install directory, creating a new directory if desired. This will be referred to as <DDF_HOME>.

    Warning
    Windows Pathname Warning

    Do not use spaces in directory or file names of the <DDF_HOME> path. For example, do not install in the default Program Files directory.

    Example: Create a Directory (Windows and *NIX)
    mkdir new_installation
    1. Use a Non-root User on *NIX. (Windows users skip this step)

      It is recommended that the root user create a new install directory that can be owned by a non-root user (e.g., DDF_USER). This can be a new or existing user. This DDF_USER can now be used for the remaining installation instructions.

    2. Create a new group or use an existing group (e.g., DDF_GROUP) (Windows users skip this step)

      Example: Add New Group on *NIX
      groupadd DDF_GROUP
      Example: Switch User on *NIX
      chown DDF_USER:DDF_GROUP new_installation
      
      su - DDF_USER
  3. Change the current directory to the location of the zip file (ddf-2.10.1.zip).

    *NIX (Example assumes DDF has been downloaded to a CD/DVD)
    cd /home/user/cdrom
    Windows (Example assumes DDF has been downloaded to the D drive)
    cd D:\
  4. Copy ddf-2.10.1.zip to <DDF_HOME>.

    *NIX
    cp ddf-2.10.1.zip <DDF_HOME>
    Windows
    copy ddf-2.10.1.zip <DDF_HOME>
  5. Change the current directory to the desired install location.

    *NIX or Windows
    cd <DDF_HOME>
  6. The DDF zip is now located within the <DDF_HOME>. Unzip ddf-2.10.1.zip.

    *NIX
    unzip ddf-2.10.1.zip
    Warning
    Windows Zip Utility Warning

    DO NOT use the Windows zip utility bundled with Windows to unzip DDF. Unzipping DDF using the Windows zip utility will cause unexpected behavior and errors in DDF!

    Use Java to Unzip in Windows
    "%JAVA_HOME%\bin\jar.exe" xf ddf-2.10.1.zip

    The unzipping process may take time to complete. The command prompt will stop responding to input during this time.

12.2.1. Controlling File System Access

Restrict access to sensitive files by ensuring that the only users with access privileges are administrators.

Within the install directory, unzipping creates a directory named ddf-2.10.1. This directory will be referred to in the documentation as <DDF_HOME>.

  1. Do not assume the deployment is from a trusted source; verify its origination.

  2. Check the available storage space on the system to ensure the deployment will not exceed the available space.

  3. Set maximum storage space on the <DDF_HOME>/deploy and <DDF_HOME>/system directories to restrict the amount of space used by deployments.

Setting Directory Permissions
  • Required Step for Security Hardening

DDF relies on the Directory Permissions of the host platform to protect the integrity of the DDF during operation. System administrators MUST perform the following steps prior to deploying bundles added to the DDF.

Important

The system administrator must restrict certain directories to ensure that the application (user) cannot access restricted directories on the system. For example, the DDF_USER should have read-only access to <DDF_HOME>, except for the sub-directories etc, data, and instances.

Setting Directory Permissions on Windows

Set directory permissions on the <DDF_HOME>; all sub-directories except etc, data, and instances; and any directory intended to interact with the DDF to protect from unauthorized access.

  1. Right-click on the <DDF_HOME> directory.

  2. Select Properties → Security → Advanced.

  3. Under Owner, select Change.

  4. Enter Creator Owner into the Enter the Object Name…​ field.

  5. Select Check Names.

  6. Select Apply.

    1. If prompted Do you wish to continue, select Yes.

  7. Remove all Permission Entries for any groups or users with access to <DDF_HOME> other than System, Administrators, and Creator Owner.

    1. Note: If prompted with a message such as: You can’t remove X because this object is inheriting permissions from its parent. when removing entries from the Permission entries table:

      1. Select Disable Inheritance.

      2. Select Convert Inherited Permissions into explicit permissions on this object.

      3. Try removing the entry again.

  8. Select the option for Replace all child object permission entries with inheritable permission entries from this object.

  9. Close the Advanced Security Settings window.

Setting Directory Permissions on *NIX

Set directory permissions to protect the DDF from unauthorized access.

  • Change ownership of <DDF_HOME>

    • chown -R ddf-user <DDF_HOME>

  • Create the instances and solr sub-directories if they do not exist

    • mkdir -p <DDF_HOME>/instances <DDF_HOME>/solr

  • Change group ownership on sub-directories

    • chgrp -R DDF_GROUP <DDF_HOME>/etc <DDF_HOME>/data <DDF_HOME>/instances

  • Change group permissions

    • chmod -R g-w <DDF_HOME>/etc <DDF_HOME>/data <DDF_HOME>/instances

  • Remove permissions for other users

    • chmod -R o-rwx <DDF_HOME>/etc <DDF_HOME>/data <DDF_HOME>/instances
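
The effect of the group and other permission changes above can be previewed safely on a throwaway directory tree before applying them to a real installation; the layout below is a stand-in for <DDF_HOME>:

```shell
# Exercise the documented permission scheme against a scratch tree (stand-in for <DDF_HOME>).
DDF_HOME=$(mktemp -d)
mkdir -p "$DDF_HOME/etc" "$DDF_HOME/data" "$DDF_HOME/instances"
chmod -R g-w   "$DDF_HOME/etc" "$DDF_HOME/data" "$DDF_HOME/instances"  # remove group write
chmod -R o-rwx "$DDF_HOME/etc" "$DDF_HOME/data" "$DDF_HOME/instances"  # remove all access for others
stat -c '%A %n' "$DDF_HOME"/*   # the last three (other) permission bits should read ---
```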

12.3. Initial Startup

Run the DDF using the appropriate script.

*NIX
<DDF_HOME>/bin/ddf
Windows
<DDF_HOME>/bin/ddf.bat

The distribution takes a few moments to load depending on the hardware configuration.

Tip

To run DDF as a service, see Starting as a Service.

12.3.1. Verifying Startup

At this point, DDF should be configured and running with a Solr Catalog Provider. New features (endpoints, services, and sites) can be added as needed.

Verification is achieved by checking that all of the DDF bundles are in an Active state (excluding fragment bundles which remain in a Resolved state).

Note

It may take a few moments for all bundles to start so it may be necessary to wait a few minutes before verifying installation.

Execute the following command to display the status of all the DDF bundles:

View Status
ddf@local>list | grep -i ddf
Warning

Entries in the Resolved state are expected; they are OSGi bundle fragments. Bundle fragments are distinguished from other bundles in the command line console list by a field named Hosts, followed by a bundle number. Bundle fragments remain in the Resolved state and can never move to the Active state.

Example: Bundle Fragment in the Command Line Console
96 | Resolved |  80 | 2.10.0.SNAPSHOT | DDF :: Platform :: PaxWeb :: Jetty Config, Hosts: 90

After successfully completing these steps, the DDF is ready to be configured.

12.3.2. DDF Directory Contents after Installation and Initial Startup

During DDF installation, the major directories and files shown in the table below are created, modified, or replaced in the destination directory.

Table 1. DDF Directory Contents
Directory Name | Description

bin

Scripts to start, stop, and connect to DDF.

data

The working directory of the system – installed bundles and their data

data/log/ddf.log

Log file for DDF, logging all errors, warnings, and (optionally) debug statements. This log rolls over up to 10 times, at a frequency determined by a configurable size setting (default = 1 MB)

data/log/ingest_error.log

Log file for any ingest errors that occur within DDF.

data/log/security.log

Log file that records user interactions with the system for auditing purposes.

data/log/solr.log

Log file for any info coming from Solr.

deploy

Hot-deploy directory – KARs and bundles added to this directory will be hot-deployed (Empty upon DDF installation)

documentation

HTML and PDF copies of DDF documentation.

etc

Directory monitored for addition/modification/deletion of .config configuration files or third party .cfg configuration files.

etc/failed

If there is a problem with any of the .config files, such as bad syntax or missing tokens, they will be moved here.

etc/processed

All successfully processed .config files will be moved here.

etc/templates

Template .config files for use in configuring DDF sources, settings, etc., by copying to the etc directory.
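
For example, activating a template is a copy into the monitored etc directory followed by editing the copied file's values. The template file name below is a placeholder, not a real file name; list <DDF_HOME>/etc/templates for the actual files:

```shell
# Copy a template into the monitored etc directory, then edit its values.
# <TEMPLATE_NAME> is a placeholder; list <DDF_HOME>/etc/templates for actual files.
cp <DDF_HOME>/etc/templates/<TEMPLATE_NAME>.config <DDF_HOME>/etc/<TEMPLATE_NAME>.config
```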

lib

The system’s bootstrap libraries. Includes the ddf-branding.jar file which is used to brand the system console with the DDF logo.

licenses

Licensing information related to the system.

system

Local bundle repository. Contains all of the JARs required by DDF, including third-party JARs.

12.3.3. Completing Installation from the Admin Console

Upon startup, complete installation by navigating to the Admin Console at https://localhost:8993/admin.

Warning
Internet Explorer 10 TLS Warning

Internet Explorer 10 users may need to enable TLS 1.2 to access the Admin Console in the browser.

Enabling TLS 1.2 in IE10
  1. Go to Tools → Internet Options → Advanced → Settings → Security.

  2. Enable TLS 1.2.

  • Default user/password: admin/admin.

On the initial startup of the Admin Console, a series of prompts walks through essential configurations. These configurations can be changed later, if needed.

  • Click Start to begin.

Configure Guest Claim Attributes Page

Setting the attributes on the Configure Guest Claim Attributes page determines the minimum claims attributes (and, therefore, permissions) available to a guest, or not signed-in, user.

To change this later, see Configuring Guest Claim Attributes.

Setup Types

DDF is pre-configured with several installation profiles.

  • Standard Installation: Recommended. Includes these applications by default:

  • Development: Includes all demo, beta, and experimental applications.

  • Custom Installation: Advanced. Click Customize on either profile to add or remove applications to be installed.

    • If apps are preselected when the Select Applications page is reached, they will be uninstalled if unselected.

Warning

The Platform, Admin, and Security applications are required and CANNOT be selected or unselected.

The Security Application appears to be unselected upon first view of the tree structure, but it is in fact automatically installed during a later part of the installation process.

System Configuration Settings
  • System Settings: Set hostname and ports for this installation.

  • Contact Info: Contact information for the point-of-contact or administrator for this installation.

  • Certificates: Add PKI certificates for the Keystore and Truststore for this installation.

    • For a quick (test) installation, if the hostname/ports are not changed from the defaults, DDF includes self-signed certificates to use. Do not use in a working installation.

    • For more advanced testing, on initial startup of the Admin Console append the string ?dev=true to the URL (https://localhost:8993/admin?dev=true) to auto-generate self-signed certificates from a demo Certificate Authority (CA), which enables changing the hostname and port settings. Do not use in a working installation.

      • NOTE: ?dev=true generates certificates on initial installation only.

    • For more information about importing certificates from a Certificate Authority, see Managing Keystores and Certificates.

Finished Page

Upon successful startup, the Finish page will redirect to the Admin Console to begin further configuration, ingest, or federation.

Note

The redirect will only work if the certificates are configured in the browser.
Otherwise the redirect link must be used.

13. Configuring

Note
Configuration Requirements

Because components can easily be installed and uninstalled, it’s important to remember that for proper DDF functionality, at least the Catalog API, one Endpoint, and one Catalog Framework implementation must be active.

DDF can be configured in several ways, depending on need:

Note

While there are multiple ways to configure DDF for use, the recommended method is to use the Admin Console.

Security is an important consideration for DDF, so it is imperative to update configurations away from the defaults to unique, secure settings.

Important
Securing DDF Components

DDF is enabled with an Insecure Defaults Service which will warn users/admins if the system is configured with insecure defaults.

A banner is displayed on the admin console notifying "The system is insecure because default configuration values are in use."

A detailed view is available of the properties to update.

Security concerns will be highlighted in the configuration sections to follow.

Security Hardening

To harden DDF, extra security precautions are required.

Where available, necessary mitigations to harden an installation of DDF are called out in the following configuration steps.

Refer to the Hardening Checklist for a compilation of these mitigations.

13.2. Managing Keystores and Certificates

  • Required Step for Security Hardening

DDF uses certificates in two ways:

  1. Ensuring the privacy and integrity of messages sent or received over a network.

  2. Authenticating an incoming user request.

Managing Keystores

Certificates, and sometimes their associated private keys, are stored in keystore files. DDF includes two default keystore files, the server keystore and the server truststore. The server keystore holds the certificates and private keys that DDF uses to identify itself to other nodes on the network. The truststore holds the certificates of nodes or other entities that DDF needs to trust.

Certificates (and certificates with keys) can be managed in the Admin Console. Navigate to the Security application and select it. After it opens, select the notebook tab labeled "Certificates". This view shows the alias (name) of every certificate in the trust store and the key store. It also displays whether the entry includes a private key ("Is Key") and the encryption scheme (typically "RSA" or "EC").

This view allows administrators to remove certificates from DDF’s key and trust stores. It also allows administrators to import certificates and private keys into the keystores with the "+" button. The import function has two options: import from a file or import over HTTPS. The file option accepts a Java Keystore file or a PKCS12 keystore file. Because keystores can hold many keys, the import dialog asks the administrator to provide the alias of the key to import. Private keys are typically encrypted and the import dialog prompts the administrator to enter the password for the private key. Additionally, keystore files themselves are typically encrypted and the dialog asks for the keystore ("Store") password.

The name and location of the DDF trust and key stores can be changed by editing the system properties files, etc/system.properties. Additionally, the password that DDF uses to decrypt (unlock) the key and trust stores can be changed here.

Important

DDF assumes that the password used to unlock the keystore is the same password that unlocks private keys in the keystore.

The location, file names, passwords, and types of the server key and trust stores can be set in the system.properties file:

  1. Setting the Keystore and Truststore Java Properties

javax.net.ssl.keyStore=etc/keystores/serverKeystore.jks
javax.net.ssl.keyStorePassword=changeit
javax.net.ssl.trustStore=etc/keystores/serverTruststore.jks
javax.net.ssl.trustStorePassword=changeit
javax.net.ssl.keyStoreType=jks
javax.net.ssl.trustStoreType=jks
Default Certificates

DDF comes with a default keystore that contains certificates. This allows the distribution to be unzipped and run immediately.

If the installer was used to install the DDF and a hostname other than "localhost" was given, the user will be prompted to upload new trust/key stores.

If the hostname is localhost, or if the hostname was changed after installation, the default certificates will not allow access to the DDF instance from another machine over HTTPS (now the default for many services). The Demo Certificate Authority will need to be replaced with certificates that use the fully-qualified hostname of the server running the DDF instance.

Demo Certificate Authority (CA)

DDF comes with a populated truststore containing entries for many public certificate authorities, such as Go Daddy and Verisign. It also includes an entry for the DDF Demo Root CA. This entry is a self-signed certificate used for testing. It enables DDF to run immediately after unzipping the distribution. The keys and certificates for the DDF Demo Root CA are included as part of the DDF distribution. This entry must be removed from the truststore before DDF can operate securely.

Creating New Server Keystore Entry with the CertNew Scripts

To create a private key and certificate signed by the Demo Certificate Authority, use the provided scripts. To use the scripts, run them out of the <DDF_HOME>/etc/certs directory.

*NIX Demo CA Script

For *NIX, use the CertNew.sh script.

sh CertNew.sh <FQDN>

Alternatively, a distinguished name can be provided to the script with a comma-delimited string.

sh CertNew.sh -dn "c=US, st=California, o=Yoyodyne, l=San Narciso, cn=<FQDN>"

Windows Demo CA Script

For Windows, use the CertNew.cmd script.

CertNew -cn <FQDN>

Alternatively, a distinguished name can be provided to the script with a comma-delimited string.

CertNew -dn "c=US, st=California, o=Yoyodyne, l=San Narciso, cn=<FQDN>"

The CertNew scripts:

  • Create a new entry in the server keystore.

  • Use the hostname as the fully qualified domain name (FQDN) when creating the certificate.

  • Use the Demo Certificate Authority to sign the certificate so that it will be trusted by the default configuration.

To install a certificate signed by a different Certificate Authority, see Managing Keystores.

Finally, restart the DDF instance. Browse the Admin Console at https://<FQDN>:8993/admin to test changes.

Warning

If the server’s fully qualified domain name is not recognized, the name may need to be added to the network’s DNS server.

Tip

The DDF instance can be tested even if there is no entry for the FQDN in the DNS. First, test if the FQDN is already recognized. Execute this command:

ping <FQDN>

If the command responds with an error message such as unknown host, then modify the system’s hosts file to point the server’s FQDN to the loopback address. For example:

127.0.0.1 <FQDN>

Note

By default, the Catalog Backup Post-Ingest Plugin is NOT enabled. To enable, the Enable Backup Plugin configuration item must be checked in the Backup Post-Ingest Plugin configuration.

Enable Backup Plugin: true

Note
Changing Default Passwords

This step is not required when testing DDF with a default installation, but the default passwords must be changed on a hardened system.

  • The default password in config.ldif for serverKeystore.jks is changeit. This needs to be modified.

    • ds-cfg-key-store-file: ../../keystores/serverKeystore.jks

    • ds-cfg-key-store-type: JKS

    • ds-cfg-key-store-pin: password

    • cn: JKS

  • The default password in config.ldif for serverTruststore.jks is changeit. This needs to be modified.

    • ds-cfg-trust-store-file: ../../keystores/serverTruststore.jks

    • ds-cfg-trust-store-pin: password

    • cn: JKS

Updating Key Store / Trust Store via the Admin Console
  1. Open the Admin Console.

  2. Select the Security application.

  3. Select the Certificates tab.

  4. Add and remove certificates and private keys as necessary.

  5. Restart DDF.

Important

The default trust store and key store files for DDF included in etc/keystores use self-signed certificates. Self-signed certificates should never be used outside of development/testing areas.

Managing Certificate Revocation List (CRL)
  • Required Step for Security Hardening

For hardening purposes, it is recommended to implement a way to verify the CRL at least daily.

A Certificate Revocation List is a collection of formerly-valid certificates that should explicitly not be accepted.

Creating a Certificate Revocation List (CRL)

Create a CRL in which the token issuer’s certificate is valid. The example uses OpenSSL.

$> openssl ca -gencrl -out crl-tokenissuer-valid.pem

Note
Windows and OpenSSL

Windows does not include OpenSSL by default. For Windows platforms, an additional download of OpenSSL or an alternative is required.

Revoke a Certificate and Create a New CRL that Contains the Revoked Certificate
$> openssl ca -revoke tokenissuer.crt

$> openssl ca -gencrl -out crl-tokenissuer-revoked.pem
Viewing a CRL
  1. Use the following command to view the serial numbers of the revoked certificates:

$> openssl crl -inform PEM -text -noout -in crl-tokenissuer-revoked.pem

Enabling Revocation
Note

Enabling CRL revocation or modifying the CRL file will require a restart of DDF to apply updates.

  1. Place the CRL in <DDF_HOME>/etc/keystores.

  2. Add the line org.apache.ws.security.crypto.merlin.x509crl.file=etc/keystores/<CRL_FILENAME> to the following files:

    1. <DDF_HOME>/etc/ws-security/server/encryption.properties

    2. <DDF_HOME>/etc/ws-security/issuer/encryption.properties

    3. <DDF_HOME>/etc/ws-security/server/signature.properties

    4. <DDF_HOME>/etc/ws-security/issuer/signature.properties

  3. Replace <CRL_FILENAME> with the file name of the CRL placed in step 1.
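The edits in step 2 can also be scripted. A minimal sketch, assuming <DDF_HOME> is $HOME/ddf and the CRL file name from step 1 (both placeholders to substitute with real values):

```shell
# Sketch only: append the CRL property to the four ws-security files.
# DDF_HOME and the CRL file name are assumptions; substitute real values.
DDF_HOME="${DDF_HOME:-$HOME/ddf}"
CRL_FILENAME=crl-tokenissuer-valid.pem
# Ensure the directories exist (a no-op on a real installation).
mkdir -p "$DDF_HOME/etc/ws-security/server" "$DDF_HOME/etc/ws-security/issuer"
for f in server/encryption.properties issuer/encryption.properties \
         server/signature.properties  issuer/signature.properties; do
  echo "org.apache.ws.security.crypto.merlin.x509crl.file=etc/keystores/$CRL_FILENAME" \
    >> "$DDF_HOME/etc/ws-security/$f"
done
```

As the Note above states, DDF must be restarted after these files change.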

Adding this property will also enable CRL revocation for any context policy implementing PKI authentication. For example, adding an authentication policy in the Web Context Policy Manager of /search=SAML|PKI will disable basic authentication, require a certificate for the search UI, and allow a SAML SSO session to be created. If a certificate is not in the CRL, it will be allowed through, otherwise it will get a 401 error. If no certificate is provided, the guest handler will grant guest access.

This also enables CRL revocation for the STS endpoint. The STS CRL Interceptor monitors the same encryption.properties file and operates in an identical manner to the PKI Authentication’s CRL handler. Enabling the CRL via the encryption.properties file also enables it for the STS, and likewise requires a restart.

Add Revocation to a Web Context

The PKIHandler implements CRL revocation, so any web context that is configured to use PKI authentication will also use CRL revocation if revocation is enabled.

  1. After enabling revocation (see above), open the Web Context Policy Manager.

  2. Add or modify a Web Context to use PKI authentication. For example, enabling CRL for the Search UI endpoint requires adding an authentication policy of /search=SAML|PKI.

  3. If guest access is required, add GUEST to the policy, e.g., /search=SAML|PKI|GUEST.

With guest access, a user with a revoked certificate will be given a 401 error, but users without a certificate will be able to access the web context as the guest user.

The STS CRL interceptor does not need a web context specified. The CRL interceptor for the STS will become active after specifying the CRL file, or URL for the CRL in the encryption.properties file and restarting DDF.

Note

Disabling or enabling CRL revocation or modifying the CRL file will require a restart of DDF to apply updates. If CRL checking is already enabled, adding a new context via the Web Context Policy Manager will not require a restart.

Adding Revocation to an Endpoint
Note

This section explains how to add CXF’s CRL revocation method to an endpoint and not the CRL revocation method in the PKIHandler.

This guide assumes that the endpoint being created uses CXF and is being started via Blueprint from inside the OSGi container. If other tools are being used, the configuration may differ.

Add the following property to the jaxws endpoint in the endpoint’s blueprint.xml:

<entry key="ws-security.enableRevocation" value="true"/>
Example XML snippet for the jaxws:endpoint with the property:
<jaxws:endpoint id="Test" implementor="#testImpl"
                wsdlLocation="classpath:META-INF/wsdl/TestService.wsdl"
                address="/TestService">

    <jaxws:properties>
        <entry key="ws-security.enableRevocation" value="true"/>
    </jaxws:properties>
</jaxws:endpoint>
Verifying Revocation

A warning similar to the following is displayed in the logs of the source and endpoint, showing the exception encountered during certificate validation:

11:48:00,016 | WARN  | tp2085517656-302 | WSS4JInInterceptor               | ecurity.wss4j.WSS4JInInterceptor  330 | 164 - org.apache.cxf.cxf-rt-ws-security - 2.7.3 |
org.apache.ws.security.WSSecurityException: General security error (Error during certificate path validation: Certificate has been revoked, reason: unspecified)
    at org.apache.ws.security.components.crypto.Merlin.verifyTrust(Merlin.java:838)[161:org.apache.ws.security.wss4j:1.6.9]
    at org.apache.ws.security.validate.SignatureTrustValidator.verifyTrustInCert(SignatureTrustValidator.java:213)[161:org.apache.ws.security.wss4j:1.6.9]

[ ... section removed for space]

Caused by: java.security.cert.CertPathValidatorException: Certificate has been revoked, reason: unspecified
    at sun.security.provider.certpath.PKIXMasterCertPathValidator.validate(PKIXMasterCertPathValidator.java:139)[:1.6.0_33]
    at sun.security.provider.certpath.PKIXCertPathValidator.doValidate(PKIXCertPathValidator.java:330)[:1.6.0_33]
    at sun.security.provider.certpath.PKIXCertPathValidator.engineValidate(PKIXCertPathValidator.java:178)[:1.6.0_33]
    at java.security.cert.CertPathValidator.validate(CertPathValidator.java:250)[:1.6.0_33]
    at org.apache.ws.security.components.crypto.Merlin.verifyTrust(Merlin.java:814)[161:org.apache.ws.security.wss4j:1.6.9]
    ... 45 more
Disallowing Login Without Certificates

DDF can be configured to prevent login without a valid PKI certificate.

  • Navigate to Admin Console

  • Under Security, select → Web Context Policy Manager

  • Add a policy for each context requiring restriction

    • For example: /search=SAML|PKI will disallow login without certificates to the Search UI.

    • The format for the policy should be: /<CONTEXT>=SAML|PKI

  • Click Save

Note

Ensure certificates comply with organizational hardening policies.

13.3. Configuring from the Admin Console

The Admin Console is the centralized location for administering the system. The Admin Console allows an administrator to install and remove selected applications and their dependencies and access configuration pages to configure and tailor system services and properties. The default address for the Admin Console is https://localhost:8993/admin.

Use this brief tutorial or start at Securing Admin Console.

Admin Console Tutorial

This overview covers general uses for the Admin Console.

Managing Applications from Admin Console

The Manage button enables activation/deactivation and adding/removing applications.

Activating / Deactivating Applications

The Deactivate button stops individual applications and any dependent apps. Certain applications are central to overall functionality and cannot be deactivated. These will have the Deactivate button disabled. Disabled apps will be moved to a list at the bottom of the page, with an enable button to reactivate, if desired.

The Add Application button is at the end of the list of currently active applications.

Removing Applications

To remove an application, it must first be deactivated. This enables the Remove Application button.

Upgrading Applications

Each application tile includes an upgrade button to select a new version to install.

System Settings Tab

The configuration and features installed can be viewed and edited from the System tab as well; however, it is recommended that configuration be managed from the applications tab.

Important

In general, applications should be managed via the applications tab. Configuration via this page could result in an unstable system. Proceed with caution!

Managing Federation in the Admin Console

It is recommended to use the Catalog App → Sources tab to configure and manage sites/sources.

Viewing Currently Active Applications from Admin Console

DDF displays all active applications in the Admin Console. This view can be configured according to preference. Either view has an > arrow icon to view more information about the application as currently configured.

Table 2. Admin Console Views
View Description

Tile View

The first view presented is the Tile View, displaying all active applications as individual tiles.

List View

Optionally, active applications can be displayed in a list format by clicking the list view button.

Application Detailed View

Each individual application has a detailed view to view information specific to that application, adjust configurations or enable/disable features. All applications have a standard set of tabs, although some apps may have additional ones with further information.

Table 3. Individual Application Views
Tab Explanation

Configuration

The Configuration tab lists all bundles associated with the application as links to configure any configurable properties of that bundle.

Details

The Details tab gives a description, version, status, and list of other applications that are required by, or rely on, the current application.

Features

The features tab breaks down the individual features of the application that can be installed or uninstalled as configurable features.

Managing Features Using the Admin Console

DDF includes many components, packaged as features, that can be installed and/or uninstalled without restarting the system. Features are collections of OSGi bundles, configuration data, and/or other features.

Note
Transitive Dependencies

Features may have dependencies on other features and will auto-install them as needed.

In the Admin Console, Features are found on the Features tab of each application.

  1. Select the appropriate application.

  2. Select the Features tab.

  3. Uninstalled features are shown with a play arrow under the Actions column.

    1. Select the play arrow for the desired feature.

    2. The Status will change from Uninstalled to Installed.

  4. Installed features are shown with a stop icon under the Actions column.

    1. Select the stop icon for the desired feature.

    2. The Status will change from Installed to Uninstalled.

Adding Feature Repositories

If needed, custom feature repositories can be added to extend DDF functionality.

  1. Select the Manage button in the upper right.

  2. Select the Add an Application tile

  3. Select File Upload to add a new .kar or .jar file.

  4. Select the Maven URL tab and enter the URL of the feature repository.

    1. Select the Add URL button.

  5. Select the Save Changes button.

13.3.1. Securing Admin Console

If you have integrated DDF with your existing security infrastructure, then you may want to limit access to parts of the DDF based on user roles/groups.

Restricting Access to Admin Console
  • Required Step for Security Hardening

Limit access to the Admin Console to those users who need access. To set access restrictions on the Admin Console, consult the organization’s security architecture to identify specific realms, authentication methods, and roles required.

  1. Navigate to the Admin Console.

  2. Select the Security application.

  3. Select the Configuration tab.

  4. Select the Web Context Policy Manager.

    1. A dialogue will pop up that allows you to edit DDF access restrictions.

    2. Once you have configured your realms in your security infrastructure, you can associate them with DDF contexts.

    3. If your infrastructure supports multiple authentication methods, they may be specified on a per-context basis.

    4. Role requirements may be enforced by configuring the required attributes for a given context.

    5. The white listed contexts field allows child contexts to be excluded from the authentication constraints of their parents.

Restricting Feature, App, Service, and Configuration Access
  • Required Step for Security Hardening

Limit access to the individual applications, features, or services to those users who need access. Organizational requirements should dictate which applications are restricted and the extent to which they are restricted.

  1. Navigate to the Admin Console.

  2. Select the Admin application.

  3. Select the Configuration tab.

  4. Select the Admin Configuration Policy.

  5. To add a feature or app permission:

    1. Add a new field to "Feature and App Permissions" in the format of:

      <feature name>/<app name> = "attribute name=attribute value","attribute name2=attribute value2", …​

    2. For example, to restrict access of any user without an admin role to the catalog-app:

      catalog-app = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=admin", …​

  6. To add a configuration permission:

    1. Add a new field to "Configuration Permissions" in the format of:

      configuration id = "attribute name=attribute value","attribute name2=attribute value2", …​

    2. For example, to restrict access of any user without an admin role to the Web Context Policy Manager:

      org.codice.ddf.security.policy.context.impl.PolicyManager="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=admin"

If a permission is specified, any user without the required attributes will be unable to see or modify the feature, app, or configuration.

13.3.2. Configuring for an LDAP Server

Warning

The configurations for Security STS LDAP and Roles Claims Handler and Security STS LDAP Login contain plain text default passwords for the embedded LDAP, which is insecure to use in production.

Use the encryption service, described in Encryption Service, from the Command Console to set passwords for your LDAP server. Then change the LDAP Bind User Password in the configurations to use the encrypted password.

Table 4. STS Ldap Login Configuration
Name Property Type Description Default Value Required

LDAP URL

ldapUrl

String

LDAP or LDAPS server and port

ldaps://${org.codice.ddf.system.hostname}:1636

yes

StartTLS

startTls

Boolean

Determines whether or not to use StartTLS when connecting via the ldap protocol. This setting is ignored if the URL uses ldaps.

false

yes

LDAP Bind User DN

ldapBindUserDn

String

DN of the user to bind with LDAP. This user should have the ability to verify passwords and read attributes for any user.

cn=admin

yes

LDAP Bind User Password

ldapBindUserPass

Password

Password used to bind user with LDAP.

secret

yes

LDAP Username Attribute

userNameAttribute

String

Attribute used to designate the user’s name in LDAP. Usually this is uid, cn, or something similar.

uid

yes

LDAP Base User DN

userBaseDn

String

Full LDAP path to where users can be found.

ou=users\,dc=example\,dc=com

yes

LDAP Base Group DN

groupBaseDn

String

Full LDAP path to where groups can be found.

ou=groups\,dc=example\,dc=com

yes

Configuring STS Claims Handlers

A claim is an additional piece of data about a principal that can be included in a token along with basic token data. A claims manager provides hooks for a developer to plug in claims handlers to ensure that the STS includes the specified claims in the issued token.

Claims handlers convert incoming user credentials into a set of attribute claims that will be populated in the SAML assertion. For example, the LDAPClaimsHandler takes in the user’s credentials and retrieves the user’s attributes from a backend LDAP server. These attributes are then mapped and added to the SAML assertion being created. Integrators and developers can add more claims handlers that can handle other types of external services that store user attributes.

Table 5. Security STS LDAP and Roles Claims Handler
Name Property Type Description Default Value Required

LDAP URL

url

String

LDAP or LDAPS server and port

ldaps://${org.codice.ddf.system.hostname}:1636

true

StartTLS

startTls

Boolean

Determines whether or not to use StartTLS when connecting via the ldap protocol. This setting is ignored if the URL uses ldaps.

false

true

LDAP Bind User DN

ldapBindUserDn

String

DN of the user to bind with LDAP. This user should have the ability to verify passwords and read attributes for any user.

cn=admin

true

LDAP Bind User Password

password

Password

Password used to bind user with LDAP.

secret

true

LDAP Group User Membership Attribute

membershipUserAttribute

String

Attribute used as the membership attribute for the user in the group. Usually this is uid, cn, or something similar.

uid

true

LDAP User Login Attribute

loginUserAttribute

String

Attribute used as the login username. Usually this is uid, cn, or something similar.

uid

true

LDAP Base User DN

userBaseDn

String

Full LDAP path to where users can be found.

ou=users\,dc=example\,dc=com

true

Override User Certificate DN

overrideCertDn

Boolean

When checked, this setting will ignore the DN of a user and instead use the LDAP Base User DN value.

false

true

LDAP Group ObjectClass

objectClass

String

ObjectClass that defines structure for group membership in LDAP. Usually this is groupOfNames or groupOfUniqueNames.

groupOfNames

true

LDAP Membership Attribute

memberNameAttribute

String

Attribute used to designate the user’s name as a member of the group in LDAP. Usually this is member or uniqueMember.

member

true

LDAP Base Group DN

groupBaseDn

String

Full LDAP path to where groups can be found.

ou=groups\,dc=example\,dc=com

true

Attribute Map File

propertyFileLocation

String

Location of the file which contains user attribute maps to use.

<INSTALL_HOME>/etc/ws-security/attributeMap.properties

true

13.3.3. Removing Default Users

  • Required Step for Security Hardening

Once DDF is configured to use an external user store (such as LDAP), remove the users.properties file from the <INSTALL_HOME>/etc directory. Use of a users.properties file should be limited to emergency recovery operations and replaced as soon as possible.

Note
Emergency Use of users.properties file

Typically, DDF does not manage passwords; authenticators are stored in an external identity management solution. However, DDF may be configured via the users.properties file to include an account with a username and password for emergency use.

If a system recovery account is configured in users.properties, ensure:

  • The account is used for as short a time as possible.

  • The default username/password of “admin/admin” is not used.

  • All organizational standards for password complexity still apply.

  • The password is encrypted.
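For illustration only, a hypothetical recovery entry in the karaf-style users.properties format (username = password,role,…); the account name and roles shown here are assumptions, and the plain-text value must be replaced with an encrypted one per organizational policy:

```
# hypothetical emergency recovery account (not a DDF default);
# encrypt the password before use
recovery = <encrypted-password>,group,admin
```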

Note

It is recommended to perform yearly reviews of accounts for compliance with organizational account management requirements.

13.3.4. Hardening Guest User Access to the Search UI

This section explains how to protect the Search UI page from guest users. Depending on how the Search UI page is protected, users might be prompted with a login page to enter their credentials. Only authorized users are then allowed to continue to the Search UI page. By default, the Search UI allows guest access as part of the karaf security realm. The security settings for the Search UI and all other web contexts can be changed via the Web Context Policy Manager configuration.

These instructions assume that all security components are running on the same physical or virtual machine. For installations where some or all of these components reside on different network locations, adjust accordingly.

  • Make sure that all the default logical names for locations of the security services are defined.

Configuring Guest User for Unauthenticated Metadata Access

Unauthenticated access to a secured DDF system is provided by the Guest user. Guest authentication must be enabled to allow guest users. Once the guest user is configured, redaction and filtering of metadata is done for the guest user the same way it is done for normal users.

Enabling Guest Authentication

To enable guest authentication for a context, change the Authentication Type for that context to Guest.

  1. Navigate to the Admin Console.

  2. Select the Security application.

  3. Select the Configuration tab.

  4. Select Web Context Policy Manager.

  5. Select the desired context (/, /search, /admin, etc.).

  6. Add Guest to the Authentication Type list.

    1. Separate entries with a | symbol (e.g., /=SAML|Guest).

Configuring Guest Interceptor
  • Required Step for Security Hardening

If a legacy client requires the use of the secured SOAP endpoints, the guest interceptor should be configured. Otherwise, the guest interceptor and public endpoints should be uninstalled for a hardened system.

Configuring Guest Claim Attributes

A guest user’s attributes define the most permissive set of claims for an unauthenticated user.

A guest user’s claim attributes are stored in configuration, not in the LDAP as normal authenticated users' attributes are.

  1. Navigate to the Admin Console.

  2. Select the Security application.

  3. Select the Configuration tab.

  4. Select the Security Guest Claims Handler.

  5. Add any additional attributes desired for the guest user.

  6. Save changes.

13.3.5. Configuring HTTP Port from Admin Console

Important

Do not use the Admin Console to change the HTTP port. While the Admin Console’s Pax Web Runtime offers this configuration option, it has proven to be unreliable and may crash the system. Use the Command Console instead.

13.3.6. Configuring HTTP to HTTPS Proxy From the Admin Console

The platform-http-proxy feature proxies HTTPS to HTTP for clients that cannot use HTTPS, avoiding the need to enable HTTP for the entire container via the etc/org.ops4j.pax.web.cfg file.

  1. Click the Platform application tile.

  2. Choose the Features tab.

  3. Select platform-http-proxy.

  4. Click the Play button to the right of the word “Uninstalled”.

Note
Configuring the proxy:

The hostname should be set by default. Only configure the proxy if this is not working.

  1. Select Configuration tab.

  2. Select HTTP to HTTPS Proxy Settings

    1. Enter the Hostname to use for HTTPS connection in the proxy.

  3. Click Save changes.

13.3.7. Configuring the Web Context Policy Manager

The Web Context Policy Manager defines all security policies for REST endpoints within DDF. It defines:

  • the realms a context should authenticate against.

  • the type of authentication that a context requires.

  • any user attributes required for authorization.

See Web Context Policy Manager Configurations for detailed descriptions of all fields.

Context Realms

The karaf realm is the only realm available by default and it authenticates against the users.properties file. As JAAS authentication realms are added to the STS, more realms become available to authenticate against.

For example, installing the security-sts-ldaplogin feature adds an ldap realm. Contexts can then be pointed to the ldap realm for authentication and STS will be instructed to authenticate them against ldap.

Authentication Types

As you add REST endpoints, you may need to add different types of authentication through the Web Context Policy Manager.

Any web context that allows or requires specific authentication types should be added here with the following format:

/<CONTEXT>=<AUTH_TYPE>|<AUTH_TYPE>|...
Table 6. Default Types of Authentication
Authentication Type Description

saml

Activates single-sign on (SSO) across all REST endpoints that use SAML.

basic

Activates basic authentication.

PKI

Activates public key infrastructure authentication.

IdP

Activates SAML Web SSO authentication support. Additional configuration is necessary.

CAS

Enables SSO through a Central Authentication Server.

guest

Provides guest access.

Required Attributes

The fields for required attributes allow configuring certain contexts to be accessible only to users with pre-defined attributes. For example, the default required attribute for the /admin context is role=system-admin, limiting access to the Admin Console to system administrators.

White Listed Contexts

White listed contexts are trusted contexts which will bypass security. Any sub-contexts of a white listed context will be white listed as well, unless they are specifically assigned a policy.
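Putting the format and the concepts above together, context policy entries in the Web Context Policy Manager might look like the following (these contexts and authentication-type combinations are illustrative, not defaults):

```
/=SAML|GUEST
/admin=SAML|basic
/search=SAML|PKI|GUEST
```

Here the root context permits guest access, the Admin Console requires SAML or basic authentication, and the Search UI accepts SAML or PKI while still allowing guests.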

Limiting Access to the STS
  • Required Step for Security Hardening

Be sure to limit the hosts that are allowed to connect to the STS:

  • Open the <DDF_HOME>/etc/system.properties file.

  • Edit the line ws-security.subject.cert.constraints = .*.

    • Remove the .* and replace with a comma-delimited list of desired hosts (<MY_HOST>):

      • ws-security.subject.cert.constraints = <MY_HOST>,<OTHER_HOST>
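The edit above can also be made from the command line. A hedged sketch with sed, assuming <DDF_HOME> is $HOME/ddf and ddf.example.org is the single allowed host (both hypothetical; the seeding step only exists so the sketch is self-contained, since a real install already has system.properties):

```shell
# Replace the wildcard STS constraint with an explicit host list.
# DDF_HOME and the host name are assumptions; substitute real values.
DDF_HOME="${DDF_HOME:-$HOME/ddf}"
mkdir -p "$DDF_HOME/etc"
# Seed the default line if the file does not exist (illustration only).
grep -q '^ws-security.subject.cert.constraints' "$DDF_HOME/etc/system.properties" 2>/dev/null \
  || echo 'ws-security.subject.cert.constraints = .*' >> "$DDF_HOME/etc/system.properties"
sed -i 's|^ws-security.subject.cert.constraints = .*|ws-security.subject.cert.constraints = ddf.example.org|' \
  "$DDF_HOME/etc/system.properties"
```

To allow several hosts, put a comma-delimited list on the right-hand side instead of a single name.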

13.3.8. Reconfiguring DDF with a Different Catalog Provider

This scenario describes how to reconfigure DDF to use a different catalog provider.

This scenario assumes DDF is already running.

Uninstall Catalog Provider (if installed).
  1. Navigate to the Admin Console.

  2. Select the Solr Catalog application.

  3. Select the Features tab.

  4. Find and Stop the installed Catalog Provider

Install the new Catalog Provider
  1. Navigate to the Admin Console.

  2. Select the Solr Catalog application.

  3. Select the Features tab.

  4. Find and Start the desired Catalog Provider.

13.3.9. Configuring DDF as a Fanout Proxy

This scenario describes how to configure DDF as a fanout proxy such that only queries and resource retrieval requests are processed and create/update/delete requests are rejected. All queries are enterprise queries and no catalog provider needs to be configured.

  1. Navigate to the Admin Console.

  2. Select the Catalog application.

  3. Select the Configuration tab.

  4. Select Catalog Standard Framework.

  5. Select Enable Fanout Proxy.

  6. Save changes.

DDF is now operating as a fanout proxy. Only queries and resource retrieval requests will be allowed. All queries will be federated. Create, update, and delete requests will throw an UnsupportedOperationException, even if a Catalog Provider was configured prior to the reconfiguration to fanout.

13.3.10. Configuring the Product Cache from the Admin Console

All caching properties are part of the Resource Download Settings.

Invalidating the Product Cache
  1. The product cache directory can be administratively invalidated by turning off the product caching using the Enable Product Caching configuration.

  2. Alternatively, an administrator may manually invalidate products by removing them from the file system. Products are cached at the directory specified in the Product Cache Directory configuration.

Format:

<INSTALL-DIR>/data/product-cache/<source-id>-<metacard-id>

Example:

<INSTALL-DIR>/data/product-cache/abc123
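Manual invalidation from step 2 amounts to removing files from that directory. A sketch, assuming the install directory is $HOME/ddf and using the cached file name abc123 from the example above (the mkdir is only there so the sketch is self-contained; the directory already exists on a real install):

```shell
# Remove a single cached product; removing everything under
# product-cache invalidates the whole cache.
INSTALL_DIR="${INSTALL_DIR:-$HOME/ddf}"   # assumption: actual install dir
mkdir -p "$INSTALL_DIR/data/product-cache"
rm -f "$INSTALL_DIR/data/product-cache/abc123"
```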

13.3.11. Configuring Solr from Admin Console

Solr can only be configured/installed from configuration files. See Solr System Properties and Configuring Solr Catalog Provider Data Directory.

13.3.12. Securing Identity Provider/Service Provider

The Security Identity Provider (IdP) application provides service provider handling that satisfies the SAML 2.0 Web SSO profile in order to support external IdPs (Identity Providers).

IdP (Identity Provider) and SP (Service Provider)

IdP and SP are used for SSO authentication purposes.

Installing the IdP

The IdP bundles are not installed by default. They can be started by installing the security-idp feature.

  1. Install the security-idp feature, either from the command line (features:install security-idp) or from the Admin Console (Security → Features → security-idp).

Security IdP Service Provider (SP)

The IdP client that interacts with the specified Identity Provider.

IdP SSO Configuration
  1. Navigate to Admin Console.

  2. Select the Security application.

  3. Select the Configuration tab.

  4. Select IdP Client.

  5. Populate IdP Metadata field through one of the following:

    1. An HTTPS URL (https://)

    2. A file URL (file:)

    3. An XML block to refer to desired metadata

      1. (e.g., https://localhost:8993/services/idp/login/metadata)

IdP Client (SP) example.xml
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://localhost:8993/services/idp/login">
  <md:IDPSSODescriptor WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
    <md:KeyDescriptor use="signing">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>
            MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
          </ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:KeyDescriptor use="encryption">
      <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
        <ds:X509Data>
          <ds:X509Certificate>
            MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
          </ds:X509Certificate>
        </ds:X509Data>
      </ds:KeyInfo>
    </md:KeyDescriptor>
    <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/logout"/>
    <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/logout"/>
    <md:NameIDFormat>
      urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
    </md:NameIDFormat>
    <md:NameIDFormat>
      urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
    </md:NameIDFormat>
    <md:NameIDFormat>
      urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName
    </md:NameIDFormat>
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/services/idp/login"/>
    <md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/services/idp/login"/>
  </md:IDPSSODescriptor>
</md:EntityDescriptor>
Security IdP Server

An internal Identity Provider solution.

Configuring IdP Server
  1. Navigate to the Admin Console.

  2. Select the Security application.

  3. Select the Configuration tab.

  4. Select IdP Server.

  5. Click the + next to SP Metadata to add a new entry.

  6. Populate the new entry:

    1. with an HTTPS URL (https://),

    2. file URL (file:), or

    3. XML block to refer to desired metadata, e.g. (https://localhost:8993/services/saml/sso/metadata)

IdP Server example.xml
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://localhost:8993/services/saml">
  <md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
  <md:KeyDescriptor use="signing">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data>
        <ds:X509Certificate>
          MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
        </ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>
  </md:KeyDescriptor>
  <md:KeyDescriptor use="encryption">
    <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
      <ds:X509Data>
        <ds:X509Certificate>
        MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
        </ds:X509Certificate>
      </ds:X509Data>
    </ds:KeyInfo>
  </md:KeyDescriptor>
  <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/logout"/>
  <md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/logout"/>
  <md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/services/saml/sso"/>
  <md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/services/saml/sso"/>
  </md:SPSSODescriptor>
</md:EntityDescriptor>
Configuring IdP Authentication Types

Set the authentication types that will be accepted by the IdP.

  1. Navigate to the Admin Console.

  2. Select the Security application.

  3. Select the Configuration tab.

  4. Select Web Context Policy Manager

  5. Under Authentication Types, set the IDP authentication type as necessary. Note that it should only be used on context paths that will be accessed by users via web browsers. For example:

    • /search=SAML|IDP

Note

If you have configured /search to use IDP, be sure to select the "External Authentication" checkbox in the Search UI standard settings.

Identity Provider Limitations

The internal Identity Provider solution should be used in favor of any external solutions until the IdP Service Provider fully satisfies the SAML 2.0 Web SSO profile.

13.3.13. Catalog Filtering

Filtering is performed by an Access plugin, after a query or delete has been performed or before ingest has been performed.

How Filtering Works

Each metacard result can contain security attributes that are pulled from the metadata record after being processed by a PolicyPlugin that populates this attribute. The security attribute is a Map containing a set of keys that map to lists of values. The metacard is then processed by a filter plugin that creates a KeyValueCollectionPermission from the metacard’s security attribute. This permission is then checked against the user subject to determine if the subject has the correct claims to view that metacard. The decision to filter the metacard eventually relies on the installed Policy Decision Point (PDP). The PDP that is being used returns a decision, and the metacard will either be filtered or allowed to pass through.

How a metacard gets filtered is left up to any number of FilterStrategy implementations that might be installed. Each FilterStrategy will return a result to the filter plugin that says whether or not it was able to process the metacard, along with the metacard or response itself. This allows a metacard or entire response to be partially filtered to allow some data to pass back to the requester. This could also include filtering any products sent back to a requester.

The security attributes populated on the metacard are completely dependent on the type of the metacard. Each type of metacard must have its own PolicyPlugin that reads the metadata being returned and then returns the appropriate attributes.

Example (represented as simple XML for ease of understanding):
<metacard>
    <security>
        <map>
            <entry assertedAttribute1="A,B" />
            <entry assertedAttribute2="X,Y" />
            <entry assertedAttribute3="USA,GBR" />
            <entry assertedAttribute4="USA,AUS" />
        </map>
    </security>
</metacard>
<user>
    <claim name="subjectAttribute1">
        <value>A</value>
        <value>B</value>
    </claim>
    <claim name="subjectAttribute2">
        <value>X</value>
        <value>Y</value>
    </claim>
    <claim name="subjectAttribute3">
        <value>USA</value>
    </claim>
    <claim name="subjectAttribute4">
        <value>USA</value>
    </claim>
</user>

In the above example, the user’s claims are represented very simply and are similar to how they would actually appear in a SAML 2 assertion. Each of these user (or subject) claims will be converted to a KeyValuePermission object. These permission objects will be implied against the permission object generated from the metacard record. In this particular case, the metacard might be allowed if the policy is configured appropriately because all of the permissions line up correctly.

Configuring Filtering Policies

There are two options for processing filtering policies: internally, or through the use of a policy formatted in eXtensible Access Control Markup Language (XACML). The procedure for setting up a policy differs depending on whether that policy is to be used internally or by the external XACML processing engine. Setting up an internal policy is as follows:

  1. Navigate to the Admin Console.

  2. Select the Security application.

  3. Click the Configuration tab.

  4. Click on the Security AuthZ Realm configuration.

  5. Add any attribute mappings necessary to map between subject attributes and the attributes to be asserted.

    1. For example, the example above would require two Match All mappings: subjectAttribute1=assertedAttribute1 and subjectAttribute2=assertedAttribute2.

    2. The Match One mappings would be subjectAttribute3=assertedAttribute3 and subjectAttribute4=assertedAttribute4.

With the security-pdp-authz feature configured in this way, the above Metacard would be displayed to the user. Note that this particular configuration would not require any XACML rules to be present. All of the attributes can be matched internally and there is no reason to call out to the external XACML processing engine. For more complex decisions, it might be necessary to write a XACML policy to handle certain attributes.
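The internal matching behavior described above can be sketched in Python. This is an illustrative model only, not DDF's actual PDP implementation; the attribute and claim names are taken from the example metacard and user above:

```python
# Illustrative model of internal filter-policy matching (not DDF's real API).
# "Match All" requires every asserted value to appear in the subject's claim;
# "Match One" requires at least one shared value.

def match_all(subject_values, asserted_values):
    return set(asserted_values) <= set(subject_values)

def match_one(subject_values, asserted_values):
    return bool(set(asserted_values) & set(subject_values))

# Security attributes from the example metacard.
metacard_security = {
    "assertedAttribute1": ["A", "B"],
    "assertedAttribute2": ["X", "Y"],
    "assertedAttribute3": ["USA", "GBR"],
    "assertedAttribute4": ["USA", "AUS"],
}

# Claims from the example user.
subject_claims = {
    "subjectAttribute1": ["A", "B"],
    "subjectAttribute2": ["X", "Y"],
    "subjectAttribute3": ["USA"],
    "subjectAttribute4": ["USA"],
}

# Attribute mappings configured in the Security AuthZ Realm.
mappings = {
    "assertedAttribute1": ("subjectAttribute1", match_all),
    "assertedAttribute2": ("subjectAttribute2", match_all),
    "assertedAttribute3": ("subjectAttribute3", match_one),
    "assertedAttribute4": ("subjectAttribute4", match_one),
}

def is_permitted(claims, security, mappings):
    """True if every metacard security attribute is satisfied by the
    mapped subject claim under its configured match strategy."""
    for attr, asserted in security.items():
        claim, strategy = mappings[attr]
        if not strategy(claims.get(claim, []), asserted):
            return False
    return True

print(is_permitted(subject_claims, metacard_security, mappings))  # True
```

With these mappings every permission is implied, so the example metacard passes the filter; removing any required claim value from the subject would cause it to be filtered.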

To set up a XACML policy, place the desired XACML policy in the <distribution root>/etc/pdp/policies directory and update the included access-policy.xml to include the new policy. This is the directory in which the PDP will look for XACML policies every 60 seconds. A sample XACML policy is located at the end of this page. Information on specific bundle configurations and names can be found on the Security PDP application page.

Catalog Filter Policy Plugins

Several Policy Plugins for catalog filtering currently exist: the Metacard Attribute Security Policy Plugin and the XML Attribute Security Policy Plugin. These Policy Plugin implementations allow an administrator to easily add filtering capabilities to some standard Metacard types for all Catalog operations. These plugins place policy information on the Metacard itself, which the Filter Plugin then uses to prevent unauthorized users from viewing content.

Creating a XACML Policy

This document assumes familiarity with the XACML schema and does not go into detail on the XACML language. When creating a policy, a target is used to indicate that a certain action should be run only for one type of request. Targets can be used on both the main policy element and any individual rules. Targets are geared toward the actions that are set in the request. These actions generally consist of the standard CRUD operations (create, read, update, delete) or a SOAPAction if the request is coming through a SOAP endpoint.

Note

These are only the action values that are currently created by the components that come with DDF. Additional components can be created and added to DDF to identify specific actions.

In the examples below, the policy has specified targets for the above type of calls. For the Filtering code, the target was set for "filter", and the Service validation code targets were geared toward two services: query and LocalSiteName. In a production environment, these actions for service authorization will generally be full URNs that are described within the SOAP WSDL.

XACML Policy Attributes

Attributes for the XACML request are populated with the information in the calling subject and the resource being checked.

XACML Policy Subject

The attributes for the subject are obtained from the SAML claims and populated within the XACML policy as individual attributes under the urn:oasis:names:tc:xacml:1.0:subject-category:access-subject category. The name of the claim is used for the AttributeId value. Examples of the items being populated are available at the end of this page.

XACML Policy Resource

The attributes for resources are obtained through the permissions process. When checking permissions, the XACML processing engine retrieves a list of permissions that should be checked against the subject. These permissions are populated outside of the engine and should be populated with the attributes that should be asserted against the subject. When the permissions are of a key-value type, the key being used is populated as the AttributeId value under the urn:oasis:names:tc:xacml:3.0:attribute-category:resource category.

Using a XACML Policy

To use a XACML policy, copy the XACML policy into the <DDF_HOME>/etc/pdp/policies directory.

13.3.14. Auditing

  • Required Step for Security Hardening

Audit logging captures security-specific system events for monitoring and review. DDF provides an Audit Plugin that logs all catalog transactions to the security.log. Information captured includes user identity, query information, and resources retrieved.

Follow all operational requirements for the retention of the log files. This may include using cryptographic mechanisms, such as encrypted file volumes or databases, to protect the integrity of audit information.

Note

The Audit Log default location is <DDF_HOME>/data/log/security.log

Note
Audit Logging Best Practices

For the most reliable audit trail, it is recommended to configure the operational environment of the DDF to generate alerts that notify administrators of:

  • auditing software/hardware errors

  • failures in audit capturing mechanisms

  • audit storage capacity (or desired percentage threshold) being reached or exceeded.

Warning

The security audit logging function does not have any configuration for audit reduction or report generation. The logs themselves could be used to generate such reports outside the scope of DDF.

Enabling Fallback Audit Logging
  • Required Step for Security Hardening

In the event the system is unable to write to the security.log file, DDF must be configured to fall back to report the error in the application log:

  • edit <INSTALL_HOME>/etc/org.ops4j.pax.logging.cfg

    • uncomment the line (remove the # from the beginning of the line) for log4j2 (org.ops4j.pax.logging.log4j2.config.file = ${karaf.etc}/log4j2.config.xml)

    • delete all subsequent lines

  • edit <INSTALL_HOME>/etc/startup.properties

    • replace the artifact pax-logging-service with the artifact pax-logging-log4j2 (keep same version and group)

  • edit <INSTALL_HOME>/etc/log4j2.config.xml

    • find the entry for the securityBackup appender. (see example)

    • change the value of fileName and the prefix of filePattern to the name/path of the desired failover security log (<NEW_FILE_NAME>)

securityBackup Appender Before
<RollingFile name="securityBackup" append="true" ignoreExceptions="false"
                     fileName="${sys:karaf.data}/log/securityBackup.log"
                     filePattern="${sys:karaf.data}/log/securityBackup.log-%d{yyyy-MM-dd-HH}-%i.log.gz">
securityBackup Appender After
<RollingFile name="securityBackup" append="true" ignoreExceptions="false"
                     fileName="${sys:karaf.data}/log/<NEW_FILE_NAME>"
                     filePattern="${sys:karaf.data}/log/<NEW_FILE_NAME>-%d{yyyy-MM-dd-HH}-%i.log.gz">
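The startup.properties edit above can also be scripted. The sketch below is a hedged example: the temporary-file demonstration and the sample property line are assumptions, but the artifact swap mirrors the step described above (a real run would target <INSTALL_HOME>/etc/startup.properties):

```python
import tempfile
from pathlib import Path

def swap_pax_logging_artifact(startup_properties: Path) -> None:
    """Replace every pax-logging-service reference with pax-logging-log4j2,
    leaving the version and group portions of each line untouched."""
    text = startup_properties.read_text()
    startup_properties.write_text(
        text.replace("pax-logging-service", "pax-logging-log4j2")
    )

# Demonstration against a throwaway copy; the entry below is hypothetical.
with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "startup.properties"
    sample.write_text(
        "org/ops4j/pax/logging/pax-logging-service/1.8.4/"
        "pax-logging-service-1.8.4.jar=8\n"
    )
    swap_pax_logging_artifact(sample)
    print(sample.read_text())
```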

13.3.15. Configuring the Landing Page

The DDF landing page offers a starting point and general information for a DDF node. It is accessible at /(index|home|landing(.htm|html)).

Installing the Landing Page

The Landing Page is installed by default with a standard installation.

Customizing the Landing Page

Configure the Landing Page from the Admin Console:

  1. Navigate to the Admin Console.

  2. Select Platform Application.

  3. Select Configuration tab.

  4. Select Landing Page.

Table 7. Landing Page
Name Id Type Description Default Value Required

Description

description

String

Specifies the description to display on the landing page.

As a common data layer, DDF provides secure enterprise-wide data access for both users and systems.

true

Phone Number

phone

String

Specifies the phone number to display on the landing page.

true

Email Address

email

String

Specifies the email address to display on the landing page.

true

External Web Site

externalUrl

String

Specifies the external web site URL to display on the landing page.

true

Announcements

announcements

String

Announcements that will be displayed on the landing page.

null

true

Branding Background

background

String

Specifies the landing page background color. Use html css colors or #rrggbb.

true

Branding Foreground

foreground

String

Specifies the landing page foreground color. Use html css colors or #rrggbb.

true

Branding Logo

logo

String

Specifies the landing page logo. Use a base64 encoded image.

true

Additional Links

links

String

Additional links to be displayed on the landing page. Use the format <text>,<link> (e.g. example, http://www.example.com). Empty entries are ignored.

true

13.4. Configuring from the Command Console

In environments where access to the Admin Console is not possible or not desired, configurations can also be performed through a command line interface, the Command Console.

13.4.1. Managing Applications From the Command Console

Applications can be installed from the Command Console using the following commands:

Table 8. App Commands
Command Effect

app:add <appName>

Install an app.

app:list

List all installed apps and current status.

app:remove <appName>

Uninstall an app.

app:start

Start an inactive app.

app:status <appName>

Detailed view of application status

app:stop <appName>

Stop an active app.

app:tree

Dependency tree view of all installed apps.
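For example, a typical session to install an application and verify its status might look like the following (catalog-app is used as an illustrative application name):

    ddf@local>app:add catalog-app
    ddf@local>app:start catalog-app
    ddf@local>app:status catalog-app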

13.4.2. Managing Features From the Command Console

Individual features can also be added via the Command Console.

  1. Determine which feature to install by viewing the available features on DDF.
    ddf@local>feature:list

  2. The console outputs a list of all features available (installed and uninstalled). A snippet of the list output is shown below (the versions may differ):

State         Version            Name                                     Repository                           Description
[installed  ] [2.10.1  ] security-handler-api                     security-services-app-2.10.1 API for authentication handlers for web applications.
[installed  ] [2.10.1  ] security-core                            security-services-app-2.10.1 DDF Security Core
[uninstalled] [2.10.1  ] security-expansion                       security-services-app-2.10.1 DDF Security Expansion
[uninstalled] [2.10.1  ] security-cas-client                      security-services-app-2.10.1 DDF Security CAS Client.
[uninstalled] [2.10.1  ] security-cas-tokenvalidator              security-services-app-2.10.1 DDF Security CAS Validator for the STS.
[uninstalled] [2.10.1  ] security-cas-cxfservletfilter            security-services-app-2.10.1 DDF Security CAS Servlet Filter for CXF.
[installed  ] [2.10.1  ] security-pdp-authz                       security-services-app-2.10.1 DDF Security PDP.
[uninstalled] [2.10.1  ] security-pep-serviceauthz                security-services-app-2.10.1 DDF Security PEP Service AuthZ
[uninstalled] [2.10.1  ] security-expansion-user-attributes       security-services-app-2.10.1 DDF Security Expansion User Attributes Expansion
[uninstalled] [2.10.1  ] security-expansion-metacard-attributes   security-services-app-2.10.1 DDF Security Expansion Metacard Attributes Expansion
[installed  ] [2.10.1  ] security-sts-server                      security-services-app-2.10.1 DDF Security STS.
[installed  ] [2.10.1  ] security-sts-realm                       security-services-app-2.10.1 DDF Security STS Realm.
[uninstalled] [2.10.1  ] security-sts-ldaplogin                   security-services-app-2.10.1 DDF Security STS JAAS LDAP Login.
[uninstalled] [2.10.1  ] security-sts-ldapclaimshandler           security-services-app-2.10.1 Retrieves claims attributes from an LDAP store.
  3. Check the bundle status to verify the service is started.
    ddf@local>list

The console output should show an entry similar to the following:

[ 117] [Active     ] [            ] [Started] [   75] DDF :: Catalog :: Source :: Dummy (<version>)
Uninstalling Features from the Command Console
  1. Check the feature list to verify the feature is installed properly.
    ddf@local>feature:list

State         Version          Name                          Repository  Description
[installed  ] [2.10.1         ] ddf-core                      ddf-2.10.1
[uninstalled] [2.10.1         ] ddf-sts                       ddf-2.10.1
[installed  ] [2.10.1         ] ddf-security-common           ddf-2.10.1
[installed  ] [2.10.1         ] ddf-resource-impl             ddf-2.10.1
[installed  ] [2.10.1         ] ddf-source-dummy              ddf-2.10.1
  2. Uninstall the feature.
    ddf@local>feature:uninstall ddf-source-dummy

Warning

Dependencies that were auto-installed by the feature are not automatically uninstalled.

  3. Verify that the feature has uninstalled properly.
    ddf@local>feature:list

State         Version          Name                          Repository  Description
[installed  ] [2.10.1         ] ddf-core                      ddf-2.10.1
[uninstalled] [2.10.1         ] ddf-sts                       ddf-2.10.1
[installed  ] [2.10.1         ] ddf-security-common           ddf-2.10.1
[installed  ] [2.10.1         ] ddf-resource-impl             ddf-2.10.1
[uninstalled] [2.10.1         ] ddf-source-dummy              ddf-2.10.1

13.4.3. Configuring HTTP to HTTPS Proxy From the Command Console

DDF includes a proxy to transform HTTP to HTTPS for systems unable to use HTTPS.

  • Type the command feature:install platform-http-proxy

13.4.4. Configuring Solr from Command Console

Solr can only be configured/installed from configuration files. See Solr System Properties and Configuring Solr Catalog Provider Data Directory.

13.4.5. Standalone Security Token Service (STS) Installation

To run an STS-only DDF installation, uninstall the catalog components that are not being used. The following list displays the features that can be uninstalled to minimize the runtime size of DDF in an STS-only mode. This list is not comprehensive; it contains the larger components that can be uninstalled without impacting STS functionality.

Unnecessary Features for Standalone STS
  • catalog-core-standardframework

  • catalog-opensearch-endpoint

  • catalog-opensearch-source

  • catalog-rest-endpoint

13.4.6. Hardening Solr Index

  • Required Step for Security Hardening

DDF’s design includes support for pluggable indexes. The default installation contains a Solr index used as the Metadata Catalog. If desired, this implementation can be replaced with an alternate third-party index implementation.

The following sections provide hardening guidance for Solr; however, they serve only as a reference, as additional security requirements may apply.

Solr Admin User Interface Security

The Solr Admin user interface uses basic authentication as part of the server configuration.

Configuring Solr Node Security

The Solr server is protected by the built-in REST security architecture. The configuration can be changed by editing the Web Context Policy Manager configuration for the /solr web context.

  1. Navigate to the Admin Console.

  2. Select the Security application.

  3. Select the Configuration tab.

  4. Select Web Context Policy Manager.

By default, the configuration is set to /solr=SAML|PKI|BASIC. This allows a user or another system to connect to Solr using any of those authentication methods.

Configuring Solr Encryption

While it is possible to encrypt the Solr index, it decreases performance significantly. An encrypted Solr index also can only perform exact match queries, not relative or contextual queries. As this drastically reduces the usefulness of the index, this configuration is not recommended. The recommended approach is to encrypt the entire drive through the Operating System of the server on which the index is located.

13.5. Configuring from Configuration Files

As most configurations are stored in configuration files, in some instances it may make sense to edit those configuration files directly. Additionally, configuration files may be pre-created and copied into a DDF installation. Finally, in an environment hardened for security purposes, access to the Admin Console or the Command Console might be denied; in such an environment, it is necessary to configure DDF (e.g., providers, Schematron rulesets) using .config files.

13.5.1. Configuring Global Settings with system.properties

Global configuration settings are configured via the properties file system.properties. These properties can be manually set by editing this file or set via the initial configuration from the Admin Console.

Note

Any changes made to this file require a restart of the system to take effect.

Important

The passwords configured in this section reflect the passwords used to decrypt JKS (Java KeyStore) files. Changing these values without also changing the passwords of the JKS causes undesirable behavior.

Table 9. Global Settings
Title Property Type Description Default Value Required

Keystore and truststore java properties

Keystore

javax.net.ssl.keyStore

String

Path to server keystore

etc/keystores/serverKeystore.jks

Yes

Keystore Password

javax.net.ssl.keyStorePassword

String

Password for accessing keystore

changeit

Yes

Truststore

javax.net.ssl.trustStore

String

The trust store used for SSL/TLS connections. Path is relative to <DDF_HOME>.

etc/keystores/serverTruststore.jks

Yes

Truststore Password

javax.net.ssl.trustStorePassword

String

Password for server Truststore

changeit

Yes

Keystore Type

javax.net.ssl.keyStoreType

String

File extension to use with server keystore

jks

Yes

Truststore Type

javax.net.ssl.trustStoreType

String

File extension to use with server truststore

jks

Yes

Headless mode

Headless Mode

java.awt.headless

Boolean

Force Java to run in headless mode for use when the server doesn’t have a display device

true

No

Global URL Properties

Default Protocol

org.codice.ddf.system.protocol

String

Default protocol that should be used to connect to this machine.

https://

Yes

Host

org.codice.ddf.system.hostname

String

The hostname or IP address used to advertise the system. Do not enter localhost. Possibilities include the address of a single node or that of a load balancer in a multi-node deployment.

If the hostname is changed during the install to something other than localhost a new keystore and truststore must be provided. See Managing Keystores and Certificates for details.

NOTE: Does not change the address the system runs on.

localhost

Yes

HTTPS Port

org.codice.ddf.system.httpsPort

String

The https port used by the system.

NOTE: This DOES change the port the system runs on.

8993

Yes

HTTP Port

org.codice.ddf.system.httpPort

String

The http port used by the system.

NOTE: This DOES change the port the system runs on.

8181

Yes

Default Port

org.codice.ddf.system.port

String

The default port used to advertise the system. This should match either the http or https port.

NOTE: Does not change the port the system runs on.

8993

Yes

Root Context

org.codice.ddf.system.rootContext

String

The base or root context that services will be made available under.

/services

Yes

System Information Properties

Site Name

org.codice.ddf.system.siteName

String

The site name for DDF.

ddf.distribution

Yes

Site Contact

org.codice.ddf.system.siteContact

String

The email address of the site contact.

No

Version

org.codice.ddf.system.version

String

The version of DDF that is running.

This value should not be changed from the factory default.

2.10.1

Yes

Organization

org.codice.ddf.system.organization

String

The organization responsible for this installation of DDF.

Codice Foundation

Yes

Thread Pool Settings

Thread Pool Size

org.codice.ddf.system.threadPoolSize

Integer

Size of thread pool used for handling UI queries, federating requests, and downloading resources. See Configuring Thread Pools

128

Yes

HTTPS Specific Settings

Cipher Suites

https.cipherSuites

String

Cipher suites to use with secure sockets. If using the JCE unlimited strength policy, use this list in place of the defaults:

TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,

TLS_DHE_RSA_WITH_AES_128_CBC_SHA256,

TLS_DHE_RSA_WITH_AES_128_CBC_SHA,

TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,

TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256

No

Https Protocols

https.protocols

String

Protocols to allow for secure connections

TLSv1.1,TLSv1.2

No

Allow Basic Auth Over Http

org.codice.allowBasicAuthOverHttp

Boolean

Set to true to allow Basic Auth credentials to be sent insecurely over HTTP. This should only be done in a test environment. These events will be audited.

false

Yes

Restrict the Security Token Service to allow connections only from DNs matching these patterns

ws-security.subject.cert.constraints

String

Set to a comma separated list of regex patterns to define which hosts are allowed to connect to the STS

.*

Yes

XML Settings

Parse XML documents into DOM object trees

javax.xml.parsers.DocumentBuilderFactory

String

Enables Xerces-J implementation of DocumentBuilderFactory

org.apache.xerces.jaxp.DocumentBuilderFactoryImpl

Yes

Catalog Source Retry Interval

Initial Endpoint Contact Interval

org.codice.ddf.platform.util.http.initialRetryInterval

Integer

If a Catalog Source is unavailable, try to connect to it after the initial interval has elapsed. After every retry, the interval doubles, up to a given maximum interval. The interval is measured in seconds.

10

Yes

Maximum Endpoint Contact Interval

Maximum seconds between attempts to establish contact with unavailable Catalog Source.

Integer

Do not wait longer than the maximum interval to attempt to establish a connection with an unavailable Catalog Source. Smaller values result in more current information about the status of Catalog Sources, but cause more network traffic. The interval is measured in seconds.

300

Yes

File Upload Settings

File extensions flagged as potentially dangerous to the host system or external clients

bad.file.extensions

String

Files uploaded with these bad file extensions will have their file names sanitized before being saved

.exe, .jsp, .html, .js, .php, .phtml, .php3, .php4, .php5, .phps, .shtml, .jhtml, .pl, .py, .cgi, .msi, .com, .scr, .gadget, .application, .pif, .hta, .cpl, .msc, .jar, .kar, .bat, .cmd, .vb, .vbs, .vbe, .jse, .ws, .wsf, .wsc, .wsh, .ps1, .ps1xml, .ps2, .ps2xml, .psc1, .psc2, .msh, .msh1, .msh2, .mshxml, .msh1xml, .msh2xml, .scf, .lnk, .inf, .reg, .dll, .vxd, .cpl, .cfg, .config, .crt, .cert, .pem, .jks, .p12, .p7b, .key, .der, .csr, .jsb, .mhtml, .mht, .xhtml, .xht

Yes

File names flagged as potentially dangerous to the host system or external clients

bad.files

String

Files uploaded with these bad file names will have their file names sanitized before being saved

crossdomain.xml, clientaccesspolicy.xml, .htaccess, .htpasswd, hosts, passwd, group, resolv.conf, nfs.conf, ftpd.conf, ntp.conf, web.config, robots.txt

Yes

Mime types flagged as potentially dangerous to external clients

bad.mime.types

String

Files uploaded with these mime types will be rejected from the upload

text/html, text/javascript, text/x-javascript, application/x-shellscript, text/scriptlet, application/x-msdownload, application/x-msmetafile

Yes

These properties are available to be used as variable parameters in input URL fields within the Admin Console. For example, the URL for the local CSW service (https://localhost:8993/services/csw) could be defined as:

${org.codice.ddf.system.protocol}${org.codice.ddf.system.hostname}:${org.codice.ddf.system.port}${org.codice.ddf.system.rootContext}/csw

This variable version is more verbose, but it will not need to be changed if the system host, port, or root context changes.

Warning

Only root can access ports < 1024 on Unix systems.

13.5.2. Configuring with .config Files

The DDF is configured using .config files. Like the Karaf .cfg files, these configuration files must be located in the <DDF_HOME>/etc/ directory, have a name that matches the configuration persistence ID (PID) they represent, and have a service.pid property set to the configuration PID.

Unlike .cfg files, however, this type of configuration file supports lists within configuration values (metatype cardinality attribute greater than 1).

Important

This new configuration file format must be used for any configuration that makes use of lists. Examples include Web Context Policy Manager (PID: org.codice.ddf.security.policy.context.impl.PolicyManager) and Security STS Guest Claims Handler (PID: ddf.security.sts.guestclaims).

Warning

Only one configuration file should exist for any given PID. The result of having both a .cfg and a .config file for the same PID is undefined and could cause the application to fail.

The main purpose of the configuration files is to allow administrators to pre-configure DDF without having to use the Admin Console. To do so, copy the configuration files to the <DDF_HOME>/etc directory after the DDF zip file has been extracted.

Upon startup, all the .config files located in <DDF_HOME>/etc are automatically read and processed. Files that have been processed successfully are moved to <DDF_HOME>/etc/processed so they will not be processed again when the system is restarted. Files that could not be processed are moved to the <DDF_HOME>/etc/failed directory.

DDF also monitors the <DDF_HOME>/etc directory for any new .config file that gets added. As soon as a new file is detected, it is read, processed, and moved to the appropriate directory based on whether or not it was successfully processed.

13.5.3. Configuring Using a .config File Template

A template file is provided for some configurable DDF items so that they can be copied/renamed then modified with the appropriate settings.

The following steps define the procedure for configuring a new source or feature using a config file:

  1. Copy/rename the provided template file in the etc/templates directory to the etc directory. (Refer to the table above to determine the correct template.)

    1. Not required, but it is good practice to change the instance name of the file (e.g., OpenSearchSource.1.config) to something identifiable (e.g., OpenSearchSource.remote-site-1.config).

  2. Edit the file copied into etc with the settings for the configuration. (Refer to the table above to determine the configurable properties.)

    1. Consult the inline comments in the file for guidance on what to modify.

The new service can now be used as if it had been created using the Admin Console.
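The copy/rename step above can be sketched as a shell session. A temporary directory stands in for a real <DDF_HOME> so the commands can be run anywhere; the OpenSearchSource file names are the examples given in the steps above.

```shell
# Simulate <DDF_HOME> with a temporary directory (illustration only)
DDF_HOME=$(mktemp -d)
mkdir -p "$DDF_HOME/etc/templates"
echo '# OpenSearch source template' > "$DDF_HOME/etc/templates/OpenSearchSource.1.config"

# Step 1: copy the template into etc, renaming it to something identifiable
cp "$DDF_HOME/etc/templates/OpenSearchSource.1.config" \
   "$DDF_HOME/etc/OpenSearchSource.remote-site-1.config"

# The copied file is now ready to be edited with the desired settings
ls "$DDF_HOME/etc" | grep OpenSearchSource
```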

Table 10. Templates included with DDF
DDF Service Template File Name Factory PID Configurable Properties

DDF Catalog Framework

ddf.catalog.impl.service.CatalogFrameworkImpl.cfg

ddf.catalog.CatalogFrameworkImpl

Standard Catalog Framework

13.5.4. Configuring WSS Using Standalone Servers

DDF can be configured to use SAML 2.0 Web SSO as a single sign-on service and LDAP and STS to keep track of users and user attributes. SAML, LDAP, and STS can be installed on a local DDF instance with only a few feature installs. Setting up these authentication components to run externally, however, is more nuanced, so this page will provide step-by-step instructions detailing the configuration process.

If using different keystore names, substitute the name provided in this document with the desired name for your setup. For this document, the following data is used:

Server

Keystore File

Comments

DDF

serverKeystore.jks

Keystore used for SSL/TLS connections.

Figure 1. Login Authentication Scheme

13.5.5. Configuring Managed Service Factory Bundles

Services that are created using a Managed Service Factory can be configured using .config files as well. These configuration files, however, follow a different naming convention. The files must start with the Managed Service Factory PID, be followed by a unique identifier and have a .config extension. For instance, assuming that the Managed Service Factory PID is org.codice.ddf.factory.pid and two instances of the service need to be configured, files org.codice.ddf.factory.pid.<UNIQUE ID 1>.config and org.codice.ddf.factory.pid.<UNIQUE ID 2>.config should be created and added to <DDF_HOME>/etc.

The unique identifiers used in the file names have no impact on the order in which the configuration files are processed. No specific processing order should be assumed. Also, a new service will be created and configured every time a configuration file matching the Managed Service Factory PID is added to the directory, regardless of the unique id used.

These configuration files must also contain a service.factoryPid property set to the factory PID (without the sequential number). They should not however contain the service.pid property.
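For illustration, a hypothetical federated CSW source instance might be configured with a file like the one below. The factory PID Csw_Federated_Source is taken from the table of property formats later in this section; the property names id and cswUrl and the URL are assumptions for the sketch, not taken from an actual metatype.

```
# <DDF_HOME>/etc/Csw_Federated_Source.remote-1.config
service.factoryPid = "Csw_Federated_Source"
id = "remote-csw-1"
cswUrl = "https://remote.example.com/services/csw"
```

Note that the file contains service.factoryPid but no service.pid property.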

File Format

The basic syntax of the .config configuration files is similar to the older .cfg files but introduces support for lists and types other than simple strings. The type associated with a property must match the type attribute used in the corresponding metatype.xml file when applicable.

The following table shows the format to use for each property type supported.

Table 11. Property Formats
Type Format Example

Service PID

service.pid = "servicePid"

service.pid = "org.codice.ddf.security.policy.context.impl.PolicyManager"

Factory PID

service.factoryPid = "serviceFactoryPid"

service.factoryPid = "Csw_Federated_Source"

Strings

name = "value"

name = "john"

Booleans

name = B"true|false"

authorized = B"true"

Integers

name = I"value"

timeout = I"60"

Longs

name = L"value"

diameter = L"10000"

Floats

name = F"value"

cost = F"10.50"

Doubles

name = D"value"

latitude = D"45.0234"

Lists of Strings

name = [ "value1", "value2", …​ ]

complexStringArray = [ "{\"url\"\ \"http://test.sample.com\"\ \"layers\"\ [\"0\"]\ \"VERSION\"\ \"1.1|1.2\"\ \"image/png\"}\ \"beta\"\ 1}", "{\"url\"\ \"http://test.sample.com"\ 0.5}", "/solr\=SAML|PKI|basic", "/security-config\=SAML|basic" ]

Lists of Integers

name = I[ "value1", "value2", …​ ]

ports = I[ "8993", "8181" ]

Note
  • Lists of values can be prefixed with any of the supported types (B, I, L, F or D).

  • To prevent any configuration issues, the = signs used in values should be escaped using a backslash (\).

  • Boolean values will default to false if any value other than true is provided.

  • The escape character (\) must be used for double quotes (") and spaces inside values, but cannot be used with { } or [ ] pairings.

Sample configuration file
service.pid="org.codice.ddf.security.policy.context.impl.PolicyManager"

authenticationTypes=["/\=SAML|GUEST","/admin\=SAML|basic","/system\=basic","/solr\=SAML|PKI|basic","/sources\=SAML|basic","/security-config\=SAML|basic","/search\=basic"]

realms=["/\=karaf"]

requiredAttributes=["/\=","/admin\={http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role\=admin}","/solr\={http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role\=admin}","/system\={http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role\=admin}","/security-config\={http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role\=admin}"]

whiteListContexts=["/services/SecurityTokenService","/services/internal/metrics","/services/saml","/proxy","/services/csw"]

13.5.6. Installing Multiple DDFs on the Same Host

To have multiple DDF instances on the same host, it is necessary to edit the port numbers in the files in the DDF install folder.

File to Edit Property(ies) Original Value Example of New Value

bin/karaf.bat

address

5005

5006

etc/org.apache.karaf.management.cfg

rmiRegistryPort

1099

1199

rmiServerPort

44444

44445

etc/system.properties

httpsPort,port

8993

8994

httpPort

8181

8281
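Putting the table together, the second instance's etc/system.properties and etc/org.apache.karaf.management.cfg might be edited as follows. The property values mirror the example column above; the exact surrounding contents of each file will differ per installation.

```
# etc/system.properties (second DDF instance)
httpsPort=8994
port=8994
httpPort=8281

# etc/org.apache.karaf.management.cfg (second DDF instance)
rmiRegistryPort=1199
rmiServerPort=44445
```

The debug address in bin/karaf.bat (5005 by default) would similarly be changed, e.g. to 5006.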

13.5.7. Configuring Files in Home Directory Hierarchy

Many important configuration settings are stored in the <DDF_HOME> directory.

Note

Depending on the environment, it may be easier for integrators and administrators to configure DDF using the Admin Console prior to disabling it for hardening purposes. The Admin Console can be re-enabled for additional configuration changes.

In an environment hardened for security purposes, access to the Admin Console or the Command Console may be denied, and using the latter in such an environment may cause configuration errors. In that case, it is necessary to configure DDF (e.g., providers, Schematron rulesets) using .config files.

A template file is provided for some configurable DDF items so that they can be copied/renamed then modified with the appropriate settings.

Warning

If the Admin Console is enabled again, all of the configuration done via .config files will be loaded and displayed. Note, however, that the name of the .config file is not used in the Admin Console. Instead, a universally unique identifier (UUID) was assigned when the DDF item was created, and the Admin Console displays that UUID (e.g., OpenSearchSource.112f298e-26a5-4094-befc-79728f216b9b).

13.5.8. Configuring Solr Catalog Provider Data Directory

The HTTP Solr Catalog Provider and the Embedded Solr Catalog Provider write index files to the file system. By default, these files are stored under <DDF_HOME>/data/solr/catalog/data. If there is inadequate space in <DDF_HOME>, or if it is desired to maintain backups of the indexes, this directory can be changed.

To change the data directory, edit the system.properties file in <DDF_HOME>/etc prior to starting DDF.

Edit the system.properties file
# Uncomment the following line and set it to the desired path
# solr.data.dir = ${karaf.home}/data/solr
Changing the Data Directory

It may become necessary to change the data directory after DDF has ingested data.

  1. Shut down the DDF.

  2. Create the new directory to hold the indexes.

    Make new Data Directory
    mkdir -p /path/to/new/data/dir
  3. Copy the indexes to the new directory.

    Copy the indexes to the new Directory.
    cp /path/to/old/data/dir/* /path/to/new/data/dir/.
  4. Set the system.properties file to use the new directory.

    Update system.properties file
    solr.data.dir = /path/to/new/data/dir
  5. Restart the DDF.
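Steps 2 and 3 above can be sketched as a shell session. The old and new directories here are stand-ins created under a temporary directory so the commands are runnable as-is; substitute the real paths on an actual system.

```shell
# Stand-in old and new data directories (illustration only)
OLD=$(mktemp -d)
NEW=$(mktemp -d)/solr-data
touch "$OLD/index.bin"   # pretend this is an existing Solr index file

# Step 2: create the new directory to hold the indexes
mkdir -p "$NEW"

# Step 3: copy the indexes to the new directory
cp -r "$OLD"/. "$NEW"/

ls "$NEW"   # index.bin
```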

Warning

Changes Require a Distribution Restart
If the Data Directory File Path property is changed, no changes will occur to the Solr Catalog Provider until the distribution has been restarted.

Note

If the data directory file path property is changed to a new directory and the previous data is not moved into that directory, no data will exist in Solr; instead, Solr will create an empty index. Therefore, it is possible to have multiple locations where Solr files are stored, and a user can toggle between those locations for different sets of data.

13.5.9. Configuring Thread Pools

The org.codice.ddf.system.threadPoolSize property can be used to specify the size of thread pools used by:

  • Federating requests between DDF systems

  • Downloading resources

  • Handling asynchronous queries, such as queries from the UI

By default, this value is set to 128. It is not recommended to set this value extremely high. If unsure, leave this setting at its default value of 128.

13.5.10. Configuring Web Service Providers

By default, Solr, the STS server, the STS client, and the rest of the services use the system property org.codice.ddf.system.hostname, which defaults to 'localhost' rather than the fully qualified domain name of the DDF instance. Assuming the DDF instance is providing these services, the configuration must be updated to use the fully qualified domain name as the service provider.

This can be changed during Initial Configuration or later by editing the <INSTALL_HOME>/etc/system.properties file.
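For example, the relevant line in <INSTALL_HOME>/etc/system.properties might be updated as follows, where ddf.example.com is a placeholder for the actual fully qualified domain name of the host:

```
org.codice.ddf.system.hostname=ddf.example.com
```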

13.5.11. Isolating Solr Cloud and Zookeeper

  • Required Step for Security Hardening (if using Solr Cloud/Zookeeper)

Zookeeper cannot use secure (SSL/TLS) connections, so the configuration information that Zookeeper sends and receives is vulnerable to network sniffing. In addition, the connections between the local Solr Catalog service and Solr Cloud, and the connections between Solr Cloud nodes, are not necessarily secure; any unencrypted network traffic is vulnerable to sniffing attacks. To use Solr Cloud and Zookeeper securely, these processes must be isolated on the network, or their communications must be encrypted by other means. The DDF process must remain visible on the network so that authorized parties can interact with it.

Examples of Isolation:
  • Create a private network for Solr Cloud and Zookeeper. Only DDF is allowed to contact devices inside the private network.

  • Use IPsec to encrypt the connections between DDF, Solr Cloud nodes, and Zookeeper nodes.

  • Put DDF, Solr Cloud and Zookeeper behind a firewall that only allows access to DDF.

13.6. Importing Configurations

The Configuration Export/Import capability allows administrators to export the current DDF configuration and use it as a starting point for a new installation. This is useful when upgrading or expanding use of DDF where an identical configuration of multiple instances is desired.

Important
  • Importing configuration files is only guaranteed to work when importing files from the same DDF version. Importing from a different version is not recommended as it may cause the new DDF instance to be incorrectly configured and become unusable.

  • All configurations will be exported to <DDF_HOME>/etc/exported followed by their relative path from <DDF_HOME>. For instance, <DDF_HOME>/etc/keystores/keystore.jks will be exported to <DDF_HOME>/etc/exported/etc/keystores/keystore.jks, while <DDF_HOME>/etc/system.properties will be exported to <DDF_HOME>/etc/exported/etc/system.properties.

  • To keep the export/import process simple and consistent, all system configuration files are required to be under the <DDF_HOME> directory.
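The path mapping described above can be computed with plain shell parameter expansion. Here /opt/ddf stands in for <DDF_HOME>:

```shell
DDF_HOME=/opt/ddf   # stand-in for <DDF_HOME>
FILE=$DDF_HOME/etc/keystores/keystore.jks

# Strip the <DDF_HOME>/ prefix to get the relative path,
# then prepend the etc/exported directory
REL=${FILE#"$DDF_HOME"/}
echo "$DDF_HOME/etc/exported/$REL"
# /opt/ddf/etc/exported/etc/keystores/keystore.jks
```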

13.6.1. Exporting Existing Configurations

Exporting Existing Configurations from Admin Console

You can export the current system configurations using the Admin Console. This is useful for migrating from one running instance to another.

To export the current system configurations, follow these instructions:

  1. Select the System tab (next to the Applications tab)

  2. Click the Export Configuration button

  3. Fill out the form, specifying the destination for the export. A relative path will be relative to <DDF_HOME>.

  4. Click the Start Export button.

  5. If there are no warnings or errors, the form will automatically close upon finishing the export.

Export Existing Configuration Settings from Command Console
  • Required Step for Security Hardening

To export the current DDF configuration from the Command Console:

  1. Type in migration:export <directory>. This command creates the exported configuration files and saves them to the specified directory. If no directory is specified, it defaults to <DDF_HOME>/etc/exported.

  2. Zip up the exported files in the export directory.

cd <DDF_HOME>/etc/exported
zip -r exportedFiles.zip *
Troubleshooting Common Warnings or Failures of Configuration Export

If export is unsuccessful, use this list to verify the correct configuration.

  • Export Destination Directory Permissions Set to Read Only.

Figure 2. Insufficient Write Permissions
  • Properties Set to Absolute File Paths

    • Setting properties to absolute paths is not allowed, so update the property to a value that is relative to <DDF_HOME>. Note, however, that the export did not completely fail; it issued a warning that a specific file was excluded.