1. About DDF
1.1. Introducing DDF
Distributed Data Framework (DDF) is a free and open-source common data layer that abstracts services and business logic from underlying data structures to enable rapid integration of new data sources.
Licensed under LGPL, DDF is an interoperability platform that provides secure and scalable discovery and retrieval from a wide array of disparate sources.
DDF is:
-
a flexible and modular integration framework.
-
built to "unzip and run" even when scaled to large enterprise systems.
-
primarily focused on data integration, enabling clients to insert, query, and transform information from disparate data sources via the DDF Catalog.
1.2. Component Applications
DDF is comprised of several modular applications, to be installed or uninstalled as needed.
- Admin Application
-
Enhances administrative capabilities when installing and managing DDF. It contains various services and interfaces that allow administrators more control over their systems.
- Catalog Application
-
Provides a framework for storing, searching, processing, and transforming information. Clients typically perform local and/or federated query, create, read, update, and delete (QCRUD) operations against the Catalog. At the core of the Catalog functionality is the Catalog Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.
- Platform Application
-
The Core application of the distribution. The Platform application contains the fundamental building blocks to run the distribution.
- Security Application
-
Provides authentication, authorization, and auditing services for the DDF. It is both a framework that developers and integrators can extend and a reference implementation that meets security requirements.
- Solr Catalog Application
-
Includes the Solr Catalog Provider, an implementation of the Catalog Provider using Apache Solr as a data store.
- Spatial Application
- Search UI
-
Allows a user to search for records in the local Catalog (provider) and federated sources. Results of the search are returned and displayed on a globe or map, providing a visual representation of where the records were found.
2. Documentation Guide
The DDF documentation is organized by audience.
- Core Concepts
-
This introduction section is intended to give a high-level overview of the concepts and capabilities of DDF.
- Administrators
-
Managing | Administrators will be installing, maintaining, and supporting existing applications. Use this section to prepare, install, configure, run, and monitor DDF.
- Users
-
Using | Users interact with the system to search data stores. Use this section to navigate the various user interfaces available in DDF.
- Integrators
-
Integrating | Integrators will use the existing applications to support their external frameworks. This section will provide details for finding, accessing and using the components of DDF.
- Developers
-
Developing | Developers will build or extend the functionality of the applications.
2.1. Documentation Conventions
The following conventions are used within this documentation:
2.1.1. Customizable Values
Many values used in descriptions are customizable and should be changed for specific use cases.
These values are denoted by < >, and by [[ ]] when within XML syntax. When using a real value, the placeholder characters should be omitted.
2.1.2. Code Values
Java objects, lines of code, or file properties are denoted with the Monospace font style.
Example: ddf.catalog.CatalogFramework
2.1.3. Hyperlinks
Some hyperlinks (e.g., /admin) within the documentation assume a locally running installation of DDF.
Simply change the hostname if accessing a remote host.
Hyperlinks that take the user away from the DDF documentation are marked with an external link icon.
2.2. Support
Questions about DDF should be posted to the ddf-users forum or ddf-developers forum, where they will be responded to quickly by a member of the DDF team.
2.2.1. Documentation Updates
The most current DDF documentation is available at DDF Documentation .
3. Core Concepts
This introduction section is intended to give a high-level overview of the concepts and capabilities of DDF.
3.1. Introduction to Search
DDF provides the capability to search the Catalog for metadata. There are a number of different types of searches that can be performed on the Catalog, and these searches are accessed using one of several interfaces. This section provides a very high-level overview of introductory concepts of searching with DDF. These concepts are expanded upon in later sections.
There are four basic types of metadata search. Additionally, any of the types can be combined to create a compound search.
- Text Search
-
A text search is used when searching for textual information. It searches all textual fields by default, although it is possible to refine searches to a text search on a single metadata attribute. Text searches may use wildcards, logical operators, and approximate matches.
- Spatial Search
-
A spatial search is used for Area of Interest (AOI) searches. Polygon and point radius searches are supported.
- Temporal Search
-
A temporal search finds information from a specific time range. Two types of temporal searches are supported: relative and absolute. Relative searches contain an offset from the current time, while absolute searches contain a start and an end timestamp. Temporal searches can use the created or modified date attributes.
- Datatype Search
-
A datatype search is used to search for metadata based on the datatype of the resource. Wildcards (*) can be used in both the datatype and version fields. Metadata that matches any of the datatypes (and associated versions if specified) will be returned. If a version is not specified, then all metadata records for the specified datatype(s) regardless of version will be returned.
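As a concrete sketch of how these search types combine, the query below mixes a text term with an absolute temporal range through the OpenSearch endpoint described later in this documentation. The endpoint path and parameter names (q, dtstart, dtend) follow common OpenSearch conventions and should be verified against the Endpoints documentation for a specific DDF version:
# Hedged example of a compound text + temporal search over OpenSearch
curl -k "https://{FQDN}:{PORT}/services/catalog/query?q=fire&dtstart=2018-01-01T00:00:00Z&dtend=2018-12-31T23:59:59Z"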
3.2. Introduction to Metadata
In DDF, resources are the data products, files, reports, or documents of interest to users of the system.
Metadata is information about those resources, organized into a schema to make search possible. The Catalog stores this metadata and allows access to it. Metacards are single instances of metadata, representing a single resource, in the Catalog. Metacards follow one of several schemas to ensure reliable, accurate, and complete metadata. Essentially, Metacards function as containers of metadata.
3.3. Introduction to Ingest
Ingest is the process of bringing data products, metadata, or both into the catalog to enable search, sharing, and discovery. Ingested files are transformed into a neutral format that can be searched against as well as migrated to other formats and systems. See Ingesting Data for the various methods of ingesting data.
Upon ingest, a transformer will read the metadata from the ingested file and populate the fields of a metacard. Exactly how this is accomplished depends on the origin of the data, but most fields (except id) are imported directly.
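For example, a single GeoJSON record can be ingested over the catalog REST endpoint with a command along these lines; this is a hedged sketch (the /services/catalog path and the sample-metacard.json file name are illustrative placeholders, and the authoritative options are listed in Ingesting Data):
# Hypothetical ingest of one GeoJSON metacard over the REST endpoint
curl -k -X POST -H "Content-Type: application/json" -d @sample-metacard.json "https://{FQDN}:{PORT}/services/catalog"
On success, the endpoint responds with a reference to the newly created metacard.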
3.4. Introduction to Resources
The Catalog Framework can interface with storage providers to provide storage of resources to specific types of storage, e.g., file system, relational database, XML database. A default file system implementation is provided by default.
Storage providers act as a proxy between the Catalog Framework and the mechanism storing the content. Storage providers expose the storage mechanism to the Catalog Framework. Storage plugins provide pluggable functionality that can be executed either immediately before or immediately after content has been stored or updated.
Storage providers provide the capability to the Catalog Framework to create, read, update, and delete resources in the content repository.
See Data Management for more information on specific file types supported by DDF.
3.5. Introduction to the Catalog Framework
The Catalog Framework wires all the Catalog components together.
It is responsible for routing Catalog requests and responses to the appropriate source, destination, federated system, etc.
Endpoints send Catalog requests to the Catalog Framework. The Catalog Framework then invokes Catalog Plugins, Transformers, and Resource Components as needed before sending requests to the intended destination, such as one or more Sources.
The Catalog Framework decouples clients from service implementations and provides integration points for Catalog Plugins and convenience methods for Endpoint developers.
3.6. Introduction to Federation and Sources
Federation is the ability of the DDF to query other data sources, including other DDFs. By default, the DDF is able to federate using OpenSearch and CSW protocols. The minimum configuration necessary to set up those federations is a query address.
Federation enables constructing dynamic networks of data sources that can be queried individually or aggregated into specific configurations to enable a wider range of accessibility for data and data products.
Federation provides the capability to extend the DDF enterprise to include Remote Sources, which may include other instances of DDF. The Catalog handles all aspects of federated queries as they are sent to the Catalog Provider and Remote Sources, as they are processed, and as the query results are returned.
Queries can be scoped to include only the local Catalog Provider (and any Connected Sources), only specific Federated Sources, or the entire enterprise (which includes all local and Remote Sources). If the query is federated, the Catalog Framework passes the query to a Federation Strategy, which is responsible for querying each federated source that is specified. The Catalog Framework is also responsible for receiving the query results from each federated source and returning them to the client in the order specified by the particular federation strategy used.
After the federation strategy handles the results, the Catalog returns them to the client through the Endpoint. Query results are returned from a federated query as a list of metacards. The source ID in each metacard identifies the Source from which the metacard originated.
3.7. Introduction to Events and Subscriptions
DDF can be configured to receive notifications whenever metadata is created, updated, or deleted in any federated sources. Creations, updates, and deletions are collectively called Events, and the process of registering to receive them is called Subscription.
The behavior of these subscriptions is consistent, but the method of configuring them is specific to the Endpoint used.
3.8. Introduction to Registries
The Registry Application serves as an index of registry nodes and their information, including service bindings, configurations and supplemental details.
Each registry has the capability to serve as an index of information about a network of registries which, in turn, can be used to connect across a network of DDFs and other data sources. Registries communicate with each other through the CSW endpoint and each registry node is converted into a registry metacard to be stored in the catalog. When a registry is subscribed to or published from, it sends the details of one or more nodes to another registry.
- Identity Node
-
The Registry is initially comprised of a single registry node, referred to as the identity, which represents the registry’s primary configuration.
- Subscription
-
Subscribing to a registry is the act of retrieving its information, specifically its identity information and any other registries it knows about. By default, subscriptions are configured to check for updates every 30 seconds.
- Publication
-
Publishing is the act of sending a registry’s information to another registry. Once publication has occurred, any updates to the local registry will be pushed out to the registries that have been published to.
3.9. Introduction to Endpoints
Endpoints expose the Catalog Framework to clients using protocols and formats that the clients understand.
Endpoint interface formats encompass a variety of protocols, including (but not limited to):
-
SOAP Web services
-
RESTful services
-
JMS
-
JSON
-
OpenSearch
The endpoint may transform a client request into a compatible Catalog format and then transform the response into a compatible client format. Endpoints may use Transformers to perform these transformations. This allows an endpoint to interact with Source(s) that have different interfaces. For example, an OpenSearch Endpoint can send a query to the Catalog Framework, which could then query a federated source that has no OpenSearch interface.
Endpoints are meant to be the only client-accessible components in the Catalog.
3.10. Introduction to High Availability
DDF can be made highly available. In this context, High Availability is defined as the ability for DDF to be continuously operational with very little down time.
In a Highly Available Cluster, DDF has failover capabilities when a DDF node fails.
Note
|
The word "node", from a High Availability perspective, is one of the two DDF systems running within the Highly Available Cluster. Though there are multiple systems running with the Highly Available Cluster, it is still considered a single DDF from a user’s perspective or from other DDFs' perspectives. |
This setup consists of a Solr Cloud instance, 2 DDF nodes connected to that Solr Cloud, and a failover proxy that sits in front of those 2 nodes. One of the DDF nodes will be arbitrarily chosen to be the active node, and the other will be the "hot standby" node. It is called a "hot standby" node because it is ready to receive traffic even though it’s not currently receiving any. The failover proxy will route all traffic to the active node. If the active node fails for any reason, the standby node will become active and the failover proxy will route all traffic to the new active node.
There are special procedures for initial setup and configuration of a highly available DDF. See High Availability Initial Setup and High Availability Configuration for those procedures.
3.10.1. High Availability Supported Capabilities
Only these capabilities are supported in a Highly Available Cluster.
For a detailed list of features, look at the ha.json file located in <DDF_HOME>/etc/profiles/.
- User Interfaces:
  - Simple
  - Intrigue
- Catalog:
  - Validation
  - Plug-ins: Expiration Date, JPEG2000, Metacard Validation, Schematron, Versioning
  - Transformers
  - Content File System Storage Provider
- Platform:
  - Actions
  - Configuration
  - Notifications
  - Persistence
  - Security: Audit, Encryption
- Solr
- Security
- Third Party:
  - CXF
  - Camel
- Endpoints:
  - REST Endpoint
  - CSW Endpoint
  - OpenSearch Endpoint
3.11. Standards Supported by DDF
DDF incorporates support for many common Service, Metadata, and Security standards, as well as many common Data Formats.
3.11.1. Catalog Service Standards
Service standards are implemented within Endpoints and/or Sources. Standards marked Experimental are functional and have been tested, but are subject to change or removal during the incubation period.
Standard (public standards linked where available) | Endpoints | Sources | Status
---|---|---|---
Open Geospatial Consortium Catalog Service for the Web (OGC CSW) 2.0.1/2.0.2 | | Geographic MetaData extensible markup language (GMD) CSW Source | Supported
OGC WPS 2.0 Web Processing Service | | | Experimental
Atlassian Confluence® | | | Supported
3.11.2. Data Formats
DDF has extended capabilities to extract rich metadata from many common data formats if those attributes are populated in the source document. See appendix for a complete list of file formats that can be ingested with limited metadata coverage. Metadata standards use XML or JSON, or both.
Format | File Extensions | Additional Metadata Attributes Available (if populated)
---|---|---
Word Document | |
PowerPoint | |
Excel | |
GeoPDF | |
geojson | |
html | |
jpeg | | Standard attributes and additional Media attributes
mp2 | | Standard attributes and additional Media attributes
mp4 | | Standard attributes, additional Media attributes, and mp4 additional attribute
WMV | |
AVIs | |
3.11.3. Map Formats
Intrigue includes capabilities to support custom map layer providers as well as support for several popular map layer providers.
Some provider types are currently only supported by the 2D OpenLayers map and some only by the 3D Cesium map.
Format | 2D Documentation | 3D Documentation
---|---|---
Open Street Map | |
Web Map Service | |
Web Map Tile Service | |
ArcGIS Map Server | |
Single Tile | |
Bing Maps | |
Tile Map Service | |
Google Earth | |
3.11.4. Security Standards
DDF makes use of these security standards to protect the system and interactions with it.
Standard | Support Status
---|---
HyperText Transport Protocol (HTTP) / HyperText Transport Protocol Secure (HTTPS) | Supported
File Transfer Protocol (FTP) / File Transfer Protocol Secure (FTPS) | Supported
4. Quick Start Tutorial
This quick tutorial covers installing, configuring, and using a basic instance of DDF.
Note
|
This tutorial is intended for setting up a test, demonstration, or trial installation of DDF. For complete installation and configuration steps, see Installing. |
These steps will demonstrate: Installing, Certificates, Configuring, and Ingesting.
4.1. Installing (Quick Start)
These are the basic requirements to set up the environment to run a DDF.
Warning
|
For security reasons, DDF cannot be started from a user’s home directory. If attempted, the system will automatically shut down. |
4.1.1. Quick Install Prerequisites
-
At least 4096MB of memory for DDF.
-
This amount can be increased to support memory-intensive applications. See Memory Considerations.
-
Set up Java to run DDF.
-
For a runtime system:
-
Install Oracle JRE 8 x64 or OpenJDK 8 JRE
-
-
For a development system:
-
Install/Upgrade to Java 8 x64 J2SE 8 SDK
-
The recommended version is 8u60 or later.
-
Java Version and Build numbers must contain only number values.
-
-
-
Microsoft Windows and Linux are supported. For more information about supported versions, see Installation Prerequisites.
-
JRE 8 x64 or OpenJDK 8 JRE must be installed.
-
If the JRE was installed, the
JRE_HOME
environment variable must be set to the location where the JRE is installed. -
If the JDK was installed, the
JAVA_HOME
environment variable must be set to the location where the JDK is installed.
(Replace <JAVA_VERSION> with the version and build number installed.)
-
Determine Java Installation Directory (This varies between operating system versions).
Find Java Path in Windows: for %i in (java.exe) do @echo. %~$PATH:i
Find Java Path in *NIX: which java
-
Copy path to Java installation. (example:
/usr/java/<JAVA_VERSION>
) -
Set
JAVA_HOME
orJRE_HOME
by replacing <PATH_TO_JAVA> with the copied path in this command:
If JDK was installed:
Setting JAVA_HOME on Windows: set JAVA_HOME=<PATH_TO_JAVA><JAVA_VERSION>
Adding JAVA_HOME to PATH Environment Variable on Windows: setx PATH "%PATH%;%JAVA_HOME%\bin"
Setting JAVA_HOME on *nix: export JAVA_HOME=<PATH_TO_JAVA><JAVA_VERSION>
Adding JAVA_HOME to PATH Environment Variable on *nix: export PATH=$JAVA_HOME/bin:$PATH
If JRE was installed:
Setting JRE_HOME on Windows: set JRE_HOME=<PATH_TO_JAVA><JAVA_VERSION>
Adding JRE_HOME to PATH Environment Variable on Windows: setx PATH "%PATH%;%JRE_HOME%\bin"
Setting JRE_HOME on *nix: export JRE_HOME=<PATH_TO_JAVA><JAVA_VERSION>
Adding JRE_HOME to PATH Environment Variable on *nix: export PATH=$JRE_HOME/bin:$PATH
Warning
|
*nix Unlinking JAVA_HOME if Previously Set
Unlink JAVA_HOME if it is already linked to a prior version of the JRE: unlink JAVA_HOME |
Tip
|
Verify that the JAVA_HOME was set correctly.
Windows: echo %JAVA_HOME%
*nix: echo $JAVA_HOME |
Note
|
File Descriptor Limit on Linux
For Linux systems, increase the file descriptor limit by adding the following to /etc/sysctl.conf: fs.file-max = 6815744
For the change to take effect, restart the system: init 6 |
Warning
|
Check System Time
Prior to installing DDF, ensure the system time is accurate to prevent federation issues. |
4.1.2. Quick Install of DDF
-
Download the DDF zip file .
-
Install DDF by unzipping the zip file.
WarningWindows Zip Utility WarningThe Windows Zip implementation, which is invoked when a user double-clicks on a zip file in the Windows Explorer, creates a corrupted installation. This is a consequence of its inability to process long file paths. Instead, use the java jar command line utility to unzip the distribution (see example below) or use a third party utility such as 7-Zip.
Note: If and only if a JDK is installed, the jar command may be used; otherwise, another archiving utility that does not have issues with long paths should be installed.
Use Java to Unzip in Windows (Replace <PATH_TO_JAVA> with the correct path and <JAVA_VERSION> with the current version.)
"<PATH_TO_JAVA>\jdk<JAVA_VERSION>\bin\jar.exe" xf ddf-2.15.0.zip
-
This will create an installation directory, which is typically created with the name and version of the application. This installation directory will be referred to as
<DDF_HOME>
. (Substitute the actual directory name.) -
Start DDF by running the
<DDF_HOME>/bin/ddf
script (orddf.bat
on Windows). -
Startup may take a few minutes.
-
Optionally, a
system:wait-for-ready
command (aliased towfr
) can be used to wait for startup to complete.
-
-
The Command Console will display.
ddf@local>
4.1.3. Quick Install of DDF on a remote headless server
If DDF is being installed on a remote server that has no user interface, the hostname will need to be updated in the configuration files and certificates.
Note
|
Do not replace all instances of localhost; only update the entries specified in these steps. |
-
Update the <DDF_HOME>/etc/custom.system.properties file. The entry
org.codice.ddf.system.hostname=localhost
should be updated toorg.codice.ddf.system.hostname=<HOSTNAME>
. -
Update the <DDF_HOME>/etc/users.properties file. Change the
localhost=localhost[…]
entry to <HOSTNAME>=<HOSTNAME>. (Keep the rest of the line as is.) -
Update the <DDF_HOME>/etc/users.attributes file. Change the "localhost" entry to "<HOSTNAME>".
-
From the console go to <DDF_HOME>/etc/certs and run the appropriate script.
-
*NIX:
sh CertNew.sh -cn <hostname> -san "DNS:<hostname>"
. -
Windows:
CertNew -cn <hostname> -san "DNS:<hostname>"
.
-
-
Proceed with starting the system and continue as usual.
If the server will be reached by IP address rather than hostname, make the analogous changes using the IP:
-
Update the <DDF_HOME>/etc/custom.system.properties file. The entry
org.codice.ddf.system.hostname=localhost
should be updated toorg.codice.ddf.system.hostname=<IP>
. -
Update the <DDF_HOME>/etc/users.properties file. Change the
localhost=localhost[…]
entry to <IP>=<IP>. (Keep the rest of the line as is.) -
Update the <DDF_HOME>/etc/users.attributes file. Change the "localhost" entry to "<IP>".
-
From the console go to <DDF_HOME>/etc/certs and run the appropriate script.
-
*NIX:
sh CertNew.sh -cn <IP> -san "IP:<IP>"
. -
Windows:
CertNew -cn <IP> -san "IP:<IP>"
.
-
-
Proceed with starting the system and continue as usual.
4.2. Certificates (Quick Start)
DDF comes with a default keystore that contains certificates. This allows the distribution to be unzipped and run immediately. If these certificates are sufficient for testing purposes, proceed to Configuring (Quick Start).
To test federation using 2-way TLS, the default keystore certificates will need to be replaced, using either the included Demo Certificate Authority or by Creating Self-signed Certificates.
If the installer was used to install the DDF and a hostname other than "localhost" was given, the user will be prompted to upload new trust/key stores.
If the hostname is localhost, or if the hostname was changed after installation, the default certificates will not allow access to the DDF instance from another machine over HTTPS (now the default for many services).
The Demo Certificate Authority will need to be replaced with certificates that use the fully-qualified hostname of the server running the DDF instance.
4.2.1. Demo Certificate Authority (CA)
DDF comes with a populated truststore containing entries for many public certificate authorities, such as Go Daddy and Verisign. It also includes an entry for the DDF Demo Root CA. This entry is a self-signed certificate used for testing. It enables DDF to run immediately after unzipping the distribution. The keys and certificates for the DDF Demo Root CA are included as part of the DDF distribution. This entry must be removed from the truststore before DDF can operate securely.
4.2.1.1. Creating New Server Keystore Entry with the CertNew Scripts
To create a private key and certificate signed by the Demo Certificate Authority, use the provided scripts.
To use the scripts, run them out of the <DDF_HOME>/etc/certs
directory.
The CertNew scripts:
-
Create a new entry in the server keystore.
-
Use the hostname as the fully qualified domain name (FQDN) when creating the certificate.
-
Add the specified subject alternative names, if any.
-
Use the Demo Certificate Authority to sign the certificate so that it will be trusted by the default configuration.
To install a certificate signed by a different Certificate Authority, see Managing Keystores.
After this proceed to Updating Settings After Changing Certificates.
Warning
|
If the server’s fully qualified domain name is not recognized, the name may need to be added to the network’s DNS server. |
4.2.1.2. Dealing with Lack of DNS
In some cases DNS may not be available and the system will need to be configured to work with IP addresses.
Options can be given to the CertNew Scripts to generate certs that will work in this scenario.
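For example, mirroring the headless-server steps earlier in this guide, a certificate keyed to an IP address can be generated with:
*NIX: sh CertNew.sh -cn <IP> -san "IP:<IP>"
Windows: CertNew -cn <IP> -san "IP:<IP>"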
After this proceed to Updating Settings After Changing Certificates, and be sure to use the IP address instead of the FQDN.
4.2.2. Creating Self-Signed Certificates
If using the Demo CA is not desired, DDF supports creating self-signed certificates with a self-signed certificate authority. This is considered an advanced configuration.
Creating self-signed certificates involves creating and configuring the files that contain the certificates.
In DDF, these files are generally Java Keystores (jks
) and Certificate Revocation Lists (crl
).
This section includes commands and tools that can be used to perform these operations.
For this example, openssl is used to generate, sign, and convert keys and certificates.
4.2.2.1. Creating a custom CA Key and Certificate
The following steps demonstrate creating a root CA to sign certificates.
-
Create a key pair.
$> openssl genrsa -aes128 -out root-ca.key 1024
-
Use the key to sign the CA certificate.
$> openssl req -new -x509 -days 3650 -key root-ca.key -out root-ca.crt
4.2.2.2. Sign Certificates Using the custom CA
The following steps demonstrate signing a certificate for the tokenissuer
user by a CA.
-
Generate a private key and a Certificate Signing Request (CSR).
$> openssl req -newkey rsa:1024 -keyout tokenissuer.key -out tokenissuer.req
-
Sign the certificate by the CA.
$> openssl ca -out tokenissuer.crt -infiles tokenissuer.req
These certificates will be used during system configuration to replace the default certificates.
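Before installing the new certificate, it can be checked against the custom CA with a standard openssl verification (this is a generic openssl check, not a DDF-specific step):
$> openssl verify -CAfile root-ca.crt tokenissuer.crt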
4.2.3. Updating Settings After Changing Certificates
After changing the certificates it will be necessary to update the system user and the org.codice.ddf.system.hostname
property with the value of either the FQDN or the IP.
FQDNs should be used wherever possible. In the absence of DNS, however, IP addresses can be used.
Replace localhost
with the FQDN or the IP in <DDF_HOME>/etc/users.properties
, <DDF_HOME>/etc/users.attributes
, and <DDF_HOME>/etc/custom.system.properties
.
Tip
|
On Linux this can be accomplished with a single command, shown in the sketch below. |
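A hedged sketch of that command, using the file paths listed above (GNU sed syntax; adjust the in-place flag for other sed variants):
sed -i 's/localhost/<FQDN>/g' <DDF_HOME>/etc/users.properties <DDF_HOME>/etc/users.attributes <DDF_HOME>/etc/custom.system.properties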
Finally, restart the DDF instance. Navigate to the Admin Console to test changes.
4.3. Configuring (Quick Start)
Set the configurations needed to run DDF.
-
In a browser, navigate to the Admin Console at https://{FQDN}:{PORT}/admin.
-
The Admin Console may take a few minutes to start up.
-
-
Enter the default username of
admin
and the password ofadmin
. -
Follow the installer prompts for a standard installation.
-
Click Start to begin the setup process.
-
Configure guest claims attributes or use defaults.
-
See Configuring Guest Access for more information about the Guest user.
-
All users will be automatically granted these permissions.
-
Guest users will not be able to ingest data with more restrictive markings than the guest claims.
-
Any data ingested that has more restrictive markings than these guest claims will not be visible to Guest users.
-
-
Select Standard Installation.
-
This step may take several minutes to complete.
-
-
On the System Configuration page, configure any port or protocol changes desired and add any keystores/truststores needed.
-
See Certificates (Quick Start) for more details.
-
-
Click Next
-
Click Finish
4.4. Ingesting (Quick Start)
Now that DDF has been configured, ingest some sample data to demonstrate search capabilities.
This is one way to ingest into the catalog; for a complete list of the different methods, see Ingesting Data.
4.4.1. Ingesting Sample Data
-
Download a sample valid GeoJson file here .
-
Navigate in the browser to Intrigue at https://{FQDN}:{PORT}/search/catalog.
-
Select the Menu icon in the upper left corner.
-
Select Upload.
-
Drag and drop the sample file or click to navigate to it.
-
Select Start to begin upload.
Note
|
XML metadata for text searching is not automatically generated from GeoJson fields. |
Querying from Intrigue (https://{FQDN}:{PORT}/search/catalog) will return the record for the file ingested:
-
Select the Menu icon and return to Workspaces.
-
Search for the ingested data.
Note
|
The sample data was selected as an example of well-formed metadata. Other data can and should be used to test other usage scenarios. |
Managing
Administrators will be installing, maintaining, and supporting existing applications. Use this section to prepare, install, configure, run, and monitor a DDF.
5. Securing
Security is an important consideration for DDF, so it is imperative to update configurations away from the defaults to unique, secure settings.
Important
|
Securing DDF Components
DDF is enabled with an Insecure Defaults Service which will warn users/admins if the system is configured with insecure defaults. A banner is displayed on the admin console notifying "The system is insecure because default configuration values are in use." A detailed view is available of the properties to update. |
Security concerns will be highlighted in the configuration sections to follow.
5.1. Security Hardening
Note
|
The security precautions are best performed as configuration is taking place, so hardening steps are integrated into configuration steps. |
This is to avoid setting an insecure configuration and having to revisit during hardening. Most configurations have a security component to them, and important considerations for hardening are labeled as such during configuration as well as provided in a checklist format.
Some of the items on the checklist are performed during installation and others during configuration. Steps required for hardening are marked as Required for Hardening and are collected here for convenience. Refer to the checklist during system setup.
5.2. Auditing
-
Required Step for Security Hardening
Audit logging captures security-specific system events for monitoring and review.
DDF provides an Audit Plugin that logs all catalog transactions to the security.log.
Information captured includes user identity, query information, and resources retrieved.
Follow all operational requirements for the retention of the log files. This may include using cryptographic mechanisms, such as encrypted file volumes or databases, to protect the integrity of audit information.
Note
|
The Audit Log default location is <DDF_HOME>/data/log/security.log |
Note
|
Audit Logging Best Practices
For the most reliable audit trail, it is recommended to configure the operational environment of the DDF to generate alerts to notify administrators of:
|
Warning
|
The security audit logging function does not have any configuration for audit reduction or report generation. The logs themselves could be used to generate such reports outside the scope of DDF. |
5.2.1. Enabling Fallback Audit Logging
-
Required Step for Security Hardening
In the event the system is unable to write to the security.log
file, DDF must be configured to fall back to report the error in the application log:
-
edit
<DDF_HOME>/etc/org.ops4j.pax.logging.cfg
-
uncomment the line (remove the
#
from the beginning of the line) forlog4j2
(org.ops4j.pax.logging.log4j2.config.file = ${karaf.etc}/log4j2.xml
) -
delete all subsequent lines
-
If you want to change the location of your system's security backup log from the default location: <DDF_HOME>/data/log/securityBackup.log
, follow the next two steps:
-
edit
<DDF_HOME>/security/configurations.policy
-
find "Security-Hardening: Backup Log File Permissions"
-
below
grant codeBase "file:/pax-logging-log4j2"
add the path to the directory containing the new log file you will create in the next step.
-
-
edit
<DDF_HOME>/etc/log4j2.xml
-
find the entry for the
securityBackup
appender. (see example) -
change value of
filename
and prefix offilePattern
to the name/path of the desired failover security logs
-
securityBackup Appender Before
<RollingFile name="securityBackup" append="true" ignoreExceptions="false"
             fileName="${sys:karaf.log}/securityBackup.log"
             filePattern="${sys:karaf.log}/securityBackup.log-%d{yyyy-MM-dd-HH}-%i.log.gz">
securityBackup Appender After
<RollingFile name="securityBackup" append="true" ignoreExceptions="false"
             fileName="<NEW_LOG_FILE>"
             filePattern="<NEW_LOG_FILE>-%d{yyyy-MM-dd-HH}-%i.log.gz">
Warning
|
If the system is unable to write to the security.log file, the fallback log configured above becomes the only record of audit events; monitor it accordingly. |
6. Installing
Set up a complete, secure instance of DDF. For simplified steps used for a testing, development, or demonstration installation, see the DDF Quick Start.
Important
|
Although DDF can be installed by any user, it is recommended for security reasons to have a non-root user execute the DDF installation. |
Note
|
Hardening guidance assumes a Standard installation. Adding other components does not have any security/hardening implications. |
6.1. Installation Prerequisites
Warning
|
For security reasons, DDF cannot be started from a user’s home directory. If attempted, the system will automatically shut down. |
These are the system/environment requirements to configure prior to an installation.
Note
|
The DDF process, or the user under which the DDF process runs, must have permission to create and write files in the directories where the Solr cores are installed. If this permission is missing, DDF will not be able to create new Solr cores and the system will not function correctly. |
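As an illustrative sketch on *NIX, ownership of the Solr directories can be granted to the user that runs DDF (DDF_USER, DDF_GROUP, and the solr directory location follow the install steps elsewhere in this guide; verify the actual core locations for your configuration):
chown -R DDF_USER:DDF_GROUP <DDF_HOME>/solr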
6.1.1. Hardware Requirements
Minimum and Recommended Requirements for DDF Systems
Criteria | Minimum | Recommended
---|---|---
CPU | Dual Core 1.6 GHz | Quad Core 2.6 GHz
RAM | 8 GB* | 32 GB
Disk Space | 40 GB | 80 GB
Video Card | — | WebGL capable GPU
Additional Software | JRE 8 x64 | JDK 8 x64
*The amount of RAM can be increased to support memory-intensive applications. See Memory Considerations.
DDF has been tested on the following operating systems and with the following browsers. Other operating systems or browsers may be used but have not been officially tested.
Operating Systems | Browsers
---|---
Windows Server 2012 R2 | Internet Explorer 11
6.1.2. Java Requirements
For a runtime system:
-
JRE 8 x64 or OpenJDK 8 JRE must be installed.
-
The
JRE_HOME
environment variable must be set to the location where the JRE is installed
For a development system:
-
JDK8 must be installed.
-
The
JAVA_HOME
environment variable must be set to the location where the JDK is installed.-
Install/Upgrade to Java 8 x64 J2SE 8 SDK
-
The recommended version is 8u60 or later.
-
Java version must contain only number values.
-
-
Install/Upgrade to JDK8 .
-
Set the
JAVA_HOME
environment variable to the location where the JDK is installed.
-
Warning
|
*NIX Unlinking JAVA_HOME if Previously Set
Unlink
|
If JDK was installed:
Replace <JAVA_VERSION>
with the version and build number installed.
-
Open a terminal window (*NIX) or command prompt (Windows) with administrator privileges.
-
Determine Java Installation Directory (This varies between operating system versions).
Find Java Path in *NIX: which java
Find Java Path in Windows: The path to the JDK can vary between versions of Windows, so manually verify the path under:
C:\Program Files\Java\jdk<M.m.p_build>
-
Copy path of Java installation to clipboard. (example: /usr/java/<JAVA_VERSION>)
Set JAVA_HOME by replacing <PATH_TO_JAVA> with the copied path in this command:
Setting JAVA_HOME on *NIX:
JAVA_HOME=<PATH_TO_JAVA><JAVA_VERSION>
export JAVA_HOME
Setting JAVA_HOME on Windows:
set JAVA_HOME=<PATH_TO_JAVA><JAVA_VERSION>
setx JAVA_HOME "<PATH_TO_JAVA><JAVA_VERSION>"
Adding JAVA_HOME to PATH Environment Variable on Windows:
setx PATH "%PATH%;%JAVA_HOME%\bin"
-
Restart Terminal (shell) or Command Prompt.
-
Verify that the
JAVA_HOME
was set correctly.
-
*NIX: echo $JAVA_HOME
Windows: echo %JAVA_HOME%
If JRE was installed:
Replace <JAVA_VERSION>
with the version and build number installed.
-
Open a terminal window (*NIX) or command prompt (Windows) with administrator privileges.
-
Determine Java Installation Directory (This varies between operating system versions).
Find Java Path in *NIX: which java
Find Java Path in Windows: The path to the JRE can vary between versions of Windows, so manually verify the path under:
C:\Program Files\Java\jre<M.m.p_build>
-
Copy path of Java installation to clipboard. (example: /usr/java/<JAVA_VERSION>)
Set JRE_HOME by replacing <PATH_TO_JAVA> with the copied path in this command:
Setting JRE_HOME on *NIX:
JRE_HOME=<PATH_TO_JAVA><JAVA_VERSION>
export JRE_HOME
Setting JRE_HOME on Windows:
set JRE_HOME=<PATH_TO_JAVA><JAVA_VERSION>
setx JRE_HOME "<PATH_TO_JAVA><JAVA_VERSION>"
Adding JRE_HOME to PATH Environment Variable on Windows:
setx PATH "%PATH%;%JRE_HOME%\bin"
-
Restart Terminal (shell) or Command Prompt.
-
Verify that the JRE_HOME was set correctly.
*NIX: echo $JRE_HOME
Windows: echo %JRE_HOME%
Note
|
File Descriptor Limit on Linux
For Linux systems, increase the file descriptor limit by adding the following to /etc/sysctl.conf: fs.file-max = 6815744
*Nix Restart Command: init 6 |
6.2. Installing With the DDF Distribution Zip
Warning
|
Check System Time
Prior to installing DDF, ensure the system time is accurate to prevent federation issues. |
To install the DDF distribution zip, perform the following:
-
Download the DDF zip file .
-
After the prerequisites have been met, change the current directory to the desired install directory, creating a new directory if desired. This will be referred to as
<DDF_HOME>
.
Warning: Windows Pathname Warning
Do not use spaces in directory or file names of the <DDF_HOME> path. For example, do not install in the default Program Files directory.
Example: Create a Directory (Windows and *NIX)
mkdir new_installation
-
Use a Non-root User on *NIX. (Windows users skip this step)
It is recommended that the root user create a new install directory that can be owned by a non-root user (e.g., DDF_USER). This can be a new or existing user. This DDF_USER can now be used for the remaining installation instructions.
Example: Add New Group on *NIXgroupadd DDF_GROUP
Example: Switch User on *NIXchown DDF_USER:DDF_GROUP new_installation su - DDF_USER
-
-
Change the current directory to the location of the zip file (ddf-2.15.0.zip).
*NIX (Example assumes DDF has been downloaded to a CD/DVD)cd /home/user/cdrom
Windows (Example assumes DDF has been downloaded to the D drive)cd D:\
-
Copy ddf-2.15.0.zip to <DDF_HOME>.
*NIXcp ddf-2.15.0.zip <DDF_HOME>
Windowscopy ddf-2.15.0.zip <DDF_HOME>
-
Change the current directory to the desired install location.
*NIX or Windowscd <DDF_HOME>
-
The DDF zip is now located within the
<DDF_HOME>
. Unzip ddf-2.15.0.zip.*NIXunzip ddf-2.15.0.zip
WarningWindows Zip Utility WarningThe Windows Zip implementation, which is invoked when a user double-clicks on a zip file in the Windows Explorer, creates a corrupted installation. This is a consequence of its inability to process long file paths. Instead, use the java jar command line utility to unzip the distribution (see example below) or use a third party utility such as 7-Zip.
Use Java to Unzip in Windows(Replace<PATH_TO_JAVA>
with correct pathand <JAVA_VERSION>
with current version.)"<PATH_TO_JAVA>\jdk<JAVA_VERSION>\bin\jar.exe" xf ddf-2.15.0.zip
The unzipping process may take time to complete. The command prompt will stop responding to input during this time.
6.2.1. Configuring Operating Permissions and Allocations
Restrict access to sensitive files by ensuring that the only users with access privileges are administrators.
Within the <DDF_HOME>
, a directory is created named ddf-2.15.0.
This directory will be referred to in the documentation as <DDF_HOME>
.
-
Do not assume the deployment is from a trusted source; verify its origination.
-
Check the available storage space on the system to ensure the deployment will not exceed the available space.
-
Set maximum storage space on the <DDF_HOME>/deploy and <DDF_HOME>/system directories to restrict the amount of space used by deployments.
6.2.1.1. Setting Directory Permissions
-
Required Step for Security Hardening
DDF relies on the Directory Permissions of the host platform to protect the integrity of the DDF during operation. System administrators MUST perform the following steps prior to deploying bundles added to the DDF.
Important
|
The system administrator must restrict certain directories to ensure that the application (user) cannot access restricted directories on the system.
For example, the user the DDF runs as should not be able to access restricted operating system directories. |
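As a sketch of one common *NIX approach (DDF_USER and DDF_GROUP follow the earlier install steps; exact modes are a local policy decision, not a DDF requirement), ownership and permissions can be restricted so only the application user and administrators can read or modify the installation:
chown -R DDF_USER:DDF_GROUP <DDF_HOME>
chmod -R u+rwX,g+rX,o-rwx <DDF_HOME>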
6.2.1.2. Configuring Memory Allocation for the DDF Java Virtual Machine
The amount of memory allocated to the Java Virtual Machine hosting DDF by the operating system can be increased by updating the setenv script:
*NIX: <DDF_HOME>/bin/setenv: update the JAVA_OPTS -Xmx value; <DDF_HOME>/bin/setenv-wrapper.conf: update the wrapper.java.additional -Xmx value
Windows: <DDF_HOME>/bin/setenv.bat: update the JAVA_OPTS -Xmx value; <DDF_HOME>/bin/setenv-windows-wrapper.conf: update the wrapper.java.additional -Xmx value
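For illustration only (the surrounding flags in setenv vary by version, so treat this as a sketch), raising the maximum heap from 4 GB to 8 GB means changing only the -Xmx token:
# In <DDF_HOME>/bin/setenv, within JAVA_OPTS (illustrative values):
#   before: -Xmx4g
#   after:  -Xmx8g
The matching wrapper.java.additional entry in the wrapper configuration file should be updated to the same value.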
6.2.1.3. Enabling JMX
By default, DDF prevents connections to JMX because the system is more secure when JMX is not enabled. However, many monitoring tools require a JMX connection to the Java Virtual Machine. To enable JMX, update the setenv script:
*NIX: <DDF_HOME>/bin/setenv: remove -XX:+DisableAttachMechanism from JAVA_OPTS; <DDF_HOME>/bin/setenv-wrapper.conf: comment out the -XX:+DisableAttachMechanism line and re-number the remaining lines appropriately
Windows: <DDF_HOME>/bin/setenv.bat: remove -XX:+DisableAttachMechanism from JAVA_OPTS; <DDF_HOME>/bin/setenv-windows-wrapper.conf: comment out the -XX:+DisableAttachMechanism line and re-number the remaining lines appropriately
6.2.1.4. Configuring Memory for the Solr Server
Note
|
This section applies only to configurations that manage the lifecycle of the Solr server. It does not apply to Solr Cloud configurations. |
The Solr server consumes a large amount of memory when it ingests documents. If the Solr server runs out of memory, it terminates its process. To allocate more memory to the Solr server, increase the value of the solr.mem property.
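For example, the property can be set in <DDF_HOME>/etc/custom.system.properties (the property name comes from this section; the 4g value and the file location are typical assumptions to verify for a given install):
solr.mem=4g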
6.2.2. Managing Keystores and Certificates
-
Required Step for Security Hardening
DDF uses certificates in two ways:
-
Ensuring the privacy and integrity of messages sent or received over a network.
-
Authenticating an incoming user request.
To ensure proper configuration of keystore, truststore, and certificates, follow the options below according to situation.
Jump to the steps referenced in the diagram:
6.2.2.1. Managing Keystores
Certificates, and sometimes their associated private keys, are stored in keystore files. DDF includes two default keystore files, the server key store and the server trust store. The server keystore holds the certificates and private keys that DDF uses to identify itself to other nodes on the network. The truststore holds the certificates of nodes or other entities that DDF needs to trust.
6.2.2.1.1. Adding an Existing Server Keystore
If provided an existing keystore for use with DDF, follow these steps to replace the default keystore.
-
Remove the default keystore at
etc/keystores/serverKeystore.jks
. -
Add the desired keystore file to the
etc/keystores
directory. -
Edit
custom.system.properties
file to set filenames and passwords.-
If using a type of keystore other than
jks
(such aspkcs12
), change thejavax.net.ssl.keyStoreType
property as well.
-
-
If the truststore has the correct certificates, restart server to complete configuration.
-
If provided with an existing server truststore, continue to Adding an Existing Server Truststore.
-
Otherwise, create a server truststore.
-
6.2.2.1.2. Adding an Existing Server Truststore
-
Remove the default truststore at
etc/keystores/serverTruststore.jks
. -
Add the desired truststore file to the
etc/keystores
directory. -
Edit
custom.system.properties
file to set filenames and passwords.-
If using a type of truststore other than
jks
(such aspkcs12
), change thejavax.net.ssl.trustStoreType
property as well.
-
If the provided server keystore does not include the CA certificate that was used to sign the server’s certificate, add the CA certificate into the serverKeystore
file.
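This can be done with keytool, mirroring the import commands shown in the next section:
keytool -importcert -file ca.crt -keystore serverKeystore.jks -alias "ca"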
6.2.2.1.3. Creating a New Keystore/Truststore with an Existing Certificate and Private Key
If provided an existing certificate, create a new keystore and truststore with it.
Note
|
DDF requires that the keystore contains both the private key and the CA. |
-
Using the private key, certificate, and CA certificate, create a new keystore containing the data from the new files.
cat client.crt >> client.key
openssl pkcs12 -export -in client.key -out client.p12
keytool -importkeystore -srckeystore client.p12 -destkeystore serverKeystore.jks -srcstoretype pkcs12 -alias 1
keytool -changealias -alias 1 -destalias client -keystore serverKeystore.jks
keytool -importcert -file ca.crt -keystore serverKeystore.jks -alias "ca"
keytool -importcert -file ca-root.crt -keystore serverKeystore.jks -alias "ca-root"
-
Create the truststore using only the CA certificate. Based on the concept of CA signing, the CA should be the only entry needed in the truststore.
keytool -import -trustcacerts -alias "ca" -file ca.crt -keystore truststore.jks
keytool -import -trustcacerts -alias "ca-root" -file ca-root.crt -keystore truststore.jks
-
Create a PEM file using the certificate, as some applications require that format.
openssl x509 -in client.crt -out client.der -outform DER
openssl x509 -in client.der -inform DER -out client.pem -outform PEM
Important
|
The localhost certificate must be removed if using a system certificate. |
6.2.2.1.4. Updating Key Store / Trust Store via the Admin Console
Certificates (and certificates with keys) can be managed in the Admin Console.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Certificates tab.
-
Add and remove certificates and private keys as necessary.
-
Restart DDF.
Important
|
The default trust store and key store files for DDF included in the distribution contain demo certificates intended for testing only; they must be replaced before DDF can operate securely. |
This view shows the alias (name) of every certificate in the trust store and the key store. It also displays if the entry includes a private key ("Is Key") and the encryption scheme (typically "RSA" or "EC").
This view allows administrators to remove certificates from DDF’s key and trust stores. It also allows administrators to import certificates and private keys into the keystores with the "+" button. The import function has two options: import from a file or import over HTTPS. The file option accepts a Java Keystore file or a PKCS12 keystore file. Because keystores can hold many keys, the import dialog asks the administrator to provide the alias of the key to import. Private keys are typically encrypted and the import dialog prompts the administrator to enter the password for the private key. Additionally, keystore files themselves are typically encrypted and the dialog asks for the keystore ("Store") password.
The name and location of the DDF trust and key stores can be changed by editing the system properties files, etc/custom.system.properties
.
Additionally, the password that DDF uses to decrypt (unlock) the key and trust stores can be changed here.
Important
|
DDF assumes that password used to unlock the keystore is the same password that unlocks private keys in the keystore. |
The location, file name, passwords and type of the server and trust key stores can be set in the custom.system.properties
file:
-
Setting the Keystore and Truststore Java Properties
javax.net.ssl.keyStore=etc/keystores/serverKeystore.jks
javax.net.ssl.keyStorePassword=changeit
javax.net.ssl.trustStore=etc/keystores/serverTruststore.jks
javax.net.ssl.trustStorePassword=changeit
javax.net.ssl.keyStoreType=jks
javax.net.ssl.trustStoreType=jks
Note
|
If the server’s fully qualified domain name is not recognized, the name may need to be added to the network’s DNS server. |
Tip
|
The DDF instance can be tested even if there is no entry for the FQDN in the DNS. First, test if the FQDN is already recognized. Execute this command: ping <FQDN>
If the command responds with an error message such as unknown host, then modify the system’s hosts file to map the server’s FQDN to the local address.
|
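For example, a hosts entry mapping the FQDN to the loopback address (the hosts file is /etc/hosts on *NIX and C:\Windows\System32\drivers\etc\hosts on Windows; <FQDN> is a placeholder):
127.0.0.1 <FQDN>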
Note
|
Changing Default Passwords
This step is not required for a hardened system.
|
6.3. Initial Startup
Run the DDF using the appropriate script.
<DDF_HOME>/bin/ddf
<DDF_HOME>/bin/ddf.bat
The distribution takes a few moments to load depending on the hardware configuration.
Tip
|
To run DDF as a service, see Starting as a Service. |
6.3.1. Verifying Startup
At this point, DDF should be configured and running with a Solr Catalog Provider. New features (endpoints, services, and sites) can be added as needed.
Verification is achieved by checking that all of the DDF bundles are in an Active state (excluding fragment bundles which remain in a Resolved state).
Note
|
It may take a few moments for all bundles to start so it may be necessary to wait a few minutes before verifying installation. |
Execute the following command to display the status of all the DDF bundles:
ddf@local>list | grep -i ddf
Warning
|
Entries in the Resolved state are expected; they are OSGi bundle fragments.
Bundle fragments are distinguished from other bundles in the command line console list by a field named Hosts, followed by a bundle ID, as in the example below. |
96 | Resolved | 80 | 2.10.0.SNAPSHOT | DDF :: Platform :: PaxWeb :: Jetty Config, Hosts: 90
After successfully completing these steps, the DDF is ready to be configured.
6.3.2. DDF Directory Contents after Installation and Initial Startup
During DDF installation, the major directories and files shown in the table below are created, modified, or replaced in the destination directory.
Directory Name | Description
---|---
<DDF_HOME>/bin | Scripts to start, stop, and connect to DDF.
<DDF_HOME>/data | The working directory of the system – installed bundles and their data
<DDF_HOME>/data/log/ddf.log | Log file for DDF, logging all errors, warnings, and (optionally) debug statements. This log rolls up to 10 times, frequency based on a configurable setting (default=1 MB)
<DDF_HOME>/data/log/ingest_error.log | Log file for any ingest errors that occur within DDF.
<DDF_HOME>/data/log/security.log | Log file that records user interactions with the system for auditing purposes.
<DDF_HOME>/deploy | Hot-deploy directory – KARs and bundles added to this directory will be hot-deployed (Empty upon DDF installation)
<DDF_HOME>/documentation | HTML and PDF copies of DDF documentation.
<DDF_HOME>/etc | Directory monitored for addition/modification/deletion of .config configuration files
<DDF_HOME>/etc/templates | Template .config files for DDF configurations
<DDF_HOME>/lib | The system’s bootstrap libraries. Includes the karaf.jar file.
<DDF_HOME>/licenses | Licensing information related to the system.
<DDF_HOME>/solr | Apache Solr server used when DDF manages Solr
<DDF_HOME>/solr/server/logs/solr.log | Log file for Solr.
<DDF_HOME>/system | Local bundle repository. Contains all of the JARs required by DDF, including third-party JARs.
6.3.3. Completing Installation
Upon startup, complete installation from either the Admin Console or the Command Console.
6.3.3.1. Completing Installation from the Admin Console
Upon startup, the installation can be completed by navigating to the Admin Console at https://{FQDN}:{PORT}/admin.
Warning
|
Internet Explorer 10 TLS Warning
Internet Explorer 10 users may need to enable TLS 1.2 to access the Admin Console in the browser. Enabling TLS 1.2 in IE10
|
-
Default user/password:
admin
/admin
.
On the initial startup of the Admin Console, a series of prompts walks through essential configurations. These configurations can be changed later, if needed.
-
Click Start to begin.
6.3.3.2. Completing Installation from the Command Console
In order to install DDF from the Command Console, use the command profile:install <profile-name>
.
The <profile-name>
should be the desired Setup Type in lowercase letters.
To see the available profiles, use the command profile:list
.
Note
|
This only installs the desired Setup Type. There are other components that can be set up in the Admin Console Installer that cannot be setup on the Command Console. After installing the Setup Type, these other components can be set up as described below. |
6.3.3.2.1. Configuring Guest Claim Attributes
The Guest Claim Attributes can be configured via the Admin Console after running the profile:install
command.
See Configuring Guest Claim Attributes.
6.3.3.2.2. System Configuration Settings
System Settings and Contact Info, as described in System Configuration Settings, can be changed in <DDF_HOME>/etc/custom.system.properties
.
The certificates must be set up manually as described in Managing Keystores and Certificates.
Note
|
The system will need to be restarted after changing any of these settings. |
6.3.4. Firewall Port Configuration
Below is a table listing all of the default ports that DDF uses and a description of what they are used for. Firewalls will need to be configured to open these ports in order for external systems to communicate with DDF.
Port | Usage description |
---|---|
8993 |
https access to DDF admin and search web pages. |
8101 |
For administering DDF instances; gives SSH access to the administration console. |
61616 |
DDF broker port for JMS messaging over the OpenWire protocol. |
5672 |
DDF broker port for JMS messaging over multiple protocols: Artemis CORE, AMQP, and OpenWire by default. |
5671 |
DDF broker port for JMS messaging over AMQP by default. |
1099 |
RMI Registry Port |
44444 |
RMI Server Port |
8994 |
Solr Server Port. DDF does not listen on this port, but the Solr process does and it must be able to receive requests from DDF on this port. |
Note
|
These are the default ports used by DDF. DDF can be configured to use different ports. |
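As an illustration, on a Linux host running firewalld the HTTPS port could be opened as follows (firewalld is an assumption; use whatever firewall tooling governs the environment):
firewall-cmd --permanent --add-port=8993/tcp
firewall-cmd --reload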
6.3.5. Internet Explorer 11 Enhanced Security Configuration
Below are steps listing all of the changes that DDF requires to run on Internet Explorer 11 and several additional considerations to keep in mind.
-
In the IE11
Settings
>Compatibility View Settings
dialog, un-checkDisplay intranet sites in Compatibility View
. -
In the
Settings
>Internet Options
>Security
tab,Local intranet
zone:-
Click the
Sites
>Advanced
button, add the current host name to the list, e.g., https://windows-host-name.domain.edu, and close the dialog. -
Make sure the security level for the
Local intranet
zone is set toMedium-low
inCustom level…
.-
Enable Protected Mode
is checked by default, but it may need to be disabled if the above changes do not fully resolve access issues.
-
-
-
Restart the browser.
Note
|
During installation, make sure to use the host name and not localhost when setting up the DDF’s hostname, port, etc. |
6.4. High Availability Initial Setup
This section describes how to complete the initial setup of DDF in a Highly Available Cluster. The following prerequisites must be in place first:
-
A failover proxy that can route HTTP traffic according to the pattern described in the Introduction to High Availability. It is recommended that a hardware failover proxy be used in a production environment.
-
Solr Cloud: See the Solr Cloud section for installation and configuration guidance to connect DDF nodes to Solr Cloud.
Once the prerequisites have been met, the below steps can be followed.
Note
|
Unless listed in the High Availability Initial Setup Exceptions section, the normal steps can be followed for installing, configuring, and hardening. |
-
Install the first DDF node. See the Installation Section.
-
Configure the first DDF node. See the Configuring Section.
-
Optional: Harden the first DDF node (excluding setting directory permissions). See the Hardening Section.
-
Export the first DDF node’s configurations, install the second DDF node, and import the exported configurations on that node. See Reusing Configurations.
-
If hardening, set directory permissions on both DDF nodes. See Setting Directory Permissions.
6.4.1. High Availability Initial Setup Exceptions
These steps are handled differently for the initial setup of a Highly Available Cluster.
6.4.1.1. Failover Proxy Integration
In order to integrate with a failover proxy, the DDF node’s system properties (in <DDF_HOME>/etc/custom.system.properties
) must be changed to publish the correct port to external systems and users.
This must be done before installing the first DDF node. See High Availability Initial Setup.
There are two internal port properties that must be changed to whatever ports the DDF will use on its system. Then there are two external port properties that must be changed to whatever ports the failover proxy is forwarding traffic through.
Warning
|
Make sure that the failover proxy is already running and forwarding traffic on the chosen ports before starting the DDF. There may be unexpected behavior otherwise. |
In the example below, the failover proxy with a hostname of service.org forwards https traffic via 8993 and http traffic via 8181. The DDF node runs on 1111 for https and 2222 for http on its host. The hostname of the DDF must match the hostname of the proxy.
org.codice.ddf.system.hostname=service.org
org.codice.ddf.system.httpsPort=1111
org.codice.ddf.system.httpPort=2222
org.codice.ddf.system.port=${org.codice.ddf.system.httpsPort}
org.codice.ddf.external.hostname=service.org
org.codice.ddf.external.httpsPort=8993
org.codice.ddf.external.httpPort=8181
org.codice.ddf.external.port=${org.codice.ddf.external.httpsPort}
6.4.1.2. Identical Directory Structures
The two DDF nodes need to be under identical root directories on their corresponding systems. On Windows, this means they must be under the same drive.
6.4.1.3. Highly Available Security Auditing
A third party tool will have to be used to persist the logs in a highly available manner.
-
Edit the <DDF_HOME>/etc/org.ops4j.pax.logging.cfg file to enable log4j2, following the steps in Enabling Fallback Audit Logging.
-
Then put the appropriate log4j2 appender in <DDF_HOME>/etc/log4j2.xml to send logs to the chosen third party tool (see the sketch below). See Log4j Appenders.
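A minimal sketch of such an appender, assuming a syslog-based collector; the appender name, collector host, and logger name are illustrative and should be adjusted to match the tool in use and the logger names already present in log4j2.xml:
<!-- Hypothetical appender: forwards audit events to an external syslog collector -->
<Syslog name="HaAudit" host="log-collector.example.org" port="514"
        protocol="TCP" format="RFC5424" facility="AUDIT" appName="ddf"/>
<!-- Route the security/audit logger to the new appender -->
<Logger name="securityLogger" level="info" additivity="false">
  <AppenderRef ref="HaAudit"/>
</Logger>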
6.4.1.4. Shared Storage Provider
The storage provider must be in a location that is shared between the two DDF nodes and must be highly available. If hardening the Highly Available Cluster, this shared storage provider must be trusted/secured. One way to accomplish this is to use the default Content File System Storage Provider and configure it to point to a highly available shared directory.
6.4.1.5. High Availability Certificates
Due to the nature of highly available environments, localhost is not suitable for use as a hostname to identify the DDF cluster. The default certificate that ships with the product uses localhost as the common name, so this certificate needs to be replaced. The following describes how to generate a certificate signed by the DDF Demo Certificate Authority that uses a proper hostname.
Note
|
This certificate, and any subsequent certificates signed by the Demo CA, are intended for testing purposes only, and should not be used in production. |
Certificates need to have Subject Alternative Names (SANs) which will include the host for the failover
proxy and for both DDF nodes. A certificate with SANs signed by the Demo CA can be obtained by
navigating to <DDF_HOME>/etc/certs/
and, assuming the proxy’s hostname is service.org, running
the following for UNIX operating systems:
./CertNew.sh -cn service.org -san "DNS:service.org"
or the following for Windows operating systems:
CertNew -cn service.org -san "DNS:service.org"
Note
|
Systems that use DDF version 2.11.4 or later will automatically get a DNS SAN entry matching the CN without the need to specify the -san argument. |
More customization for certs can be achieved by following the steps at Creating New Server Keystore Entry with the CertNew Scripts.
6.4.1.6. High Availability Installation Profile
Instead of having to manually turn features on and off, there is a High Availability installation profile.
This profile will not show up in the UI Installer, but can be installed by executing profile:install ha
on the command line instead of stepping through the UI Installer.
This profile will install all of the High Availability supported features.
7. Configuring
DDF is highly configurable and many of the components of the system can be configured to use an included DDF implementation or replaced with an existing component of an integrating system.
Note
|
Configuration Requirements
Because components can easily be installed and uninstalled, it’s important to remember that for proper DDF functionality, at least the Catalog API, one Endpoint, and one Catalog Framework implementation must be active. |
DDF provides several tools for configuring the system.
The Admin Console is a useful interface for configuring applications, their features, and important settings.
Alternatively, many configurations can be updated through console commands entered into the Command Console.
Finally, configurations are stored in configuration files within the <DDF_HOME>
directory.
While many configurations can be set or changed in any order, for ease of use of this documentation, similar subjects have been grouped together sequentially.
See Keystores and certificates to set up the certificates needed for messaging integrity and authentication. Set up Users with security attributes, then configure data attribute handling, and finally, define the Security Policies that map between users and data and make decisions about access.
Connecting DDF to other data sources, including other instances of DDF is covered in the Configuring Federation section.
Lastly, see the Configuring for Special Deployments section for guidance on common specialized installations, such as fanout or multiple identical configurations.
7.1. Admin Console Tutorial
The Admin Console is the centralized location for administering the system. The Admin Console allows an administrator to configure and tailor system services and properties. The default address for the Admin Console is https://{FQDN}:{PORT}/admin.
The configuration and features installed can be viewed and edited from the System tab of the Admin Console.
It is recommended to use the Catalog App → Sources tab to configure and manage sites/sources.
DDF displays all active applications in the Admin Console.
This view can be configured according to preference.
Either view has an > arrow icon to view more information about the application as currently configured.
View | Description |
---|---|
Tile View |
The first view presented is the Tile View, displaying all active applications as individual tiles. |
List View |
Optionally, active applications can be displayed in a list format by clicking the list view button. |
Each individual application has a detailed view to modify configurations specific to that application. All applications have a standard set of tabs, although some apps may have additional ones with further information.
Tab | Explanation |
---|---|
Configuration |
The Configuration tab lists all bundles associated with the application as links to configure any configurable properties of that bundle. |
DDF includes many components, packaged as features, that can be installed and/or uninstalled without restarting the system. Features are collections of OSGi bundles, configuration data, and/or other features.
Note
|
Transitive Dependencies
Features may have dependencies on other features and will auto-install them as needed. |
In the Admin Console, Features are found on the Features tab of the System tab.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Uninstalled features are shown with a play arrow under the Actions column.
-
Select the play arrow for the desired feature.
-
The Status will change from Uninstalled to Installed.
-
-
Installed features are shown with a stop icon under the Actions column.
-
Select the stop icon for the desired feature.
-
The Status will change from Installed to Uninstalled.
-
7.2. Console Command Reference
DDF provides access to a powerful Command Console to use to manage and configure the system.
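For example, a running instance can typically be reached over SSH on the default administration port (see Firewall Port Configuration), using the admin credentials for that instance:
ssh -p 8101 admin@<FQDN>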
7.2.1. Feature Commands
Individual features can also be added via the Command Console.
-
Determine which feature to install by viewing the available features on DDF.
ddf@local>feature:list
-
The console outputs a list of all features available (installed and uninstalled). A snippet of the list output is shown below (the versions may differ):
State Version Name Repository Description [installed ] [2.15.0 ] security-handler-api security-services-app-2.15.0 API for authentication handlers for web applications. [installed ] [2.15.0 ] security-core security-services-app-2.15.0 DDF Security Core [uninstalled] [2.15.0 ] security-expansion security-services-app-2.15.0 DDF Security Expansion [installed ] [2.15.0 ] security-pdp-authz security-services-app-2.15.0 DDF Security PDP. [uninstalled] [2.15.0 ] security-pep-serviceauthz security-services-app-2.15.0 DDF Security PEP Service AuthZ [uninstalled] [2.15.0 ] security-expansion-user-attributes security-services-app-2.15.0 DDF Security Expansion User Attributes Expansion [uninstalled] [2.15.0 ] security-expansion-metacard-attributes security-services-app-2.15.0 DDF Security Expansion Metacard Attributes Expansion [installed ] [2.15.0 ] security-sts-server security-services-app-2.15.0 DDF Security STS. [installed ] [2.15.0 ] security-sts-realm security-services-app-2.15.0 DDF Security STS Realm. [uninstalled] [2.15.0 ] security-sts-ldaplogin security-services-app-2.15.0 DDF Security STS JAAS LDAP Login. [uninstalled] [2.15.0 ] security-sts-ldapclaimshandler security-services-app-2.15.0 Retrieves claims attributes from an LDAP store.
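-
Install the feature. (ddf-source-dummy is an illustrative feature name, matching the uninstall example below; substitute a name from the feature:list output.)
ddf@local>feature:install ddf-source-dummy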
-
Check the bundle status to verify the service is started.
ddf@local>list
The console output should show an entry similar to the following:
[ 117] [Active ] [ ] [Started] [ 75] DDF :: Catalog :: Source :: Dummy (<version>)
7.2.1.1. Uninstalling Features from the Command Console
-
Check the feature list to verify the feature is installed properly.
ddf@local>feature:list
State Version Name Repository Description [installed ] [2.15.0 ] ddf-core ddf-2.15.0 [uninstalled] [2.15.0 ] ddf-sts ddf-2.15.0 [installed ] [2.15.0 ] ddf-security-common ddf-2.15.0 [installed ] [2.15.0 ] ddf-resource-impl ddf-2.15.0 [installed ] [2.15.0 ] ddf-source-dummy ddf-2.15.0
-
Uninstall the feature.
ddf@local>feature:uninstall ddf-source-dummy
Warning
|
Dependencies that were auto-installed by the feature are not automatically uninstalled. |
-
Verify that the feature has uninstalled properly.
ddf@local>feature:list
State Version Name Repository Description [installed ] [2.15.0 ] ddf-core ddf-2.15.0 [uninstalled] [2.15.0 ] ddf-sts ddf-2.15.0 [installed ] [2.15.0 ] ddf-security-common ddf-2.15.0 [installed ] [2.15.0 ] ddf-resource-impl ddf-2.15.0 [uninstalled] [2.15.0 ] ddf-source-dummy ddf-2.15.0
7.3. Configuration Files
Many important configuration settings are stored in the <DDF_HOME>
directory.
Note
|
Depending on the environment, it may be easier for integrators and administrators to configure DDF using the Admin Console prior to disabling it for hardening purposes. The Admin Console can be re-enabled for additional configuration changes. |
In an environment hardened for security purposes, access to the Admin Console or the Command Console might be denied, and attempting to use them in such an environment may cause configuration errors.
In such an environment, it is necessary to configure DDF (e.g., providers, Schematron rulesets, etc.) using .config files.
A template file is provided for some configurable DDF items so that they can be copied/renamed then modified with the appropriate settings.
Warning
|
If the Admin Console is enabled again, all of the configuration done via |
7.3.1. Configuring Global Settings with custom.system.properties
Global configuration settings are configured via the properties file custom.system.properties
.
These properties can be manually set by editing this file or set via the initial configuration from the Admin Console.
Note
|
Any changes made to this file require a restart of the system to take effect. |
Important
|
The passwords configured in this section reflect the passwords used to decrypt JKS (Java KeyStore) files. Changing these values without also changing the passwords of the JKS causes undesirable behavior. |
Title | Property | Type | Description | Default Value | Required |
---|---|---|---|---|---|
Keystore and Truststore Java Properties |
|||||
Keystore |
|
String |
Path to server keystore |
|
Yes |
Keystore Password |
|
String |
Password for accessing keystore |
|
Yes |
Truststore |
|
String |
The trust store used for SSL/TLS connections. Path is relative to |
|
Yes |
Truststore Password |
|
String |
Password for server Truststore |
|
Yes |
Keystore Type |
|
String |
File extension to use with server keystore |
|
Yes |
Truststore Type |
|
String |
File extension to use with server truststore |
|
Yes |
Headless mode |
|||||
Headless Mode |
|
Boolean |
Force Java to run in headless mode when the server does not have a display device |
|
No |
Global URL Properties |
|||||
Internal Default Protocol |
|
String |
Default protocol that should be used to connect to this machine. |
|
Yes |
Internal Host |
|
String |
The hostname or IP address this system runs on. If the hostname is changed during the install to something other than localhost, a new keystore and truststore must be provided. |
|
Yes |
Internal HTTPS Port |
|
String |
The https port that the system uses. NOTE: This DOES change the port the system runs on. |
|
Yes |
Internal HTTP Port |
|
String |
The http port that the system uses. NOTE: This DOES change the port the system runs on. |
|
Yes |
Internal Default Port |
|
String |
The default port that the system uses. This should match either the above http or https port. NOTE: This DOES change the port the system runs on. |
|
Yes |
Internal Root Context |
|
String |
The base or root context that services will be made available under. |
|
Yes |
External Default Protocol |
|
String |
Default protocol that should be used to connect to this machine. |
|
Yes |
External Host |
|
String |
The hostname or IP address used to advertise the system. Do not enter localhost. If the hostname is changed during the install to something other than localhost, a new keystore and truststore must be provided. NOTE: Does not change the address the system runs on. |
|
Yes |
External HTTPS Port |
|
String |
The https port used to advertise the system. NOTE: This does not change the port the system runs on. |
|
Yes |
External HTTP Port |
|
String |
The http port used to advertise the system. NOTE: This does not change the port the system runs on. |
|
Yes |
External Default Port |
|
String |
The default port used to advertise the system. This should match either the above http or https port. NOTE: Does not change the port the system runs on. |
|
Yes |
External Root Context |
|
String |
The base or root context that services will be advertised under. |
|
Yes |
System Information Properties |
|||||
Site Name |
|
String |
The site name for DDF. |
|
Yes |
Site Contact |
|
String |
The email address of the site contact. |
|
No |
Version |
|
String |
The version of DDF that is running. This value should not be changed from the factory default. |
|
Yes |
Organization |
|
String |
The organization responsible for this installation of DDF. |
|
Yes |
Registry ID |
|
String |
The registry id for this installation of DDF. |
|
No |
Thread Pool Settings |
|||||
Thread Pool Size |
|
Integer |
Size of thread pool used for handling UI queries, federating requests, and downloading resources. See Configuring Thread Pools |
|
Yes |
HTTPS Specific Settings |
|||||
Cipher Suites |
|
String |
Cipher suites to use with secure sockets. If using the JCE unlimited strength policy, use this list in place of the defaults: . |
|
No |
Https Protocols |
|
String |
Protocols to allow for secure connections |
|
No |
Allow Basic Auth Over Http |
|
Boolean |
Set to true to allow Basic Auth credentials to be sent over HTTP insecurely. This should only be done in a test environment. These events will be audited. |
|
Yes |
Restrict the Security Token Service to allow connections only from DNs matching these patterns |
|
String |
Set to a comma-separated list of regex patterns to define which hosts are allowed to connect to the STS |
|
Yes |
XML Settings |
|||||
Parse XML documents into DOM object trees |
|
String |
Enables Xerces-J implementation of |
|
Yes |
Catalog Source Retry Interval |
|||||
Initial Endpoint Contact Interval |
|
Integer |
If a Catalog Source is unavailable, try to connect to it after the initial interval has elapsed. After every retry, the interval doubles, up to a given maximum interval. The interval is measured in seconds. |
|
Yes |
Maximum Endpoint Contact Interval |
|
Integer |
Do not wait longer than the maximum interval to attempt to establish a connection with an unavailable Catalog Source. Smaller values result in more current information about the status of Catalog Sources, but cause more network traffic. The interval is measured in seconds. |
|
Yes |
File Upload Settings |
|||||
File extensions flagged as potentially dangerous to the host system or external clients |
|
String |
Files uploaded with these bad file extensions will have their file names sanitized before being saved. E.g., sample_file.exe will be renamed to sample_file.bin upon ingest |
|
Yes |
File names flagged as potentially dangerous to the host system or external clients |
|
String |
Files uploaded with these bad file names will have their file names sanitized before being saved. E.g., crossdomain.xml will be renamed to file.bin upon ingest |
|
Yes |
Mime types flagged as potentially dangerous to external clients |
|
String |
Files uploaded with these mime types will be rejected from the upload |
|
Yes |
File names flagged as potentially dangerous to external clients |
|
String |
Files uploaded with these file names will be rejected from the upload |
|
Yes |
|
String |
Type of Solr configuration |
|
Yes |
|
Solr Cloud Properties |
|||||
Zookeeper Nodes |
|
String |
Zookeeper hostnames and port numbers |
|
Yes |
Allow DDF to change the Solr server password if it detects the default password is in use |
|
Boolean |
If true, DDF attempts to change the default Solr server password to a randomly
generated UUID. This property is only used if the |
|
Yes |
Solr Data Directory |
|
String |
Directory for Solr core files |
|
Yes |
Solr server HTTP port |
|
Integer |
Solr server’s port. |
|
Yes |
|
String |
URL for an HTTP Solr server (required for HTTP Solr) |
|
Yes |
|
Solr Heap Size |
|
String |
Memory allocated to the Solr Java process |
|
Yes |
|
String |
The password used for basic authentication to Solr. This property is only used if the
|
|
Yes |
|
|
String |
The username for basic authentication to Solr. This property is only used if the
|
|
Yes |
|
|
Boolean |
If true, the HTTP Solr Client sends a username and password when sending requests to Solr server.
This property is only used if the |
|
Yes |
|
Start Solr server |
|
Boolean |
If true, application manages Solr server lifecycle |
|
Yes |
These properties are available to be used as variable parameters in input URL fields within the Admin Console. For example, the URL for the local CSW service (https://{FQDN}:{PORT}/services/csw) could be defined as:
${org.codice.ddf.system.protocol}${org.codice.ddf.system.hostname}:${org.codice.ddf.system.port}${org.codice.ddf.system.rootContext}/csw
This variable version is more verbose, but will not need to be changed if the system host, port, or root context changes.
Warning
|
Only root can access ports < 1024 on Unix systems. |
7.3.2. Configuring with .config Files
The DDF is configured using .config files.
Like the Karaf .cfg files, these configuration files must be located in the <DDF_HOME>/etc/ directory.
Unlike the Karaf .cfg files, .config files must follow a naming convention that includes the configuration persistence ID (PID) that they represent.
The filenames must be the PID with a .config extension.
This type of configuration file also supports lists within configuration values (metatype cardinality attribute greater than 1) and String, Boolean, Integer, Long, Float, and Double values.
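As an illustration of the format, a hypothetical file <DDF_HOME>/etc/org.example.project.pid.config (the PID, property names, and values are all illustrative) might contain:
exampleString="some value"
exampleBoolean=B"true"
exampleInteger=I"42"
exampleList=["first","second","third"]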
Important
|
This new configuration file format must be used for any configuration that makes use of lists.
Examples include Web Context Policy Manager ( |
Warning
|
Only one configuration file should exist for any given PID.
The result of having both a |
The main purpose of the configuration files is to allow administrators to pre-configure DDF without having to use the Admin Console.
In order to do so, the configuration files need to be copied to the <DDF_HOME>/etc directory after the DDF zip has been extracted.
Upon start up, all the .config
files located in <DDF_HOME>/etc
are automatically read and processed.
DDF monitors the <DDF_HOME>/etc
directory for any new .config
file that gets added.
As soon as a new file is detected, it is read and processed.
Changes to these configurations from the Admin Console or otherwise are persisted in the original configuration file in the <DDF_HOME>/etc
directory.
7.4. Configuring User Access
DDF does not define accounts or types of accounts to support access. DDF uses an attribute based access control (ABAC) model. For reference, ABAC systems control access by evaluating rules against the attributes of the entities (subject and object), actions, and the environment relevant to a request.
DDF can be configured to access many different types of user stores to manage and monitor user access.
7.4.1. Configuring Guest Access
Unauthenticated access to a secured DDF system is provided by the Guest user. By default, DDF allows guest access.
Because DDF does not know the identity of a Guest user, it cannot assign security attributes to the Guest. The administrator must configure the attributes and values (i.e. the "claims") to be assigned to Guests. The Guest Claims become the default minimum attributes for every user, both authenticated and unauthenticated. Even if a user claim is more restrictive, the guest claim will grant access, so ensure the guest claim is only as permissive as necessary.
The Guest user is uniquely identified with a Principal name of the format Guest@UID
. The unique
identifier is assigned to a Guest based on its source IP address and is cached so that subsequent
Guest accesses from the same IP address within a 30-minute window will get the same unique identifier.
To support administrators' need to track the source IP Address for a given Guest user, the IP Address
and unique identifier mapping will be audited in the security log.
-
Make sure that all the default logical names for locations of the security services are defined.
7.4.1.1. Denying Guest User Access
To disable guest access for a context, use the Web Context Policy Manager configuration to remove Guest from the Authentication Type for that context. Only authorized users are then allowed to continue to the Search UI page.
Note
|
If using the included IdP for authentication, disable the |
7.4.1.2. Allowing Guest User Access
Guest authentication must be enabled and configured to allow guest users. Once the guest user is configured, redaction and filtering of metadata is done for the guest user the same way it is done for normal users.
To enable guest authentication for a context, use the Web Context Policy Manager configuration to change the Authentication Type for that context to Guest.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select Web Context Policy Manager.
-
Select the desired context (/, /search, /admin, etc.).
-
Add Guest to the Authentication Type list.
-
Separate entries with a | symbol (e.g., /=SAML|Guest).
7.4.1.2.1. Configuring Guest Interceptor if Allowing Guest Users
-
Required Step for Security Hardening
If a legacy client requires the use of the secured SOAP endpoints, the guest interceptor should be configured.
Otherwise, the guest interceptor and public endpoints should be uninstalled for a hardened system.
To uninstall the guest interceptor and public endpoints:
-
Navigate to the Admin Console.
-
Select the System tab.
-
Open the Features section.
-
Search for security-interceptor-guest.
-
Click the Uninstall button.
7.4.1.2.2. Configuring Guest Claim Attributes
A guest user’s attributes define the most permissive set of claims for an unauthenticated user.
A guest user’s claim attributes are stored in configuration, not in the LDAP as normal authenticated users' attributes are.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select the Security Guest Claims Handler.
-
Add any additional attributes desired for the guest user.
-
Save changes.
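Guest claims can also be pre-configured without the Admin Console using a .config file, as described in Configuring with .config Files. A sketch, assuming the PID and property name used by the Security Guest Claims Handler (verify both against the metatype of your installation), in a hypothetical file <DDF_HOME>/etc/ddf.security.sts.guestclaims.config:
attributes=["http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest"]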
7.4.2. Configuring REST Services for Users
If using REST services or connecting to REST sources, several configuration options are available.
DDF includes an Identity Provider (IdP), but can also be configured to support an external IdP or no IdP at all. The following diagram shows the configuration options.
7.4.2.1. Configuring Included Identity Provider
The included IdP is installed by default.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the security-idp feature, or run the command feature:install security-idp from the Command Console.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select IdP Server.
-
Configure Authentication Request requirements
-
Disable the Require Signed AuthnRequests option to allow processing of authentication requests without signatures.
-
Disable the Limit RelayStates to 80 Bytes option to allow interoperability with Service Providers that are not compliant with the SAML Specifications and send RelayStates larger than 80 bytes.
-
-
Configure Guest Access:
-
Disable the Allow Guest Access option to disallow a user to authenticate against the IdP with a guest account.
-
-
Configure the Service Providers (SP) Metadata:
-
Select the + next to SP Metadata to add a new entry.
-
Populate the new entry with:
-
an HTTPS URL (https://) such as https://localhost:8993/services/saml/sso/metadata1,
-
a file URL (file:), or
-
an XML block to refer to desired metadata.
-
-
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://localhost:8993/services/saml">
<md:SPSSODescriptor protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>
MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:KeyDescriptor use="encryption">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>
MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/logout"/>
<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/logout"/>
<md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/services/saml/sso"/>
<md:AssertionConsumerService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/services/saml/sso"/>
</md:SPSSODescriptor>
</md:EntityDescriptor>
To use the IdP for authentication,
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select Web Context Policy Manager.
-
Under Authentication Types, set the IdP authentication type to context paths as necessary. Note that it should only be used on context paths that will be accessed by users via web browsers. For example:
-
/search=IdP
-
Other authentication types can also be used in conjunction with the IdP type.
For example, if you wanted to secure the entire system with the IdP, but still allow legacy clients that don’t understand the SAML ECP specification to connect, you could set /=IdP|PKI
.
With that configuration, any clients that failed to connect using either the SAML 2.0 Web SSO Profile or the SAML ECP specification would fall back to 2-way TLS for authentication.
Note
|
If you have configured |
To configure the IdP client (also known as the SP) that interacts with the specified IdP,
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select IdP Client.
-
Populate IdP Metadata field through one of the following:
-
an HTTPS URL (https://) e.g., https://localhost:8993/services/idp/login/metadata,
-
a file URL (file:), or
-
an XML block to refer to desired metadata.
-
<md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" entityID="https://localhost:8993/services/idp/login">
<md:IDPSSODescriptor WantAuthnRequestsSigned="true" protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
<md:KeyDescriptor use="signing">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>
MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:KeyDescriptor use="encryption">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>
MIIDEzCCAnygAwIBAgIJAIzc4FYrIp9mMA0GCSqGSIb3DQEBBQUAMHcxCzAJBgNVBAYTAlVTMQswCQYDVQQIDAJBWjEMMAoGA1UECgwDRERGMQwwCgYDVQQLDANEZXYxGTAXBgNVBAMMEERERiBEZW1vIFJvb3QgQ0ExJDAiBgkqhkiG9w0BCQEWFWRkZnJvb3RjYUBleGFtcGxlLm9yZzAeFw0xNDEyMTAyMTU4MThaFw0xNTEyMTAyMTU4MThaMIGDMQswCQYDVQQGEwJVUzELMAkGA1UECAwCQVoxETAPBgNVBAcMCEdvb2R5ZWFyMQwwCgYDVQQKDANEREYxDDAKBgNVBAsMA0RldjESMBAGA1UEAwwJbG9jYWxob3N0MSQwIgYJKoZIhvcNAQkBFhVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAMeCyNZbCTZphHQfB5g8FrgBq1RYzV7ikVw/pVGkz8gx3l3A99s8WtA4mRAeb6n0vTR9yNBOekW4nYOiEOq//YTi/frI1kz0QbEH1s2cI5nFButabD3PYGxUSuapbc+AS7+Pklr0TDI4MRzPPkkTp4wlORQ/a6CfVsNr/mVgL2CfAgMBAAGjgZkwgZYwCQYDVR0TBAIwADAnBglghkgBhvhCAQ0EGhYYRk9SIFRFU1RJTkcgUFVSUE9TRSBPTkxZMB0GA1UdDgQWBBSA95QIMyBAHRsd0R4s7C3BreFrsDAfBgNVHSMEGDAWgBThVMeX3wrCv6lfeF47CyvkSBe9xjAgBgNVHREEGTAXgRVsb2NhbGhvc3RAZXhhbXBsZS5vcmcwDQYJKoZIhvcNAQEFBQADgYEAtRUp7fAxU/E6JD2Kj/+CTWqu8Elx13S0TxoIqv3gMoBW0ehyzEKjJi0bb1gUxO7n1SmOESp5sE3jGTnh0GtYV0D219z/09n90cd/imAEhknJlayyd0SjpnaL9JUd8uYxJexy8TJ2sMhsGAZ6EMTZCfT9m07XduxjsmDz0hlSGV0=
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</md:KeyDescriptor>
<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/logout"/>
<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/logout"/>
<md:NameIDFormat>
urn:oasis:names:tc:SAML:2.0:nameid-format:persistent
</md:NameIDFormat>
<md:NameIDFormat>
urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
</md:NameIDFormat>
<md:NameIDFormat>
urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName
</md:NameIDFormat>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect" Location="https://localhost:8993/services/idp/login"/>
<md:SingleSignOnService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" Location="https://localhost:8993/services/idp/login"/>
</md:IDPSSODescriptor>
</md:EntityDescriptor>
When using the included IdP, DDF can be configured to use the included Security Token Service (STS).
7.4.2.1.1. Configuring Included STS
An LDAP server can be used to maintain a list of DDF users and the attributes associated with them. The Security Token Service (STS) can use an LDAP server as an attribute store and convert those attributes to SAML claims.
DDF includes a demo LDAP server, but an external LDAP server is required for a production installation.
The STS is installed by default in DDF.
-
Verify that the serverKeystores.jks file in <DDF_HOME>/etc/keystores trusts the hostnames used in your environment (the hostnames of LDAP, and any DDF users that make use of this STS server).
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Start the security-sts-ldaplogin and security-sts-ldapclaimshandler features.
Select the Configuration tab.
-
Select the Security STS LDAP Login configuration.
-
Verify that the LDAP URL, LDAP Bind User DN, and LDAP Bind User Password fields match your LDAP server’s information.
-
The default DDF LDAP settings will match up with the default settings of the OpenDJ embedded LDAP server. Change these values to map to the location and settings of the LDAP server being used.
-
-
Select the Save changes button if changes were made.
-
Open the Security STS LDAP and Roles Claims Handler configuration.
-
Populate the same URL, user, and password fields with your LDAP server information.
-
Select the Save Changes button.
Configure the DDF to use this authentication scheme.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Open the Web Context Policy Manager configuration.
-
Under Authentication Types, make any desired authentication changes to contexts.
-
In order to use the SAML 2.0 Web SSO profile against a context, you must specify only the IdP authentication type.
-
-
Configure the client connecting to the STS.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Open the Security STS Client configuration.
-
Verify that the host/port information in the STS Address field points to your STS server. If you are using the default bundled STS, this information will already be correct.
See Security STS Client table for all configuration options.
DDF should now use the SSO/STS/LDAP servers when it attempts to authenticate a user at login.
Connect to the server hosting the STS.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Security STS Server configuration.
-
Verify the hostname and usernames are correct.
See Security STS Server table for all configuration options.
Set up alternatives to displaying the username of the logged in user.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the SAML NameID Policy configuration.
-
Add any desired attributes to display instead of the username. (The first matching attribute will be used.)
-
Required Step for Security Hardening
-
Open the <DDF_HOME>/etc/custom.system.properties file.
-
Edit the line ws-security.subject.cert.constraints = .*CN=<MY_HOST_CN>.*.
-
By default this will only allow your hostname. To allow other desired hosts, add their CNs to the regular expression within parentheses, delimited by |:
-
ws-security.subject.cert.constraints = .*CN=(<MY_HOST_CN>|<OTHER_HOST_CN>|<ANOTHER_HOST_CN>).*
7.4.2.2. Connecting to an External Identity Provider
To connect to an external Identity Provider,
-
Provide the external IdP with DDF’s Service Provider (SP) metadata. The SP metadata can be found at
https://<FQDN>:<PORT>/services/saml/sso/metadata
. -
Replace the IdP metadata field in DDF.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select IdP Client.
-
Populate the IdP Metadata field with the external IdP’s metadata.
-
Note
|
DDF may not interoperate successfully with all IdPs. To identify the ones it can interoperate with, use the Security Assertion Markup Language (SAML) Conformance Test Kit (CTK) |
It is not recommended to remove or replace the included Service Provider. To add an additional, external Service Provider, add the SP metadata to the IdP Server configuration. See Configuring Security IdP Service Provider for more detail.
7.4.2.3. Configuring Without an Identity Provider
To configure DDF to not use an Identity Provider (IdP),
-
Disable the IdP feature.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Uninstall the security-idp feature.
-
-
Change the Authentication Type if it is IdP.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select Web Context Policy Manager
-
Under Authentication Types, remove the IdP authentication type from all context paths.
-
7.4.2.3.1. Using STS without IdP
To configure DDF to use the included Security Token Service (STS) without an IdP, follow the same Configuring STS steps, with one additional configuration to make via the Web Context Policy Manager.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select Configuration.
-
Select the Web Context Policy Manager.
-
Add any needed authentication types to the Authentication Types list, such as PKI, Basic, etc.
7.4.3. Configuring SOAP Services for Users
If using SOAP services, DDF can be configured to use the included Security Token Service (STS).
7.4.3.1. Connecting to Included STS with SOAP
DDF includes a STS implementation that can be used for user authentication over SOAP services.
Configure the STS WSS.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select Configuration.
-
Select Security STS WSS.
-
Update the Claims that should be requested by the STS.
7.4.4. Connecting to an LDAP Server
Warning
|
The configurations for Security STS LDAP and Roles Claims Handler and Security STS LDAP Login contain plain text default passwords for the embedded LDAP, which is insecure to use in production. |
Use the Encryption Service from the Command Console to set passwords for your LDAP server. Then change the LDAP Bind User Password in the Security STS LDAP and Roles Claims Handler configurations to use the encrypted password.
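For example, from the Command Console (assuming the security:encrypt command is available in your distribution), encrypt the plain text password and paste the resulting value into the configuration:
ddf@local>security:encrypt <LDAP_BIND_USER_PASSWORD>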
A claim is an additional piece of data about a principal that can be included in a token along with basic token data. A claims manager provides hooks for a developer to plug in claims handlers to ensure that the STS includes the specified claims in the issued token.
Claims handlers convert incoming user credentials into a set of attribute claims that will be populated in the SAML assertion.
For example, the LDAPClaimsHandler
takes in the user’s credentials and retrieves the user’s attributes from a backend LDAP server.
These attributes are then mapped and added to the SAML assertion being created.
Integrators and developers can add more claims handlers that can handle other types of external services that store user attributes.
See the Security STS LDAP and Roles Claims Handler for all possible configurations.
7.4.5. Updating System Users
By default, all system users are located in the <DDF_HOME>/etc/users.properties
and <DDF_HOME>/etc/users.attributes
files.
The default users included in these two files are "admin" and "localhost".
The users.properties
file contains username, password, and role information; while the users.attributes
file is used to mix in additional attributes.
The users.properties
file must also contain the user corresponding to the fully qualified domain name (FQDN) of the system where DDF is running.
This FQDN user represents this host system internally when making decisions about what operations the system is capable of performing.
For example, when performing a DDF Catalog Ingest, the system’s attributes will be checked against any security attributes present on the metacard, prior to ingest, to determine whether or not the system should be allowed to ingest that metacard.
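A sketch of the users.properties entry format, including the FQDN user (the passwords and role names here are purely illustrative):
# <username>=<password>,<role1>[,<role2>,...]
admin=admin,admin
host.example.org=host.example.org,system-user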
Additionally, the users.attributes
file can contain user entries in a regex format.
This allows an administrator to mix in attributes for external systems that match a particular regex pattern.
The FQDN user within the users.attributes
file should be filled out with attributes sufficient to allow the system to ingest the expected data.
The users.attributes
file uses a JSON format as shown below:
{
"admin" : {
"test" : "testValue",
"test1" : [ "testing1", "testing2", "testing3" ]
},
"localhost" : {
},
".*host.*" : {
"reg" : "ex"
}
}
For this example, the "admin" user will end up with two additional claims of "test" and "test1" with values of "testValue" and [ "testing1", "testing2", "testing3" ] respectively. Also, any host matching the regex ".*host.*" would end up with the claim "reg" with the single value of "ex". The "localhost" user would have no additional attributes mixed in.
Warning
|
It is possible for a regex in |
Warning
|
If your data will contain security markings, and these markings are being parsed out into the metacard security attributes via a PolicyPlugin, then the FQDN user MUST be updated with attributes that would grant the privileges to ingest that data. Failure to update the FQDN user with sufficient attributes will result in an error being returned for any ingest request. |
Warning
|
The following attribute values are not allowed:
Additionally, attribute names should not be repeated, and the order that the attributes are defined and the order of values within an array will be ignored. |
7.4.6. Restricting Access to Admin Console
-
Required Step for Security Hardening
If you have integrated DDF with your existing security infrastructure, then you may want to limit access to parts of the DDF based on user roles/groups.
Limit access to the Admin Console to those users who need access. To set access restrictions on the Admin Console, consult the organization’s security architecture to identify specific realms, authentication methods, and roles required.
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
-
Select the Web Context Policy Manager.
-
A dialog will open that allows you to edit DDF access restrictions.
-
Once you have configured your realms in your security infrastructure, you can associate them with DDF contexts.
-
If your infrastructure supports multiple authentication methods, they may be specified on a per-context basis.
-
Role requirements may be enforced by configuring the required attributes for a given context.
-
The whitelisted contexts setting allows child contexts to be excluded from the authentication constraints of their parents.
-
7.4.6.1. Restricting Feature, App, Service, and Configuration Access
-
Required Step for Security Hardening
Limit access to the individual applications, features, or services to those users who need access. Organizational requirements should dictate which applications are restricted and the extent to which they are restricted.
-
Navigate to the Admin Console.
-
Select the Admin application.
-
Select the Configuration tab.
-
Select the Admin Configuration Policy.
-
To add a feature or app permission:
-
Add a new field to "Feature and App Permissions" in the format of:
<feature name>/<app name> = "attribute name=attribute value","attribute name2=attribute value2", …
-
For example, to restrict access of any user without an admin role to the catalog-app:
catalog-app = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=admin", …
-
-
To add a configuration permission:
-
Add a new field to "Configuration Permissions" in the format of:
configuration id = "attribute name=attribute value","attribute name2=attribute value2", …
-
For example, to restrict access of any user without an admin role to the Web Context Policy Manager:
org.codice.ddf.security.policy.context.impl.PolicyManager="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=admin"
-
If a permission is specified, any user without the required attributes will be unable to see or modify the feature, app, or configuration.
7.4.7. Removing Default Users
-
Required Step for Security Hardening
The default security configuration uses a property file located at <DDF_HOME>/etc/users.properties
to store users and passwords.
A hardened system will remove this file and manage all users externally, via an LDAP server or by other means.
Note
|
Default Users are an Insecure Default
The Admin Console has an insecure default warning if the default users are not removed. |
Once DDF is configured to use an external user (such as LDAP), remove the users.properties
file from the <DDF_HOME>/etc
directory.
Use of a users.properties
file should be limited to emergency recovery operations and replaced as soon as possible.
The deletion of the default users in the users.properties
file can be done automatically after 72 hours.
This feature can be found at Admin Console → Admin → Default Users Deletion Scheduler → Enable default users automatic deletion.
Warning
|
Once the default users are removed, the |
Note
|
Emergency Use of
users.properties fileTypically, the DDF does not manage passwords.
Authenticators are stored in an external identity management solution. However, administrators may temporarily use a If a system recovery account is configured in
|
Note
|
Compliance Reviews
It is recommended to perform yearly reviews of accounts for compliance with organizational account management requirements. |
7.4.8. Disallowing Login Without Certificates
DDF can be configured to prevent login without a valid PKI certificate.
-
Navigate to the Admin Console.
-
Select Security.
-
Select Web Context Policy Manager.
-
Add a policy for each context requiring restriction.
-
For example:
/search=SAML|PKI
will disallow login without certificates to the Search UI. -
The format for the policy should be:
/<CONTEXT>=SAML|PKI
-
-
Click Save.
Note
|
Ensure certificates comply with organizational hardening policies. |
7.4.9. Managing Certificate Revocation
-
Required Step for Security Hardening
For hardening purposes, it is recommended to implement a way to verify a Certificate Revocation List (CRL) at least daily or an Online Certificate Status Protocol (OCSP) server.
7.4.9.1. Managing a Certificate Revocation List (CRL)
A Certificate Revocation List is a collection of formerly valid certificates that should explicitly not be accepted.
7.4.9.1.1. Creating a CRL
Create a CRL in which the token issuer’s certificate is valid. The example uses OpenSSL.
$> openssl ca -gencrl -out crl-tokenissuer-valid.pem
Note
|
Windows and OpenSSL
Windows does not include OpenSSL by default. For Windows platforms, an additional download of OpenSSL or an alternative is required. |
Revoke a certificate and generate a new CRL containing the revoked certificate:
$> openssl ca -revoke tokenissuer.crt
$> openssl ca -gencrl -out crl-tokenissuer-revoked.pem
-
Use the following command to view the serial numbers of the revoked certificates:
$> openssl crl -inform PEM -text -noout -in crl-tokenissuer-revoked.pem
7.4.9.1.2. Enabling Certificate Revocation
Note
|
Enabling CRL revocation or modifying the CRL file will require a restart of DDF to apply updates. |
-
Place the CRL in <DDF_HOME>/etc/keystores.
-
Add the line org.apache.ws.security.crypto.merlin.x509crl.file=etc/keystores/<CRL_FILENAME> to the following files (replace <CRL_FILENAME> with the URL or file path of the CRL location):
-
<DDF_HOME>/etc/ws-security/server/encryption.properties
-
<DDF_HOME>/etc/ws-security/issuer/encryption.properties
-
<DDF_HOME>/etc/ws-security/server/signature.properties
-
<DDF_HOME>/etc/ws-security/issuer/signature.properties
-
Adding this property will also enable CRL revocation for any context policy implementing PKI authentication.
For example, adding an authentication policy in the Web Context Policy Manager of /search=SAML|PKI
will disable basic authentication, require a certificate for the search UI, and allow a SAML SSO session to be created.
If a certificate is not in the CRL, it will be allowed through; otherwise, the user will get a 401 error.
If no certificate is provided, the guest handler will grant guest access.
This also enables CRL revocation for the STS endpoint.
The STS CRL Interceptor monitors the same encryption.properties file and operates in an identical manner to the PKI Authentication’s CRL handler. Enabling the CRL via the encryption.properties file will also enable it for the STS, and also requires a restart.
If the CRL cannot be placed in <DDF_HOME>/etc/keystores but can be accessed via an HTTPS URL:
-
Navigate to the Admin Console.
-
Navigate to System → Configuration → Certificate Revocation List (CRL)
-
Add the HTTPS URL under CRL URL address
-
Check the Enable CRL via URL option
A local CRL file will be created and the encryption.properties
and signature.properties
files will be set as mentioned above.
The PKIHandler implements CRL revocation, so any web context that is configured to use PKI authentication will also use CRL revocation if revocation is enabled.
-
After enabling revocation (see above), open the Web Context Policy Manager.
-
Add or modify a Web Context to use PKI in authentication. For example, enabling CRL for the Search UI endpoint would require adding an authorization policy of
/search=SAML|PKI
-
If guest access is required, add
GUEST
to the policy. Ex,/search=SAML|PKI|GUEST
.
With guest access, a user with a revoked certificate will be given a 401 error, but users without a certificate will be able to access the web context as the guest user.
The STS CRL interceptor does not need a web context specified.
The CRL interceptor for the STS will become active after specifying the CRL file path, or the URL for the CRL, in the encryption.properties
file and restarting DDF.
Note
|
Disabling or enabling CRL revocation or modifying the CRL file will require a restart of DDF to apply updates. If CRL checking is already enabled, adding a new context via the Web Context Policy Manager will not require a restart. |
Note
|
This section explains how to add CXF’s CRL revocation method to an endpoint and not the CRL revocation method in the |
This guide assumes that the endpoint being created uses CXF and is being started via Blueprint from inside the OSGi container. If other tools are being used the configuration may differ.
Add the following property to the jaxws endpoint in the endpoint’s blueprint.xml:
<entry key="ws-security.enableRevocation" value="true"/>
Example jaxws:endpoint with the property:
<jaxws:endpoint id="Test" implementor="#testImpl"
wsdlLocation="classpath:META-INF/wsdl/TestService.wsdl"
address="/TestService">
<jaxws:properties>
<entry key="ws-security.enableRevocation" value="true"/>
</jaxws:properties>
</jaxws:endpoint>
A Warning similar to the following will be displayed in the logs of the source and endpoint showing the exception encountered during certificate validation:
11:48:00,016 | WARN | tp2085517656-302 | WSS4JInInterceptor | ecurity.wss4j.WSS4JInInterceptor 330 | 164 - org.apache.cxf.cxf-rt-ws-security - 2.7.3 |
org.apache.ws.security.WSSecurityException: General security error (Error during certificate path validation: Certificate has been revoked, reason: unspecified)
at org.apache.ws.security.components.crypto.Merlin.verifyTrust(Merlin.java:838)[161:org.apache.ws.security.wss4j:1.6.9]
at org.apache.ws.security.validate.SignatureTrustValidator.verifyTrustInCert(SignatureTrustValidator.java:213)[161:org.apache.ws.security.wss4j:1.6.9]
[ ... section removed for space]
Caused by: java.security.cert.CertPathValidatorException: Certificate has been revoked, reason: unspecified
at sun.security.provider.certpath.PKIXMasterCertPathValidator.validate(PKIXMasterCertPathValidator.java:139)[:1.6.0_33]
at sun.security.provider.certpath.PKIXCertPathValidator.doValidate(PKIXCertPathValidator.java:330)[:1.6.0_33]
at sun.security.provider.certpath.PKIXCertPathValidator.engineValidate(PKIXCertPathValidator.java:178)[:1.6.0_33]
at java.security.cert.CertPathValidator.validate(CertPathValidator.java:250)[:1.6.0_33]
at org.apache.ws.security.components.crypto.Merlin.verifyTrust(Merlin.java:814)[161:org.apache.ws.security.wss4j:1.6.9]
... 45 more
7.4.9.2. Managing an Online Certificate Status Protocol (OCSP) Server
An Online Certificate Status Protocol is a protocol used to verify the revocation status of a certificate. An OCSP server can be queried with a certificate to verify if it is revoked.
The advantage of using an OCSP Server over a CRL is the fact that a local copy of the revoked certificates is not needed.
7.4.9.2.1. Enabling OCSP Revocation
-
Navigate to the Admin Console.
-
Navigate to System → Configuration → Online Certificate Status Protocol (OCSP).
-
Add the URL of the OCSP server under OCSP server URL.
-
Check the Enable validating a certificate against an OCSP server option.
Note
|
If an error occurs while communicating with the OCSP server, an alert will be posted to the Admin Console. Until the error is resolved, certificates will not be verified against the server. |
7.5. Configuring Data Management
Data ingested into DDF has security attributes that can be mapped to users' permissions to ensure proper access. This section covers configurations that ensure only the appropriate data is contained in or exposed by DDF.
7.5.1. Configuring Solr
The default catalog provider for DDF is Solr. If using another catalog provider, see Changing Catalog Providers.
7.5.1.1. Configuring Solr Catalog Provider Synonyms
When configured, text searches in Solr will utilize synonyms when attempting to match text within the catalog.
Synonyms are used during keyword/anyText searches as well as when searching on specific text attributes when using the like / contains operator.
Text searches using the equality / exact match operator will not utilize synonyms.
Solr utilizes a synonyms.txt file which exists for each Solr core.
Synonym matching is most pertinent to metacards, which are contained within two cores: catalog and metacard_cache.
7.5.1.1.1. Defining synonym rules in the Solr Provider
-
Edit the
synonyms.txt
file under thecatalog
core. For each synonym group you want to define, add a line with the synonyms separated by a comma. For example:
United States, United States of America, the States, US, U.S., USA, U.S.A
-
Save the file.
-
Repeat the above steps for the
metacard_cache
core. -
Restart DDF.
Note
|
Data does not have to be re-indexed for the synonyms to take effect. |
7.5.1.2. Hardening Solr
The following sections provide hardening guidance for Solr; however, they are provided only as reference and additional security requirements may be added.
7.5.1.2.1. Hardening Solr Server Configuration
The Solr server is configured to be secure by default, so no additional hardening should be necessary. The default configuration starts Solr with TLS enabled and basic authentication required. This means DDF must trust Solr’s PKI certificate.
7.5.1.2.2. Solr Server Password Management
By default, DDF is configured to use Solr server. To verify this, view the property solr.client. If the property is set to HttpSolrClient, DDF is configured to use Solr server.
To ensure the security of its communication with Solr server, DDF sends HTTP requests over TLS. Solr is configured to use basic authentication to further ensure the requests originated from DDF. There are several system properties that control basic authentication and password management.
-
solr.useBasicAuth: Send the basic authentication header if this property is true.
-
solr.username: Username for basic authentication with Solr server.
-
solr.password: Password for basic authentication.
-
solr.attemptAutoPasswordChange: If this property is true, DDF attempts to change the default password to a randomly generated secure password if it detects the default password is in use. The new password is encrypted and then stored in the system properties.
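Taken together, the relevant entries in <DDF_HOME>/etc/custom.system.properties might look like the following excerpt (the values shown are illustrative, not necessarily the shipped defaults):
solr.useBasicAuth = true
solr.username = admin
solr.password = admin
solr.attemptAutoPasswordChange = true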
The Solr distribution included with DDF comes already configured with a user. To see the username or default password, either inspect the file <DDF_HOME>/etc/custom.system.properties or refer to the properties here.
A limitation of the current implementation is that the Solr password is not recoverable. Further, the migration command does not currently migrate the password. It may be necessary to reset the password:
-
After a migration.
-
If the administrator needs access to the Solr admin UI.
-
If the administrator wants to use their own password.
-
To prevent DDF from attempting to change the password, set the property solr.attemptAutoPasswordChange to false in the file <DDF_HOME>/etc/custom.system.properties.
-
To change the Solr password to a specific string, send Solr an HTTP POST request. This is covered in the official Solr documentation. Here is an example that uses the command line utility curl to change the password from admin to newpassword:
curl -k -u "admin:admin" "https://{FQDN}:{PORT}/solr/admin/authentication" -H 'Content-type:application/json' -d "{ 'set-user': {'admin' : 'newpassword'}}"
-
Encrypt the password using the Encryption Service. The encryption command enciphers the password. It is safe to save the persisted password in a file.
-
Update the property solr.password in the file <DDF_HOME>/etc/custom.system.properties to be the output from the encryption command. Be sure to include the ENC( and ) characters produced by the encryption command; see the example after these steps. Note that the default password is not enclosed in ENC() because that is not necessary for cleartext. Cleartext is used by the system exactly as it appears.
-
Finally, restart DDF.
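For illustration, the updated entry might look like the following (the encrypted value is a hypothetical placeholder, not real output from the encryption command):
solr.password = ENC(Gv6Rk0l2bX3LJp4Qw9sN1A==)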
-
Restore the
<DDF_HOME>/solr/server/solr/security.json
from a zip file of the DDF distribution.
OR
-
Edit the
<DDF_HOME>/solr/server/solr/security.json
file. Solr stores a salted hash of the user passwords in this file. -
Assuming the Solr username is
admin
, change the credentials section to match this string:"credentials": { "admin": "EjjOS/zyQ1KQQdSXFb/rFm7w6MItU5pmdthM35ZiJaA= ZZI7d4jf/8hz5oZz7ljBE6+uv1wqncj+VudX3arbib4="}
The quoted string following the username
admin
is the salted hash for the passwordadmin
. -
Edit the file
<DDF_HOME>/etc/custom.system.properties
and change the value ofsolr.password
toadmin
. -
Optional: Prevent DDF from automatically changing the Solr password.
To disable Solr’s basic authentication mechanism, rename or remove the file <DDF_HOME>/solr/server/solr/security.json and restart Solr. The file security.json configures Solr to use basic authentication and defines Solr users. If the file is not present, Solr requires no login. This could be a security issue in many environments, and it is recommended to never disable Solr authentication in an operational environment. If authentication is disabled, the system property solr.useBasicAuth may be set to false.
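In that case, the corresponding entry in <DDF_HOME>/etc/custom.system.properties would be:
solr.useBasicAuth = false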
7.5.1.2.3. Configuring Solr Encryption
While it is possible to encrypt the Solr index, it decreases performance significantly. An encrypted Solr index also can only perform exact match queries, not relative or contextual queries. As this drastically reduces the usefulness of the index, this configuration is not recommended. The recommended approach is to encrypt the entire drive through the Operating System of the server on which the index is located.
7.5.1.3. Accessing the Solr Admin UI
The Solr Admin UI for Solr server configurations is generally inaccessible through a web browser. A web browser can be configured to access the Solr Admin UI if required.
7.5.1.3.1. Configuring a Browser to Access Solr Admin UI
The Solr server configuration is secure by default. Solr server requires a TLS connection with client authentication. Solr only allows access to clients that present a trusted certificate.
7.5.1.3.2. Using DDF Keystores
Solr server uses the same keystores as DDF. A simple way to enable access to the Solr Admin UI is to install DDF’s own private key/certificate entry into a browser. The method to export DDF’s private key/certificate entry depend on the type of keystore being used. The method to import the private key/certificate entry into the browser depends on the operating system, and the browser itself. For more information consult the browser’s documentation.
If the browser is not correctly configured with a certificate that Solr trusts, the browser displays an error message about client authentication failing, or a message that the client certificate is invalid.
7.5.1.3.3. Solr Admin UI’s URL
The Solr server’s URL is configured in DDF’s custom.system.properties file. See solr.http.url for more information.
An example of a typical URL for the Solr Admin UI is https://hostname:8994.
7.5.2. Changing Catalog Providers
This scenario describes how to reconfigure DDF to use a different catalog provider.
This scenario assumes DDF is already running.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Find and Stop the installed Catalog Provider
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Find and Start the desired Catalog Provider.
7.5.3. Changing Hostname
By default, the STS server, STS client, and the rest of the services use the system property org.codice.ddf.system.hostname, which defaults to localhost rather than to the fully qualified domain name of the DDF instance.
Assuming the DDF instance is providing these services, the configuration must be updated to use the fully qualified domain name as the service provider.
If the DDF is being accessed from behind a proxy or load balancer, set the system property org.codice.ddf.external.hostname
to the hostname users will be using to access the DDF.
This can be changed during Initial Configuration or later by editing the <DDF_HOME>/etc/custom.system.properties
file.
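For example, the relevant entries in <DDF_HOME>/etc/custom.system.properties might look like the following (the hostnames are hypothetical placeholders):
org.codice.ddf.system.hostname = ddf.example.com
org.codice.ddf.external.hostname = www.example.com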
7.5.4. Configuring Errors and Warnings
DDF performs several types of validation on metadata ingested into the catalog. Depending on need, configure DDF to act on the warnings or errors discovered.
7.5.4.1. Enforcing Errors or Warnings
Prevent data with errors or warnings from being ingested at all.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select Configuration.
-
Select Metacard Validation Marker Plugin.
-
Enter ID of validator(s) to enforce.
-
Select Enforce errors to prevent ingest for errors.
-
Select Enforce warnings to prevent ingest for warnings.
7.5.4.2. Hiding Errors or Warnings from Queries
Prevent invalid metacards from being displayed in query results, unless specifically queried.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select Configuration.
-
Select Catalog Federation Strategy.
-
Deselect Show Validations Errors to hide metacards with errors.
-
Deselect Show Validations Warnings to hide metacards with warnings.
7.5.4.3. Hiding Errors and Warnings from Users Based on Role
-
Required Step for Security Hardening
Prevent certain users from seeing data with certain types of errors or warnings. Typically, this is used for security markings. If the Metacard Validation Filter Plugin is configured to Filter errors and/or Filter warnings, metacards with errors/warnings will be hidden from users without the specified user attributes.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select Configuration.
-
Select Metacard Validation Filter Plugin.
-
For Attribute map, enter both the metacard
SECURITY
attribute to filter and the user attribute to filter.-
The default attribute for viewing invalid metacards is
invalid-state
-
invalid-state=<USER ROLE>
. -
Replace
<USER ROLE>
with the roles that should be allowed to view invalid metacards.NoteTo harden the system and prevent other DDF systems from querying invalid data in the local catalog, it is recommended to create and set user roles that are unique to the local system (ie. a user role that includes a UUID).
-
-
-
Select Filter errors to filter errors. Users without the
invalid-state
attribute will not see metacards with errors. -
Select Filter warnings to filter warnings. Users without the
invalid-state
attribute will not see metacards with warnings.
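Following the Note above, a sample Attribute map entry that uses a role unique to the local system might look like this (the role name and UUID fragment are hypothetical):
invalid-state=data-manager-1b9f4c6e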
7.5.5. Content Directory Monitor
The Content Directory Monitor (CDM) provides the capability to easily add content and metacards into the Catalog by placing a file in a directory.
7.5.5.1. Installing the Content Directory Monitor
The Content Directory Monitor is installed by default with a standard installation of the Catalog application.
7.5.5.2. Configuring Permissions for the Content Directory Monitor
Tip
|
If monitoring a WebDav server, then adding these permissions is not required and this section can be skipped. |
Configuring a Content Directory Monitor requires adding permissions to the Security Manager before CDM configuration.
Configuring a CDM requires adding read and write permissions to the directory being monitored. The following permissions, replacing <DIRECTORY_PATH> with the path of the directory being monitored, are required for each configured CDM and should be placed in the CDM section inside <DDF_HOME>/security/configurations.policy.
Warning
|
Adding New Permissions
After adding permissions, a system restart is required for them to take effect. |
-
permission java.io.FilePermission "<DIRECTORY_PATH>", "read";
-
permission java.io.FilePermission "<DIRECTORY_PATH>${/}-", "read, write";
Trailing slashes after <DIRECTORY_PATH> have no effect on the permissions granted. For example, adding a permission for "${/}test${/}path" and "${/}test${/}path${/}" are equivalent. The recursive forms "${/}test${/}path${/}-", and "${/}test${/}path${/}${/}-" are also equivalent.
Line 1 gives the CDM the permissions to read from the monitored directory path. Line 2 gives the CDM the permissions to recursively read and write from the monitored directory path, specified by the directory path’s suffix "${/}-".
If a CDM configuration is deleted, then the corresponding permissions that were added should be deleted to avoid granting unnecessary permissions to parts of the system.
7.5.5.3. Configuring the Content Directory Monitor
Important
|
Content Directory Monitor Permissions
When configuring a Content Directory Monitor, make sure to set permissions on the new directory to allow DDF to access it. Setting permissions should be done before configuring a CDM. Also, don’t forget to add permissions for products outside of the monitored directory. See Configuring Permissions for the Content Directory Monitor for in-depth instructions on configuring permissions. |
Note
|
If there’s a metacard that points to a resource outside of the CDM, then you must configure the URL Resource Reader to be able to download it. |
Warning
|
Monitoring Directories In Place
If monitoring a directory in place, then the URL Resource Reader must be configured prior to configuring the CDM to allow reading from the configured directory. This allows the Catalog to download the products. |
Configure the CDM from the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select Catalog Content Directory Monitor.
See Content Directory Monitor configurations for all possible configurations.
7.5.5.4. Using the Content Directory Monitor
The CDM processes files in a directory and all of its sub-directories. The CDM offers three options:
-
Delete
-
Move
-
Monitor in place
Regardless of the option, DDF takes each file in a monitored directory structure and creates a metacard for it. The metacard is linked to the file. The behavior of each option is given below.
Delete:
-
Copies the file into the Content Repository.
-
Creates a metacard in the Catalog from the file.
-
Erases the original file from the monitored directory.
Move:
-
Copies the file into the directory
.\ingested
(this will double the disk space used) -
Copies the file into the Content Repository.
-
Creates a metacard in the Catalog from the file.
-
Erases the original file from the monitored directory.
Monitor in place:
-
Creates a metacard in the Catalog from the file.
-
Creates a reference from the metacard to the original file in the monitored directory.
-
If the original file is deleted, the metacard is removed from the Catalog.
-
If the original file is modified, the metacard is updated to reflect the new content.
-
If the original file is renamed, the old metacard is deleted and a new metacard is created.
The CDM supports parallel processing of files (up to 8 files processed concurrently). This is configured by setting the number of Maximum Concurrent Files in the configuration. A maximum of 8 is imposed to protect system resources.
When the CDM is set up, the directory specified is continuously scanned, and files are locked for processing based on the ReadLock Time Interval. This does not apply to the Monitor in place processing directive. Files will not be ingested without having a ReadLock that has observed no change in the file size. This is done so that files that are in transit will not be ingested prematurely. The interval should depend on the speed of the copy to the directory monitor (e.g., network drive vs. local disk). For local files, the default value of 500 milliseconds is recommended. The recommended interval for network drives is 1000 - 2000 milliseconds. If the value provided is less than 100, 100 milliseconds will be used. It is also recommended that the ReadLock Time Interval be set to a lower amount of time when the Maximum Concurrent Files is set above 1 so that files are locked in a timely manner and processed as soon as possible. When a higher ReadLock Time Interval is set, the time it takes for files to be processed increases.
The CDM supports setting metacard attributes directly when DDF ingests a file. Custom overrides are entered in the form:
attribute-name=attribute-value
For example, to set the contact email for all metacards, add the attribute override:
contact.point-of-contact-email=doctor@clinic.com
Each override sets the value of a single metacard attribute. To set the value of an additional attribute, select the "plus" icon in the UI. This creates an empty line for the entry.
To set multi-valued attributes, use a separate override for each value. For example, to add the keywords PPI and radiology to each metacard, add the custom attribute overrides:
topic.keyword=PPI
topic.keyword=radiology
Attributes will only be overridden if they are part of the metacard type or are injected.
All attributes in the catalog taxonomy tables are injected into all metacards by default and can be overridden.
Important
|
If an overridden attribute is not part of the metacard type or injected, the attribute will not be added to the metacard. |
For example, if the metacard type contains contact email (contact.point-of-contact-email) but the value is not currently set, adding an attribute override will set the attribute value. To override attributes that are not part of the metacard type, attribute injection can be used.
The CDM blacklist uses the "bad.files" and "bad.file.extensions" properties from the custom.system.properties file in "etc/" in order to prevent malicious or unwanted data from being ingested into DDF. While the CDM automatically omits hidden files, this is particularly useful when an operating system automatically generates files that should not be ingested. One such example of this is "thumbs.db" in Windows. This file type and any temporary files are included in the blacklist.
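For reference, the blacklist entries in <DDF_HOME>/etc/custom.system.properties take a comma-separated form similar to the following (the exact lists shipped with DDF may differ):
bad.files = thumbs.db,Thumbs.db,desktop.ini
bad.file.extensions = .bak,.tmp,.part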
If the CDM fails to read the file, an error will be logged in the ingest log. If the directory monitor is configured to Delete or Move, the original file is also moved to the .errors directory.
-
Multiple directories can be monitored. Each directory has an independent configuration.
-
To support the monitoring in place behavior, DDF indexes the files to track their names and modification timestamps. This enables the Content Directory Monitor to take appropriate action when files are changed or deleted.
-
The Content Directory Monitor recursively processes all subdirectories.
7.5.6. Configuring System Usage Message
The Platform UI configuration contains the settings for displaying messages to users at login or in banners in the headers and footers of all pages. For example, this configuration can provide warnings that system usage is monitored or controlled.
-
Navigate to the Admin Console.
-
Select the Platform application.
-
Select Configuration.
-
Select Platform UI Configuration.
-
Select Enable System Usage Message.
-
Enter text in the remaining fields and save.
See the Platform UI for all possible configurations.
7.5.7. Configuring Data Policy Plugins
Configure the data-related policy plugins to determine the accessibility of data held by DDF.
7.5.7.1. Configuring the Metacard Attribute Security Policy Plugin
The Metacard Attribute Security Policy Plugin combines existing metacard attributes to make new attributes and adds them to the metacard.
-
Navigate to the Admin Console.
-
Select the Catalog application tile
-
Select the Configuration tab
-
Select the Metacard Attribute Security Policy Plugin.
Sample configuration of the Metacard Attribute Security Policy Plugin:
To configure the plugin to combine the attributes sourceattribute1 and sourceattribute2 into a new attribute destinationattribute1 using the union, enter these two lines under the title Metacard Union Attributes:
Metacard Union Attributes |
---|
sourceattribute1=destinationattribute1 |
sourceattribute2=destinationattribute1 |
See Metacard Attribute Security Policy Plugin configurations for all possible configurations.
7.5.7.2. Configuring the Metacard Validation Marker Plugin
By default, the Metacard Validation Marker Plugin will mark metacards with validation errors and warnings as they are reported by each metacard validator and then allow the ingest.
To prevent the ingest of certain invalid metacards, the Metacard Validity Marker
plugin can be configured to "enforce" one or more validators.
Metacards that are invalid according to an "enforced" validator will not be ingested.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the Metacard Validity Marker Plugin.
-
If desired, enter the ID of any metacard validator to enforce. This will prevent ingest of metacards that fail validation.
-
If desired, check Enforce Errors or Enforce Warnings, or both.
-
See Metacard Validity Marker Plugin configurations for all possible configurations.
7.5.7.3. Configuring the Metacard Validity Filter Plugin
The Metacard Validity Filter Plugin determines whether metacards with validation errors or warnings are filtered from query results.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the Metacard Validity Filter Plugin.
-
Check Filter Errors to hide metacards with errors from users.
-
Check Filter Warnings to hide metacards with warnings from users.
-
See Metacard Validity Filter Plugin configurations for all possible configurations.
7.5.7.4. Configuring the XML Attribute Security Policy Plugin
The XML Attribute Security Policy Plugin finds security attributes contained in a metacard’s metadata.
-
Navigate to the Admin Console.
-
Select the Catalog application tile.
-
Select the Configuration tab.
-
Select the XML Attribute Security Policy Plugin configuration.
See XML Attribute Security Policy Plugin configurations for all possible configurations.
7.5.8. Configuring Data Access Plugins
Configure access plugins to act upon the rules and attributes configured by the policy plugins and user attributes.
7.5.8.1. Configuring the Security Audit Plugin
The Security Audit Plugin audits specific metacard attributes.
To configure the Security Audit Plugin:
-
Navigate to the Admin Console.
-
Select Catalog application.
-
Select Configuration tab.
-
Select Security Audit Plugin.
Add the desired metacard attributes that will be audited when modified.
See Security Audit Plugin configurations for all possible configurations.
7.6. Configuring Security Policies
User attributes and Data attributes are matched by security policies defined within DDF.
7.6.1. Configuring the Web Context Policy Manager
The Web Context Policy Manager defines all security policies for REST endpoints within DDF. It defines:
-
the realms a context should authenticate against.
-
the type of authentication that a context requires.
-
any user attributes required for authorization.
See Web Context Policy Manager Configurations for detailed descriptions of all fields.
7.6.1.1. Authentication Types
As you add REST endpoints, you may need to add different types of authentication through the Web Context Policy Manager.
Any web context that allows or requires specific authentication types should be added here with the following format:
/<CONTEXT>=<AUTH_TYPE>|<AUTH_TYPE>|...
Authentication Type | Description |
---|---|
saml |
Activates single-sign on (SSO) across all REST endpoints that use SAML. |
basic |
Activates basic authentication. |
PKI |
Activates public key infrastructure authentication. |
IdP |
Activates SAML Web SSO authentication support. Additional configuration is necessary. |
guest |
Provides guest access. |
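For example, a context that accepts either PKI or basic authentication might be configured with the line below (the /search context is illustrative):
/search=PKI|basic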
7.6.1.2. Required Attributes
The fields for required attributes allows configuring certain contexts to only be accessible to users with pre-defined attributes.
For example, the default required attribute for the /admin context is role=system-admin, limiting access to the Admin Console to system administrators.
7.6.1.3. White Listed Contexts
White listed contexts are trusted contexts which will bypass security. Any sub-contexts of a white listed context will be white listed as well, unless they are specifically assigned a policy.
7.6.2. Configuring Catalog Filtering Policies
Filtering is the process of evaluating security markings on data products, comparing them to the users’ permissions, and protecting resources from inappropriate access.
There are two options for processing filtering policies: internally, or through the use of a policy formatted in eXtensible Access Control Markup Language (XACML). The procedure for setting up a policy differs depending on whether that policy is to be used internally or by the external XACML processing engine.
7.6.2.1. Setting Internal Policies
-
Navigate to the Admin Console.
-
Select the Security application.
-
Click the Configuration tab.
-
Click on the Security AuthZ Realm configuration.
-
Add any attribute mappings necessary to map between subject attributes and the attributes to be asserted.
-
For example, the above example would require two Match All mappings of subjectAttribute1=assertedAttribute1 and subjectAttribute2=assertedAttribute2.
-
Match One mappings would contain
subjectAttribute3=assertedAttribute3
andsubjectAttribute4=assertedAttribute4
.
-
With the security-pdp-authz
feature configured in this way, the above Metacard would be displayed to the user.
Note that this particular configuration would not require any XACML rules to be present.
All of the attributes can be matched internally and there is no reason to call out to the external XACML processing engine.
For more complex decisions, it might be necessary to write a XACML policy to handle certain attributes.
7.6.2.2. Setting XACML Policies
To set up a XACML policy, place the desired XACML policy in the <distribution root>/etc/pdp/policies
directory and update the included access-policy.xml
to include the new policy.
This is the directory in which the PDP will look for XACML policies every 60 seconds.
See Developing XACML Policies for more information about custom XACML policies.
7.6.2.3. Catalog Filter Policy Plugins
Several Policy Plugins for catalog filtering exist currently: Metacard Attribute Security Policy Plugin and XML Attribute Security Policy Plugin. These Policy Plugin implementations allow an administrator to easily add filtering capabilities to some standard Metacard types for all Catalog operations. These plugins will place policy information on the Metacard itself that allows the Filter Plugin to restrict unauthorized users from viewing content they are not allowed to view.
7.7. Configuring User Interfaces
DDF has several user interfaces available for users.
7.7.1. Configuring Intrigue
Start here to configure Intrigue.
7.7.1.1. Configuring Default Layout for Intrigue
Intrigue includes several options for users to display search results. By default, users start with a 3D map and an Inspector to view details of results or groups of results. Add or remove additional visualizations to the default view through the Default Layout UI. Users can customize their individual views as well.
- 3D Map (Default)
-
Display a fully-interactive three-dimensional globe.
- 2D Map
-
Display a less resource-intensive two-dimensional map.
- Inspector (Default)
-
Display a view of detailed information about a search result.
- Histogram
-
Compare attributes of items in a search result set as a histogram.
- Table
-
Compare attributes of items in a search result set as a table.
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Default Layout tab.
-
Add or Remove visualizations as desired.
-
To add a visualization, select the Add icon.
-
To remove a visualization, select the Delete icon on the tab for that visualization.
-
-
Select Save to complete.
7.7.1.2. Configuring Map Layers for Intrigue
Customize the look of the map displayed to users in Intrigue by adding or removing map layers through the Map Layers UI. Equivalent addition and deletion of a map layer can be found in Map Configuration for Intrigue.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Map Layers tab.
-
Add, Configure or Remove map layers as desired.
Adding a Map Layer translates to adding an Imagery Provider.
-
Enter a unique alphanumeric Name (no special characters).
-
Enter the Provider URL for the server hosting the map layer instance.
-
Select Proxy if security policies or the tile server do not allow Cross-Origin Resource Sharing (CORS).
-
Select Allow Credential Formatting if the map layer server prompts for credentials.
-
If selected, requests will fail if the server does not prompt for credentials.
-
-
Select from the list of available Provider Types.
-
Select a value for the Alpha to set the overall opacity of the map layer.
-
Setting Alpha to 0 will prevent the layer from loading.
-
-
Select Show to make the layer visible in Intrigue. (Deselect to hide.)
-
Select Transparent if tile images contain transparency.
To remove all map layers, select RESET.
-
Move layers Up and Down in loading order with the Arrow Icons associated with each layer.
Select Advanced Configuration to edit the JSON-formatted configuration directly. See Catalog UI Search Configurations for examples of map layer configurations.
External links to the specific API documentation of the map layer are also available from the Advanced Configuration menu.
7.7.1.3. Map Configuration for Intrigue
Customize the look of the map displayed to users in Intrigue through the Catalog UI Search. Equivalent addition and deletion of a map layer can be found in Configuring Map Layers for Intrigue.
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Configuration tab.
-
Select the Catalog UI Search configuration.
-
Enter the properties of the map layer into the Imagery Provider in the proper syntax.
-
Example Imagery Provider Syntax:
{"type": "OSM", "url" "http://a.tile.openstreetmaps.org" "layers" ["layer1" "layer2"] "parameters" {"FORMAT" "image/png" "VERSION" "1.1.1"} "alpha" 0.5}
.-
"type": format of imagery provider.
-
"url": location of server hosting the imagery provider.
-
"layers": names of individual layers. (enclose list in square brackets`[ ]`).
-
"parameters": (enclose in braces
{}
)-
"FORMAT": image type used by imagery provider.
-
"VERSION": version of imagery provider to use.
-
"alpha": opacity of imagery provider layer.
-
-
-
-
Delete the properties in Imagery Provider text box.
-
Enter the properties into the Terrain Provider in the proper syntax.
-
A default Terrain Provider is provided:
{ "type": "CT", "url": "http://assets.agi.com/stk-terrain/tilesets/world/tiles" }
.-
"type": format of terrain provider.
-
"url": location of server hosting the terrain provider.
-
-
-
Check/Uncheck Show Gazetteer to control the place name search functionality.
-
Check/Uncheck Use Online Gazetteer to control whether Intrigue uses the online gazetteer.
-
Unchecked: use local gazetteer service.
-
7.7.1.4. Configuring User Access to Ingest and Metadata for Intrigue
Intrigue lets the administrator control user access to ingest and metadata. The administrator can show or hide the uploader, controlling whether users can ingest products, and can also choose whether users may edit existing metadata. By default, the uploader is available to users and editing is allowed.
Choose to hide or show the uploader. Note that hiding the uploader will remove the users' ability to ingest.
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Configuration tab.
-
Select Catalog UI Search.
-
Select "Show Uploader".
-
Select Save to complete.
Allow or restrict the editing of metadata.
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Configuration tab.
-
Select Catalog UI Search.
-
Select "Allow Editing".
-
Select Save to complete.
7.7.1.5. Configuring the Intrigue Upload Editor
The upload editor in Intrigue allows users to specify attribute overrides which should be applied on ingest. Administrators control the list of attributes that users may edit and can mark certain attributes as required. They may also disable the editor if desired.
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Configuration tab.
-
Select Catalog UI Search.
-
Use the "Upload Editor: Attribute Configuration" field to configure the attributes shown in the editor.
-
Use the "Upload Editor: Required Attributes" field to mark attributes as required.
-
Select Save to complete.
See Intrigue Configurations for more information regarding these configurations.
The editor only appears if it has attributes to show. If the upload editing capability is not desired, simply remove all entries from the attribute configuration and the editor will be hidden.
7.7.1.6. Configuring Search Options for Intrigue
Intrigue provides a few options to control what metacards may be searched. By default, the user can perform searches that produce historical metacards, archived metacards, and metacards from the local catalog. However, administrators can disable searching for any of these types of metacards.
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Configuration tab.
-
Select Catalog UI Search.
-
Scroll down to the "Disable Local Catalog" option with the other options below it.
-
To disable searching for a metacard type, check the corresponding box.
-
Select Save to complete.
7.7.1.7. Configuring Query Feedback for Intrigue
Intrigue provides an option to allow users to submit Query Feedback.
-
First, configure the Email Service to point to a mail server. See Email Service Configurations.
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Configuration tab.
-
Select Catalog UI Search.
-
Select the Enable Query Feedback option to enable the query comments option for users in Intrigue.
-
Add a Query Feedback Email Subject Template.
-
Add a Query Feedback Email Body Template. The template may include HTML formatting.
-
Add the Query Feedback Email Destination.
-
Select the Save button.
The following keywords in the templates will be replaced with submission-specific values, or "Unknown" if unknown.
Template keyword | Replacement value |
---|---|
{{AUTH_USERNAME}} |
Username of the security subsystem (see Security Framework) |
{{USERNAME}} |
Username of the user who submitted the Query Feedback |
{{EMAIL}} |
Email of the user who submitted the Query Feedback |
{{WORKSPACE_ID}} |
Workspace ID of the query |
{{WORKSPACE_NAME}} |
Workspace Name of the query |
{{QUERY}} |
Query |
{{QUERY_INITIATED_TIME}} |
Time of the query |
{{QUERY_STATUS}} |
Status of the query |
{{QUERY_RESULTS}} |
Results of the query |
{{COMMENTS}} |
Comments provided by the user about the query |
To submit Query Feedback in Intrigue:
-
Perform a search on any workspace.
-
Select the 3 dots on the results tab.
-
Choose the Submit Feedback option.
-
Add comments in the input box.
-
Select the Send button.
See Catalog UI Search Configurations for default Query Feedback configurations.
7.8. Configuring Federation
DDF is able to federate to other data sources, including other instances of DDF, with some simple configuration.
7.8.1. Enable SSL for Clients
In order for outbound secure connections (HTTPS) to be made from components like Federated Sources and Resource Readers, configuration may need to be updated with keystores and security properties.
These values are configured in the <DDF_HOME>/etc/custom.system.properties
file.
The following values can be set:
Property | Sample Value | Description |
---|---|---|
javax.net.ssl.trustStore |
etc/keystores/serverTruststore.jks |
The java keystore that contains the trusted public certificates for Certificate Authorities (CA’s) that can be used to validate SSL Connections for outbound TLS/SSL connections (e.g. HTTPS). When making outbound secure connections a handshake will be done with the remote secure server and the CA that is in the signing chain for the remote server’s certificate must be present in the trust store for the secure connection to be successful. |
javax.net.ssl.trustStorePassword |
changeit |
This is the password for the truststore listed in the above property |
javax.net.ssl.keyStore |
etc/keystores/serverKeystore.jks |
The keystore that contains the private key for the local server that can be used for signing, encryption, and SSL/TLS. |
javax.net.ssl.keyStorePassword |
changeit |
The password for the keystore listed above |
javax.net.ssl.keyStoreType |
jks |
The type of keystore |
https.cipherSuites |
TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 |
The cipher suites that are supported when making outbound HTTPS connections |
https.protocols |
TLSv1.1,TLSv1.2 |
The protocols that are supported when making outbound HTTPS connections |
jdk.tls.client.protocols |
TLSv1.1,TLSv1.2 |
The protocols that are supported when making inbound HTTPS connections |
jdk.tls.ephemeralDHKeySize |
'matched' |
For X.509 certificate based authentication (of non-exportable cipher suites), the DH key size matching the corresponding authentication key is used, except that the size must be between 1024 bits and 2048 bits. For example, if the public key size of an authentication certificate is 2048 bits, then the ephemeral DH key size should be 2048 bits unless the cipher suite is exportable. This key sizing scheme keeps the cryptographic strength consistent between authentication keys and key-exchange keys. |
Note
|
<DDF_HOME> Directory
DDF is installed in the <DDF_HOME> directory. |
7.8.2. Configuring HTTP(S) Ports
To change HTTP or HTTPS ports from the default values, edit the custom.system.properties
file.
-
Open the file at <DDF_HOME>/etc/custom.system.properties
-
Change the value after the
=
to the desired port number(s):-
org.codice.ddf.system.httpsPort=8993
toorg.codice.ddf.system.httpsPort=<PORT>
-
org.codice.ddf.system.httpPort=8181
toorg.codice.ddf.system.httpPort=<PORT>
-
-
Restart DDF for changes to take effect.
Important
|
Do not use the Admin Console to change the HTTP port. While the Admin Console’s Pax Web Runtime offers this configuration option, it has proven to be unreliable and may crash the system. |
7.8.3. Configuring HTTP Proxy
The platform-http-proxy
feature proxies https to http for clients that cannot use HTTPS and should not have HTTP enabled for the entire container via the etc/org.ops4j.pax.web.cfg
file.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Select
platform-http-proxy
. -
Select the Play button to the right of the word “Uninstalled”
-
ALTERNATIVELY: Type the command feature:install platform-http-proxy.
-
Select Configuration tab.
-
Select HTTP to HTTPS Proxy Settings
-
Enter the Hostname to use for HTTPS connection in the proxy.
-
-
Click Save changes.
Note
|
HTTP Proxy and Hostname
The hostname should be set by default. Only configure the proxy if this is not working. |
7.8.4. Federation Strategy
A federation strategy federates a query to all of the Remote Sources in the query’s list, processes the results in a unique way, and then returns the results to the client. For example, implementations can choose to halt processing until all results return and then perform a mass sort or return the results back to the client as soon as they are received back from a Federated Source.
An endpoint can optionally specify the federation strategy to use when it invokes the query operation. Otherwise, the Catalog provides a default federation strategy that will be used: the Catalog Federation Strategy.
7.8.4.1. Configuring Federation Strategy
The Catalog Federation Strategy configuration can be found in the Admin Console.
-
Navigate to Admin Console.
-
Select Catalog
-
Select Configuration
-
Select Catalog Federation Strategy.
See Federation Strategy configurations for all possible configurations.
7.8.4.1.1. Catalog Federation Strategy
The Catalog Federation Strategy is the default federation strategy and is based on sorting metacards by the sorting parameter specified in the federated query.
The possible sorting values are:
-
metacard’s effective date/time
-
temporal data in the query result
-
distance data in the query result
-
relevance of the query result
The supported sorting orders are ascending and descending.
The default sorting value/order automatically used is relevance descending.
Warning
|
The Catalog Federation Strategy expects the results returned from the Source to be sorted based on whatever sorting criteria were specified. If a metadata record in the query results contains null values for the sorting criteria elements, the Catalog Federation Strategy expects that result to come at the end of the result list. |
7.8.5. Connecting to Sources
A source is a system consisting of a catalog containing Metacards.
Catalog sources are used to connect Catalog components to data sources, local and remote. Sources act as proxies to the actual external data sources, e.g., an RDBMS or a NoSQL database.
- Remote Source
-
Read-only data sources that support query operations but cannot be used to create, update, or delete metacards.
- Federated Sources
-
A federated source is a remote source that can be included in federated queries by request or as part of an enterprise query. Federated sources support query and site information operations only. Catalog modification operations, such as create, update, and delete, are not allowed. Federated sources also expose an event service, which allows the Catalog Framework to subscribe to event notifications when metacards are created, updated, and deleted.
Catalog instances can also be federated to each other. Therefore, a Catalog can also act as a federated source to another Catalog.
- Connected Sources
-
A Connected Source is a local or remote source that is always included in every local and enterprise query, but is hidden from being queried individually. A connected source’s identifier is removed in all query results by replacing it with DDF’s source identifier. The Catalog Framework does not reveal a connected source as a separate source when returning source information responses.
- Catalog Providers
-
A Catalog Provider is used to interact with data providers, such as files systems or databases, to query, create, update, or delete data. The provider also translates between DDF objects and native data formats.
All sources, including federated source and connected source, support queries, but a Catalog provider also allows metacards to be created, updated, and deleted. A Catalog provider typically connects to an external application or a storage system (e.g., a database), acting as a proxy for all catalog operations.
- Catalog Stores
-
A Catalog Store is an editable store that is either local or remote.
The following Federated Sources are available in a standard installation of DDF:
- Federated Source for Atlassian Confluence®
-
Retrieve pages, comments, and attachments from an Atlassian Confluence® REST API.
- CSW Specification Profile Federated Source
-
Queries a CSW version 2.0.2 compliant service.
- CSW Federation Profile Source
-
Queries a CSW version 2.0.2 compliant service.
- GMD CSW Source
-
Queries a GMD CSW APISO compliant service.
- OpenSearch Source
-
Performs OpenSearch queries for metadata.
- WFS 1.0 Source
-
Allows for requests for geographical features across the web.
- WFS 1.1 Source
-
Allows for requests for geographical features across the web.
- WFS 2.0 Source
-
Allows for requests for geographical features across the web.
The following Connected Sources are available in a standard installation of DDF:
- WFS 1.0 Source
-
Allows for requests for geographical features across the web.
- WFS 1.1 Source
-
Allows for requests for geographical features across the web.
- WFS 2.0 Source
-
Allows for requests for geographical features across the web.
The following Catalog Stores are available in a standard installation of DDF:
- Registry Store
-
Allows CSW messages to be turned into usable Registry metacards and for those metacards to be turned back into CSW messages.
The following Catalog Providers are available in a standard installation of DDF:
- Solr Catalog Provider
-
Uses Solr as a catalog.
The following Storage Providers are available in a standard installation of DDF:
- Content File System Storage Provider
-
Stores resources in a specified directory.
Sources Details: availability and configuration details of the available sources are given in the sections below.
7.8.5.1. Federated Source for Atlassian Confluence(R)
The Confluence source provides a Federated Source to retrieve pages, comments, and attachments from an Atlassian Confluence® REST API and turns the results into Metacards the system can use. The Confluence source does provide a Connected Source interface but its functionality has not been verified.
Confluence Source has been tested against the following versions of Confluence with REST API v2:
-
Confluence 1000.444.5 (Cloud)
-
Confluence 5.10.6 (Server)
-
Confluence 5.10.7 (Server)
The Confluence Federated Source is installed by default with a standard installation in the Catalog application.
Add a New Confluence Federated Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Add a New source.
-
Name the New source.
-
Select Confluence Federated Source from Binding Configurations.
Configure an Existing Confluence Federated Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See Confluence Federated Source configurations for all possible configurations.
Important
|
If an additional attribute is not part of the Confluence metacard type or injected, the attribute will not be added to the metacard. |
Most of the fields that can be queried on Confluence have some sort of restriction on them. Most of the fields do not support the like (~) operation, so the source will convert like queries to equal queries for attributes that don’t support like. If the source receives a query with attributes it doesn’t understand, it will just ignore them. If the query doesn’t contain any attributes that map to Confluence search attributes, an empty result set will be returned.
Depending on your version of Confluence, when downloading attachments you might get redirected to a different download URL. The default URLResourceReader configuration allows redirects, but if the option was disabled in the past, the download will fail. This can be fixed by re-enabling redirects in the URLResourceReader
configuration.
7.8.5.2. CSW Specification Profile Federated Source
The CSW Specification Profile Federated Source should be used when federating to an external (non-DDF-based) CSW (version 2.0.2) compliant service.
Add a New CSW Specification Profile Federated Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Add a New source.
-
Name the New source.
-
Select CSW Specification Profile Federated Source from Source Type.
Configure an Existing CSW Specification Profile Federated Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See CSW Specification Profile Federated Source configurations for all possible configurations.
-
Nearest neighbor spatial searches are not supported.
7.8.5.3. CSW Federation Profile Source
The CSW Federation Profile Source is DDF’s CSW Federation Profile which supports the ability to search collections of descriptive information (metadata) for data, services, and related information objects.
Use the CSW Federation Profile Source when federating to a DDF-based system.
Configure the CSW Federation Profile Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Add a New source.
-
Name the New source.
-
Select CSW Specification Profile Federated Source from Source Type.
Configure an Existing CSW Federated Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See CSW Federation Profile Source configurations for all possible configurations.
-
Nearest neighbor spatial searches are not supported.
7.8.5.4. Content File System Storage Provider
The Content File System Storage Provider is the default Storage Provider included with DDF.
The Content File System Storage Provider is installed by default with the Catalog application.
To configure the Content File System Storage Provider:
-
Navigate to the Admin Console.
-
Select Catalog.
-
Select Configuration.
-
Select Content File System Storage Provider.
See Content File System Storage Provider configurations for all possible configurations.
7.8.5.5. GMD CSW Source
The Geographic MetaData extensible markup language (GMD) CSW source supports the ability to search collections of descriptive information (metadata) for data, services, and related information objects, based on the Application Profile ISO 19115/ISO19119 .
Use the GMD CSW source if querying a GMD CSW APISO compliant service.
The GMD CSW source is installed by default with a standard installation in the Spatial application.
Configure a new GMD CSW APISO v2.0.2 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Add a New source.
-
Name the New source.
-
Select GMD CSW ISO Federated Source from Binding Configurations.
Configure an existing GMD CSW APISO v2.0.2 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See GMD CSW APISO v2.0.2 Source configurations for all possible configurations.
7.8.5.6. OpenSearch Source
The OpenSearch source provides a Federated Source that has the capability to do OpenSearch queries for metadata from Content Discovery and Retrieval (CDR) Search V1.1 compliant sources. The OpenSearch source does not provide a Connected Source interface.
The OpenSearch Source is installed by default with a standard installation in the Catalog application.
Configure a new OpenSearch Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Add a New source.
-
Name the New source.
-
Select OpenSearch Source from Binding Configurations.
Configure an existing OpenSearch Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See OpenSearch Source configurations for all possible configurations.
Use the OpenSearch source if querying a CDR-compliant search service is desired.
Element | OpenSearch HTTP Parameter | DDF Data Location |
---|---|---|
searchTerms |
q |
Pulled from the query and encoded in UTF-8. |
routeTo |
src |
Pulled from the query. |
maxResults |
mr |
Pulled from the query. |
count |
count |
Pulled from the query. |
startIndex |
start |
Pulled from the query. |
maxTimeout |
mt |
Pulled from the query. |
userDN |
dn |
DDF subject |
lat |
lat |
Pulled from the query if it is a point-radius query and the radius is > 0. |
lon |
lon |
 |
radius |
radius |
 |
box |
bbox |
Pulled from the query if it is a bounding-box query. |
geometry |
geometry |
Pulled from the DDF query and combined as a geometry collection if multiple spatial queries exist. |
polygon |
polygon |
According to the OpenSearch Geo Specification this is deprecated. Use the geometry parameter instead. |
start |
dtstart |
Pulled from the query if the query has temporal criteria for modified. |
end |
dtend |
 |
filter |
filter |
Pulled from the query. |
sort |
sort |
Calculated from the query. Format: |
7.8.5.7. Registry Store
The Registry Store is the interface that allows CSW messages to be turned into usable Registry metacards and for those metacards to be turned back into CSW messages.
The Registry Store is installed by default with the Registry application.
To configure the Registry store:
-
Navigate to the Admin Console.
-
Select Registry.
-
Select the Remote Registries Tab and click the Add button.
-
ALTERNATIVELY: Select the Configuration Tab and select Registry Store.
-
7.8.5.8. Solr Catalog Provider
The Solr Catalog Provider is included with a standard installation of DDF. There are two configurations available:
DDF is bundled with a distribution of Apache Solr. This distribution includes special JAR libraries used by DDF. DDF scripts manage the starting and stopping of the Solr server. Considerations include:
-
No configuration necessary. Simply start DDF and DDF manages starting and stopping the Solr server.
-
Backup can be performed using DDF console’s
backup
command. -
This configuration cannot be scaled larger than the single Solr server.
-
All data is located inside the DDF home directory. If the Solr index grows large, the storage volume may run low on space.
No installation is required because DDF includes a distribution of Apache Solr, and no configuration is necessary.
Solr Cloud is a cluster of distributed Solr servers used for high availability and scalability. If DDF needs to be available with little or no downtime, then the Solr Cloud configuration should be used. The general considerations for selecting this configuration are:
-
SolrCloud can scale to support over 2 billion indexed documents.
-
Has network overhead and requires additional protection to be secure.
-
Installation is more involved (requires Zookeeper).
-
Configuration and administration is more complex due to replication, sharding, etc.
-
There is currently no way to back up the index, but Solr Cloud will automatically recover from system failure.
Configuration shared between Solr server instances is managed by Zookeeper, which helps manage the overall structure.
Note
|
The instructions on setting up Solr Cloud for DDF only include setup in a *NIX environment. |
Before Solr Cloud can be installed:
-
ZooKeeper 3.4.5 (Refer to https://zookeeper.apache.org/doc/r3.1.2/zookeeperStarted.html#sc_Download for installation instructions.)
-
*NIX environment
-
JDK 8 or greater
Note
|
A minimum of three Zookeeper nodes is required. Three Zookeeper nodes are needed to form a quorum. A three-node Zookeeper ensemble allows a single server to fail while the service remains available. More Zookeeper nodes can be added to achieve greater fault tolerance. The total number of nodes must always be an odd number. See Setting Up an External Zoo Keeper Ensemble for more information. |
Before starting the install procedure, download the extension jars. The jars are needed to support geospatial and xpath queries and need to be installed on every Solr server instance after the Solr Cloud installation instructions have been followed.
The JARs can be found here:
Repeat the following procedure for each Solr server instance that will be part of the Solr Cloud cluster:
-
Refer to https://cwiki.apache.org/confluence/display/solr/Apache+Solr+Reference+Guide for installation instructions.
-
Copy downloaded jar files to:
<SOLR_INSTALL_DIR>/server/solr-webapp/webapp/WEB-INF/lib/
Note
|
A minimum of two Solr server instances is required. Each Solr server instance must have a minimum of two shards. Having two Solr server instances guarantees that at least one Solr server is available if one fails. The two shards enable the document mapping to be restored if one shard becomes unavailable. |
-
On the DDF server, edit
<DDF_HOME>/etc/custom.system.properties
:-
Comment out the Solr Client Configuration for Http Solr Client section.
-
Uncomment the section for the Cloud Solr Client:
-
Set
solr.cloud.zookeeper
to<ZOOKEEPER_1_HOSTNAME>:<PORT_NUMBER>
,<ZOOKEEPER_2_HOSTNAME>:<PORT_NUMBER>
,<ZOOKEEPER_n_HOSTNAME>:<PORT_NUMBER>
-
Set
solr.data.dir
to the desired data directory.
-
solr.client = CloudSolrClient
solr.data.dir = ${karaf.home}/data/solr
solr.cloud.zookeeper = zk1:2181,zk2:2181,zk3:2181
7.8.5.9. WFS 1.0 Source
The WFS Source allows for requests for geographical features across the web using platform-independent calls.
A Web Feature Service (WFS) source is an implementation of the FederatedSource
interface provided by the DDF Framework.
Use the WFS Source if querying a WFS version 1.0.0 compliant service.
The WFS v1.0.0 Source is installed by default with a standard installation in the Spatial application.
Configure a new WFS v1.0.0 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Add a New source.
-
Name the New source.
-
Select WFS v1.0.0 Source from Binding Configurations.
Configure an existing WFS v1.0.0 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See WFS v.1.0 Federated Source configurations or WFS v1.0 Connected Source configurations for all possible configurations.
The WFS URL must match the endpoint for the service being used. The type of service and version are added automatically, so they do not need to be included. Some servers will throw an exception if they are included twice, so do not include those.
The syntax depends on the server.
However, in most cases, the syntax will be everything before the ?
character in the URL that corresponds to the GetCapabilities
query.
http://www.example.org:8080/geoserver/ows?service=wfs&version=1.0.0&request=GetCapabilities
In this case, the WFS URL would be: http://www.example.org:8080/geoserver/ows
7.8.5.10. WFS 1.1 Source
The WFS Source allows for requests for geographical features across the web using platform-independent calls.
A Web Feature Service (WFS) source is an implementation of the FederatedSource
interface provided by the DDF Framework.
Use the WFS Source if querying a WFS version 1.1.0 compliant service.
The WFS v1.1.0 Source is installed by default with a standard installation in the Spatial application.
Configure a new WFS v1.1.0 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Add a New source.
-
Name the New source.
-
Select WFS v1.1.0 Source from Binding Configurations.
Configure an existing WFS v1.1.0 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See WFS v.1.1 Federated Source configurations for all possible configurations.
The WFS URL must match the endpoint for the service being used. The type of service and version are added automatically, so they do not need to be included. Some servers will throw an exception if they are included twice, so do not include them in the URL.
The syntax depends on the server.
However, in most cases, the syntax will be everything before the ?
character in the URL that corresponds to the GetCapabilities
query.
http://www.example.org:8080/geoserver/wfs?service=wfs&version=1.1.0&request=GetCapabilities
In this case, the WFS URL would be: http://www.example.org:8080/geoserver/wfs
The WFS v1.1.0 Source supports mapping metacard attributes to WFS feature properties for queries (GetFeature requests) to the WFS server.
The source uses a MetacardMapper
service to determine how to map a given metacard attribute in a query to a feature property the WFS server understands.
It looks for a MetacardMapper
whose getFeatureType()
matches the feature type being queried.
Any MetacardMapper
service implementation will work, but DDF provides one in the Spatial application called Metacard to WFS Feature Map.
7.8.5.11. WFS 2.0 Source
The WFS 2.0 Source allows for requests for geographical features across the web using platform-independent calls.
Use the WFS Source if querying a WFS version 2.0.0 compliant service. Also see Working with WFS Sources.
The WFS v2.0.0 Source is installed by default with a standard installation in the Spatial application.
Configure a new WFS v2.0.0 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Add a New source.
-
Name the New source.
-
Select WFS v2.0.0 Source from Binding Configurations.
Configure an existing WFS v2.0.0 Source through the Admin Console:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Sources tab.
-
Select the name of the source to edit.
See WFS v.2.0 Federated source configurations or WFS v2.0 Connected source configurations for all possible configurations.
The WFS URL must match the endpoint for the service being used. The type of service and version are added automatically, so they do not need to be included. Some servers will throw an exception if they are included twice, so do not include them in the URL.
The syntax depends on the server.
However, in most cases, the syntax will be everything before the ?
character in the URL that corresponds to the GetCapabilities
query.
http://www.example.org:8080/geoserver/ows?service=wfs&version=2.0.0&request=GetCapabilities
In this case, the WFS URL would be
http://www.example.org:8080/geoserver/ows
The WFS 2.0 Source allows for virtually any schema to be used to describe a feature.
A feature is relatively equivalent to a metacard. The MetacardMapper
was added to allow an administrator to configure which feature properties map to which metacard attributes.
MetacardMapper
Use the WFS MetacardMapper
to configure which feature properties map to which metacard attributes when querying a WFS version 2.0.0 compliant service.
When feature collection responses are returned from WFS sources, a default mapping occurs which places the feature properties into metacard attributes, which are then presented to the user via DDF.
There can be situations where this automatic mapping is not optimal for your solution.
Custom mappings of feature property responses to metacard attributes can be achieved through the MetacardMapper
.
The MetacardMapper
is set by creating a feature file configuration which specifies the appropriate mapping. The mappings are specific to a given feature type.
MetacardMapper
The WFS MetacardMapper
is not installed by default with a standard installation in the Spatial application.
MetacardMapper
There are two ways to configure the MetacardMapper
: one is to use the Configuration Admin available via the Admin Console.
Additionally, a feature.xml
file can be created and copied into the "deploy" directory.
The following shows how to configure the MetacardMapper
to be used with the sample data provided with GeoServer.
This configuration shows a custom mapping for the feature type ‘states’.
For the given type, we are taking the feature property ‘states.STATE_NAME’ and mapping it to the metacard attribute ‘title’.
In this particular case, since we mapped the state name to title in the metacard, it will now be fully searchable.
More mappings can be added to the featurePropToMetacardAttrMap
line through the use of comma as a delimiter.
MetacardMapper
Configuration Within a feature.xml
file:
<feature name="geoserver-states" version="2.15.0"
description="WFS Feature to Metacard mappings for GeoServer Example {http://www.openplans.org/topp}states">
<config name="org.codice.ddf.spatial.ogc.wfs.catalog.mapper.MetacardMapper-geoserver.http://www.openplans.org/topp.states">
featureType = {http://www.openplans.org/topp}states
featurePropToMetacardAttrMap = states.STATE_NAME=title
</config>
</feature>
7.8.6. Configuring Endpoints
Configure endpoints to enable external systems to send and receive content and metadata from DDF.
7.8.6.1. Configuring Catalog REST Endpoint
The Catalog REST endpoint allows clients to perform operations on the Catalog using REST.
To install the Catalog REST endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Install the
catalog-rest-endpoint
feature.
The Catalog REST endpoint has no configurable properties. It can only be installed or uninstalled.
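Once installed, a quick way to exercise the endpoint is to retrieve or create a record over REST. The sketch below assumes the default /services/catalog context path and a placeholder metacard ID; adjust the host, port, path, and credentials to match the target system.
curl -k "https://<FQDN>:<PORT>/services/catalog/<METACARD_ID>"
curl -k -X POST -H "Content-Type: application/xml" -d @metacard.xml "https://<FQDN>:<PORT>/services/catalog"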
7.8.6.2. Configuring CSW Endpoint
The CSW endpoint enables a client to search collections of descriptive information (metadata) about geospatial data and services.
To install the CSW endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Install the
csw-endpoint
feature.
The CSW endpoint has no configurable properties. It can only be installed or uninstalled.
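A GetCapabilities request is a simple way to confirm the endpoint is reachable. The /services/csw path below is an assumption based on the default DDF service root; verify it against the running system.
curl -k "https://<FQDN>:<PORT>/services/csw?service=CSW&version=2.0.2&request=GetCapabilities"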
7.8.6.3. Configuring FTP Endpoint
The FTP endpoint provides a method for ingesting files directly into the DDF Catalog using the FTP protocol. Files sent over FTP are not first written to the file system, as with the Directory Monitor; instead, the FTP stream of the file is ingested directly into the DDF Catalog, avoiding extra I/O overhead.
To install the FTP endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Install the
catalog-ftp
feature.
To configure the FTP endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Select FTP Endpoint.
See FTP Endpoint configurations for all possible configurations.
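As an example, a file can be ingested with any standard FTP client once the endpoint is running. The sketch below uses curl; the port shown (8021) and the credentials are assumptions, so check the FTP Endpoint configuration for the values actually in use.
curl -T metacard.xml "ftp://<FQDN>:8021/" --user <USERNAME>:<PASSWORD>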
7.8.6.4. Configuring KML Endpoint
Keyhole Markup Language (KML) is an XML notation for describing geographic annotation and visualization for 2- and 3-dimensional maps.
The root network link will create a network link for each configured source, including the local catalog. The individual source network links will perform a query against the OpenSearch Endpoint periodically based on the current view in the KML client. The query parameters for this query are obtained by a bounding box generated by Google Earth. The root network link will refresh every 12 hours or can be forced to refresh. As a user changes their current view, the query will be re-executed with the bounding box of the new view. (This query gets re-executed two seconds after the user stops moving the view.)
This KML Network Link endpoint has the ability to serve up custom KML style documents and Icons to be used within that document. The KML style document must be a valid XML document containing a KML style. The KML Icons should be placed in a single level directory and must be an image type (png, jpg, tif, etc.). The Description will be displayed as a pop-up from the root network link on Google Earth. This may contain the general purpose of the network and URLs to external resources.
To install the KML endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Install the
spatial-kml
feature.
To configure the KML endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Select KML Endpoint.
See KML Endpoint configurations for all possible configurations.
7.8.6.5. Configuring OpenSearch Endpoint
The OpenSearch endpoint enables a client to send query parameters and receive search results. This endpoint uses the input query parameters to create an OpenSearch query. The client does not need to specify all of the query parameters, only the query parameters of interest.
To install the OpenSearch endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Install the
catalog-opensearch-endpoint
feature.
The OpenSearch endpoint has no configurable properties. It can only be installed or uninstalled.
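For example, a keyword query can be issued directly against the endpoint. The path and parameter names below (/services/catalog/query with q and count) are assumptions drawn from common DDF defaults; confirm them against the OpenSearch descriptor for the running system.
curl -k "https://<FQDN>:<PORT>/services/catalog/query?q=<SEARCH_TERM>&count=10"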
7.8.6.6. Configuring WPS Endpoint
The WPS endpoint enables a client to execute and monitor long running processes.
To install the WPS endpoint:
-
Navigate to the Admin Console.
-
Select System.
-
Select Features.
-
Install the
spatial-wps
feature.
The WPS endpoint has no configurable properties. It can only be installed or uninstalled.
7.8.6.7. Compression Services
DDF supports compression of outgoing and incoming messages through the Compression Services. These compression services are based on CXF message encoding.
The formats supported in DDF are:
- gzip
-
Adds GZip compression to messages through CXF components. Code comes with CXF.
- exi
-
Adds Efficient XML Interchange (EXI) support to outgoing responses. EXI is a W3C standard for XML encoding that shrinks XML to a smaller size than normal GZip compression.
To Install a compression service:
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Start the service for the desired compression format:
-
compression-exi
-
compression-gzip
Warning
|
The compression services must be installed BEFORE the desired CXF service is started, or the CXF service must be refreshed / restarted after the compression service is installed. |
Compression services have no configurable properties. They can only be installed or uninstalled.
7.8.7. Federating Through a Registry
Another approach to configuring federation is to use the Registry application to locate sources in a network/enterprise. See Registry Application Reference for details on installing the Registry application. Use the registry to subscribe to and federate with other instances of DDF.
Note
|
The Node Information and Remote Registries tabs appear in both the Registry application and the Catalog application. |
Note
|
For direct federation configuration, sources and registries can be configured at https://{FQDN}:{PORT}/admin/federation. |
7.8.7.1. Configuring Identity Node
The "Identity Node" is the local DDF instance. Configure the information to share with other registries/nodes.
-
Navigate to Registry (or Catalog) application.
-
Navigate to Node Information tab.
-
Click the name of the identity node.
-
Complete all required and any desired optional fields.
-
Add any desired service bindings under the Services tab.
-
Click Save.
Field | Description | Type | Required
---|---|---|---
Node Name | This node’s name as it should appear to external systems | string | yes
Node Description | Short description for this node | string | yes
Node Version | This node’s version | string | yes
Security Attributes | Security attributes associated with this node. | String | 
Last Updated | Date this entry’s data was last updated | Date | 
Live Date | Date indicating when this node went live or operational | Date | 
Custom Fields | click Add button to add custom fields | Configurable | no
Associations | click Add button to add associations | Configurable | no
Field | Description | Type | Required
---|---|---|---
Organization Name | This organization’s name | string | yes
Address | This organization’s primary address | Expand to enter address information | yes
Telephone Number | Primary contact number for this organization | | no
Email | Primary contact email for this organization | | no
Custom Fields | click Add button to add custom fields | Configurable | no
Associations | click Add button to add associations | Configurable | no
Field | Description | Type | Required
---|---|---|---
Contact Title | Contact Title | String | yes
Contact First Name | Contact First Name | String | yes
Contact Last Name | Contact Last Name | String | yes
Address | Address for listed contact | String | minimum one
Phone number | Contact phone number | | minimum one
Email | Contact email | String | minimum one
Custom Fields | click Add button to add custom fields | Configurable | no
Associations | click Add button to add associations | Configurable | no
Field | Description | Type | Required
---|---|---|---
Content Name | Name for this metadata content | string | yes
Content Description | Short description for this metadata content | string | no
Content Object Type | The kind of content object this will be. Default value should be used in most cases. | string | yes
Custom Fields | click Add button to add custom fields | Configurable | no
Associations | click Add button to add associations | Configurable | no
7.8.7.1.1. Adding a Service Binding to a Node
Advertise the methods other nodes use to connect to the local DDF instance.
-
Navigate to Admin Console.
-
Select Registry or Catalog.
-
(Node Information tab is editable from either application.)
-
Click the name of the desired local node.
-
Click the Services tab.
-
Click Add to add a service.
-
Expand new Service.
-
Enter Service name and details.
-
Click Add to add binding.
-
Select Service Binding type.
-
Select one of the defaults or empty for a custom service binding.
-
If selecting empty, fill in all required fields.
-
Click Save.
7.8.7.2. Publishing to Other Nodes
Send details about the local DDF instance to other nodes.
-
Navigate to the Remote Registries tab in either Registry or Catalog application.
-
Click Add to add a remote registry.
-
Enter Registry Service (CSW) URL.
-
Confirm Allow Push is checked.
-
Click Add to save the changes.
-
Navigate to the Sources Tab in Catalog App
-
Click desired node to be published.
-
Under Operations, click the Publish to … link that corresponds to the desired registry.
7.8.7.3. Subscribing to Another Node
Receive details about another node.
-
Navigate to the Remote Registries tab in either Registry or Catalog application.
-
Click Add to add a remote registry.
-
Add the URL to access node.
-
Enter any needed credentials in the Username/password fields.
-
Click Save/Add.
Update the configuration of an existing subscription.
-
Navigate to the Remote Registries tab in either Registry or Catalog application.
-
Click the name of the desired subscription.
-
Make changes.
-
Click Save.
Remove a subscription.
-
Click the Delete icon at the top of the Remote Registries tab.
-
Check the boxes of the Registry Nodes to be deleted.
-
Select the Delete button.
7.9. Environment Hardening
-
Required Step for Security Hardening
Important
|
It is recommended to apply the following security mitigations to the DDF. |
7.9.1. Known Issues with Environment Hardening
The session timeout should be configured longer than the UI polling time or you may get session timeout errors in the UI.
Protocol/Type | Risk | Mitigation
---|---|---
JMX | tampering, information disclosure, and unauthorized access | 
File System Access | tampering, information disclosure, and denial of service | Set OS file permissions under the <DDF_HOME> directory to ensure unauthorized viewing and writing is not allowed. If Caching is installed, ensure on the system that not everyone can change ACLs on your object.
SSH | tampering, information disclosure, and denial of service | By default, SSH access to DDF is only enabled for connections originating from the same host running DDF. For remote access to DDF, first establish an SSH session with the host running DDF; from within that session, initiate a new SSH connection to localhost on the DDF SSH port. To allow direct remote access to the DDF shell from any host, change the value of the sshHost property in <DDF_HOME>/etc/org.apache.karaf.shell.cfg. SSH can also be authenticated and authorized through an external Realm, such as LDAP, by editing the same file. By definition, all connections over SSH will be authenticated and authorized and secure from eavesdropping.
SSL/TLS | man-in-the-middle, information disclosure | Update the default certificates with certificates signed by a trusted Certificate Authority.
Session Inactivity Timeout | unauthorized access | Update the Session configuration to have no greater than a 10 minute Session Timeout.
Shell Command Access | command injection | By default, some shell commands are disabled in order to secure the system. DDF includes a whitelist of shell commands that are allowed only to administrators.
7.10. Configuring for Special Deployments
In addition to standard configurations, several specialized configurations are possible for specific uses of DDF.
7.10.1. Multiple Installations
One common specialized configuration is installing multiple instances of DDF.
7.10.1.1. Reusing Configurations
The Migration Export/Import capability allows administrators to export the current DDF configuration and use it to restore the same state for either a brand new installation or a second node for a Highly Available Cluster.
To export the current configuration settings:
-
Run the command migration:export from the Command Console.
-
Files named
ddf-2.15.0.dar
,ddf-2.15.0.dar.key
, andddf-2.15.0.dar.sha256
will be created in theexported
directory underneath<DDF_HOME>
. The.dar
file contains the encrypted information. The.key
and.sha256
files contain the encryption key and a validation checksum. Copy the .dar file to a secure location, and copy the .key and .sha256 files to a different secure location. Keeping all 3 files together represents a security risk and should be avoided.
To import previously exported configuration settings:
-
Install DDF by unzipping its distribution.
-
Restore all external files, softlinks, and directories that would not have been exported and for which warnings would have been generated during export. This could include (but is not limited to) external certificates or monitored directories.
-
Launch the newly installed DDF.
-
Make sure to install and re-enable the DDF service on the new system if it was installed and enabled on the original system.
-
Copy the previously exported files from your secure locations to the
exported
directory underneath<DDF_HOME>
. -
Either:
-
Step through the installation process.
-
Run the command migration:import from the Command Console.
-
Or if an administrator wishes to restore the original profile along with the configuration (experimental):
-
Run the command migration:import with the option
--profile
from the Command Console.
-
DDF will automatically restart if the command is successful. Otherwise address any generated warnings before manually restarting DDF.
It is possible to decrypt the previously exported configuration settings but doing so is insecure and appropriate measures should be taken to secure the resulting decrypted file. To decrypt the exported file:
-
Copy all 3 exported files (i.e.
.dar
,.key
, and.sha256
) to theexported
directory underneath<DDF_HOME>
. -
Run the command migration:decrypt from the Command Console.
-
A file named
ddf-2.15.0.zip
will be created in theexported
directory underneath<DDF_HOME>
. This file represents the decrypted version of the.dar
file.
7.10.1.2. Isolating Solr Cloud and Zookeeper
-
Required Step for Security Hardening (if using Solr Cloud/Zookeeper)
ZooKeeper cannot use secure (SSL/TLS) connections. The configuration information that ZooKeeper sends and receives is vulnerable to network sniffing. Also, the connections between the local Solr Catalog service and Solr Cloud are not necessarily secure, and neither are the connections between Solr Cloud nodes. Any unencrypted network traffic is vulnerable to sniffing attacks. To use Solr Cloud and ZooKeeper securely, these processes must be isolated on the network, or their communications must be encrypted by other means. The DDF process must be visible on the network to allow authorized parties to interact with it.
-
Create a private network for Solr Cloud and Zookeeper. Only DDF is allowed to contact devices inside the private network.
-
Use IPsec to encrypt the connections between DDF, Solr Cloud nodes, and Zookeeper nodes.
-
Put DDF, Solr Cloud and Zookeeper behind a firewall that only allows access to DDF.
7.10.2. Configuring for a Fanout Proxy
Optionally, configure DDF as a fanout proxy such that only queries and resource retrieval requests are processed and create/update/delete requests are rejected. All queries are enterprise queries and no catalog provider needs to be configured.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select Catalog Standard Framework.
-
Select Enable Fanout Proxy.
-
Save changes.
DDF is now operating as a fanout proxy. Only queries and resource retrieval requests will be allowed. All queries will be federated. Create, update, and delete requests will not be allowed, even if a Catalog Provider was configured prior to the reconfiguration as a fanout.
7.10.3. Standalone Security Token Service (STS) Installation
To run a STS-only DDF installation, uninstall the catalog components that are not being used. The following list displays the features that can be uninstalled to minimize the runtime size of DDF in an STS-only mode. This list is not a comprehensive list of every feature that can be uninstalled; it is a list of the larger components that can be uninstalled without impacting the STS functionality.
-
catalog-core-standardframework
-
catalog-opensearch-endpoint
-
catalog-opensearch-source
-
catalog-rest-endpoint
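For example, these features can be removed from the Command Console with the standard Karaf feature command; if any features depend on one another, they may need to be uninstalled in a different order.
ddf@local>feature:uninstall catalog-core-standardframework catalog-opensearch-endpoint catalog-opensearch-source catalog-rest-endpoint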
7.10.4. Configuring for a Highly Available Cluster
This section describes how to make configuration changes after the initial setup for a DDF in a Highly Available Cluster.
In a Highly Available Cluster, configuration changes must be made on both DDF nodes. The changes can still be made in the standard ways via the Admin Console, the Command Line, or the file system.
Note
|
Changes made in the Admin Console must be made through the HTTP proxy. This means that the below steps should be followed to make a change in the Admin Console:
|
7.11. Configuring UI Themes
The optional configurations in this section cover minor changes that can be made to optimize DDF appearance.
7.11.1. Landing Page
The Landing Page is the first page presented to users of DDF. It is customizable to allow adding organizationally-relevant content.
7.11.1.1. Installing the Landing Page
The Landing Page is installed by default with a standard installation.
7.11.1.2. Configuring the Landing Page
The DDF landing page offers a starting point and general information for a DDF node.
It is accessible at /(index|home|landing(.htm|html))
.
7.11.1.3. Customizing the Landing Page
Configure the Landing Page from the Admin Console:
-
Navigate to the Admin Console.
-
Select Platform Application.
-
Select Configuration tab.
-
Select Landing Page.
Configure important landing page items such as branding logo, contact information, description, and additional links.
See Landing Page configurations for all possible configurations.
7.11.2. Configuring Logout Page
The logout page is presented to users through the navigation of DDF and has a configurable timeout value.
-
Navigate to the Admin Console.
-
Select Security Application.
-
Select Configuration tab.
-
Select Logout Page.
The customizable feature of the logout page is the Logout Page Time Out. This is the time limit the IDP client will wait for a user to click log out on the logout page. Any requests that take longer than this time for the user to submit will be rejected.
-
Default value: 3600000 (milliseconds)
See Logout Configuration for detailed information.
7.11.3. Platform UI Themes
The Platform UI Configuration allows for the customization of attributes of all pages within DDF. It contains settings to display messages to users at login or in banners in the headers and footers of all pages, along with changing the colors of text and backgrounds.
7.11.3.1. Navigating to UI Theme Configuration
-
Navigate to the Admin Console.
-
Select the Platform application.
-
Select Configuration.
-
Select Platform UI Configuration.
7.11.3.2. Customizing the UI Theme
The customization of the UI theme across DDF is available through the capabilities of Platform UI Configuration. The banner has four items to configure:
-
Header (text)
-
Footer (text)
-
Text Color
-
Background Color
See the Platform UI for all possible configurations of the Platform UI Configuration.
7.12. Miscellaneous Configurations
The optional configurations in this section cover minor changes that can be made to optimize DDF.
7.12.1. Configuring Thread Pools
The org.codice.ddf.system.threadPoolSize
property can be used to specify the size of thread pools used by:
-
Federating requests between DDF systems
-
Downloading resources
-
Handling asynchronous queries, such as queries from the UI
By default, this value is set to 128. It is not recommended to set this value extremely high. If unsure, leave this setting at its default value of 128.
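For example, the property can be set explicitly in <DDF_HOME>/etc/custom.system.properties (the file used elsewhere in this documentation for system properties) and takes effect after a restart; the value below simply restates the default:
org.codice.ddf.system.threadPoolSize=128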
7.12.2. Configuring Jetty ThreadPool Settings
To prevent resource shortages in the event of concurrent requests, DDF allows configuring Jetty ThreadPool settings to specify the minimum and maximum available threads.
-
The settings can be changed at
etc/org.ops4j.pax.web.cfg
under Jetty Server ThreadPool Settings. -
Specify the maximum thread amount with
org.ops4j.pax.web.server.maxThreads
-
Specify the minimum thread amount with
org.ops4j.pax.web.server.minThreads
-
Specify the allotted time for a thread to complete with
org.ops4j.pax.web.server.idleTimeout
DDF does not support changing ThreadPool settings from the Command Console or the Admin Console.
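A sketch of the relevant section of etc/org.ops4j.pax.web.cfg is shown below; the values are illustrative, not shipped defaults, and changes take effect after a restart.
# Jetty Server ThreadPool Settings
org.ops4j.pax.web.server.minThreads=20
org.ops4j.pax.web.server.maxThreads=200
org.ops4j.pax.web.server.idleTimeout=60000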
7.12.3. Configuring Alerts
By default, DDF uses two services provided by Karaf Decanter for alerts that can be configured by configuration file. Further information on Karaf Decanter services and configurations can be found here .
7.12.3.1. Configuring Decanter Service Level Agreement (SLA) Checker
The Decanter SLA Checker provides a way to create alerts based on configurable conditions in events posted to decanter/collect/*
and can be configured by editing the file <DDF_HOME>/etc/org.apache.karaf.decanter.sla.checker.cfg
.
By default there are only two checks that will produce alerts, and they are based on the SystemNotice
event property of priority
.
Property | Alert Level | Expression | Description
---|---|---|---
priority | warn | equal:1,2,4 | Produce a warn level alert if priority is important (3)
priority | error | equal:1,2,3 | Produce an error level alert if priority is critical (4)
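Expressed as configuration, the two default checks above would appear in <DDF_HOME>/etc/org.apache.karaf.decanter.sla.checker.cfg roughly as follows. The property.level=check syntax is drawn from the Karaf Decanter documentation; verify it against the file shipped with the running version.
priority.warn=equal:1,2,4
priority.error=equal:1,2,3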
7.12.3.2. Configuring Decanter Scheduler
The Decanter Scheduler looks up services implementing the Runnable interface with the service-property decanter.collector.name
and executes the Runnable periodically.
The Scheduler can be configured by editing the file <DDF_HOME>/etc/org.apache.karaf.decanter.scheduler.simple.cfg
.
Property Name | Description | Default Value
---|---|---
period | Decanter simple scheduler period (milliseconds) | 300000 (5 minutes)
threadIdleTimeout | The time to wait before stopping an idle thread (milliseconds) | 60000 (1 minute)
threadInitCount | Initial number of threads created by the scheduler | 5
threadMaxCount | Maximum number of threads created by the scheduler | 200
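For instance, a sketch of <DDF_HOME>/etc/org.apache.karaf.decanter.scheduler.simple.cfg that shortens the period to one minute while restating the other defaults (values illustrative):
period=60000
threadIdleTimeout=60000
threadInitCount=5
threadMaxCount=200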
7.12.4. Encrypting Passwords
DDF includes an encryption service to encrypt plain text such as passwords.
7.12.4.1. Encryption Command
An encrypt security command is provided with DDF to encrypt text. This is useful when displaying password fields to users.
Below is an example of the security:encrypt
command used to encrypt the plain text myPasswordToEncrypt
.
-
Navigate to the Command Console.
security:encrypt Command Example
ddf@local>security:encrypt myPasswordToEncrypt
-
The output is the encrypted value.
security:encrypt Command Output
ddf@local>bR9mJpDVo8bTRwqGwIFxHJ5yFJzatKwjXjIo/8USWm8=
8. Running
Find directions here for running an installation of DDF.
- Starting
-
Getting an instance of DDF up and running.
- Managing Services
-
Running DDF as a managed service.
- Maintaining
-
Keeping DDF running with useful tasks.
- Monitoring
-
Tracking system health and usage.
- Troubleshooting
-
Common tips for unexpected behavior.
8.1. Starting
8.1.1. Run DDF as a Managed Service
8.1.1.1. Running as a Service with Automatic Start on System Boot
Because DDF is built on top of Apache Karaf, DDF can use the Karaf Wrapper to run DDF as a service and enable automatic startup and shutdown.
When DDF is started using Karaf Wrapper, new wrapper.log
and wrapper.log.n
(where n goes from 1 to 5 by default) log files will be generated to include wrapper and console specific information.
Warning
|
When installing as a service on *NIX, do not use spaces in the path for <DDF_HOME> as the service scripts that are generated by the wrapper cannot handle spaces. |
Warning
|
Ensure that JAVA_HOME is properly set before beginning this process. See Java Requirements |
-
Create the service wrapper.
DDF can create native scripts and executable files to run itself as an operating system service. This is an optional feature that is not installed by default. To install the service wrapper feature, go to the DDF console and enter the command:
ddf@local> feature:install -r wrapper
-
Generate the script, configuration, and executable files:
*NIX
ddf@local> wrapper:install -i setenv-wrapper.conf -n ddf -d ddf -D "DDF Service"
Windows
ddf@local> wrapper:install -i setenv-windows-wrapper.conf -n ddf -d ddf -D "DDF Service"
-
(Windows users skip this step) (All *NIX) If DDF was installed to run as a non-root user (as recommended), edit
<DDF_HOME>/bin/ddf-service
and change the property #RUN_AS_USER= to:
<DDF_HOME>/bin/ddf-service
RUN_AS_USER=<ddf-user>
where <ddf-user> is the intended username.
-
(Windows users skip this step) (All *NIX) Edit
<DDF_HOME>/bin/ddf.service
. Add LimitNOFILE
to the [Service] section:
<DDF_HOME>/bin/ddf.service
LimitNOFILE=6815744
-
(Windows users skip this step) (*NIX with
systemd
) Install the wrapper startup/shutdown scripts. To install the service and start it when the system boots, use
systemctl
. From an OS console, execute:
root@localhost# systemctl enable <DDF_HOME>/bin/ddf.service
-
(Windows users skip this step) (*NIX without
systemd
) Install the wrapper startup/shutdown scripts. If the system does not use
systemd
, use the init.d
system to install and configure the service. Execute these commands as root or superuser:
root@localhost# ln -s <DDF_HOME>/bin/ddf-service /etc/init.d/
root@localhost# chkconfig ddf-service --add
root@localhost# chkconfig ddf-service on
-
(Windows only, if the system’s
JAVA_HOME
variable has spaces in it) Edit<DDF_HOME>/etc/ddf-wrapper.conf
. Put quotes aroundwrapper.java.additional.n
system properties for n from 1 to 13 like so:
<DDF_HOME>/etc/ddf-wrapper.conf
wrapper.java.additional.1=-Djava.endorsed.dirs="%JAVA_HOME%/jre/lib/endorsed;%JAVA_HOME%/lib/endorsed;%KARAF_HOME%/lib/endorsed"
wrapper.java.additional.2=-Djava.ext.dirs="%JAVA_HOME%/jre/lib/ext;%JAVA_HOME%/lib/ext;%KARAF_HOME%/lib/ext"
wrapper.java.additional.3=-Dkaraf.instances="%KARAF_HOME%/instances"
wrapper.java.additional.4=-Dkaraf.home="%KARAF_HOME%"
wrapper.java.additional.5=-Dkaraf.base="%KARAF_BASE%"
wrapper.java.additional.6=-Dkaraf.data="%KARAF_DATA%"
wrapper.java.additional.7=-Dkaraf.etc="%KARAF_ETC%"
wrapper.java.additional.8=-Dkaraf.log="%KARAF_LOG%"
wrapper.java.additional.9=-Dkaraf.restart.jvm.supported=true
wrapper.java.additional.10=-Djava.io.tmpdir="%KARAF_DATA%/tmp"
wrapper.java.additional.11=-Djava.util.logging.config.file="%KARAF_ETC%/java.util.logging.properties"
wrapper.java.additional.12=-Dcom.sun.management.jmxremote
wrapper.java.additional.13=-Dkaraf.startLocalConsole=false
wrapper.java.additional.14=-Dkaraf.startRemoteShell=true
-
(Windows only) Install the wrapper startup/shutdown scripts.
Run the following command in a console window. The command must be run with elevated permissions.
<DDF_HOME>\bin\ddf-service.bat install
Startup and shutdown settings can then be managed through Services → MMC Start → Control Panel → Administrative Tools → Services.
8.1.1.2. Karaf Documentation
Because DDF is built on top of Apache Karaf, more information on operating DDF can be found in the Karaf documentation .
8.2. Managed Services
The lifecycle of DDF and Solr processes can be managed by the operating system. The DDF documentation provides instructions to install DDF as a managed service on supported *NIX platforms. However, the documentation cannot account for all possible configurations. Please consult the documentation for the operating system and its init manager if the instructions in this document are inadequate.
8.2.1. Run Solr as Managed Service
These instructions are for configuring Solr as a service managed by the operating system.
8.2.1.1. Configure Solr as a Windows Service
Windows users can use the Task Scheduler to start Solr as a background process.
-
If DDF is running, stop it.
-
Edit
<DDF_HOME>/etc/custom.system.properties
and setstart.solr=false
. This prevents the DDF scripts from attempting to manage Solr’s lifecycle. -
Start the Windows Task Scheduler and open the Task Scheduler Library.
-
Under the Actions pane, select Create Basic Task….
-
Provide a useful name and description, then select Next.
-
Select When the computer starts as the Trigger and select Next.
-
Select Start a program as the Action and select Next.
-
Select the script to start Solr:
<DDF_HOME>\bin\ddfsolr.bat
-
Add the argument
start
in the window pane and select Next. -
Review the settings and select Finish.
It may be necessary to update the Security Options under the task Properties to Run with highest privileges or to set the user to "SYSTEM".
Additionally, the process can be set to restart if it fails. The option can be found in the Properties > Settings tab.
Depending on the system, it may also make sense to delay the process from starting for a few minutes until the machine has fully booted. To do so, open the task’s Properties settings and:
-
Select Triggers.
-
Select Edit.
-
Select Advanced Settings.
-
Select Delay Task.
8.2.1.2. Configure Solr as a Systemd Service
These instructions are for unix operating systems running the systemd init manager. If configuring a Windows system, see Configure Solr as a Windows Service
-
If DDF is running, stop it.
-
Edit
<DDF_HOME>/etc/custom.system.properties
and setstart.solr=false
. -
Edit the file
<DDF_HOME>/solr/services/solr.service
-
Edit the property
Environment=JAVA_HOME
and replace<JAVA_HOME>
with the absolute path to the directory where the Java Runtime Environment is installed. -
Edit the property
ExecStart
and replace <DDF_HOME> with the absolute path to theddfsolr
file. -
Edit the property
ExecStop
and replace <DDF_HOME> with the absolute path to theddfsolr
file. -
Edit the property
User
and replace<USER>
with the user ID of the Solr process owner.
-
From the operating system command line, enable a Solr service using a provided configuration file. Use the full path to the file.
systemctl enable <DDF_HOME>/solr/service/solr.service
-
Start the service.
systemctl start solr
-
Check the status of Solr
systemctl status solr
Solr will start automatically each time the system is booted.
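For reference, after the edits described above the unit file might look like the following sketch. Every value shown (paths, Java location, and user) is a placeholder for illustration rather than a shipped default.
[Unit]
Description=Solr for DDF

[Service]
Environment=JAVA_HOME=/usr/lib/jvm/java-8-openjdk
ExecStart=/opt/ddf/bin/ddfsolr start
ExecStop=/opt/ddf/bin/ddfsolr stop
User=solr

[Install]
WantedBy=multi-user.target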
Follow the below steps to start and stop DDF.
8.2.2. Starting from Startup Scripts
Run one of the start scripts from a command shell to start the distribution and open a local console:
<DDF_HOME>/bin/ddf
<DDF_HOME>/bin/ddf.bat
8.2.3. Starting as a Background Process
Alternatively, to run DDF as a background process, run the start
script:
<DDF_HOME>/bin/start
<DDF_HOME>/bin/start.bat
Note
|
If console access is needed while running as a service, run the client script for the platform:
*NIX
<DDF_HOME>/bin/client
Windows
<DDF_HOME>/bin/client.bat -h <FQDN>
Use the -h option followed by the name (<FQDN>) or IP of the server. |
8.2.4. Stopping DDF
There are two options to stop a running instance:
-
Call shutdown from the console:
ddf@local>shutdown
ddf@local>shutdown -f
-
Keyboard shortcut for shutdown
-
Ctrl
-D
-
Cmd
-D
-
-
Or run the stop script:
<DDF_HOME>/bin/stop
<DDF_HOME>/bin/stop.bat
Important
|
Shut Down
Do not shut down by closing the window (Windows, Unix) or using the kill -9 <pid> command (*NIX). |
8.3. Maintaining
8.3.1. Console Commands
Once the distribution has started, administrators will have access to a powerful command line console, the Command Console. This Command Console can be used to manage services, install new features, and manage the state of the system.
The Command Console is available to the user when the distribution is started manually or may also be accessed by using the bin/client.bat
or bin/client
scripts.
Note
|
The majority of functionality and information available on the Admin Console is also available on the Command Line Console. |
8.3.1.1. Console Command Help
For details on any command, type help
then the command.
For example, help search
(see results of this command in the example below).
ddf@local>help search
DESCRIPTION
        catalog:search
        Searches records in the catalog provider.
SYNTAX
        catalog:search [options] SEARCH_PHRASE [NUMBER_OF_ITEMS]
ARGUMENTS
        SEARCH_PHRASE
                Phrase to query the catalog provider.
        NUMBER_OF_ITEMS
                Number of maximum records to display.
                (defaults to -1)
OPTIONS
        --help
                Display this help message
        case-sensitive, -c
                Makes the search case sensitive
        -p, -provider
                Interacts with the provider directly instead of the framework.
The help
command provides a description of the provided command, along with the syntax in how to use it, arguments it accepts, and available options.
8.3.1.2. CQL Syntax
The CQL syntax used with console commands should follow the OGC CQL format. GeoServer provides a description of the grammar and examples in this CQL Tutorial .
Finding all notifications that were sent due to a download:
ddf@local>store:list --cql "application='Downloads'" --type notification
Deleting a specific notification:
ddf@local>store:delete --cql "id='fdc150b157754138a997fe7143a98cfa'" --type notification
8.3.1.3. Available Console Commands
Many console commands are available, including DDF commands and the core Karaf console commands. For more information about these core Karaf commands and using the console, see the Commands documentation for Karaf 4.2.2 in the Karaf documentation.
For a complete list of all available commands, from the Command Console, press TAB and confirm when prompted.
Console commands follow a format of namespace:command
.
To get a list of commands, type in the namespace of the desired extension then press TAB.
For example, type catalog
, then press TAB.
8.3.1.3.1. Catalog Commands
Warning
|
Most commands can bypass the Catalog framework and interact directly with the Catalog provider if given the -p or --provider option. |
Command | Description
---|---
catalog:describe | Provides a basic description of the Catalog implementation.
catalog:dump | Exports metacards from the local Catalog. Does not remove them. See date filtering options below.
catalog:envlist | Provides a list of environment variables.
catalog:export | Exports Metacards and history from the current Catalog.
catalog:import | Imports Metacards and history into the current Catalog.
catalog:ingest | Ingests data files into the Catalog. XML is the default transformer used. See Ingest Command for detailed instructions on ingesting data and Input Transformers for all available transformers.
catalog:inspect | Provides the various fields of a metacard for inspection.
catalog:latest | Retrieves the latest records from the Catalog based on the Core.METACARD_MODIFIED date.
catalog:migrate | Allows two Catalog providers to be configured and migrates the data from the original provider to the new provider.
catalog:range | Searches by the given range arguments (exclusively).
catalog:remove | Deletes a record from the local Catalog.
catalog:removeall | Attempts to delete all records from the local Catalog.
catalog:replicate | Replicates data from a federated source into the local Catalog.
catalog:search | Searches records in the local Catalog.
catalog:spatial | Searches spatially the local Catalog.
catalog:transformers | Provides information on available transformers.
catalog:validate | Validates an XML file against all installed validators and prints out human readable errors and warnings.
The catalog:dump
command provides selective export of metacards based on date ranges.
The --created-after
and --created-before
options allow filtering on the date and time that the metacard was created, while --modified-after
and --modified-before
options allow filtering on the date and time that the metacard was last modified (which is the created date if no other modifications were made).
These date ranges are exclusive (i.e., if the date and time match exactly, the metacard will not be included).
The date filtering options (--created-after
, --created-before
, --modified-after
, and --modified-before
) can be used in any combination, with the export result including only metacards that match all of the provided conditions.
If no date filtering options are provided, created and modified dates are ignored, so that all metacards match.
Supported dates are taken from the common subset of ISO8601, matching the datetime from the following syntax:
datetime          = time | date-opt-time
time              = 'T' time-element [offset]
date-opt-time     = date-element ['T' [time-element] [offset]]
date-element      = std-date-element | ord-date-element | week-date-element
std-date-element  = yyyy ['-' MM ['-' dd]]
ord-date-element  = yyyy ['-' DDD]
week-date-element = xxxx '-W' ww ['-' e]
time-element      = HH [minute-element] | [fraction]
minute-element    = ':' mm [second-element] | [fraction]
second-element    = ':' ss [fraction]
fraction          = ('.' | ',') digit+
offset            = 'Z' | (('+' | '-') HH [':' mm [':' ss [('.' | ',') SSS]]])
ddf@local>// Given we've ingested a few metacards
ddf@local>catalog:latest
#  ID                                Modified Date  Title
1  a6e9ae09c792438e92a3c9d7452a449f  2019-05-29
2  b4aced45103a400da42f3b319e58c3ed  2019-05-29
3  a63ab22361e14cee9970f5284e8eb4e0  2019-05-29     myTitle
ddf@local>// Filter out older files
ddf@local>catalog:dump --created-after 2019-05-29 /home/user/ddf-catalog-dump
1 file(s) dumped in 0.015 seconds
ddf@local>// Filter out new file
ddf@local>catalog:dump --created-before 2019-05-29 /home/user/ddf-catalog-dump
2 file(s) dumped in 0.023 seconds
ddf@local>// Choose middle file
ddf@local>catalog:dump --created-after 2019-05-29 /home/user/ddf-catalog-dump
1 file(s) dumped in 0.020 seconds
ddf@local>// Modified dates work the same way
ddf@local>catalog:dump --modified-after 2019-05-29 /home/user/ddf-catalog-dump
1 file(s) dumped in 0.015 seconds
ddf@local>// Can mix and match, most restrictive limits apply
ddf@local>catalog:dump --modified-after 2019-05-29 /home/user/ddf-catalog-dump
1 file(s) dumped in 0.024 seconds
ddf@local>// Can use UTC instead of (or in combination with) explicit time zone offset
ddf@local>catalog:dump --modified-after 2019-05-29 /home/user/ddf-catalog-dump
2 file(s) dumped in 0.020 seconds
ddf@local>catalog:dump --modified-after 2019-05-29 /home/user/ddf-catalog-dump
1 file(s) dumped in 0.015 seconds
ddf@local>// Can leave off time zone, but default (local time on server) may not match what you expect!
ddf@local>catalog:dump --modified-after 2019-05-29 /home/user/ddf-catalog-dump
1 file(s) dumped in 0.018 seconds
ddf@local>// Can leave off trailing minutes / seconds
ddf@local>catalog:dump --modified-after 2019-05-29 /home/user/ddf-catalog-dump
2 file(s) dumped in 0.024 seconds
ddf@local>// Can use year and day number
ddf@local>catalog:dump --modified-after 2019-05-29 /home/user/ddf-catalog-dump
2 file(s) dumped in 0.027 seconds
8.3.1.3.2. Solr Commands
8.3.1.3.3. Subscriptions Commands
Note
|
The subscriptions commands are installed when the Catalog application is installed. |
Note that no arguments are required for the subscriptions:list
command.
If no argument is provided, all subscriptions will be listed.
A count of the subscriptions found matching the list command’s search phrase (or LDAP filter) is displayed first followed by each subscription’s ID.
ddf@local>subscriptions:list
Total subscriptions found: 3
Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
ddf@local>subscriptions:list "my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL"
Total subscriptions found: 1
Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
Warning
|
It is recommended to always quote the search phrase (or LDAP filter) argument to the command so that any special characters are properly processed. |
ddf@local>subscriptions:list "my*"
Total subscriptions found: 3
Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
ddf@local>subscriptions:list "*json*"
Total subscriptions found: 1
Subscription ID
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
ddf@local>subscriptions:list "*WSDL"
Total subscriptions found: 2
Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
The example below illustrates searching for any subscription that has "json" or "v20" anywhere in its subscription ID.
ddf@local>subscriptions:list -f "(|(subscription-id=*json*) (subscription-id=*v20*))"
Total subscriptions found: 2
Subscription ID
my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
The example below illustrates searching for any subscription that has json
and 172.18.14.169
in its subscription ID. This could be a handy way of finding all subscriptions for a specific site.
ddf@local>subscriptions:list -f "(&(subscription-id=*json*) (subscription-id=*172.18.14.169*))"
Total subscriptions found: 1
Subscription ID
my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
The arguments for the subscriptions:delete
command are the same as for the list
command, except that a search phrase or LDAP filter must be specified.
If one of these is not specified an error will be displayed.
When the delete
command is executed it will display each subscription ID it is deleting.
If a subscription matches the search phrase but cannot be deleted, a message in red will be displayed with the ID.
After all matching subscriptions are processed, a summary line is displayed indicating how many subscriptions were deleted out of how many matching subscriptions were found.
ddf@local>subscriptions:delete "my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification"
Deleted subscription for ID = my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
Deleted 1 subscriptions out of 1 subscriptions found.
ddf@local>subscriptions:delete "my*"
Deleted subscription for ID = my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
Deleted 2 subscriptions out of 2 subscriptions found.
ddf@local>subscriptions:delete "*json*"
Deleted subscription for ID = my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
Deleted 1 subscriptions out of 1 subscriptions found.
ddf@local>subscriptions:delete *
Deleted subscription for ID = my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.json|http://172.18.14.169:8088/services/json/local/event/notification
Deleted 3 subscriptions out of 3 subscriptions found.
ddf@local>subscriptions:delete -f "(&(subscription-id=*WSDL) (subscription-id=*172.18.14.169*))"
Deleted subscription for ID = my.contextual.id.v20|http://172.18.14.169:8088/mockCatalogEventConsumerBinding?WSDL
Deleted subscription for ID = my.contextual.id.v30|http://172.18.14.169:8088/mockEventConsumerBinding?WSDL
Deleted 2 subscriptions out of 2 subscriptions found.
8.3.1.3.4. Platform Commands
Command | Description
---|---
platform:describe | Shows the current platform configuration.
platform:envlist | Provides a list of environment variables.
8.3.1.3.5. Persistence Store Commands
8.3.1.3.6. Migrate Commands
Note
|
Performing a data migration creates, updates, or deletes existing metacards within the system. A data migration needs to be run when the structure of the data changes to ensure that existing resources function as expected. The effects of this command cannot be reverted or undone. It is highly recommended to back up the catalog before performing a data migration. |
The syntax for the migration command is
-
migrate:data --list
-
migrate:data --all
-
migrate:data <serviceId>
Select the <serviceId>
based on which data migration task you wish to run.
To see a list of all data migration tasks that are currently available, run the
migrate:data --list
command.
The --all
option runs every data migration task that is available.
The --list
option lists all available data migration tasks.
Note
|
If an error occurs while performing a data migration, the specifics of that error are available in the logs or are printed to the Karaf console. |
8.3.1.4. Command Scheduler
The Command Scheduler allows administrators to schedule Command Line Commands to be run at specified intervals.
The Command Scheduler allows administrators to schedule Command Line Shell Commands to be run in a platform-independent way. For instance, if an administrator wanted to use the Catalog commands to export all records of a Catalog to a directory, the administrator could write a cron job or a scheduled task to remote into the container and execute the command. Writing these types of scripts is specific to the administrator’s operating system and also requires extra logic for error handling in case the container is not up. The administrator can instead create a Command Schedule, which currently requires only two fields. The Command Scheduler only runs when the container is running, so there is no need to verify that the container is up. In addition, when the container is restarted, the commands are rescheduled and executed again. A command will be repeatedly executed indefinitely according to the configured interval until the container is shut down or the Scheduled Command is deleted.
Note
|
There will be further attempts to execute the command according to the configured interval even if an attempt fails. See the log for details about failures. |
8.3.1.4.1. Schedule a Command
Configure the Command Scheduler to execute a command at specific intervals.
-
Navigate to the Admin Console (https://{FQDN}:{PORT}/admin).
-
Select the Platform application.
-
Click on the Configuration tab.
-
Select Platform Command Scheduler.
-
Enter the command or commands to be executed in the Command text field. Commands can be separated by a semicolon and will execute in order from left to right.
-
Enter an interval in the Interval field. This can either be a Quartz Cron expression or a positive integer (seconds) (e.g.
0 0 0 1/1 * ? *
or12
). -
Select the interval type in the Interval Type drop-down.
-
Click the Save changes button.
Note
|
Scheduling commands will be delayed by 1 minute to allow time for bundles to load when DDF is starting up. |
8.3.1.4.2. Updating a Scheduled Command
Change the timing, order, or execution of scheduled commands.
-
Navigate to the Admin Console.
-
Click on the Platform application.
-
Click on the Configuration tab.
-
Under the Platform Command Scheduler configuration are all of the scheduled commands. Scheduled commands have the following syntax:
ddf.platform.scheduler.Command.{GUID}
such asddf.platform.scheduler.Command.4d60c917-003a-42e8-9367-1da0f822ca6e
. -
Find the desired configuration to modify, and update fields.
-
Click the Save changes button.
8.3.1.4.3. Output of Scheduled Commands
Commands that normally write out to the console will write out to the log.
For example, if an echo "Hello World"
command is set to run every five seconds, the log contains the following:
16:01:32,582 | INFO | heduler_Worker-1 | ddf.platform.scheduler.CommandJob 68 | platform-scheduler | Executing command [echo Hello World]
16:01:32,583 | INFO | heduler_Worker-1 | ddf.platform.scheduler.CommandJob 70 | platform-scheduler | Execution Output: Hello World
16:01:37,581 | INFO | heduler_Worker-4 | ddf.platform.scheduler.CommandJob 68 | platform-scheduler | Executing command [echo Hello World]
16:01:37,582 | INFO | heduler_Worker-4 | ddf.platform.scheduler.CommandJob 70 | platform-scheduler | Execution Output: Hello World
In short, administrators can view the status of a run within the log as long as the log level is set to INFO.
8.4. Monitoring
The DDF contains many tools to monitor system functionality, usage, and overall system health.
8.4.1. Metrics Reporting
Metrics are available in several formats and levels of detail.
Complete the following procedure after several queries have been executed.
-
Select Platform
-
Select Metrics tab
-
For individual metrics, choose the format desired from the desired timeframe column:
-
PNG
-
CSV
-
XLS
-
-
For a detailed report of all metrics, at the bottom of the page are selectors to choose time frame and summary level. A report is generated in xls format.
8.4.2. Managing Logging
The DDF supports a dynamic and customizable logging system including log level, log format, log output destinations, roll over, etc.
8.4.2.1. Configuring Logging
Edit the configuration file <DDF_HOME>/etc/org.ops4j.pax.logging.cfg.
8.4.2.2. DDF log file
The name and location of the log file can be changed with the following setting:
log4j.appender.out.file=<DDF_HOME>/data/log/ddf.log
8.4.2.3. Controlling log level
A useful way to debug and detect issues is to change the log level:
log4j.rootLogger=DEBUG, out, osgi:VmLogAppender
8.4.2.4. Controlling the size of the log file
Set the maximum size of the log file before it is rolled over by editing the value of this setting:
log4j.appender.out.maxFileSize=20MB
8.4.2.5. Number of backup log files to keep
Adjust the number of backup files to keep by editing the value of this setting:
log4j.appender.out.maxBackupIndex=10
8.4.2.6. Enabling logging of inbound and outbound SOAP messages for the DDF SOAP endpoints
By default, the DDF start scripts include a system property enabling logging of inbound and outbound SOAP messages.
-Dcom.sun.xml.ws.transport.http.HttpAdapter.dump=true
In order to see the messages in the log, one must set the logging level for org.apache.cxf.services
to INFO
. By default, the logging level for org.apache.cxf
is set to WARN
.
ddf@local>log:set INFO org.apache.cxf.services
8.4.2.7. Logging External Resources
Other appenders can be selected and configured.
For more detail on configuring the log file and what is logged to the console see: Karaf Documentation: Log .
8.4.2.8. Enabling HTTP Access Logging
To enable access logs for the current DDF, do the following:
-
Update the
jetty.xml
file located inetc/
adding the following xml:
<Get name="handler">
<Call name="addHandler">
<Arg>
<New class="org.eclipse.jetty.server.handler.RequestLogHandler">
<Set name="requestLog">
<New id="RequestLogImpl" class="org.eclipse.jetty.server.NCSARequestLog">
<Arg><SystemProperty name="jetty.logs" default="data/log/"/>/yyyy_mm_dd.request.log</Arg>
<Set name="retainDays">90</Set>
<Set name="append">true</Set>
<Set name="extended">false</Set>
<Set name="LogTimeZone">GMT</Set>
</New>
</Set>
</New>
</Arg>
</Call>
</Get>
Change the location of the logs as desired. In the settings above, the location defaults to data/log (the same directory as the standard log file).
The log uses the National Center for Supercomputing Applications (NCSA) Common log format (hence the class 'NCSARequestLog'). This is the most popular format for access logs and can be parsed by many web server analytics tools. Here is a sample output:
127.0.0.1 - - [14/Jan/2013:16:21:24 +0000] "GET /favicon.ico HTTP/1.1" 200 0
127.0.0.1 - - [14/Jan/2013:16:21:33 +0000] "GET /services/ HTTP/1.1" 200 0
127.0.0.1 - - [14/Jan/2013:16:21:33 +0000] "GET /services//?stylesheet=1 HTTP/1.1" 200 0
127.0.0.1 - - [14/Jan/2013:16:21:33 +0000] "GET /favicon.ico HTTP/1.1" 200 0
8.4.2.9. Using the LogViewer
-
Navigate to the Admin Console
-
Navigate to the System tab
-
Select Logs
The LogViewer displays the most recent 500 log messages by default, but will grow to a maximum of 5000 messages. To view incoming logs, select the PAUSED button to toggle it to LIVE mode. Switching this back to PAUSED will prevent any new logs from being displayed in the LogViewer. Note that this only affects the logs displayed by the LogViewer and does not affect the underlying log.
Log events can be filtered by:
-
Log level (
ERROR
, WARNING
, etc.). -
The LogViewer displays messages at the currently configured log level for the Karaf logs.
-
See Controlling Log Level to change log level.
-
-
-
Log message text.
-
Bundle generating the message.
Warning
|
It is not recommended to use the LogViewer if the system logger is set to a low reporting level such as TRACE or DEBUG, because the volume of log messages can be very large. The actual logs being polled by the LogViewer can still be accessed at <DDF_HOME>/data/log. |
Note
|
The LogViewer settings don’t change any of the underlying logging settings, only which messages are displayed. It does not affect the logs generated or events captured by the system logger. |
8.5. Troubleshooting
If, after configuration, a DDF is not performing as expected, consult this table of common fixes and workarounds.
Issue | Solution |
---|---|
Unable to unzip distribution on Windows platform |
The default Windows zip utility is not compatible with the DDF distribution zip file. Use Java or a third-party zip utility. |
Unable to federate on Windows platform |
The default Windows firewall is not compatible with DDF. |
Ingesting more than 200,000 data files stored on NFS shares may cause a Java heap space error (Linux-only issue). |
This is an NFS bug in which duplicate entries are created for some files when listing a directory. Depending on the OS, some Linux machines handle the bug better and can list the files but report an incorrect count; others encounter a Java heap space error because there are too many files to list. As a workaround, ingest files in batches smaller than 200,000. |
Ingesting a serialized data file with scientific notation in a WKT string causes a RuntimeException. |
A WKT string with scientific notation, such as POINT (-34.8932113039107 -4.77974239601E-5), will not ingest. This occurs with the serialized data format only. |
Exception starting DDF (Windows): an exception is sometimes thrown when starting DDF on a Windows machine (x86), for example when using an unsupported terminal. |
Install missing Windows libraries. Some Windows platforms are missing libraries that are required by DDF. These libraries are provided by the Microsoft Visual C++ 2008 Redistributable Package x64. |
A CXF BusException is thrown. |
Restart DDF: shut down the running instance, then start it again. |
Distribution will not start: DDF will not start when calling the start script defined during installation. |
Complete the following procedure. |
Multiple instances appear to be running: this can be caused when another DDF is not properly shut down. |
Perform one or all of the following recommended solutions, as necessary. |
8.5.1. Deleted Records Are Being Displayed In The Search UI’s Search Results
When queries are issued by the Search UI, the query results that are returned are also cached in an internal Solr database for faster retrieval when the same query may be issued in the future. As records are deleted from the catalog provider, this Solr cache is kept in sync by also deleting the same records from the cache if they exist.
Sometimes the cache may get out of sync with the catalog provider such that records that should have been deleted are not.
When this occurs, users of the Search UI may see stale results since these records that should have been deleted are being returned from the cache.
Records in the cache can be manually deleted using the URL commands listed below from a browser.
In these command URLs, metacard_cache
is the name of the Solr query cache.
-
To delete all of the records in the Solr cache:
https://{FQDN}:{PORT}/solr/metacard_cache/update?stream.body=<delete><query>*:*</query></delete>&commit=true
-
To delete a specific record in the Solr cache by ID (specified by the original_id_txt field):
https://{FQDN}:{PORT}/solr/metacard_cache/update?stream.body=<delete><query>original_id_txt:50ffd32b21254c8a90c15fccfb98f139</query></delete>&commit=true
-
To delete record(s) in the Solr cache using a query on a field in the record(s) - in this example, the
title_txt
field is being used with wildcards to search for any records with word remote in the title:
https://{FQDN}:{PORT}/solr/metacard_cache/update?stream.body=<delete><query>title_txt:*remote*</query></delete>&commit=true
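Because these are plain HTTP requests, the same deletes can also be issued from the command line with a tool such as curl rather than a browser. A sketch, assuming the same metacard_cache core shown above (the URL is quoted so the shell does not interpret the angle brackets and ampersand):
curl -k "https://{FQDN}:{PORT}/solr/metacard_cache/update?stream.body=<delete><query>*:*</query></delete>&commit=true"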
9. Data Management
9.1. Ingesting Data
Ingesting is the process of getting metacard(s) into the Catalog Framework. Ingested files are "transformed" into a neutral format that can be searched against as well as migrated to other formats and systems. There are multiple methods available for ingesting files into the DDF.
Note
|
Guest Claims Attributes and Ingest
Ensure that appropriate Guest Claims are configured to allow guest users to ingest data and query the catalog. |
9.1.1. Ingest Command
The Command Console has a command-line option for ingesting data.
Note
|
Ingesting with the console ingest command creates a metacard in the catalog, but does not copy the resource to the content store. The Ingest Command requires read access to the directory being ingested. See the URL Resource Reader for configuring read permission entries to the directory. |
The syntax for the ingest command is
ingest -t <transformer type> <file path>
Select the <transformer type>
based on the type of file(s) ingested.
Metadata will be extracted if it exists in a format compatible with the transformer.
The default transformer is the XML input transformer, which supports the metadata schema catalog:metacard
.
To see a list of all transformers currently installed, and the file types supported by each, run the catalog:transformers
command.
For more information on the schemas and file types (MIME types) supported by each transformer, see the Input Transformers.
The <file path>
is relative to the <DDF_HOME> directory.
This can be the path to a file or a directory containing the desired files.
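For example, a hypothetical invocation using the default XML input transformer against an illustrative directory of metacard files relative to <DDF_HOME> might be:
ddf@local>ingest -t xml data/sample_metacards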
Note
|
Windows Users
On Windows, put the file path in quotes. |
Successful command line ingest operations are accompanied with messaging indicating how many files were ingested and how long the operations took.
The ingest command also prints which files could not be ingested with additional details recorded in the ingest log.
The default location of the log is <DDF_HOME>/data/log/ingest_error.log
.
9.1.2. User Interface Ingest
Files can also be ingested directly from Intrigue.
Warning
|
The Intrigue uploader is intended for the upload of products (such as images or documents), not metadata files (such as Metacard XML). A user will not be able to specify which input transformer is used to ingest the document. |
See Ingesting from Intrigue for details.
9.1.3. Content Directory Monitor Ingest
The Catalog application contains a Content Directory Monitor feature that allows files placed in a single directory to be monitored and ingested automatically. For more information about configuring a directory to be monitored, see Configuring the Content Directory Monitor.
Files placed in the monitored directory will be ingested automatically.
If a file cannot be ingested, it will be moved to an automatically-created directory named .errors
.
More information about the ingest operations can be found in the ingest log.
The default location of the log is <DDF_HOME>/data/log/ingest_error.log
.
Optionally, ingested files can be automatically moved to a directory called .ingested
.
9.1.4. External Methods of Ingesting Data
Third-party tools, such as cURL.exe and the Chrome Advanced Rest Client, can be used to send files to DDF for ingest.
curl -H "Content-type: application/json;id=geojson" -i -X POST -d @"C:\path\to\geojson_valid.json" https://{FQDN}:{PORT}/services/catalog
curl -H "Content-type: application/json;id=geojson" -i -X POST -d @geojson_valid.json https://{FQDN}:{PORT}/services/catalog
Where:
-H adds an HTTP header. In this case, Content-type header application/json;id=geojson
is added to match the data being sent in the request.
-i requests that HTTP headers are displayed in the response.
-X specifies the type of HTTP operation. For this example, it is necessary to POST (ingest) data to the server.
-d specifies the data sent in the POST request. The @
character is necessary to specify that the data is a file.
The last parameter is the URL of the server that will receive the data.
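As a sketch of what geojson_valid.json might contain, a minimal GeoJSON Feature is shown below; the property names mirror the GeoJSON read example later in this documentation, and the exact fields required depend on the configured input transformer:
{
  "geometry": null,
  "type": "Feature",
  "properties": {
    "title": "Test GeoJSON Ingest",
    "description": "Illustrative ingest payload."
  }
}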
This should return a response similar to the following (the actual catalog ID in the id and Location URL fields will be different):
HTTP/1.1 201 Created
Content-Length: 0
Date: Mon, 22 Apr 2015 22:02:22 GMT
id: 44dc84da101c4f9d9f751e38d9c4d97b
Location: https://{FQDN}:{PORT}/services/catalog/44dc84da101c4f9d9f751e38d9c4d97b
Server: Jetty(7.5.4.v20111024)
-
Use a web browser to verify a file was successfully ingested. Enter the URL returned in the response’s HTTP header in a web browser. For instance, in our example, it was
/services/catalog/44dc84da101c4f9d9f751e38d9c4d97b
. The browser will display the catalog entry as XML. -
Verify the catalog entry exists by executing a query via the OpenSearch endpoint.
-
Enter the following URL in a browser
/services/catalog/query?q=ddf
. A single result, in Atom format, should be returned.
A resource can also be ingested with metacard metadata associated with it using the multipart/mixed content type.
curl -k -X POST -i -H "Content-Type: multipart/mixed" -F parse.resource=@/path/to/resource -F parse.metadata=@/path/to/metacard https://{FQDN}:{PORT}/services/catalog
More information about the ingest operations can be found in the ingest log.
The default location of the log is <DDF_HOME>/data/log/ingest_error.log
.
9.1.5. Creating And Managing System Search Forms Through Karaf
System search forms provide a way to execute queries with pre-defined templates and search criteria. System search forms are loaded via the system and are read-only. These commands allow an administrator to ingest, modify, or remove system search forms within the system.
forms:load
forms:load --formsDirectory "/etc/forms" --forms "forms.json" --results "results.json"
Where:
--formsDirectory Specifies the directory in which the forms JSON and XML reside
--results Specifies the file name of the results.json
file
--forms Specifies the file name of the forms.json
file
It’s important to note that forms:load
will fall back to the system default locations for the forms directory, forms file, and results file. The defaults are as follows:
formsDirectory: "/etc/forms"
forms: "forms.json"
results: "results.json"
Example search forms and result form data can be found in <DDF_HOME>/etc/forms/readme.md
.
Managing Forms
In addition to ingesting new system forms into the system, the forms:manage command provides the capability to view and remove them.
forms:manage --list
forms:manage --remove-single "METACARD_ID"
forms:manage --remove-all
Where:
--list Displays the titles and IDs of all system forms in the system
--remove-single Takes a metacard ID as an argument and removes the corresponding form
--remove-all Removes all system forms from the system
9.1.6. Other Methods of Ingesting Data
The DDF provides endpoints for integration with other data systems and to further automate ingesting data into the catalog. See Endpoints for more information.
9.2. Validating Data
Configure DDF to perform validation on ingested documents to verify the integrity of the metadata brought into the catalog.
Isolate metacards with data validation issues and edit the metacard to correct validation errors. Additional attributes can be added to metacards as needed.
9.2.1. Validator Plugins on Ingest
When Enforce Errors is enabled within the Admin Console, validator plugins ensure the data being ingested is valid. Below is a list of the validators run against the data ingested.
Note
|
Enforcing errors:
|
9.2.1.1. Validators run on ingest
-
TDF Schema Validation Service: This service validates a TDO against a TDF schema.
-
Size Validator: Validates the size of an attribute’s value(s).
-
Range Validator: Validates an attribute’s value(s) against an inclusive numeric range.
-
Enumeration Validator: Validates an attribute’s value(s) against a set of acceptable values.
-
Future Date Validator: Validates an attribute’s value(s) against the current date and time, validating that they are in the future.
-
Past Date Validator: Validates an attribute’s value(s) against the current date and time, validating that they are in the past.
-
ISO3 Country Code Validator: Validates an attribute’s value(s) against the ISO_3166-1 Alpha3 country codes.
-
Pattern Evaluator: Validates an attribute’s value(s) against a regular expression.
-
Required Attributes Metacard Validator: Validates that a metacard contains certain attributes.
-
Duplication Validator: Validates metacard against the local catalog for duplicates based on configurable attributes.
-
Relationship Validator: Validates values that an attribute must have, can only have, and/or can’t have.
-
Metacard WKT Validator: Validates a location metacard attribute (WKT string) against valid geometric shapes.
9.2.2. Configuring Schematron Services
DDF uses Schematron Validation to validate metadata ingested into the catalog.
Custom schematron rulesets can be used to validate metacard metadata.
Multiple services can be created, and each service can have multiple rulesets associated with it.
Namespaces are used to distinguish services.
The root schematron files may be placed anywhere on the file system as long as they are configured with an absolute path.
Any root schematron files with a relative path are assumed to be relative to <DDF_HOME>/schematron
.
Tip
|
Schematron files may reference other schematron files using an include statement with a relative path. However, when using the document function within a schematron ruleset to reference another file, the path must be absolute or relative to the DDF installation home directory. |
Schematron validation services are configured with a namespace and one or more schematron rulesets; an illustrative ruleset follows the procedure below. Additionally, warnings may be suppressed so that only errors are reported.
To create a new service:
-
Navigate to the Admin Console.
-
Select the Catalog.
-
Select Configuration.
-
Ensure that
catalog-schematron-plugin
is started. -
Select Schematron Validation Services.
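As an illustration only (this ruleset is hypothetical and not shipped with DDF), a minimal schematron ruleset that reports an error when a metacard has no title value might look like this:
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical ruleset: asserts that metacard XML contains a non-empty title. -->
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:ns prefix="m" uri="urn:catalog:metacard"/>
  <sch:pattern>
    <sch:rule context="m:metacard">
      <sch:assert test="m:string[@name='title']/m:value">A metacard must contain a title value.</sch:assert>
    </sch:rule>
  </sch:pattern>
</sch:schema>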
9.2.3. Viewing Invalid Metacards
To view invalid metacards, query for them through Intrigue. Viewing requires DDF administrator privileges if the Catalog Federation Strategy is configured to filter invalid metacards.
-
Navigate to Intrigue (https://{FQDN}:{PORT}/search).
-
Select Advanced Search.
-
Change the search property to metacard-tags.
-
Change the value of the property to invalid.
-
Select Search.
9.2.4. Manually Editing Attributes
For small numbers of metacards, or for metacards ingested without overrides, attributes can be edited directly.
Warning
|
Metacards retrieved from connected sources or from a fanout proxy will appear to be editable but are not truly local so changes will not be saved. |
-
Navigate to Intrigue.
-
Search for the metacard(s) to be updated.
-
Select the metacards to be updated from the results list.
-
Select Summary or Details.
-
Select Actions from the Details view.
-
Select Add.
-
Select attribute from the list of available attributes.
-
Add any values desired for the attribute.
9.2.5. Injecting Attributes
To create a new attribute, it must be injected into the metacard before it is available to edit or override.
Injections are defined in a JSON-formatted file. See Developing Attribute Injections for details on creating an attribute injection file.
9.2.6. Overriding Attributes
Automatically change the value of an existing attribute on ingest by setting an attribute override.
Note
|
Attribute overrides are available for the following ingest methods:
|
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select Configuration.
-
Select the configuration for the desired ingest method.
-
Catalog Content Directory Monitor.
-
Confluence Connected Source.
-
Confluence Federated Source.
-
-
Select Attribute Overrides.
-
Enter the key-value pair for the attribute to override and the value(s) to set, as illustrated below.
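As an illustration only (the attribute name and value are examples taken from elsewhere in this documentation, and the key=value form is an assumption about how the console accepts the pair), an override entry might look like:
point-of-contact=email@example.com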
9.3. Backing Up the Catalog
To backup local catalog records, a Catalog Backup Plugin is available. It is not installed by default for performance reasons.
See Catalog Backup Plugin for installation and configuration instructions.
9.4. Removing Expired Records from the Catalog
DDF has many ways to remove expired records from the underlying Catalog data store. A benefit of data standardization is that an attempt can be made to remove records without the need to know any vendor-specific information. Whether the data store is a search server, a NoSQL database, or a relational database, expired records can be removed universally using the Catalog API and the Catalog Commands.
9.5. Migrating Data
Data migration is the process of moving metacards from one catalog provider to another. It is also the process of translating metadata from one format to another. Data migration is necessary when a user decides to use metadata from one catalog provider in another catalog provider.
The process for changing catalog providers involves first exporting the metadata from the original catalog provider and ingesting it into another.
From the Command Console, use these commands to export data from the existing catalog and then import into the new one.
catalog:export
-
Exports Metacards and history from the current Catalog to an auto-generated file inside <DDF_HOME>.
Use the catalog:export --help
command to see all available options.
catalog:import <FILE_NAME>
-
Imports Metacards and history into the current Catalog.
Use the catalog:import --help
command to see all available options.
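A minimal migration session from the Command Console might therefore look like the following sketch, where <FILE_NAME> is the auto-generated export file reported by catalog:export:
ddf@local>catalog:export
ddf@local>catalog:import <FILE_NAME>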
9.6. Automatically Added Metacard Attributes
This section describes how attributes are automatically added to metacards.
9.6.1. Attributes Added on Ingest
A metacard is first created and populated by parsing the ingested resource with an Input Transformer.
Then Attributes Are Injected, Default Attribute Types are applied, and Attributes Are Overridden.
Finally the metacard is passed through a series of Pre-Authorization Plugins and Pre-Ingest Plugins.
9.6.1.1. Attributes Added by Input Transformers
Input Transformers create and populate metacards by parsing a resource. See File Format Specific Attributes to see the attributes used for specific file formats.
DDF chooses which input transformer to use by:
-
Resolving the mimetype for the resource.
-
Gathering all of the input transformers associated with the resolved mimetype. See Supported File Formats for a list of supported mimetypes.
-
Iterating through the transformers until a successful transformation is performed.
The first transformer that can successfully create a metacard from the ingested resource is chosen. If no transformer can successfully transform the resource, the ingest process fails.
Important
|
Each of the ingest methods have their own subtle differences when resolving the resource’s mimetype/input transformer. |
9.6.1.2. Attributes Added by Attribute Injection
Attribute Injection is the act of adding attributes to a metacard’s Metacard Type.
A Metacard Type indicates the attributes available for a particular metacard, and is created at the same time as the metacard.
Note
|
Attribute values can only be set/modified if the attribute exists in the metacard’s metacard type. Attributes are initially injected with blank values. However, if an attempt is made to inject an attribute that already exists, the attribute will retain the original value. |
See Catalog Taxonomy Definitions for a list of attributes injected by default.
See Developing Attribute Injections to learn how to configure attribute injections.
9.6.1.3. Attributes Added by Default Attribute Types
Developing Default Attribute Types is a configurable way to assign default values to a metacard’s attributes.
Note that the attribute must be part of the metacard’s Metacard Type before it can be assigned a default value.
See Attributes Added By Attribute Injection for more information about injecting attributes into the metacard type.
9.6.1.4. Attributes Added by Attribute Overrides (Ingest)
Attribute Overriding is the act of replacing existing attribute values with a new value.
Attribute overrides can be configured for the Content Directory Monitor.
Note that the attribute must be part of the metacard’s Metacard Type before it can be overridden.
See Attributes Added By Attribute Injection for more information about injecting attributes into the metacard type.
9.6.1.5. Attributes Added by Pre-Authorization Plugins
The Pre-Authorization Plugins provide an opportunity to take action before any security rules are applied.
-
The Metacard Ingest Network Plugin is a configurable plugin that allows the conditional insertion of new attributes on metacards during ingest based on network information from the ingest request. See Configuring the Metacard Ingest Network Plugin for configuration details.
9.6.1.6. Attributes Added by Pre-Ingest Plugins
The Pre-Ingest Plugins are responsible for setting attribute fields on metacards before they are stored in the catalog.
-
The Expiration Date Pre-Ingest Plugin adds or updates expiration dates which can be used later for archiving old data.
-
The Geocoder Plugin is responsible for populating the metacard’s
Location.COUNTRY_CODE
attribute if the metacard has an associated location. If the metacard’s country code is already populated, the plugin will not override it. -
The Identification Plugin assigns IDs to registry metacards and adds/updates IDs on create and update.
-
The Metacard Groomer plugin adds/updates IDs and timestamps to the created metacard.
9.6.2. Attributes Added on Query
Metacards resulting from a query will undergo Attribute Injection, then have their Attributes Overridden.
9.6.2.1. Attributes Added by Attribute Overrides (Query)
Attribute Overriding is the act of replacing existing attribute values with a new value.
Attribute overrides can be configured for query results from the following Sources:
Note that the attribute must be part of the metacard’s Metacard Type before it can be overridden.
See Attributes Added By Attribute Injection for more information about injecting attributes into the metacard type.
Using
These user interfaces are available in DDF.
- Using the Landing Page
-
Using the Landing Page.
- Using Intrigue
-
Using Intrigue.
- Using the Simple Search
-
Using the Simple Search user interface.
10. Using the Landing Page
The DDF Landing Page is the starting point for using DDF. It is accessible at https://{FQDN}:{PORT}.
10.1. Search DDF Button
The search button navigates to the Search UI, enabling catalog queries.
10.2. Data Source Availability
The data source availability pane provides a quick glance at the status of configured data sources.
10.3. Announcements
The announcements pane contains messages from system administrators.
11. Using Intrigue
Intrigue is the most advanced search interface available with DDF. It provides metadata search and discovery, resource retrieval, and workspace management with a 3D or optional 2D map visualization.
Note
|
For more detail on any feature or button within Intrigue, click the corresponding help () icon. |
11.1. Accessing Intrigue
The default URL for Intrigue is https://{FQDN}:{PORT}/search/catalog
Note
|
Catalog UI Guest Users
If Guest access has been enabled, users not signed in to DDF (i.e. guest users) will have access to search functions, but all workspace configuration and settings will only exist locally and will not be available for sharing. |
The default view for Intrigue is the Workspaces view. For other views or to return to the Workspaces view, click the Navigation menu in the upper-left corner of Intrigue and select the desired view.
11.2. Workspaces in Intrigue
Within Intrigue, workspaces are collections of settings, searches, and bookmarks that can be shared between users and stored for repeated access.
11.2.1. Creating a Workspace in Intrigue
Before searching in DDF, at least one workspace must be created.
11.2.2. Configuring a Workspace in Intrigue
Configure each workspace with searches and share options.
-
From the default Workspaces view, select the workspace to add a search to.
-
Click Search DDF Intrigue in the upper left corner, enter search terms, and click Search to add a search. This step can be repeated to add additional searches. Each workspace can have up to ten searches.
-
Select Basic Search to select simple search criteria, such as text, time, and location.
-
Select Advanced Search to access a query builder for more complex queries.
-
-
Click the save () icon next to the workspace title in the upper left corner.
-
Workspaces: View all available workspaces.
-
Upload: Add new metadata and resources to the catalog.
-
Sources: Lists all sources and their statuses.
-
Open Workspaces: Lists open workspaces.
-
To view a workspace’s options from the Workspaces view, press the Options button () for the workspace.
-
Save: Save changes to the workspace.
-
Run All Searches: Start all saved searches within this workspace.
-
Cancel All Searches: Cancel all running searches.
-
Open in New Tab: Opens this workspace in a separate tab.
-
View Sharing: View and edit settings for sharing this workspace. Users must be signed in to share workspaces or view shared workspaces.
-
View Details: View the current details for a cloud-based workspace. Users must be signed in to view workspace details.
-
Duplicate: Create a copy of this workspace.
-
Subscribe/Unsubscribe: Selecting Subscribe will enable email notifications for search results on this workspace. Selecting Unsubscribe will disable email notifications for search results on this workspace.
-
Move to Trash: Delete (archive) this workspace.
-
11.2.3. Sharing Workspaces
Workspaces can be shared between users at different levels of access as needed.
-
From the Workspaces view, select the Options menu () for the workspace in which sharing will be modified.
-
Select View Sharing.
-
To share by user role, set the drop-down menu to Read or Read and Write for each desired role. All users with that role will be able to view the workspace, but will be limited based on the permission assigned. No user will be granted the ability to share the workspace with additional users.
-
To share with an individual user, add his/her email to the email list and set the drop-down menu to Read, Read and Write, or Read, Write, and Share.
-
-
Click Apply.
-
From the Workspaces view, select the Options menu () for the workspace in which sharing will be modified.
-
Select View Sharing.
-
To remove the workspace from users with specific roles, set the drop-down menu to No Access for those roles.
-
To remove individual users, remove the users' email addresses from the email list.
-
-
Click Apply.
11.3. Ingesting from Intrigue
Data can be ingested via Intrigue.
Warning
|
The Intrigue uploader is intended for the upload of products (such as images or documents), not metadata files (such as Metacard XML). A user will not be able to specify which input transformer is used to ingest the document. |
Files are processed individually with a visual status indication of each upload.
If there are any failures, the user is notified with a message on that specific product.
More information about the uploads can be found in the ingest log.
The default location of the log is <DDF_HOME>/data/log/ingest_error.log
.
Note
|
Uploaded products may be marked with Validation Warnings or Errors. Additional configuration may be needed to view these products in searches. |
11.3.1. Using the Upload Editor
Intrigue provides an upload editor form that allows users to customize the metadata of their uploads. If enabled, it will appear alongside the upload dropzone and will display a list of attributes that may be set.
To set an attribute, provide a value in the corresponding form control. All custom values in the form will be applied on upload. If a field is left blank, the attribute will be ignored. To remove all custom values entered, click the "Reset Attributes" button at the bottom of the form.
Certain attributes within the form may be marked as required (indicated by an asterisk). These fields must be set before uploads will be permitted.
11.4. Searching with Intrigue
The Search pane has two tabs: Search and Lists.
11.4.1. Search Tab
View and edit searches from the Search tab.
The available searches for a workspace can be viewed by clicking on the drop-down on the Search tab.
At the bottom of each search is a list of options for the search.
-
Run: Trigger this search to begin immediately.
-
Edit: Edits the search criteria.
-
Settings: Edits the search settings, such as sorting.
-
Notifications: Allows setting up search notifications.
-
Stop: Stop this search.
-
Delete: Remove this search.
-
Duplicate: Create a copy of this search as a starting point.
-
Search Archived: Execute this search, but specifically for archived results.
-
Search Historical: Execute this search, but specifically for historical results.
11.4.1.1. Editing a Search
An existing search can be updated by selecting the search in the Search tab of a workspace and by clicking the Edit () icon.
-
Text: Perform a minimal textual search that is treated identically to a Basic search with only Text specified.
-
Basic: Define a Text, Temporal, Spatial, or Type Search.
-
Text Search Details: Searches across all textual data of the targeted data source. Text search capabilities include:
-
Search for an exact word, such as
Text = apple
: Returns items containing the word "apple" but not "apples". Matching occurs on word boundaries. -
Search for the existence of items containing multiple words, such as
Text = apple orange
: Returns items containing both "apple" and "orange" words. Words can occur anywhere in an item’s metadata. -
Search using wildcards, such as
Text = foo*
: Returns items containing words like "food", "fool", etc. -
Wildcards should only be used for single word searches, not for phrases.
Warning: When searching with wildcards, do not include the punctuation at the beginning or the end of a word. For example, search for Text = ca*
instead of Text = -ca*
when searching for words like "cat", "-cat", etc., and search for Text = *og
instead of Text = *og.
when searching for words like "dog", "dog.", etc.
Text searches are by default case insensitive, but case sensitive searches are an option.
-
-
Temporal Search Details: Search based on absolute time of the created, modified, or effective date.
-
Any: Search without any time restrictions (default).
-
After: Search records after a specified time.
-
Before: Search records before a specified time.
-
Between: Set a beginning and end time to search between.
-
Relative: Search records relative to the current time.
-
-
-
Search by latitude/longitude (decimal degrees or degrees minutes seconds), USNG/MGRS, or UTM using a line, polygon, point-radius, or bounding box. Spatial criteria can also be defined by entering a Keyword for a region, country, or city in the Location section of the query builder.
-
-
-
Search for specific content types.
-
-
-
Advanced: Advanced query builder can be used to create more specific searches than can be done through the other methods.
-
Advanced Query Builder Details
-
Operator: If 'AND' is used, all the filters in the branch have to be true for this branch to be true. If 'OR' is used, only one of the filters in this branch has to be true for this branch to be true.
-
Property: Property to compare against.
-
Comparison: How to compare the value for this property against the provided value. Depending on the type of property selected, various comparison values will be available. See Types of Comparators
-
Search Terms: The value for the property to use during comparison.
-
Sorting: Sort results by relevance, distance, created time, modified time or effective time.
-
Sources: Perform an enterprise search (the local Catalog and all federated sources) or search specific sources.
-
-
Advanced Query Builder Comparators
-
Textual:
-
CONTAINS: Equivalent to Basic Text Search with Matchcase set to No.
-
MATCHCASE: Equivalent to Basic Text Search with Matchcase set to Yes.
-
=: Matches if an attribute is precisely equal to that search term.
-
NEAR: Performs a fuzzy proximity-based textual search. A NEAR query of
"car street" within 3
will match a sample text of the blue car drove down the street with the red building
because performing three word deletions in that phrase (drove
, down
, the
) causes car
and street
to become adjacent. -
EMPTY: Search records when the attribute itself does not exist or when the attribute value is null.
More generally, a NEAR query of
"A B" within N
matches a text document if you can perform at most N insertions/deletions to your document and end up with A
followed by B
. It is worth noting that
"street car" within 3
will not match the above sample text because it is not possible to match the phrase "street car"
after only three insertions/deletions. "street car" within 5
will match, though, as you can perform three word deletions to get "car street"
, one deletion of one of the two words, and one insertion on the other side. If multiple terms are used in the phrase, then the
within
amount specifies the total number of edits that can be made to attempt to make the full phrase match. "car down street" within 2
will match the above text because it takes two word deletions (drove
, the
) to turn the phrase car drove down the street
into car down street
.
-
-
Temporal:
-
BEFORE: Search records before a specified time.
-
AFTER: Search records after a specified time.
-
RELATIVE: Search records relative to the current time.
-
EMPTY: Search records when the attribute itself does not exist or when the attribute value is null.
-
-
Spatial:
-
INTERSECTS: Gives a component with the same functionality as Basic Spatial Search.
-
EMPTY: Search records when the attribute itself does not exist or when the attribute value is null.
-
-
Numeric:
-
>: Search records with field entries greater than the specified value.
-
>=: Search records with field entries greater than or equal to the specified value.
-
=: Search records with field entries equal to the specified value.
-
<=: Search records with field entries less than or equal to the specified value.
-
<: Search records with field entries less than the specified value.
-
EMPTY: Search records when the attribute itself does not exist or when the attribute value is null.
-
-
-
11.4.1.1.3. Viewing Search Status
An existing search’s status can be viewed by selecting the search in the Search tab of a workspace and by clicking the Status () icon. The Status view for a search displays information about the sources searched.
Note
|
Intersecting Polygon Searches
If a self-intersecting polygon is used to perform a geographic search, the polygon will be converted into a non-intersecting one via a convex hull conversion. In the example below, the blue line shows the original self-intersecting search polygon and the red line shows the converted polygon that will be used for the search. The blue dot shows a search result that was not within the original polygon but was returned because it was within the converted polygon. |
11.4.1.2. Refining Search Results
Returned search results can be refined further, bookmarked, and/or downloaded from the Search tab. Result sets are color-coded by source as a visual aid. There is no semantic meaning to the colors assigned.
-
On the Search tab, select a search from the drop-down list.
-
Perform any of these actions on the results list of the selected search:
-
Filter the result set locally. This does not re-execute the search.
-
Customize results sorting. The default sort is by title in ascending order.
-
Toggle results view between List and Gallery.
-
11.4.1.3. Search Result Options
-
Download: Downloads the result’s associated product directly to the local machine. This option is only available for results that have products.
-
Bookmark: Adds/removes the results to/from the saved bookmarks.
-
Hide from Future Searches: Adds to a list of results that will be hidden from future searches.
-
Expand Metacard View: Navigates to a view that only focuses on this particular result.
-
Create Search from Location: Searches for all records that intersect the current result’s location geometry.
11.4.2. Lists Tab
Lists organize results and enable performing actions on those sets of results.
-
Perform any of these actions on lists:
-
Filter the result set locally (does not re-execute the search),
-
Customize results sorting (Default: Title in Ascending Order).
-
Toggle results view between List and Gallery.
-
Note
|
Lists are not available to guest users. |
11.4.2.1. Creating a List
A new list can be created by selecting the Lists tab and selecting the new list text.
11.4.2.2. Adding/Removing Results to a List
Results can be added to a list by selecting the + icon on a result.
Results can be added to or removed from a list through the result’s dropdown menu.
11.5. Viewing Search Results
11.5.1. Adding Visuals
Visuals are different ways to view search results.
-
Click the Add Visual () icon in the bottom right corner of Intrigue.
-
Select a visual to add.
-
2D Map: A 2 dimensional map view.
-
3D Map: A 3 dimensional map view.
-
Inspector: In-depth details and actions for the results of a search.
-
Histogram: A configurable histogram view for the results of a search.
-
Table: A configurable table view for the results of a search.
-
The Search tab displays a list of all of the search results for the selected search. The Inspector visual provides in-depth information and actions for each search result.
- Summary
-
A summarized view of the result.
- Details
-
A detailed view of the result.
- History
-
View revision history of this record.
- Associations
-
View or edit the relationship(s) between this record and others in the catalog.
- Quality
-
View the completeness and accuracy of the metadata for this record.
- Actions
-
Export the metadata/resource to a specific format.
- Archive
-
Remove the selected result from standard search results.
- Overwrite
-
Overwrite a resource.
11.5.2. Editing Records
Results can be edited from the Summary or Details tabs in the Inspector visual.
11.5.3. Viewing Text Previews
If a preview for a result is available, an extra tab will appear in the Inspector visual that allows you to see a preview of the resource.
11.5.4. Editing Associations on a Record
Update relationships between records through Associations.
-
Select the desired result from the Search tab.
-
Select the Inspector visual.
-
Select the Associations tab.
-
Select Edit.
-
For a new association, select Add Association. Only items in the current result set can be added as associations.
-
Select the related result from either the Parent or Child drop-down.
-
Select the type of relationship from the Relationship drop-down.
-
Select Save.
-
-
To edit an existing association, update the selections from the appropriate drop-downs and select Save.
View a graphical representation of the associations by selecting Graph icon from the Associations menu.
11.5.5. Viewing Revision History
View the complete revision history of a record.
-
Select the desired result from the Search tab.
-
Select the Inspector visual.
-
Select the History tab.
-
Select a previous version from the list.
-
Select Revert to Selected Version to undo changes made after that revision.
-
11.5.6. Viewing Metadata Quality
View and fix issues with metadata quality in a record.
Note
|
Correcting metadata issues may require administrative permissions. |
-
Select the desired result from the Search tab.
-
Select the Inspector visual.
-
Select the Quality tab.
-
A report is displayed showing any issues:
-
Metacard Validation Issues.
-
Attribute Validation Issues.
-
11.5.7. Exporting a Result
Export a result’s metadata and/or resource.
-
Select the desired result from the Search tab.
-
Select the Inspector visual.
-
Select Actions tab.
-
Select the desired export format.
-
Export opens in a new browser tab. Save, if desired.
11.5.8. Archiving a Result
To remove a result from the active search results, archive it.
-
Select the desired result from the Search tab.
-
Select the Inspector visual.
-
Select the Archive tab.
-
Select Archive item(s).
-
Select Archive.
11.5.9. Restoring Archived Results
Restore an archived result to return it to the active search results.
-
Select the Search Archived option from the Search Results Options menu.
-
Select the desired result from the Search tab.
-
Select the Inspector visual.
-
Select the Archive tab.
-
Select Restore item(s).
-
Select Restore.
Restore hidden results to the active search results.
11.5.10. Overwriting a Resource
Replace a resource.
-
Select the desired result from the Search tab.
-
Select the Inspector visual.
-
Select the Overwrite tab.
-
Select Overwrite content.
-
Select Overwrite.
-
Navigate to the new content via the navigation window.
11.5.11. Intrigue Settings
-
Theme: Visual options for page layout.
-
Notifications: Select if notifications persist across sessions.
-
Map: Select options for map layers.
-
Query: Customize the number of search results returned.
-
Time: Set the time format (ISO-8601, 24 Hour or 12 Hour), as well as the timezone (UTC-12:00 through UTC+12:00).
-
Hidden: View or edit a list of results that have been hidden from the current search results.
11.5.13. Intrigue Low Bandwidth Mode
Low bandwidth mode can be enabled by passing in a ?lowBandwidth
parameter along with any URL targeting the Intrigue endpoint.
Ex: https://{FQDN}:{PORT}/search/catalog/?lowBandwidth#workspaces
. Currently, enabling this parameter causes the system to prompt the user for confirmation before loading potentially bandwidth-intensive components like the 2D or 3D Maps.
12. Using the Simple Search
The DDF Simple Search UI application provides a low-bandwidth option for searching records in the local Catalog (provider) and federated sources. Results are returned in HTML format.
12.1. Search
The Input form allows the user to specify keyword, geospatial, temporal, and type query parameters. It also allows the user to select the sources to search and the number of results to return.
12.1.1. Search Criteria
Enter one or more of the available search criteria to execute a query:
- Keyword Search
-
A text box allowing the user to enter a textual query. This supports the use of (*) wildcards. If blank, the query will contain a contextual component.
- Temporal Query
-
Select from any, relative, or absolute. Selecting Any results in no temporal restrictions on the query, selecting relative allows the user to query a period from some length of time in the past until now, and selecting absolute allows the user to specify a start and stop date range.
- Spatial Search
-
Select from any, point-radius, and bounding box. Selecting Any results in no spatial restrictions on the query, selecting point-radius allows the user to specify a lat/lon and radius to search, and selecting a bounding box allows the user to specify an eastern, western, southern and northern boundary to search within.
- Type Search
-
Select from any, or a specific type. Selecting Any results in no type restrictions on the query, and selecting Specific Types shows a list of known content types on the federation and allows the user to select a specific type to search for.
- Sources
-
Select from none, all sources, or specific sources. Selecting None results in querying only the local provider, selecting All Sources results in an enterprise search where all federated sources are queried, and selecting Specific Sources allows the user to select which sources are queried.
- Results per Page
-
Select the number of results to be returned by a single query.
12.1.2. Results
The table of results shows the details of the results found, as well as a link to download the product if applicable.
12.1.2.1. Results Summary
- Total Results
-
Total number of results available for this query. If there are more results than the number displayed per page, then page navigation links will appear to the right.
- Pages
-
Provides page navigation, which generates queries for requesting additional pages of results.
12.1.2.2. Results Table
The Results table provides a preview of and links to the results. The table consists of these columns:
- Title
-
Displays the title of the metacard. This is a link that can be clicked to view the metacard in the Metacard View.
- Source
-
Displays where the metadata came from, which could be the local provider or a federated source.
- Location
-
Displays the WKT Location of the metacard, if available.
- Time
-
Shows the Received (Created) and Effective times of the metacard, if available.
- Thumbnail
-
Shows the thumbnail of the metacard, if available.
- Download
-
A download link to retrieve the product associated with the metacard, when applicable, if available.
12.1.3. Result View
This view shows a more detailed look at a result.
- Back to Results Button
-
Returns the view back to the Results Table.
- Previous & Next
-
Navigation to page through the results one by one.
- Result Table
-
Provides the list of properties and associated values of a single search result.
- Metadata
-
The metadata, when expanded, displays a tree structure representing the result’s custom metadata.
Integrating
Warning
|
If integrating with a Highly Available Cluster of DDF, see High Availability Guidance. |
DDF is structured to enable flexible integration with external clients and into larger component systems.
If integrating with an existing installation of DDF, continue to the following sections on endpoints and data/metadata management.
If a new installation of DDF is required, first see the Managing section for installation and configuration instructions, then return to this section for guidance on connecting external clients.
If you would like to set up a test or demo installation to use while developing an external client, see the Quick Start Tutorial for demo instructions.
For troubleshooting and known issues, see the Release Notes .
13. Endpoints
Federation with DDF is primarily accomplished through endpoints accessible through HTTP(S) requests and responses.
Note
|
Not all installations will expose all available endpoints. Check with the DDF administrator to confirm the availability of these endpoints. |
13.1. Ingest Endpoints
Ingest is the process of getting data and/or metadata into the DDF catalog framework.
These endpoints are provided by DDF to be used by integrators to ingest content or metadata.
- Catalog REST Endpoint
-
Uses REST to interact with the catalog.
- CSW Endpoint
-
Searches collections of descriptive information (metadata) about geospatial data and services.
- FTP Endpoint
-
Ingests files directly into the DDF catalog using the FTP protocol.
13.2. CRUD Endpoints
To perform CRUD (Create, Read, Update, Delete) operations on data or metadata in the catalog, work with one of these endpoints.
- Catalog REST Endpoint
-
Uses REST to interact with the catalog.
- CSW Endpoint
-
Searches collections of descriptive information (metadata) about geospatial data and services.
- Queries Endpoint
-
Performs CRUD (Create, Read, Update, Delete) operations on query metacards in the catalog.
13.3. Query Endpoints
Query data or metadata stored within an instance of DDF using one of these endpoints.
- CSW Endpoint
-
Searches collections of descriptive information (metadata) about geospatial data and services.
- OpenSearch Endpoint
-
Sends query parameters and receives search results.
13.4. Content Retrieval Endpoints
To retrieve content from an instance of DDF, use one of these endpoints.
- Catalog REST Endpoint
-
Uses REST to interact with the catalog.
13.5. Pub-Sub Endpoints
These endpoints provide publication and subscription services to allow notifications when certain events happen within DDF.
- CSW Endpoint
-
Searches collections of descriptive information (metadata) about geospatial data and services.
13.6. Endpoint Details
13.6.1. Catalog REST Endpoint
The Catalog REST Endpoint allows clients to perform operations on the Catalog using REST, a simple architectural style that performs communication using HTTP.
Bulk operations are not supported: for all RESTful CRUD commands, only one metacard ID is supported in the URL.
The Catalog REST Endpoint can be used for one or more of these operations on an instance of DDF:
This example metacard can be used to test the integration with DDF.
<?xml version="1.0" encoding="UTF-8"?>
<metacard xmlns="urn:catalog:metacard" xmlns:gml="http://www.opengis.net/gml" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:smillang="http://www.w3.org/2001/SMIL20/Language" gml:id="3a59483ba44e403a9f0044580343007e">
<type>ddf.metacard</type>
<string name="title">
<value>Test REST Metacard</value>
</string>
<string name="description">
<value>Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.</value>
</string>
</metacard>
13.6.1.1. Catalog REST Create Operation Examples
The REST endpoint can be used to upload resources as attachments.
Send a POST
request with the input to be ingested contained in the HTTP request body to the endpoint.
https://<FQDN>:<PORT>/services/catalog/
POST /services/catalog?transform=xml HTTP/1.1
Host: <FQDN>:<PORT>
Content-Type: multipart/form-data; boundary=----WebKitFormBoundary7MA4YWxkTrZu0gW
Cache-Control: no-cache
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="parse.resource"; filename=""
Content-Type:
------WebKitFormBoundary7MA4YWxkTrZu0gW
Content-Disposition: form-data; name="parse.metadata"; filename=""
Content-Type:
------WebKitFormBoundary7MA4YWxkTrZu0gW--
The create
and update
methods both support the multipart mime format.
If only a single attachment exists, it will be interpreted as a resource to be parsed, which will result in a metacard and resource being stored in the system.
If multiple attachments exist, then the REST endpoint will assume that one attachment is the actual resource (attachment should be named parse.resource
) and the other attachments are overrides of metacard attributes (attachment names should follow metacard attribute names).
In the case of the metadata attribute, it is possible to also have the system transform that metadata and use the results of that to override the metacard that would be generated from the resource (attachment should be named parse.metadata
).
If the ingest is successful, a status of 201 Created
will be returned, along with the Metacard ID in the header of the response.
Note
|
Request with Non-XML Data
If a request with non-XML data is sent to the Catalog REST endpoint,
the metacard will be created but the resource will be stored in the |
If content or metadata is not ingested successfully, check for these error messages.
Status Code | Error Message | Possible Causes |
---|---|---|
|
|
Malformed XML Response: If the XML being ingested has formatting errors. |
Request with Unknown Schema: If ingest is attempted with a schema that is unknown, unsupported, or not configured by the endpoint, DDF creates a generic resource metacard with the provided XML as content for the |
13.6.1.2. Catalog REST Read Operation Examples
The read
operation can be used to retrieve metadata in different formats.
-
Send a
GET
request to the endpoint. -
Optionally add a
transform
query parameter to the end of the URL with the transformer to be used (such astransform=kml
). By default, the response body will include the XML representation of the Metacard.
https://<FQDN>:<PORT>/services/catalog/<metacardId>
If successful, a status of 200 OK
will be returned, along with the content of the metacard requested.
<metacard xmlns="urn:catalog:metacard" xmlns:gml="http://www.opengis.net/gml" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:smillang="http://www.w3.org/2001/SMIL20/Language" gml:id="<METACARD_ID>">
<type>ddf.metacard</type>
<source>ddf.distribution</source>
<string name="title">
<value>Test REST Metacard</value>
</string>
<string name="point-of-contact">
<value>email@example.com</value>
</string>
<dateTime name="metacard.created">
<value>2019-05-29</value>
</dateTime>
<dateTime name="effective">
<value>2019-05-29</value>
</dateTime>
<dateTime name="modified">
<value>2019-05-29</value>
</dateTime>
<dateTime name="created">
<value>2019-05-29</value>
</dateTime>
<string name="description">
<value>Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.</value>
</string>
<string name="metacard-tags">
<value>resource</value>
<value>VALID</value>
</string>
<dateTime name="metacard.modified">
<value>2019-05-29</value>
</dateTime>
</metacard>
-
To receive metadata in an alternate format, add a transformer to the request URL.
https://<FQDN>:<PORT>/services/catalog/<metacardId>?transform=<TRANSFORMER_ID>
Example response using the GeoJSON transformer (transform=geojson):
{
"geometry": null,
"type": "Feature",
"properties": {
"effective": "2019-05-29",
"point-of-contact": "email@example.com",
"created": "2019-05-29",
"metacard.modified": "2019-05-29",
"metacard-tags": [
"resource",
"VALID"
],
"modified": "2019-05-29",
"description": "Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.",
"id": "3a59483ba44e403a9f0044580343007e",
"metacard-type": "ddf.metacard",
"title": "Test REST Metacard",
"source-id": "ddf.distribution",
"metacard.created": "2019-05-29"
}
}
To retrieve a metacard from a specific federated source, add sources/<SOURCE_ID>
to the URL.
https://<FQDN>:<PORT>/services/catalog/sources/<sourceId>/<metacardId>?transform=<TRANSFORMER_ID>
To retrieve the resource associated with a metacard, use the resource
transformer with the GET
request.
https://<FQDN>:<PORT>/services/catalog/<metacardId>?transform=resource
See Metacard Transformers for details on metacard transformers.
If the metacard or resource is not returned successfully, check for these errors.
Status Code | Error Message | Possible Causes |
---|---|---|
|
|
Invalid Metacard ID |
|
|
Transformer is invalid, unsupported, or not configured. |
|
Metacard does not have an associated resource (is metadata only). |
|
|
Invalid source ID, or source unavailable. |
13.6.1.3. Catalog Rest Update Operation Examples
To update the metadata for a metacard, send a PUT
request with the ID of the metacard to be updated appended to the end of the URL
and the updated metadata contained in the HTTP body.
Optionally, specify the transformer to use when parsing an override of a metadata attribute.
https://<FQDN>:<PORT>/services/catalog/<metacardId>?transform=<input transformer>
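As a sketch, such an update could be issued with curl; the XML file name is illustrative, and the text/xml content type is an assumption appropriate for a metacard XML body like the create examples above:
curl -k -X PUT -H "Content-Type: text/xml" -d @updated_metacard.xml "https://<FQDN>:<PORT>/services/catalog/<metacardId>"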
Status Code | Error Message | Possible Causes |
---|---|---|
|
|
Invalid metacard ID. |
|
|
Invalid transformer ID. |
13.6.1.4. Catalog REST Delete Operation Examples
To delete a metacard, send a DELETE
request with the metacard ID to be deleted appended
to the end of the URL.
Delete Request URL
https://<FQDN>:<PORT>/services/catalog/<metacardId>
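As a sketch, the delete could be issued with curl (using -k to skip certificate verification, as in the earlier ingest examples):
curl -k -X DELETE "https://<FQDN>:<PORT>/services/catalog/<metacardId>"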
Status Code | Error Message | Possible Causes |
---|---|---|
|
|
Invalid metacard ID. |
13.6.1.5. Catalog REST Sources Operation Examples
To retrieve information about federated sources, including sourceId, availability, contentTypes, and version, send a GET request to the endpoint.
https://<FQDN>:<PORT>/services/catalog/sources/
[
{
"id" : "DDF-OS",
"available" : true,
"contentTypes" :
[
],
"version" : "2.15.0"
},
{
"id" : "ddf.distribution",
"available" : true,
"contentTypes" :
[
],
"version" : "2.15.0"
}
]
Status Code | Error Message | Possible Causes |
---|---|---|
403 | <p>Problem accessing /ErrorServlet. Reason: <pre> Forbidden</pre></p> | Connection error or service unavailable. |
13.6.2. CSW Endpoint
The CSW endpoint enables a client to search collections of descriptive information (metadata) about geospatial data and services.
The CSW endpoint supports metadata operations only.
For more information, see the Catalogue Services for Web (CSW) standard.
The CSW Endpoint can be used for create, query, update, publication/subscription, delete, and get capabilities operations on an instance of DDF.
Note: Sample Responses May Not Match Actual Responses. Actual responses may vary from these samples, depending on your configuration. Send a GET or POST request to obtain an accurate response.
13.6.2.1. CSW Endpoint Create Examples
Metacards are ingested into the catalog via the Insert
sub-operation.
The schema of the record needs to conform to a schema of the information model that the catalog supports.
Send a POST
request to the CSW endpoint URL.
https://<FQDN>:<PORT>/services/csw
Include the metadata to ingest within a csw:Insert
block in the body of the request.
Insert Request
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
verboseResponse="true"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
<csw:Insert typeName="csw:Record">
<csw:Record
xmlns:ows="http://www.opengis.net/ows"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<dc:identifier></dc:identifier>
<dc:title>Aliquam fermentum purus quis arcu</dc:title>
<dc:type>http://purl.org/dc/dcmitype/Text</dc:type>
<dc:subject>Hydrography--Dictionaries</dc:subject>
<dc:format>application/pdf</dc:format>
<dc:date>2019-05-29</dc:date>
<dct:abstract>Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.</dct:abstract>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>44.792 -6.171</ows:LowerCorner>
<ows:UpperCorner>51.126 -2.228</ows:UpperCorner>
</ows:BoundingBox>
</csw:Record>
</csw:Insert>
</csw:Transaction>
To specify the document type being ingested and select the appropriate input transformer, use the typeName attribute in the csw:Insert element:
<csw:Insert typeName="xml">
To receive a copy of the metacard in the response, specify verboseResponse="true"
in the csw:Transaction
.
The InsertResult
element of the response will hold the metacard information added to the catalog.
<csw:Transaction service="CSW" version="2.0.2" verboseResponse="true" [...]
<csw:Transaction service="CSW" version="2.0.2" verboseResponse="true" xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
<csw:Insert typeName="xml">
<metacard xmlns="urn:catalog:metacard" xmlns:ns2="http://www.opengis.net/gml"
xmlns:ns3="http://www.w3.org/1999/xlink" xmlns:ns4="http://www.w3.org/2001/SMIL20/"
xmlns:ns5="http://www.w3.org/2001/SMIL20/Language">
<type>ddf.metacard</type>
<string name="title">
<value>PlainXml near</value>
</string>
</metacard>
</csw:Insert>
</csw:Transaction>
A successful ingest will return a status of 200 OK and a csw:TransactionResponse.
Insert Response
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:TransactionResponse xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ns3="http://www.w3.org/1999/xlink"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ns5="http://www.w3.org/2001/SMIL20/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ows="http://www.opengis.net/ows"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance"
version="2.0.2"
ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd">
<csw:TransactionSummary>
<csw:totalInserted>1</csw:totalInserted>
<csw:totalUpdated>0</csw:totalUpdated>
<csw:totalDeleted>0</csw:totalDeleted>
</csw:TransactionSummary>
<csw:InsertResult>
<csw:BriefRecord>
<dc:identifier><METACARD_ID></dc:identifier>
<dc:title>Aliquam fermentum purus quis arcu</dc:title>
<dc:type>http://purl.org/dc/dcmitype/Text</dc:type>
<ows:BoundingBox crs="EPSG:4326">
<ows:LowerCorner>-6.171 44.792</ows:LowerCorner>
<ows:UpperCorner>-2.228 51.126</ows:UpperCorner>
</ows:BoundingBox>
</csw:BriefRecord>
</csw:InsertResult>
</csw:TransactionResponse>
Status Code | Error Message | Possible Causes |
---|---|---|
400 Bad Request | | XML error (check for formatting errors in the record) or schema error (verify metadata is compliant with the defined schema). |
13.6.2.2. CSW Endpoint Query Examples
To query through the CSW Endpoint, send a POST request to the CSW endpoint.
https://<FQDN>:<PORT>/services/csw
Within the body of the request, include a GetRecords
operation to define the query.
Define the service and version to use (CSW, 2.0.2).
The output format must be application/xml
.
Specify the output schema.
(To get a list of supported schemas, send a Get Capabilities request to the CSW endpoint.)
<GetRecords xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
service="CSW"
version="2.0.2"
maxRecords="4"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="http://www.opengis.net/cat/csw/2.0.2"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
Include the query within the GetRecords
request.
Optionally, set the ElementSetName to determine how much detail to return:
-
Brief: the least possible detail.
-
Summary: the default level of detail.
-
Full: all metadata elements for the record(s).
Within the Constraint element, define the query as an OGC or CQL filter.
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>AnyText</ogc:PropertyName>
<ogc:Literal>%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</Constraint>
</Query>
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="2.0.0">
<ogc:CqlText>
"AnyText" = '%'
</ogc:CqlText>
</Constraint>
</Query>
GetRecords XML Request Example
<?xml version="1.0" ?>
<GetRecords xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
service="CSW"
version="2.0.2"
maxRecords="4"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="http://www.opengis.net/cat/csw/2.0.2"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>AnyText</ogc:PropertyName>
<ogc:Literal>%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</Constraint>
</Query>
</GetRecords>
GetRecords Sample Response (application/xml)
<?xml version='1.0' encoding='UTF-8'?>
<csw:GetRecordsResponse xmlns:dct="http://purl.org/dc/terms/"
xmlns:xml="http://www.w3.org/XML/1998/namespace"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ows="http://www.opengis.net/ows"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0.2">
<csw:SearchStatus timestamp="2019-05-29"/>
<csw:SearchResults numberOfRecordsMatched="1" numberOfRecordsReturned="1" nextRecord="0" recordSchema="http://www.opengis.net/cat/csw/2.0.2" elementSet="summary">
<csw:Record xmlns:ows="http://www.opengis.net/ows"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<dc:identifier/>
<dc:title>Aliquam fermentum purus quis arcu</dc:title>
<dc:type>http://purl.org/dc/dcmitype/Text</dc:type>
<dc:subject>Hydrography--Dictionaries</dc:subject>
<dc:format>application/pdf</dc:format>
<dc:date>2019-05-29</dc:date>
<dct:abstract>Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.</dct:abstract>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>44.792 -6.171</ows:LowerCorner>
<ows:UpperCorner>51.126 -2.228</ows:UpperCorner>
</ows:BoundingBox>
</csw:Record>
</csw:SearchResults>
</csw:GetRecordsResponse>
To query a specific source, specify a query for a source-id. To find a valid source-id, send a Get Capabilities request. Configured sources will be listed in the FederatedCatalogs section of the response.
<?xml version="1.0" ?>
<csw:GetRecords resultType="results"
outputFormat="application/xml"
outputSchema="urn:catalog:metacard"
startPosition="1"
maxRecords="10"
service="CSW"
version="2.0.2"
xmlns:ns2="http://www.opengis.net/ogc" xmlns:csw="http://www.opengis.net/cat/csw/2.0.2" xmlns:ns4="http://www.w3.org/1999/xlink" xmlns:ns3="http://www.opengis.net/gml" xmlns:ns9="http://www.w3.org/2001/SMIL20/Language" xmlns:ns5="http://www.opengis.net/ows" xmlns:ns6="http://purl.org/dc/elements/1.1/" xmlns:ns7="http://purl.org/dc/terms/" xmlns:ns8="http://www.w3.org/2001/SMIL20/">
<csw:DistributedSearch hopCount="2" />
<ns10:Query typeNames="csw:Record" xmlns="" xmlns:ns10="http://www.opengis.net/cat/csw/2.0.2">
<ns10:ElementSetName>full</ns10:ElementSetName>
<ns10:Constraint version="1.1.0">
<ns2:Filter>
<ns2:And>
<ns2:PropertyIsEqualTo wildCard="*" singleChar="#" escapeChar="!">
<ns2:PropertyName>source-id</ns2:PropertyName>
<ns2:Literal>Source1</ns2:Literal>
</ns2:PropertyIsEqualTo>
<ns2:PropertyIsLike wildCard="*" singleChar="#" escapeChar="!">
<ns2:PropertyName>title</ns2:PropertyName>
<ns2:Literal>*</ns2:Literal>
</ns2:PropertyIsLike>
</ns2:And>
</ns2:Filter>
</ns10:Constraint>
</ns10:Query>
</csw:GetRecords>
To receive a response to a GetRecords query that conforms to the GMD specification, set the namespace (xmlns), outputSchema, and typeName elements for the GMD schema.
<?xml version="1.0" ?>
<GetRecords xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:gmd="http://www.isotc211.org/2005/gmd"
xmlns:gml="http://www.opengis.net/gml"
service="CSW"
version="2.0.2"
maxRecords="8"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="http://www.isotc211.org/2005/gmd"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
<Query typeNames="gmd:MD_Metadata">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>apiso:Title</ogc:PropertyName>
<ogc:Literal>%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</Constraint>
</Query>
</GetRecords>
UTM coordinates can be used when making a CSW GetRecords request using an ogc:Filter.
UTM coordinates should use EPSG:326XX as the srsName, where XX is the zone within the northern hemisphere, and EPSG:327XX as the srsName, where XX is the zone within the southern hemisphere.
Note: UTM coordinates are only supported with requests providing an ogc:Filter.
<GetRecords xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:gml="http://www.opengis.net/gml"
service="CSW"
version="2.0.2"
maxRecords="4"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="http://www.opengis.net/cat/csw/2.0.2"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:Intersects>
<ogc:PropertyName>ows:BoundingBox</ogc:PropertyName>
<gml:Envelope srsName="EPSG:32636">
<gml:lowerCorner>171070 1106907</gml:lowerCorner>
<gml:upperCorner>225928 1106910</gml:upperCorner>
</gml:Envelope>
</ogc:Intersects>
</ogc:Filter>
</Constraint>
</Query>
</GetRecords>
To locate a record by Metacard ID, send a POST request with a GetRecordById element specifying the ID.
GetRecordById Request Example
<GetRecordById xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
service="CSW"
version="2.0.2"
outputFormat="application/xml"
outputSchema="http://www.opengis.net/cat/csw/2.0.2"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2
../../../csw/2.0.2/CSW-discovery.xsd">
<ElementSetName>full</ElementSetName>
<Id><METACARD-ID></Id>
</GetRecordById>
The fields of a CSW record map to metacard fields, and each CSW record field carries a cardinality (such as 0-1, 0-n, or 1-n) that depends on whether a Brief Record, Summary Record, or full Record element set is returned.
Status Code | Error Message | Possible Causes |
---|---|---|
 | | A query to a specific source has specified a source that is unavailable. |
200 OK | | No results found for query. Verify input. |
13.6.2.3. CSW Endpoint Update Examples
The CSW Endpoint can edit the metadata attributes of a metacard.
Send a POST request to the CSW Endpoint URL:
https://<FQDN>:<PORT>/services/csw
Replace the <METACARD-ID> value with the ID of the metacard being updated, and edit any properties within the csw:Record.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2">
<csw:Update>
<csw:Record
xmlns:ows="http://www.opengis.net/ows"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:xsd="http://www.w3.org/2001/XMLSchema">
<dc:identifier><METACARD-ID></dc:identifier>
<dc:title>Aliquam fermentum purus quis arcu</dc:title>
<dc:type>http://purl.org/dc/dcmitype/Text</dc:type>
<dc:subject>Hydrography--Dictionaries</dc:subject>
<dc:format>application/pdf</dc:format>
<dc:date>2019-05-29</dc:date>
<dct:abstract>Vestibulum quis ipsum sit amet metus imperdiet vehicula. Nulla scelerisque cursus mi.</dct:abstract>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>44.792 -6.171</ows:LowerCorner>
<ows:UpperCorner>51.126 -2.228</ows:UpperCorner>
</ows:BoundingBox>
</csw:Record>
</csw:Update>
</csw:Transaction>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:TransactionResponse xmlns:ows="http://www.opengis.net/ows"
xmlns:ns2="http://www.w3.org/1999/xlink"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ns6="http://www.w3.org/2001/SMIL20/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance" version="2.0.2"
ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd">
<csw:TransactionSummary>
<csw:totalInserted>0</csw:totalInserted>
<csw:totalUpdated>1</csw:totalUpdated>
<csw:totalDeleted>0</csw:totalDeleted>
</csw:TransactionSummary>
</csw:TransactionResponse>
Within the csw:Transaction element, use the csw:RecordProperty to update individual metacard attributes. Use the Name element to specify the name of the record property to be updated and set the Value element to the value to update in the record. The values in the Update will completely replace those that are already in the record.
<csw:RecordProperty>
<csw:Name>title</csw:Name>
<csw:Value>Updated Title</csw:Value>
</csw:RecordProperty>
To remove a non-required attribute, send the csw:Name
without a csw:Value
.
<csw:RecordProperty>
<csw:Name>title</csw:Name>
</csw:RecordProperty>
Required attributes are set to a default value if no Value
element is provided.
Property | Default Value |
---|---|
 | Resource |
 | current time |
 | current time |
 | current time |
 | myVersion |
 | current time |
 | current time |
 | resource, VALID |
 | system@localhost |
 | current time |
Use a csw:Constraint
to specify the metacard ID.
The constraint can be an OGC Filter or a CQL query.
<csw:Constraint version="2.0.0">
<ogc:Filter>
<ogc:PropertyIsEqualTo>
<ogc:PropertyName>id</ogc:PropertyName>
<ogc:Literal><METACARD-ID></ogc:Literal>
</ogc:PropertyIsEqualTo>
</ogc:Filter>
</csw:Constraint>
<csw:Constraint version="2.0.0">
<ogc:CqlText>
"id" = '<METACARD-ID>'
</ogc:CqlText>
</csw:Constraint>
Warning: These filters can search on any arbitrary query criteria, so take care that the update affects only the desired records.
Update Request with OGC filter constraint
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc">
<csw:Update>
<csw:RecordProperty>
<csw:Name>title</csw:Name>
<csw:Value>Updated Title</csw:Value>
</csw:RecordProperty>
<csw:Constraint version="2.0.0">
<ogc:Filter>
<ogc:PropertyIsEqualTo>
<ogc:PropertyName>id</ogc:PropertyName>
<ogc:Literal><METACARD-ID></ogc:Literal>
</ogc:PropertyIsEqualTo>
</ogc:Filter>
</csw:Constraint>
</csw:Update>
</csw:Transaction>
Update Request with CQL filter constraint
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction
service="CSW"
version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc">
<csw:Update>
<csw:RecordProperty>
<csw:Name>title</csw:Name>
<csw:Value>Updated Title</csw:Value>
</csw:RecordProperty>
<csw:Constraint version="2.0.0">
<ogc:CqlText>
"id" = '<METACARD-ID>'
</ogc:CqlText>
</csw:Constraint>
</csw:Update>
</csw:Transaction>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:TransactionResponse xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ns3="http://www.w3.org/1999/xlink"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ns5="http://www.w3.org/2001/SMIL20/"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:ows="http://www.opengis.net/ows"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance"
ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd"
version="2.0.2">
<csw:TransactionSummary>
<csw:totalInserted>0</csw:totalInserted>
<csw:totalUpdated>1</csw:totalUpdated>
<csw:totalDeleted>0</csw:totalDeleted>
</csw:TransactionSummary>
</csw:TransactionResponse>
Status Code | Error Message | Possible Causes |
---|---|---|
400 Bad Request | | XML or CSW schema error. Verify input. |
200 OK | | No records were updated. Verify metacard ID or search parameters. |
13.6.2.4. CSW Endpoint Publication/Subscription Examples
The subscription GetRecords operation is very similar to the GetRecords operation used to search the catalog, but it subscribes to a search and sends events to a ResponseHandler endpoint as matching metacards are ingested.
The ResponseHandler must use the https protocol and accept HEAD requests (used to poll for availability) as well as POST/PUT/DELETE requests for creation, updates, and deletions.
The response to a GetRecords request on the subscription URL will be an acknowledgement containing the original GetRecords request and a requestId. The client will be assigned a requestId (URN).
A subscription listens for events from federated sources if the DistributedSearch element is present and the catalog is a member of a federation.
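A minimal sketch of such an event receiver, using the JDK's built-in HTTP server; the context path matches the sample ResponseHandler used below, and the TLS configuration required for https (plus any authentication) is omitted for brevity:
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;

public class SubscriptionEventReceiver {
    public static void main(String[] args) throws Exception {
        // Sketch only: a production ResponseHandler must be served over HTTPS
        // (com.sun.net.httpserver.HttpsServer plus an SSLContext).
        HttpServer server = HttpServer.create(new InetSocketAddress(8443), 0);
        server.createContext("/services/csw/subscription/event", exchange -> {
            switch (exchange.getRequestMethod()) {
                case "HEAD":
                    // Availability poll: acknowledge with 200 and no body.
                    exchange.sendResponseHeaders(200, -1);
                    break;
                case "POST":
                case "PUT":
                case "DELETE":
                    // Creation, update, or deletion event: the body is a csw:GetRecordsResponse.
                    byte[] event = exchange.getRequestBody().readAllBytes();
                    System.out.println(new String(event));
                    exchange.sendResponseHeaders(200, -1);
                    break;
                default:
                    exchange.sendResponseHeaders(405, -1);
            }
            exchange.close();
        });
        server.start();
    }
}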
Send a POST
request to the CSW endpoint.
https://<FQDN>:<PORT>/services/csw/subscription
GetRecords XML Request
<?xml version="1.0" ?>
<GetRecords xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
service="CSW"
version="2.0.2"
maxRecords="4"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="http://www.opengis.net/cat/csw/2.0.2"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
<ResponseHandler>https://some.ddf/services/csw/subscription/event</ResponseHandler>
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>xml</ogc:PropertyName>
<ogc:Literal>%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</Constraint>
</Query>
</GetRecords>
To update an existing subscription, send a PUT request with the requestId URN appended to the URL.
CSW Endpoint Subscription Update URL
https://<FQDN>:<PORT>/services/csw/subscription/urn:uuid:4d5a5249-be03-4fe8-afea-6115021dd62f
GetRecords XML Response
<?xml version="1.0" ?>
<Acknowledgement timeStamp="2019-05-29T18:49:45" xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
<EchoedRequest>
<GetRecords
requestId="urn:uuid:4d5a5249-be03-4fe8-afea-6115021dd62f"
service="CSW"
version="2.0.2"
maxRecords="4"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="urn:catalog:metacard">
<ResponseHandler>https://some.ddf/services/csw/subscription/event</ResponseHandler>
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>xml</ogc:PropertyName>
<ogc:Literal>%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</Constraint>
</Query>
</GetRecords>
</EchoedRequest>
<RequestId>urn:uuid:4d5a5249-be03-4fe8-afea-6115021dd62f</RequestId>
</Acknowledgement>
GetRecords Event Sample Response
<csw:GetRecordsResponse version="2.0.2" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dct="http://purl.org/dc/terms/" xmlns:ows="http://www.opengis.net/ows" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:csw="http://www.opengis.net/cat/csw/2.0.2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<csw:SearchStatus timestamp="2014-02-19T15:33:44.602-05:00"/>
<csw:SearchResults numberOfRecordsMatched="1" numberOfRecordsReturned="1" nextRecord="5" recordSchema="http://www.opengis.net/cat/csw/2.0.2" elementSet="summary">
<csw:SummaryRecord>
<dc:identifier>f45415884c11409497e22db8303fe8c6</dc:identifier>
<dc:title>Product10</dc:title>
<dc:type>pdf</dc:type>
<dct:modified>2014-02-19T15:22:51.563-05:00</dct:modified>
<ows:BoundingBox crs="urn:x-ogc:def:crs:EPSG:6.11:4326">
<ows:LowerCorner>20.0 10.0</ows:LowerCorner>
<ows:UpperCorner>20.0 10.0</ows:UpperCorner>
</ows:BoundingBox>
</csw:SummaryRecord>
</csw:SearchResults>
</csw:GetRecordsResponse>
To retrieve an active subscription, send a GET request with the requestId URN appended to the URL.
https://<FQDN>:<PORT>/services/csw/subscription/urn:uuid:4d5a5249-be03-4fe8-afea-6115021dd62f
HTTP GET Sample Response
<?xml version="1.0" ?>
<Acknowledgement timeStamp="2019-05-29T18:49:45" xmlns="http://www.opengis.net/cat/csw/2.0.2"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.opengis.net/cat/csw/2.0.2 ../../../csw/2.0.2/CSW-discovery.xsd">
<EchoedRequest>
<GetRecords
requestId="urn:uuid:4d5a5249-be03-4fe8-afea-6115021dd62f"
service="CSW"
version="2.0.2"
maxRecords="4"
startPosition="1"
resultType="results"
outputFormat="application/xml"
outputSchema="urn:catalog:metacard">
<ResponseHandler>https://some.ddf/services/csw/subscription/event</ResponseHandler>
<Query typeNames="Record">
<ElementSetName>summary</ElementSetName>
<Constraint version="1.1.0">
<ogc:Filter>
<ogc:PropertyIsLike wildCard="%" singleChar="_" escapeChar="\">
<ogc:PropertyName>xml</ogc:PropertyName>
<ogc:Literal>%</ogc:Literal>
</ogc:PropertyIsLike>
</ogc:Filter>
</Constraint>
</Query>
</GetRecords>
</EchoedRequest>
<RequestId>urn:uuid:4d5a5249-be03-4fe8-afea-6115021dd62f</RequestId>
</Acknowledgement>
To delete a subscription, send a DELETE request with the requestId URN appended to the URL.
https://<FQDN>:<PORT>/services/csw/subscription/urn:uuid:4d5a5249-be03-4fe8-afea-6115021dd62f
13.6.2.5. CSW Endpoint Delete Examples
To delete metacards via the CSW Endpoint, send a POST
request with a csw:Delete
to the CSW Endpoint URL.
https://<FQDN>:<PORT>/services/csw
Define the records to delete with the csw:Constraint
field.
The constraint can be either an OGC or CQL filter.
Delete Request with OGC filter constraint
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction service="CSW" version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ogc="http://www.opengis.net/ogc">
<csw:Delete typeName="csw:Record" handle="something">
<csw:Constraint version="2.0.0">
<ogc:Filter>
<ogc:PropertyIsEqualTo>
<ogc:PropertyName>id</ogc:PropertyName>
<ogc:Literal><METACARD-ID></ogc:Literal>
</ogc:PropertyIsEqualTo>
</ogc:Filter>
</csw:Constraint>
</csw:Delete>
</csw:Transaction>
Delete Request with CQL filter constraint
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Transaction service="CSW" version="2.0.2"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ogc="http://www.opengis.net/ogc">
<csw:Delete typeName="csw:Record" handle="something">
<csw:Constraint version="2.0.0">
<ogc:CqlText>
"id" = '<METACARD-ID>'
</ogc:CqlText>
</csw:Constraint>
</csw:Delete>
</csw:Transaction>
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:TransactionResponse xmlns:ows="http://www.opengis.net/ows"
xmlns:ns2="http://www.w3.org/1999/xlink"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:dct="http://purl.org/dc/terms/"
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
xmlns:gml="http://www.opengis.net/gml"
xmlns:ns8="http://www.w3.org/2001/SMIL20/"
xmlns:ns9="http://www.w3.org/2001/SMIL20/Language"
xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance"
version="2.0.2" ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd">
<csw:TransactionSummary>
<csw:totalInserted>0</csw:totalInserted>
<csw:totalUpdated>0</csw:totalUpdated>
<csw:totalDeleted>1</csw:totalDeleted>
</csw:TransactionSummary>
</csw:TransactionResponse>
Status Code | Error Message | Possible Causes |
---|---|---|
 | | No records matched filter criteria. Verify metacard ID. |
400 Bad Request | | XML or CSW formatting error. Verify request. |
13.6.2.6. CSW Endpoint Get Capabilities Examples
The GetCapabilities
operation describes the operations the catalog supports and the URLs used to access those operations.
The CSW endpoint supports both HTTP GET
and HTTP POST
requests for the GetCapabilities
operation.
The response to either request will always be a csw:Capabilities
XML document.
This XML document is defined by the CSW-Discovery XML Schema .
GetCapabilities URL for GET request
https://<FQDN>:<PORT>/services/csw?service=CSW&version=2.0.2&request=GetCapabilities
Alternatively, send a POST
request to the root CSW endpoint URL.
GetCapabilities URL for POST request
https://<FQDN>:<PORT>/services/csw
Include an XML message body with a GetCapabilities
element.
GetCapabilities Sample Request
<?xml version="1.0" ?>
<csw:GetCapabilities
xmlns:csw="http://www.opengis.net/cat/csw/2.0.2"
service="CSW"
version="2.0.2" >
</csw:GetCapabilities>
GetCapabilities Sample Response (application/xml)
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<csw:Capabilities xmlns:ows="http://www.opengis.net/ows" xmlns:ns2="http://www.w3.org/1999/xlink" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" xmlns:csw="http://www.opengis.net/cat/csw/2.0.2" xmlns:ns6="http://www.w3.org/2001/SMIL20/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dct="http://purl.org/dc/terms/" xmlns:ns9="http://www.w3.org/2001/SMIL20/Language" xmlns:ns10="http://www.w3.org/2001/XMLSchema-instance" version="2.0.2" ns10:schemaLocation="http://www.opengis.net/csw /ogc/csw/2.0.2/CSW-publication.xsd">
<ows:ServiceIdentification>
<ows:Title>Catalog Service for the Web</ows:Title>
<ows:Abstract>DDF CSW Endpoint</ows:Abstract>
<ows:ServiceType>CSW</ows:ServiceType>
<ows:ServiceTypeVersion>2.0.2</ows:ServiceTypeVersion>
</ows:ServiceIdentification>
<ows:ServiceProvider>
<ows:ProviderName>DDF</ows:ProviderName>
<ows:ProviderSite/>
<ows:ServiceContact/>
</ows:ServiceProvider>
<ows:OperationsMetadata>
<ows:Operation name="GetCapabilities">
<ows:DCP>
<ows:HTTP>
<ows:Get ns2:href="https://<FQDN>:<PORT>/services/csw"/>
<ows:Post ns2:href="https://<FQDN>:<PORT>/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="sections">
<ows:Value>ServiceIdentification</ows:Value>
<ows:Value>ServiceProvider</ows:Value>
<ows:Value>OperationsMetadata</ows:Value>
<ows:Value>Filter_Capabilities</ows:Value>
</ows:Parameter>
</ows:Operation>
<ows:Operation name="DescribeRecord">
<ows:DCP>
<ows:HTTP>
<ows:Get ns2:href="https://<FQDN>:<PORT>/services/csw"/>
<ows:Post ns2:href="https://<FQDN>:<PORT>/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="typeName">
<ows:Value>csw:Record</ows:Value>
<ows:Value>gmd:MD_Metadata</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputFormat">
<ows:Value>application/xml</ows:Value>
<ows:Value>application/json</ows:Value>
<ows:Value>application/atom+xml</ows:Value>
<ows:Value>text/xml</ows:Value>
</ows:Parameter>
<ows:Parameter name="schemaLanguage">
<ows:Value>http://www.w3.org/XMLSchema</ows:Value>
<ows:Value>http://www.w3.org/XML/Schema</ows:Value>
<ows:Value>http://www.w3.org/2001/XMLSchema</ows:Value>
<ows:Value>http://www.w3.org/TR/xmlschema-1/</ows:Value>
</ows:Parameter>
</ows:Operation>
<ows:Operation name="GetRecords">
<ows:DCP>
<ows:HTTP>
<ows:Get ns2:href="https://<FQDN>:<PORT>/services/csw"/>
<ows:Post ns2:href="https://<FQDN>:<PORT>/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="ResultType">
<ows:Value>hits</ows:Value>
<ows:Value>results</ows:Value>
<ows:Value>validate</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputFormat">
<ows:Value>application/xml</ows:Value>
<ows:Value>application/json</ows:Value>
<ows:Value>application/atom+xml</ows:Value>
<ows:Value>text/xml</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputSchema">
<ows:Value>urn:catalog:metacard</ows:Value>
<ows:Value>http://www.isotc211.org/2005/gmd</ows:Value>
<ows:Value>http://www.opengis.net/cat/csw/2.0.2</ows:Value>
</ows:Parameter>
<ows:Parameter name="typeNames">
<ows:Value>csw:Record</ows:Value>
<ows:Value>gmd:MD_Metadata</ows:Value>
</ows:Parameter>
<ows:Parameter name="ConstraintLanguage">
<ows:Value>Filter</ows:Value>
<ows:Value>CQL_Text</ows:Value>
</ows:Parameter>
<ows:Constraint name="FederatedCatalogs">
<ows:Value>Source1</ows:Value>
<ows:Value>Source2</ows:Value>
</ows:Constraint>
</ows:Operation>
<ows:Operation name="GetRecordById">
<ows:DCP>
<ows:HTTP>
<ows:Get ns2:href="https://<FQDN>:<PORT>/services/csw"/>
<ows:Post ns2:href="https://<FQDN>:<PORT>/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="OutputSchema">
<ows:Value>urn:catalog:metacard</ows:Value>
<ows:Value>http://www.isotc211.org/2005/gmd</ows:Value>
<ows:Value>http://www.opengis.net/cat/csw/2.0.2</ows:Value>
<ows:Value>http://www.iana.org/assignments/media-types/application/octet-stream</ows:Value>
</ows:Parameter>
<ows:Parameter name="OutputFormat">
<ows:Value>application/xml</ows:Value>
<ows:Value>application/json</ows:Value>
<ows:Value>application/atom+xml</ows:Value>
<ows:Value>text/xml</ows:Value>
<ows:Value>application/octet-stream</ows:Value>
</ows:Parameter>
<ows:Parameter name="ResultType">
<ows:Value>hits</ows:Value>
<ows:Value>results</ows:Value>
<ows:Value>validate</ows:Value>
</ows:Parameter>
<ows:Parameter name="ElementSetName">
<ows:Value>brief</ows:Value>
<ows:Value>summary</ows:Value>
<ows:Value>full</ows:Value>
</ows:Parameter>
</ows:Operation>
<ows:Operation name="Transaction">
<ows:DCP>
<ows:HTTP>
<ows:Post ns2:href="https://<FQDN>:<PORT>/services/csw">
<ows:Constraint name="PostEncoding">
<ows:Value>XML</ows:Value>
</ows:Constraint>
</ows:Post>
</ows:HTTP>
</ows:DCP>
<ows:Parameter name="typeNames">
<ows:Value>xml</ows:Value>
<ows:Value>appxml</ows:Value>
<ows:Value>csw:Record</ows:Value>
<ows:Value>gmd:MD_Metadata</ows:Value>
<ows:Value>tika</ows:Value>
</ows:Parameter>
<ows:Parameter name="ConstraintLanguage">
<ows:Value>Filter</ows:Value>
<ows:Value>CQL_Text</ows:Value>
</ows:Parameter>
</ows:Operation>
<ows:Parameter name="service">
<ows:Value>CSW</ows:Value>
</ows:Parameter>
<ows:Parameter name="version">
<ows:Value>2.0.2</ows:Value>
</ows:Parameter>
</ows:OperationsMetadata>
<ogc:Filter_Capabilities>
<ogc:Spatial_Capabilities>
<ogc:GeometryOperands>
<ogc:GeometryOperand>gml:Point</ogc:GeometryOperand>
<ogc:GeometryOperand>gml:LineString</ogc:GeometryOperand>
<ogc:GeometryOperand>gml:Polygon</ogc:GeometryOperand>
</ogc:GeometryOperands>
<ogc:SpatialOperators>
<ogc:SpatialOperator name="BBOX"/>
<ogc:SpatialOperator name="Beyond"/>
<ogc:SpatialOperator name="Contains"/>
<ogc:SpatialOperator name="Crosses"/>
<ogc:SpatialOperator name="Disjoint"/>
<ogc:SpatialOperator name="DWithin"/>
<ogc:SpatialOperator name="Intersects"/>
<ogc:SpatialOperator name="Overlaps"/>
<ogc:SpatialOperator name="Touches"/>
<ogc:SpatialOperator name="Within"/>
</ogc:SpatialOperators>
</ogc:Spatial_Capabilities>
<ogc:Scalar_Capabilities>
<ogc:LogicalOperators/>
<ogc:ComparisonOperators>
<ogc:ComparisonOperator>Between</ogc:ComparisonOperator>
<ogc:ComparisonOperator>NullCheck</ogc:ComparisonOperator>
<ogc:ComparisonOperator>Like</ogc:ComparisonOperator>
<ogc:ComparisonOperator>EqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>GreaterThan</ogc:ComparisonOperator>
<ogc:ComparisonOperator>GreaterThanEqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>LessThan</ogc:ComparisonOperator>
<ogc:ComparisonOperator>LessThanEqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>EqualTo</ogc:ComparisonOperator>
<ogc:ComparisonOperator>NotEqualTo</ogc:ComparisonOperator>
</ogc:ComparisonOperators>
</ogc:Scalar_Capabilities>
<ogc:Id_Capabilities>
<ogc:EID/>
</ogc:Id_Capabilities>
</ogc:Filter_Capabilities>
</csw:Capabilities>
13.6.3. FTP Endpoint
The FTP Endpoint provides a method for ingesting files directly into the DDF catalog using the FTP protocol.
The FTP endpoint can be accessed from any FTP client of choice. Some common clients are FileZilla, PuTTY, or the FTP client provided in the terminal. The default port number is 8021. If FTPS is enabled with 2-way TLS, a client that supports client authentication is required.
Custom Ftplets can be implemented by extending the DefaultFtplet class provided by Apache FTP Server. Doing this allows custom handling of various FTP commands by overriding the methods of the DefaultFtplet.
Refer to https://mina.apache.org/ftpserver-project/ftplet.html for available methods that can be overridden.
After creating a custom Ftplet, it needs to be added to the FTP server’s Ftplets before the server is started. Any Ftplets that are registered to the FTP server will execute the FTP command in the order that they were registered.
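For example, a minimal custom Ftplet sketch, assuming the Apache FtpServer ftplet API is on the classpath; the audit logging shown here is illustrative only:
import java.io.IOException;
import org.apache.ftpserver.ftplet.DefaultFtplet;
import org.apache.ftpserver.ftplet.FtpException;
import org.apache.ftpserver.ftplet.FtpRequest;
import org.apache.ftpserver.ftplet.FtpSession;
import org.apache.ftpserver.ftplet.FtpletResult;

/** Logs every completed upload before passing control to the next registered Ftplet. */
public class AuditingFtplet extends DefaultFtplet {
    @Override
    public FtpletResult onUploadEnd(FtpSession session, FtpRequest request)
            throws FtpException, IOException {
        // request.getArgument() is the file name supplied with the upload command.
        System.out.println("Upload finished: " + request.getArgument());
        // DEFAULT lets the remaining Ftplets (and the server) continue processing.
        return FtpletResult.DEFAULT;
    }
}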
The FTP endpoint supports a single operation: ingest.
The FTP endpoint supports the PUT, MPUT, DELE, RETR, RMD, APPE, RNTO, STOU, and SITE operations.
The FTP endpoint supports files being uploaded as a dot-file (e.g., .foo
) and then being renamed to the final filename (e.g., some-file.pdf
). The endpoint will complete the ingest process when the rename command is sent.
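As an illustrative sketch of that flow using the Apache Commons Net FTPClient (an assumed client library choice; the host and credentials are placeholders):
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTPClient;

public class FtpIngestExample {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        ftp.connect("localhost", 8021);      // 8021 is the default endpoint port
        ftp.login("admin", "admin");         // placeholder credentials
        ftp.enterLocalPassiveMode();

        // Upload as a dot-file first, then rename; the endpoint completes the
        // ingest when the rename command is sent.
        try (InputStream in = new FileInputStream("some-file.pdf")) {
            ftp.storeFile(".foo", in);
        }
        ftp.rename(".foo", "some-file.pdf");

        ftp.logout();
        ftp.disconnect();
    }
}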
13.6.4. OpenSearch Endpoint
The OpenSearch Endpoint enables a client to send query parameters and receive search results. This endpoint uses the input query parameters to create an OpenSearch query. The client does not need to specify all of the query parameters, only the query parameters of interest.
The OpenSearch specification defines a file format to describe an OpenSearch endpoint. This file is XML-based and is used to programmatically retrieve a site’s endpoint, as well as the different parameter options a site holds. The parameters are defined via the OpenSearch and CDR IPT Specifications.
13.6.4.1. OpenSearch Contextual Queries
To use the OpenSearch endpoint for a query, send a GET request with the query options as parameters:
https://<FQDN>:<PORT>/services/catalog/query?<NAME>="<VALUE>"
OpenSearch Element | HTTPS Parameter | Possible Values | Comments |
---|---|---|---|
 | | URL-encoded, space-delimited list of search terms | Complex contextual search string. |
 | | Integer >= 0 | Maximum # of results to retrieve. default: |
 | | Integer > 0 | Index of first result to return. This value uses a one-based index for the results. default: |
 | | Requires a transformer shortname as a string. See Query Response transformers for more possible values. | Defines the format that the return type should be in. default: |
https://<FQDN>:<PORT>/services/catalog/query?q="Aliquam"&count=20
13.6.4.2. OpenSearch Temporal Queries
Queries can also specify a start and end time to narrow results.
OpenSearch Element | HTTPS Parameter | Possible Values | Comments |
---|---|---|---|
 | | RFC 3339-defined value: YYYY-MM-DDTHH:mm:ssZ | Specifies the beginning of the time slice of the search. The default value of "1970-01-01T00:00:00Z" is used when not specified. |
 | | RFC 3339-defined value: YYYY-MM-DDTHH:mm:ssZ | Specifies the ending of the time slice of the search. The current GMT date/time is used when not specified. |
https://<FQDN>:<PORT>/services/catalog/query?q='*'&dtstart=2019-05-29T00:00:00Z&dtend=2019-05-29T18:00:00Z
Note: The start and end temporal criteria must be of the format specified above; other formats are currently not supported. The start and end temporal elements are based on modified timestamps for a metacard.
13.6.4.3. OpenSearch Geospatial Queries
Query by location.
Use geospatial query parameters to create a geospatial INTERSECTS
query, where INTERSECTS
means geometries that are not DISJOINT
to the given geospatial parameters.
OpenSearch Element | HTTPS Parameter | Possible Values | Comments |
---|---|---|---|
 | | | Used in conjunction with the |
 | | | Used in conjunction with the |
 | | | Specifies the search distance in meters. Used in conjunction with the | default: |
 | | Comma-delimited list of lat/lon | According to the OpenSearch Geo Specification this is deprecated. Use the |
 | | 4 comma-delimited values | |
 | | WKT Geometries | Make sure to repeat the starting point as the last point to close the polygon. |
https://localhost:8993/services/catalog/query?q='*'&lon=44.792&lat=-6.171
13.6.4.4. Additional OpenSearch Query Parameters
The OpenSearch Endpoint can also use these additional parameters to refine queries.
OpenSearch Element | HTTPS Parameter | Possible Values | Comments |
---|---|---|---|
 | | | Sorting by default: |
 | | Integer >= 0 | Maximum # of results to return. default: |
 | | Integer > 0 | Maximum timeout (milliseconds) for query to respond. default: |
 | | Integer > 0 | Specifies an offset (milliseconds), backwards from the current time, to search on the modified time field for entries. |
 | | Any valid datatype | Specifies the type of data to search for. |
 | | Comma-delimited list of strings (e.g. 20,30) | Version values for which to search. |
 | | Comma-delimited list of XPath string selectors | Selectors to narrow the query. |
OpenSearch Element | HTTPS Parameter | Possible Values | Comments |
---|---|---|---|
 | | Comma-delimited list of site names to query. Varies depending on the names of the sites in the federation. | If |
13.6.4.5. Complex Contextual Query Format
The OpenSearch Endpoint supports the following operators: AND, OR, and NOT. These operators are case sensitive. Implicit ANDs are also supported.
Use parentheses to change the order of operations. Use quotes to group keywords into literal expressions.
See the OpenSearch specification for more syntax specifics.
https://<FQDN>:<PORT>/services/catalog/query?q='cat OR dog'
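As a sketch, the following assembles a combined contextual and temporal query URL from the parameters shown in the examples above (the host and port are placeholders), URL-encoding the parameter values:
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class OpenSearchQueryUrl {
    public static void main(String[] args) {
        String base = "https://localhost:8993/services/catalog/query";
        // AND, OR, and NOT are case sensitive; quotes group keywords into literal expressions.
        String searchPhrase = "cat OR dog";

        String url = base
                + "?q=" + URLEncoder.encode(searchPhrase, StandardCharsets.UTF_8)
                + "&count=20"
                + "&dtstart=" + URLEncoder.encode("2019-05-29T00:00:00Z", StandardCharsets.UTF_8)
                + "&dtend=" + URLEncoder.encode("2019-05-29T18:00:00Z", StandardCharsets.UTF_8);
        System.out.println(url);
    }
}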
13.6.5. Queries Endpoint
The queries endpoint enables an application to create, retrieve, update, and delete query metacards.
Query metacards represent queries within the UI. A query metacard is what is persisted in the data store.
The queries endpoint can be used for one or more of these operations on an instance of DDF:
-
Create query metacards and store them in the DDF catalog.
-
Retrieve all query metacards stored in the DDF catalog and sort them based on attribute and sort order.
-
Retrieve a specific query metacard stored in the DDF catalog.
-
Update query metacards that are stored in the DDF catalog.
-
Delete query metacards that are stored in the DDF catalog.
https://<HOSTNAME>:<PORT>/search/catalog/internal/queries
13.6.5.1. Queries Endpoint Create Examples
To create a query metacard through the queries endpoint, send a POST
request to the queries endpoint.
{
"cql":"(\"anyText\" ILIKE 'foo bar')",
"filterTree":"{\"type\":\"AND\",\"filters\":[{\"type\":\"ILIKE\",\"property\":\"anyText\",\"value\":\"foo bar\"}]}",
"federation":"enterprise",
"sorts":[
{
"attribute":"modified",
"direction":"descending"
}
],
"type":"advanced",
"title":"Search Title"
}
A successful create request will return a status of 201 CREATED
.
{
"id": "12bfc601cda449d58733eacaf613b93d",
"title": "Search Title",
"created": "Apr 18, 2019 10:20:55 AM",
"modified": "Apr 18, 2019 10:20:55 AM",
"owner": "admin@localhost.local",
"cql": "(\"anyText\" ILIKE 'foo bar')",
"filterTree": "{\"type\":\"AND\",\"filters\":[{\"type\":\"ILIKE\",\"property\":\"anyText\",\"value\":\"foo bar\"}]}",
"enterprise": null,
"sources": [],
"sorts": [
{
"attribute": "modified",
"direction": "descending"
}
],
"polling": null,
"federation": "enterprise",
"type": "advanced",
"detailLevel": null,
"schedules": [],
"facets": []
}
An unsuccessful create request will return a status of 500 SERVER ERROR
.
{
"message": "Something went wrong."
}
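For illustration, a minimal client that POSTs a reduced version of the create request body shown above (assuming the omitted fields are optional); this sketch requires Java 15+ for text blocks, and TLS trust and authentication setup are omitted:
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateQueryMetacard {
    public static void main(String[] args) throws Exception {
        String body = """
                {
                  "cql":"(\\"anyText\\" ILIKE 'foo bar')",
                  "federation":"enterprise",
                  "type":"advanced",
                  "title":"Search Title"
                }""";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:8993/search/catalog/internal/queries"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body()); // expect 201 CREATED
    }
}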
13.6.5.2. Queries Endpoint Retrieve All Examples
To retrieve all query metacards through the queries endpoint, send a GET request to the queries endpoint.
The request accepts optional query parameters; each parameter has a description, default value, valid values, and type, and controls how the returned query metacards are sorted.
A successful retrieval request will return a status of 200 OK
.
13.6.5.3. Queries Endpoint Retrieve All Fuzzy Examples
To retrieve all query metacards based on a text value through the queries endpoint, send a GET request to the queries endpoint specifying a value for text as a query parameter.
https://<HOSTNAME>:<PORT>/search/catalog/internal/queries?text=<VALUE>
A fuzzy search will only be performed against the title, modified, owner, and description attributes.
13.6.5.4. Queries Endpoint Retrieve Examples
https://<HOSTNAME>:<PORT>/search/catalog/internal/queries/<ID>
To retrieve a specific query metacard through the queries endpoint, send a GET request to the queries endpoint with an ID.
A successful retrieval request will return a status of 200 OK.
An unsuccessful retrieval request will return a status of 404 NOT FOUND.
{
    "message": "Could not find metacard for id: <metacardId>"
}
13.6.5.5. Queries Endpoint Update Examples
https://<HOSTNAME>:<PORT>/search/catalog/internal/queries/<ID>
To update a specific query metacard through the queries endpoint, send a PUT request to the queries endpoint with an ID.
{
"cql":"(\"anyText\" ILIKE 'foo bar')",
"filterTree":"{\"type\":\"AND\",\"filters\":[{\"type\":\"ILIKE\",\"property\":\"anyText\",\"value\":\"foo bar\"}]}",
"federation":"enterprise",
"sorts":[
{
"attribute":"modified",
"direction":"descending"
}
],
"type":"advanced",
"title":"New Search Title"
}
A successful update request will return a status of 200 OK
.
{
"id": "cd6b83db301544e4bb7ece39564261ca",
"title": "New Search Title",
"created": "Apr 18, 2019 11:09:35 AM",
"modified": "Apr 18, 2019 11:09:35 AM",
"owner": null,
"cql": "(\"anyText\" ILIKE 'foo barararra')",
"filterTree": "{\"type\":\"AND\",\"filters\":[{\"type\":\"ILIKE\",\"property\":\"anyText\",\"value\":\"foo bar\"}]}",
"enterprise": null,
"sources": [],
"sorts": [
{
"attribute": "modified",
"direction": "descending"
}
],
"polling": null,
"federation": "enterprise",
"type": "advanced",
"detailLevel": null,
"schedules": [],
"facets": []
}
An unsuccessful update request will return a status of 404 NOT FOUND
.
{
"message": "Form is either restricted or not found."
}
13.6.5.6. Queries Endpoint Delete Examples
https://<HOSTNAME>:<PORT>/search/catalog/internal/queries/<ID>
To delete a specific query metacard through the queries endpoint, send a DELETE request to the queries endpoint with an ID.
A successful deletion request will return a status of 204 NO CONTENT
.
An unsuccessful deletion request will return a status of 404 NOT FOUND
.
{
"message": "Form is either restricted or not found."
}
Developing
Developers will build or extend the functionality of the applications.
DDF includes several extension points where external developers can add functionality to support individual use cases.
DDF is written in Java and uses many open source libraries. DDF uses OSGi to provide modularity, lifecycle management, and dynamic services. OSGi services can be installed and uninstalled while DDF is running. DDF development typically means developing new OSGi bundles and deploying them to the running DDF. A complete description of OSGi is outside the scope of this documentation. For more information about OSGi, see the OSGi Alliance website .
Important: If developing for a Highly Available Cluster of DDF, see High Availability Guidance.
14. Catalog Framework API
The CatalogFramework
is the routing mechanism between catalog components that provides integration points for the Catalog Plugins.
An endpoint invokes the active Catalog Framework, which calls any configured Pre-query or Pre-ingest plug-ins.
The selected federation strategy calls the active Catalog Provider and any connected or federated sources.
Then, any Post-query or Post-ingest plug-ins are invoked.
Finally, the appropriate response is returned to the calling endpoint.
The Catalog Framework wires all Catalog components together.
It is responsible for routing Catalog requests and responses to the appropriate target.
Endpoints send Catalog requests to the Catalog Framework. The Catalog Framework then invokes Catalog Plugins, Transformers, and Resource Components as needed before sending requests to the intended destination, such as one or more Sources.
The Catalog Framework decouples clients from service implementations and provides integration points for Catalog Plugins and convenience methods for Endpoint developers.
14.1. Catalog API Design
The Catalog is composed of several components and an API that connects them together. The Catalog API is central to DDF’s architectural qualities of extensibility and flexibility. The Catalog API consists of Java interfaces that define Catalog functionality and specify interactions between components. These interfaces provide the ability for components to interact without a dependency on a particular underlying implementation, thus allowing the possibility of alternate implementations that can maintain interoperability and share developed components. As such, new capabilities can be developed independently, in a modular fashion, using the Catalog API interfaces and reused by other DDF installations.
14.1.1. Ensuring Compatibility
The Catalog API will evolve, but great care is taken to retain backwards compatibility with developed components. Compatibility is reflected in version numbers.
14.1.2. Catalog Framework Sequence Diagrams
Because the Catalog Framework plays a central role to Catalog functionality, it interacts with many different Catalog components. To illustrate these relationships, high-level sequence diagrams with notional class names are provided below. These examples are for illustrative purposes only and do not necessarily represent every step in each procedure.
The Ingest Service Endpoint, the Catalog Framework, and the Catalog Provider are key components of the Reference Implementation.
The Endpoint bundle implements a Web service that allows clients to create, update, and delete metacards.
The Endpoint calls the CatalogFramework
to execute the operations of its specification.
The CatalogFramework
routes the request through optional PreIngest
and PostIngest
Catalog Plugins, which may modify the ingest request/response before/after the Catalog Provider executes the ingest request and provides the response.
Note that a CatalogProvider
must be present for any ingest requests to be successfully processed, otherwise a fault is returned.
This process is similar for updating catalog entries, with update requests calling the update(UpdateRequest) methods on the Endpoint, CatalogFramework, and Catalog Provider. Similarly, for deletion of catalog entries, the delete requests call the delete(DeleteRequest) methods on the Endpoint, CatalogFramework, and CatalogProvider.
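A minimal ingest sketch against the Catalog Framework API, assuming the implementation classes MetacardImpl and CreateRequestImpl from the catalog API bundles are available and that the CatalogFramework reference is injected by the OSGi container (e.g., via a Blueprint descriptor):
import ddf.catalog.CatalogFramework;
import ddf.catalog.data.impl.MetacardImpl;
import ddf.catalog.operation.CreateResponse;
import ddf.catalog.operation.impl.CreateRequestImpl;

public class IngestExample {
    // Typically injected by the OSGi container rather than constructed directly.
    private CatalogFramework catalogFramework;

    public String ingest() throws Exception {
        MetacardImpl metacard = new MetacardImpl();
        metacard.setTitle("Test REST Metacard");

        // Pre-ingest plugins run before, and post-ingest plugins after, the provider call.
        CreateResponse response = catalogFramework.create(new CreateRequestImpl(metacard));
        return response.getCreatedMetacards().get(0).getId();
    }
}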
14.1.2.1. Error Handling
Any ingest attempts that fail inside the Catalog Framework (whether the failure comes from the Catalog Framework itself, pre-ingest plugin failures, or issues with the Catalog Provider) will be logged to a separate log file for ease of error handling.
The file is located at <DDF_HOME>/data/log/ingest_error.log and logs each failed metacard's ID and title, along with the stack trace associated with the failure.
By default, successful ingest attempts are not logged.
However, that functionality can be achieved by setting the log level of the ingestLogger
to DEBUG (note that enabling DEBUG can cause a non-trivial performance hit).
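For example, assuming the default command line console, successful ingest logging could then be enabled with:
log:set DEBUG ingestLogger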
Tip: To turn off logging failed ingest attempts to a separate file, execute the following via the command line console: log:set ERROR ingestLogger
14.1.2.2. Query
The Query Service Endpoint, the Catalog Framework, and the CatalogProvider
are key components for processing a query request as well.
The Endpoint bundle contains a Web service that exposes the interface to query for Metacards
.
The Endpoint calls the CatalogFramework
to execute the operations of its specification.
The CatalogFramework
relies on the CatalogProvider
to execute the actual query.
Optional PreQuery and PostQuery Catalog Plugins may be invoked by the CatalogFramework
to modify the query request/response prior to the Catalog Provider processing the query request and providing the query response.
If a CatalogProvider
is not configured and no other remote Sources are configured, a fault will be returned.
It is possible to have only remote Sources configured and no local CatalogProvider
configured and be able to execute queries to specific remote Sources by specifying the site name(s) in the query request.
14.1.2.3. Product Retrieval
The Query Service Endpoint, the Catalog Framework, and the CatalogProvider
are key components for processing a retrieve product request.
The Endpoint bundle contains a Web service that exposes the interface to retrieve products, also referred to as Resources.
The Endpoint calls the CatalogFramework
to execute the operations of its specification.
The CatalogFramework
relies on the Sources to execute the actual product retrieval.
Optional PreResource
and PostResource
Catalog Plugins may be invoked by the CatalogFramework
to modify the product retrieval request/response prior to the Catalog Provider processing the request and providing the response.
It is possible to retrieve products from specific remote Sources by specifying the site name(s) in the request.
14.1.2.4. Product Caching
The Catalog Framework optionally provides caching of products, so future requests to retrieve the same product will be serviced much quicker.
If caching is enabled, each time a retrieve product request is received, the Catalog Framework will look in its cache (default location <DDF_HOME>/data/product-cache
) to see if the product has been cached locally.
If it has, the product is retrieved from the local site and returned to the client, providing a much quicker turnaround because remote product retrieval and network traffic was avoided.
If the requested product is not in the cache, the product is retrieved from the Source (local or remote) and cached locally while returning the product to the client.
The caching to a local file of the product and the streaming of the product to the client are done simultaneously so that the client does not have to wait for the caching to complete before receiving the product.
If errors are detected during the caching, caching of the product will be abandoned, and the product will be returned to the client.
The Catalog Framework attempts to detect any network problems during the product retrieval, e.g., long pauses where no bytes are read implying a network connection was dropped. (The amount of time defined as a "long pause" is configurable, with the default value being five seconds.) The Catalog Framework will attempt to retrieve the product up to a configurable number of times (default = three), waiting for a configurable amount of time (default = 10 seconds) between each attempt, trying to successfully retrieve the product. If the Catalog Framework is unable to retrieve the product, an error message is returned to the client.
If the admin has enabled the Always Cache When Canceled option, caching of the product will occur even if the client cancels the product retrieval so that future requests will be serviced quickly. Otherwise, caching is canceled if the user cancels the product download.
14.1.2.5. Product Download Status
As part of the caching of products, the Catalog Framework also posts events to the OSGi notification framework. Information includes when the product download started, whether the download is retrying or failed (after the number of retrieval attempts configured for product caching has been exhausted), and when the download completes. These events are retrieved by the Search UI and presented to the user who initiated the download.
14.1.3. Catalog API
The Catalog API is an OSGi bundle (catalog-core-api
) that contains the Java interfaces for the Catalog components and implementation classes for the Catalog Framework, Operations, and Data components.
14.1.3.1. Catalog API Search Interfaces
The Catalog API includes two different search interfaces.
- Search UI Application Search Interface
-
The DDF Search UI application provides a graphic interface to return results and locate them on an interactive globe or map.
- SSH Search Interface
-
Additionally, it is possible to use a client script to remotely access DDF via SSH and send console commands to search and ingest data.
14.1.3.2. Catalog Search Result Objects
Data is returned from searches as Catalog Search Result
objects.
This is a subtype of Catalog Entry
that also contains additional data based on what type of sort policy was applied to the search.
Because it is a subtype of Catalog Entry
, a Catalog Search Result
has all Catalog Entry
’s fields such as metadata, effective time, and modified time.
It also contains some of the following fields, depending on the type of search, that are populated by DDF when the search occurs:
- Distance
-
Populated when a point-radius spatial search occurs. Numerical value that indicates the result’s distance from the center point of the search.
- Units
-
Populated when a point-radius spatial search occurs. Indicates the units (kilometer, mile, etc.) for the distance field.
- Relevance
-
Populated when a contextual search occurs. Numerical value that indicates how relevant the text in the result is to the text originally searched for.
14.1.3.3. Search Programmatic Flow
Searching the catalog involves the following basic steps:
-
Define the search criteria (contextual, spatial, or temporal).
-
Optionally define a sort policy and assign it to the criteria.
-
For contextual searches, optionally set the fuzzy flag to true or false (the default value for the Metadata Catalog fuzzy flag is true, while the portal default value is false).
-
For contextual searches, optionally set the caseSensitive flag to true (by default, the caseSensitive flag is not set and queries are not case sensitive). Setting the flag enables case-sensitive matching on the search criteria. For example, if caseSensitive is set to true and the phrase is “Baghdad”, only metadata containing “Baghdad” with the same matching case will be returned. Words such as “baghdad”, “BAGHDAD”, and “baghDad” will not be returned because they do not match the exact case of the search term.
-
Issue a search.
-
Examine the results.
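The following is a minimal sketch of this flow using the Catalog API, assuming a CatalogFramework and FilterBuilder have been obtained from the OSGi registry (implementation package names such as ddf.catalog.operation.impl vary slightly between DDF versions):
import ddf.catalog.CatalogFramework;
import ddf.catalog.data.Metacard;
import ddf.catalog.data.Result;
import ddf.catalog.filter.FilterBuilder;
import ddf.catalog.operation.QueryResponse;
import ddf.catalog.operation.impl.QueryImpl;
import ddf.catalog.operation.impl.QueryRequestImpl;
import org.opengis.filter.Filter;

public class SearchExample {
    // Step 1: define contextual criteria; step 2: issue the search; step 3: examine the results.
    public void search(CatalogFramework framework, FilterBuilder filterBuilder) throws Exception {
        Filter filter = filterBuilder.attribute(Metacard.ANY_TEXT).is().like().text("Baghdad");
        QueryResponse response = framework.query(new QueryRequestImpl(new QueryImpl(filter)));
        for (Result result : response.getResults()) {
            System.out.println(result.getMetacard().getTitle());
        }
    }
}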
14.1.3.4. Sort Policies
Searches can also be sorted according to various built-in policies. A sort policy is applied to the search criteria after its creation but before the search is issued. The policy specifies to the DDF the order the Catalog search results should be in when they are returned to the requesting client. Only one sort policy may be defined per search.
There are three policies available.
Sort Policy | Sorts By | Default Order | Available for |
---|---|---|---|
Temporal | The catalog search result’s effective time field | Newest to oldest | All Search Types |
Distance | The catalog search result’s distance field | Nearest to farthest | Point-Radius Spatial searches |
Relevance | The catalog search result’s relevance field | Most to least relevant | Contextual |
If no sort policy is defined for a particular search, the temporal policy will automatically be applied.
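For illustration, a sort policy can be attached to the query before it is issued. This is a hedged sketch assuming the Catalog API implementation classes QueryImpl and SortByImpl (package names vary slightly between DDF versions) and the OGC SortOrder enumeration:
import ddf.catalog.data.Result;
import ddf.catalog.filter.impl.SortByImpl;
import ddf.catalog.operation.QueryRequest;
import ddf.catalog.operation.impl.QueryImpl;
import ddf.catalog.operation.impl.QueryRequestImpl;
import org.opengis.filter.Filter;
import org.opengis.filter.sort.SortOrder;

public class SortExample {
    // Attach a temporal sort policy (newest to oldest) to the search criteria.
    public QueryRequest temporalQuery(Filter filter) {
        QueryImpl query = new QueryImpl(filter);
        query.setSortBy(new SortByImpl(Result.TEMPORAL, SortOrder.DESCENDING));
        return new QueryRequestImpl(query);
    }
}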
14.1.3.5. Product Retrieval
The DDF is used to catalog resources. A Resource is a URI-addressable entity that is represented by a Metacard. Resources may also be known as products or data. Resources may exist either locally or on a remote data store. Examples of resources include:
-
NITF image
-
MPEG video
-
Live video stream
-
Audio recording
-
Document
Methods of product retrieval include:
-
SOAP Web services
-
DDF JSON
-
DDF REST
The Query Service Endpoint, the Catalog Framework, and the CatalogProvider
are key
components for processing a retrieve product request.
The Endpoint bundle contains a Web service that exposes the interface to retrieve products, also referred to as Resources.
The Endpoint calls the CatalogFramework
to execute the operations of its specification.
The CatalogFramework
relies on the Sources to execute the actual product retrieval.
Optional PreResource and PostResource Catalog Plugins may be invoked by the CatalogFramework
to modify the product retrieval request/response prior to the Catalog Provider processing the request and providing the response.
It is possible to retrieve products from specific remote Sources by specifying the site name(s) in the request.
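A hedged sketch of such a site-specific retrieval through the Catalog API, assuming the ResourceRequestById implementation class (the metacard id and site name parameters are illustrative placeholders):
import ddf.catalog.CatalogFramework;
import ddf.catalog.operation.ResourceResponse;
import ddf.catalog.operation.impl.ResourceRequestById;
import ddf.catalog.resource.Resource;

public class RetrievalExample {
    // Retrieve the product behind a metacard from a named remote Source.
    public Resource retrieve(CatalogFramework framework, String metacardId, String siteName) throws Exception {
        ResourceResponse response = framework.getResource(new ResourceRequestById(metacardId), siteName);
        return response.getResource();
    }
}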
Note
|
Product Caching
Existing DDF clients are able to leverage product caching because the product cache is implemented within DDF. Enabling the product cache is an administrator function. Product Caching is enabled by default. |
To configure product caching:
-
Navigate to the Admin Console.
-
Select Catalog.
-
Select Configuration.
-
Select Resource Download Settings.
See Resource Download Settings configurations for all possible configurations.
14.1.3.6. Notifications and Activities
DDF can send/receive notifications of "Activities" occurring in the system.
14.1.3.6.1. Notifications
Currently, the notifications provide information about product retrieval only.
14.1.3.6.2. Activities
Activity events include the status and progress of actions that are being performed by the user, such as searches and downloads.
14.2. Included Catalog Frameworks, Associated Components, and Configurations
These catalog frameworks are available in a standard DDF installation:
- Standard Catalog Framework
-
Reference implementation of a Catalog Framework that implements all requirements of the Catalog API.
- Catalog Framework Camel Component
-
Supports creating, updating, and deleting metacards using the Catalog Framework from a Camel route.
14.2.1. Standard Catalog Framework
The Standard Catalog Framework provides the reference implementation of a Catalog Framework that implements all requirements of the Catalog API.
CatalogFrameworkImpl
is the implementation of the DDF Standard Catalog Framework.
The Standard Catalog Framework is the core class of DDF.
It provides the methods for create, update, delete, and resource retrieval (CRUD) operations on the Sources
.
By contrast, the Fanout Catalog Framework only allows for query and resource retrieval operations, no catalog modifications, and all queries are enterprise-wide.
Use this framework if:
-
access to a catalog provider is required to create, update, and delete catalog entries.
-
queries to specific sites are required.
-
queries to only the local provider are required.
It is possible to have only remote Sources configured with no local CatalogProvider
configured and be able to execute queries to specific remote sources by specifying the site name(s) in the query request.
The Standard Catalog Framework also maintains a list of ResourceReaders
for resource retrieval operations.
A resource reader is matched to the scheme (i.e., protocol, such as file://
) in the URI of the resource specified in the request to be retrieved.
Site information about the catalog provider and/or any federated source(s) can be retrieved using the Standard Catalog Framework. Site information includes the source’s name, version, availability, and the list of unique content types currently stored in the source (e.g., NITF). If no local catalog provider is configured, the site information returned includes site info for the catalog framework with no content types included.
14.2.1.1. Installing the Standard Catalog Framework
The Standard Catalog Framework is bundled as the catalog-core-standardframework
feature and can be installed and uninstalled using the normal processes described in Configuration.
14.2.1.2. Configuring the Standard Catalog Framework
These are the configurable properties on the Standard Catalog Framework.
See Catalog Standard Framework configurations for all possible configurations.
14.2.2. Catalog Framework Camel Component
The Catalog Framework Camel Component supports creating, updating, and deleting metacards using the Catalog Framework from a Camel route.
catalog:framework
14.2.2.1. Message Headers
14.2.2.1.1. Catalog Framework Producer
Header | Description |
---|---|
operation |
the operation to perform using the Catalog Framework (possible values are CREATE | UPDATE | DELETE) |
14.2.2.2. Sending Messages to Catalog Framework Endpoint
14.2.2.2.1. Catalog Framework Producer
In Producer mode, the component accepts different message bodies and has the Catalog Framework perform the operation indicated by the header values.
For the CREATE and UPDATE operations, the message body can contain a list of metacards or a single metacard object.
For the DELETE operation, the message body can contain a list of strings or a single string object, where each string is the ID of a metacard to be deleted. The exchange’s "in" message will be set with the affected metacards: for a CREATE, the created metacards; for an UPDATE, the updated metacards; and for a DELETE, the deleted metacards.
Header | Message Body (Input) | Exchange Modification (Output) |
---|---|---|
operation = CREATE |
List<Metacard> or Metacard |
exchange.getIn().getBody() updated with List of Metacards created |
operation = UPDATE |
List<Metacard> or Metacard |
exchange.getIn().getBody() updated with List of Metacards updated |
operation = DELETE |
List<String> or String (representing metacard IDs) |
exchange.getIn().getBody() updated with List of Metacards deleted |
Note
|
If there is an exception thrown while the route is being executed, a FrameworkProducerException will be thrown causing the route to fail with a CamelExecutionException. |
14.2.2.2.2. Samples
This example demonstrates:
-
Reading in some sample data from the file system.
-
Using a Java bean to convert the data into a metacard.
-
Setting a header value on the Exchange.
-
Sending the Metacard to the Catalog Framework component for ingesting.
<route>
<from uri="file:data/sampleData?noop=true"/>
<bean ref="sampleDataToMetacardConverter" method="convertToMetacard"/>
<setHeader headerName="operation">
<constant>CREATE</constant>
</setHeader>
<to uri="catalog:framework"/>
</route>
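An equivalent route written with Camel's Java DSL might look like the following sketch (the bean and endpoint names mirror the XML sample above; the bean-lookup URI syntax is one of several Camel supports):
import org.apache.camel.builder.RouteBuilder;

public class IngestRouteBuilder extends RouteBuilder {
    @Override
    public void configure() {
        from("file:data/sampleData?noop=true")
            // convert each file payload into a Metacard via a registry bean
            .to("bean:sampleDataToMetacardConverter?method=convertToMetacard")
            // tell the catalog component which Catalog Framework operation to perform
            .setHeader("operation", constant("CREATE"))
            .to("catalog:framework");
    }
}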
15. Transformers
Transformers transform data to and from various formats. Transformers are categorized by when they are invoked and used. The existing types are Input transformers, Metacard transformers, and Query Response transformers. Additionally, XSLT transformers are provided to aid in developing custom, lightweight Metacard and Query Response transformers.
Transformers are utility objects used to transform a set of standard DDF components into a desired format, such as PDF, GeoJSON, XML, or any other format. For instance, a transformer can be used to convert a set of query results into an easy-to-read GeoJSON format (GeoJSON Transformer) or into an RSS feed that can be published to a URL for RSS feed subscription. Transformers can be registered in the OSGi Service Registry so that any other developer can access them based on their standard interface and self-assigned identifier, referred to as the transformer's "shortname." Transformers are often used by endpoints for data conversion in a system-standard way. Multiple endpoints can use the same transformer, a different transformer, or their own published transformer.
Warning
|
The current transformers only work for UTF-8 characters and do not support non-Western characters (for example, Hebrew). It is recommended not to use international character sets, as they may not be displayed properly. |
Transformers are used to alter the format of a resource or its metadata to or from the catalog’s metacard format.
- Input Transformers
-
Input Transformers create metacards from input. Once converted to a Metacard, the data can be used in a variety of ways, such as in an
UpdateRequest
,CreateResponse
, or within Catalog Endpoints or Sources. For instance, an input transformer could be used to receive and translate XML into a Metacard so that it can be placed within aCreateRequest
to be ingested within the Catalog. Input transformers should be registered within the Service Registry with the interfaceddf.catalog.transform.InputTransformer
to notify Catalog components of any new transformers. - Metacard Transformers
-
Metacard Transformers translate a metacard from catalog metadata to a specific data format.
- Query Response Transformers
-
Query Response transformers convert query responses into other data formats.
15.1. Available Input Transformers
The following input transformers are available in a standard installation of DDF:
- GeoJSON Input Transformer
-
Translates GeoJSON into a Catalog metacard.
- PDF Input Transformer
-
Translates a PDF document into a Catalog Metacard.
- PPTX Input Transformer
-
Translates Microsoft PowerPoint (OOXML only) documents into Catalog Metacards.
- Registry Transformer
-
Creates Registry metacards from
ebrim
messages and translates a Registry metacard. (used by the Registry application) - Tika Input Transformer
-
Translates Microsoft Word, Microsoft Excel, Microsoft PowerPoint, OpenOffice Writer, and PDF documents into Catalog records.
- Video Input Transformer
-
Creates Catalog metacards from certain video file types.
- XML Input Transformer
-
Translates an XML document into a Catalog Metacard.
15.2. Available Metacard Transformers
The following metacard transformers are available in a standard installation of DDF:
- GeoJSON Metacard Transformer
-
Translates a metacard into GeoJSON.
- KML Metacard Transformer
-
Translates a metacard into a KML-formatted document.
- KML Style Mapper
-
Maps a KML Style URL to a metacard based on that metacard’s attributes.
- Metadata Metacard Transformer
-
returns the
Metacard.METADATA
attribute when given a metacard. - Registry Transformer
-
Creates Registry metacards from
ebrim
messages and translates a Registry metacard. (used by the Registry application) - Resource Metacard Transformer
-
Retrieves the resource bytes of a metacard by returning the product associated with the metacard.
- Thumbnail Metacard Transformer
-
Retrieves the thumbnail bytes of a Metacard by returning the
Metacard.THUMBNAIL
attribute value. - XML Metacard Transformer
-
Translates a metacard into an XML-formatted document.
15.3. Available Query Response Transformers
The following query response transformers are available in a standard installation of DDF:
- Atom Query Response Transformer
-
Transforms a query response into an Atom 1.0 feed.
- CSW Query Response Transformer
-
Transforms a query response into a CSW-formatted document.
- GeoJSON Query Response Transformer
-
Translates a query response into a GeoJSON-formatted document.
- KML Query Response Transformer
-
Translates a query response into a KML-formatted document.
- Query Response Transformer Consumer
-
Translates a query response into a Catalog Metacard.
- XML Query Response Transformer
-
Translates a query response into an XML-formatted document.
15.4. Transformers Details
Availability and configuration details of available transformers.
15.4.1. Atom Query Response Transformer
The Atom Query Response Transformer transforms a query response into an Atom 1.0 feed.
The Atom transformer maps a QueryResponse
object as described in the Query Result Mapping.
15.4.1.1. Installing the Atom Query Response Transformer
The Atom Query Response Transformer is installed by default with a standard installation.
15.4.1.2. Configuring the Atom Query Response Transformer
The Atom Query Response Transformer has no configurable properties.
15.4.1.3. Using the Atom Query Response Transformer
Use this transformer when Atom is the preferred medium of communicating information, such as for feed readers or federation. An integrator could use this with an endpoint to transform query responses into an Atom feed.
For example, clients can use the OpenSearch Endpoint.
The client can query with the format option set to the shortname, atom
.
http://{FQDN}:{PORT}/services/catalog/query?q=ddf&format=atom
Developers could use this transformer to programmatically transform QueryResponse
objects on the fly.
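For example, a hedged sketch that invokes a QueryResponseTransformer registered with the id atom (the empty arguments map assumes the transformer needs no additional arguments):
import ddf.catalog.data.BinaryContent;
import ddf.catalog.operation.QueryResponse;
import ddf.catalog.transform.QueryResponseTransformer;
import java.util.Collections;

public class AtomExample {
    // Transform a query response into an Atom feed and return it as a string.
    public String toAtom(QueryResponseTransformer atomTransformer, QueryResponse response) throws Exception {
        BinaryContent feed = atomTransformer.transform(response, Collections.emptyMap());
        return new String(feed.getByteArray());
    }
}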
Atom feed transformed from a QueryResponse object
<feed xmlns="http://www.w3.org/2005/Atom" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
<title type="text">Query Response</title>
<updated>2017-01-31T23:22:37.298Z</updated>
<id>urn:uuid:a27352c9-f935-45f0-9b8c-5803095164bb</id>
<link href="#" rel="self" />
<author>
<name>Organization Name</name>
</author>
<generator version="2.1.0.20130129-1341">ddf123</generator>
<os:totalResults>1</os:totalResults>
<os:itemsPerPage>10</os:itemsPerPage>
<os:startIndex>1</os:startIndex>
<entry xmlns:relevance="http://a9.com/-/opensearch/extensions/relevance/1.0/" xmlns:fs="http://a9.com/-/opensearch/extensions/federation/1.0/"
xmlns:georss="http://www.georss.org/georss">
<fs:resultSource fs:sourceId="ddf123" />
<relevance:score>0.19</relevance:score>
<id>urn:catalog:id:ee7a161e01754b9db1872bfe39d1ea09</id>
<title type="text">F-15 lands in Libya; Crew Picked Up</title>
<updated>2013-01-31T23:22:31.648Z</updated>
<published>2013-01-31T23:22:31.648Z</published>
<link href="http://123.45.67.123:8181/services/catalog/ddf123/ee7a161e01754b9db1872bfe39d1ea09" rel="alternate" title="View Complete Metacard" />
<category term="Resource" />
<georss:where xmlns:gml="http://www.opengis.net/gml">
<gml:Point>
<gml:pos>32.8751900768792 13.1874561309814</gml:pos>
</gml:Point>
</georss:where>
<content type="application/xml">
<ns3:metacard xmlns:ns3="urn:catalog:metacard" xmlns:ns2="http://www.w3.org/1999/xlink" xmlns:ns1="http://www.opengis.net/gml"
xmlns:ns4="http://www.w3.org/2001/SMIL20/" xmlns:ns5="http://www.w3.org/2001/SMIL20/Language" ns1:id="4535c53fc8bc4404a1d32a5ce7a29585">
<ns3:type>ddf.metacard</ns3:type>
<ns3:source>ddf.distribution</ns3:source>
<ns3:geometry name="location">
<ns3:value>
<ns1:Point>
<ns1:pos>32.8751900768792 13.1874561309814</ns1:pos>
</ns1:Point>
</ns3:value>
</ns3:geometry>
<ns3:dateTime name="created">
<ns3:value>2013-01-31T16:22:31.648-07:00</ns3:value>
</ns3:dateTime>
<ns3:dateTime name="modified">
<ns3:value>2013-01-31T16:22:31.648-07:00</ns3:value>
</ns3:dateTime>
<ns3:stringxml name="metadata">
<ns3:value>
<ns6:xml xmlns:ns6="urn:sample:namespace" xmlns="urn:sample:namespace">Example description.</ns6:xml>
</ns3:value>
</ns3:stringxml>
<ns3:string name="metadata-content-type-version">
<ns3:value>myVersion</ns3:value>
</ns3:string>
<ns3:string name="metadata-content-type">
<ns3:value>myType</ns3:value>
</ns3:string>
<ns3:string name="title">
<ns3:value>Example title</ns3:value>
</ns3:string>
</ns3:metacard>
</content>
</entry>
</feed>
The elements of the Atom feed map to DDF values as follows: the feed title is "Query Response"; the feed's updated element is the ISO 8601 dateTime of when the feed was generated; the feed id is a generated UUID URN; the author name, generator, and generator version come from the Platform Global Configuration organization, site name, and version; the OpenSearch totalResults, itemsPerPage, and startIndex elements carry the SourceResponse number of hits, the request's page size, and the request's start index. Within each entry, the resultSource records the Source id from which the Result came, the relevance score holds the Result's relevance (if applicable), the updated and published elements are ISO 8601 dateTimes, the entry links point to the underlying resource (if applicable and a link is available) and to an alternate view of the Metacard (if a link is available), the georss:where element holds GeoRSS GML of every geometry-format Metacard attribute, and the entry content holds the Metacard XML.
15.4.2. CSW Query Response Transformer
The CSW Query Response Transformer transforms a query response into a CSW-formatted document.
15.4.2.1. Installing the CSW Query Response Transformer
The CSW Query Response Transformer is installed by default with a standard installation in the Spatial application.
15.4.2.2. Configuring the CSW Query Response Transformer
The CSW Query Response Transformer has no configurable properties.
15.4.3. GeoJSON Input Transformer
The GeoJSON input transformer is responsible for translating GeoJSON into a Catalog metacard.
Schema | Mime-types |
---|---|
N/A | application/json;id=geojson |
15.4.3.1. Installing the GeoJSON Input Transformer
The GeoJSON Input Transformer is installed by default with a standard installation.
15.4.3.2. Configuring the GeoJSON Input Transformer
The GeoJSON Input Transformer has no configurable properties.
15.4.3.3. Using the GeoJSON Input Transformer
Using the REST Endpoint, for example, HTTP POST a GeoJSON metacard to the Catalog. Once the REST Endpoint receives the GeoJSON Metacard, it is converted to a Catalog metacard.
Ingesting a metacard.json file using curl:
curl -X POST -i -H "Content-Type: application/json" -d "@metacard.json" https://{FQDN}:{PORT}/services/catalog
15.4.3.4. Conversion to a Metacard
A GeoJSON object consists of a single JSON object.
This can be a geometry, a feature, or a FeatureCollection
.
The GeoJSON input transformer only converts "feature" objects into metacards because feature objects include geometry information and a list of properties.
A geometry object alone does not contain enough information to create a metacard.
Additionally, the input transformer currently does not handle FeatureCollection
s.
Important
|
Cannot create Metacard from this limited GeoJSON
A geometry-only object, e.g., { "type": "Point", "coordinates": [30.0, 10.0] }, lacks the properties needed to create a metacard. |
The following sample will create a valid metacard:
{
"properties": {
"title": "myTitle",
"thumbnail": "CA==",
"resource-uri": "http://example.com",
"created": "2012-09-01T00:09:19.368+0000",
"metadata-content-type-version": "myVersion",
"metadata-content-type": "myType",
"metadata": "<xml></xml>",
"modified": "2012-09-01T00:09:19.368+0000"
},
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
30.0,
10.0
]
}
}
In the current implementation, Metacard.LOCATION
is not taken from the properties list as WKT, but instead interpreted from the geometry
JSON object.
The geometry object is formatted according to the GeoJSON standard.
Dates are in the ISO 8601 standard.
White space is ignored, as in most cases with JSON.
Binary data is accepted as Base64.
XML must be properly escaped, as is required for any JSON string value.
Currently, only Required Attributes are recognized in the properties.
15.4.3.4.1. Metacard Extensibility
The GeoJSON Input Transformer supports custom, extensible properties on the incoming GeoJSON using DDF’s extensible metacard support.
To have those customized attributes understood by the system, a corresponding MetacardType
must be registered with the MetacardTypeRegistry
.
That MetacardType
must be specified by name in the metacard-type property of the incoming GeoJSON.
If a MetacardType
is specified on the GeoJSON input, the customized properties can be processed, cataloged, and indexed.
{
"properties": {
"title": "myTitle",
"thumbnail": "CA==",
"resource-uri": "http://example.com",
"created": "2012-09-01T00:09:19.368+0000",
"metadata-content-type-version": "myVersion",
"metadata-content-type": "myType",
"metadata": "<xml></xml>",
"modified": "2012-09-01T00:09:19.368+0000",
"min-frequency": "10000000",
"max-frequency": "20000000",
"metacard-type": "ddf.metacard.custom.type"
},
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [
30.0,
10.0
]
}
}
When the GeoJSON Input Transformer gets GeoJSON with the MetacardType
specified, it will perform a lookup in the MetacardTypeRegistry
to obtain the specified MetacardType
in order to understand how to parse the GeoJSON.
If no MetacardType
is specified, the GeoJSON Input Transformer will assume the default MetacardType
.
If an unregistered MetacardType
is specified, an exception will be returned to the client indicating that the MetacardType
was not found.
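A hedged sketch of building such a custom MetacardType, to be registered as an OSGi service under the ddf.catalog.data.MetacardType interface so the registry lookup described above can find it (the attribute names match the sample GeoJSON; the descriptor flags shown are illustrative):
import ddf.catalog.data.AttributeDescriptor;
import ddf.catalog.data.MetacardType;
import ddf.catalog.data.impl.AttributeDescriptorImpl;
import ddf.catalog.data.impl.BasicTypes;
import ddf.catalog.data.impl.MetacardTypeImpl;
import java.util.HashSet;
import java.util.Set;

public class CustomTypeExample {
    // Extend the basic metacard type with the min-frequency and max-frequency attributes.
    public MetacardType customType() {
        Set<AttributeDescriptor> descriptors =
                new HashSet<>(BasicTypes.BASIC_METACARD.getAttributeDescriptors());
        // descriptor flags: indexed, stored, tokenized, multivalued
        descriptors.add(new AttributeDescriptorImpl("min-frequency", true, true, false, false, BasicTypes.LONG_TYPE));
        descriptors.add(new AttributeDescriptorImpl("max-frequency", true, true, false, false, BasicTypes.LONG_TYPE));
        return new MetacardTypeImpl("ddf.metacard.custom.type", descriptors);
    }
}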
15.4.3.5. Usage Limitations of the GeoJSON Input Transformer
The GeoJSON Input Transformer does not handle multiple geometries.
15.4.4. GeoJSON Metacard Transformer
GeoJSON Metacard Transformer translates a metacard into GeoJSON.
15.4.4.1. Installing the GeoJSON Metacard Transformer
The GeoJSON Metacard Transformer is not installed by default with a standard installation.
To install:
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
catalog-transformer-json
feature.
15.4.4.2. Configuring the GeoJSON Metacard Transformer
The GeoJSON Metacard Transformer has no configurable properties.
15.4.4.3. Using the GeoJSON Metacard Transformer
The GeoJSON Metacard Transformer can be used programmatically by requesting a MetacardTransformer
with the id geojson
.
It can also be used within the REST Endpoint by providing the transform option as geojson
.
https://{FQDN}:{PORT}/services/catalog/0123456789abcdef0123456789abcdef?transform=geojson
{
"properties":{
"title":"myTitle",
"thumbnail":"CA==",
"resource-uri":"http:\/\/example.com",
"created":"2012-08-31T23:55:19.518+0000",
"metadata-content-type-version":"myVersion",
"metadata-content-type":"myType",
"metadata":"<xml>text<\/xml>",
"modified":"2012-08-31T23:55:19.518+0000",
"metacard-type": "ddf.metacard"
},
"type":"Feature",
"geometry":{
"type":"LineString",
"coordinates":[
[
30.0,
10.0
],
[
10.0,
30.0
],
[
40.0,
40.0
]
]
}
}
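The same transformation can be performed in code. A minimal hedged sketch, assuming a MetacardTransformer service with the id geojson has already been retrieved from the OSGi registry:
import ddf.catalog.data.BinaryContent;
import ddf.catalog.data.Metacard;
import ddf.catalog.transform.MetacardTransformer;
import java.util.Collections;

public class GeoJsonExample {
    // Transform a single metacard into its GeoJSON representation.
    public String toGeoJson(MetacardTransformer geojsonTransformer, Metacard metacard) throws Exception {
        BinaryContent content = geojsonTransformer.transform(metacard, Collections.emptyMap());
        return new String(content.getByteArray());
    }
}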
15.4.5. GeoJSON Query Response Transformer
The GeoJSON Query Response Transformer translates a query response into a GeoJSON-formatted document.
15.4.5.1. Installing the GeoJSON Query Response Transformer
The GeoJSON Query Response Transformer is installed by default with a standard installation in the Catalog application.
15.4.5.2. Configuring the GeoJSON Query Response Transformer
The GeoJSON Query Response Transformer has no configurable properties.
15.4.6. KML Metacard Transformer
The KML Metacard Transformer is responsible for translating a metacard into a KML-formatted document. The KML will contain an HTML description that will display in the pop-up bubble in Google Earth. The HTML contains links to the full metadata view as well as the product.
15.4.6.1. Installing the KML Metacard Transformer
The KML Metacard Transformer is installed by default with a standard installation in the Spatial Application.
15.4.6.2. Configuring the KML Metacard Transformer
The KML Metacard Transformer has no configurable properties.
15.4.6.3. Using the KML Metacard Transformer
Using the REST Endpoint, for example, request a metacard with the transform option set to the KML shortname, kml.
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kml xmlns:ns2="http://www.google.com/kml/ext/2.2" xmlns="http://www.opengis.net/kml/2.2" xmlns:ns4="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0" xmlns:ns3="http://www.w3.org/2005/Atom">
<Placemark id="Placemark-0103c77e66d9428d8f48fab939da528e">
<name>MultiPolygon</name>
<description><!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=windows-1252" http-equiv="content-type">
<style media="screen" type="text/css">
.label {
font-weight: bold
}
.linkTable {
width: 100% }
.thumbnailDiv {
text-align: center
}
img {
max-width: 100px;
max-height: 100px;
border-style:none
}
</style>
</head>
<body>
<div class="thumbnailDiv"><a href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource"><img alt="Thumbnail" src="data:image/jpeg;charset=utf-8;base64, CA=="></a></div>
<table>
<tr>
<td class="label">Source:</td>
<td>ddf.distribution</td>
</tr>
<tr>
<td class="label">Created:</td>
<td>Wed Oct 30 09:46:29 MDT 2013</td>
</tr>
<tr>
<td class="label">Effective:</td>
<td>2014-01-07T14:58:16-0700</td>
</tr>
</table>
<table class="linkTable">
<tr>
<td><a href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=html">View Details...</a></td>
<td><a href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource">Download...</a></td>
</tr>
</table>
</body>
</html>
</description>
<TimeSpan>
<begin>2014-01-07T21:58:16</begin>
</TimeSpan>
<Style id="bluenormal">
<LabelStyle>
<scale>0.0</scale>
</LabelStyle>
<LineStyle>
<color>33ff0000</color>
<width>3.0</width>
</LineStyle>
<PolyStyle>
<color>33ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td
width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<Style id="bluehighlight">
<LabelStyle>
<scale>1.0</scale>
</LabelStyle>
<LineStyle>
<color>99ff0000</color>
<width>6.0</width>
</LineStyle>
<PolyStyle>
<color>99ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<StyleMap id="default">
<Pair>
<key>normal</key>
<styleUrl>#bluenormal</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#bluehighlight</styleUrl>
</Pair>
</StyleMap>
<MultiGeometry>
<Point>
<coordinates>102.0,2.0</coordinates>
</Point>
<MultiGeometry>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>102.0,2.0 103.0,2.0 103.0,3.0 102.0,3.0 102.0,2.0</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>100.0,0.0 101.0,0.0 101.0,1.0 100.0,1.0 100.0,0.0 100.2,0.2 100.8,0.8 100.2,0.8 100.2,0.2</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</MultiGeometry>
</Placemark>
</kml>
15.4.7. KML Query Response Transformer
The KML Query Response Transformer translates a query response into a KML-formatted document. The KML will contain an HTML description for each metacard that will display in the pop-up bubble in Google Earth. The HTML contains links to the full metadata view as well as the product.
15.4.7.1. Installing the KML Query Response Transformer
The spatial-kml-transformer
feature is installed by default in the Spatial Application.
15.4.7.2. Configuring the KML Query Response Transformer
The KML Query Response Transformer has no configurable properties.
15.4.7.3. Using the KML Query Response Transformer
Using the OpenSearch Endpoint, for example, query with the format option set to the KML shortname: kml
.
http://{FQDN}:{PORT}/services/catalog/query?q=schematypesearch&format=kml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<kml xmlns:ns2="http://www.google.com/kml/ext/2.2" xmlns="http://www.opengis.net/kml/2.2" xmlns:ns4="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0" xmlns:ns3="http://www.w3.org/2005/Atom">
<Document id="f0884d8c-cf9b-44a1-bb5a-d3c6fb9a96b6">
<name>Results (1)</name>
<open xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">false</open>
<Style id="bluenormal">
<LabelStyle>
<scale>0.0</scale>
</LabelStyle>
<LineStyle>
<color>33ff0000</color>
<width>3.0</width>
</LineStyle>
<PolyStyle>
<color>33ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<Style id="bluehighlight">
<LabelStyle>
<scale>1.0</scale>
</LabelStyle>
<LineStyle>
<color>99ff0000</color>
<width>6.0</width>
</LineStyle>
<PolyStyle>
<color>99ff0000</color>
<fill xsi:type="xs:boolean" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">true</fill>
</PolyStyle>
<BalloonStyle>
<text><h3><b>$[name]</b></h3><table><tr><td width="400">$[description]</td></tr></table></text>
</BalloonStyle>
</Style>
<StyleMap id="default">
<Pair>
<key>normal</key>
<styleUrl>#bluenormal</styleUrl>
</Pair>
<Pair>
<key>highlight</key>
<styleUrl>#bluehighlight</styleUrl>
</Pair>
</StyleMap>
<Placemark id="Placemark-0103c77e66d9428d8f48fab939da528e">
<name>MultiPolygon</name>
<description><!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=windows-1252" http-equiv="content-type">
<style media="screen" type="text/css">
.label {
font-weight: bold
}
.linkTable {
width: 100% }
.thumbnailDiv {
text-align: center
} img {
max-width: 100px;
max-height: 100px;
border-style:none
}
</style>
</head>
<body>
<div class="thumbnailDiv"><a href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource"><img alt="Thumbnail" src="data:image/jpeg;charset=utf-8;base64, CA=="></a></div>
<table>
<tr>
<td class="label">Source:</td>
<td>ddf.distribution</td>
</tr>
<tr>
<td class="label">Created:</td>
<td>Wed Oct 30 09:46:29 MDT 2013</td>
</tr>
<tr>
<td class="label">Effective:</td>
<td>2014-01-07T14:48:47-0700</td>
</tr>
</table>
<table class="linkTable">
<tr>
<td><a href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=html">View Details...</a></td>
<td><a href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource">Download...</a></td>
</tr>
</table>
</body>
</html>
</description>
<TimeSpan>
<begin>2014-01-07T21:48:47</begin>
</TimeSpan>
<styleUrl>#default</styleUrl>
<MultiGeometry>
<Point>
<coordinates>102.0,2.0</coordinates>
</Point>
<MultiGeometry>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>102.0,2.0 103.0,2.0 103.0,3.0 102.0,3.0 102.0,2.0</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>100.0,0.0 101.0,0.0 101.0,1.0 100.0,1.0 100.0,0.0 100.2,0.2
100.8,0.8 100.2,0.8 100.2,0.2</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</MultiGeometry>
</MultiGeometry>
</Placemark>
</Document>
</kml>
15.4.8. KML Style Mapper
The KML Style Mapper provides the ability for the KMLTransformer
to map a KML Style URL to a metacard based on that metacard’s attributes.
For example, if a user wanted all JPEGs to be blue, the KML Style Mapper provides the ability to do so.
This would also allow an administrator to configure metacards from each source to be different colors.
The configured style URLs are expected to be HTTP URLs. For more information on style URLs, refer to the KML Reference.
The KML Style Mapper supports all basic and extended metacard attributes.
When a style mapping is configured, the resulting transformed KML contains a <styleUrl>
tag pointing to that style, rather than the default KML style supplied by the KMLTransformer
.
15.4.8.1. Installing the KML Style Mapper
The KML Style Mapper is installed by default with a standard installation in the Spatial Application in the spatial-kml-transformer
feature.
15.4.8.2. Configuring the KML Style Mapper
The properties below describe how to configure a style mapping.
The configuration name is Spatial KML Style Map Entry
.
See KML Style Mapper configurations for all possible configurations.
<kml xmlns="http://www.opengis.net/kml/2.2"
xmlns:ns4="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0"
xmlns:ns3="http://www.w3.org/2005/Atom">
<Placemark id="Placemark-0103c77e66d9428d8f48fab939da528e">
<name>MultiPolygon</name>
<description><!DOCTYPE html>
<html>
<head>
<meta content="text/html; charset=windows-1252" http-equiv="content-type">
<style media="screen" type="text/css">
.label {
font-weight: bold
}
.linkTable {
width: 100% }
.thumbnailDiv {
text-align: center
} img {
max-width: 100px;
max-height: 100px;
border-style:none
}
</style>
</head>
<body>
<div class="thumbnailDiv"><a
href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource"><img alt="Thumbnail"
src="data:image/jpeg;charset=utf-8;base64, CA=="></a></div>
<table>
<tr>
<td class="label">Source:</td>
<td>ddf.distribution</td>
</tr>
<tr>
<td class="label">Created:</td>
<td>Wed Oct 30 09:46:29 MDT 2013</td>
</tr>
<tr>
<td class="label">Effective:</td>
<td>2014-01-07T14:58:16-0700</td>
</tr>
</table>
<table class="linkTable">
<tr>
<td><a
href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=html">View Details...</a></td>
<td><a href="http://{FQDN}:{PORT}/services/catalog/sources/ddf.distribution/0103c77e66d9428d8f48fab939da528e?transform=resource">Download...</a></td>
</tr>
</table>
</body>
</html>
</description>
<TimeSpan>
<begin>2014-01-07T21:58:16</begin>
</TimeSpan>
<styleUrl>http://example.com/kml/style#sampleStyle</styleUrl>
<MultiGeometry>
<Point>
<coordinates>102.0,2.0</coordinates>
</Point>
<MultiGeometry>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>102.0,2.0 103.0,2.0 103.0,3.0 102.0,3.0
102.0,2.0</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
<Polygon>
<outerBoundaryIs>
<LinearRing>
<coordinates>100.0,0.0 101.0,0.0 101.0,1.0 100.0,1.0 100.0,0.0 100.2,0.2
100.8,0.8 100.2,0.8 100.2,0.2</coordinates>
</LinearRing>
</outerBoundaryIs>
</Polygon>
</MultiGeometry>
</MultiGeometry>
</Placemark>
</kml>
15.4.9. Metadata Metacard Transformer
The Metadata Metacard Transformer returns the Metacard.METADATA
attribute when given a metacard.
The MIME Type returned is text/xml
.
15.4.9.1. Installing the Metadata Metacard Transformer
The Metadata Metacard Transformer is installed by default in a standard installation with the Catalog application.
15.4.9.2. Configuring the Metadata Metacard Transformer
The Metadata Metacard Transformer has no configurable properties.
15.4.9.3. Using the Metadata Metacard Transformer
The Metadata Metacard Transformer can be used programmatically by requesting a metacard transformer with the id metadata
.
It can also be used within the REST Endpoint by providing the transform option as metadata
.
http://{FQDN}:{PORT}/services/catalog/0123456789abcdef0123456789abcdef?transform=metadata
15.4.10. PDF Input Transformer
The PDF Input Transformer is responsible for translating a PDF document into a Catalog Metacard.
Schema | Mime-types |
---|---|
N/A | application/pdf |
15.4.10.1. Installing the PDF Input Transformer
The PDF Transformer is installed by default with a standard installation in the Catalog application.
15.4.10.2. Configuring the PDF Input Transformer
To configure the PDF Input Transformer:
-
Navigate to the Catalog application.
-
Select the Configuration tab.
-
Select the PDF Input Transformer.
See PDF Input Transformer configurations for all possible configurations.
15.4.11. PPTX Input Transformer
The PPTX Input Transformer translates Microsoft PowerPoint (OOXML only) documents into Catalog Metacards, using Apache Tika for basic metadata and Apache POI for thumbnail creation. The PPTX Input Transformer ingests PPTX documents into the DDF Content Repository and the Metadata Catalog, and adds a thumbnail of the first page in the PPTX document.
The PPTX Input Transformer will take precedence over the Tika Input Transformer for PPTX documents.
Schema | Mime-types |
---|---|
N/A | application/vnd.openxmlformats-officedocument.presentationml.presentation |
15.4.11.1. Installing the PPTX Input Transformer
This transformer is installed by default with a standard installation in the Catalog application.
15.4.11.2. Configuring the PPTX Input Transformer
The PPTX Input Transformer has no configurable properties.
15.4.12. Query Response Transformer Consumer
The Query Response Transformer Consumer is responsible for translating a query response into a Catalog Metacard.
15.4.12.1. Installing the Query Response Transformer Consumer
The Query Response Transformer Consumer is installed by default with a standard installation in the Catalog application.
15.4.12.2. Configuring the Query Response Transformer Consumer
The Query Response Transformer Consumer has no configurable properties.
15.4.13. Registry Transformer
The Registry Transformer creates Registry metacards from ebrim
messages.
It also returns the ebrim
message from the metacard metadata.
15.4.13.1. Installing the Registry Transformer
The Registry Transformer is installed with the Registry application.
-
Install Registry application.
15.4.13.2. Configuring the Registry Transformer
The Registry Transformer has no configurable properties.
15.4.14. Resource Metacard Transformer
The Resource Metacard Transformer retrieves a resource associated with a metacard.
15.4.14.1. Installing the Resource Metacard Transformer
The Resource Metacard Transformer is installed by default in a standard installation with the Catalog application as the feature catalog-transformer-resource
.
15.4.14.2. Configuring the Resource Metacard Transformer
The Resource Metacard Transformer has no configurable properties.
15.4.14.3. Using the Resource Metacard Transformer
Endpoints or other components can retrieve an instance of the Resource Metacard Transformer using its id
resource.
<reference id="metacardTransformer" interface="ddf.catalog.transform.MetacardTransformer" filter="(id=resource)"/>
15.4.15. Thumbnail Metacard Transformer
The Thumbnail Metacard Transformer retrieves the thumbnail bytes of a Metacard by returning the Metacard.THUMBNAIL
attribute value.
15.4.15.1. Installing the Thumbnail Metacard Transformer
This transformer is installed by default with a standard installation in the Catalog application.
15.4.15.2. Configuring the Thumbnail Metacard Transformer
The Thumbnail Metacard Transformer has no configurable properties.
15.4.15.3. Using the Thumbnail Metacard Transformer
Endpoints or other components can retrieve an instance of the Thumbnail Metacard Transformer using its id thumbnail
.
<reference id="metacardTransformer" interface="ddf.catalog.transform.MetacardTransformer" filter="(id=thumbnail)"/>
The Thumbnail Metacard Transformer returns a BinaryContent
object of the Metacard.THUMBNAIL
bytes and a MIME Type of image/jpeg
.
15.4.16. Tika Input Transformer
The Tika Input Transformer is the default input transformer responsible for translating Microsoft Word, Microsoft Excel, Microsoft PowerPoint, OpenOffice Writer, and PDF documents into Catalog records. This input transformer utilizes Apache Tika to provide basic support for these mime types. The metadata common to all these document types, e.g., creation date, author, last modified date, etc., is extracted and used to create the catalog record. The Tika Input Transformer’s main purpose is to ingest these types of content into the Metadata Catalog.
The Tika input transformer is the most basic input transformer and the last to be invoked. This allows any registered input transformers that are more specific to a document type to be invoked instead of this rudimentary input transformer.
Schema | Mime-types |
---|---|
N/A | This basic transformer can ingest many file types. See All Formats Supported. |
15.4.16.1. Installing the Tika Input Transformer
This transformer is installed by default with a standard installation in the Catalog.
15.4.16.2. Configuring the Tika Input Transformer
The properties below describe how to configure the Tika input transformer.
See Tika Input Transformer configurations for all possible configurations.
15.4.17. Video Input Transformer
The Video Input Transformer creates Catalog metacards from certain video file types. Currently, it handles MPEG-2 transport streams as well as MPEG-4, AVI, MOV, and WMV videos. This input transformer uses Apache Tika to extract basic metadata from the video files and applies more sophisticated methods to extract more meaningful metadata from these types of video.
Schema | Mime-types |
---|---|
N/A | video/mp2t, video/mp4, video/avi, video/quicktime, video/wmv |
15.4.17.1. Installing the Video Input Transformer
This transformer is installed by default with a standard installation in the Catalog application.
15.4.17.1.1. Configuring the Video Input Transformer
The Video Input Transformer has no configurable properties.
15.4.18. XML Input Transformer
The XML Input Transformer is responsible for translating an XML document into a Catalog Metacard.
Schema | Mime-types |
---|---|
urn:catalog:metacard | text/xml |
15.4.18.1. Installing the XML Input Transformer
The XML Input Transformer is installed by default with a standard installation in the Catalog application.
15.4.18.2. Configuring the XML Input Transformer
The XML Input Transformer has no configurable properties.
15.4.19. XML Metacard Transformer
The XML metacard transformer is responsible for translating a metacard into an XML-formatted document.
The metacard element that is generated is an extension of gml:AbstractFeatureType
, which makes the output of this transformer GML 3.1.1 compatible.
15.4.19.1. Installing the XML Metacard Transformer
This transformer comes installed by default with a standard installation in the Catalog application.
To install or uninstall manually, use the catalog-transformer-xml
feature.
15.4.19.2. Configuring the XML Metacard Transformer
The XML Metacard Transformer has no configurable properties.
15.4.19.3. Using the XML Metacard Transformer
Using the REST Endpoint, for example, request a metacard with the transform option set to the XML shortname.
https://{FQDN}:{PORT}/services/catalog/ac0c6917d5ee45bfb3c2bf8cd2ebaa67?transform=xml
Each metacard attribute is written to a corresponding XML element in the output document; as shown in the sample responses, the metacard id, type, source, and each named attribute appear as child elements of the metacard element. Attribute values are expressed as one of the following XML-bound types:
-
boolean
-
base64Binary
-
dateTime
-
double
-
float
-
geometry
-
int
-
long
-
object
-
short
-
string
-
stringxml
15.4.20. XML Query Response Transformer
The XML Query Response Transformer is responsible for translating a query response into an XML-formatted document.
The metacard element generated is an extension of gml:AbstractFeatureCollectionType
, which makes the output of this transformer GML 3.1.1 compatible.
15.4.20.1. Installing the XML Query Response Transformer
This transformer is installed by default with a standard installation in the Catalog application.
To uninstall, uninstall the catalog-transformer-xml
feature.
15.4.20.2. Configuring the XML Query Response Transformer
To configure the XML Query Response Transformer:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the XML Query Response Transformer.
See XML Query Response Transformer configurations for all possible configurations.
15.4.20.3. Using the XML Query Response Transformer
Using the OpenSearch Endpoint, for example, query with the format option set to the XML shortname xml
.
http://{FQDN}:{PORT}/services/catalog/query?q=input&format=xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<ns3:metacards xmlns:ns1="http://www.opengis.net/gml" xmlns:ns2="http://www.w3.org/1999/xlink" xmlns:ns3="urn:catalog:metacard" xmlns:ns4="http://www.w3.org/2001/SMIL20/" xmlns:ns5="http://www.w3.org/2001/SMIL20/Language">
<ns3:metacard ns1:id="000ba4dd7d974e258845a84966d766eb">
<ns3:type>ddf.metacard</ns3:type>
<ns3:source>southwestCatalog1</ns3:source>
<ns3:dateTime name="created">
<ns3:value>2013-04-10T15:30:05.702-07:00</ns3:value>
</ns3:dateTime>
<ns3:string name="title">
<ns3:value>Input 1</ns3:value>
</ns3:string>
</ns3:metacard>
<ns3:metacard ns1:id="00c0eb4ba9b74f8b988ef7060e18a6a7">
<ns3:type>ddf.metacard</ns3:type>
<ns3:source>southwestCatalog1</ns3:source>
<ns3:dateTime name="created">
<ns3:value>2013-04-10T15:30:05.702-07:00</ns3:value>
</ns3:dateTime>
<ns3:string name="title">
<ns3:value>Input 2</ns3:value>
</ns3:string>
</ns3:metacard>
</ns3:metacards>
15.5. Mime Type Mapper
The MimeTypeMapper is the entry point in DDF for resolving file extensions to mime types, and vice versa.
MimeTypeMappers
are used by the ResourceReader
to determine the file extension for a given mime type in aid of retrieving a product.
MimeTypeMappers
are also used by the FileSystemProvider
in the Catalog Framework to read a file from the content file repository.
The MimeTypeMapper
maintains a list of all of the MimeTypeResolvers
in DDF.
The MimeTypeMapper
accesses each MimeTypeResolver
according to its priority until the provided file extension is successfully mapped to its corresponding mime type.
If no mapping is found for the file extension, null
is returned for the mime type.
Similarly, the MimeTypeMapper
accesses each MimeTypeResolver
according to its priority until the provided mime type is successfully mapped to its corresponding file extension.
If no mapping is found for the mime type, null
is returned for the file extension.
For files with no file extension, the MimeTypeMapper will attempt to determine the mime type from the contents of the file. If it is unsuccessful, the file will be ingested as a binary file.
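A hedged sketch of the two lookups, assuming the ddf.mime.MimeTypeMapper service interface exposes these resolution methods:
import ddf.mime.MimeTypeMapper;

public class MimeMappingExample {
    // Resolve in both directions; each call consults the MimeTypeResolvers in
    // priority order and yields null when no mapping is found.
    public void resolve(MimeTypeMapper mapper) throws Exception {
        String mimeType = mapper.getMimeTypeForFileExtension("nitf");
        String extension = mapper.getFileExtensionForMimeType("image/nitf");
        System.out.println(mimeType + " / " + extension);
    }
}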
- DDF Mime Type Mapper
-
Core implementation of the DDF Mime API.
15.5.1. DDF Mime Type Mapper
The DDF Mime Type Mapper is the core implementation of the DDF Mime API.
It provides access to all MimeTypeResolvers
within DDF, which provide mapping of mime types to file extensions and file extensions to mime types.
15.5.1.1. Installing the DDF Mime Type Mapper
The DDF Mime Type Mapper is installed by default with a standard installation in the Platform application.
15.6. Mime Type Resolver
A MimeTypeResolver
is a DDF service that can map a file extension to its corresponding mime type and, conversely, can map a mime type to its file extension.
MimeTypeResolvers
are assigned a priority (0-100, where a higher number indicates a higher priority).
This priority is used to sort all of the MimeTypeResolvers
in the order they should be checked to map a file extension to a mime type (or vice versa).
This priority also allows custom MimeTypeResolvers
to be invoked before default MimeTypeResolvers
by setting the custom resolver’s priority higher than the default.
MimeTypeResolvers
are not typically invoked directly.
Rather, the MimeTypeMapper
maintains a list of MimeTypeResolvers
(sorted by their priority) that it invokes to resolve a mime type to its file extension (or to resolve a file extension to its mime type).
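The priority-ordered resolution described above can be illustrated with a small self-contained sketch; this mirrors the behavior, not DDF's actual implementation:
import java.util.Comparator;
import java.util.List;
import java.util.Objects;

public class PriorityResolutionSketch {
    interface Resolver {
        int getPriority(); // 0-100; higher priorities are consulted first
        String getMimeTypeForFileExtension(String fileExtension); // null if unknown
    }

    // Ask each resolver in descending priority order; return the first mapping found.
    static String resolve(List<Resolver> resolvers, String fileExtension) {
        return resolvers.stream()
                .sorted(Comparator.comparingInt(Resolver::getPriority).reversed())
                .map(r -> r.getMimeTypeForFileExtension(fileExtension))
                .filter(Objects::nonNull)
                .findFirst()
                .orElse(null); // no mapping found
    }
}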
- Custom Mime Type Resolver
-
The Custom Mime Type Resolver is a
MimeTypeResolver
that defines the custom mime types that DDF will support. - Tika Mime Type Resolver
-
Provides support for resolving over 1300 mime types.
15.6.1. Custom Mime Type Resolver
These are mime types not supported by the default TikaMimeTypeResolver
.
For example, the default installation maps the nitf and ntf file extensions to the image/nitf mime type.
As a MimeTypeResolver
, the Custom Mime Type Resolver will provide methods to map the file extension to the corresponding mime type, and vice versa.
15.6.1.1. Installing the Custom Mime Type Resolver
One Custom Mime Type Resolver is configured and installed for the image/nitf
mime type.
This custom resolver is bundled in the mime-core-app
application and is part of the mime-core
feature.
Additional Custom Mime Type Resolvers can be added for other custom mime types.
15.6.1.1.1. Configuring the Custom Mime Type Resolver
The configurable properties for the Custom Mime Type Resolver are accessed from the MIME Custom Types configuration in the Admin Console.
-
Navigate to the Admin Console.
-
Select the Platform application.
-
Select Configuration.
-
Select MIME Custom Types.
Managed Service Factory PID
-
Ddf_Custom_Mime_Type_Resolver
See Custom Mime Type Resolver configurations for all possible configurations.
15.6.2. Tika Mime Type Resolver
The TikaMimeTypeResolver
is a MimeTypeResolver
that is implemented using the Apache Tika open source product.
Using the Apache Tika content analysis toolkit, the TikaMimeTypeResolver
provides support for resolving over 1300 mime types, but not all mime types yield the same quality metadata.
The TikaMimeTypeResolver
is assigned a default priority of -1
to ensure that it is always invoked last by the MimeTypeMapper
.
This ensures that any custom MimeTypeResolvers
that may be installed will be invoked before the TikaMimeTypeResolver
.
The TikaMimeTypeResolver
provides the bulk of the default mime type support for DDF.
15.6.2.1. Installing the Tika Mime Type Resolver
The TikaMimeTypeResolver
is bundled as the mime-tika-resolver
feature in the mime-tika-app
application.
This feature is installed by default.
15.6.2.1.1. Configuring the Tika Mime Type Resolver
The Tika Mime Type Resolver has no configurable properties.
16. Catalog Plugins
Plugins are additional tools for adding business logic at certain points in request processing, depending on the type of plugin.
The Catalog Framework calls Catalog Plugins to process requests and responses as they enter and leave the Framework.
16.1. Types of Plugins
Plugins can be designed to run before or after certain processes. They are often used for validation, optimization, or logging. Many plugins are designed to be called at more than one point. See Catalog Plugin Compatibility.
- Pre-Authorization Plugins
-
Perform any changes needed before security rules are applied.
- Policy Plugins
-
Used to build policy information for requests.
- Access Plugins
-
Allows or denies access to the Catalog operation or response.
- Pre-Ingest Plugins
-
Perform any changes to a metacard prior to ingest.
- Post-Ingest Plugins
-
Perform actions after ingest is completed.
- Post-Process Plugins
-
Performs additional processing after ingest.
- Pre-Query Plugins
-
Perform any changes to a query before execution.
- Pre-Federated-Query Plugins
-
Perform any changes to a federated query before execution.
- Post-Query Plugins
-
Perform any changes to a response after query completes.
- Post-Federated-Query Plugins
-
Perform any changes to a response after federated query completes.
- Pre-Resource Plugins
-
Perform any changes to a request associated with a metacard prior to download.
- Post-Resource Plugins
-
Perform any changes to a resource after download.
- Pre-Create Storage Plugins
-
Perform any changes before creating a resource.
- Post-Create Storage Plugins
-
Perform any changes after creating a resource.
- Pre-Update Storage Plugins
-
Perform any changes before updating a resource.
- Post-Update Storage Plugins
-
Perform any changes after updating a resource.
- Pre-Subscription Plugins
-
Perform any changes before creating a subscription.
- Pre-Delivery Plugins
-
Perform any changes before delivering a subscribed event.
Plugins are called in a specific order during different operations. Custom Plugins can be added to the chain for special use cases.
Each plugin participates in one or more of these phases: Pre-Authorization, Policy, Access, Pre-Ingest, Post-Ingest, Pre-Query, Post-Query, Post-Process, Pre-Federated-Query, Post-Federated-Query, Pre-Resource, Post-Resource, Pre-Create Storage, Post-Create Storage, Pre-Update Storage, Post-Update Storage, Pre-Subscription, and Pre-Delivery. The per-phase lists in the sections below identify which plugins are invoked at each point.
16.1.1. Pre-Authorization Plugins
Pre-authorization plugins are invoked before any security rules are applied. This is an opportunity to take any action before authorization, including but not limited to:
-
logging.
-
adding network-specific information.
-
adding user-identifying information.
16.1.1.1. Available Pre-Authorization Plugins
- Client Info Plugin
-
Injects request-specific network information into a request.
- Metacard Ingest Network Plugin
-
Adds attributes for network info from ingest request.
16.1.2. Policy Plugins
Policy plugins are invoked to set up the policy for a request/response. This provides an opportunity to attach custom requirements on operations or individual metacards. All the 'requirements' from each Policy plugin will be combined into a single policy that will be included in the request/response. Access plugins will be used to act on this combined policy.
16.1.2.1. Available Policy Plugins
- Catalog Policy Plugin
-
Configures user attributes required for catalog operations.
- Historian Policy Plugin
-
Protects metacard history from being edited by users without the history role.
- Metacard Attribute Security Policy Plugin
-
Collects attributes into a security field for the metacard.
- Metacard Validity Filter Plugin
-
Determines whether to filter metacards with validation errors or warnings.
- Point of Contact Policy Plugin
-
Adds a policy if Point of Contact is updated.
- Registry Policy Plugin
-
Defines user access policies for registry operations.
- Resource URI Policy Plugin
-
Configures required user attributes for setting or altering a resource URI.
- Workspace Sharing Policy Plugin
-
Collects attributes for a workspace to identify the appropriate policy to allow sharing.
- XML Attribute Security Policy Plugin
-
Finds security attributes contained in a metacard’s metadata.
16.1.3. Access Plugins
Access plugins are invoked directly after the Policy plugins have been successfully executed. This is an opportunity to either stop processing or modify the request/response based on policy information.
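On the enforcement side, an access plugin compares the combined policy against the subject’s attributes and halts processing when a requirement is unmet. The following is a simplified sketch of that check, not the actual AccessPlugin interface; combinedPolicy and subjectAttributes are assumed inputs:
for (Map.Entry<String, Set<String>> required : combinedPolicy.entrySet()) {
    Set<String> subjectValues =
            subjectAttributes.getOrDefault(required.getKey(), Collections.emptySet());
    //the subject must hold at least one acceptable value for each required attribute
    if (Collections.disjoint(required.getValue(), subjectValues)) {
        throw new SecurityException("Subject lacks required attribute: " + required.getKey());
    }
}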
16.1.3.1. Available Access Plugins
- Content URI Access Plugin
-
Prevents a Metacard’s resource URI from being overridden by an incoming UpdateRequest.
- Filter Plugin
-
Performs filtering on query responses as they pass through the framework.
- Operation Plugin
-
Validates a user or subject’s security attributes.
- Security Audit Plugin
-
Audits specific metacard attributes.
- Security Plugin
-
Identifies the subject for an operation.
- Workspace Access Plugin
-
Prevents non-owner users from changing workspace permissions.
16.1.4. Pre-Ingest Plugins
Pre-ingest plugins are invoked before an ingest operation is sent to the catalog. They are not run on a query. This is an opportunity to take any action on the ingest request, including but not limited to:
-
validation.
-
logging.
-
auditing.
-
optimization.
-
security filtering.
16.1.4.1. Available Pre-Ingest Plugins
- Expiration Date Pre-Ingest Plugin
-
Adds or updates expiration dates for the resource.
- GeoCoder Plugin
-
Populates the Location.COUNTRY_CODE attribute if the Metacard has an associated location.
- Identification Plugin
-
Manages IDs on registry metacards.
- Metacard Groomer
-
Modifies metacards when created or updated.
- Metacard Validity Marker
-
Modifies metacards when created or ingested according to metacard validator services.
- Security Logging Plugin
-
Logs operations to the security log.
- Workspace Pre-Ingest Plugin
-
Verifies that a workspace has an associated email to enable sharing.
16.1.5. Post-Ingest Plugins
Post-ingest plugins are invoked after data has been created, updated, or deleted in a Catalog Provider.
16.1.5.1. Available Post-Ingest Plugins
- Catalog Backup Plugin
-
Enables backup of the catalog and its metacards.
- Catalog Metrics Plugin
-
Captures metrics on catalog operations.
- Event Processor
-
Creates, updates, and deletes subscriptions.
- Identification Plugin
-
Manages IDs on registry metacards.
- Metacard Backup File Storage Provider
-
Stores backed-up metacards.
- Metacard Backup S3 Storage Provider
-
Stores backed-up metacards in a specified S3 bucket and key.
- Processing Post-Ingest Plugin
-
Submits catalog Create, Update, or Delete requests to the Processing Framework.
- Security Logging Plugin
-
Logs operations to the security log.
- Source Metrics Plugin
-
Captures metrics on catalog operations.
16.1.6. Post-Process Plugins
Note
|
This code is experimental. While this interface is functional and tested, it may change or be removed in a future version of the library. |
Post-Process Plugins are invoked after a metacard has been created, updated, or deleted and committed to the Catalog. They are the last plugins to run and are triggered by a Post-Ingest Plugin. Post-Process plugins are well-suited for asynchronous tasks. See the Asynchronous Processing Framework for more information about how Post-Process Plugins are used.
16.1.7. Pre-Query Plugins
Pre-query plugins are invoked before a query operation is sent to any of the Sources. This is an opportunity to take any action on the query, including but not limited to:
-
validation.
-
logging.
-
auditing.
-
optimization.
-
security filtering.
16.1.7.1. Available Pre-Query Plugins
- Catalog Metrics Plugin
-
Captures metrics on catalog operations.
- Security Logging Plugin
-
Logs operations to the security log.
- Source Metrics Plugin
-
Captures metrics on catalog operations.
16.1.8. Pre-Federated-Query Plugins
Pre-federated-query plugins are invoked before a federated query operation is sent to any of the Sources. This is an opportunity to take any action on the query, including but not limited to:
-
validation.
-
logging.
-
auditing.
-
optimization.
-
security filtering.
16.1.8.1. Available Pre-Federated-Query Plugins
- Security Logging Plugin
-
Logs operations to the security log.
- Tags Filter Plugin
-
Updates queries that lack tags filters by adding a default tag.
16.1.9. Post-Query Plugins
Post-query plugins are invoked after a query has been executed successfully, but before the response is returned to the endpoint. This is an opportunity to take any action on the query response, including but not limited to:
-
logging.
-
auditing.
-
security filtering/redaction.
-
deduplication.
16.1.9.1. Available Post-Query Plugins
- Catalog Metrics Plugin
-
Captures metrics on catalog operations.
- JPEG2000 Thumbnail Converter
-
Creates thumbnails for jpeg2000 images.
- Metacard Resource Size Plugin
-
Updates the resource size attribute of a metacard.
- Security Logging Plugin
-
Logs operations to the security log.
- Source Metrics Plugin
-
Captures metrics on catalog operations.
16.1.10. Post-Federated-Query Plugins
Post-federated-query plugins are invoked after a federated query has been executed successfully, but before the response is returned to the endpoint. This is an opportunity to take any action on the query response, including but not limited to:
-
logging.
-
auditing.
-
security filtering/redaction.
-
deduplication.
16.1.10.1. Available Post-Federated-Query Plugins
- Security Logging Plugin
-
Logs operations to the security log.
16.1.11. Pre-Resource Plugins
Pre-Resource plugins are invoked before a request to retrieve a resource is sent to a Source. This is an opportunity to take any action on the request, including but not limited to:
-
validation.
-
logging.
-
auditing.
-
optimization.
-
security filtering.
16.1.11.1. Available Pre-Resource Plugins
- Resource Usage Plugin
-
Monitors and limits system data usage.
- Security Logging Plugin
-
Logs operations to the security log.
16.1.12. Post-Resource Plugins
Post-resource plugins are invoked after a resource has been retrieved, but before it is returned to the endpoint. This is an opportunity to take any action on the response, including but not limited to:
-
logging.
-
auditing.
-
security filtering/redaction.
16.1.12.1. Available Post-Resource Plugins
- Catalog Metrics Plugin
-
Captures metrics on catalog operations.
- Resource Usage Plugin
-
Monitors and limits system data usage.
- Security Logging Plugin
-
Logs operations to the security log.
- Source Metrics Plugin
-
Captures metrics on catalog operations.
16.1.13. Pre-Create Storage Plugins
Pre-Create storage plugins are invoked immediately before an item is created in the content repository.
16.1.13.1. Available Pre-Create Storage Plugins
- Checksum Plugin
-
Creates a unique checksum for ingested resources.
- Security Logging Plugin
-
Logs operations to the security log.
16.1.14. Post-Create Storage Plugins
Post-Create storage plugins are invoked immediately after an item is created in the content repository.
16.1.14.1. Available Post-Create Storage Plugins
- Security Logging Plugin
-
Logs operations to the security log.
- Video Thumbnail Plugin
-
Generates thumbnails for video files.
16.1.15. Pre-Update Storage Plugins
Pre-Update storage plugins are invoked immediately before an item is updated in the content repository.
16.1.15.1. Available Pre-Update Storage Plugins
- Checksum Plugin
-
Creates a unique checksum for ingested resources.
- Security Logging Plugin
-
Logs operations to the security log.
16.1.16. Post-Update Storage Plugins
Post-Update storage plugins are invoked immediately after an item is updated in the content repository.
16.1.16.1. Available Post-Update Storage Plugins
- Security Logging Plugin
-
Logs operations to the security log.
- Video Thumbnail Plugin
-
Generates thumbnails for video files.
16.1.17. Pre-Subscription Plugins
Pre-subscription plugins are invoked before a Subscription is activated by an Event Processor. This is an opportunity to take any action on the Subscription, including but not limited to:
-
validation.
-
logging.
-
auditing.
-
optimization.
-
security filtering.
16.1.18. Pre-Delivery Plugins
Pre-delivery plugins are invoked before a Delivery Method is invoked on a Subscription. This is an opportunity to take any action before event delivery, including but not limited to:
-
logging.
-
auditing.
-
security filtering/redaction.
16.2. Catalog Plugin Details
Installation and configuration details listed by plugin name.
16.2.1. Catalog Backup Plugin
The Catalog Backup Plugin is used to enable data backup of the catalog and the metacards it contains.
Warning
|
Catalog Backup Plugin Considerations
Using this plugin may impact performance negatively. |
16.2.1.1. Installing the Catalog Backup Plugin
The Catalog Backup Plugin is installed by default with a standard installation in the Catalog application.
16.2.1.2. Configuring the Catalog Backup Plugin
To configure the Catalog Backup Plugin:
-
Navigate to the Admin Console.
-
Select Catalog application.
-
Select Configuration tab.
-
Select Backup Post-Ingest Plugin.
See Catalog Backup Plugin configurations for all possible configurations.
16.2.1.3. Usage Limitations of the Catalog Backup Plugin
-
May affect performance.
-
Must be installed prior to ingesting any content.
-
Once enabled, disabling may cause incomplete backups.
16.2.2. Catalog Metrics Plugin
The Catalog Metrics Plugin captures metrics on catalog operations. These metrics can be viewed and analyzed using the Metrics Reporting Application in the Admin Console.
16.2.2.2. Installing the Catalog Metrics Plugin
The Catalog Metrics Plugin is installed by default with a standard installation in the Catalog application.
16.2.2.3. Configuring the Catalog Metrics Plugin
The Catalog Metrics Plugin has no configurable properties.
16.2.3. Catalog Policy Plugin
The Catalog Policy Plugin configures the attributes required for users to perform Create, Read, Update, and Delete operations on the catalog.
16.2.3.1. Installing the Catalog Policy Plugin
The Catalog Policy Plugin is installed by default with a standard installation in the Catalog application.
16.2.3.2. Configuring the Catalog Policy Plugin
To configure the Catalog Policy Plugin:
-
Navigate to the Admin Console.
-
Select Catalog application.
-
Select Configuration tab.
-
Select Catalog Policy Plugin.
See Catalog Policy Plugin configurations for all possible configurations.
16.2.4. Checksum Plugin
The Checksum plugin creates a unique checksum for resources input into the system to identify updated content.
16.2.4.1. Installing the Checksum Plugin
The Checksum Plugin is installed by default with a standard installation in the Catalog application.
16.2.5. Client Info Plugin
The client info plugin injects request-specific network information into request properties, such as Remote IP Address, Remote Host Name, Servlet Scheme, and Servlet Context.
16.2.5.1. Related Components to the Client Info Plugin
-
Client info filter
16.2.5.2. Installing the Client Info Plugin
The Client Info Plugin is installed by default with a standard installation in the Catalog application.
16.2.6. Content URI Access Plugin
The Content URI Access Plugin prevents a Metacard’s resource URI from being overridden by an incoming UpdateRequest.
16.2.6.1. Installing the Content URI Access Plugin
The Content URI Access Plugin is installed by default with a standard installation in the Catalog application.
16.2.6.2. Configuring the Content URI Access Plugin
The Content URI Access Plugin has no configurable properties.
16.2.7. Event Processor
The Event Processor creates, updates, and deletes subscriptions for event notification. These subscriptions optionally specify filter criteria so that only events of interest to the subscriber are posted for notification.
As metacards are created, updated, and deleted, the Catalog’s Event Processor is invoked (as a post-ingest plugin) for each of these events. The Event Processor applies the filter criteria for each registered subscription to each of these ingest events to determine if they match the criteria.
For more information on creating subscriptions, see Creating a Subscription.
16.2.7.1. Installing the Event Processor
The Event Processor is installed by default with a standard installation in the Catalog application.
16.2.7.2. Configuring the Event Processor
The Event Processor has no configurable properties.
16.2.7.3. Usage Limitations of the Event Processor
The Standard Event Processor currently broadcasts federated events, but it should not: it should only broadcast events that were generated locally, and all other events should be dropped. See DDF-3151 for status.
16.2.8. Expiration Date Pre-Ingest Plugin
The Expiration Date plugin adds or updates expiration dates which can be used later for archiving old data.
16.2.8.1. Installing the Expiration Date Pre-Ingest Plugin
The Expiration Date Pre-Ingest Plugin is not installed by default with a standard installation. To install:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the Expiration Date Pre-Ingest Plugin.
16.2.8.2. Configuring the Expiration Date Pre-Ingest Plugin
To configure the Expiration Date Pre-Ingest Plugin:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the Expiration Date Pre-Ingest Plugin.
See Expiration Date Plugin configurations for all possible configurations.
16.2.9. Filter Plugin
The Filter Plugin performs filtering on query responses as they pass through the framework.
Each metacard result can contain security attributes that are pulled from the metadata record after being processed by a PolicyPlugin
that populates this attribute.
The security attribute is a Map containing a set of keys that map to lists of values.
The metacard is then processed by a filter plugin that creates a KeyValueCollectionPermission
from the metacard’s security attribute.
This permission is then checked against the user subject to determine if the subject has the correct claims to view that metacard.
The decision to filter the metacard eventually relies on the installed Policy Decision Point (PDP).
The PDP that is being used returns a decision, and the metacard will either be filtered or allowed to pass through.
How a metacard gets filtered is left up to any number of FilterStrategy implementations that might be installed. Each FilterStrategy will return a result to the filter plugin that says whether or not it was able to process the metacard, along with the metacard or response itself. This allows a metacard or entire response to be partially filtered to allow some data to pass back to the requester. This could also include filtering any products sent back to a requester.
The security attributes populated on the metacard are completely dependent on the type of the metacard.
Each type of metacard must have its own PolicyPlugin
that reads the metadata being returned and then returns the appropriate attributes.
<metacard>
<security>
<map>
<entry assertedAttribute1="A,B" />
<entry assertedAttribute2="X,Y" />
<entry assertedAttribute3="USA,GBR" />
<entry assertedAttribute4="USA,AUS" />
</map>
</security>
</metacard>
<user>
<claim name="subjectAttribute1">
<value>A</value>
<value>B</value>
</claim>
<claim name="subjectAttribute2">
<value>X</value>
<value>Y</value>
</claim>
<claim name="subjectAttribute3">
<value>USA</value>
</claim>
<claim name="subjectAttribute4">
<value>USA</value>
</claim>
</user>
In the above example, the user’s claims are represented very simply and are similar to how they would actually appear in a SAML 2 assertion.
Each of these user (or subject) claims will be converted to a KeyValuePermission
object.
These permission objects will be implied against the permission object generated from the metacard record.
In this particular case, the metacard might be allowed if the policy is configured appropriately because all of the permissions line up correctly.
16.2.9.1. Installing the Filter Plugin
The Filter Plugin is installed by default with a standard installation in the Catalog application.
16.2.10. GeoCoder Plugin
The GeoCoder Plugin is a pre-ingest plugin that is responsible for populating the Metacard’s Location.COUNTRY_CODE
attribute if the Metacard has an associated location.
If there is a valid country code for the Metacard, it will be in ISO 3166-1 alpha-3 format.
If the metacard’s country code is already populated, the plugin will not override it.
The GeoCoder relies on either the WebService or Offline Gazetteer to retrieve country code information.
Warning
|
For a polygon or polygons, this plugin takes the center point of the bounding box to assign the country code. |
16.2.10.1. Installing the GeoCoder Plugin
The GeoCoder Plugin is installed by default with the Spatial application, when the WebService or Offline Gazetteer is started.
16.2.10.2. Configuring the GeoCoder Plugin
To configure the GeoCoder Plugin:
-
Navigate to the Admin Console.
-
Select Spatial application.
-
Select Configuration tab.
-
Select GeoCoder Plugin.
See GeoCoder Plugin configurations for all possible configurations.
16.2.11. Historian Policy Plugin
The Historian Policy Plugin protects metacard history from being edited or deleted by users without the history role (a http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role
of system-history
).
16.2.11.1. Installing the Historian Policy Plugin
The Historian Policy Plugin is installed by default with a standard installation in the Catalog application.
16.2.11.2. Configuring the Historian Policy Plugin
The Historian Policy Plugin has no configurable properties.
16.2.12. Identification Plugin
The Identification Plugin assigns IDs to registry metacards and adds/updates IDs on create and update.
16.2.12.1. Installing the Identification Plugin
The Identification Plugin is not installed by default in a standard installation. It is installed by default with the Registry application.
16.2.12.2. Configuring the Identification Plugin
The Identification Plugin has no configurable properties.
16.2.13. JPEG2000 Thumbnail Converter
The JPEG2000 Thumbnail converter creates thumbnails from images ingested in jpeg2000 format.
16.2.13.1. Installing the JPEG2000 Thumbnail Converter
The JPEG2000 Thumbnail Converter is installed by default with a standard installation in the Catalog application.
16.2.13.2. Configuring the JPEG2000 Thumbnail Converter
The JPEG2000 Thumbnail Converter has no configurable properties.
16.2.14. Metacard Attribute Security Policy Plugin
The Metacard Attribute Security Policy Plugin combines existing metacard attributes to make new attributes and adds them to the metacard.
For example, if a metacard has two attributes,
sourceattribute1
and sourceattribute2
, the values of the two attributes could be combined into a new
attribute, destinationattribute1
. The sourceattribute1
and sourceattribute2
are the source attributes
and destinationattribute1
is the destination attribute.
There are two ways to combine the values of source attributes. The first, and most common,
is to take all of the attribute values and put them together.
This is called the union.
For example, if the source attributes sourceattribute1
and sourceattribute2
had the values:
sourceattribute1 = MASK, VESSEL
sourceattribute2 = WIRE, SACK, MASK
…the union would result in the new attribute destinationattribute1
:
destinationattribute1 = MASK, VESSEL, WIRE, SACK
The other way to combine attributes is to use the values common to all of the attributes.
This is called the intersection. Using our previous example, the intersection of
sourceattribute1
and sourceattribute2
would create the new attribute destinationattribute1:
destinationattribute1 = MASK
because only MASK
is common to all of the source attributes.
The policy plugin could also be used to rename attributes. If there is only one source attribute, and the combination policy is union, then the attribute’s values are effectively renamed to the destination attribute.
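The two combination strategies reduce to standard Java set operations. A short illustration using the values from the example above:
List<String> sourceattribute1 = Arrays.asList("MASK", "VESSEL");
List<String> sourceattribute2 = Arrays.asList("WIRE", "SACK", "MASK");

Set<String> union = new LinkedHashSet<>(sourceattribute1); //union keeps every value
union.addAll(sourceattribute2); //[MASK, VESSEL, WIRE, SACK]

Set<String> intersection = new LinkedHashSet<>(sourceattribute1); //intersection keeps only common values
intersection.retainAll(sourceattribute2); //[MASK]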
16.2.14.1. Installing the Metacard Attribute Security Policy Plugin
The Metacard Attribute Security Policy Plugin is installed by default with a standard installation in the Catalog application.
See Metacard Attribute Security Policy Plugin configurations for all possible configurations.
16.2.15. Metacard Backup File Storage Provider
The Metacard Backup File Storage Provider is a storage provider that will store backed-up metacards in a specified file system location.
16.2.15.1. Installing the Metacard Backup File Storage Provider
To install the Metacard Backup File Storage Provider:
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
catalog-metacard-backup-filestorage
feature.
16.2.15.2. Configuring the Metacard Backup File Storage Provider
To configure the Metacard Backup File Storage Provider:
-
Navigate to the Admin Console.
-
Select Catalog application.
-
Select Configuration tab.
-
Select Metacard Backup File Storage Provider.
See Metacard Backup File Storage Provider configurations for all possible configurations.
16.2.16. Metacard Backup S3 Storage Provider
The Metacard Backup S3 Storage Provider is a storage provider that will store backed-up metacards in the specified S3 bucket and key.
16.2.16.1. Installing the Metacard S3 File Storage Provider
To install the Metacard Backup S3 Storage Provider:
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
catalog-metacard-backup-s3storage
feature.
16.2.16.2. Configuring the Metacard S3 File Storage Provider
To configure the Metacard Backup S3 Storage Provider:
-
Navigate to the Admin Console.
-
Select Catalog application.
-
Select Configuration tab.
-
Select Metacard Backup S3 Storage Provider.
See Metacard Backup S3 Storage Provider configurations for all possible configurations.
16.2.17. Metacard Groomer
The Metacard Groomer Pre-Ingest plugin makes modifications to CreateRequest
and UpdateRequest
metacards.
Use this pre-ingest plugin as a convenience to apply basic rules for your metacards.
This plugin makes the following modifications when metacards are in a CreateRequest
:
-
Overwrites the
Metacard.ID
field with a generated, unique, 32 character hexadecimal value if missing or if the resource URI is not a catalog resource URI. -
Sets
Metacard.CREATED
to the current time stamp if not already set. -
Sets
Metacard.MODIFIED
to the current time stamp if not already set. -
Sets
Core.METACARD_CREATED
to the current time stamp if not present. -
Sets
Core.METACARD_MODIFIED
to the current time stamp.
In an UpdateRequest
, the same operations are performed as a CreateRequest
, except:
-
If no value is provided for
Metacard.ID
in the new metacard, it will be set using theUpdateRequest
ID if applicable.
16.2.17.1. Installing the Metacard Groomer
The Metacard Groomer is included in the catalog-core-plugins
feature. It is not recommended to uninstall this feature.
16.2.18. Metacard Ingest Network Plugin
The Metacard Ingest Network Plugin allows the conditional insertion of new attributes on metacards during ingest based on network information from the ingest request, including IP address and hostname.
For the remainder of this section, a 'rule' refers to a configured, single instance of this plugin.
16.2.18.2. Installing the Metacard Ingest Network Plugin
The Metacard Ingest Network Plugin is installed by default during a standard installation in the Catalog application.
16.2.18.3. Configuring the Metacard Ingest Network Plugin
To configure the Metacard Ingest Network Plugin:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the label Metacard Ingest Network Plugin to set up a network rule.
See Metacard Ingest Network Plugin configurations for all possible configurations.
Multiple instances of the plugin can be configured by clicking on its configuration title within the configuration tab of the Catalog app. Each instance represents a conditional statement, or a 'rule', that gets evaluated for each ingest request. For any request that meets the configured criteria of a rule, that rule will attempt to transform its list of key-value pairs to become new attributes on all metacards in that request.
The rule is divided into two fields: "Criteria" and "Expected Value". The "Criteria" field features a drop-down list containing the four elements for which equality can be tested:
-
IP Address of where the ingest request came from
-
Host Name of where the ingest request came from
-
Scheme that the ingest request arrived on, for example, http vs https
-
Context Path that the ingest request arrived on, for example, /services/catalog
In order for a rule to evaluate to true and the attributes be applied, the value in the "Expected Value" field must be an exact match to the actual value of the selected criteria. For example, if the selected criteria is "IP Address" with an expected value of "192.168.0.1", the rule only evaluates to true for ingest requests coming from "192.168.0.1" and nowhere else.
Important
|
Check for IPv6
Verify your system’s IP configuration. Rules using "IP Address" may need to be written in IPv6 format.
|
The key-value pairs within each rule should take the following form: "key = value" where the "key" is the name of the attribute and the "value" is the value assigned to that attribute. Whitespace is ignored unless it is within the key or value. Multi-valued attributes can be expressed in comma-separated format if necessary.
contact.contributor-name = John Doe
contact.contributor-email = john.doe@example.net
language = English
language = English, French, German
security.access-groups = SJ202, SR 101, JS2201
16.2.18.3.1. Useful Attributes
The following table provides some useful attributes that may commonly be set by this plugin:
Attribute Name | Expected Format | Multi-Valued |
---|---|---|
expiration |
ISO DateTime |
no |
description |
Any String |
no |
metacard.owner |
Any String |
no |
language |
Any String |
yes |
security.access-groups |
Any String |
yes |
security.access-individuals |
Any String |
yes |
16.2.18.4. Usage Limitations of the Metacard Ingest Network Plugin
-
This plugin only works for ingest (create requests) performed over a network; data ingested via command line does not get processed by this plugin.
-
Any attribute that is already set on the metacard will not be overwritten by the plugin.
-
The order of execution is not guaranteed. For any rule configuration where two or more rules add different values for the same attribute, it is undefined what the final value for that attribute will be in the case where more than one of those rules evaluates to true.
16.2.19. Metacard Resource Size Plugin
This post-query plugin updates the resource size attribute of each metacard in the query results if there is a cached file for the product and it has a size greater than zero; otherwise, the resource size is unmodified and the original result is returned.
Use this post-query plugin as a convenience to return query results with accurate resource sizes for cached products.
16.2.19.1. Installing the Metacard Resource Size Plugin
The Metacard Resource Size Plugin is installed by default with a standard installation.
16.2.19.2. Configuring the Metacard Resource Size Plugin
The Metacard Resource Size Plugin has no configurable properties.
16.2.20. Metacard Validity Filter Plugin
The Metacard Validity Filter Plugin determines whether metacards with validation errors or warnings are filtered from query results.
16.2.20.2. Installing the Metacard Validity Filter Plugin
The Metacard Validity Filter Plugin is installed by default with a standard installation in the Catalog application.
16.2.21. Metacard Validity Marker
The Metacard Validity Marker Pre-Ingest plugin modifies the metacards contained in create and update requests.
The plugin runs each metacard in the CreateRequest
and UpdateRequest
against each registered MetacardValidator
service.
Note
|
This plugin can make it seem like ingested products are not successfully ingested if a user does not have permissions to access invalid metacards. If an ingest did not fail and there are no errors in the ingest log, but the expected results do not show up after a query, verify either that the ingested data is valid or that the Metacard Validity Filter Plugin is configured to show warnings and/or errors. |
16.2.21.2. Installing Metacard Validity Marker
This plugin is installed by default with a standard installation in the Catalog application.
16.2.21.3. Configuring Metacard Validity Marker
See Metacard Validity Marker Plugin configurations for all possible configurations.
16.2.21.4. Using Metacard Validity Marker
Use this pre-ingest plugin to validate metacards against metacard validators, which can check schemas, schematron, or any other logic.
16.2.22. Operation Plugin
The Operation Plugin validates the subject’s security attributes to ensure they are adequate to perform the operation.
16.2.22.1. Installing the Operation Plugin
The Operation Plugin is installed by default with a standard installation in the Catalog application.
16.2.23. Point of Contact Policy Plugin
The Point of Contact Policy Plugin is a PreUpdate plugin that checks whether the point-of-contact attribute has changed. If it has, it adds a policy to that metacard’s policy map that cannot be implied. This denies such an update request, which effectively makes the point-of-contact attribute read-only.
16.2.23.2. Installing the Point of Contact Policy Plugin
The Point of Contact Policy Plugin is installed by default with a standard installation in the Catalog application.
16.2.23.3. Configuring the Point of Contact Policy Plugin
The Point of Contact Policy Plugin has no configurable properties.
16.2.24. Processing Post-Ingest Plugin
The Processing Post Ingest Plugin is responsible for submitting catalog Create, Update, and Delete (CUD) requests to the Processing Framework.
16.2.24.2. Installing the Processing Post-Ingest Plugin
The Processing Post-Ingest Plugin is not installed by default with a standard installation, but is installed by default when the in-memory Processing Framework is installed.
16.2.24.3. Configuring the Processing Post-Ingest Plugin
The Processing Post-Ingest Plugin has no configurable properties.
16.2.25. Registry Policy Plugin
The Registry Policy Plugin defines the policies for user access to registry entries and operations.
16.2.25.1. Installing the Registry Policy Plugin
The Registry Policy Plugin is not installed by default on a standard installation. It is installed with the Registry application.
16.2.25.2. Configuring the Registry Policy Plugin
The Registry Policy Plugin can be configured from the Admin Console:
-
Navigate to the Admin Console.
-
Select the Registry application.
-
Select the Configuration tab.
-
Select Registry Policy Plugin.
See Registry Policy Plugin configurations for all possible configurations.
16.2.26. Resource URI Policy Plugin
The Resource URI Policy Plugin configures the attributes required for users to set the resource URI when creating a metacard or alter the resource URI when updating an existing metacard in the catalog.
16.2.26.1. Installing the Resource URI Policy Plugin
The Resource URI Policy Plugin is installed by default with a standard installation in the Catalog application.
16.2.26.2. Configuring the Resource URI Policy Plugin
To configure the Resource URI Policy Plugin:
-
Navigate to the Admin Console.
-
Select Catalog application.
-
Select Configuration tab.
-
Select Resource URI Policy Plugin.
See Resource URI Policy Plugin configurations for all possible configurations.
16.2.27. Resource Usage Plugin
The Resource Usage Plugin monitors and limits data usage, and enables cancelling long-running queries.
16.2.27.1. Installing the Resource Usage Plugin
The Resource Usage Plugin is not installed by default with a standard installation. It is installed with the Resource Management application.
16.2.27.2. Configuring the Resource Usage Plugin
The Resource Usage Plugin can be configured from the Admin Console:
-
Navigate to the Admin Console.
-
Select the Resource Management application.
-
Select the Configuration tab.
-
Select Data Usage.
See Resource Usage Plugin configurations for all possible configurations.
16.2.28. Security Audit Plugin
The Security Audit Plugin is used to allow the auditing of specific metacard attributes. Any time a metacard attribute listed in the configuration is updated, a log will be generated in the security log.
16.2.28.1. Installing the Security Audit Plugin
The Security Audit Plugin is installed by default with a standard installation in the Catalog application.
16.2.29. Security Logging Plugin
The Security Logging Plugin logs operations to the security log.
16.2.29.1. Installing Security Logging Plugin
The Security Logging Plugin is installed by default in a standard installation in the Security application.
16.2.29.2. Enhancing the Security Log
The security log contains attributes related to the subject acting on the system. To log additional attributes related to the subject, append each attribute’s key to the comma-separated values assigned to security.logger.extra_attributes
in /etc/custom.system.properties
.
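For example, assuming the subject carries claims keyed email and department (hypothetical attribute keys), the property would be:
security.logger.extra_attributes=email,department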
16.2.30. Security Plugin
The Security Plugin identifies the subject for an operation.
16.2.30.1. Installing the Security Plugin
The Security Plugin is installed by default with a standard installation in the Catalog application.
16.2.31. Source Metrics Plugin
The Source Metrics Plugin captures metrics on catalog operations. These metrics can be viewed and analyzed using the Metrics Reporting Application in the Admin Console.
16.2.31.2. Installing the Source Metrics Plugin
The Source Metrics Plugin is installed by default with a standard installation in the Catalog application.
16.2.31.3. Configuring the Source Metrics Plugin
The Source Metrics Plugin has no configurable properties.
16.2.32. Tags Filter Plugin
The Tags Filter Plugin updates queries that have no tags filter, adding a default tag of resource.
For backwards compatibility, a filter will also be added to include metacards without any tags attribute.
16.2.32.2. Installing the Tags Filter Plugin
The Tags Filter Plugin is installed by default with a standard installation in the Catalog application.
16.2.32.3. Configuring the Tags Filter Plugin
The Tags Filter Plugin has no configurable properties.
16.2.33. Video Thumbnail Plugin
The Video Thumbnail Plugin provides the ability to generate thumbnails for video files stored in the Content Repository.
It is an implementation of both the PostCreateStoragePlugin
and PostUpdateStoragePlugin
interfaces. When installed, it is invoked by the Catalog Framework immediately after a content item has been created or updated by the Storage Provider.
This plugin uses a custom 32-bit LGPL build of FFmpeg (a video processing program) to generate thumbnails. When this plugin is installed, it places the FFmpeg executable appropriate for the current operating system in <DDF_HOME>/bin_third_party/ffmpeg
. When invoked, this plugin runs the FFmpeg binary in a separate process to generate the thumbnail. The <DDF_HOME>/bin_third_party/ffmpeg
directory is deleted when the plugin is uninstalled.
Note
|
Prebuilt FFmpeg binaries are provided for Linux, Mac, and Windows only. |
16.2.33.1. Installing the Video Thumbnail Plugin
The Video Thumbnail Plugin is installed by default with a standard installation in the Catalog application.
16.2.33.2. Configuring the Video Thumbnail Plugin
To configure the Video Thumbnail Plugin:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the Video Thumbnail Plugin.
See Video Thumbnail Plugin configurations for all possible configurations.
16.2.34. Workspace Access Plugin
The Workspace Access Plugin prevents non-owner users from changing workspace permissions.
16.2.34.1. Related Components to The Workspace Access Plugin
-
Workspace Extension.
16.2.34.2. Installing the Workspace Access Plugin
The Workspace Access Plugin is installed by default with a standard installation in the Catalog application.
16.2.34.3. Configuring the Workspace Access Plugin
The Workspace Access Plugin has no configurable properties.
16.2.35. Workspace Pre-Ingest Plugin
The Workspace Pre-Ingest Plugin verifies that a workspace has an associated email to enable sharing and assigns that email as "owner".
16.2.35.1. Related Components to The Workspace Pre-Ingest Plugin
-
Workspace Extension.
16.2.35.2. Installing the Workspace Pre-Ingest Plugin
The Workspace Pre-Ingest Plugin is installed by default with a standard installation in the Catalog application.
16.2.35.3. Configuring the Workspace Pre-Ingest Plugin
The Workspace Pre-Ingest Plugin has no configurable properties.
16.2.36. Workspace Sharing Policy Plugin
The Workspace Sharing Policy Plugin collects attributes for a workspace to identify the appropriate policy to apply to allow sharing.
16.2.36.1. Related Components to The Workspace Sharing Policy Plugin
-
Workspace Extension.
16.2.36.2. Installing the Workspace Sharing Policy Plugin
The Workspace Sharing Policy Plugin is installed by default with a standard installation in the Catalog application.
16.2.36.3. Configuring the Workspace Sharing Policy Plugin
The Workspace Sharing Policy Plugin has no configurable properties.
16.2.37. XML Attribute Security Policy Plugin
The XML Attribute Security Policy Plugin parses XML metadata contained within a metacard for security attributes on any number of XML elements in the metadata. The plugin’s configuration contains one field for setting the XML elements to be parsed for security attributes; the other two fields list the XML attributes to be pulled from those elements. The Security Attributes (union) field computes the union of values for each attribute defined, and the Security Attributes (intersection) field computes the intersection of values for each attribute defined.
16.2.37.1. Installing the XML Attribute Security Policy Plugin
The XML Attribute Security Policy Plugin is installed by default with a standard installation in the Security application.
17. Data
The Catalog stores and translates Metadata, which can be transformed into many data formats, shared, and queried.
The primary form of this metadata is the metacard.
A Metacard
is a container for metadata.
CatalogProviders
accept Metacards
as input for ingest, and Sources
search for metadata and return matching Results
that include Metacards
.
17.1. Metacards
A metacard is a single instance of metadata in the Catalog (an instance of a metacard type) that generally contains information about the product, such as the title of the product, the product’s geo-location, the date the product was created and/or modified, the owner or producer, and/or the security classification.
17.1.1. Metacard Type
A metacard type indicates the attributes available for a particular metacard. It is a model used to define the attributes of a metacard, much like a schema.
For example, an image may have different attributes than a PDF document, so each could be defined to have its own metacard type.
17.1.1.1. Default Metacard Type and Attributes
Most metacards within the system are created using the default metacard type or a metacard type based on the default type.
The default metacard type of the system can be programmatically retrieved by calling ddf.catalog.data.impl.MetacardImpl.BASIC_METACARD
.
The name of the default MetacardType
can be retrieved from ddf.catalog.data.MetacardType.DEFAULT_METACARD_TYPE_NAME
.
The default metacard type has the following required attributes. Though the following attributes are required on all metacard types, setting their values is optional except for ID.
Note
|
It is highly recommended when referencing a default attribute name to use the corresponding attribute constant rather than a hard-coded string. |
Warning
|
Every Source should at the very least return an ID attribute according to Catalog API. Other fields may or may not be applicable, but a unique ID must be returned by a source. |
17.1.1.2. Extensible Metacards
Metacard extensibility is achieved by creating a new MetacardType
that supports attributes in addition to the required attributes listed above.
Required attributes must be the base of all extensible metacard types.
Warning
|
Not all Catalog Providers support extensible metacards.
Nevertheless, each Catalog Provider should at least have support for the default MetacardType. Consult the documentation of the Catalog Provider in use for more information on its support of extensible metacards. |
Often, the BASIC_METACARD
MetacardType
does not provide all the functionality or attributes necessary for a specific task.
For performance or convenience purposes, it may be necessary to create custom attributes even if others will not be aware of those attributes.
One example could be if a user wanted to optimize a search for a date field that did not fit the definition of CREATED
, MODIFIED
, EXPIRATION
, or EFFECTIVE
.
The user could create an additional java.util.Date
attribute in order to query the attribute separately.
Metacard
objects are extensible because they allow clients to store and retrieve standard and custom key/value Attributes from the Metacard
.
All Metacards
must return a MetacardType
object that includes an AttributeDescriptor
for each Attribute
, indicating its key and value type.
AttributeType
support is limited to those types defined by the Catalog.
New MetacardType
implementations can be made by implementing the MetacardType
interface.
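For example, the following sketch builds a custom type from the default descriptors plus one additional, queryable date attribute using the impl classes of the Catalog API. The attribute and type names are illustrative, and the AttributeDescriptorImpl argument order (indexed, stored, tokenized, multi-valued) should be verified against the version in use:
Set<AttributeDescriptor> descriptors =
        new HashSet<>(BasicTypes.BASIC_METACARD.getAttributeDescriptors()); //start from the required attributes
descriptors.add(new AttributeDescriptorImpl(
        "ext.example.review-date", //attribute name (hypothetical)
        true,   //indexed for searching
        true,   //stored
        false,  //not tokenized
        false,  //not multi-valued
        BasicTypes.DATE_TYPE));
MetacardType customType = new MetacardTypeImpl("example.custom.type", descriptors);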
17.1.2. Metacard Type Registry
Warning
|
The MetacardTypeRegistry is experimental. While this interface is functional and tested, it may change or be removed in a future version of the library. |
The MetacardTypeRegistry
allows DDF components, primarily catalog providers and sources, to make available the MetacardTypes
that they support.
It maintains a list of all supported MetacardTypes
in the CatalogFramework
, so that other components such as Endpoints, Plugins, and Transformers can make use of those MetacardTypes
.
The MetacardType
is essential for a component in the CatalogFramework
to understand how it should interpret a metacard by knowing what attributes are available in that metacard.
For example, an endpoint receiving incoming metadata can perform a lookup in the MetacardTypeRegistry
to find a corresponding MetacardType
.
The discovered MetacardType
will then be used to help the endpoint populate a metacard based on the specified attributes in the MetacardType
.
By doing this, all the incoming metadata elements can then be available for processing, cataloging, and searching by the rest of the CatalogFramework
.
MetacardTypes
should be registered with the MetacardTypeRegistry
. The MetacardTypeRegistry
makes those MetacardTypes
available to other DDF CatalogFramework
components.
Other components that need to know how to interpret metadata or metacards should look up the appropriate MetacardType
from the registry.
By having these MetacardTypes
available to the CatalogFramework
, these components can be aware of the custom attributes.
The MetacardTypeRegistry
is accessible as an OSGi service.
The following blueprint snippet shows how to inject that service into another component:
<bean id="sampleComponent" class="ddf.catalog.SampleComponent">
<argument ref="metacardTypeRegistry" />
</bean>
<!-- Access MetacardTypeRegistry -->
<reference id="metacardTypeRegistry" interface="ddf.catalog.data.MetacardTypeRegistry"/>
The reference to this service can then be used to register new MetacardTypes
or to lookup existing ones.
Typically, new MetacardTypes
will be registered by CatalogProviders
or sources indicating they know how to persist, index, and query attributes from that type.
Typically, Endpoints or InputTransformers
will use the lookup functionality to access a MetacardType
based on a parameter in the incoming metadata.
Once the appropriate MetacardType
is discovered and obtained from the registry, the component will know how to translate incoming raw metadata into a DDF Metacard.
17.1.3. Attributes
An attribute is a single field of a metacard, an instance of an attribute type. Attributes are typically indexed for searching by a source or catalog provider.
17.1.3.1. Attribute Types
An attribute type indicates the attribute format of the value stored as an attribute. It is a model for an attribute.
17.1.3.1.1. Attribute Format
An enumeration of attribute formats is available in the catalog. Only these attribute formats may be used.
AttributeFormat | Description |
---|---|
BINARY | Attributes of this attribute format must have a value that is a Java byte array (byte[]). |
BOOLEAN | Attributes of this attribute format must have a value that is a Java boolean. |
DATE | Attributes of this attribute format must have a value that is a Java date. |
DOUBLE | Attributes of this attribute format must have a value that is a Java double. |
FLOAT | Attributes of this attribute format must have a value that is a Java float. |
GEOMETRY | Attributes of this attribute format must have a value that is a WKT-formatted Java string. |
INTEGER | Attributes of this attribute format must have a value that is a Java integer. |
LONG | Attributes of this attribute format must have a value that is a Java long. |
OBJECT | Attributes of this attribute format must have a value that implements the serializable interface. |
SHORT | Attributes of this attribute format must have a value that is a Java short. |
STRING | Attributes of this attribute format must have a value that is a Java string and treated as plain text. |
XML | Attributes of this attribute format must have a value that is a XML-formatted Java string. |
17.1.3.1.2. Attribute Naming Conventions
Catalog taxonomy elements follow the naming convention of group-or-namespace.specific-term
, except for extension fields outside of the core taxonomy.
These follow the naming convention of ext.group-or-namespace.specific-term
and must be namespaced.
Nesting is not permitted.
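For example, topic.keyword follows the core group-or-namespace.specific-term convention, while a hypothetical extension attribute might be named ext.example-org.project-code.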
17.1.3.2. Result
A single "hit" included in a query response.
A result object consists of the following:
-
a metacard.
-
a relevance score if included.
-
distance in meters if included.
17.1.4. Creating Metacards
The quickest way to create a Metacard
is to extend or construct the MetacardImpl
object.
MetacardImpl
is the most commonly used and extended Metacard
implementation in the system because it provides a convenient way for developers to retrieve and set Attributes
without having to create a new MetacardType
(see below).
MetacardImpl
uses BASIC_METACARD
as its MetacardType
.
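A minimal sketch of this approach, with illustrative attribute values:
MetacardImpl metacard = new MetacardImpl(); //defaults to BASIC_METACARD as its MetacardType
metacard.setTitle("Sample Product"); //sets the Metacard.TITLE attribute
metacard.setLocation("POINT (-112.25 33.45)"); //sets the geo-location as a WKT string
metacard.setContentTypeName("text/xml"); //sets the content type of the described product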
17.1.4.1. Limitations
A given developer does not have all the information necessary to programmatically interact with any arbitrary source.
Developers hoping to query custom fields from extensible Metacards
of other sources cannot easily accomplish that task with the current API.
A developer cannot question a source for all its queryable fields.
A developer only knows about the MetacardTypes
which that individual developer has used or created previously.
The only exception to this limitation is the Metacard.ID
field, which is required in every Metacard
that is stored in a source.
A developer can always request Metacards
from a source for which that developer has the Metacard.ID
value.
The developer could also perform a wildcard search on the Metacard.ID
field if the source allows.
17.1.4.2. Processing Metacards
As Metacard
objects are created, updated, and read throughout the Catalog, care should be taken by all catalog components to interrogate the MetacardType
to ensure that additional Attributes
are processed accordingly.
17.1.4.3. Basic Types
The Catalog includes definitions of several basic types all found in the ddf.catalog.data.BasicTypes
class.
Name | Type | Description |
---|---|---|
BASIC_METACARD | MetacardType | Represents all required Metacard Attributes. |
BINARY_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.BINARY. |
BOOLEAN_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.BOOLEAN. |
DATE_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.DATE. |
DOUBLE_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.DOUBLE. |
FLOAT_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.FLOAT. |
GEO_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.GEOMETRY. |
INTEGER_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.INTEGER. |
LONG_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.LONG. |
OBJECT_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.OBJECT. |
SHORT_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.SHORT. |
STRING_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.STRING. |
XML_TYPE | AttributeType | A Constant for an AttributeType with AttributeFormat.XML. |
18. Operations
The Catalog provides the capability to query, create, update, and delete metacards; retrieve resources; and retrieve information about the sources in the enterprise.
Each of these operations follow a request/response paradigm.
The request is the input to the operation and contains all of the input parameters needed by the Catalog Framework’s operation to communicate with the Sources.
The response is the output from the execution of the operation that is returned to the client, which contains all of the data returned by the sources.
For each operation there is an associated request/response pair, e.g., the QueryRequest
and QueryResponse
pair for the Catalog Framework’s query operation.
All of the request and response objects are extensible in that they can contain additional key/value properties on each request/response. This allows additional capability to be added without changing the Catalog API, helping to maintain backwards compatibility.
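For example, a query request/response exchange follows this pattern. The sketch below assumes a CatalogFramework and FilterBuilder have been injected (for example, via blueprint); the property key is illustrative, and constructor overloads should be checked against the version in use:
Map<String, Serializable> properties = new HashMap<String, Serializable>();
properties.put("ExampleProperty", "example-value"); //extra key/value property carried by the request
Filter filter = filterBuilder.attribute(Metacard.TITLE).is().like().text("example*"); //build a title query
QueryRequest queryRequest = new QueryRequestImpl(new QueryImpl(filter), properties);
QueryResponse queryResponse = catalogFramework.query(queryRequest); //execute the query operation
for (Result result : queryResponse.getResults()) {
    Metacard metacard = result.getMetacard(); //each Result wraps a matching Metacard
}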
19. Resources
Resources are the data that is represented by the cataloged metadata in DDF.
Metacards are used to describe those resources through metadata.
This metadata includes the time the resource was created, the location where the resource was created, etc.
A DDF Metacard
contains the getResourceUri
method, which is used to locate and retrieve its corresponding resource.
19.1. Content Item
ContentItem is the domain object populated by the Storage Provider that represents the information about the content to be stored or content that has been stored in the Storage Provider. A ContentItem encapsulates the content’s globally unique ID, mime type, and input stream (i.e., the actual content). The unique ID of a ContentItem will always correspond to a Metacard ID.
19.1.1. Retrieving Resources
When a client attempts to retrieve a resource, it must provide a metacard ID or URI corresponding to a unique resource.
As mentioned above, the resource URI is obtained from a Metacard’s getResourceUri method.
The CatalogFramework
has three methods that can be used by clients to obtain a resource: getEnterpriseResource
, getResource
, and getLocalResource
.
The getEnterpriseResource
method invokes the retrieveResource
method on a local ResourceReader
as well as all the Federated
and Connected
Sources in the DDF enterprise.
The second method, getResource
, takes in a source ID as a parameter and only invokes retrieveResource
on the specified Source
.
The third method invokes retrieveResource
on a local ResourceReader
.
The parameter for each of these methods in the CatalogFramework
is a ResourceRequest
.
DDF includes two implementations of ResourceRequest
: ResourceRequestById
and ResourceRequestByProductUri
.
Since these implementations extend OperationImpl
, they can pass a Map
of generic properties through the CatalogFramework
to customize how the resource request is carried out.
One example of this is explained in the Retrieving Resource Options section below.
The following is a basic example of how to create a ResourceRequest
and invoke the CatalogFramework
resource retrieval methods to process the request.
Map<String, Serializable> properties = new HashMap<String, Serializable>();
properties.put("PropertyKey1", "propertyA"); //properties to customize Resource retrieval
ResourceRequestById resourceRequest = new ResourceRequestById("0123456789abcdef0123456789abcdef", properties); //object containing ID of Resource to be retrieved
String sourceName = "LOCAL_SOURCE"; //the Source ID or name of the local Catalog or a Federated Source
ResourceResponse resourceResponse; //object containing the retrieved Resource and the request that was made to get it.
resourceResponse = catalogFramework.getResource(resourceRequest, sourceName); //Source-based retrieve Resource request
Resource resource = resourceResponse.getResource(); //actual Resource object containing InputStream, mime type, and Resource name
ddf.catalog.resource.ResourceReader
instances can be discovered via the OSGi Service Registry.
The system can contain multiple ResourceReaders
.
The CatalogFramework
determines which one to call based on the scheme of the resource’s URI and what schemes the ResourceReader
supports.
The supported schemes are obtained by a ResourceReader’s getSupportedSchemes
method.
As an example, one ResourceReader
may know how to handle file-based URIs with the scheme file
, whereas another ResourceReader
may support HTTP-based URIs with the scheme http
.
The ResourceReader
or Source
is responsible for locating the resource, reading its bytes, adding the binary data to a Resource
implementation, then returning that Resource
in a ResourceResponse
.
The ResourceReader
or Source
is also responsible for determining the Resource’s name and mime type, which it sends back in the Resource
implementation.
19.1.1.1. BinaryContent
BinaryContent
is an object used as a container to store translated or transformed DDF components.
Resource
extends BinaryContent
and includes a getName
method.
BinaryContent has methods to get the InputStream
, byte
array, MIME type, and size of the represented binary data.
An implementation of BinaryContent
(BinaryContentImpl
) can be found in the Catalog API in the ddf.catalog.data
package.
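A short sketch of wrapping raw data in a BinaryContentImpl; the content and MIME type here are illustrative:
InputStream inputStream = new ByteArrayInputStream("<data/>".getBytes(StandardCharsets.UTF_8));
BinaryContent content = new BinaryContentImpl(inputStream, new MimeType("text/xml"));
long size = content.getSize(); //size in bytes of the represented data
byte[] bytes = content.getByteArray(); //reads the underlying data as a byte array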
19.1.2. Retrieving Resource Options
Options can be specified on a retrieve resource request made through any of the supporting endpoints.
To specify an option for a retrieve resource request, the endpoint needs to first instantiate a ResourceRequestByProductUri
or a ResourceRequestById
.
Both of these ResourceRequest
implementations allow a Map
of properties to be specified.
Put the specified option into the Map
under the key RESOURCE_OPTION
.
Map<String, Serializable> properties = new HashMap<String, Serializable>();
properties.put("RESOURCE_OPTION", "OptionA");
ResourceRequestById resourceRequest = new ResourceRequestById("0123456789abcdef0123456789abcdef", properties);
Depending on the support that the ResourceReader
or Source
provides for options, the properties Map
will be checked for the RESOURCE_OPTION
entry.
If that entry is found, the option will be handled.
If the ResourceReader
or Source
does not support options, that entry will be ignored.
A new ResourceReader
or Source
implementation can be created to support options in a way that is most appropriate.
Since the option is passed through the catalog framework as a property, the ResourceReader
or Source
will have access to that option as long as the endpoint supports options.
19.1.3. Storing Resources
Resources are saved using a ResourceWriter
.
ddf.catalog.resource.ResourceWriter
instances can be discovered via the OSGi Service Registry.
Once retrieved, the ResourceWriter
instance provides clients with a way to store resources and get a corresponding URI that can be used to subsequently retrieve the resource via a ResourceReader
.
Simply invoke either of the storeResource
methods with a resource and any potential arguments.
The ResourceWriter
implementation is responsible for determining where the resource is saved and how it is saved.
This allows flexibility for a resource to be saved in any one of a variety of data stores or file systems.
The following is an example of how to use a generic implementation of ResourceWriter
.
InputStream inputStream = <Video_Input_Stream>; //InputStream of raw Resource data
MimeType mimeType = new MimeType("video/mpeg"); //Mime Type or content type of Resource
String name = "Facility_Video"; //Descriptive Resource name
Resource resource = new ResourceImpl(inputStream, mimeType, name);
Map<String, Object> optionalArguments = new HashMap<String, Object>();
ResourceWriter writer = new ResourceWriterImpl();
URI resourceUri; //URI that can be used to retrieve Resource
resourceUri = writer.storeResource(resource, optionalArguments); //Null can be passed in here
19.2. Resource Components
Resource components are used when working with resources.
A resource is a URI-addressable entity that is represented by a metacard. Resources may also be known as products or data.
Resources may exist either locally or on a remote data store.
Examples of resources include:
-
NITF image
-
MPEG video
-
Live video stream
-
Audio recording
-
Document
A resource object in DDF contains an InputStream with the binary data of the resource.
It describes that resource with a name, which could be a file name, URI, or another identifier.
It also contains a mime type or content type that a client can use to interpret the binary data.
19.3. Resource Readers
A resource reader retrieves resources associated with metacards via URIs. Each resource reader must know how to interpret the resource’s URI and how to interact with the data store to retrieve the resource.
There can be multiple resource readers in a Catalog instance.
The Catalog Framework selects the appropriate resource reader based on the scheme of the resource’s URI.
In order to make a resource reader available to the Catalog Framework, it must be exported to the OSGi Service Registry as a DDF.catalog.resource.ResourceReader.
19.3.1. URL Resource Reader
The URLResourceReader is an implementation of ResourceReader which is included in the DDF Catalog.
It obtains a resource given an http, https, or file-based URL.
The URLResourceReader will connect to the provided Resource URL and read the resource’s bytes into an InputStream.
Warning
|
When a resource linked using a file-based URL is in the product cache, the |
19.3.1.1. Installing the URL Resource Reader
The URLResourceReader
is installed by default with a standard installation in the Catalog application.
19.3.1.2. Configuring Permissions for the URL Resource Reader
Configuring the URL Resource Reader to retrieve files requires adding Security Manager read permission entries for the directory containing the resources. To add the correct permission entries, edit the file <DDF_HOME>/security/configurations.policy. In the URL Resource Reader section of the file, add two new permissions for each top-level directory that the Resource Reader needs to access: one permission to read the directory itself and another to read its contents.
Warning
|
Adding New Permissions
After adding permission entries, a system restart is required for them to take effect. |
grant codeBase "file:/org.apache.tika.core/catalog-core-urlresourcereader" {
    permission java.io.FilePermission "<DIRECTORY_PATH>", "read";
    permission java.io.FilePermission "<OTHER_DIRECTORY_PATH>", "read";
}
Trailing slashes after <DIRECTORY_PATH> have no effect on the permissions granted. For example, adding a permission for "${/}test${/}path" and "${/}test${/}path${/}" are equivalent. The recursive forms "${/}test${/}path${/}-", and "${/}test${/}path${/}${/}-" are also equivalent.
19.3.1.3. Configuring the URL Resource Reader
Configure the URL Resource Reader from the Admin Console.
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
-
Select the URL Resource Reader.
See URL Resource Reader configurations for all possible configurations.
19.3.2. Using the URL Resource Reader
The URLResourceReader will be used by the Catalog Framework to obtain a resource whose metacard is cataloged in the local data store.
This particular ResourceReader will be chosen by the CatalogFramework if the requested resource’s URL has a protocol of http, https, or file.
For example, requesting a resource with the following URL will make the Catalog Framework invoke the URLResourceReader to retrieve the product.
file:///home/users/DDF_user/data/example.txt
If a resource was requested with the URL udp://123.45.67.89:80/SampleResourceStream, the URLResourceReader would not be invoked.
The URLResourceReader supports the following URL schemes:
-
http
-
https
-
file
Note
|
If a file-based URL is passed to the |
19.4. Resource Writers
A resource writer stores a resource and produces a URI that can be used to retrieve the resource at a later time. The resource URI uniquely locates and identifies the resource. Resource writers can interact with an underlying data store and store the resource in the proper place. Each implementation can do this differently, providing flexibility in the data stores used to persist the resources.
Resource Writers should be used within the Content Framework if and when implementing a custom Storage Provider to store the product. The default Storage Provider that comes with the DDF writes the products to the file system.
20. Queries
Clients use ddf.catalog.operation.Query objects to describe which metacards are needed from Sources.
Query objects have two major components:
-
Filters
-
Query Options
A Source uses the Filter criteria constraints to find the requested set of metacards within its domain of metacards. The Query Options are used to further restrict the Filter’s set of requested metacards.
20.1. Filters
An OGC Filter is an Open Geospatial Consortium (OGC) standard that describes a query expression in terms of Extensible Markup Language (XML) and key-value pairs (KVP). The OGC Filter is used to represent a query to be sent to sources and the Catalog Provider, as well as to represent a Subscription. The OGC Filter provides support for expression processing, such as adding or dividing expressions in a query, but that is not the intended use for DDF.
The Catalog Framework does not use the XML representation of the OGC Filter standard. DDF instead uses the Java implementation provided by GeoTools .
GeoTools provides Java equivalent classes for OGC Filter XML elements.
GeoTools originally provided the standard Java classes for the OGC Filter Encoding 1.0 under the package name org.opengis.filter, and that same package name is still used by DDF today.
Java developers do not parse or view the XML representation of a Filter in DDF. Instead, developers use only the Java objects to complete query tasks.
Note that the ddf.catalog.operation.Query interface extends the org.opengis.filter.Filter interface, which means that a Query object is an OGC Java Filter with Query Options.
public interface Query extends Filter
20.1.1. FilterBuilder API
To avoid the complexities of working with the Filter interface directly and implementing the DDF Profile of the Filter specification, the Catalog includes an API, primarily in DDF.filter, to build Filters using a fluent API.
To use the FilterBuilder API, an instance of DDF.filter.FilterBuilder should be obtained via the OSGi registry.
Typically, this will be injected via a dependency injection framework.
Once an instance of FilterBuilder is available, methods can be called to create and combine Filters.
Tip
|
The fluent API is best accessed using an IDE that supports code-completion. For additional details, refer to the [Catalog API Javadoc]. |
20.1.2. Boolean Operators
Filters use a number of boolean operators.
FilterBuilder.allOf(Filter …)
-
creates a new Filter that requires all provided Filters are satisfied (Boolean AND), either from a List or Array of Filter instances.
FilterBuilder.anyOf(Filter …)
-
creates a new Filter that requires at least one of the provided Filters are satisfied (Boolean OR), either from a List or Array of Filter instances.
FilterBuilder.not(Filter filter)
-
creates a new Filter that requires the provided Filter must not match (Boolean NOT).
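The sketch below combines these operators with the attribute-based builder described in the next section. It is illustrative only: it assumes a FilterBuilder instance has already been obtained from the OSGi registry, and the attribute names and search phrases are placeholders.
// Assumes 'filterBuilder' is a DDF.filter.FilterBuilder obtained from the OSGi registry;
// attribute names and search phrases are illustrative placeholders.
Filter titleFilter = filterBuilder.attribute("title").is().like().text("satellite*");
Filter descriptionFilter = filterBuilder.attribute("description").is().like().text("imagery");

// Boolean AND: both constraints must be satisfied.
Filter andFilter = filterBuilder.allOf(titleFilter, descriptionFilter);

// Boolean OR: at least one constraint must be satisfied.
Filter orFilter = filterBuilder.anyOf(titleFilter, descriptionFilter);

// Boolean NOT: the title constraint must not match.
Filter notFilter = filterBuilder.not(titleFilter);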
20.1.3. Attribute
Filters can be based on specific attributes.
FilterBuilder.attribute(String attributeName)
-
begins a fluent API for creating an Attribute-based Filter, i.e., a Filter that matches on Metacards with Attributes of a particular value.
20.1.4. XPath
Filters can be based on XML attributes.
FilterBuilder.xpath(String xpath)
-
begins a fluent API for creating an XPath-based Filter, i.e., a Filter that matches on Metacards with Attributes of type XML that match when evaluating a provided XPath selector.
FilterBuilder.attribute(attributeName).is().like().text(String contextualSearchPhrase);
FilterBuilder.attribute(attributeName).is().like().caseSensitiveText(String caseSensitiveContextualSearchPhrase);
FilterBuilder.attribute(attributeName).is().like().fuzzyText(String fuzzySearchPhrase);
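A corresponding XPath usage sketch, assuming the xpath chain supports the same is().like().text(…) terminators as the attribute chain above; the selector and search phrase are placeholders:
// Hedged sketch: matches metacards whose XML metadata satisfies the XPath selector.
Filter xmlFilter = filterBuilder.xpath("//title").is().like().text("satellite");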
21. Metrics
DDF includes a system of data-collection to enable monitoring system health, user interactions, and overall system performance: Metrics Collection.
The Metrics Collection Application collects data for all of the pre-configured metrics in DDF and stores them in custom JMX Management Bean (MBean) attributes.
Samples of each metric’s data are collected every 60 seconds and stored in the <DDF_HOME>/data/metrics directory, with each metric stored in its own .rrd file.
Refer to the Metrics Reporting Application for how the stored metrics data can be viewed.
Warning
|
Do not remove the <DDF_HOME>/data/metrics directory or the .rrd files in it; doing so permanently deletes the collected metrics data.
Also note that if DDF is uninstalled/re-installed, all existing metrics data will be permanently lost. |
- Catalog Metrics
-
Metrics collected about the catalog status.
- Source Metrics
-
Metrics collected per source.
21.1. Metrics Collection Application
The Metrics Collection Application is responsible for collecting both Catalog and Source metrics.
Use Metrics Collection to collect historical metrics data, such as catalog query metrics, message latency, or individual source metrics.
21.1.1. Installing Metrics Collection
The Metrics Collection application is installed by default with a standard installation.
The catalog-level metrics are packaged as the catalog-core-metricsplugin
feature, and the source-level metrics are packaged as the catalog-core-sourcemetricsplugin
feature.
21.1.2. Configuring Metrics Collection
No configuration is required for the Metrics Collection application. All metrics collected are either pre-configured in DDF or dynamically created as sources are created or deleted.
21.1.3. Catalog Metrics
21.1.4. Source Metrics
Metrics are also collected on a per-source basis for each configured Federated Source and Catalog Provider.
When the source is configured, the metrics listed in the table below are automatically created.
Metrics are collected for each request (whether an enterprise query or a source-specific query).
When the source is deleted (or renamed), the associated metrics' MBeans and Collectors are also deleted.
However, the RRD file in the data/metrics directory containing the collected metrics remains indefinitely and remains accessible from the Metrics tab in the Admin Console.
In the table below, the metric name is based on the Source’s ID (indicated by <sourceId>).
Metric | JMX MBean Name | MBean AttributeName | Description |
---|---|---|---|
Source <sourceId> Exceptions | | Count | A count of the total number of exceptions, of all types, thrown from catalog queries executed on this source. |
Source <sourceId> Queries | | Count | A count of the number of queries attempted on this source. |
Source <sourceId> Queries Total Results | | Mean | An average of the total number of results returned from executed queries on this source. This total results data is averaged over the metric’s sample rate. |
For example, if a Federated Source was created with a name of fs-1
, then the following metrics would be created for it:
-
Source Fs1 Exceptions
-
Source Fs1 Queries
-
Source Fs1 Queries Total Results
If this federated source is then renamed to fs-1-rename
, the MBeans and Collectors for the fs-1
metrics are deleted, and new MBeans and Collectors are created with the new names:
-
Source Fs1 Rename Exceptions
-
Source Fs1 Rename Queries
-
Source Fs1 Rename Queries Total Results
Note that the metrics with the previous name remain on the Metrics tab because the data collected while the Source had this name remains valid and thus needs to be accessible.
Therefore, it is possible to access metrics data for sources renamed months ago, i.e., until DDF is reinstalled or the metrics data is deleted from the <DDF_HOME>/data/metrics
directory.
Also note that the source metrics' names are modified to remove all non-alphanumeric characters and converted to camelCase.
21.2. Metrics Reporting Application
The DDF Metrics Reporting Application provides access to historical data in several formats: a graphic, a comma-separated values file, a spreadsheet, a PowerPoint file, XML, and JSON formats for system metrics collected while DDF is running. Aggregate reports (weekly, monthly, and yearly) are also provided where all collected metrics are included in the report. Aggregate reports are available in Excel and PowerPoint formats.
To use the Metrics Reporting Application:
-
Navigate to the Admin Console.
-
Select the Platform Application.
-
Select the Metrics tab.
With each metric in the list, a set of hyperlinks is displayed under each column. Each column’s header is displayed with the available time ranges. The time ranges currently supported are 15 minutes, 1 hour, 1 day, 1 week, 1 month, 3 months, 6 months, and 1 year, measured from the time that the hyperlink is clicked.
All metrics reports are generated by accessing the collected metric data stored in the <DDF_HOME>/data/metrics
directory.
All files in this directory are generated by the JmxCollector using RRD4J, an open source Round Robin Database implementation for Java.
All files in this directory will have the .rrd
file extension and are binary files, hence they cannot be opened directly.
These files should only be accessed using the Metrics tab’s hyperlinks.
There is one RRD file per metric being collected.
Each RRD file is sized at creation time and will never increase in size as data is collected.
One year’s worth of metric data requires approximately 1 MB of file storage.
Warning
|
Do not remove the <DDF_HOME>/data/metrics directory or the .rrd files in it; doing so permanently deletes the collected metrics data.
Also note that if DDF is uninstalled/re-installed, all existing metrics data will be permanently lost. |
Hyperlinks are provided for each metric and each format in which data can be displayed.
For example, the PNG hyperlink for 15m for the Catalog Queries metric maps to https://{FQDN}:{PORT}/services/internal/metrics/catalogQueries.png?dateOffset=900, where the dateOffset=900
indicates the previous 900 seconds (15 minutes) to be graphed.
Note that the date format will vary according to the regional/locale settings for the server.
All of the metric graphs displayed are in PNG format and are displayed on their own page.
The user may use the back button in the browser to return to the Admin Console, or, when selecting the hyperlink for a graph, they can use the right mouse button in the browser to display the graph in a separate browser tab or window, which will keep the Admin Console displayed.
The user can also specify custom time ranges by adjusting the URL used to access the metric’s graph.
The Catalog Queries metric data may also be graphed for a specific time range by specifying the startDate
and endDate
query parameters in the URL.
For example, to graph the Catalog Queries metric data from March 31, 2013, 6:00 am, to April 1, 2013, 11:00 am (Arizona time zone, which is -07:00), the URL would be:
https://{FQDN}:{PORT}/services/internal/metrics/catalogQueries.png?startDate=2013-03-31T06:00:00-07:00&endDate=2013-04-01T11:00:00-07:00
Or to view the last 30 minutes of data for the Catalog Queries metric, a custom URL with a dateOffset=1800
(30 minutes in seconds) could be used:
https://{FQDN}:{PORT}/services/internal/metrics/catalogQueries.png?dateOffset=1800
21.2.1. Metrics Aggregate Reports
The Metrics tab also provides aggregate reports for the collected metrics. These are reports that include data for all of the collected metrics for the specified time range.
The aggregate reports provided are:
-
Weekly reports for each week up to the past four complete weeks from current time. A complete week is defined as a week from Monday through Sunday. For example, if current time is Thursday, April 11, 2013, the past complete week would be from April 1 through April 7.
-
Monthly reports for each month up to the past 12 complete months from current time. A complete month is defined as the full month(s) preceding current time. For example, if current time is Thursday, April 11, 2013, the past complete 12 months would be from April 2012 through March 2013.
-
Yearly reports for the past complete year from current time. A complete year is defined as the full year preceding current time. For example, if current time is Thursday, April 11, 2013, the past complete year would be 2012.
An aggregate report in XLS format would consist of a single workbook (spreadsheet) with multiple worksheets in it, where a separate worksheet exists for each collected metric’s data. Each worksheet would display:
-
the metric’s name and the time range of the collected data,
-
two columns: Timestamp and Value, for each sample of the metric’s data that was collected during the time range, and
-
a total count (if applicable) at the bottom of the worksheet.
An aggregate report in PPT format would consist of a single slideshow with a separate slide for each collected metric’s data. Each slide would display:
-
a title with the metric’s name.
-
the PNG graph for the metric’s collected data during the time range.
-
a total count (if applicable) at the bottom of the slide.
Hyperlinks are provided for each aggregate report’s time range in the supported display formats, which include Excel (XLS) and PowerPoint (PPT). Aggregate reports for custom time ranges can also be accessed directly via the URL:
https://{FQDN}:{PORT}/services/internal/metrics/report.<format>?startDate=<start_date_value>&endDate=<end_date_value>
where <format>
is either xls
or ppt
and the <start_date_value>
and <end_date_value>
specify the custom time range for the report.
These example reports represent custom aggregate reports. NOTE: all example URLs begin with https://{FQDN}:{PORT}, which is omitted in the table for brevity.
Description | URL |
---|---|
XLS aggregate report for March 15, 2013 to April 15, 2013 | |
XLS aggregate report for last 8 hours | |
PPT aggregate report for March 15, 2013 to April 15, 2013 | |
PPT aggregate report for last 8 hours | |
21.2.2. Viewing Metrics
The Metrics Viewer has reports in various formats.
-
Navigate to the Admin Console.
-
Select the Platform application.
-
Select the Metrics tab.
Reports are organized by timeframe and output format.
Standard time increments:
* 15m: 15 Minutes
* 1h: 1 Hour
* 1d: 1 Day
* 1w: 1 Week
* 1M: 1 Month
* 3M: 3 Months
* 6M: 6 Months
* 1y: 1 Year
Custom timeframes are also available via the selectors at the bottom of the page.
Output formats:
* PNG
* CSV (Comma-separated values)
* XLS
Note
|
Based on the browser’s configuration, either the |
22. Action Framework
The Action Framework was designed as a way to limit dependencies between applications (apps) in a system. For instance, a feature in an app, such as an Atom feed generator, might want to include an external link as part of its feed’s entries. That feature does not have to be coupled to a REST endpoint to work, nor does it have to depend on a specific implementation to get a link. In reality, the feature does not identify how the link is generated, but it does identify whether the link works or does not work when retrieving the intended entry’s metadata. Instead of creating its own mechanism or adding an unrelated feature, it could use the Action Framework to query the OSGi container for any service that can provide a link. This does two things: it allows the feature to be independent of implementations, and it encourages reuse of common services.
The Action Framework consists of two major Java interfaces in its API:
-
ddf.action.Action
-
ddf.action.ActionProvider
- Actions
-
Specific tasks that can be performed as services.
- Action Providers
-
Lists of related actions that a service is capable of performing.
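As a rough sketch of how an endpoint might use these interfaces, the loop below asks each discovered provider for an action for a given metacard. The method names on Action and ActionProvider (getAction, getTitle, getUrl) follow the general shape of the ddf.action API but should be treated as assumptions to verify against the Catalog API Javadoc; the feed-entry step is purely hypothetical.
// Hedged sketch: 'actionProviders' is an injected list of ActionProvider services.
// Method names are assumptions to verify against the ddf.action Javadoc.
for (ActionProvider provider : actionProviders) {
    Action action = provider.getAction(metacard); // ask the provider for a link for this metacard
    if (action != null) {
        String title = action.getTitle();
        String url = action.getUrl().toString();
        // e.g., add (title, url) as an external link on the Atom feed entry (hypothetical step)
    }
}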
22.1. Action Providers
- Download Resource ActionProvider
-
Downloads a resource to the local product cache.
- IdP Logout Action Provider
-
Identity Provider Logout.
- Karaf Logout Action
-
Local Logout.
- LDAP Logout Action
-
LDAP Logout.
- Overlay ActionProvider
-
Provides a metacard URL that transforms the metacard into a geographically aligned image (suitable for overlaying on a map).
- View Metacard ActionProvider
-
Provides a URL to a metacard.
- Metacard Transformer ActionProvider
-
Provides a URL to a metacard that has been transformed into a specified format.
23. Asynchronous Processing Framework
Note
|
This code is experimental. While this interface is functional and tested, it may change or be removed in a future version of the library. |
The Asynchronous Processing Framework is a way to run plugins asynchronously. Generally, plugins that take a significant amount of processing time and whose results are not immediately required are good candidates for being asynchronously processed. A Processing Framework can be run on either the local or a remote system. Once the Processing Framework finishes processing incoming requests, it may submit (Create|Update|Delete)Requests to the Catalog. The type of plugins that a Processing Framework runs are the Post-Process Plugins. The Post-Process Plugins are triggered by the Processing Post-Ingest Plugin, which is a Post-Ingest Plugin. Post-Ingest Plugins are run after the metacard has been ingested into the Catalog. This feature is uninstalled by default.
Warning
|
The Processing Framework does not support partial updates to the Catalog. This means that if any changes are made to a metacard in the Catalog between the time asynchronous processing starts and ends, those changes will be overwritten by the ProcessingFramework updates sent back to the Catalog. This feature should be used with caution. |
-
org.codice.ddf.catalog.async.processingframework.api.internal.ProcessingFramework
-
org.codice.ddf.catalog.async.plugin.api.internal.PostProcessPlugin
-
org.codice.ddf.catalog.async.data.api.internal.ProcessItem
-
org.codice.ddf.catalog.async.data.api.internal.ProcessCreateItem
-
org.codice.ddf.catalog.async.data.api.internal.ProcessUpdateItem
-
org.codice.ddf.catalog.async.data.api.internal.ProcessDeleteItem
-
org.codice.ddf.catalog.async.data.api.internal.ProcessRequest
-
org.codice.ddf.catalog.async.data.api.internal.ProcessResource
-
org.codice.ddf.catalog.async.data.api.internal.ProcessResourceItem
The ProcessingFramework is responsible for processing incoming ProcessRequests that contain a ProcessItem. A ProcessingFramework should never block. It receives its ProcessRequests from a PostIngestPlugin on all CUD operations to the Catalog. In order to determine whether or not asynchronous processing is required by the ProcessingFramework, the ProcessingFramework should mark any request it has submitted back to the Catalog; otherwise a processing loop may occur.
For example, the default In-Memory Processing Framework adds a POST_PROCESS_COMPLETE flag to the Catalog CUD request after processing. This flag is checked by the ProcessingPostIngestPlugin before a ProcessRequest is sent to the ProcessingFramework. For an example of a ProcessingFramework, please refer to the org.codice.ddf.catalog.async.processingframework.impl.InMemoryProcessingFramework.
A ProcessRequest contains a list of ProcessItems for the ProcessingFramework to process. Once a ProcessRequest has been processed by a ProcessingFramework, the ProcessingFramework should mark the ProcessRequest as already processed, so that it does not process it again.
The PostProcessPlugin is a plugin that will be run by the ProcessingFramework. It is capable of processing ProcessCreateItems, ProcessUpdateItems, and ProcessDeleteItems, as illustrated in the sketch below.
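The sketch shows the rough shape such a plugin might take. The method names, generic signatures, and PluginExecutionException are assumptions based on the interface list above, not confirmed signatures; consult the catalog async plugin API source for the actual contract.
// Hedged sketch only: method names and signatures are assumptions,
// not the confirmed PostProcessPlugin contract.
public class SampleEnrichmentPlugin implements PostProcessPlugin {

    @Override
    public ProcessRequest<ProcessCreateItem> processCreate(ProcessRequest<ProcessCreateItem> input)
            throws PluginExecutionException {
        // Long-running enrichment work belongs here, since this plugin
        // runs asynchronously after the metacard has been ingested.
        return input;
    }

    @Override
    public ProcessRequest<ProcessUpdateItem> processUpdate(ProcessRequest<ProcessUpdateItem> input)
            throws PluginExecutionException {
        return input; // no-op for updates in this sketch
    }

    @Override
    public ProcessRequest<ProcessDeleteItem> processDelete(ProcessRequest<ProcessDeleteItem> input)
            throws PluginExecutionException {
        return input; // no-op for deletes in this sketch
    }
}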
Warning
|
ProcessItem
Do not implement ProcessItem directly; implement ProcessCreateItem, ProcessUpdateItem, or ProcessDeleteItem instead. |
The ProcessItem is contained by a ProcessRequest. It can be either a ProcessCreateItem, ProcessUpdateItem, or ProcessDeleteItem.
The ProcessResource is a piece of content that is attached to a metacard. The piece of content can be either local or remote.
The ProcessResourceItem indicates that the item being processed may have a ProcessResource associated with it.
Warning
|
ProcessResourceItem Warning
Do not implement ProcessResourceItem directly; implement ProcessCreateItem or ProcessUpdateItem instead. |
The ProcessCreateItem is an item for a metacard that has been created in the Catalog. It contains the created metacard and, optionally, a ProcessResource.
The ProcessUpdateItem is an item for a metacard that has been updated in the Catalog. It contains the original metacard, the updated metacard, and, optionally, a ProcessResource.
The ProcessDeleteItem is an item for a metacard that has been deleted in the Catalog. It contains the deleted metacard.
24. Eventing
The Eventing capability of the Catalog allows endpoints (and thus external users) to create a "standing query" and be notified when a matching metacard is created, updated, or deleted.
Notably, the Catalog allows event evaluation on both the previous value (if available) and new value of a Metacard when an update occurs.
Eventing allows DDFs to receive events on operations (e.g. create, update, delete) based on particular queries or actions. Once subscribed, users will receive notifications of events such as update or create on any source.
25. Migration API
Note
|
This code is experimental. While the interfaces and classes provided are functional and tested, they may change or be removed in a future version of the library. |
DDF currently has an experimental API for making bundles migratable. Interfaces and classes in platform/migration/platform-migratable-api are used by the system to identify bundles that provide implementations for export and import operations.
The migration API provides a mechanism for bundles to handle exporting data required to clone or backup/restore a DDF system. The migration process is meant to be flexible, so an implementation of org.codice.ddf.migration.Migratable can handle exporting data for a single bundle or groups of bundles such as applications. For example, the org.codice.ddf.platform.migratable.impl.PlatformMigratable handles exporting core system files for the Platform application. Each migratable must provide a unique identifier via its getId() method, used by the migration API to uniquely identify the migratable between exports and imports.
DDF defines migratables of its own to export/import all configurations stored in org.osgi.service.cm.ConfigurationAdmin. These do not need to be handled by implementations of org.codice.ddf.migration.Migratable.
An export and an import operation can be performed through the Command Console.
When an export operation is processed, the migration API will look up all registered OSGi services that implement Migratable and call their doExport() method. As part of the exported data, information about the migratable as required by the org.codice.ddf.platform.services.common.Describable interface will be included. In particular, the version string returned will help the migration API identify the version of the exported data from the corresponding migratable and must be provided as a non-blank string.
When an import operation is processed, the migration API will do another look-up for all registered OSGi services that implement Migratable and call their doImport() or doIncompatibleImport() methods based on whether the version string recorded at export time is equal to the version string currently provided by the migratable. The doMissingImport() method will be called instead of the other two methods when the migration API detects that the corresponding migratable data is missing from the exported data.
Any migratables that are tagged using the OptionalMigratable tag interface will automatically be skipped unless otherwise specified when the import phase is initiated.
The services that implement the migratable interface will be called one at a time based on their service ranking order, and do not need to be thread safe. A bundle or a feature can have as many services implementing the interfaces as needed.
25.1. The Migration API Interfaces and Classes
-
org.codice.ddf.migration.Migratable
-
org.codice.ddf.migration.OptionalMigratable
-
org.codice.ddf.migration.MigrationContext
-
org.codice.ddf.migration.ExportMigrationContext
-
org.codice.ddf.migration.ImportMigrationContext
-
org.codice.ddf.migration.MigrationEntry
-
org.codice.ddf.migration.ExportMigrationEntry
-
org.codice.ddf.migration.ImportMigrationEntry
-
org.codice.ddf.migration.MigrationOperation
-
org.codice.ddf.migration.MigrationReport
-
org.codice.ddf.migration.MigrationMessage
-
org.codice.ddf.migration.MigrationException
-
org.codice.ddf.migration.MigrationWarning
-
org.codice.ddf.migration.MigrationInformation
-
org.codice.ddf.migration.MigrationSuccessfulInformation
25.1.1. Migratable
This interface defines the contract for a migratable. It is the only interface that should be implemented by implementers and registered as an OSGi service. All other interfaces will be implemented by the migration API that provides support for migratables.
The org.codice.ddf.migration.Migratable
interface defines these methods:
-
String getId()
-
String getVersion()
-
String getTitle()
-
String getDescription()
-
String getOrganization()
-
void doExport(ExportMigrationContext context)
-
void doImport(ImportMigrationContext context)
-
void doIncompatibleImport(ImportMigrationContext context)
-
void doMissingImport(ImportMigrationContext context)
The getId()
method returns a unique identifier for this migratable that must remain constant between the export and the import operations in order for the migration API to correlate the exported data with the migratable during the import operation. It
must be unique across all migratables.
The getVersion() method returns a unique version string which is meant to identify the version of the data exported or supported at import time by the migratable. It cannot be blank and its format is left to the migratable. The only notable requirement is that when the string compares equal using the String.equals() method, the migration API will call doImport() instead of doIncompatibleImport() to restore previously exported data for the migratable.
The getTitle()
method returns a simple title for the migratable.
The getDescription()
method returns a short description of the type of data exported by the migratable.
The getOrganization()
method provides the name of the organization responsible
for the migratable.
The doExport()
method is called by the migration API along with a context for the current export operation to store data.
The doImport()
method is called by the migration API along with a context for the current import operation when
the version of exported data matches the current version reported by the migratable. This method can be used to restore previously exported data.
The doIncompatibleImport()
method is called to restore incompatible data which might require transformation. It is provided a context for the current import operation and the previously exported version. It can then proceed with restoring incompatible data which might require transformation.
Finally, the doMissingImport()
method will be called along with the context for the current import operation when data had not been exported for the corresponding migratable.
This will be the case when a migratable is later introduced in the software distribution.
In order to create a Migratable
for a module of the system, the org.codice.ddf.migration.Migratable
interface must be implemented and the implementation must be registered under the org.codice.ddf.migration.Migratable
interface as an OSGi service in the OSGi service registry.
Creating an OSGi service allows for the migration API to lookup all implementations of org.codice.ddf.migration.Migratable
and command them to export or import.
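Putting the pieces together, a minimal sketch of a custom migratable might look like the following. The identifier, version, organization, and exported path are illustrative placeholders, and the entry paths are assumed to be relative to <DDF_HOME>.
// Illustrative sketch of a Migratable; names, version, and paths are placeholders.
// Uses java.nio.file.Paths for entry paths.
public class SampleMigratable implements Migratable {

    @Override
    public String getId() {
        // Must be unique across migratables and constant between export and import.
        return "sample-migratable";
    }

    @Override
    public String getVersion() {
        return "1.0"; // non-blank; format is left to the migratable
    }

    @Override
    public String getTitle() {
        return "Sample Migratable";
    }

    @Override
    public String getDescription() {
        return "Exports and imports a sample configuration file";
    }

    @Override
    public String getOrganization() {
        return "Sample Organization";
    }

    @Override
    public void doExport(ExportMigrationContext context) {
        // Copy the file into the exported data.
        context.getEntry(Paths.get("etc", "sample.properties")).store();
    }

    @Override
    public void doImport(ImportMigrationContext context) {
        // Versions match: restore the previously exported file.
        context.getEntry(Paths.get("etc", "sample.properties")).restore();
    }

    @Override
    public void doIncompatibleImport(ImportMigrationContext context) {
        // Transform data exported by an older version before restoring it.
        context.getReport().record("Transforming sample data from an older export");
    }

    @Override
    public void doMissingImport(ImportMigrationContext context) {
        // No data was exported for this migratable (e.g., it was added later).
        context.getReport().record("No sample data found in export; using defaults");
    }
}
Registering this class as an OSGi service under org.codice.ddf.migration.Migratable makes it discoverable by the migration API.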
25.1.2. OptionalMigratable
This interface is designed as a tagged interface to identify optional migratables. An optional migratable will be skipped by default during the import phase. It can still be manually marked as mandatory when initiating the import phase.
25.1.3. MigrationContext
The org.codice.ddf.migration.MigrationContext
provides contextual information about an operation in progress for a given migratable. This is a sort of sandbox that is unique to each migratable. This interface defines the following methods:
-
MigrationReport getReport()
-
String getId()
The getReport()
method returns a migration report that can be used to record messages while processing an export or an import operation.
The getId()
method returns the identifier for the currently processing migratable.
25.1.4. ExportMigrationContext
The export migration context provides methods for creating new migration entries and system property referenced migration entries to track exported migration files for a given migratable while processing an export migration operation. It defines the following methods:
-
Optional<ExportMigrationEntry> getSystemPropertyReferencedEntry(String name)
-
Optional<ExportMigrationEntry> getSystemPropertyReferencedEntry(String name, BiPredicate<MigrationReport, String> validator)
-
ExportMigrationEntry getEntry(Path path)
-
Stream<ExportMigrationEntry> entries(Path path)
-
Stream<ExportMigrationEntry> entries(Path path, PathMatcher filter)
-
Stream<ExportMigrationEntry> entries(Path path, boolean recurse)
-
Stream<ExportMigrationEntry> entries(Path path, boolean recurse, PathMatcher filter)
The getSystemPropertyReferencedEntry()
methods create a migration entry to track a file referenced by a given system property value.
The getEntry()
method creates a migration entry given the path for a specific file or directory.
The entries()
methods create multiple entries corresponding to all files recursively (or not) located underneath a given path with an optional path matcher to filter which files to create entries for.
Once an entry is created, it is not stored with the exported data. It is the migratable’s responsibility to store the data using one of the entry’s provided methods. Entries are uniquely identified using a relative path and are specific to each migratable, meaning that an entry with the same path in two migratables will not conflict. Each migratable is given its own context (a.k.a. sandbox) to work with.
25.1.5. ImportMigrationContext
The import migration context provides methods for retrieving migration entries and system property referenced migration entries corresponding to exported files for a given migratable while processing an import migration operation. It defines the following methods:
-
Optional<ImportMigrationEntry> getSystemPropertyReferencedEntry(String name)
-
ImportMigrationEntry getEntry(Path path)
-
Stream<ImportMigrationEntry> entries(Path path)
-
Stream<ImportMigrationEntry> entries(Path path, PathMatcher filter)
The getSystemPropertyReferencedEntry()
method retrieves a migration entry for a file that was referenced by a given system property value.
The getEntry()
method retrieves a migration entry given the path for a specific file or directory.
The entries() methods retrieve multiple entries corresponding to all exported files recursively located underneath a given relative path, with an optional path matcher to filter which files to retrieve entries for.
Once an entry is retrieved, its exported data is not restored. It is the migratable’s responsibility to restore the data using one of the entry’s provided methods. Entries are uniquely identified using a relative path and are specific to each migratable, meaning that an entry with the same path in two migratables will not conflict. Each migratable is given its own context (a.k.a. sandbox) to work with.
25.1.6. MigrationEntry
This interface provides support for exported files. It defines the following methods:
-
MigrationReport getReport()
-
String getId()
-
String getName()
-
Path getPath()
-
boolean isDirectory()
-
boolean isFile()
-
long getLastModifiedTime()
The getReport()
method provides access to the associated migration report where messages can be recorded.
The getId()
method returns the identifier for the migratable responsible for this entry.
The getName()
method provides the unique name for this entry in an OS-independent way.
The getPath()
method provides the unique path to the corresponding file for this entry in an OS-specific way.
The isDirectory()
method indicates if the entry represents a directory.
The isFile()
method indicates if the entry represents a file.
The getLastModifiedTime()
method provides the last modification time for the corresponding file or directory as available when the file or directory is exported.
25.1.7. ExportMigrationEntry
The export migration entry provides additional methods available for entries created at export time. It defines the following methods:
-
Optional<ExportMigrationEntry> getPropertyReferencedEntry(String name)
-
Optional<ExportMigrationEntry> getPropertyReferencedEntry(String name, BiPredicate<MigrationReport, String> validator)
-
boolean store()
-
boolean store(boolean required)
-
boolean store(PathMatcher filter)
-
boolean store(boolean required, PathMatcher filter)
-
boolean store(BiThrowingConsumer<MigrationReport, OutputStream, IOException> consumer)
-
OutputStream getOutputStream() throws IOException
The getPropertyReferencedEntry()
methods create another migration entry for a file that was referenced by a given property value in the file represented by this entry.
The store() and store(boolean required) methods will automatically copy the content of the corresponding file as part of the export, making sure the file exists on disk (if required); otherwise an error will be recorded. If the path represents a directory, then all files recursively found under the path will be automatically exported.
The store(PathMatcher filter) and store(boolean required, PathMatcher filter) methods will automatically copy the content of the corresponding file, if it matches the filter, as part of the export, making sure the file exists on disk (if required); otherwise an error will be recorded. If the path represents a directory, then all matching files recursively found under the path will be automatically exported.
The store(BiThrowingConsumer<MigrationReport, OutputStream, IOException> consumer) method allows the migratable to control the export process by specifying a callback consumer that will be called back with an output stream where the data can be written, instead of having a file on disk copied by the migration API.
The OutputStream getOutputStream()
method provides access to the low-level output stream where the migratable can write data directly as opposed to having a file on disk copied automatically.
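For data that does not already exist as a file on disk, the consumer form can be used from within doExport(); a brief sketch, assuming BiThrowingConsumer is usable as a lambda, with a placeholder path and payload:
// Write generated data directly into the export instead of copying a file from disk.
// Path and payload are illustrative placeholders.
context.getEntry(Paths.get("etc", "generated-state.json"))
    .store((report, out) -> out.write("{\"state\":\"exported\"}".getBytes(StandardCharsets.UTF_8)));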
25.1.8. ImportMigrationEntry
The import migration entry provides additional methods available for entries retrieved at import time. It defines the following methods:
-
Optional<ImportMigrationEntry> getPropertyReferencedEntry(String name)
-
boolean restore()
-
boolean restore(boolean required)
-
boolean restore(PathMatcher filter)
-
boolean restore(boolean required, PathMatcher filter)
-
boolean restore(BiThrowingConsumer<MigrationReport, Optional<InputStream>, IOException> consumer)
-
Optional<InputStream> getInputStream() throws IOException
The getPropertyReferencedEntry()
method retrieves another migration entry for a file that was referenced by a given property value in the file represented by this entry.
The restore()
and restore(boolean required)
methods will automatically copy the exported content of the corresponding file back to disk if it was exported; otherwise an error will be recorded. If the path represents a directory then all file entries originally recursively exported under this entry’s path will be automatically imported. If the directory had been completely exported using one of the store()
or store(boolean required)
methods then in addition to restoring all entries recursively, calling this method will also remove any existing files or directories that were not on the original system.
The restore(PathMatcher filter)
and restore(boolean required, PathMatcher filter)
methods will automatically copy the exported content of the corresponding file if it matches the filter back to disk if it was exported; otherwise an error will be recorded. If the path represents a directory then all matching file entries originally recursively exported under this entry’s path will be automatically imported.
The restore(BiThrowingConsumer<MigrationReport, Optional<InputStream>, IOException> consumer)
method allows the migratable to control the import process by specifying a callback consumer that will be called back with an optional input stream (empty if the data was not exported) where the data can be read from instead of having a file on disk being created or updated by the migration API.
The Optional<InputStream> getInputStream()
method provides access to the optional low-level input stream (empty if the data was not exported) where the migratable can read data directly as opposed to having a file on disk created or updated automatically.
25.1.9. MigrationOperation
The org.codice.ddf.migration.MigrationOperation
provides a simple enumeration for identifying the various migration operations available.
25.1.10. MigrationReport
The org.codice.ddf.migration.MigrationReport
interface provides information about the execution of a migration operation. It defines the following methods:
-
MigrationOperation getOperation()
-
Instant getStartTime()
-
Optional<Instant> getEndTime()
-
MigrationReport record(String msg)
-
MigrationReport record(String format, @Nullable Object… args)
-
MigrationReport record(MigrationMessage msg)
-
MigrationReport doAfterCompletion(Consumer<MigrationReport> code)
-
Stream<MigrationMessage> messages()
-
default Stream<MigrationException> errors()
-
Stream<MigrationWarning> warnings()
-
Stream<MigrationInformation> infos()
-
boolean wasSuccessful()
-
boolean wasSuccessful(@Nullable Runnable code)
-
boolean wasIOSuccessful(@Nullable ThrowingRunnable<IOException> code) throws IOException
-
boolean hasInfos()
-
boolean hasWarnings()
-
boolean hasErrors()
-
void verifyCompletion()
The getOperation()
method provides the type of migration operation (i.e. export or import) currently in progress.
The getStartTime()
method provides the time at which the corresponding operation started.
The getEndTime()
method provides the optional time at which the corresponding operation ended. The time is only available if the operation has ended.
The record()
methods enable messages to be recorded with the report. Messages are displayed on the console for the administrator.
The doAfterCompletion()
methods enable code to be registered such that it is invoked at the end before a successful result is returned. Such code can still affect the result of the operation.
The messages()
method provides access to all recorded messages so far.
The errors()
method provides access to all recorded error messages so far.
The warnings()
method provides access to all recorded warning messages so far.
The infos()
method provides access to all recorded informational messages so far.
The wasSuccessful()
method provides a quick check to see if the report is successful. A successful report might have warnings recorded but cannot have errors recorded.
The wasSuccessful(@Nullable Runnable code) method allows code to be executed. It will return true if no new errors are recorded as a result of executing the provided code.
The wasIOSuccessful(@Nullable ThrowingRunnable<IOException> code) method allows code to be executed which can throw I/O exceptions that are automatically recorded as errors. It will return true if no new errors are recorded as a result of executing the provided code.
The hasInfos() method will return true if at least one informational message has been recorded so far.
The hasWarnings() method will return true if at least one warning message has been recorded so far.
The hasErrors() method will return true if at least one error message has been recorded so far.
The verifyCompletion() method will verify whether the report is successful; if not, it will throw the first recorded exception and attach all other recorded exceptions as suppressed exceptions.
25.1.11. MigrationMessage
The org.codice.ddf.migration.MigrationMessage is defined as a base class for all recordable messages during migration operations. It defines the following methods:
-
String getMessage()
The getMessage()
method provides a message for the corresponding exception, warning, or info that will be displayed to the administrator on the console.
25.1.12. MigrationException
An org.codice.ddf.migration.MigrationException should be thrown when an unrecoverable exception occurs that prevents the export or the import operation from continuing. It is also possible to simply record one or many exceptions with the migration report in order to fail the export or import operation without aborting it right away. This provides the ability to record as many errors as possible and report all of them back to the administrator. All migration exception messages are displayed to the administrator.
25.1.13. MigrationWarning
An org.codice.ddf.migration.MigrationWarning
should be used when a migratable wants to warn the administrator that certain aspects of the export or the import may cause problems. For example, if an absolute path is encountered, that path may not exist on the target system and cause the installation to fail.
All migration warning messages are displayed to the administrator.
25.1.14. MigrationInformation
An org.codice.ddf.migration.MigrationInformation
should be used when a migratable simply wants to provide useful information to the administrator. All
migration information messages are displayed to the administrator.
25.1.15. MigrationSuccessfulInformation
The org.codice.ddf.migration.MigrationSuccessfulInformation
can be used to further qualify an information message as representing the success of an operation.
26. Security Framework
The DDF Security Framework utilizes Apache Shiro as the underlying security framework. The classes mentioned in this section will have their full package name listed, to make it easy to tell which classes come with the core Shiro framework and which are added by DDF.
26.1. Subject
ddf.security.Subject <extends> org.apache.shiro.subject.Subject
The Subject is the key object in the security framework. Most of the workflow and implementations revolve around creating and using a Subject. The Subject object in DDF is a class that encapsulates all information about the user performing the current operation. The Subject can also be used to perform permission checks to see if the calling user has acceptable permission to perform a certain action (e.g., calling a service or returning a metacard). This class was made DDF-specific because the Shiro interface cannot be added to the Query Request property map.
Classname | Description |
---|---|
ddf.security.impl.SubjectImpl |
Extends |
26.1.1. Security Manager
ddf.security.service.SecurityManager
The Security Manager is a service that handles the creation of Subject objects.
A proxy to this service should be obtained by an endpoint to create a Subject and add it to the outgoing QueryRequest
.
The Shiro framework relies on creating the subject by obtaining it from the current thread.
Due to the multi-threaded and stateless nature of the DDF framework, utilizing the Security Manager interface makes retrieving Subjects easier and safer.
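A rough sketch of that pattern follows; getSubject and the SECURITY_SUBJECT property key are assumptions to verify against the Security Core API, and the token variable stands in for whatever credential the endpoint extracted from the incoming request.
// Hedged sketch: method and constant names are assumptions to verify against ddf.security.
Subject subject = securityManager.getSubject(token); // token from the incoming request

Map<String, Serializable> properties = new HashMap<>();
properties.put(SecurityConstants.SECURITY_SUBJECT, subject); // assumed property key

QueryRequest queryRequest = new QueryRequestImpl(query, properties);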
Classname | Description |
---|---|
|
This implementation of the Security Manager handles taking in both |
26.1.2. Realms
DDF uses Apache Shiro for the concept of Realms for Authentication and Authorization. Realms are components that access security data such as users or permissions.
26.1.2.1. Authenticating Realms
org.apache.shiro.realm.AuthenticatingRealm
Authenticating Realms are used to authenticate an incoming authentication token and create a Subject on successful authentication. A Subject is an application user and all available security-relevant information about that user.
Classname | Description |
---|---|
|
This realm delegates authentication to the Secure Token Service (STS). It creates a |
26.1.2.2. Authorizing Realms
org.apache.shiro.realm.AuthorizingRealm
Authorizing Realms are used to perform authorization on the current Subject.
These are used when performing both service authorization and filtering.
They are passed in the AuthorizationInfo
of the Subject along with the permissions of the object wanting to be accessed. The response from these realms is a true (if the Subject has permission to access) or false (if the Subject does not).
Classname | Description |
---|---|
|
The |
|
This filter is the main security filter that works with a number of handlers to protect a variety of web contexts, each using different authentication schemes and policies. |
|
This handler is executed by the WebSSOFilter for any contexts configured to use it.
This handler should always come first when configured in the Web Context Policy Manager, as it provides a caching capability to web contexts that use it.
The handler will first check for the existence of an HTTP Authorization header of type SAML, whose value is a Base64 + deflate SAML assertion.
If that is not found, then the handler will check for the existence of the deprecated |
|
Checks for basic authentication credentials in the http request header.
If they exist, they are retrieved and passed to the |
|
Handler for PKI based authentication.
X509 chain will be extracted from the HTTP request and converted to a |
|
Handler that allows guest user access via a guest user account.
The guest account credentials are configured via the org.codice.ddf.security.claims.guest.GuestClaimsHandler.
The |
|
This filter runs immediately after the WebSSOFilter and exchanges any authentication information found in the request with a Subject via Shiro. |
|
This filter runs immediately after the |
|
This is an abstract authenticating realm that exchanges an |
|
This realm is an implementation of |
|
This is an abstract authorizing realm that takes care of caching and parsing the Subject’s |
|
This realm performs the authorization decision and may or may not delegate out to the external XACML processing engine. It uses the incoming permissions to create a decision. However, it is possible to extend this realm using the ddf.security.policy.extension.PolicyExtension interface. This interface allows an integrator to add additional policy information to the PDP that can’t be covered via its generic matching policies. This approach is often easier to configure for those that are not familiar with XACML. |
|
A number of STS validators are provided for X.509 (BinarySecurityToken), UsernameToken, SAML Assertion, and DDF custom tokens.
The DDF custom tokens are all |
Warning
|
An update was made to the SAML Assertion Handler to pass SAML assertions through the Authorization HTTP header. Cookies are still accepted and processed to maintain legacy federation compatibility, but assertions are sent in the header on outbound requests. While a machine’s identity will still federate between versions, a user’s identity will ONLY be federated when a DDF version 2.7.x server communicates with a DDF version 2.8.x+ server, or between two servers whose versions are 2.8.x or higher. |
26.2. Security Core
The Security Core application contains all of the necessary components that are used to perform security operations (authentication, authorization, and auditing) required in the framework.
26.2.1. Security Core API
The Security Core API contains all of the DDF APIs that are used to perform security operations within DDF.
26.2.1.1. Installing the Security Core API
The Security Services App installs the Security Core API by default. Do not uninstall the Security Core API as it is integral to system function and all of the other security services depend upon it.
26.2.1.2. Configuring the Security Core API
The Security Core API has no configurable properties.
26.2.2. Security Core Implementation
The Security Core Implementation contains the reference implementations for the Security Core API interfaces that come with the DDF distribution.
26.2.2.1. Installing the Security Core Implementation
The Security Core app installs this bundle by default. It is recommended to use this bundle as it contains the reference implementations for many classes used within the Security Framework.
26.2.2.2. Configuring the Security Core Implementation
The Security Core Implementation has no configurable properties.
26.2.3. Security Core Commons
The Security Core Commons bundle contains helper and utility classes that are used within DDF to help with performing common security operations.
Most notably, this bundle contains the ddf.security.common.audit.SecurityLogger
class that performs the security audit logging within DDF.
26.2.3.1. Configuring the Security Core Commons
The Security Core Commons bundle has no configurable properties.
26.3. Security IdP
The Security IdP application provides service provider handling that satisfies the SAML 2.0 Web SSO profile in order to support external IdPs (Identity Providers) or SPs (Service Providers). This capability allows use of DDF as the SSO solution for an entire enterprise.
Bundle Name | Located in Feature | Description |
---|---|---|
|
|
The IdP client that interacts with the specified Identity Provider. |
|
|
An internal Identity Provider solution. |
Note
|
Limitations
The internal Identity Provider solution should be used in favor of any external solutions until the IdP Service Provider fully satisfies the SAML 2.0 Web SSO profile. |
26.4. Security Encryption
The Security Encryption application offers an encryption framework and service implementation for other applications to use. This service is commonly used to encrypt and decrypt default passwords that are located within the metatype and Admin Console.
The encryption service and encryption command, which are based on tink, provide an easy way for developers to add encryption capabilities to DDF.
26.4.1. Security Encryption API
The Security Encryption API bundle provides the framework for the encryption service. Applications that use the encryption service should use the interfaces defined within it instead of calling an implementation directly.
26.4.1.1. Installing Security Encryption API
This bundle is installed by default as part of the security-encryption
feature.
Many applications that come with DDF depend on this bundle and it should not be uninstalled.
26.4.1.2. Configuring the Security Encryption API
The Security Encryption API has no configurable properties.
26.4.2. Security Encryption Implementation
The Security Encryption Implementation bundle contains all of the service implementations for the Encryption Framework and exports those implementations as services to the OSGi service registry.
26.4.2.1. Installing Security Encryption Implementation
This bundle is installed by default as part of the security-encryption
feature.
Other projects are dependent on the services this bundle exports and it should not be uninstalled unless another security service implementation is being added.
26.4.2.2. Configuring Security Encryption Implementation
The Security Encryption Implementation has no configurable properties.
26.4.3. Security Encryption Commands
The Security Encryption Commands bundle enhances the DDF system console by allowing administrators and integrators to encrypt and decrypt values directly from the console.
The security:encrypt
command allows plain text to be encrypted using HMAC + AES for encryption with a randomly generated key that is created when the system is installed.
This is useful when displaying password fields in a GUI.
Below is an example of the security:encrypt command used to encrypt the plain text "myPasswordToEncrypt".
The output, bR9mJpDVo8bTRwqGwIFxHJ5yFJzatKwjXjIo/8USWm8=
, is the encrypted value.
ddf@local>security:encrypt myPasswordToEncrypt
bR9mJpDVo8bTRwqGwIFxHJ5yFJzatKwjXjIo/8USWm8=
26.4.3.1. Installing the Security Encryption Commands
This bundle is installed by default with the security-encryption
feature.
This bundle is tied specifically to the DDF console and can be uninstalled if not needed.
When uninstalled, however, administrators will not be able to encrypt and decrypt data from the console.
26.4.3.2. Configuring the Security Encryption Commands
The Security Encryption Commands have no configurable properties.
26.5. Security LDAP
The DDF LDAP application allows the user to configure either an embedded or a standalone LDAP server. The provided features contain a default set of schemas and users loaded to help facilitate authentication and authorization testing.
26.5.1. Embedded LDAP Server
DDF includes an embedded LDAP server (OpenDJ) for testing and demonstration purposes.
Warning
|
The embedded LDAP server is intended for testing purposes only and is not recommended for production use. |
26.5.1.1. Installing the Embedded LDAP Server
The embedded LDAP server is not installed by default with a standard installation.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
opendj-embedded
feature.
26.5.1.2. Configuring the Embedded LDAP
Configure the Embedded LDAP from the Admin Console:
-
Navigate to the Admin Console.
-
Select the OpenDj Embedded application.
-
Select the Configuration tab.
Configuration Name | Description |
---|---|
LDAP Port | Sets the port for LDAP (plaintext and startTLS). 0 will disable the port. |
LDAPS Port | Sets the port for LDAPS. 0 will disable the port. |
Base LDIF File | Location on the server for a LDIF file. This file will be loaded into the LDAP and overwrite any existing entries. This option should be used when updating the default groups/users with a new LDIF file for testing. The LDIF file being loaded may contain any LDAP entries (schemas, users, groups, etc.). If the location is left blank, the default base LDIF file that comes with DDF will be used. |
26.5.1.3. Connecting to Standalone LDAP Servers
DDF instances can connect to external LDAP servers by installing and configuring the security-sts-ldaplogin
and security-sts-ldapclaimshandler
features detailed here.
In order to connect to more than one LDAP server, configure these features for each LDAP server.
26.5.1.4. Embedded LDAP Configuration
The Embedded LDAP application contains an LDAP server (OpenDJ version 2.6.2) that has a default set of schemas and users loaded to help facilitate authentication and authorization testing.
Protocol | Default Port |
---|---|
LDAP | 1389 |
LDAPS | 1636 |
startTLS | 1389 |
Username | Password | Groups | Description |
---|---|---|---|
 |  |  | General test user for authentication |
 |  |  | General test user for authentication |
 |  |  | General test user for authentication |
 |  |  | General test user for authentication, Admin user for karaf |
 |  |  | General test user for authentication, Admin user for karaf |
 |  |  | General test user for authentication, Admin user for karaf |
 |  |  | General test user for authentication, Admin user for karaf |
 |  |  | General test user for authentication, Admin user for karaf |
 |  |  | General test user for authentication, Admin user for karaf |
 |  |  | General test user for authentication, Admin user for karaf |
 |  |  | Admin user for karaf |
Username | Password | Groups | Attributes | Description |
---|---|---|---|---|
admin | secret |  |  | Administrative User for LDAP |
26.5.1.5. Schemas
The default schemas loaded into the LDAP instance are the same defaults that come with OpenDJ.
Schema File Name | Schema Description |
---|---|
 | This file contains a core set of attribute type and objectclass definitions from several standard LDAP documents, including |
 | This file contains schema definitions from |
 | This file contains the attribute type and |
 | This file contains schema definitions from |
 | This file contains schema definitions from RFC 2713, which defines a mechanism for storing serialized Java objects in the directory server. |
 | This file contains schema definitions from RFC 2714, which defines a mechanism for storing CORBA objects in the directory server. |
 | This file contains schema definitions from RFC 2739, which defines a mechanism for storing calendar and vCard objects in the directory server. Note that the definition in RFC 2739 contains a number of errors, and this schema file has been altered from the standard definition in order to fix a number of those problems. |
 | This file contains schema definitions from RFC 2926, which defines a mechanism for mapping between Service Location Protocol (SLP) advertisements and LDAP. |
 | This file contains schema definitions from RFC 3112, which defines the authentication password schema. |
 | This file contains schema definitions from RFC 3712, which defines a mechanism for storing printer information in the directory server. |
 | This file contains schema definitions from RFC 4403, which defines a mechanism for storing UDDIv3 information in the directory server. |
 | This file contains schema definitions from the |
 | This file contains schema definitions from RFC 4876, which defines a schema for storing Directory User Agent (DUA) profiles and preferences in the directory server. |
 | This file contains schema definitions required when storing Samba user accounts in the directory server. |
 | This file contains schema definitions required for Solaris and OpenSolaris LDAP naming services. |
 | This file contains the attribute type and |
26.5.1.6. Starting and Stopping the Embedded LDAP
The embedded LDAP application installs a feature with the name ldap-embedded
.
Installing and uninstalling this feature will start and stop the embedded LDAP server.
This will also install a fresh instance of the server each time.
If changes need to persist, stop then start the embedded-ldap-opendj
bundle (rather than installing/uninstalling the feature).
All settings, configurations, and changes made to the embedded LDAP instances are persisted across DDF restarts. If DDF is stopped while the LDAP feature is installed and started, it will automatically restart with the saved settings on the next DDF start.
26.5.1.7. Limitations of the Embedded LDAP
Current limitations for the embedded LDAP instances include:
-
Inability to store the LDAP files/storage outside of the DDF installation directory. This results in any LDAP data (i.e., LDAP user information) being lost when the
ldap-embedded
feature is uninstalled. -
Cannot be run standalone from DDF. In order to run
embedded-ldap
, the DDF must be started.
26.5.1.8. External Links for the Embedded LDAP
Location of the default base LDIF file in the DDF source code.
26.5.1.9. LDAP Administration
OpenDJ provides a number of tools for LDAP administration. Refer to the OpenDJ Admin Guide .
26.5.1.10. Downloading the Admin Tools
Download OpenDJ (Version 2.6.4) and the included tool suite.
26.5.1.11. Using the Admin Tools
The admin tools are located in <opendj-installation>/bat
for Windows and <opendj-installation>/bin
for *nix.
These tools can be used to administer both local and remote LDAP servers by setting the host and port parameters appropriately.
In this example, the user Bruce Banner (uid=bbanner) is disabled using the manage-account command on Windows. Run manage-account --help for usage instructions.
D:\OpenDJ-2.4.6\bat>manage-account set-account-is-disabled -h localhost -p 4444
-O true -D "cn=admin" -w secret -b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
    Subject DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Issuer DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Validity:  Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Account Is Disabled:  true
Notice Account Is Disabled: true
in the listing:
D:\OpenDJ-2.4.6\bat>manage-account get-all -h localhost -p 4444
-D "cn=admin" -w secret -b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
    Subject DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Issuer DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Validity:  Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Password Policy DN:  cn=Default Password Policy,cn=Password Policies,cn=config
Account Is Disabled:  true
Account Expiration Time:
Seconds Until Account Expiration:
Password Changed Time:  19700101000000.000Z
Password Expiration Warned Time:
Seconds Until Password Expiration:
Seconds Until Password Expiration Warning:
Authentication Failure Times:
Seconds Until Authentication Failure Unlock:
Remaining Authentication Failure Count:
Last Login Time:
Seconds Until Idle Account Lockout:
Password Is Reset:  false
Seconds Until Password Reset Lockout:
Grace Login Use Times:
Remaining Grace Login Count:  0
Password Changed by Required Time:
Seconds Until Required Change Time:
Password History:
D:\OpenDJ-2.4.6\bat>manage-account clear-account-is-disabled -h localhost -p 4444
-D "cn=admin" -w secret -b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
    Subject DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Issuer DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Validity:  Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Account Is Disabled:  false
Notice Account Is Disabled: false
in the listing.
D:\OpenDJ-2.4.6\bat>manage-account get-all -h localhost -p 4444
-D "cn=admin" -w secret -b "uid=bbanner,ou=users,dc=example,dc=com"
The server is using the following certificate:
    Subject DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Issuer DN:  CN=Win7-1, O=Administration Connector Self-Signed Certificate
    Validity:  Wed Sep 04 15:36:46 MST 2013 through Fri Sep 04 15:36:46 MST 2015
Do you wish to trust this certificate and continue connecting to the server?
Please enter "yes" or "no":yes
Password Policy DN:  cn=Default Password Policy,cn=Password Policies,cn=config
Account Is Disabled:  false
Account Expiration Time:
Seconds Until Account Expiration:
Password Changed Time:  19700101000000.000Z
Password Expiration Warned Time:
Seconds Until Password Expiration:
Seconds Until Password Expiration Warning:
Authentication Failure Times:
Seconds Until Authentication Failure Unlock:
Remaining Authentication Failure Count:
Last Login Time:
Seconds Until Idle Account Lockout:
Password Is Reset:  false
Seconds Until Password Reset Lockout:
Grace Login Use Times:
Remaining Grace Login Count:  0
Password Changed by Required Time:
Seconds Until Required Change Time:
Password History:
26.6. Security PDP
The Security Policy Decision Point (PDP) module contains services that are able to perform authorization decisions based on configurations and policies.
In the Security Framework, these components are called realms, and they implement the org.apache.shiro.realm.Realm
and org.apache.shiro.authz.Authorizer
interfaces.
Although these components perform decisions on access control, enforcement of this decision is performed by components within the notional PEP application.
26.6.1. Security PDP AuthZ Realm
The Security PDP AuthZ Realm exposes a realm service that makes decisions on authorization requests using the attributes stored within the metacard to determine if access should be granted.
This realm can use XACML and will delegate decisions to an external processing engine if internal processing fails.
Decisions are first made based on the "match-all" and "match-one" logic.
Any attributes listed in the "match-all" or "match-one" sections will not be passed to the XACML processing engine; they will be matched internally.
For performance reasons, it is recommended to list as many attributes as possible in these sections to avoid calling out to the XACML processing engine.
If all decisions should be passed to the XACML processing engine, remove all of the "match-all" and "match-one" configurations.
The configuration below provides the mapping between user attributes and the attributes being asserted; one map exists for each type of mapping, and each map may contain multiple values.
- Match-All Mapping
-
This mapping is used to guarantee that all values present in the specified metacard attribute exist in the corresponding user attribute.
- Match-One Mapping
-
This mapping is used to guarantee that at least one of the values present in the specified metacard attribute exists in the corresponding user attribute.
26.6.1.1. Configuring the Security PDP AuthZ Realm
-
Navigate to the Admin Console.
-
Select Security Application.
-
Select Configuration tab.
-
Select Security AuthZ Realm.
See Security AuthZ Realm for all possible configurations.
26.6.2. Guest Interceptor
The goal of the GuestInterceptor
is to allow non-secure clients (such as SOAP requests without security headers) to access secure service endpoints.
All requests to secure endpoints must satisfy the WS-SecurityPolicy that is included in the WSDL.
Rather than reject requests without user credentials, the guest interceptor detects the missing credentials and inserts an assertion that represents the "guest" user. The attributes included in this guest user assertion are configured by the administrator to represent any unknown user on the current network.
26.6.2.1. Installing Guest Interceptor
The GuestInterceptor
is installed by default with Security Application.
26.6.2.2. Configuring Guest Interceptor
Configure the Guest Interceptor from the Admin Console:
-
Navigate to the Admin Console at https://{FQDN}:{PORT}/admin
-
Select the Security application.
-
Select the Configuration tab.
-
Select the Security STS Guest Claims Handler configuration.
-
Select the
+
next to Attributes to add a new attribute. -
Add any additional attributes that will apply to every user.
-
Select Save changes.
Once these configurations have been added, the GuestInterceptor is ready for use. Both secure and non-secure requests will be accepted by all secure DDF service endpoints.
26.7. Web Service Security Architecture
The Web Service Security (WSS) functionality that comes with DDF is integrated throughout the system. This is a central resource describing how all of the pieces work together and where they are located within the system.
DDF comes with a Security Framework and Security Services. The Security Framework is the set of APIs that define the integration with the DDF framework and the Security Services are the reference implementations of those APIs built for a realistic end-to-end use case.
26.7.1. Securing REST
The Jetty Authenticator is the topmost handler of all requests. It initializes all Security Filters and runs them in order according to service ranking:
-
The Web SSO Filter reads from the web context policy manager and functions as the first decision point. If the request is from a whitelisted context, no further authentication is needed and the request goes directly to the desired endpoint. If the context is not on the whitelist, the filter will attempt to get a claims handler for the context. The filter loops through all configured context handlers until one signals that it has found authentication information that it can use to build a token. This configuration can be changed by modifying the web context policy manager configuration. If unable to resolve the context, the filter will return an authentication error and the process stops. If a handler is successfully found, an auth token is assigned and the request continues to the login filter.
-
The Login Filter receives a token and returns a subject. To retrieve the subject, the token is sent through Shiro to the STS Realm where the token will be exchanged for a SAML assertion through a SOAP call to an STS server.
-
If the Subject is returned, the request moves to the AuthZ Filter to check permissions on the user. If the user has the correct permissions to access that web context, the request can hit the endpoint.
IdP Architecture
The IdP Handler is a configured handler on the Web SSO Filter just like the other handlers in the previous diagram. The IdP Handler and the Assertion Consumer Service are both part of the IdP client that can be used to interface with any compliant SAML 2.0 Web SSO Identity Provider.
The Metadata Exchange happens asynchronously from any login event.
The exchange can happen via HTTP or File, or the metadata XML itself can be pasted into the configuration for either the IdP client or the IdP server that the system ships with.
The metadata contains information about what bindings are accepted by the client or server and whether or not either expects messages to be signed, etc.
The redirect from the Assertion Consumer Service to the Endpoint will cause the client to pass back through the entire filter chain, which will get caught at the Has Session
point of the IdP Handler.
The request will proceed through the rest of the filters as any other connection would in the previous diagram.
Unauthenticated non-browser clients that pass the HTTP headers signaling that they understand SAML ECP can authenticate via that mechanism as explained below.
SAML ECP can be used to authenticate a non-browser client or non-person entity (NPE).
This method of authentication is useful when there is no human in the loop, but authentication with an IdP is still desired.
The IdP Handler will send a PAOS (Reverse SOAP) request as an initial response back to the Secure Client, assuming the client has sent the necessary HTTP headers to declare that it supports this function.
That response does not complete the request/response loop, but is instead caught by a SOAP intermediary, which is implemented through a CXF interceptor.
The PAOS response contains an <AuthNRequest>
request message, which is intended to be rerouted to an IdP via SOAP.
The SOAP intermediary will then contact an IdP (selection of the IdP is not covered by the spec).
The IdP will either reject the login attempt, or issue a Signed <Response>
that is to be delivered to the Assertion Consumer Service by the intermediary.
The method of logging into the IdP is not covered by the spec and is up to the implementation.
The SP is then signaled to supply the originally requested resource, assuming the signed Response message is valid and the user has permission to view the resource.
The ambiguity in parts of the spec with regard to selecting an IdP to use and logging into that IdP can lead to integration issues between different systems. However, this method of authentication is not necessarily expected to work by default with anything other than other instances of DDF. It does, however, provide a starting point that downstream projects can leverage in order to provide ECP based authentication for their particular scenario or to connect to other systems that utilize SAML ECP.
26.7.2. Securing SOAP
26.7.2.1. SOAP Secure Client
When a SOAP secure client calls an endpoint, it first requests the WSDL from the endpoint, and the SOAP endpoint returns the WSDL. The client then calls the STS for an authentication token in order to proceed. If the client receives the token, it makes a secure call to the endpoint and receives results.
26.7.2.2. Policy-unaware SOAP Client
If an endpoint is called from a non-secure client, at the point of the initial call the Guest Interceptor catches the request and prepares it to be accepted by the endpoint.
First, the interceptor reads the configured policy, builds a security header, and gets an anonymous SAML assertion.
Using this, it makes a getSubject
call which is sent through Shiro to the STS realm.
Upon success, the STS realm returns the subject and the call is made to the endpoint.
26.8. Security PEP
The Security Policy Enforcement Point (PEP) application contains bundles that allow for policies to be enforced at various parts of the system, for example: to reach contexts, view metacards, access catalog operations, and others.
26.8.1. Security PEP Interceptor
The Security PEP Interceptor bundle contains the ddf.security.pep.interceptor.PEPAuthorizingInterceptor
class.
This class uses CXF to intercept incoming SOAP messages and enforces service authorization policies by sending the service request to the security framework.
26.8.1.1. Installing the Security PEP Interceptor
This bundle is not installed by default but can be added by installing the security-pep-serviceauthz
feature.
Warning
|
To perform service authorization within a default install of DDF, this bundle MUST be installed. |
26.8.1.2. Configuring the Security PEP Interceptor
The Security PEP Interceptor has no configurable properties.
26.9. Filtering
Metacard filtering is performed by the Filter Plugin after a query has been performed, but before the results are returned to the requestor.
Each metacard result will contain security attributes that are populated by the CatalogFramework based on the PolicyPlugins (not provided; create your own plugin for your specific metadata) that populate this attribute.
The security attribute is a HashMap containing a set of keys that map to lists of values.
The metacard is then processed by a filter plugin that creates a KeyValueCollectionPermission
from the metacard’s security attribute.
This permission is then checked against the user subject to determine if the subject has the correct claims to view that metacard.
The decision to filter the metacard eventually relies on the PDP (feature:install security-pdp-authz
).
The PDP returns a decision, and the metacard will either be filtered or allowed to pass through.
The security attributes populated on the metacard are completely dependent on the type of the metacard. Each type of metacard must have its own PolicyPlugin that reads the metadata being returned and returns the metacard’s security attribute. If the subject permissions are missing during filtering, all resources will be filtered.
<metacard>
<security>
<map>
<entry key="entry1" value="A,B" />
<entry key="entry2" value="X,Y" />
<entry key="entry3" value="USA,GBR" />
<entry key="entry4" value="USA,AUS" />
</map>
</security>
</metacard>
<user>
<claim name="claim1">
<value>A</value>
<value>B</value>
</claim>
<claim name="claim2">
<value>X</value>
<value>Y</value>
</claim>
<claim name="claim3">
<value>USA</value>
</claim>
<claim name="claim4">
<value>USA</value>
</claim>
</user>
In the above example, the user’s claims are represented very simply and are similar to how they would actually appear in a SAML 2 assertion. Each of these user (or subject) claims will be converted to a KeyValuePermission object. These permission objects will be implied against the permission object generated from the metacard record. In this particular case, the metacard might be allowed if the policy is configured appropriately because all of the permissions line up correctly.
To enable filtering on a new type of record, implement a PolicyPlugin that is able to read the string metadata contained within the metacard record. Note that, in DDF, there is no default plugin that parses a metacard. A plugin must be created to create a policy for the metacard.
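As a rough illustration, the hypothetical helper below builds the security attribute map that such a PolicyPlugin would return (for example, from its post-query processing) for the metacard shown above; the class name, the parsing input, and the "entry3" key are illustrative only.
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Hypothetical: derive the security attribute map for a metacard. In a real
// PolicyPlugin this logic would parse the metacard's metadata string.
public class SamplePolicyExtractor {

    public Map<String, Set<String>> extractPolicy(String commaSeparatedCountries) {
        Map<String, Set<String>> policy = new HashMap<>();
        // e.g. "USA,GBR" taken from the metacard's metadata
        policy.put("entry3",
                new HashSet<>(Arrays.asList(commaSeparatedCountries.split(","))));
        return policy;
    }
}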
26.10. Expansion Service
The Expansion Service and its corresponding expansion-related commands provide an easy way for developers to add expansion capabilities to DDF during user attribute and metadata card processing. In addition to these two defined uses of the expansion service, developers are free to utilize the service in their own implementations.
Each instance of the expansion service consists of a collection of rulesets. Each ruleset consists of a key value and its associated set of rules. Callers of the expansion service provide a key and a value to be expanded. The expansion service then looks up the set of rules for the specified key. The expansion service cumulatively applies each of the rules in the set, starting with the original value. The result is returned to the caller.
Key (Attribute) | Rules (original → new) |
---|---|
key1 |  |
 | value2 →  |
 | value3 →  |
key2 |  |
 | value2 →  |
Note that the rules listed for each key are processed in order, so they may build upon each other, i.e., a new value from the new replacement string may be expanded by a subsequent rule.
In the example, Location:Goodyear would expand to Goodyear AZ USA, and Title:VP-Sales would expand to VP-Sales VP Sales.
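A caller might invoke the service as in the sketch below; the Expansion interface name and the expand signature are assumptions for illustration and should be checked against the Expansion Service API in the running system.
import java.util.HashSet;
import java.util.Set;

import ddf.security.expansion.Expansion;

public class ExpansionExample {

    public Set<String> expandLocation(Expansion expansionService) {
        Set<String> values = new HashSet<>();
        values.add("Goodyear");
        // With the example ruleset, the expanded string "Goodyear AZ USA" is
        // split on the separator, yielding Goodyear, AZ, and USA.
        return expansionService.expand("Location", values);
    }
}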
To use the expansion service, modify the following two files within the <DDF_HOME>/etc/pdp
directory:
-
<DDF_HOME>/etc/pdp/ddf-metacard-attribute-ruleset.cfg
-
<DDF_HOME>/etc/pdp/ddf-user-attribute-ruleset.cfg
The examples below use the following collection of rulesets:
Key (Attribute) | Rules (original → new) |
---|---|
Location | Goodyear → Goodyear AZ |
 | AZ → AZ USA |
 | CA → CA USA |
Title | VP-Sales → VP-Sales VP Sales |
 | VP-Engineering → VP-Engineering VP Engineering |
It is expected that multiple instances of the expansion service will be running at the same time. Each instance of the service defines a unique property that is useful for retrieving specific instances of the expansion service. There are two pre-defined instances used by DDF: one for expanding user attributes and one for metacard attributes.
Property Name | Value | Description |
---|---|---|
mapping |  | This instance is configured with rules that expand the user’s attribute values for security checking. |
mapping |  | This instance is configured with rules that expand the metacard’s security attributes before comparing with the user’s attributes. |
Additional instances of the expansion service can be configured using a configuration file. The configuration file can have three different types of lines:
- comments
-
any line prefixed with the
#
character is ignored as a comment (for readability, blank lines are also ignored) - attribute separator
-
a line starting with
separator=
defines the attribute separator string. - rule
-
all other lines are assumed to be rules defined in a string format
<key>:<original value>:<new value>
The following configuration file defines the rules shown above in the example table (using the space as a separator):
# This defines the separator that will be used when the expansion string contains multiple
# values - each will be separated by this string. The expanded string will be split at the
# separator string and each resulting attribute added to the attribute set (duplicates are
# suppressed). No value indicates the default value of ' ' (space).
separator=
# The following rules define the attribute expansion to be performed. The rules are of the
# form:
#       <attribute name>:<original value>:<expanded value>
# The rules are ordered, so replacements from the first rules may be found in the original
# values of subsequent rules.
Location:Goodyear:Goodyear AZ
Location:AZ:AZ USA
Location:CA:CA USA
Title:VP-Sales:VP-Sales VP Sales
Title:VP-Engineering:VP-Engineering VP Engineering
DDF includes commands to work with the Expansion service.
Title | Namespace | Description |
---|---|---|
DDF::Security::Expansion::Commands | security | The expansion commands provide detailed information about the expansion rules in place and the ability to see the results of expanding specific values against the active ruleset. |
Command | Description |
---|---|
 | Runs the expansion service on the provided data, returning the expanded value. It takes an attribute and an original value, expands the original value using the current expansion service and ruleset, and dumps the results. |
 | Displays the ruleset for each active expansion service. |
26.11. Security Token Service
The Security Token Service (STS) is a service running in DDF that generates SAML v2.0 assertions. These assertions are then used to authenticate a client allowing them to issue other requests, such as ingests or queries to DDF services.
The STS is an extension of Apache CXF-STS. It is a SOAP web service that utilizes WS-Trust. The generated SAML assertions contain attributes about a user and are used by the Policy Enforcement Point (PEP) in the secure endpoints. Specific configuration details on the bundles that come with DDF can be found on the Security STS application page. This page details all of the STS components that come out of the box with DDF, along with configuration options, installation help, and which services they import and export.
The STS server contains validators, claim handlers, and token issuers to process incoming requests. When a request is received, the validators first ensure that it is valid. The validators verify authentication against configured services, such as LDAP, DIAS, PKI. If the request is found to be invalid, the process ends and an error is returned. Next, the claims handlers determine how to handle the request, adding user attributes or properties as configured. The token issuer creates a SAML 2.0 assertion and associates it with the subject. The STS server sends an assertion back to the requestor, which is used to authenticate and authorize subsequent SOAP and REST requests.
The STS can be used to generate SAML v2.0 assertions via a SOAP web service request. Out of the box, the STS supports authentication from existing SAML tokens, username/password, and x509 certificates. It also supports retrieving claims using LDAP and properties files.
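For example, a SOAP client built on CXF could request a token with the STSClient utility, as sketched below; the WSDL location, QNames, and property keys are illustrative and must be adapted to the deployed STS.
import java.util.HashMap;
import java.util.Map;

import org.apache.cxf.Bus;
import org.apache.cxf.BusFactory;
import org.apache.cxf.ws.security.tokenstore.SecurityToken;
import org.apache.cxf.ws.security.trust.STSClient;

public class StsClientExample {

    public SecurityToken requestToken() throws Exception {
        Bus bus = BusFactory.getDefaultBus();
        STSClient stsClient = new STSClient(bus);
        // Hypothetical address; use the WSDL of the deployed STS.
        stsClient.setWsdlLocation("https://<FQDN>:<PORT>/services/SecurityTokenService?wsdl");
        stsClient.setServiceName("{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService");
        stsClient.setEndpointName("{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}STS_Port");

        // Username/password authentication; SAML and x509 are also supported.
        Map<String, Object> props = new HashMap<>();
        props.put("security.username", "<username>");
        props.put("security.password", "<password>");
        stsClient.setProperties(props);

        return stsClient.requestSecurityToken();
    }
}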
26.11.1. STS Claims Handlers
Claims handlers are classes that convert the incoming user credentials into a set of attribute claims that will be populated in the SAML assertion. An example in action would be the LDAPClaimsHandler that takes in the user’s credentials and retrieves the user’s attributes from a backend LDAP server. These attributes are then mapped and added to the SAML assertion being created. Integrators and developers can add more claims handlers that can handle other types of external services that store user attributes.
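A custom handler implements the CXF ClaimsHandler interface that the STS consumes. The sketch below follows the older CXF signatures (URI claim types); newer CXF releases changed these to Strings, so verify against the CXF version bundled with DDF. The static role value is purely illustrative.
import java.net.URI;
import java.util.Collections;
import java.util.List;

import org.apache.cxf.rt.security.claims.ClaimCollection;
import org.apache.cxf.sts.claims.ClaimsHandler;
import org.apache.cxf.sts.claims.ClaimsParameters;
import org.apache.cxf.sts.claims.ProcessedClaim;
import org.apache.cxf.sts.claims.ProcessedClaimCollection;

public class StaticRoleClaimsHandler implements ClaimsHandler {

    private static final URI ROLE_CLAIM =
            URI.create("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role");

    @Override
    public List<URI> getSupportedClaimTypes() {
        return Collections.singletonList(ROLE_CLAIM);
    }

    @Override
    public ProcessedClaimCollection retrieveClaimValues(
            ClaimCollection claims, ClaimsParameters parameters) {
        ProcessedClaimCollection processed = new ProcessedClaimCollection();
        ProcessedClaim claim = new ProcessedClaim();
        claim.setClaimType(ROLE_CLAIM);
        // Illustrative static value; a real handler would consult an
        // external attribute store such as LDAP.
        claim.addValue("guest");
        processed.add(claim);
        return processed;
    }
}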
26.11.2. Security STS
The Security STS application contains the bundles and services necessary to run and talk to a Security Token Service (STS). It builds off of the Apache CXF STS code and adds components specific to DDF functionality.
Bundle Name | Located in Feature | Description/Link to Bundle Page |
---|---|---|
 |  |  |
 |  |  |
 |  |  |
 |  |  |
 |  | Contains the default CXF SAML validator and exposes it as a service for the STS. |
 |  | Contains the default CXF x509 validator and exposes it as a service for the STS. |
26.11.3. Security STS Client Config
The Security STS Client Config bundle keeps track of and exposes configurations and settings for the CXF STS client. This client can be used by other services to create their own STS client. Once a service is registered as a watcher of the configuration, it will be updated whenever the settings change for the STS client.
26.11.3.1. Installing the Security STS Client Config
This bundle is installed by default.
26.11.3.2. Configuring the Security STS Client Config
Configure the Security STS Client Config from the Admin Console:
-
Navigate to the Admin Console.
-
Select Security Application.
-
Select Configuration tab.
-
Select Security STS Client.
See Security STS Client configurations for all possible configurations.
26.11.4. External/WS-S STS Support
This configuration works just like the STS Client Config for the internal STS, but produces standard requests instead of the custom DDF ones. It supports two new auth types for the context policy manager, WSSBASIC and WSSPKI. Use these auth types when connecting to a non-DDF STS or if ignoring realms.
26.11.4.1. Security STS Address Provider
The Security STS Address Provider allows selection of which STS address will be used (e.g., in SOAP sources) by clients of this service. The default is off (internal).
26.11.5. Security STS LDAP Login
The Security STS LDAP Login bundle enables functionality within the STS that allows it to use an LDAP to perform authentication when passed a UsernameToken
in a RequestSecurityToken
SOAP request.
26.11.5.1. Installing the Security STS LDAP Login
This bundle is not installed by default but can be added by installing the security-sts-ldaplogin
feature.
26.11.5.2. Configuring the Security STS LDAP Login
Configure the Security STS LDAP Login from the Admin Console:
-
Navigate to the Admin Console.
-
Select Security Application.
-
Select Configuration tab
-
Select Security STS LDAP Login.
Configuration Name | Default Value | Additional Information |
---|---|---|
LDAP URL |  |  |
StartTLS |  | Ignored if the URL uses ldaps. |
LDAP Bind User DN |  | This user should have the ability to verify passwords and read attributes for any user. |
LDAP Bind User Password |  | This password value is encrypted by default using the Security Encryption application. |
LDAP Group User Membership Attribute |  | Attribute used as the membership attribute for the user in the group. Usually this is uid, cn, or something similar. |
LDAP User Login Attribute |  | Attribute used as the login username. Usually this is uid, cn, or something similar. |
LDAP Base User DN |  |  |
LDAP Base Group DN |  |  |
26.11.6. Security STS LDAP Claims Handler
The Security STS LDAP Claims Handler bundle adds functionality to the STS server that allows it to retrieve claims from an LDAP server. It also adds mappings for the LDAP attributes to the STS SAML claims.
Note
|
All claims handlers are queried for user attributes regardless of realm. This means that two different users with the same username in different LDAP servers will end up with both of their claims in each of their individual assertions. |
26.11.6.1. Installing Security STS LDAP Claims Handler
This bundle is not installed by default and can be added by installing the
security-sts-ldapclaimshandler
feature.
26.11.6.2. Configuring the Security STS LDAP Claims Handler
Configure the Security STS LDAP Claims Handler from the Admin Console:
-
Navigate to the Admin Console.
-
Select Security Application
-
Select Configuration tab.
-
Select Security STS LDAP and Roles Claims Handler.
Configuration Name | Default Value | Additional Information |
---|---|---|
LDAP URL |  |  |
StartTLS |  | Ignored if the URL uses ldaps. |
LDAP Bind User DN |  | This user should have the ability to verify passwords and read attributes for any user. |
LDAP Bind User Password |  | This password value is encrypted by default using the Security Encryption application. |
LDAP Username Attribute |  |  |
LDAP Base User DN |  |  |
LDAP Group ObjectClass |  |  |
LDAP Membership Attribute |  | Attribute used to designate the user’s name as a member of the group in LDAP. Usually this is member or uniqueMember. |
LDAP Base Group DN |  |  |
User Attribute Map File |  | Properties file that contains mappings from Claim=LDAP attribute. |
Registered Interface | Availability | Multiple |
---|---|---|
 | optional | false |
Registered Interface | Implementation Class | Properties Set |
---|---|---|
 |  | Properties from the settings |
 |  | Properties from the settings |
26.11.7. Security STS Server
The Security STS Server is a bundle that starts up an implementation of the CXF STS. The STS obtains many of its configurations (Claims Handlers, Token Validators, etc.) from the OSGi service registry as those items are registered as services using the CXF interfaces. The various services that the STS Server imports are listed in the Implementation Details section of this page.
Note
|
The WSDL for the STS is located at the |
26.11.7.1. Installing the Security STS Server
This bundle is installed by default and is required for DDF to operate.
26.11.7.2. Configuring the Security STS Server
Configure the Security STS Server from the Admin Console:
-
Navigate to the Admin Console.
-
Select Security Application
-
Select Configuration tab.
-
Select Security STS Server.
Configuration Name | Default Value | Additional Information |
---|---|---|
SAML Assertion Lifetime |  |  |
Token Issuer |  | The name of the server issuing tokens. Generally this is the unique identifier of this IdP. |
Signature Username |  | Alias of the private key in the STS Server’s keystore used to sign messages. |
Encryption Username |  | Alias of the private key in the STS Server’s keystore used to encrypt messages. |
26.11.8. Security STS Service
The Security STS Service performs authentication of a user by delegating the authentication request to an STS. This differs from the services located within the Security PDP application, which perform only authorization, not authentication.
26.11.8.1. Installing the Security STS Realm
This bundle is installed by default and should not be uninstalled.
26.11.8.2. Configuring the Security STS Realm
The Security STS Realm has no configurable properties.
Registered Interface | Availability | Multiple |
---|---|---|
 | optional | false |
Registered Interfaces | Implementation Class | Properties Set |
---|---|---|
 |  | None |
26.12. Federated Identity
Each instance of DDF may be configured with its own security policy that determines the resources a user may access and the actions they may perform. To decide whether a given request is permitted, DDF references the SAML assertion stored internally in the requestor’s Subject. This assertion is generated by the STS during authentication and contains a collection of attributes that identify the requestor. Based on these attributes and the configured policy, DDF makes an authorization decision. See Security PDP for more information.
This authorization process works when the requestor authenticates directly with DDF as they are guaranteed to have a Subject. However, when federating, DDF proxies requests to federated Sources and this poses a problem. The requestor doesn’t authenticate with federated Sources, but Sources still need to make authorization decisions.
To solve this problem, DDF uses federated identity. When performing any federated request (query, resource retrieval, etc.), DDF attaches the requestor’s SAML assertion to the outgoing request. The federated Source extracts the assertion and validates its signature to make sure it was generated by a trusted entity. If so, the federated Source will construct a Subject for the requestor and perform the request using that Subject. The Source can then make authorization decisions using the process already described.
How DDF attaches SAML assertions to federated requests depends on the endpoint used to connect to a federated Source. When using a REST endpoint such as CSW, DDF places the assertion in the HTTP Authorization header. When using a SOAP endpoint, it places the assertion in the SOAP security header.
The figure below shows a federated query between two instances of DDF that support federated identity.
-
A user submits a search to DDF.
-
DDF generates a catalog request, attaches the user’s Subject, and sends the request to the Catalog Framework.
-
The Catalog Framework extracts the SAML assertion from the Subject and sends an HTTP request to each federated Source with the assertion attached.
-
A federated Source receives this request and extracts the SAML assertion. The federated Source then validates the authenticity of the SAML Assertion. If the assertion is valid, the federated Source generates a Subject from the assertion to represent the user who initiated the request.
-
The federated Source filters all results that the user is not authorized to view and returns the rest to DDF.
-
DDF takes the results from all Sources, filters those that the user is not authorized to view and returns the remaining results to the user.
Note
|
With federated identity, results are filtered both by the federated Source and client DDF. This is important as each may have different authorization policies. |
Warning
|
Support for federated identity was added in DDF 2.8.x. Federated Sources older than this will not perform any filtering. Instead, they will return all available results and leave filtering up to the client. |
27. Developing DDF Components
Create custom implementations of DDF components.
27.1. Developing Complementary Catalog Frameworks
DDF and the underlying OSGi technology can serve as a robust infrastructure for developing frameworks that complement the Catalog.
27.1.1. Simple Catalog API Implementations
The Catalog API implementations, which are denoted with the suffix of Impl
on the Java file names, have multiple purposes and uses:
-
First, they provide a good starting point for other developers to extend functionality in the framework. For instance, extending the
MetacardImpl
allows developers to focus less on the inner workings of DDF and more on the developer’s intended purposes and objectives. -
Second, the Catalog API Implementations display the proper usage of an interface and an interface’s intentions. Also, they are good code examples for future implementations. If a developer does not want to extend the simple implementations, the developer can at least have a working code reference on which to base future development.
27.1.2. Use of the Whiteboard Design Pattern
The Catalog makes extensive use of the Whiteboard Design Pattern. Catalog Components are registered as services in the OSGi Service Registry, and the Catalog Framework or any other clients tracking the OSGi Service Registry are automatically notified by the OSGi Framework of additions and removals of relevant services.
The Whiteboard Design Pattern is a common OSGi technique that is derived from a technical whitepaper provided by the OSGi Alliance in 2004. It is recommended to use the Whiteboard pattern over the Listener pattern in OSGi because it provides less complexity in code (both on the client and server sides), fewer deadlock possibilities than the Listener pattern, and closely models the intended usage of the OSGi framework.
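A minimal whiteboard-style registration is sketched below, assuming access to a BundleContext (for example, from a BundleActivator); in practice, DDF bundles usually publish services declaratively through Blueprint instead. The service interface name and property key are taken from the transformer examples later in this section.
import java.util.Dictionary;
import java.util.Hashtable;

import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class WhiteboardExample {

    // "transformer" is any object implementing the named interface.
    public ServiceRegistration<?> register(BundleContext bundleContext, Object transformer) {
        Dictionary<String, Object> props = new Hashtable<>();
        props.put("shortname", "sample");
        // Publish into the registry; trackers such as the Catalog Framework
        // are notified automatically when the service appears or disappears.
        return bundleContext.registerService(
                "ddf.catalog.transform.InputTransformer",
                transformer,
                props);
    }
}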
27.1.3. Recommendations for Framework Development
-
Provide extensibility similar to that of the Catalog.
-
Provide a stable API with interfaces and simple implementations (refer to http://www.ibm.com/developerworks/websphere/techjournal/1007_charters/1007_charters.html ).
-
Make use of the Catalog wherever possible to store, search, and transform information.
-
Utilize OSGi standards wherever possible.
-
ConfigurationAdmin
-
MetaType
-
Utilize the sub-frameworks available in DDF.
-
Karaf
-
CXF
-
PAX Web and Jetty
27.1.4. Catalog Framework Reference
The Catalog Framework can be requested from the OSGi Service Registry.
<reference id="catalogFramework" interface="DDF.catalog.CatalogFramework" />
27.1.4.1. Methods
The CatalogFramework
provides convenient methods to transform Metacards
and QueryResponses
using a reference to the CatalogFramework
.
27.1.4.1.1. Create, Update, and Delete Methods
Create, Update, and Delete (CUD) methods add, change, or remove stored metadata in the local Catalog Provider.
public CreateResponse create(CreateRequest createRequest) throws IngestException, SourceUnavailableException;
public UpdateResponse update(UpdateRequest updateRequest) throws IngestException, SourceUnavailableException;
public DeleteResponse delete(DeleteRequest deleteRequest) throws IngestException, SourceUnavailableException;
CUD operations process PolicyPlugin
, AccessPlugin
, and PreIngestPlugin
instances before execution and PostIngestPlugin
instances after execution.
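The following is a minimal ingest sketch, assuming an injected CatalogFramework and the simple Impl classes from ddf.catalog.data.impl and ddf.catalog.operation.impl; verify the class names against your version of DDF.
import java.util.Collections;

import ddf.catalog.CatalogFramework;
import ddf.catalog.data.impl.MetacardImpl;
import ddf.catalog.operation.CreateRequest;
import ddf.catalog.operation.CreateResponse;
import ddf.catalog.operation.impl.CreateRequestImpl;
import ddf.catalog.source.IngestException;
import ddf.catalog.source.SourceUnavailableException;

public class IngestExample {

    public void ingest(CatalogFramework catalogFramework)
            throws IngestException, SourceUnavailableException {
        MetacardImpl metacard = new MetacardImpl();
        metacard.setTitle("Example Record");

        // Wrap the metacard in a CreateRequest; plugins run before and after.
        CreateRequest request = new CreateRequestImpl(Collections.singletonList(metacard));
        CreateResponse response = catalogFramework.create(request);
        response.getCreatedMetacards().forEach(m -> System.out.println(m.getId()));
    }
}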
27.1.4.1.2. Query Methods
Query methods search metadata from available Sources based on the QueryRequest
properties and Federation Strategy.
Sources could include Catalog Provider, Connected Sources, and Federated Sources.
public QueryResponse query(QueryRequest query) throws UnsupportedQueryException, SourceUnavailableException, FederationException;
public QueryResponse query(QueryRequest queryRequest, FederationStrategy strategy) throws SourceUnavailableException, UnsupportedQueryException, FederationException;
Query requests process PolicyPlugin
, AccessPlugin
, and PreQueryPlugin
instances before execution and PolicyPlugin
, AccessPlugin
, and PostQueryPlugin
instances after execution.
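A minimal query sketch, assuming injected CatalogFramework and FilterBuilder services; the Impl class names are taken from ddf.catalog.operation.impl and should be verified against your version of DDF.
import ddf.catalog.CatalogFramework;
import ddf.catalog.data.Metacard;
import ddf.catalog.federation.FederationException;
import ddf.catalog.filter.FilterBuilder;
import ddf.catalog.operation.QueryResponse;
import ddf.catalog.operation.impl.QueryImpl;
import ddf.catalog.operation.impl.QueryRequestImpl;
import ddf.catalog.source.SourceUnavailableException;
import ddf.catalog.source.UnsupportedQueryException;
import org.opengis.filter.Filter;

public class QueryExample {

    public void search(CatalogFramework catalogFramework, FilterBuilder filterBuilder)
            throws UnsupportedQueryException, SourceUnavailableException, FederationException {
        // Build a simple "any text like" filter.
        Filter filter = filterBuilder.attribute(Metacard.ANY_TEXT).is().like().text("example*");

        // false = local query only; pass true to federate across all Sources.
        QueryResponse response =
                catalogFramework.query(new QueryRequestImpl(new QueryImpl(filter), false));
        response.getResults().forEach(r -> System.out.println(r.getMetacard().getTitle()));
    }
}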
27.1.4.1.3. Resource Methods
Resource methods retrieve products from Sources.
public ResourceResponse getEnterpriseResource(ResourceRequest request) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
public ResourceResponse getLocalResource(ResourceRequest request) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
public ResourceResponse getResource(ResourceRequest request, String resourceSiteName) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
Resource requests process PreResourcePlugin instances before execution and PostResourcePlugin instances after execution.
27.1.4.1.4. Source Methods
Source methods can get a list of Source identifiers or request descriptions about Sources.
public Set<String> getSourceIds();
public SourceInfoResponse getSourceInfo(SourceInfoRequest sourceInfoRequest) throws SourceUnavailableException;
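A short sketch of both methods, assuming the SourceInfoRequestEnterprise Impl class from ddf.catalog.operation.impl; verify against your version of DDF.
import java.util.Set;

import ddf.catalog.CatalogFramework;
import ddf.catalog.operation.SourceInfoResponse;
import ddf.catalog.operation.impl.SourceInfoRequestEnterprise;
import ddf.catalog.source.SourceUnavailableException;

public class SourceExample {

    public void listSources(CatalogFramework catalogFramework)
            throws SourceUnavailableException {
        Set<String> ids = catalogFramework.getSourceIds();
        ids.forEach(System.out::println);

        // true = include content types in the returned descriptions
        SourceInfoResponse info =
                catalogFramework.getSourceInfo(new SourceInfoRequestEnterprise(true));
        info.getSourceInfo().forEach(d -> System.out.println(d.getSourceId()));
    }
}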
27.1.4.1.5. Transform Methods
Transform methods provide convenience methods for using Metacard Transformers and Query Response Transformers.
// Metacard Transformer
public BinaryContent transform(Metacard metacard, String transformerId, Map<String,Serializable> requestProperties) throws CatalogTransformerException;
// Query Response Transformer
public BinaryContent transform(SourceResponse response, String transformerId, Map<String, Serializable> requestProperties) throws CatalogTransformerException;
27.1.4.2. Implementing Catalog Methods
// inject CatalogFramework instance or retrieve an instance
private CatalogFramework catalogFramework;

public RSSEndpoint(CatalogFramework catalogFramework) {
    this.catalogFramework = catalogFramework;
    // implementation
}

// Other implementation details ...

private void convert(QueryResponse queryResponse) {
    // ...
    String transformerId = "rss";
    BinaryContent content = catalogFramework.transform(queryResponse, transformerId, null);
    // ...
}
27.1.4.3. Dependency Injection
Using Blueprint or another injection framework, transformers can be injected from the OSGi Service Registry.
<reference id="[[Reference Id" interface="DDF.catalog.transform.[[Transformer Interface Name]]" filter="(shortname=[[Transformer Identifier]])" />
Each transformer has one or more transform
methods that can be used to get the desired output.
ddf.catalog.transform.InputTransformer inputTransformer = retrieveInjectedInstance();
Metacard entry = inputTransformer.transform(messageInputStream);
ddf.catalog.transform.MetacardTransformer metacardTransformer = retrieveInjectedInstance();
BinaryContent content = metacardTransformer.transform(metacard, arguments);
ddf.catalog.transform.QueryResponseTransformer queryResponseTransformer = retrieveInjectedInstance();
BinaryContent content = queryResponseTransformer.transform(sourceResponse, arguments);
27.1.4.4. OSGi Service Registry
Important
|
In the vast majority of cases, working with the OSGi Service Reference directly should be avoided. Instead, dependencies should be injected via a dependency injection framework like Blueprint. |
Transformers are registered with the OSGi Service Registry.
Using a BundleContext
and a filter, references to a registered service can be retrieved.
ServiceReference[] refs =
    bundleContext.getServiceReferences(ddf.catalog.transform.InputTransformer.class.getName(), "(shortname=" + transformerId + ")");
InputTransformer inputTransformer = (InputTransformer) bundleContext.getService(refs[0]);
Metacard entry = inputTransformer.transform(messageInputStream);
27.2. Developing Metacard Types
Create custom Metacard types with Metacard Type definition files.
27.2.1. Metacard Type Definition File
To define Metacard Types, the definition file must have a metacardTypes
key in the root object.
{
"metacardTypes": [...]
}
The value of metacardTypes
must be an array of Metacard Type Objects, which are composed of the type
(required), extendsTypes
(optional), and attributes
(optional) keys.
{
"metacardTypes": [
{
"type": "my-metacard-type",
"extendsTypes": ["core", "security"],
"attributes": {...}
}
]
}
The value of the type
key is the name of the metacard type being defined. This field is required.
The value of the extendsTypes
key is an array of metacard type names (strings) whose attributes you wish to include in your type. Any Metacard Type already defined in the system, or earlier in this file, may be used. Note that this section is evaluated from top to bottom, so a type must be defined above any other definition that references it in extendsTypes. This key may be omitted entirely if the type does not extend any others.
The value of the attributes
key is a map where each key is the name of an attribute type to include in this metacard type and each value is a map with a single key named required
and a boolean value. Required attributes are used for metacard validation - metacards that lack required attributes will be flagged with validation errors. attributes
may be completely omitted. required
may be omitted.
{
"metacardTypes": [
{
"type": "my-metacard-type",
"attributes": {
"resolution": {
"required": true
},
"target-areas": {
"required": false
},
"expiration": {},
"point-of-contact": {
"required": true
}
}
}
]
}
Note
|
The DDF basic metacard attribute types are added to custom metacard types by default. If any attribute types are required by a metacard type, just include them in the |
{
"metacardTypes": [
{
"type": "my-metacard-type",
"attributes": {
"resolution": {
"required": true
},
"target-areas": {
"required": false
}
}
},
{
"type": "another-metacard-type",
"attributes": {
"effective": {
"required": true
},
"resolution": {
"required": false
}
}
}
]
}
27.3. Developing Global Attribute Validators
27.3.1. Global Attribute Validators File
To define Validators, the definition file must have a validators
key in the root object.
{
"validators": {...}
}
The value of validators
is a map of the attribute name to a list of validators for that attribute.
{
"validators": {
"point-of-contact": [...]
}
}
Each object in the list of validators is the validator name and list of arguments for that validator.
{
"validators": {
"point-of-contact": [
{
"validator": "pattern",
"arguments": [".*regex.+\\s"]
}
]
}
}
Warning
|
The value of the |
The validator
key must have a value of one of the following:
-
size (validates the size of Strings, Arrays, Collections, and Maps)
arguments: (2) [integer: lower bound (inclusive), integer: upper bound (inclusive)]
the lower bound must be greater than or equal to zero and the upper bound must be greater than or equal to the lower bound
-
pattern
arguments: (1) [regular expression]
-
pastdate
arguments: (0) [NO ARGUMENTS]
-
futuredate
arguments: (0) [NO ARGUMENTS]
-
range
arguments: (2) [number (decimal or integer): inclusive lower bound, number (decimal or integer): inclusive upper bound]; uses a default epsilon of 1E-6 on either side of the range to account for floating point representation inaccuracies
arguments: (3) [number (decimal or integer): inclusive lower bound, number (decimal or integer): inclusive upper bound, decimal number: epsilon (the maximum tolerable error on either side of the range)]
-
enumeration
arguments: (unlimited) [list of strings: each argument is one case-sensitive, valid enumeration value]
-
relationship
arguments: (4+) [attribute value or null, one of mustHave|cannotHave|canOnlyHave, target attribute name, null or target attribute value(s) as additional arguments]
-
match_any
validators: (unlimited) [list of previously defined validators: valid if any validator succeeds]
{
"validators": {
"title": [
{
"validator": "size",
"arguments": ["1", "50"]
},
{
"validator": "pattern",
"arguments": ["\\D+"]
}
],
"created": [
{
"validator": "pastdate",
"arguments": []
}
],
"expiration": [
{
"validator": "futuredate",
"arguments": []
}
],
"page-count": [
{
"validator": "range",
"arguments": ["1", "500"]
}
],
"temperature": [
{
"validator": "range",
"arguments": ["12.2", "19.8", "0.01"]
}
],
"resolution": [
{
"validator": "enumeration",
"arguments": ["1080p", "1080i", "720p"]
}
],
"datatype": [
{
"validator": "match_any",
"validators": [
{
"validator": "range",
"arguments": ["1", "25"]
},
{
"validator": "enumeration",
"arguments": ["Collection", "Dataset", "Event"]
}
]
}
],
"topic.vocabulary": [
{
"validator": "relationship",
"arguments": ["animal", "canOnlyHave", "topic.category", "cat", "dog", "lizard"]
}
]
}
}
27.4. Developing Attribute Types
Create custom attribute types with Attribute Type definition files.
27.4.1. Attribute Type Definition File
To define Attribute Types, the definition file must have an attributeTypes
key in the root object.
{
"attributeTypes": {...}
}
The value of attributeTypes
must be a map where each key is the attribute type’s name and each value is a map that includes the data type and whether the attribute type is stored, indexed, tokenized, or multi-valued.
{
"attributeTypes": {
"temperature": {
"type": "DOUBLE_TYPE",
"stored": true,
"indexed": true,
"tokenized": false,
"multivalued": false
}
}
}
The attributes stored
, indexed
, tokenized
, and multivalued
must be included and must have a boolean value.
stored
-
If true, the value of the attribute should be stored in the underlying datastore. Some attributes may only be indexed or used in transit and do not need to be persisted.
indexed
-
If true, then the value of the attribute should be included in the datastore’s index and therefore be part of query evaluation.
tokenized
-
Only applicable to STRING_TYPE attributes, if true then stopwords and punctuation will be stripped prior to storing and/or indexing. If false, only an exact string will match.
multivalued
-
If true, then the attribute values will be Lists of the attribute type rather than single values.
The type
attribute must also be included and must have one of the allowed values:
type
Attribute Possible Values-
DATE_TYPE
-
STRING_TYPE
-
XML_TYPE
-
LONG_TYPE
-
BINARY_TYPE
-
GEO_TYPE
-
BOOLEAN_TYPE
-
DOUBLE_TYPE
-
FLOAT_TYPE
-
INTEGER_TYPE
-
OBJECT_TYPE
-
SHORT_TYPE
An example with multiple attributes defined:
{
"attributeTypes": {
"resolution": {
"type": "STRING_TYPE",
"stored": true,
"indexed": true,
"tokenized": false,
"multivalued": false
},
"target-areas": {
"type": "GEO_TYPE",
"stored": true,
"indexed": true,
"tokenized": false,
"multivalued": true
}
}
}
27.5. Developing Default Attribute Types
Create custom default attribute types.
27.5.1. Default Attribute Values
To define default attribute values, the definition file must have a defaults
key in the root object.
{
"defaults": [...]
}
The value of defaults
is a list of objects where each object contains the keys attribute
, value
, and optionally metacardTypes
.
{
"defaults": [
{
"attribute": ...,
"value": ...,
"metacardTypes": [...]
}
]
}
The value corresponding to the attribute
key is the name of the attribute to which the default value will be applied. The value corresponding to the value
key is the default value of the attribute.
Note
|
The attribute’s default value must be of the same type as the attribute, but it has to be written as a string (i.e., enclosed in quotation marks) in the JSON file. Dates must be UTC datetimes in the ISO 8601 format, i.e., |
The metacardTypes
key is optional. If it is left out, then the default attribute value will be applied to every metacard that has that attribute. It can be thought of as a 'global' default value. If the metacardTypes
key is included, then its value must be a list of strings where each string is the name of a metacard type. In this case, the default attribute value will be applied only to metacards that match one of the types given in the list.
Note
|
In the event that an attribute has a 'global' default value as well as a default value for a specific metacard type, the default value for the specific metacard type will be applied (i.e., the more specific default value wins). |
Example:
{
"defaults": [
{
"attribute": "title",
"value": "Default Title"
},
{
"attribute": "description",
"value": "Default video description",
"metacardTypes": ["video"]
},
{
"attribute": "expiration",
"value": "2020-05-06T12:00:00Z",
"metacardTypes": ["video", "nitf"]
},
{
"attribute": "frame-rate",
"value": "30"
}
]
}
27.6. Developing Attribute Injections
Attribute injections are defined attributes that will be injected into all metacard types or into specific metacard types. This capability allows metacard types to be extended with new attributes.
27.6.1. Attribute Injection Definition
To define attribute injections, create a JSON file in the <DDF_HOME>/etc/definitions
directory. The definition file must have an inject
key in the root object.
{
"inject": [...]
}
The value of inject
is simply a list of objects where each object contains the key attribute
and optionally metacardTypes
.
{
"inject": [
{
"attribute": ...,
"metacardTypes": [...]
}
]
}
The value corresponding to the attribute
key is the name of the attribute to inject.
The metacardTypes
key is optional.
If it is left out, then the attribute will be injected into every metacard type.
In that case it can be thought of as a 'global' attribute injection.
If the metacardTypes
key is included, then its value must be a list of strings where each string is the name of a metacard type.
In this case, the attribute will be injected only into metacard types that match one of the types given in the list.
{
"inject": [
// Global attribute injection, all metacards
{
"attribute": "rating"
},
// Specific attribute injection, only "video" metacards
{
"attribute": "cloud-cover",
"metacardTypes": "video"
}
]
}
Note
|
Attributes must be registered in the attribute registry (see the |
Add a second key for attributeTypes
to register the new types used in the injections. For each attribute injection, specify the name and properties for that attribute.
-
type: Data type of the possible values for this attribute.
-
indexed: Boolean, attribute is indexed.
-
stored: Boolean, attribute is stored.
-
tokenized: Boolean, attribute is stored.
-
multivalued: Boolean, attribute can hold multiple values.
{
"inject": [
// Global attribute injection, all metacards
{
"attribute": "rating"
},
// Specific attribute injection, only "video" metacards
{
"attribute": "cloud-cover",
"metacardTypes": "video"
}
],
"attributeTypes": {
"rating": {
"type": "STRING_TYPE",
"indexed": true,
"stored": true,
"tokenized": true,
"multivalued": true
},
"cloud-cover": {
"type": "STRING_TYPE",
"indexed": true,
"stored": true,
"tokenized": true,
"multivalued": false
}
}
}
27.7. Developing Endpoints
Custom endpoints can be created, if necessary. See Endpoints for descriptions of provided endpoints.
Complete the following procedure to create an endpoint.
-
Create a Java class that implements the endpoint’s business logic. Example: Creating a web service that external clients can invoke.
-
Add the endpoint’s business logic, invoking
CatalogFramework
calls as needed. -
Import the DDF packages to the bundle’s manifest for run-time (in addition to any other required packages):
Import-Package: ddf.catalog, ddf.catalog.*
-
Retrieve an instance of
CatalogFramework
from the OSGi registry. (Refer to OSGi Basics - Service Registry for examples.) -
Deploy the packaged service to DDF. (Refer to OSGi Basics - Bundles.)
Note
|
It is recommended to use the maven bundle plugin to create the Endpoint bundle’s manifest as opposed to directly editing the manifest file. |
Tip
|
No implementation of an interface is required |
Methods | Use |
---|---|
create, update, delete | Add, modify, and remove metadata using the ingest-related create, update, and delete methods. |
query | Request metadata using the query method. |
getSourceIds, getSourceInfo | Get available Source information. |
getResource | Retrieve products referenced in Metacards from Sources. |
transform | Convert common Catalog Framework data types to and from other data formats. |
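As a concrete illustration of these calls, the following minimal sketch shows an endpoint class issuing a title query through the Catalog Framework. The class name, constructor injection, and query details are hypothetical; the CatalogFramework and FilterBuilder references are assumed to be injected via the bundle's blueprint descriptor.
import java.util.List;

import ddf.catalog.CatalogFramework;
import ddf.catalog.data.Result;
import ddf.catalog.federation.FederationException;
import ddf.catalog.filter.FilterBuilder;
import ddf.catalog.operation.QueryResponse;
import ddf.catalog.operation.impl.QueryImpl;
import ddf.catalog.operation.impl.QueryRequestImpl;
import ddf.catalog.source.SourceUnavailableException;
import ddf.catalog.source.UnsupportedQueryException;

// Hypothetical endpoint: both references are injected via the blueprint descriptor.
public class SampleQueryEndpoint {

    private final CatalogFramework catalogFramework;

    private final FilterBuilder filterBuilder;

    public SampleQueryEndpoint(CatalogFramework catalogFramework, FilterBuilder filterBuilder) {
        this.catalogFramework = catalogFramework;
        this.filterBuilder = filterBuilder;
    }

    // Queries the local catalog for metacards whose title matches the keyword.
    public List<Result> findByTitle(String keyword)
            throws UnsupportedQueryException, SourceUnavailableException, FederationException {
        QueryImpl query = new QueryImpl(filterBuilder.attribute("title").is().like().text(keyword));
        QueryResponse response = catalogFramework.query(new QueryRequestImpl(query));
        return response.getResults();
    }
}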
27.8. Developing Input Transformers
DDF supports the creation of custom input transformers for use cases not covered by the included implementations.
-
Create a new Java class that implements ddf.catalog.transform.InputTransformer.
public class SampleInputTransformer implements ddf.catalog.transform.InputTransformer
-
Implement the transform methods.
public Metacard transform(InputStream input) throws IOException, CatalogTransformerException
public Metacard transform(InputStream input, String id) throws IOException, CatalogTransformerException
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.transform
-
Create an OSGi descriptor file to communicate with the OSGi Service Registry (described in the OSGi Basics section). Export the service to the OSGi Registry and declare service properties.
Input Transformer Blueprint Descriptor Example
...
<service ref="SampleInputTransformer" interface="ddf.catalog.transform.InputTransformer">
    <service-properties>
        <entry key="shortname" value="[[sampletransform]]" />
        <entry key="title" value="[[Sample Input Transformer]]" />
        <entry key="description" value="[[A new transformer for metacard input.]]" />
    </service-properties>
</service>
...
Table 94. Input Transformer Variable Descriptions / Blueprint Service Properties
Key | Description of Value | Example |
---|---|---|
shortname | (Required) An abbreviation for the return-type of the BinaryContent being sent to the user. | atom |
title | (Optional) A user-readable title that describes (in greater detail than the shortname) the service. | Atom Entry Transformer Service |
description | (Optional) A short, human-readable description that describes the functionality of the service and the output. | This service converts a single metacard xml document to an atom entry element. |
-
Deploy OSGi Bundle to OSGi runtime.
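The following is a minimal, hypothetical sketch of such a transformer. It assumes the entire input stream is the metacard's XML metadata and uses Apache Commons IO to read the stream; a real transformer would parse the stream and populate individual attributes.
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

import org.apache.commons.io.IOUtils;

import ddf.catalog.data.Metacard;
import ddf.catalog.data.impl.MetacardImpl;
import ddf.catalog.transform.CatalogTransformerException;
import ddf.catalog.transform.InputTransformer;

public class SampleInputTransformer implements InputTransformer {

    @Override
    public Metacard transform(InputStream input) throws IOException, CatalogTransformerException {
        return transform(input, null);
    }

    @Override
    public Metacard transform(InputStream input, String id)
            throws IOException, CatalogTransformerException {
        MetacardImpl metacard = new MetacardImpl();
        // Simplification for illustration: store the raw input as the metadata attribute.
        metacard.setMetadata(IOUtils.toString(input, StandardCharsets.UTF_8));
        if (id != null) {
            metacard.setId(id);
        }
        return metacard;
    }
}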
27.8.1. Create an XML Input Transformer using SaxEventHandlers
For a transformer to transform XML (as opposed to JSON or a Word document, for example), there is a simpler solution than fully implementing an InputTransformer.
DDF includes an extensible, configurable XmlInputTransformer
.
This transformer can be instantiated via blueprint as a managed service factory and configured via metatype.
The XmlInputTransformer
takes a configuration of SaxEventHandlers
.
A SaxEventHandler
is a class that handles events from a SAX parser (a very fast, streaming XML parser) to parse metadata and create metacards.
Any number of SaxEventHandlers
can be implemented and included in the XmlInputTransformer
configuration.
See the catalog-transformer-streaming-impl
bundle for examples (XmlSaxEventHandlerImpl
which parses the DDF Metacard XML Metadata and the GmlHandler
which parses GML 2.0).
Each SaxEventHandler
implementation has a SaxEventHandlerFactory
associated with it.
The SaxEventHandlerFactory
is responsible for instantiating new SaxEventHandlers
- each transform request gets a new instance of XmlInputTransformer
and set of SaxEventHandlers
to be thread- and state-safe.
The following diagrams intend to clarify implementation details:
The XmlInputTransformer
Configuration diagram shows the XmlInputTransformer
configuration, which is configured using the metatype and has the SaxEventHandlerFactory
ids.
Then, when a transform request is received, the ManagedServiceFactory
instantiates a new XmlInputTransformer
.
This XmlInputTransformer
then instantiates a new SaxEventHandlerDelegate
with the configured SaxEventHandlerFactory
ids.
The factories all in turn instantiate a SaxEventHandler
.
Then, the SaxEventHandlerDelegate
begins parsing the XML input document, handing the SAX Events off to each SaxEventHandler
, which handle them if they can.
After parsing is finished, each SaxEventHandler
returns a list of Attributes
to the SaxEventHandlerDelegate
and XmlInputTransformer
which add the attributes to the metacard and then return the fully constructed metacard.
For more specific details, see the Javadoc for the org.codice.ddf.transformer.xml.streaming.*
package.
Additionally, see the source code for the org.codice.ddf.transformer.xml.streaming.impl.GmlHandler.java
, org.codice.ddf.transformer.xml.streaming.impl.GmlHandlerFactory
, org.codice.ddf.transformer.xml.streaming.impl.XmlInputTransformerImpl
, and org.codice.ddf.transformer.xml.streaming.impl.XmlInputTransformerImplFactory
.
27.8.2. Create an Input Transformer Using Apache Camel
Alternatively, make an Apache Camel route in a blueprint file and deploy it using a feature file or via hot deploy.
27.8.2.1. Input Transformer Design Pattern (Camel)
Follow this design pattern for compatibility:
When using from, catalog:inputtransformer?id=text/xml
, an Input Transformer will be created and registered in the OSGi registry with an id of text/xml
.
When using to, catalog:inputtransformer?id=text/xml
, an Input Transformer with an id matching text/xml will be discovered from the OSGi registry and invoked.
Exchange Type |
Field |
|
Request (comes from |
body |
|
Response (returned after called via |
body |
|
Tip
|
It’s always a good idea to wrap the |
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
<camelContext xmlns="http://camel.apache.org/schema/blueprint">
<route>
<from uri="catalog:inputtransformer?mimeType=RAW(id=text/xml;id=vehicle)"/>
<to uri="xslt:vehicle.xslt" /> <!-- must be on classpath for this bundle -->
<to uri="catalog:inputtransformer?mimeType=RAW(id=application/json;id=geojson)" />
</route>
</camelContext>
</blueprint>
-
Defines this as an Apache Aries blueprint file.
-
Defines the Apache Camel context that contains the route.
-
Defines start of an Apache Camel route.
-
Defines the endpoint/consumer for the route. In this case it is the DDF custom catalog component that is an
InputTransformer
registered with an id oftext/xml;id=vehicle
meaning it can transform anInputStream
of vehicle data into a metacard. Note that the specified XSL stylesheet must be on the classpath of the bundle that this blueprint file is packaged in. -
Defines the XSLT to be used to transform the vehicle input into GeoJSON format using the Apache Camel provided XSLT component.
-
Defines the route node that accepts GeoJSON formatted input and transforms it into a Metacard, using the DDF custom catalog component that is an InputTransformer registered with an id of application/json;id=geojson.
Note
|
An example of using an Apache Camel route to define an |
27.8.3. Input Transformer Boot Service Flag
The org.codice.ddf.platform.bootflag.BootServiceFlag
service with a service property of id=inputTransformerBootFlag
is used to indicate certain Input Transformers are ready in the system.
Adding an Input Transformer’s ID to a new or existing JSON file under <DDF_HOME>/etc/transformers
will cause the service to wait for an Input Transformer with the given ID.
27.9. Developing Metacard Transformers
In general, a MetacardTransformer
is used to transform a Metacard
into some desired format useful to the end user or as input to another process.
Programmatically, a MetacardTransformer
transforms a Metacard
into a BinaryContent
instance, which translates the Metacard
into the desired final format.
Metacard transformers can be used through the Catalog Framework transform
convenience method or requested from the OSGi Service Registry by endpoints or other bundles.
27.9.1. Creating a New Metacard Transformer
Existing metacard transformers are written as Java classes; the following steps walk through creating a custom metacard transformer.
-
Create a new Java class that implements
ddf.catalog.transform.MetacardTransformer
.
public class SampleMetacardTransformer implements ddf.catalog.transform.MetacardTransformer
-
Implement the
transform
method.
public BinaryContent transform(Metacard metacard, Map<String, Serializable> arguments) throws CatalogTransformerException
-
transform
must return a BinaryContent
or throw an exception. It cannot return null.
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.transform
-
Create an OSGi descriptor file to communicate with the OSGi Service registry (described in the OSGi Basics section). Export the service to the OSGi registry and declare service properties.
Metacard Transformer Blueprint Descriptor Example
...
<service ref="SampleMetacardTransformer" interface="ddf.catalog.transform.MetacardTransformer">
    <service-properties>
        <entry key="shortname" value="[[sampletransform]]" />
        <entry key="title" value="[[Sample Metacard Transformer]]" />
        <entry key="description" value="[[A new transformer for metacards.]]" />
    </service-properties>
</service>
...
-
Deploy OSGi Bundle to OSGi runtime.
Key | Description of Value | Example |
---|---|---|
shortname | (Required) An abbreviation for the return type of the BinaryContent being sent to the user. | atom |
title | (Optional) A user-readable title that describes (in greater detail than the shortname) the service. | Atom Entry Transformer Service |
description | (Optional) A short, human-readable description that describes the functionality of the service and the output. | This service converts a single metacard xml document to an atom entry element. |
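For illustration, a minimal, hypothetical transformer satisfying this contract might render a metacard’s title as plain text:
import java.io.ByteArrayInputStream;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.util.Map;

import ddf.catalog.data.BinaryContent;
import ddf.catalog.data.Metacard;
import ddf.catalog.data.impl.BinaryContentImpl;
import ddf.catalog.transform.CatalogTransformerException;
import ddf.catalog.transform.MetacardTransformer;

public class SampleMetacardTransformer implements MetacardTransformer {

    @Override
    public BinaryContent transform(Metacard metacard, Map<String, Serializable> arguments)
            throws CatalogTransformerException {
        // The contract forbids returning null, so fail with an exception instead.
        if (metacard == null) {
            throw new CatalogTransformerException("Cannot transform a null metacard.");
        }
        byte[] bytes = ("title: " + metacard.getTitle()).getBytes(StandardCharsets.UTF_8);
        return new BinaryContentImpl(new ByteArrayInputStream(bytes));
    }
}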
27.10. Developing Query Response Transformers
A QueryResponseTransformer
is used to transform a List of Results from a SourceResponse
.
Query Response Transformers can be used through the Catalog transform convenience method or requested from the OSGi Service Registry by endpoints or other bundles.
-
Create a new Java class that implements
ddf.catalog.transform.QueryResponseTransformer
.
public class SampleResponseTransformer implements ddf.catalog.transform.QueryResponseTransformer
-
Implement the
transform
method.
public BinaryContent transform(SourceResponse upstreamResponse, Map<String, Serializable> arguments) throws CatalogTransformerException
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog, ddf.catalog.transform
-
Create an OSGi descriptor file to communicate with the OSGi Service Registry (described in OSGi Basics). Export the service to the OSGi registry and declare service properties.
-
Deploy OSGi Bundle to OSGi runtime.
Query Response Transformer Blueprint Descriptor Example
...
<service ref="SampleResponseTransformer" interface="ddf.catalog.transform.QueryResponseTransformer">
    <service-properties>
        <entry key="id" value="[[sampleId]]" />
        <entry key="shortname" value="[[sampletransform]]" />
        <entry key="title" value="[[Sample Response Transformer]]" />
        <entry key="description" value="[[A new transformer for response queues.]]" />
    </service-properties>
</service>
...
Key | Description of Value | Example |
---|---|---|
id | A unique identifier to target a specific query response transformer. | atom |
shortname | An abbreviation for the return type of the BinaryContent being sent to the user. | atom |
title | A user-readable title that describes (in greater detail than the shortname) the service. | Atom Entry Transformer Service |
description | A short, human-readable description that describes the functionality of the service and the output. | This service converts a single metacard xml document to an atom entry element. |
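A minimal, hypothetical example of this contract emits one line per result containing the metacard ID; the class name and output format are illustrative only:
import java.io.ByteArrayInputStream;
import java.io.Serializable;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.stream.Collectors;

import ddf.catalog.data.BinaryContent;
import ddf.catalog.data.impl.BinaryContentImpl;
import ddf.catalog.operation.SourceResponse;
import ddf.catalog.transform.CatalogTransformerException;
import ddf.catalog.transform.QueryResponseTransformer;

public class SampleResponseTransformer implements QueryResponseTransformer {

    @Override
    public BinaryContent transform(SourceResponse upstreamResponse, Map<String, Serializable> arguments)
            throws CatalogTransformerException {
        // One metacard ID per line; a real transformer would render full records.
        String lines = upstreamResponse.getResults().stream()
                .map(result -> result.getMetacard().getId())
                .collect(Collectors.joining("\n"));
        return new BinaryContentImpl(new ByteArrayInputStream(lines.getBytes(StandardCharsets.UTF_8)));
    }
}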
27.11. Developing Sources
Sources are components that enable DDF to talk to back-end services. They let DDF perform query and ingest operations on catalog stores and query operations on federated sources.
27.11.1. Implement a Source Interface
There are three types of sources that can be created to perform query operations. All of these sources must also be able to return their availability and the list of content types currently stored in their back-end data stores.
- Catalog Provider
-
ddf.catalog.source.CatalogProvider
is used to communicate with back-end storage and allows for Query and Create/Update/Delete operations. - Federated Source
-
ddf.catalog.source.FederatedSource
is used to communicate with remote systems and only allows query operations. - Connected Source
-
ddf.catalog.source.ConnectedSource
is similar to a Federated Source with the following exceptions:-
Queried on all local queries
-
`SiteName` is hidden (masked with the DDF sourceId) in query results
-
`SiteService` does not show this Source’s information separate from DDF’s.
-
- Catalog Store
-
catalog.store.interface
is used to store data.
The procedure for implementing any of the source types follows a similar format:
-
Create a new class that implements the specified Source interface, the
ConfiguredService
and the required methods. -
Create an OSGi descriptor file to communicate with the OSGi registry. (Refer to OSGi Services.)
-
Import DDF packages.
-
Register source class as service to the OSGi registry.
-
-
Deploy to DDF.
Important
|
The |
Note
|
Remote sources currently extend the |
27.11.1.1. Developing Catalog Providers
Create a custom implementation of a catalog provider.
-
Create a Java class that implements
CatalogProvider
.
public class TestCatalogProvider implements ddf.catalog.source.CatalogProvider
-
Implement the required methods from the
ddf.catalog.source.CatalogProvider
interface.
public CreateResponse create(CreateRequest createRequest) throws IngestException;
public UpdateResponse update(UpdateRequest updateRequest) throws IngestException;
public DeleteResponse delete(DeleteRequest deleteRequest) throws IngestException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog, ddf.catalog.source
-
Export the service to the OSGi registry.
<service ref="TestCatalogProvider" interface="ddf.catalog.source.CatalogProvider" />
See the existing Catalog Provider list for examples of Catalog Providers included in DDF.
27.11.1.2. Developing Federated Sources
-
Create a Java class that implements
FederatedSource
andConfiguredService
.
public class TestFederatedSource implements ddf.catalog.source.FederatedSource, ddf.catalog.service.ConfiguredService
-
Implement the required methods of the
ddf.catalog.source.FederatedSource
andddf.catalog.service.ConfiguredService
interfaces. -
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog, ddf.catalog.source
-
Export the service to the OSGi registry.
<service ref="TestFederatedSource" interface="ddf.catalog.source.FederatedSource" />
27.11.1.3. Developing Connected Sources
Create a custom implementation of a connected source.
-
Create a Java class that implements
ConnectedSource
andConfiguredService
.
public class TestConnectedSource implements ddf.catalog.source.ConnectedSource, ddf.catalog.service.ConfiguredService
-
Implement the required methods of the
ddf.catalog.source.ConnectedSource
andddf.catalog.service.ConfiguredService
interfaces. -
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog, ddf.catalog.source
-
Export the service to the OSGi registry.
<service ref="TestConnectedSource" interface="ddf.catalog.source.ConnectedSource" />
Important
|
In some custom Providers, Web Service calls must be made through JAXB clients. It is best NOT to create a JAXB client as a global variable; doing so can cause intermittent failures when creating Providers and federated sources. To avoid this issue, create any JAXB clients within the methods that require them. |
27.11.1.4. Exception Handling
In general, sources should only send information back related to the call, not implementation details.
27.11.1.4.1. Exception Examples
Follow these guidelines for effective exception handling:
-
Use a "Site XYZ not found" message rather than the full stack trace with the original site not found exception.
-
If the caller issues a malformed search request, return an error describing the right form, or specifically what was not recognized in the request. Do not return the exception and stack trace where the parsing broke.
-
If the caller leaves something out, do not return the null pointer exception with a stack trace, rather return a generic exception with the message "xyz was missing."
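A small, hypothetical helper shows these guidelines in practice; the class, method, and site lookup are invented for illustration.
import java.util.Set;

import ddf.catalog.source.UnsupportedQueryException;

public class QueryValidation {

    // Good: report what was wrong in the caller's terms, with no stack trace
    // or implementation details from the failed lookup.
    public static void validateSiteName(String siteName, Set<String> knownSites)
            throws UnsupportedQueryException {
        if (siteName == null || !knownSites.contains(siteName)) {
            throw new UnsupportedQueryException("Site " + siteName + " not found");
        }
        // Bad alternative: letting a NullPointerException from the lookup
        // propagate, or wrapping the original exception and its stack trace.
    }
}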
27.12. Developing Catalog Plugins
Plugins extend the functionality of the Catalog Framework by performing actions at specified times during a transaction. Plugin interfaces are located in the Catalog Core API. By implementing a plugin interface, actions can be performed at the desired time.
The following types of plugins can be created:
Plugin Type | Plugin Interface | Invocation Order |
---|---|---|
Pre-Authorization Plugins | ddf.catalog.plugin.PreAuthorizationPlugin | Before any security rules are applied. |
Policy Plugins | ddf.catalog.plugin.PolicyPlugin | After pre-authorization plugins, but before other catalog plugins to establish the policy for requests/responses. |
Access Plugins | ddf.catalog.plugin.AccessPlugin | Directly after any policy plugins. |
Pre-Ingest Plugins | ddf.catalog.plugin.PreIngestPlugin | Before the Create/Update/Delete method is sent to the Catalog Provider. |
Post-Ingest Plugins | ddf.catalog.plugin.PostIngestPlugin | After the Create/Update/Delete method is sent to the Catalog Provider. |
Pre-Query Plugins | ddf.catalog.plugin.PreQueryPlugin | Prior to the Query/Read method being sent to the Source. |
Post-Query Plugins | ddf.catalog.plugin.PostQueryPlugin | After results have been retrieved from the query but before they are posted to the Endpoint. |
Pre-Federated-Query Plugins | ddf.catalog.plugin.PreFederatedQueryPlugin | Before a federated query is executed. |
Post-Federated-Query Plugins | ddf.catalog.plugin.PostFederatedQueryPlugin | After a federated query has been executed. |
Pre-Resource Plugins | ddf.catalog.plugin.PreResourcePlugin | Prior to a Resource being retrieved. |
Post-Resource Plugins | ddf.catalog.plugin.PostResourcePlugin | After a Resource is retrieved, but before it is sent to the Endpoint. |
Pre-Create Storage Plugins | ddf.catalog.content.plugin.PreCreateStoragePlugin | Experimental Before an item is created in the content repository. |
Post-Create Storage Plugins | ddf.catalog.content.plugin.PostCreateStoragePlugin | Experimental After an item is created in the content repository. |
Pre-Update Storage Plugins | ddf.catalog.content.plugin.PreUpdateStoragePlugin | Experimental Before an item is updated in the content repository. |
Post-Update Storage Plugins | ddf.catalog.content.plugin.PostUpdateStoragePlugin | Experimental After an item is updated in the content repository. |
Pre-Subscription Plugins | ddf.catalog.plugin.PreSubscriptionPlugin | Prior to a Subscription being created or updated. |
Pre-Delivery Plugins | ddf.catalog.plugin.PreDeliveryPlugin | Prior to the delivery of a Metacard when an event is posted. |
27.12.1. Implementing Catalog Plugins
The procedure for implementing any of the plugins follows a similar format:
-
Create a new class that implements the specified plugin interface.
-
Implement the required methods.
-
Create an OSGi descriptor file to communicate with the OSGi registry.
-
Register the plugin class as a service to OSGi registry.
-
-
Deploy to DDF.
Note
|
Plugin Performance Concerns
Plugins should include a check to determine if requests are local or not. It is usually preferable to take no action on non-local requests. |
Tip
|
Refer to the Javadoc for more information on all Requests and Responses in the |
27.12.1.1. Catalog Plugin Failure Behavior
In the event that a Catalog Plugin cannot operate but should not fail the transaction, a PluginExecutionException
should be thrown.
If processing is to be explicitly stopped, a StopProcessingException
should be thrown.
For any other exceptions, the Catalog should "fail fast" and cancel the Operation.
27.12.1.2. Implementing Pre-Ingest Plugins
Develop a custom Pre-Ingest Plugin.
-
Create a Java class that implements
PreIngestPlugin
.
public class SamplePreIngestPlugin implements ddf.catalog.plugin.PreIngestPlugin
-
Implement the required methods.
-
public CreateRequest process(CreateRequest input) throws PluginExecutionException, StopProcessingException;
-
public UpdateRequest process(UpdateRequest input) throws PluginExecutionException, StopProcessingException;
-
public DeleteRequest process(DeleteRequest input) throws PluginExecutionException, StopProcessingException;
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="SamplePreIngestPlugin" interface="ddf.catalog.plugin.PreIngestPlugin" />
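Putting these steps together, the following minimal, hypothetical Pre-Ingest Plugin stamps a modified date on each incoming metacard and passes updates and deletes through untouched:
import java.util.Date;

import ddf.catalog.data.Metacard;
import ddf.catalog.data.impl.AttributeImpl;
import ddf.catalog.operation.CreateRequest;
import ddf.catalog.operation.DeleteRequest;
import ddf.catalog.operation.UpdateRequest;
import ddf.catalog.plugin.PluginExecutionException;
import ddf.catalog.plugin.PreIngestPlugin;
import ddf.catalog.plugin.StopProcessingException;

public class SamplePreIngestPlugin implements PreIngestPlugin {

    @Override
    public CreateRequest process(CreateRequest input)
            throws PluginExecutionException, StopProcessingException {
        // Stamp every incoming metacard with the current modified date.
        for (Metacard metacard : input.getMetacards()) {
            metacard.setAttribute(new AttributeImpl(Metacard.MODIFIED, new Date()));
        }
        return input;
    }

    @Override
    public UpdateRequest process(UpdateRequest input) {
        return input; // pass through unchanged
    }

    @Override
    public DeleteRequest process(DeleteRequest input) {
        return input; // pass through unchanged
    }
}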
27.12.1.3. Implementing Post-Ingest Plugins
Develop a custom Post-Ingest Plugin.
-
Create a Java class that implements
PostIngestPlugin
.
public class SamplePostIngestPlugin implements ddf.catalog.plugin.PostIngestPlugin
-
Implement the required methods.
-
public CreateResponse process(CreateResponse input) throws PluginExecutionException;
-
public UpdateResponse process(UpdateResponse input) throws PluginExecutionException;
-
public DeleteResponse process(DeleteResponse input) throws PluginExecutionException;
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="SamplePostIngestPlugin" interface="ddf.catalog.plugin.PostIngestPlugin" />
27.12.1.4. Implementing Pre-Query Plugins
Develop a custom Pre-Query Plugin
-
Create a Java class that implements
PreQueryPlugin
.
public class SamplePreQueryPlugin implements ddf.catalog.plugin.PreQueryPlugin
-
Implement the required method.
public QueryRequest process(QueryRequest input) throws PluginExecutionException, StopProcessingException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
<service ref="SamplePreQueryPlugin" interface="ddf.catalog.plugin.PreQueryPlugin" />
27.12.1.5. Implementing Post-Query Plugins
Develop a custom Post-Query Plugin
-
Create a Java class that implements
PostQueryPlugin
.
public class SamplePostQueryPlugin implements ddf.catalog.plugin.PostQueryPlugin
-
Implement the required method.
public QueryResponse process(QueryResponse input) throws PluginExecutionException, StopProcessingException;
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin
-
Export the service to the OSGi registry.
<service ref="SamplePostQueryPlugin" interface="ddf.catalog.plugin.PostQueryPlugin" />
27.12.1.6. Implementing Pre-Delivery Plugins
Develop a custom Pre-Delivery Plugin.
-
Create a Java class that implements
PreDeliveryPlugin
.
public class SamplePreDeliveryPlugin implements ddf.catalog.plugin.PreDeliveryPlugin
-
Implement the required methods.
-
public Metacard processCreate(Metacard metacard) throws PluginExecutionException, StopProcessingException;
-
public Update processUpdateHit(Update update) throws PluginExecutionException, StopProcessingException;
-
public Update processUpdateMiss(Update update) throws PluginExecutionException, StopProcessingException;
-
public Metacard processDelete(Metacard metacard) throws PluginExecutionException, StopProcessingException;
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation,ddf.catalog.event
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="SamplePreDeliveryPlugin" interface="ddf.catalog.plugin.PreDeliveryPlugin" />
27.12.1.7. Implementing Pre-Subscription Plugins
Develop a custom Pre-Subscription Plugin.
-
Create a Java class that implements
PreSubscriptionPlugin
.
public class SamplePreSubscriptionPlugin implements ddf.catalog.plugin.PreSubscriptionPlugin
Implement the required method.
-
public Subscription process(Subscription input) throws PluginExecutionException, StopProcessingException;
-
27.12.1.8. Implementing Pre-Resource Plugins
Develop a custom Pre-Resource Plugin.
-
Create a Java class that implements
PreResourcePlugin
.public class SamplePreResourcePlugin implements ddf.catalog.plugin.PreResourcePlugin
-
Implement the required method.
-
public ResourceRequest process(ResourceRequest input) throws PluginExecutionException, StopProcessingException;
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="SamplePreResourcePlugin" interface="ddf.catalog.plugin.PreResourcePlugin" />
27.12.1.9. Implementing Post-Resource Plugins
Develop a custom Post-Resource Plugin.
-
Create a Java class that implements
PostResourcePlugin
.
public class SamplePostResourcePlugin implements ddf.catalog.plugin.PostResourcePlugin
-
Implement the required method.
-
public ResourceResponse process(ResourceResponse input) throws PluginExecutionException, StopProcessingException;
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation
-
Export the service to the OSGi registry.
<service ref="SamplePostResourcePlugin" interface="ddf.catalog.plugin.PostResourcePlugin" />
27.12.1.10. Implementing Policy Plugins
Develop a custom Policy Plugin.
-
Create a Java class that implements
PolicyPlugin
.
public class SamplePolicyPlugin implements ddf.catalog.plugin.PolicyPlugin
-
Implement the required methods.
-
PolicyResponse processPreCreate(Metacard input, Map<String, Serializable> properties) throws StopProcessingException;
-
PolicyResponse processPreUpdate(Metacard input, Map<String, Serializable> properties) throws StopProcessingException;
-
PolicyResponse processPreDelete(String attributeName, List<Serializable> attributeValues, Map<String, Serializable> properties) throws StopProcessingException;
-
PolicyResponse processPreQuery(Query query, Map<String, Serializable> properties) throws StopProcessingException;
-
PolicyResponse processPostQuery(Result input, Map<String, Serializable> properties) throws StopProcessingException;
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="SamplePolicyPlugin" interface="ddf.catalog.plugin.PolicyPlugin" />
27.12.1.11. Implementing Access Plugins
Develop a custom Access Plugin.
-
Create a Java class that implements
AccessPlugin
.
public class SampleAccessPlugin implements ddf.catalog.plugin.AccessPlugin
-
Implement the required methods.
-
CreateRequest processPreCreate(CreateRequest input) throws StopProcessingException;
-
UpdateRequest processPreUpdate(UpdateRequest input) throws StopProcessingException;
-
DeleteRequest processPreDelete(DeleteRequest input) throws StopProcessingException;
-
QueryRequest processPreQuery(QueryRequest input) throws StopProcessingException;
-
QueryResponse processPostQuery(QueryResponse input) throws StopProcessingException;
-
-
Import the DDF interface packages to the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog,ddf.catalog.plugin,ddf.catalog.operation
-
Export the service to the OSGi registry.
Blueprint descriptor example
<service ref="SampleAccessPlugin" interface="ddf.catalog.plugin.AccessPlugin" />
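As an illustration of these methods, a hypothetical Access Plugin might deny all delete operations by throwing a StopProcessingException and pass every other operation through; the signatures follow the method list above.
import ddf.catalog.operation.CreateRequest;
import ddf.catalog.operation.DeleteRequest;
import ddf.catalog.operation.QueryRequest;
import ddf.catalog.operation.QueryResponse;
import ddf.catalog.operation.UpdateRequest;
import ddf.catalog.plugin.AccessPlugin;
import ddf.catalog.plugin.StopProcessingException;

public class SampleAccessPlugin implements AccessPlugin {

    @Override
    public CreateRequest processPreCreate(CreateRequest input) {
        return input;
    }

    @Override
    public UpdateRequest processPreUpdate(UpdateRequest input) {
        return input;
    }

    @Override
    public DeleteRequest processPreDelete(DeleteRequest input) throws StopProcessingException {
        // Denying the operation stops the transaction before it reaches the provider.
        throw new StopProcessingException("Delete operations are not permitted.");
    }

    @Override
    public QueryRequest processPreQuery(QueryRequest input) {
        return input;
    }

    @Override
    public QueryResponse processPostQuery(QueryResponse input) {
        return input;
    }
}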
27.13. Developing Token Validators
Token validators are used by the Security Token Service (STS) to validate incoming token requests.
The TokenValidator
CXF interface must be implemented by all custom token validators.
The canHandleToken
and validateToken
methods must be overridden.
The canHandleToken
method should return true or false based on the ValueType
value of the token that the validator is associated with.
The validator may be able to handle any number of different tokens that you specify.
The validateToken
method returns a TokenValidatorResponse
object that contains the Principal
of the identity being validated and also validates the ReceivedToken
object collected from the RST (RequestSecurityToken
) message.
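The following sketch shows the shape of such a validator. The ValueType URI is hypothetical, the incoming token is assumed to be a BinarySecurityTokenType, and the real verification work is elided: this sketch simply marks the token valid, which a production validator must never do.
import org.apache.cxf.sts.request.ReceivedToken;
import org.apache.cxf.sts.request.ReceivedToken.STATE;
import org.apache.cxf.sts.token.validator.TokenValidator;
import org.apache.cxf.sts.token.validator.TokenValidatorParameters;
import org.apache.cxf.sts.token.validator.TokenValidatorResponse;
import org.apache.cxf.ws.security.sts.provider.model.BinarySecurityTokenType;
import org.apache.wss4j.common.principal.CustomTokenPrincipal;

public class SampleTokenValidator implements TokenValidator {

    // Hypothetical ValueType this validator is associated with.
    private static final String SAMPLE_VALUE_TYPE = "urn:example:sampletoken";

    @Override
    public boolean canHandleToken(ReceivedToken validateTarget) {
        return canHandleToken(validateTarget, null);
    }

    @Override
    public boolean canHandleToken(ReceivedToken validateTarget, String realm) {
        Object token = validateTarget.getToken();
        return token instanceof BinarySecurityTokenType
                && SAMPLE_VALUE_TYPE.equals(((BinarySecurityTokenType) token).getValueType());
    }

    @Override
    public TokenValidatorResponse validateToken(TokenValidatorParameters tokenParameters) {
        ReceivedToken validateTarget = tokenParameters.getToken();
        TokenValidatorResponse response = new TokenValidatorResponse();
        response.setToken(validateTarget);
        validateTarget.setState(STATE.INVALID);

        if (canHandleToken(validateTarget)) {
            // Real validators verify the token contents here before accepting it.
            BinarySecurityTokenType bst = (BinarySecurityTokenType) validateTarget.getToken();
            response.setPrincipal(new CustomTokenPrincipal(bst.getValue()));
            validateTarget.setState(STATE.VALID);
        }
        return response;
    }
}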
27.14. Developing STS Claims Handlers
Develop a custom claims handler to retrieve attributes from an external attribute store.
A claim is an additional piece of data about a subject that can be included in a token along with basic token data. A claims manager provides hooks for a developer to plug in claims handlers to ensure that the STS includes the specified claims in the issued token.
The following steps define the procedure for adding a custom claims handler to the STS.
-
The new claims handler must implement the
org.apache.cxf.sts.claims.ClaimsHandler
interface.
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements. See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership. The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing,
 * software distributed under the License is distributed on an
 * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
 * KIND, either express or implied. See the License for the
 * specific language governing permissions and limitations
 * under the License.
 */
package org.apache.cxf.sts.claims;

import java.net.URI;
import java.util.List;

/**
 * This interface provides a pluggable way to handle Claims.
 */
public interface ClaimsHandler {

    List<URI> getSupportedClaimTypes();

    ClaimCollection retrieveClaimValues(RequestClaimCollection claims, ClaimsParameters parameters);

}
-
Expose the new claims handler as an OSGi service under the
org.apache.cxf.sts.claims.ClaimsHandler
interface.
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <bean id="customClaimsHandler" class="security.sts.claimsHandler.CustomClaimsHandler" />

    <service ref="customClaimsHandler" interface="org.apache.cxf.sts.claims.ClaimsHandler"/>

</blueprint>
-
Deploy the bundle.
If the new claims handler is hitting an external service that is secured with SSL/TLS, a developer may need to add the root CA of the external site to the DDF trustStore and add a valid certificate into the DDF keyStore. For more information on certificates, refer to Configuring a Java Keystore for Secure Communications.
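For illustration, a minimal CustomClaimsHandler matching the interface and blueprint above might return a static role claim for every matching request. The claim value is invented, and the Claim mutator methods follow the CXF 2.x claims API shown above; adjust them to the CXF version in use.
package security.sts.claimsHandler;

import java.net.URI;
import java.util.Collections;
import java.util.List;

import org.apache.cxf.sts.claims.Claim;
import org.apache.cxf.sts.claims.ClaimCollection;
import org.apache.cxf.sts.claims.ClaimsHandler;
import org.apache.cxf.sts.claims.ClaimsParameters;
import org.apache.cxf.sts.claims.RequestClaim;
import org.apache.cxf.sts.claims.RequestClaimCollection;

public class CustomClaimsHandler implements ClaimsHandler {

    private static final URI ROLE_CLAIM =
            URI.create("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role");

    @Override
    public List<URI> getSupportedClaimTypes() {
        return Collections.singletonList(ROLE_CLAIM);
    }

    @Override
    public ClaimCollection retrieveClaimValues(RequestClaimCollection claims, ClaimsParameters parameters) {
        ClaimCollection claimCollection = new ClaimCollection();
        for (RequestClaim requestClaim : claims) {
            if (ROLE_CLAIM.equals(requestClaim.getClaimType())) {
                Claim claim = new Claim();
                claim.setClaimType(ROLE_CLAIM);
                // Static value for illustration; a real handler would query
                // the external attribute store here.
                claim.addValue("users");
                claimCollection.add(claim);
            }
        }
        return claimCollection;
    }
}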
Note
|
This XML file is found inside of the STS bundle and is named |
<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions xmlns:tns="http://docs.oasis-open.org/ws-sx/ws-trust/200512/" xmlns:wstrust="http://docs.oasis-open.org/ws-sx/ws-trust/200512/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsap10="http://www.w3.org/2006/05/addressing/wsdl" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://www.w3.org/ns/ws-policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:wsam="http://www.w3.org/2007/05/addressing/metadata" targetNamespace="http://docs.oasis-open.org/ws-sx/ws-trust/200512/">
<wsdl:types>
<xs:schema elementFormDefault="qualified" targetNamespace="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<xs:element name="RequestSecurityToken" type="wst:AbstractRequestSecurityTokenType"/>
<xs:element name="RequestSecurityTokenResponse" type="wst:AbstractRequestSecurityTokenType"/>
<xs:complexType name="AbstractRequestSecurityTokenType">
<xs:sequence>
<xs:any namespace="##any" processContents="lax" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
<xs:attribute name="Context" type="xs:anyURI" use="optional"/>
<xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
<xs:element name="RequestSecurityTokenCollection" type="wst:RequestSecurityTokenCollectionType"/>
<xs:complexType name="RequestSecurityTokenCollectionType">
<xs:sequence>
<xs:element name="RequestSecurityToken" type="wst:AbstractRequestSecurityTokenType" minOccurs="2" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
<xs:element name="RequestSecurityTokenResponseCollection" type="wst:RequestSecurityTokenResponseCollectionType"/>
<xs:complexType name="RequestSecurityTokenResponseCollectionType">
<xs:sequence>
<xs:element ref="wst:RequestSecurityTokenResponse" minOccurs="1" maxOccurs="unbounded"/>
</xs:sequence>
<xs:anyAttribute namespace="##other" processContents="lax"/>
</xs:complexType>
</xs:schema>
</wsdl:types>
<!-- WS-Trust defines the following GEDs -->
<wsdl:message name="RequestSecurityTokenMsg">
<wsdl:part name="request" element="wst:RequestSecurityToken"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenResponseMsg">
<wsdl:part name="response" element="wst:RequestSecurityTokenResponse"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenCollectionMsg">
<wsdl:part name="requestCollection" element="wst:RequestSecurityTokenCollection"/>
</wsdl:message>
<wsdl:message name="RequestSecurityTokenResponseCollectionMsg">
<wsdl:part name="responseCollection" element="wst:RequestSecurityTokenResponseCollection"/>
</wsdl:message>
<!-- This portType an example of a Requestor (or other) endpoint that
Accepts SOAP-based challenges from a Security Token Service -->
<wsdl:portType name="WSSecurityRequestor">
<wsdl:operation name="Challenge">
<wsdl:input message="tns:RequestSecurityTokenResponseMsg"/>
<wsdl:output message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
</wsdl:portType>
<!-- This portType is an example of an STS supporting full protocol -->
<wsdl:portType name="STS">
<wsdl:operation name="Cancel">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Cancel" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/CancelFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="Issue">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal" message="tns:RequestSecurityTokenResponseCollectionMsg"/>
</wsdl:operation>
<wsdl:operation name="Renew">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Renew" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/RenewFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="Validate">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Validate" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/ValidateFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="KeyExchangeToken">
<wsdl:input wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/KET" message="tns:RequestSecurityTokenMsg"/>
<wsdl:output wsam:Action="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTR/KETFinal" message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
<wsdl:operation name="RequestCollection">
<wsdl:input message="tns:RequestSecurityTokenCollectionMsg"/>
<wsdl:output message="tns:RequestSecurityTokenResponseCollectionMsg"/>
</wsdl:operation>
</wsdl:portType>
<!-- This portType is an example of an endpoint that accepts
Unsolicited RequestSecurityTokenResponse messages -->
<wsdl:portType name="SecurityTokenResponseService">
<wsdl:operation name="RequestSecurityTokenResponse">
<wsdl:input message="tns:RequestSecurityTokenResponseMsg"/>
</wsdl:operation>
</wsdl:portType>
<wsdl:binding name="STS_Binding" type="wstrust:STS">
<wsp:PolicyReference URI="#STS_policy"/>
<soap:binding style="document" transport="http://schemas.xmlsoap.org/soap/http"/>
<wsdl:operation name="Issue">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Validate">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Validate"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Cancel">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Cancel"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="Renew">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Renew"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="KeyExchangeToken">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/KeyExchangeToken"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
<wsdl:operation name="RequestCollection">
<soap:operation soapAction="http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/RequestCollection"/>
<wsdl:input>
<soap:body use="literal"/>
</wsdl:input>
<wsdl:output>
<soap:body use="literal"/>
</wsdl:output>
</wsdl:operation>
</wsdl:binding>
<wsp:Policy wsu:Id="STS_policy">
<wsp:ExactlyOne>
<wsp:All>
<wsap10:UsingAddressing/>
<wsp:ExactlyOne>
<sp:TransportBinding xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:TransportToken>
<wsp:Policy>
<sp:HttpsToken>
<wsp:Policy/>
</sp:HttpsToken>
</wsp:Policy>
</sp:TransportToken>
<sp:AlgorithmSuite>
<wsp:Policy>
<sp:Basic128/>
</wsp:Policy>
</sp:AlgorithmSuite>
<sp:Layout>
<wsp:Policy>
<sp:Lax/>
</wsp:Policy>
</sp:Layout>
<sp:IncludeTimestamp/>
</wsp:Policy>
</sp:TransportBinding>
</wsp:ExactlyOne>
<sp:Wss11 xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:MustSupportRefKeyIdentifier/>
<sp:MustSupportRefIssuerSerial/>
<sp:MustSupportRefThumbprint/>
<sp:MustSupportRefEncryptedKey/>
</wsp:Policy>
</sp:Wss11>
<sp:Trust13 xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<wsp:Policy>
<sp:MustSupportIssuedTokens/>
<sp:RequireClientEntropy/>
<sp:RequireServerEntropy/>
</wsp:Policy>
</sp:Trust13>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="Input_policy">
<wsp:ExactlyOne>
<wsp:All>
<sp:SignedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
<sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="From" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="FaultTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="ReplyTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="MessageID" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="RelatesTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="Action" Namespace="http://www.w3.org/2005/08/addressing"/>
</sp:SignedParts>
<sp:EncryptedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
</sp:EncryptedParts>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsp:Policy wsu:Id="Output_policy">
<wsp:ExactlyOne>
<wsp:All>
<sp:SignedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
<sp:Header Name="To" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="From" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="FaultTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="ReplyTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="MessageID" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="RelatesTo" Namespace="http://www.w3.org/2005/08/addressing"/>
<sp:Header Name="Action" Namespace="http://www.w3.org/2005/08/addressing"/>
</sp:SignedParts>
<sp:EncryptedParts xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
<sp:Body/>
</sp:EncryptedParts>
</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>
<wsdl:service name="SecurityTokenService">
<wsdl:port name="STS_Port" binding="tns:STS_Binding">
<soap:address location="http://{FQDN}:{PORT}/services/SecurityTokenService"/>
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
27.14.1. Example Requests and Responses for SAML Assertions
A client performs a RequestSecurityToken operation against the STS to receive a SAML assertion. The DDF STS offers several different ways to request a SAML assertion. For help in understanding the various request and response formats, samples have been provided. The samples are divided out into different request token types.
27.14.2. BinarySecurityToken (CAS) SAML Security Token Samples
Most endpoints in DDF require the X.509 PublicKey SAML assertion.
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:60652909-faca-4e4a-a4a7-8a5ce243a7cb</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">https://server:8993/services/SecurityTokenService</To>
<ReplyTo xmlns="http://www.w3.org/2005/08/addressing">
<Address>http://www.w3.org/2005/08/addressing/anonymous</Address>
</ReplyTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-1">
<wsu:Created>2013-04-29T18:35:10.688Z</wsu:Created>
<wsu:Expires>2013-04-29T18:40:10.688Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/SecurityTokenService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
<ic:ClaimType xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
</wst:Claims>
<wst:OnBehalfOf>
<BinarySecurityToken ValueType="#CAS" EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary" ns1:Id="CAS" xmlns="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns1="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">U1QtMTQtYUtmcDYxcFRtS0FxZG1pVDMzOWMtY2FzfGh0dHBzOi8vdG9rZW5pc3N1ZXI6ODk5My9zZXJ2aWNlcy9TZWN1cml0eVRva2VuU2VydmljZQ==</BinarySecurityToken>
</wst:OnBehalfOf>
<wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
<wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</wst:KeyType>
<wst:UseKey>
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>
MIIC5DCCAk2gAwIBAgIJAKj7ROPHjo1yMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJVUzEQ
MA4GA1UECAwHQXJpem9uYTERMA8GA1UEBwwIR29vZHllYXIxGDAWBgNVBAoMD0xvY2toZWVkIE1h
cnRpbjENMAsGA1UECwwESTRDRTEPMA0GA1UEAwwGY2xpZW50MRwwGgYJKoZIhvcNAQkBFg1pNGNl
QGxtY28uY29tMB4XDTEyMDYyMDE5NDMwOVoXDTIyMDYxODE5NDMwOVowgYoxCzAJBgNVBAYTAlVT
MRAwDgYDVQQIDAdBcml6b25hMREwDwYDVQQHDAhHb29keWVhcjEYMBYGA1UECgwPTG9ja2hlZWQg
TWFydGluMQ0wCwYDVQQLDARJNENFMQ8wDQYDVQQDDAZjbGllbnQxHDAaBgkqhkiG9w0BCQEWDWk0
Y2VAbG1jby5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAIpHxCBLYE7xfDLcITS9SsPG
4Q04Z6S32/+TriGsRgpGTj/7GuMG7oJ98m6Ws5cTYl7nyunyHTkZuP7rBzy4esDIHheyx18EgdSJ
vvACgGVCnEmHndkf9bWUlAOfNaxW+vZwljUkRUVdkhPbPdPwOcMdKg/SsLSNjZfsQIjoWd4rAgMB
AAGjUDBOMB0GA1UdDgQWBBQx11VLtYXLvFGpFdHnhlNW9+lxBDAfBgNVHSMEGDAWgBQx11VLtYXL
vFGpFdHnhlNW9+lxBDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4GBAHYs2OI0K6yVXzyS
sKcv2fmfw6XCICGTnyA7BOdAjYoqq6wD+33dHJUCFDqye7AWdcivuc7RWJt9jnlfJZKIm2BHcDTR
Hhk6CvjJ14Gf40WQdeMHoX8U8b0diq7Iy5Ravx+zRg7SdiyJUqFYjRh/O5tywXRT1+freI3bwAN0
L6tQ
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</wst:UseKey>
<wst:Renewing/>
</wst:RequestSecurityToken>
</soap:Body>
</soap:Envelope>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:7a6fde04-9013-41ef-b08b-0689ffa9c93e</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:60652909-faca-4e4a-a4a7-8a5ce243a7cb</RelatesTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-2">
<wsu:Created>2013-04-29T18:35:11.459Z</wsu:Created>
<wsu:Expires>2013-04-29T18:40:11.459Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns2="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns4="http://www.w3.org/2005/08/addressing" xmlns:ns5="http://docs.oasis-open.org/ws-sx/ws-trust/200802">
<RequestSecurityTokenResponse>
<TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</TokenType>
<RequestedSecurityToken>
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="_BDC44EB8593F47D1B213672605113671" IssueInstant="2013-04-29T18:35:11.370Z" Version="2.0" xsi:type="saml2:AssertionType">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_BDC44EB8593F47D1B213672605113671">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xs"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>6wnWbft6Pz5XOF5Q9AG59gcGwLY=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>h+NvkgXGdQtca3/eKebhAKgG38tHp3i2n5uLLy8xXXIg02qyKgEP0FCowp2LiYlsQU9YjKfSwCUbH3WR6jhbAv9zj29CE+ePfEny7MeXvgNl3wId+vcHqti/DGGhhgtO2Mbx/tyX1BhHQUwKRlcHajxHeecwmvV7D85NMdV48tI=</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDmjCCAwOgAwIBAgIBBDANBgkqhkiG9w0BAQQFADB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMH
QXJpem9uYTERMA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4
YW1wbGUxEDAOBgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBMB4XDTEzMDQwOTE4MzcxMVoXDTIz
MDQwNzE4MzcxMVowgaYxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMREwDwYDVQQHEwhH
b29keWVhcjEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UECxMHRXhh
bXBsZTEUMBIGA1UEAxMLdG9rZW5pc3N1ZXIxJjAkBgkqhkiG9w0BCQEWF3Rva2VuaXNzdWVyQGV4
YW1wbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDDfktpA8Lrp9rTfRibKdgtxtN9
uB44diiIqq3JOzDGfDhGLu6mjpuHO1hrKItv42hBOhhmH7lS9ipiaQCIpVfgIG63MB7fa5dBrfGF
G69vFrU1Lfi7IvsVVsNrtAEQljOMmw9sxS3SUsRQX+bD8jq7Uj1hpoF7DdqpV8Kb0COOGwIDAQAB
o4IBBjCCAQIwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2Vy
dGlmaWNhdGUwHQYDVR0OBBYEFD1mHviop2Tc4HaNu8yPXR6GqWP1MIGnBgNVHSMEgZ8wgZyAFBcn
en6/j05DzaVwORwrteKc7TZOoXmkdzB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTER
MA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4YW1wbGUxEDAO
BgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBggkAwXk7OcwO7gwwDQYJKoZIhvcNAQEEBQADgYEA
PiTX5kYXwdhmijutSkrObKpRbQkvkkzcyZlO6VrAxRQ+eFeN6NyuyhgYy5K6l/sIWdaGou5iJOQx
2pQYWx1v8Klyl0W22IfEAXYv/epiO89hpdACryuDJpioXI/X8TAwvRwLKL21Dk3k2b+eyCgA0O++
HM0dPfiQLQ99ElWkv/0=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">srogers</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<saml2:SubjectConfirmationData xsi:type="saml2:KeyInfoConfirmationDataType">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIC5DCCAk2gAwIBAgIJAKj7ROPHjo1yMA0GCSqGSIb3DQEBCwUAMIGKMQswCQYDVQQGEwJVUzEQ
MA4GA1UECAwHQXJpem9uYTERMA8GA1UEBwwIR29vZHllYXIxGDAWBgNVBAoMD0xvY2toZWVkIE1h
cnRpbjENMAsGA1UECwwESTRDRTEPMA0GA1UEAwwGY2xpZW50MRwwGgYJKoZIhvcNAQkBFg1pNGNl
QGxtY28uY29tMB4XDTEyMDYyMDE5NDMwOVoXDTIyMDYxODE5NDMwOVowgYoxCzAJBgNVBAYTAlVT
MRAwDgYDVQQIDAdBcml6b25hMREwDwYDVQQHDAhHb29keWVhcjEYMBYGA1UECgwPTG9ja2hlZWQg
TWFydGluMQ0wCwYDVQQLDARJNENFMQ8wDQYDVQQDDAZjbGllbnQxHDAaBgkqhkiG9w0BCQEWDWk0
Y2VAbG1jby5jb20wgZ8wDQYJKoZIhvcNAQEBBQADgY0AMIGJAoGBAIpHxCBLYE7xfDLcITS9SsPG
4Q04Z6S32/+TriGsRgpGTj/7GuMG7oJ98m6Ws5cTYl7nyunyHTkZuP7rBzy4esDIHheyx18EgdSJ
vvACgGVCnEmHndkf9bWUlAOfNaxW+vZwljUkRUVdkhPbPdPwOcMdKg/SsLSNjZfsQIjoWd4rAgMB
AAGjUDBOMB0GA1UdDgQWBBQx11VLtYXLvFGpFdHnhlNW9+lxBDAfBgNVHSMEGDAWgBQx11VLtYXL
vFGpFdHnhlNW9+lxBDAMBgNVHRMEBTADAQH/MA0GCSqGSIb3DQEBCwUAA4GBAHYs2OI0K6yVXzyS
sKcv2fmfw6XCICGTnyA7BOdAjYoqq6wD+33dHJUCFDqye7AWdcivuc7RWJt9jnlfJZKIm2BHcDTR
Hhk6CvjJ14Gf40WQdeMHoX8U8b0diq7Iy5Ravx+zRg7SdiyJUqFYjRh/O5tywXRT1+freI3bwAN0
L6tQ</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</saml2:SubjectConfirmationData>
</saml2:SubjectConfirmation>
</saml2:Subject>
<saml2:Conditions NotBefore="2013-04-29T18:35:11.407Z" NotOnOrAfter="2013-04-29T19:05:11.407Z">
<saml2:AudienceRestriction>
<saml2:Audience>https://server:8993/services/SecurityTokenService</saml2:Audience>
</saml2:AudienceRestriction>
</saml2:Conditions>
<saml2:AuthnStatement AuthnInstant="2013-04-29T18:35:11.392Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Steve Rogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</RequestedSecurityToken>
<RequestedAttachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_BDC44EB8593F47D1B213672605113671</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedAttachedReference>
<RequestedUnattachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_BDC44EB8593F47D1B213672605113671</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedUnattachedReference>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/SecurityTokenService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<Lifetime>
<ns2:Created>2013-04-29T18:35:11.444Z</ns2:Created>
<ns2:Expires>2013-04-29T19:05:11.444Z</ns2:Expires>
</Lifetime>
</RequestSecurityTokenResponse>
</RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
To obtain a SAML assertion to use in secure communication to DDF, a RequestSecurityToken (RST) request has to be made to the STS.
A Bearer SAML assertion is automatically trusted by the endpoint; the client does not have to prove that it possesses the assertion. This is the simplest way to request a SAML assertion, but many endpoints will not accept a KeyType of Bearer.
27.14.3. UsernameToken Bearer SAML Security Token Sample
The STS that comes with DDF requires the following to be in the RequestSecurityToken request in order to issue a valid SAML assertion. See the request block below for an example of how these components should be populated.
-
WS-Addressing header with Action, To, and Message ID
-
Valid, non-expired timestamp
-
Username Token containing a username and password that the STS will authenticate
-
Issued over HTTPS
-
KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer
-
Claims (optional): Some endpoints may require that the SAML assertion include attributes of the user, such as an authenticated user’s role, name identifier, email address, etc. If the SAML assertion needs those attributes, the RequestSecurityToken must specify which ones to include.
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-1">
<wsu:Created>2013-04-29T17:47:37.817Z</wsu:Created>
<wsu:Expires>2013-04-29T17:57:37.817Z</wsu:Expires>
</wsu:Timestamp>
<wsse:UsernameToken wsu:Id="UsernameToken-1">
<wsse:Username>srogers</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password1</wsse:Password>
</wsse:UsernameToken>
</wsse:Security>
<wsa:Action>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</wsa:Action>
<wsa:MessageID>uuid:a1bba87b-0f00-46cc-975f-001391658cbe</wsa:MessageID>
<wsa:To>https://server:8993/services/SecurityTokenService</wsa:To>
</soap:Header>
<soap:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:SecondaryParameters>
<t:TokenType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</t:TokenType>
<t:KeyType xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512">http://docs.oasis-open.org/ws-sx/ws-trust/200512/Bearer</t:KeyType>
<t:Claims xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:t="http://docs.oasis-open.org/ws-sx/ws-trust/200512" Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity">
<!--Add any additional claims you want to grab for the service-->
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/uid"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
</t:Claims>
</wst:SecondaryParameters>
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:Renewing/>
</wst:RequestSecurityToken>
</soap:Body>
</soap:Envelope>
This is the response from the STS containing the SAML assertion to be used in subsequent requests to QCRUD endpoints:
The saml2:Assertion block contains the entire SAML assertion. The Signature block contains a signature from the STS’s private key. The endpoint receiving the SAML assertion will verify that it trusts the signer and ensure that the message wasn’t tampered with. The AttributeStatement block contains all the Claims requested. The Lifetime block indicates the valid time interval in which the SAML assertion can be used.
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:eee4c6ef-ac10-4cbc-a53c-13d960e3b6e8</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">uuid:a1bba87b-0f00-46cc-975f-001391658cbe</RelatesTo>
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" soap:mustUnderstand="1">
<wsu:Timestamp wsu:Id="TS-2">
<wsu:Created>2013-04-29T17:49:12.624Z</wsu:Created>
<wsu:Expires>2013-04-29T17:54:12.624Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns2="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns4="http://www.w3.org/2005/08/addressing" xmlns:ns5="http://docs.oasis-open.org/ws-sx/ws-trust/200802">
<RequestSecurityTokenResponse>
<TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</TokenType>
<RequestedSecurityToken>
<saml2:Assertion xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" ID="_7437C1A55F19AFF22113672577526132" IssueInstant="2013-04-29T17:49:12.613Z" Version="2.0" xsi:type="saml2:AssertionType">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_7437C1A55F19AFF22113672577526132">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="xs"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>ReOqEbGZlyplW5kqiynXOjPnVEA=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>X5Kzd54PrKIlGVV2XxzCmWFRzHRoybF7hU6zxbEhSLMR0AWS9R7Me3epq91XqeOwvIDDbwmE/oJNC7vI0fIw/rqXkx4aZsY5a5nbAs7f+aXF9TGdk82x2eNhNGYpViq0YZJfsJ5WSyMtG8w5nRekmDMy9oTLsHG+Y/OhJDEwq58=</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIDmjCCAwOgAwIBAgIBBDANBgkqhkiG9w0BAQQFADB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMH
QXJpem9uYTERMA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4
YW1wbGUxEDAOBgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBMB4XDTEzMDQwOTE4MzcxMVoXDTIz
MDQwNzE4MzcxMVowgaYxCzAJBgNVBAYTAlVTMRAwDgYDVQQIEwdBcml6b25hMREwDwYDVQQHEwhH
b29keWVhcjEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UEChMHRXhhbXBsZTEQMA4GA1UECxMHRXhh
bXBsZTEUMBIGA1UEAxMLdG9rZW5pc3N1ZXIxJjAkBgkqhkiG9w0BCQEWF3Rva2VuaXNzdWVyQGV4
YW1wbGUuY29tMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDDfktpA8Lrp9rTfRibKdgtxtN9
uB44diiIqq3JOzDGfDhGLu6mjpuHO1hrKItv42hBOhhmH7lS9ipiaQCIpVfgIG63MB7fa5dBrfGF
G69vFrU1Lfi7IvsVVsNrtAEQljOMmw9sxS3SUsRQX+bD8jq7Uj1hpoF7DdqpV8Kb0COOGwIDAQAB
o4IBBjCCAQIwCQYDVR0TBAIwADAsBglghkgBhvhCAQ0EHxYdT3BlblNTTCBHZW5lcmF0ZWQgQ2Vy
dGlmaWNhdGUwHQYDVR0OBBYEFD1mHviop2Tc4HaNu8yPXR6GqWP1MIGnBgNVHSMEgZ8wgZyAFBcn
en6/j05DzaVwORwrteKc7TZOoXmkdzB1MQswCQYDVQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTER
MA8GA1UEBxMIR29vZHllYXIxEDAOBgNVBAoTB0V4YW1wbGUxEDAOBgNVBAoTB0V4YW1wbGUxEDAO
BgNVBAsTB0V4YW1wbGUxCzAJBgNVBAMTAkNBggkAwXk7OcwO7gwwDQYJKoZIhvcNAQEEBQADgYEA
PiTX5kYXwdhmijutSkrObKpRbQkvkkzcyZlO6VrAxRQ+eFeN6NyuyhgYy5K6l/sIWdaGou5iJOQx
2pQYWx1v8Klyl0W22IfEAXYv/epiO89hpdACryuDJpioXI/X8TAwvRwLKL21Dk3k2b+eyCgA0O++
HM0dPfiQLQ99ElWkv/0=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">srogers</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:bearer"/>
</saml2:Subject>
<saml2:Conditions NotBefore="2013-04-29T17:49:12.614Z" NotOnOrAfter="2013-04-29T18:19:12.614Z">
<saml2:AudienceRestriction>
<saml2:Audience>https://server:8993/services/QueryService</saml2:Audience>
</saml2:AudienceRestriction>
</saml2:Conditions>
<saml2:AuthnStatement AuthnInstant="2013-04-29T17:49:12.613Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">srogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Steve Rogers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</RequestedSecurityToken>
<RequestedAttachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_7437C1A55F19AFF22113672577526132</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedAttachedReference>
<RequestedUnattachedReference>
<ns3:SecurityTokenReference xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd" wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<ns3:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_7437C1A55F19AFF22113672577526132</ns3:KeyIdentifier>
</ns3:SecurityTokenReference>
</RequestedUnattachedReference>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy" xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<Lifetime>
<ns2:Created>2013-04-29T17:49:12.620Z</ns2:Created>
<ns2:Expires>2013-04-29T18:19:12.620Z</ns2:Expires>
</Lifetime>
</RequestSecurityTokenResponse>
</RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
In order to obtain a SAML assertion to use in secure communication to DDF, a RequestSecurityToken (RST) request has to be made to the STS.
An endpoint’s policy will specify the type of security token needed. Most of the endpoints that have been used with DDF require a SAML v2.0 assertion with a required KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey. This means that the SAML assertion provided by the client to a DDF endpoint must contain a SubjectConfirmation block with a type of "holder-of-key" containing the client’s public key. This is used to prove that the client can possess the SAML assertion returned by the STS.
27.14.4. X.509 PublicKey SAML Security Token Sample Request
The STS that comes with DDF requires the following to be in the RequestSecurityToken request in order to issue a valid SAML assertion. See the request block below for an example of how these components should be populated.
-
WS-Addressing header containing Action, To, and MessageID blocks
-
Valid, non-expired timestamp
-
Issued over HTTPS
-
TokenType of http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0
-
KeyType of http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey
-
X509 Certificate as the Proof of Possession or POP. This needs to be the certificate of the client that will be both requesting the SAML assertion and using the SAML assertion to issue a query.
-
Claims (optional): Some endpoints may require that the SAML assertion include attributes of the user, such as an authenticated user’s role, name identifier, email address, etc. If the SAML assertion needs those attributes, the RequestSecurityToken must specify which ones to include.
-
UsernameToken: If Claims are required, the RequestSecurityToken security header must contain a UsernameToken element with a username and password.
<soapenv:Envelope xmlns:ns="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Action>http://docs.oasis-open.org/ws-sx/ws-trust/200512/RST/Issue</wsa:Action>
<wsa:MessageID>uuid:527243af-94bd-4b5c-a1d8-024fd7e694c5</wsa:MessageID>
<wsa:To>https://server:8993/services/SecurityTokenService</wsa:To>
<wsse:Security soapenv:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsu:Timestamp wsu:Id="TS-17">
<wsu:Created>2014-02-19T17:30:40.771Z</wsu:Created>
<wsu:Expires>2014-02-19T19:10:40.771Z</wsu:Expires>
</wsu:Timestamp>
<!-- OPTIONAL: Only required if the endpoint that the SAML assertion will be sent to requires claims. -->
<wsse:UsernameToken wsu:Id="UsernameToken-16">
<wsse:Username>pparker</wsse:Username>
<wsse:Password Type="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText">password1</wsse:Password>
<wsse:Nonce EncodingType="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary">LCTD+5Y7hlWIP6SpsEg9XA==</wsse:Nonce>
<wsu:Created>2014-02-19T17:30:37.355Z</wsu:Created>
</wsse:UsernameToken>
</wsse:Security>
</soapenv:Header>
<soapenv:Body>
<wst:RequestSecurityToken xmlns:wst="http://docs.oasis-open.org/ws-sx/ws-trust/200512">
<wst:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</wst:TokenType>
<wst:KeyType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/PublicKey</wst:KeyType>
<!-- OPTIONAL: Only required if the endpoint that the SAML assertion will be sent to requires claims. -->
<wst:Claims Dialect="http://schemas.xmlsoap.org/ws/2005/05/identity" xmlns:ic="http://schemas.xmlsoap.org/ws/2005/05/identity">
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname"/>
<ic:ClaimType Optional="true" Uri="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname"/>
</wst:Claims>
<wst:RequestType>http://docs.oasis-open.org/ws-sx/ws-trust/200512/Issue</wst:RequestType>
<wsp:AppliesTo xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsa:EndpointReference xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsa:Address>https://server:8993/services/QueryService</wsa:Address>
</wsa:EndpointReference>
</wsp:AppliesTo>
<wst:UseKey>
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIFGDCCBACgAwIBAgICJe0wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjU0OVoXDTE2MDUwNzAwMjU0OVowaTELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxDzANBgNVBAMTBmNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAOq6L1/jjZ5cyhjhHEbOHr5WQpboKACYbrsn8lg85LGNoAfcwImr9KBmOxGb
ZCxHYIhkW7pJ+kppyH8DbbbDMviIvvdkvrAIU0l8OBRn2wReCBGQ01Imdc3+WzFF2svW75d6wii2ZVd
eMvUO15p/pAD/sdIfXmAfyu8+tqtiO8KVZGkTnlg3AMzfeSrkci5UHMVWj0qUSuzLk9SAg/9STgb
Kf2xBpHUYecWFSB+dTpdZN2pC85tj9xIoWGh5dFWG1fPcYRgzGPxsybiGOylbJ7rHDJuL7IIIyx5
EnkCuxmQwoQ6XQAhiWRGyPlY08w1LZixI2v+Cv/ZjUfIHv49I9P4Mt8CAwEAAaOCAdUwggHRMB8G
A1UdIwQYMBaAFCMUNCBNXy43NZLBBlnDjDplNZJoMB0GA1UdDgQWBBRPGiX6zZzKTqQSx/tjg6hx
9opDoTAOBgNVHQ8BAf8EBAMCBaAwgdoGA1UdHwSB0jCBzzA2oDSgMoYwaHR0cDovL2NybC5nZHMu
bml0LmRpc2EubWlsL2NybC9ET0RKSVRDQ0FfMjcuY3JsMIGUoIGRoIGOhoGLbGRhcDovL2NybC5n
ZHMubml0LmRpc2EubWlsL2NuJTNkRE9EJTIwSklUQyUyMENBLTI3JTJjb3UlM2RQS0klMmNvdSUz
ZERvRCUyY28lM2RVLlMuJTIwR292ZXJubWVudCUyY2MlM2RVUz9jZXJ0aWZpY2F0ZXJldm9jYXRp
b25saXN0O2JpbmFyeTAjBgNVHSAEHDAaMAsGCWCGSAFlAgELBTALBglghkgBZQIBCxIwfQYIKwYB
BQUHAQEEcTBvMD0GCCsGAQUFBzAChjFodHRwOi8vY3JsLmdkcy5uaXQuZGlzYS5taWwvc2lnbi9E
T0RKSVRDQ0FfMjcuY2VyMC4GCCsGAQUFBzABhiJodHRwOi8vb2NzcC5uc24wLnJjdnMubml0LmRp
c2EubWlsMA0GCSqGSIb3DQEBBQUAA4IBAQCGUJPGh4iGCbr2xCMqCq04SFQ+iaLmTIFAxZPFvup1
4E9Ir6CSDalpF9eBx9fS+Z2xuesKyM/g3YqWU1LtfWGRRIxzEujaC4YpwHuffkx9QqkwSkXXIsim
EhmzSgzxnT4Q9X8WwalqVYOfNZ6sSLZ8qPPFrLHkkw/zIFRzo62wXLu0tfcpOr+iaJBhyDRinIHr
hwtE3xo6qQRRWlO3/clC4RnTev1crFVJQVBF3yfpRu8udJ2SOGdqU0vjUSu1h7aMkYJMHIu08Whj
8KASjJBFeHPirMV1oddJ5ydZCQ+Jmnpbwq+XsCxg1LjC4dmbjKVr9s4QK+/JLNjxD8IkJiZE</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</wst:UseKey>
</wst:RequestSecurityToken>
</soapenv:Body>
</soapenv:Envelope>
27.14.5. X.509 PublicKey SAML Security Token Sample Response
This is the response from the STS containing the SAML assertion to be used in subsequent requests to QCRUD endpoints.
The saml2:Assertion block contains the entire SAML assertion. The Signature block contains a signature from the STS’s private key. The endpoint receiving the SAML assertion will verify that it trusts the signer and ensure that the message wasn’t tampered with. The SubjectConfirmation block contains the client’s public key, so the server can verify that the client is allowed to hold this SAML assertion. The AttributeStatement block contains all of the claims requested.
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Header>
<Action xmlns="http://www.w3.org/2005/08/addressing">http://docs.oasis-open.org/ws-sx/ws-trust/200512/RSTRC/IssueFinal</Action>
<MessageID xmlns="http://www.w3.org/2005/08/addressing">urn:uuid:b46c35ad-3120-4233-ae07-b9e10c7911f3</MessageID>
<To xmlns="http://www.w3.org/2005/08/addressing">http://www.w3.org/2005/08/addressing/anonymous</To>
<RelatesTo xmlns="http://www.w3.org/2005/08/addressing">uuid:527243af-94bd-4b5c-a1d8-024fd7e694c5</RelatesTo>
<wsse:Security soap:mustUnderstand="1" xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd">
<wsu:Timestamp wsu:Id="TS-90DBA0754E55B4FE7013928310431357">
<wsu:Created>2014-02-19T17:30:43.135Z</wsu:Created>
<wsu:Expires>2014-02-19T17:35:43.135Z</wsu:Expires>
</wsu:Timestamp>
</wsse:Security>
</soap:Header>
<soap:Body>
<ns2:RequestSecurityTokenResponseCollection xmlns="http://docs.oasis-open.org/ws-sx/ws-trust/200802" xmlns:ns2="http://docs.oasis-open.org/ws-sx/ws-trust/200512" xmlns:ns3="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:ns4="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" xmlns:ns5="http://www.w3.org/2005/08/addressing">
<ns2:RequestSecurityTokenResponse>
<ns2:TokenType>http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0</ns2:TokenType>
<ns2:RequestedSecurityToken>
<saml2:Assertion ID="_90DBA0754E55B4FE7013928310431176" IssueInstant="2014-02-19T17:30:43.117Z" Version="2.0" xsi:type="saml2:AssertionType" xmlns:saml2="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xs="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<saml2:Issuer>tokenissuer</saml2:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
<ds:Reference URI="#_90DBA0754E55B4FE7013928310431176">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#">
<ec:InclusiveNamespaces PrefixList="xs" xmlns:ec="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transform>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
<ds:DigestValue>/bEGqsRGHVJbx298WPmGd8I53zs=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>
mYR7w1/dnuh8Z7t9xjCb4XkYQLshj+UuYlGOuTwDYsUPcS2qI0nAgMD1VsDP7y1fDJxeqsq7HYhFKsnqRfebMM4WLH1D/lJ4rD4UO+i9l3tuiHml7SN24WM1/bOqfDUCoDqmwG8afUJ3r4vmTNPxfwfOss8BZ/8ODgZzm08ndlkxDfvcN7OrExbV/3/45JwF/MMPZoqvi2MJGfX56E9fErJNuzezpWnRqPOlWPxyffKMAlVaB9zF6gvVnUqcW2k/Z8X9lN7O5jouBI281ZnIfsIPuBJERFtYNVDHsIXM1pJnrY6FlKIaOsi55LQu3Ruir/n82pU7BT5aWtxwrn7akBg== </ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIIFHTCCBAWgAwIBAgICJe8wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjYzN1oXDTE2MDUwNzAwMjYzN1owbjELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxFDASBgNVBAMTC3Rva2VuaXNzdWVyMIIBIjANBgkqhkiG9w0B
AQEFAAOCAQ8AMIIBCgKCAQEAx01/U4M1wG+wL1JxX2RL1glj101FkJXMk3KFt3zD//N8x/Dcwwvs
ngCQjXrV6YhbB2V7scHwnThPv3RSwYYiO62z+g6ptfBbKGGBLSZOzLe3fyJR4RxblFKsELFgPHfX
vgUHS/keG5uSRk9S/Okqps/yxKB7+ZlxeFxsIz5QywXvBpMiXtc2zF+M7BsbSIdSx5LcPcDFBwjF
c66rE3/y/25VMht9EZX1QoKr7f8rWD4xgd5J6DYMFWEcmiCz4BDJH9sfTw+n1P+CYgrhwslWGqxt
cDME9t6SWR3GLT4Sdtr8ziIM5uUteEhPIV3rVC3/u23JbYEeS8mpnp0bxt5eHQIDAQABo4IB1TCC
AdEwHwYDVR0jBBgwFoAUIxQ0IE1fLjc1ksEGWcOMOmU1kmgwHQYDVR0OBBYEFGBjdkdey+bMHMhC
Z7gwiQ/mJf5VMA4GA1UdDwEB/wQEAwIFoDCB2gYDVR0fBIHSMIHPMDagNKAyhjBodHRwOi8vY3Js
Lmdkcy5uaXQuZGlzYS5taWwvY3JsL0RPREpJVENDQV8yNy5jcmwwgZSggZGggY6GgYtsZGFwOi8v
Y3JsLmdkcy5uaXQuZGlzYS5taWwvY24lM2RET0QlMjBKSVRDJTIwQ0EtMjclMmNvdSUzZFBLSSUy
Y291JTNkRG9EJTJjbyUzZFUuUy4lMjBHb3Zlcm5tZW50JTJjYyUzZFVTP2NlcnRpZmljYXRlcmV2
b2NhdGlvbmxpc3Q7YmluYXJ5MCMGA1UdIAQcMBowCwYJYIZIAWUCAQsFMAsGCWCGSAFlAgELEjB9
BggrBgEFBQcBAQRxMG8wPQYIKwYBBQUHMAKGMWh0dHA6Ly9jcmwuZ2RzLm5pdC5kaXNhLm1pbC9z
aWduL0RPREpJVENDQV8yNy5jZXIwLgYIKwYBBQUHMAGGImh0dHA6Ly9vY3NwLm5zbjAucmN2cy5u
aXQuZGlzYS5taWwwDQYJKoZIhvcNAQEFBQADggEBAIHZQTINU3bMpJ/PkwTYLWPmwCqAYgEUzSYx
bNcVY5MWD8b4XCdw5nM3GnFlOqr4IrHeyyOzsEbIebTe3bv0l1pHx0Uyj059nAhx/AP8DjVtuRU1
/Mp4b6uJ/4yaoMjIGceqBzHqhHIJinG0Y2azua7eM9hVbWZsa912ihbiupCq22mYuHFP7NUNzBvV
j03YUcsy/sES5sRx9Rops/CBN+LUUYOdJOxYWxo8oAbtF8ABE5ATLAwqz4ttsToKPUYh1sxdx5Ef
APeZ+wYDmMu4OfLckwnCKZgkEtJOxXpdIJHY+VmyZtQSB0LkR5toeH/ANV4259Ia5ZT8h2/vIJBg
6B4=</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<saml2:Subject>
<saml2:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified" NameQualifier="http://cxf.apache.org/sts">pparker</saml2:NameID>
<saml2:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<saml2:SubjectConfirmationData xsi:type="saml2:KeyInfoConfirmationDataType">
<ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:X509Data>
<ds:X509Certificate>MIIFGDCCBACgAwIBAgICJe0wDQYJKoZIhvcNAQEFBQAwXDELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
D1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kxFzAVBgNVBAMTDkRP
RCBKSVRDIENBLTI3MB4XDTEzMDUwNzAwMjU0OVoXDTE2MDUwNzAwMjU0OVowaTELMAkGA1UEBhMC
VVMxGDAWBgNVBAoTD1UuUy4gR292ZXJubWVudDEMMAoGA1UECxMDRG9EMQwwCgYDVQQLEwNQS0kx
EzARBgNVBAsTCkNPTlRSQUNUT1IxDzANBgNVBAMTBmNsaWVudDCCASIwDQYJKoZIhvcNAQEBBQAD
ggEPADCCAQoCggEBAOq6L1/jjZ5cyhjhHEbOHr5WQpboKACYbrsn8lg85LGNoAfcwImr9KBmOxGb
ZCxHYIhkW7pJ+kppyH8bbbviIvvdkvrAIU0l8OBRn2wReCBGQ01Imdc3+WzFF2svW75d6wii2ZVd
eMvUO15p/pAD/sdIfXmAfyu8+tqtiO8KVZGkTnlg3AMzfeSrkci5UHMVWj0qUSuzLk9SAg/9STgb
Kf2xBpHUYecWFSB+dTpdZN2pC85tj9xIoWGh5dFWG1fPcYRgzGPxsybiGOylbJ7rHDJuL7IIIyx5
EnkCuxmQwoQ6XQAhiWRGyPlY08w1LZixI2v+Cv/ZjUfIHv49I9P4Mt8CAwEAAaOCAdUwggHRMB8G
A1UdIwQYMBaAFCMUNCBNXy43NZLBBlnDjDplNZJoMB0GA1UdDgQWBBRPGiX6zZzKTqQSx/tjg6hx
9opDoTAOBgNVHQ8BAf8EBAMCBaAwgdoGA1UdHwSB0jCBzzA2oDSgMoYwaHR0cDovL2NybC5nZHMu
bml0LmRpc2EubWlsL2NybC9ET0RKSVRDQ0FfMjcuY3JsMIGUoIGRoIGOhoGLbGRhcDovL2NybC5n
ZHMubml0LmRpc2EubWlsL2NuJTNkRE9EJTIwSklUQyUyMENBLTI3JTJjb3UlM2RQS0klMmNvdSUz
ZERvRCUyY28lM2RVLlMuJTIwR292ZXJubWVudCUyY2MlM2RVUz9jZXJ0aWZpY2F0ZXJldm9jYXRp
b25saXN0O2JpbmFyeTAjBgNVHSAEHDAaMAsGCWCGSAFlAgELBTALBglghkgBZQIBCxIwfQYIKwYB
BQUHAQEEcTBvMD0GCCsGAQUFBzAChjFodHRwOi8vY3JsLmdkcy5uaXQuZGlzYS5taWwvc2lnbi9E
T0RKSVRDQ0FfMjcuY2VyMC4GCCsGAQUFBzABhiJodHRwOi8vb2NzcC5uc24wLnJjdnMubml0LmRp
c2EubWlsMA0GCSqGSIb3DQEBBQUAA4IBAQCGUJPGh4iGCbr2xCMqCq04SFQ+iaLmTIFAxZPFvup1
4E9Ir6CSDalpF9eBx9fS+Z2xuesKyM/g3YqWU1LtfWGRRIxzEujaC4YpwHuffkx9QqkwSkXXIsim
EhmzSgzxnT4Q9X8WwalqVYOfNZ6sSLZ8qPPFrLHkkw/zIFRzo62wXLu0tfcpOr+iaJBhyDRinIHr
hwtE3xo6qQRRWlO3/clC4RnTev1crFVJQVBF3yfpRu8udJ2SOGdqU0vjUSu1h7aMkYJMHIu08Whj
8KASjJBFeHPirMV1oddJ5ydZCQ+Jmnpbwq+XsCxg1LjC4dmbjKVr9s4QK+/JLNjxD8IkJiZE</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</saml2:SubjectConfirmationData>
</saml2:SubjectConfirmation>
</saml2:Subject>
<saml2:Conditions NotBefore="2014-02-19T17:30:43.119Z" NotOnOrAfter="2014-02-19T18:00:43.119Z"/>
<saml2:AuthnStatement AuthnInstant="2014-02-19T17:30:43.117Z">
<saml2:AuthnContext>
<saml2:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:unspecified</saml2:AuthnContextClassRef>
</saml2:AuthnContext>
</saml2:AuthnStatement>
<!-- This block will only be included if Claims were requested in the RST. -->
<saml2:AttributeStatement>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker@example.com</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/surname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">pparker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/givenname" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">Peter Parker</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">users</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">users</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">avengers</saml2:AttributeValue>
</saml2:Attribute>
<saml2:Attribute Name="http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:unspecified">
<saml2:AttributeValue xsi:type="xs:string">admin</saml2:AttributeValue>
</saml2:Attribute>
</saml2:AttributeStatement>
</saml2:Assertion>
</ns2:RequestedSecurityToken>
<ns2:RequestedAttachedReference>
<ns4:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<ns4:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_90DBA0754E55B4FE7013928310431176</ns4:KeyIdentifier>
</ns4:SecurityTokenReference>
</ns2:RequestedAttachedReference>
<ns2:RequestedUnattachedReference>
<ns4:SecurityTokenReference wsse11:TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0" xmlns:wsse11="http://docs.oasis-open.org/wss/oasis-wss-wssecurity-secext-1.1.xsd">
<ns4:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_90DBA0754E55B4FE7013928310431176</ns4:KeyIdentifier>
</ns4:SecurityTokenReference>
</ns2:RequestedUnattachedReference>
<ns2:Lifetime>
<ns3:Created>2014-02-19T17:30:43.119Z</ns3:Created>
<ns3:Expires>2014-02-19T18:00:43.119Z</ns3:Expires>
</ns2:Lifetime>
</ns2:RequestSecurityTokenResponse>
</ns2:RequestSecurityTokenResponseCollection>
</soap:Body>
</soap:Envelope>
27.15. Developing Registry Clients
Registry Clients create Federated Sources using the OSGi Configuration Admin.
Developers should reference an individual Source’s (Federated, Connected, or Catalog Provider) documentation for the Configuration properties (such as a Factory PID, addresses, intervals, etc.) necessary to establish that Source in the framework.
org.osgi.service.cm.ConfigurationAdmin configurationAdmin = getConfigurationAdmin();
org.osgi.service.cm.Configuration currentConfiguration = configurationAdmin.createFactoryConfiguration(getFactoryPid(), null);
// Dictionary is abstract, so instantiate a concrete subclass such as Hashtable.
Dictionary<String, Object> properties = new Hashtable<>();
properties.put(QUERY_ADDRESS_PROPERTY, queryAddress);
currentConfiguration.update(properties);
Note that the QUERY_ADDRESS_PROPERTY is specific to this Configuration and might not be required for every Source. The properties necessary for creating a Configuration are different for every Source.
27.16. Developing Resource Readers
A ResourceReader is a class that retrieves a resource or product from a native/external source and returns it to DDF.
A simple example is that of a File ResourceReader: it takes a file from the local file system and passes it back to DDF.
New implementations can be created in order to support obtaining Resources from various Resource data stores.
27.16.1. Creating a New ResourceReader
Complete the following procedure to create a ResourceReader.
-
Create a Java class that implements the DDF.catalog.resource.ResourceReader interface.
-
Deploy the OSGi bundled packaged service to the DDF run-time.
27.16.1.1. Implementing the ResourceReader Interface
public class TestResourceReader implements DDF.catalog.resource.ResourceReader
ResourceReader has a couple of key methods where most of the work is performed.
27.16.1.2. retrieveResource
public ResourceResponse retrieveResource(URI uri, Map<String, Serializable> arguments) throws IOException, ResourceNotFoundException, ResourceNotSupportedException;
This method is the main entry to the ResourceReader. It is used to retrieve a Resource and send it back to the caller (generally the CatalogFramework). Information needed to obtain the entry is contained in the URI reference. The URI scheme will need to match a scheme specified in the getSupportedSchemes method. This is how the CatalogFramework determines which ResourceReader implementation to use. If there are multiple ResourceReaders supporting the same scheme, these ResourceReaders will be invoked iteratively. Invocation of the ResourceReaders stops once one of them returns a Resource. Arguments are also passed in; these can be used by the ResourceReader to perform additional operations on the resource. The URLResourceReader is an example ResourceReader that reads a file from a URI.
27.16.1.3. Implement retrieveResource()
-
Define supported schemes (e.g., file, http, etc.).
-
Check if the incoming URI matches a supported scheme. If it does not, throw ResourceNotSupportedException.
if ( !uri.getScheme().equals("http") )
{
    throw new ResourceNotSupportedException("Unsupported scheme received, was expecting http");
}
-
Implement the business logic.
-
For example, the URLResourceReader will obtain the resource through a connection:
URL url = uri.toURL();
URLConnection conn = url.openConnection();
String mimeType = conn.getContentType();
if ( mimeType == null ) {
mimeType = URLConnection.guessContentTypeFromName( url.getFile() );
}
InputStream is = conn.getInputStream();
Note
|
The |
-
Return the Resource in a ResourceResponse. For example:
return new ResourceResponseImpl( new ResourceImpl( new BufferedInputStream( is ), new MimeType( mimeType ), url.getFile() ) );
If the Resource cannot be found, throw a ResourceNotFoundException.
27.16.1.4. getSupportedSchemes
public Set<String> getSupportedSchemes();
This method lets the ResourceReader inform the CatalogFramework about the types of URI schemes that it accepts and should therefore be passed. Some ResourceReaders (like the URLResourceReader) may accept only one scheme, while others may understand more than one. A ResourceReader must accept at least one scheme. As mentioned before, this method is used by the CatalogFramework to determine which ResourceReader to invoke.
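For instance, a reader that only handles HTTP URIs might implement the method as follows. This is a minimal sketch; the single "http" scheme returned here is illustrative, not a DDF requirement.
@Override
public Set<String> getSupportedSchemes() {
    // The CatalogFramework compares a resource URI's scheme against this set
    // when selecting a ResourceReader. (Uses java.util.Set and java.util.Collections.)
    return Collections.singleton("http");
}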
27.16.1.5. Export to OSGi Service Registry
In order for the ResourceReader to be used by the CatalogFramework, it should be exported to the OSGi Service Registry as a DDF.catalog.resource.ResourceReader. See the XML below for an example:
<bean id="customResourceReaderId]" class="example.resource.reader.impl.CustomResourceReader" />
<service ref="customResourceReaderId" interface="DDF.catalog.source.ResourceReader" />
27.17. Developing Resource Writers
A ResourceWriter is an object used to store or delete a Resource. ResourceWriter objects should be registered within the OSGi Service Registry, so clients can retrieve an instance when they need to store a Resource.
27.17.1. Create a New ResourceWriter
Complete the following procedure to create a ResourceWriter.
-
Create a Java class that implements the DDF.catalog.resource.ResourceWriter interface.
import java.io.IOException;
import java.net.URI;
import java.util.Map;
import DDF.catalog.resource.Resource;
import DDF.catalog.resource.ResourceNotFoundException;
import DDF.catalog.resource.ResourceNotSupportedException;
import DDF.catalog.resource.ResourceWriter;
public class SampleResourceWriter implements ResourceWriter {
@Override
public void deleteResource(URI uri, Map<String, Object> arguments) throws ResourceNotFoundException, IOException {
// WRITE IMPLEMENTATION
}
@Override
public URI storeResource(Resource resource, Map<String, Object> arguments) throws ResourceNotSupportedException, IOException {
// WRITE IMPLEMENTATION
return null;
}
@Override
public URI storeResource(Resource resource, String id, Map<String, Object> arguments) throws ResourceNotSupportedException, IOException {
// WRITE IMPLEMENTATION
return null;
}
}
-
Register the implementation as a Service in the OSGi Service Registry.
...
<service ref="ResourceWriterReference" interface="DDF.catalog.resource.ResourceWriter" />
...
-
Deploy the OSGi bundled packaged service to the DDF run-time (refer to the OSGi Basics - Bundles section).
Tip: See the ResourceWriter Javadoc for more details.
27.18. Developing Filters
The common way to create a Filter is to use the GeoTools FilterFactoryImpl object, which provides Java implementations for the various types of filters in the Filter Specification. Examples are the easiest way to understand how to properly create a Filter and a Query.
Note: Refer to the GeoTools javadoc for more information on FilterFactoryImpl.
Warning: Implementing the Filter interface directly is only for extremely advanced use cases and is highly discouraged. Use the DDF-specific Filter Builder API instead.
Developers create a Filter object in order to filter or constrain the number of records returned from a Source. The OGC Filter Specification has several types of filters that can be combined in a tree-like structure to describe the set of metacards that should be returned.
-
Comparison Operators
-
Logical Operators
-
Expressions
-
Literals
-
Functions
-
Spatial Operators
-
Temporal Operators
27.18.1. Units of Measure
According to the OGC Filter Specifications 09-026r1 and 04-095, units of measure can be expressed as a URI. To fulfill that requirement, DDF utilizes the GeoTools class org.geotools.styling.UomOgcMapping for spatial filters requiring a standard for units of measure for scalar distances. Essentially, the UomOgcMapping maps the OGC Symbology Encoding standard URIs to Java Units.
This class provides three options for units of measure:
-
FOOT
-
METRE
-
PIXEL
DDF only supports FOOT and METRE since they are the most applicable to scalar distances.
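As a brief illustration, the mapping can be used to translate between the OGC URI and the Java unit. This sketch assumes the GeoTools UomOgcMapping API described above (getSEString and getUnit); verify the method names against the GeoTools version in use.
// Look up the OGC Symbology Encoding URI and the Java unit for metres.
String metreUri = org.geotools.styling.UomOgcMapping.METRE.getSEString();
javax.measure.unit.Unit<?> metreUnit = org.geotools.styling.UomOgcMapping.METRE.getUnit();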
27.18.2. Filter Examples
The example below illustrates creating a query, and thus an OGC Filter, that does a case-insensitive search for the phrase "mission" in the entire metacard’s text. Note that the OGC PropertyIsLike filter is used for this simple contextual query.
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl();
boolean isCaseSensitive = false;
String wildcardChar = "*"; // used to match zero or more characters
String singleChar = "?"; // used to match exactly one character
String escapeChar = "\\"; // used to escape the meaning of the wildcardChar, singleChar, and the escapeChar itself
String searchPhrase = "mission";
org.opengis.filter.Filter propertyIsLikeFilter =
    filterFactory.like(filterFactory.property(Metacard.ANY_TEXT), searchPhrase, wildcardChar, singleChar, escapeChar, isCaseSensitive);
DDF.catalog.operation.QueryImpl query = new QueryImpl( propertyIsLikeFilter );
The example below illustrates creating an absolute temporal query, meaning the query is searching for Metacards whose modified timestamp occurred during a specific time range. Note that this query uses the During OGC Filter for an absolute temporal query.
// start and end are java.util.Date objects defining the absolute time range
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl();
org.opengis.temporal.Instant startInstant = new org.geotools.temporal.object.DefaultInstant(new DefaultPosition(start));
org.opengis.temporal.Instant endInstant = new org.geotools.temporal.object.DefaultInstant(new DefaultPosition(end));
org.opengis.temporal.Period period = new org.geotools.temporal.object.DefaultPeriod(startInstant, endInstant);
String property = Metacard.MODIFIED; // modified date of a metacard
org.opengis.filter.Filter filter = filterFactory.during( filterFactory.property(property), filterFactory.literal(period) );
DDF.catalog.operation.QueryImpl query = new QueryImpl(filter);
27.18.2.1. Contextual Searches
Most contextual searches can be expressed using the PropertyIsLike filter. The special characters that have meaning in a PropertyIsLike filter are the wildcard, single wildcard, and escape characters (see Example Creating-Filters-1).
| Character | Description |
|---|---|
| Wildcard | Matches zero or more characters. |
| Single Wildcard | Matches exactly one character. |
| Escape | Escapes the meaning of the Wildcard, Single Wildcard, and the Escape character itself. |
Characters and words such as AND, &, and, OR, |, or, NOT, ~, not, {, and } are treated as literals in a PropertyIsLike filter. In order to create equivalent logical queries, a developer must instead use the Logical Operator filters (AND, OR, NOT). The Logical Operator filters can be combined together with PropertyIsLike filters to create a tree that represents the search phrase expression.
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl();
boolean isCaseSensitive = false;
String wildcardChar = "*"; // used to match zero or more characters
String singleChar = "?"; // used to match exactly one character
String escapeChar = "\\"; // used to escape the meaning of the wildcardChar, singleChar, and the escapeChar itself
Filter filter =
filterFactory.and(
filterFactory.like(filterFactory.property(Metacard.METADATA), "mission" ,
wildcardChar, singleChar, escapeChar, isCaseSensitive),
filterFactory.like(filterFactory.property(Metacard.METADATA), "planning" ,
wildcardChar, singleChar, escapeChar, isCaseSensitive)
);
DDF.catalog.operation.QueryImpl query = new QueryImpl( filter );
27.18.2.1.1. Tree View of Creating Filters
Filters used in DDF can always be represented in a tree diagram.
27.18.2.1.2. XML View of Creating Filters
Another way to view this type of Filter is through an XML model, which is shown below.
<Filter>
<And>
<PropertyIsLike wildCard="*" singleChar="?" escapeChar="\">
<PropertyName>metadata</PropertyName>
<Literal>mission</Literal>
</PropertyIsLike>
<PropertyIsLike wildCard="*" singleChar="?" escapeChar="\">
<PropertyName>metadata</PropertyName>
<Literal>planning</Literal>
</PropertyIsLike>
</And>
</Filter>
Using the Logical Operators and PropertyIsLike filters, a developer can create a whole language of search phrase expressions.
27.18.2.2. Fuzzy Operations
DDF only supports one custom function. The Filter specification does not include a fuzzy operator, so a Filter function was created to represent a fuzzy operation. The function and class is called FuzzyFunction, which is used by clients to notify the Sources to perform a fuzzy search. The syntax expected by providers is similar to the Fuzzy Function. Refer to the example below.
org.opengis.filter.FilterFactory filterFactory = new FilterFactoryImpl(); // filter factory, as in the previous examples
String searchPhrase = "mission"; // the phrase to match fuzzily
String wildcardChar = "*"; // used to match zero or more characters
String singleChar = "?"; // used to match exactly one character
String escapeChar = "\\"; // used to escape the meaning of the wildcardChar and singleChar
boolean isCaseSensitive = false;
Filter fuzzyFilter = filterFactory.like(
    new DDF.catalog.impl.filter.FuzzyFunction(
        Arrays.asList((Expression) (filterFactory.property(Metacard.ANY_TEXT))),
        filterFactory.literal("")),
    searchPhrase,
    wildcardChar,
    singleChar,
    escapeChar,
    isCaseSensitive);
QueryImpl query = new QueryImpl(fuzzyFilter);
27.18.3. Parsing Filters
According to the OGC Filter Specification 04-095, a "(filter expression) representation can be … parsed and then transformed into whatever target language is required to retrieve or modify object instances stored in some persistent object store." Filters can be thought of as the WHERE clause for a SQL SELECT statement to "fetch data stored in a SQL-based relational database."
Sources can parse OGC Filters using the FilterAdapter and FilterDelegate. See Developing a Filter Delegate for more details on implementing a new FilterDelegate. This is the preferred way to handle OGC Filters in a consistent manner.
Alternately, org.opengis.filter.Filter implementations can be parsed using implementations of the interface org.opengis.filter.FilterVisitor. The FilterVisitor uses the Visitor pattern. Essentially, FilterVisitor instances "visit" each part of the Filter tree, allowing developers to implement logic to handle the filter’s operations. GeoTools 8 includes implementations of the FilterVisitor interface. The DefaultFilterVisitor, as an example, provides only business logic to visit every node in the Filter tree; its methods are meant to be overridden with the correct business logic.
The simplest approach when using FilterVisitor instances is to build the appropriate query syntax for a target language as each part of the Filter is visited. For instance, when given an incoming Filter object to be evaluated against an RDBMS, a CatalogProvider instance could use a FilterVisitor to interpret each filter operation on the Filter object and translate those operations into SQL. The FilterVisitor may be needed to support Filter functionality not currently handled by the FilterAdapter and FilterDelegate reference implementation.
27.18.3.1. Interpreting a Filter to Create SQL
If the FilterAdapter encountered or "visited" a PropertyIsLike filter with its property assigned as title and its literal expression assigned as mission, the FilterDelegate could create the proper SQL syntax similar to title LIKE 'mission'.
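A minimal sketch of such a delegate is shown below. The class name and the SQL rendering are illustrative; propertyIsLike and and are FilterDelegate methods that the FilterAdapter calls as it walks the filter tree.
// Sketch: a FilterDelegate that renders filter operations as SQL WHERE-clause fragments.
public class SqlFilterDelegate extends DDF.catalog.filter.FilterDelegate<String> {
    @Override
    public String propertyIsLike(String propertyName, String pattern, boolean isCaseSensitive) {
        // e.g. title LIKE 'mission' (quoting and escaping omitted for brevity)
        return propertyName + " LIKE '" + pattern + "'";
    }
    @Override
    public String and(java.util.List<String> operands) {
        // Combine child fragments produced earlier in the depth-first traversal.
        return "(" + String.join(" AND ", operands) + ")";
    }
}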
27.18.3.2. Interpreting a Filter to Create XQuery
If the FilterAdapter encountered an OR filter, such as in Figure Parsing-Filters2, and the target language was XQuery, the FilterDelegate could yield an expression such as ft:query(//inventory:book/@subject,'math') union ft:query(//inventory:book/@subject,'science').
27.18.3.2.1. FilterAdapter/Delegate Process for Figure Parsing
-
The FilterAdapter visits the OR filter first.
-
The OR filter visits its children in a loop.
-
The first child in the loop that is encountered is the LHS PropertyIsLike.
-
The FilterAdapter will call the FilterDelegate propertyIsLike method with the LHS property and literal.
-
The LHS PropertyIsLike delegate method builds the XQuery syntax that makes sense for this particular underlying object store. In this case, the subject property is specific to this XML database, and the business logic maps the subject property to its index at //inventory:book/@subject. Note that ft:query in this instance is a custom XQuery module for this specific XML database that does full text searches.
-
The FilterAdapter then moves back to the OR filter, which visits its second child.
-
The FilterAdapter will call the FilterDelegate propertyIsLike method with the RHS property and literal.
-
The RHS PropertyIsLike delegate method builds the XQuery syntax in the same way as the LHS delegate method.
-
The FilterAdapter then moves back to its OR filter, which is now done with its children.
-
It then collects the output of each child and sends the list of results to the FilterDelegate or method (a sketch of that method follows this list).
-
The final result object will be returned from the FilterAdapter adapt method.
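Under the same assumptions, the delegate's or method could simply union the XQuery fragments produced for its children. This is a sketch, not the reference implementation.
@Override
public String or(java.util.List<String> operands) {
    // Each operand is the XQuery fragment built for one child filter.
    return String.join(" union ", operands);
}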
27.18.3.2.2. FilterVisitor Process for Figure Parsing
-
The FilterVisitor visits the OR filter first.
-
The OR filter visits its children in a loop.
-
The first child in the loop that is encountered is the LHS PropertyIsLike.
-
The LHS PropertyIsLike builds the XQuery syntax that makes sense for this particular underlying object store. In this case, the subject property is specific to this XML database, and the business logic maps the subject property to its index at //inventory:book/@subject. Note that ft:query in this instance is a custom XQuery module for this specific XML database that does full text searches.
-
The FilterVisitor then moves back to the OR filter, which visits its second child.
-
The RHS PropertyIsLike builds the XQuery syntax in the same way.
-
The FilterVisitor then moves back to its OR filter, which is now done with its children. It then collects the output of each child and could potentially execute the following code to produce the above expression.
public Object visit( Or filter, Object data ) {
    ...
    /* the equivalent statement for the OR filter in this domain (XQuery) */
    xQuery = childFilter1Output + " union " + childFilter2Output;
    ...
}
27.18.4. Filter Profile
The filter profile maps filters to metacard types.
27.18.4.1. Role of the OGC Filter
Both Queries and Subscriptions extend the OGC GeoAPI Filter interface.
The Filter Builder and Adapter do not fully implement the OGC Filter Specification.
The filter support profile contains suggested filter to metacard type mappings.
For example, even though a Source could support a PropertyIsGreaterThan filter on XML_TYPE, it would not likely be useful.
27.18.4.2. Catalog Filter Profile
The following table displays the common metacard attributes with their respective types for reference.
| Metacard Attribute | Metacard Type |
|---|---|
| ANY_DATE | DATE_TYPE |
| ANY_GEO | GEO_TYPE |
| ANY_TEXT | STRING_TYPE |
| CONTENT_TYPE | STRING_TYPE |
| CONTENT_TYPE_VERSION | STRING_TYPE |
| CREATED | DATE_TYPE |
| EFFECTIVE | DATE_TYPE |
| GEOGRAPHY | GEO_TYPE |
| ID | STRING_TYPE |
| METADATA | XML_TYPE |
| MODIFIED | DATE_TYPE |
| RESOURCE_SIZE | STRING_TYPE |
| RESOURCE_URI | STRING_TYPE |
| SOURCE_ID | STRING_TYPE |
| TARGET_NAMESPACE | STRING_TYPE |
| THUMBNAIL | BINARY_TYPE |
| TITLE | STRING_TYPE |
27.18.4.2.1. Comparison Operators
Comparison operators compare the value associated with a property name with a given Literal value.
Endpoints and sources should try to use metacard types other than the object type. The object type only supports backwards compatibility with java.net.URI. Endpoints that send other objects will not be supported by standard sources. The following table maps the metacard types to supported comparison operators.
| PropertyIs… | Between | EqualTo | GreaterThan | GreaterThanOrEqualTo | LessThan | LessThanOrEqualTo | Like | NotEqualTo | Null |
|---|---|---|---|---|---|---|---|---|---|
| BINARY_TYPE | | | | | | | | | X |
| BOOLEAN_TYPE | | X | | | | | | | |
| DATE_TYPE | X | X | X | X | X | X | | X | X |
| DOUBLE_TYPE | X | X | X | X | X | X | | X | X |
| FLOAT_TYPE | X | X | X | X | X | X | | X | X |
| GEO_TYPE | | | | | | | | | X |
| INTEGER_TYPE | X | X | X | X | X | X | | X | X |
| LONG_TYPE | X | X | X | X | X | X | | X | X |
| OBJECT_TYPE | X | X | X | X | X | X | | X | X |
| SHORT_TYPE | X | X | X | X | X | X | | X | X |
| STRING_TYPE | X | X | X | X | X | X | X | X | X |
| XML_TYPE | | X | | | | | X | | X |
| Operator | Description |
|---|---|
| PropertyIsBetween | Lower <= Property <= Upper |
| PropertyIsEqualTo | Property == Literal |
| PropertyIsGreaterThan | Property > Literal |
| PropertyIsGreaterThanOrEqualTo | Property >= Literal |
| PropertyIsLessThan | Property < Literal |
| PropertyIsLessThanOrEqualTo | Property <= Literal |
| PropertyIsLike | Property LIKE Literal (equivalent to SQL "like") |
| PropertyIsNotEqualTo | Property != Literal |
| PropertyIsNull | Property == null |
27.18.4.2.2. Logical Operators
Logical operators apply Boolean logic to one or more child filters.
| | And | Not | Or |
|---|---|---|---|
| Supported Filters | X | X | X |
27.18.4.2.3. Temporal Operators
Temporal operators compare a date associated with a property name to a given Literal date or date range.
| | After | AnyInteracts | Before | Begins | BegunBy | During | EndedBy | Meets | MetBy | OverlappedBy | TContains |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DATE_TYPE | X | | X | | | X | | | | | |
Literal values can be either date instants or date periods.
| Operator | Description |
|---|---|
| After | Property > (Literal || Literal.end) |
| Before | Property < (Literal || Literal.start) |
| During | Literal.start < Property < Literal.end |
27.18.4.2.4. Spatial Operators
Spatial operators compare a geometry associated with a property name to a given Literal geometry.
| | BBox | Beyond | Contains | Crosses | Disjoint | Equals | DWithin | Intersects | Overlaps | Touches | Within |
|---|---|---|---|---|---|---|---|---|---|---|---|
| GEO_TYPE | | X | X | X | X | | X | X | X | X | X |
Geometries are usually represented as Well-Known Text (WKT).
| Operator | Description |
|---|---|
| Beyond | Property geometry is beyond the given distance of the Literal geometry |
| Contains | Property geometry contains the Literal geometry |
| Crosses | Property geometry crosses the Literal geometry |
| Disjoint | Property geometry direct positions are not interior to the Literal geometry |
| DWithin | Property geometry lies within the given distance of the Literal geometry |
| Intersects | Property geometry intersects the Literal geometry; opposite of the Disjoint operator |
| Overlaps | Property geometry interior overlaps the Literal geometry interior somewhere |
| Touches | Property geometry touches but does not overlap the Literal geometry |
| Within | Property geometry is completely within the Literal geometry |
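For example, a DWithin filter restricting results to geometries within 1000 metres of a point could be built with the GeoTools filter factory. In this sketch the WKT literal and distance are illustrative, FilterFactory2 is assumed for the Expression-based overload, and passing the OGC units URI follows the Units of Measure discussion above.
org.opengis.filter.FilterFactory2 filterFactory = new FilterFactoryImpl();
// Match metacards whose geometry lies within 1000 metres of the given point.
org.opengis.filter.Filter spatialFilter = filterFactory.dwithin(
    filterFactory.property(Metacard.GEOGRAPHY),
    filterFactory.literal("POINT (10 20)"),
    1000.0,
    org.geotools.styling.UomOgcMapping.METRE.getSEString());
DDF.catalog.operation.QueryImpl query = new QueryImpl(spatialFilter);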
27.19. Developing Filter Delegates
Filter Delegates help reduce the complexity of parsing OGC Filters. The reference Filter Adapter implementation contains the necessary boilerplate visitor code and input normalization to handle commonly supported OGC Filters.
27.19.1. Creating a New Filter Delegate
A Filter Delegate contains the logic that converts normalized filter input into a form that the target data source can handle. Delegate methods will be called in depth-first order as the Filter Adapter visits filter nodes.
27.19.1.1. Implementing the Filter Delegate
-
Create a Java class extending FilterDelegate:
public class ExampleDelegate extends DDF.catalog.filter.FilterDelegate<ExampleReturnObjectType> {
-
FilterDelegate will throw an appropriate exception for all methods not implemented. Refer to the DDF JavaDoc for more details about what is expected of each FilterDelegate method.
Note: A code example of a Filter Delegate can be found in the DDF source code.
27.19.1.2. Throwing Exceptions
Filter delegate methods can throw UnsupportedOperationException run-time exceptions. The GeotoolsFilterAdapterImpl will catch and re-throw these exceptions as UnsupportedQueryExceptions.
27.19.1.3. Using the Filter Adapter
The FilterAdapter can be requested from the OSGi registry.
<reference id="filterAdapter" interface="ddf.catalog.filter.FilterAdapter" />
The Query in a QueryRequest implements the Filter interface.
The Query can be passed to a FilterAdapter
and FilterDelegate
to process the Filter.
@Override
public ddf.catalog.operation.QueryResponse query(ddf.catalog.operation.QueryRequest queryRequest)
    throws ddf.catalog.source.UnsupportedQueryException {
    ddf.catalog.operation.Query query = queryRequest.getQuery();
    ddf.catalog.filter.FilterDelegate<ExampleReturnObjectType> delegate = new ExampleDelegate();
    // ddf.catalog.filter.FilterAdapter adapter injected via Blueprint
    ExampleReturnObjectType result = adapter.adapt(query, delegate);
    // Use the adapted result to query the target data source and build the QueryResponse here.
}
Import the Catalog API Filter package and the reference implementation package of the Filter Adapter in the bundle manifest (in addition to any other required packages).
Import-Package: ddf.catalog, ddf.catalog.filter, ddf.catalog.source
27.19.1.4. Filter Support
Not all OGC Filters are exposed at this time. If further OGC Filter functionality is needed, it can be added to the Filter Adapter and Delegate so sources can support more complex filters. The following OGC Filter types are currently available:
Logical |
---|
And |
Or |
Not |
Include |
Exclude |
Property Comparison |
---|
PropertyIsBetween |
PropertyIsEqualTo |
PropertyIsGreaterThan |
PropertyIsGreaterThanOrEqualTo |
PropertyIsLessThan |
PropertyIsLessThanOrEqualTo |
PropertyIsLike |
PropertyIsNotEqualTo |
PropertyIsNull |
Spatial | Definition |
---|---|
Beyond | True if the geometry being tested is beyond the stated distance of the geometry provided. |
Contains | True if the second geometry is wholly inside the first geometry. |
Crosses | True if: the intersection of the two geometries results in a value whose dimension is less than that of the geometries; the maximum dimension of the intersection value includes points interior to both geometries; and the intersection value is not equal to either of the geometries. |
Disjoint | True if the two geometries do not touch or intersect. |
DWithin | True if the geometry being tested is within the stated distance of the geometry provided. |
Intersects | True if the two geometries intersect; this is a convenience method, equivalent to the negation of Disjoint. |
Overlaps | True if the intersection of the geometries results in a value of the same dimension as the geometries that is different from both of the geometries. |
Touches | True if and only if the only common points of the two geometries are in the union of the boundaries of the geometries. |
Within | True if the first geometry is wholly inside the second geometry. |
Temporal |
---|
After |
Before |
During |
27.20. Developing Action Components
To provide a service, such as a link to a metacard, the ActionProvider
interface should be implemented.
An ActionProvider
essentially provides a List of Actions
when given input that it can recognize and handle.
For instance, if a REST endpoint ActionProvider was given a metacard, it could provide a link based on the metacard’s ID.
An Action Provider performs an action when given a subject that it understands.
If it does not understand the subject or does not know how to handle the given input, it will return Collections.emptyList()
.
An Action Provider is required to have an ActionProvider id.
The Action Provider must register itself in the OSGi Service Registry with the ddf.action.ActionProvider
interface and must also have a service property value for id
.
An action is a URL that, when invoked, provides a resource or executes intended business logic.
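As a sketch of the registration requirement described above, a bundle's blueprint.xml might publish an Action Provider with the required id service property as follows; the bean class is hypothetical, and the id value is taken from the sample taxonomy later in this section.
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">
    <!-- Hypothetical provider class; it must implement ddf.action.ActionProvider. -->
    <bean id="exampleActionProvider" class="com.example.action.ExampleActionProvider"/>
    <!-- Register under the ActionProvider interface with the required "id" property. -->
    <service ref="exampleActionProvider" interface="ddf.action.ActionProvider">
        <service-properties>
            <entry key="id" value="catalog.data.metacard.view"/>
        </service-properties>
    </service>
</blueprint>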
27.20.1. Action Component Naming Convention
For each Action, a title and description should be provided to describe what the action does. The recommended naming convention is to use the verb 'Get' when retrieving a portion of a metacard, such as the metadata or thumbnail, or when downloading a product. The verb 'Export' or the expression 'Export as' is recommended when the metacard is being exported in a different format or presented after undergoing some transformation.
27.20.1.1. Action Component Taxonomy
An Action Provider registers an id
as a service property in the OSGi Service Registry based on the type of service or action that is provided.
Regardless of implementation, if more than one Action Provider provides the same service, such as providing a URL to a thumbnail for a given metacard, they must both register under the same id
.
Therefore, Action Provider implementers must follow an Action Taxonomy.
The following is a sample taxonomy:
-
catalog.data.metacard shall be the grouping that represents Actions on a Catalog metacard.
-
catalog.data.metacard.view
-
catalog.data.metacard.thumbnail
-
catalog.data.metacard.html
-
catalog.data.metacard.resource
-
catalog.data.metacard.metadata
ID | Required Action | Naming Convention |
---|---|---|
catalog.data.metacard.view | Provides a valid URL to view a metacard. Format of data is not specified; i.e., the representation can be in XML, JSON, or other. | Export as … |
catalog.data.metacard.thumbnail | Provides a valid URL to the bytes of a thumbnail (…). | Export as Thumbnail |
 | Provides a metacard URL that translates the metacard into a geographically aligned image (suitable for overlaying on a map). | Export as Thumbnail Overlay |
catalog.data.metacard.html | Provides a valid URL that, when invoked, provides an HTML representation of the metacard. | Export as HTML |
 | Provides a valid URL that, when invoked, provides an XML representation of the metacard. | Export as XML |
 | Provides a valid URL that, when invoked, provides a GeoJSON representation of the metacard. | Export as GeoJSON |
catalog.data.metacard.resource | Provides a valid URL that, when invoked, provides the underlying resource of the metacard. | Export as Resource |
catalog.data.metacard.metadata | Provides a valid URL to the XML metadata in the metacard (…). | Export as Metadata |
27.21. Developing Query Options
The easiest way to create a Query is to use the ddf.catalog.operation.QueryImpl
object.
It is first necessary to create an OGC Filter object then set the Query Options after QueryImpl
has been constructed.
QueryImpl
Example
/*
Builds a query that requests a total results count and
that the first record to be returned is the second record found from
the requested set of metacards.
*/
String property = ...;
String value = ...;
org.geotools.filter.FilterFactoryImpl filterFactory = new FilterFactoryImpl();
QueryImpl query = new QueryImpl(filterFactory.equals(filterFactory.property(property),
        filterFactory.literal(value)));
query.setStartIndex(2);
query.setRequestsTotalResultsCount(true);
27.21.1. Evaluating a query
Every Source must be able to evaluate a Query object. Nevertheless, each Source could evaluate the Query differently, depending on the properties and query capabilities that the Source supports. For instance, a common property all Sources understand is id, but a Source could possibly store frequency values under the property name "frequency." Some Sources may not support frequency property inquiries and will throw an error stating that they cannot interpret the property. In addition, some Sources might be able to handle spatial operations, while others might not. A developer should consult a Source's documentation for the limitations, capabilities, and properties that a Source can support.
27.21.2. Commons-DDF Utilities
The `commons-DDF` bundle provides utilities and functionality commonly used across other DDF components, such as the endpoints and providers.
27.21.2.1. FuzzyFunction
ddf.catalog.impl.filter.FuzzyFunction
class is used to indicate that a PropertyIsLike
filter should interpret the search as a fuzzy query.
27.21.2.2. XPathHelper
ddf.util.XPathHelper provides convenience methods for executing XPath operations on XML. It also provides convenience methods for converting XML between a String and an org.w3c.dom.Document object.
27.22. Configuring Managed Service Factory Bundles
27.22.1. Configuring Managed Service Factory Bundles
Services that are created using a Managed Service Factory can be configured using .config
files as well.
These configuration files, however, follow a different naming convention than .cfg
files.
The filenames must start with the Managed Service Factory PID, be followed by a dash and a unique identifier, and have a .config
extension.
For instance, assuming that the Managed Service Factory PID is org.codice.ddf.factory.pid
and two instances of the service need to be configured, files org.codice.ddf.factory.pid-<UNIQUE ID 1>.config
and org.codice.ddf.factory.pid-<UNIQUE ID 2>.config
should be created and added to <DDF_HOME>/etc
.
The unique identifiers used in the file names have no impact on the order in which the configuration files are processed. No specific processing order should be assumed. Also, a new service will be created and configured every time a configuration file matching the Managed Service Factory PID is added to the directory, regardless of the unique id used.
Any service.factoryPid
and service.pid
values in these .config
files will be overridden by the values parsed from the file name, so .config
files should not contain these properties.
27.22.1.1. File Format
The basic syntax of the .config
configuration files is similar to the older .cfg
files but introduces support for lists and types other than simple strings.
The type associated with a property must match the type attribute used in the corresponding metatype.xml
file when applicable.
The following table shows the format to use for each property type supported.
Type | Format (see details below for variations) | Example |
---|---|---|
String | name="value" | |
Boolean | name=B"true|false" | |
Integer | name=I"value" | |
Long | name=L"value" | |
Float | name=F"value" | |
Double | name=D"value" | |
List of Strings | name=["value1","value2",…] | |
List of Booleans | name=B["true|false","true|false",…] | |
List of Integers | name=I["value1","value2",…] | |
List of Longs | name=L["value1","value2",…] | |
List of Floats | name=F["value1","value2",…] | |
List of Doubles | name=D["value1","value2",…] | |
The following example shows list properties as they would appear in a .config file:
authenticationTypes=[ \
"/\=SAML|GUEST", \
"/admin\=SAML|basic", \
"/system\=basic", \
"/sources\=SAML|basic", \
"/security-config\=SAML|basic", \
"/search\=basic", \
]
realms=[ \
"/\=karaf", \
]
requiredAttributes=[ \
"/\=", \
"/admin\={http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role\=admin}", \
"/system\={http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role\=admin}", \
"/security-config\={http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role\=admin}", \
]
whiteListContexts=[ \
"/services/SecurityTokenService", \
"/services/internal/metrics", \
"/services/saml", \
"/proxy", \
"/services/csw", \
]
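As a further, hypothetical illustration of the typed formats in the table above, a file named org.codice.ddf.factory.pid-example1.config (the factory PID and all property names and values below are invented for the example) could combine several of the supported types:
name="Example Service"
enabled=B"true"
timeoutSeconds=I"30"
maxFileSizeBytes=L"1048576"
scaleFactor=D"1.5"
hosts=["host1.example.com","host2.example.com"]
retryDelays=I["1","5","30"]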
27.23. Developing XACML Policies
This document assumes familiarity with the XACML schema and does not go into detail on the XACML language. When creating a policy, a target is used to indicate that a certain action should be run only for one type of request. Targets can be used on both the main policy element and any individual rules. Targets are geared toward the actions that are set in the request. These actions generally consist of the standard CRUD operations (create, read, update, delete) or a SOAPAction if the request is coming through a SOAP endpoint.
Note
|
These are only the action values that are currently created by the components that come with DDF. Additional components can be created and added to DDF to identify specific actions. |
In the examples below, the policy has specified targets for the above type of calls.
For the Filtering code, the target was set for "filter", and the Service validation code targets were geared toward two services: query
and LocalSiteName
.
In a production environment, these actions for service authorization will generally be full URNs that are described within the SOAP WSDL.
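For illustration, a policy-level target matching the "filter" action described above might look like the following XACML 3.0 fragment. This is a hand-written sketch, not taken from a shipped DDF policy.
<Target>
    <AnyOf>
        <AllOf>
            <!-- Match requests whose action-id attribute equals "filter". -->
            <Match MatchId="urn:oasis:names:tc:xacml:1.0:function:string-equal">
                <AttributeValue DataType="http://www.w3.org/2001/XMLSchema#string">filter</AttributeValue>
                <AttributeDesignator Category="urn:oasis:names:tc:xacml:3.0:attribute-category:action"
                                     AttributeId="urn:oasis:names:tc:xacml:1.0:action:action-id"
                                     DataType="http://www.w3.org/2001/XMLSchema#string"
                                     MustBePresent="false"/>
            </Match>
        </AllOf>
    </AnyOf>
</Target>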
27.23.1. XACML Policy Attributes
Attributes for the XACML request are populated with the information in the calling subject and the resource being checked.
27.23.2. XACML Policy Subject
The attributes for the subject are obtained from the SAML claims and populated within the XACML policy as individual attributes under the urn:oasis:names:tc:xacml:1.0:subject-category:access-subject
category.
The name of the claim is used for the AttributeId
value.
Examples of the items being populated are available at the end of this page.
27.23.3. XACML Policy Resource
The attributes for resources are obtained through the permissions process. When checking permissions, the XACML processing engine retrieves a list of permissions that should be checked against the subject. These permissions are populated outside of the engine and should be populated with the attributes that should be asserted against the subject. When the permissions are of a key-value type, the key being used is populated as the AttributeId value under the urn:oasis:names:tc:xacml:3.0:attribute-category:resource category.
27.23.4. Using a XACML Policy
To use a XACML policy, copy the XACML policy into the <DDF_HOME>/etc/pdp/policies
directory.
27.24. Assuring Authenticity of Bundles and Applications
DDF Artifacts in the JAR file format (such as bundles or KAR files) can be signed and verified using the tools included as part of the Java Runtime Environment.
27.24.1. Prerequisites
To work with Java signatures, a keystore/truststore is required. For testing or trial purposes, DDF can sign and validate using a self-signed certificate generated with the keytool utility. In an actual installation, a certificate issued by a trusted Certificate Authority will be used.
Additional documentation on keytool can be found at Keytool home .
~ $ keytool -genkey -keyalg RSA -alias selfsigned -keystore keystore.jks -storepass password -validity 360 -keysize 2048
What is your first and last name?
[Unknown]: Nick Fury
What is the name of your organizational unit?
[Unknown]: Marvel
What is the name of your organization?
[Unknown]: SHIELD
What is the name of your City or Locality?
[Unknown]: New York
What is the name of your State or Province?
[Unknown]: NY
What is the two-letter country code for this unit?
[Unknown]: US
Is CN=Nick Fury, OU=Marvel, O=SHIELD, L="New York", ST=NY, C=US correct?
[no]: yes
Enter key password for <selfsigned>
(RETURN if same as keystore password):
Re-enter new password:
27.24.2. Signing a JAR/KAR
Once a keystore is available, the JAR can be signed using the jarsigner
tool.
Additional documentation on jarsigner can be found at Jarsigner .
~ $ jarsigner -keystore keystore.jks -keypass shield -storepass password catalog-app-2.5.1.kar selfsigned
27.24.2.1. Verifying a JAR/KAR
The jarsigner utility is also used to verify a signature in a JAR-formatted file.
~ $ jarsigner -verify -verbose -keystore keystore.jks catalog-app-2.5.1.kar
9447 Mon Oct 06 17:05:46 MST 2014 META-INF/MANIFEST.MF
9503 Mon Oct 06 17:05:46 MST 2014 META-INF/SELFSIGN.SF
[... section abbreviated for space]
smk 6768 Wed Sep 17 17:13:58 MST 2014 repository/ddf/catalog/security/catalog-security-logging/2.5.1/catalog-security-logging-2.5.1.jar
s = signature was verified
m = entry is listed in manifest
k = at least one certificate was found in keystore
i = at least one certificate was found in identity scope
jar verified.
Note the last line: jar verified. This indicates that the signatures used to sign the JAR (or in this case, KAR) were valid according to the trust relationships specified by the keystore.
27.25. WFS Services
The Web Feature Service (WFS) is an Open Geospatial Consortium (OGC) Specification. DDF supports the ability to integrate WFS 1.0, 1.1, and 2.0 Web Services.
Note
|
DDF does not include a supported WFS Web Service (Endpoint) implementation. Therefore, federation between two DDF instances is not possible via WFS. |
When a query is issued to a WFS server, the output of the query is an XML document that contains a collection of feature member elements.
Each WFS server can have one or more feature types with each type being defined by a schema that extends the WFS featureMember
schema.
The schema for each type can be discovered by issuing a DescribeFeatureType
request to the WFS server for the feature type in question.
The WFS source handles WFS capability discovery and requests for feature type description when an instance of the WFS source is configured and created.
See the WFS v1.0.0 Source, WFS v1.1.0 Source, or WFS v2.0.0 Source for more information about how to configure a WFS source.
In order to expose WFS features to DDF clients, the WFS feature must be converted into the common data format of the DDF, a metacard.
The OGC package contains a GenericFeatureConverter
that attempts to populate mandatory metacard fields with properties from the WFS feature XML.
All properties will be mapped directly to new attributes in the metacard.
However, the GenericFeatureConverter
may not be able to populate the default metacard fields with properties from the feature XML.
To more accurately map WFS feature properties to fields in the metacard, a custom converter can be created.
The OGC package contains an interface, FeatureConverter, which extends the Converter interface (http://xstream.codehaus.org/javadoc/com/thoughtworks/xstream/converters/Converter.html) provided by the XStream project.
XStream is an open source API for serializing XML into Java objects and vice-versa.
Additionally, a base class, AbstractFeatureConverter
, has been created to handle the mapping of many fields to reduce code duplication in the custom converter classes.
-
Create the CustomConverter class extending the ogc.catalog.common.converter.AbstractFeatureConverter class.
public class CustomConverter extends ogc.catalog.common.converter.AbstractFeatureConverter
-
Implement the FeatureConverterFactory interface and the createConverter() method for the CustomConverter.
public class CustomConverterFactory implements FeatureConverterFactory {
    private final String featureType;

    public CustomConverterFactory(String featureType) {
        this.featureType = featureType;
    }

    public FeatureConverter createConverter() {
        return new CustomConverter();
    }

    public String getFeatureType() {
        return featureType;
    }
}
-
Implement the unmarshal method required by the FeatureConverter interface. The createMetacardFromFeature(reader, metacardType) method implemented in the AbstractFeatureConverter is recommended.
public Metacard unmarshal(HierarchicalStreamReader reader, UnmarshallingContext ctx) {
    MetacardImpl mc = createMetacardFromFeature(reader, metacardType);
    // Set your feature-specific fields on the metacard object here.
    //
    // For example, to map a property called "beginningDate" to the
    // Metacard.createdDate field:
    mc.setCreatedDate((Date) mc.getAttribute("beginningDate").getValue());
    return mc;
}
-
Export the ConverterFactory to the OSGi registry by creating a blueprint.xml file for its bundle. The bean id and argument value must match the WFS Feature type being converted.
<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0">
    <bean id="custom_type" class="com.example.converter.factory.CustomConverterFactory">
        <argument value="custom_type"/>
    </bean>
    <service ref="custom_type" interface="ogc.catalog.common.converter.factory.FeatureConverterFactory"/>
</blueprint>
27.26. JSON Definition Files
DDF supports adding new attribute types, metacard types, validators, and more using JSON-formatted definition files.
Attribute types, metacard types, validators, and other supported types may all be defined in a JSON definition file.
27.26.1. Definition File Format
A definition file follows the JSON format as specified in ECMA-404 . All definition files must be valid JSON in order to be parsed.
A single definition file may define as many of the types as needed. This means that types can be defined across multiple files for grouping or clarity.
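As a loose illustration of the general shape of such a file, the hypothetical example below defines one attribute type and one metacard type. The key names and value syntax shown here are assumptions and should be checked against the detailed documentation for each definable type.
{
  "attributeTypes": {
    "temperature": {
      "type": "DOUBLE_TYPE",
      "stored": true,
      "indexed": true,
      "tokenized": false,
      "multivalued": false
    }
  },
  "metacardTypes": [
    {
      "type": "weather-report",
      "attributes": {
        "temperature": { "required": false }
      }
    }
  ]
}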
27.26.2. Deploying Definition Files
The file must have a .json
extension in order to be picked up by the deployer.
Once the definition file is ready to be deployed, put the definition file <filename>.json
into the etc/definitions
folder.
Definition files can be added, updated, and/or deleted in the etc/definitions
folder.
The changes are applied dynamically and no restart is required.
If a definition file is removed from the etc/definitions
folder, the changes that were applied by that file will be undone.
27.27. Developing Subscriptions
Subscriptions represent "standing queries" in the Catalog. Like a query, subscriptions are based on the OGC Filter specification.
27.27.1. Subscription Lifecycle
A Subscription itself is a series of events during which various plugins or transformers can be called to process the subscription.
27.27.1.1. Creation
-
Subscriptions are created directly with the Event Processor or declaratively through use of the Whiteboard Design Pattern.
-
The Event Processor will invoke each Pre-Subscription Plugin and, if the subscription is not rejected, the subscription will be activated.
27.27.1.2. Evaluation
-
When a metacard matching the subscription is created, updated, or deleted in any Source, each Pre-Delivery Plugin will be invoked.
-
If the delivery is not rejected, the associated Delivery Method callback will be invoked.
27.27.1.3. Update Evaluation
Notably, the Catalog allows event evaluation on both the previous value (if available) and new value of a Metacard when an update occurs.
27.27.1.4. Durability
Subscription durability is not provided by the Event Processor. Thus, all subscriptions are transient and will not be recreated in the event of a system restart. It is the responsibility of Endpoints using subscriptions to persist and re-establish the subscription on startup. This decision was made for the sake of simplicity, flexibility, and the inability of the Event Processor to recreate a fully-configured Delivery Method without being overly restrictive.
Important
|
Subscriptions are not persisted by the Catalog itself. |
27.27.2. Creating a Subscription
Currently, the Catalog reference implementation does not contain a subscription endpoint. Therefore, an endpoint that exposes a web service interface to create, update, and delete subscriptions would provide a client's subscription filtering criteria to be used by the Catalog's Event Processor to determine which events are of interest to the client.
The endpoint client also provides the callback URL of the event consumer to be called when an event matching the subscription's criteria is found. This callback to the event consumer is made by a Delivery Method implementation that the client provides when the subscription is created. Whenever an event occurs in the Catalog matching the subscription, the Delivery Method implementation will be called by the Event Processor. The Delivery Method will, in turn, send the event notification out to the event consumer.
As part of the subscription creation process, the Catalog verifies that the event consumer at the specified callback URL is available to receive callbacks. Therefore, the client must ensure the event consumer is running prior to creating the subscription. The Catalog completes the subscription creation by executing any pre-subscription Catalog Plugins and then registering the subscription with the OSGi Service Registry. The Catalog does not persist subscriptions by default.
27.27.2.1. Event Processing and Notification
If an event matches a subscription’s criteria, any pre-delivery plugins that are installed are invoked, the subscription’s DeliveryMethod
is retrieved, and its operation corresponding to the type of ingest event is invoked.
For example, the DeliveryMethod
created()
function is called when a metacard is created.
The DeliveryMethod
operations subsequently invoke the corresponding operation in the client’s event consumer service, which is specified by the callback URL provided when the DeliveryMethod
was created.
An internal subscription tracker monitors the OSGi registry, looking for subscriptions to be added (or deleted).
When it detects a subscription being added, it informs the Event Processor, which sets up the subscription’s filtering and is responsible for posting event notifications to the subscriber when events satisfying their criteria are met.
The Standard Event Processor is an implementation of the Event Processor and provides the ability to create/delete subscriptions. Events are generated by the CatalogFramework as metacards are created/updated/deleted and the Standard Event Processor is called since it is also a Post-Ingest Plugin. The Standard Event Processor checks each event against each subscription’s criteria.
When an event matches a subscription’s criteria the Standard Event Processor:
-
invokes each pre-delivery plugin on the metacard in the event.
-
invokes the
DeliveryMethod
operation corresponding to the type of event being processed, e.g.,created()
operation for the creation of a metacard.
27.27.2.1.1. Using DDF Implementation
If applicable, the implementation of Subscription
that comes with DDF should be used.
It is available at ddf.catalog.event.impl.SubscriptionImpl
and offers a constructor that takes in all of the necessary objects.
Specifically, all that is needed is a Filter
, DeliveryMethod
, Set<String>
of source IDs, and a boolean
for enterprise.
The following is an example code stub showing how to create a new instance of Subscription using the DDF implementation.
// Create a new filter using an imported FilterBuilder
Filter filter = filterBuilder.attribute(Metacard.ANY_TEXT).like().text("*");

// Create an implementation of DeliveryMethod
DeliveryMethod deliveryMethod = new MyCustomDeliveryMethod();

// Create a set of source ids
// This set is empty as the subscription is not specific to any sources
Set<String> sourceIds = new HashSet<String>();

// Set the isEnterprise boolean value
// This subscription example should receive notifications from all sources (not just local)
boolean isEnterprise = true;

Subscription subscription = new SubscriptionImpl(filter, deliveryMethod, sourceIds, isEnterprise);
27.27.2.2. Delivery Method
A Delivery Method provides the operation (created, updated, deleted) for how an event’s metacard can be delivered.
A Delivery Method is associated with a subscription and contains the callback URL of the event consumer to be notified of events. The Delivery Method encapsulates the operations to be invoked by the Event Processor when an event matches the criteria for the subscription. The Delivery Method’s operations are responsible for invoking the corresponding operations on the event consumer associated with the callback URL.
27.28. Contributing to Documentation
DDF documentation is included in the source code, so it is edited and maintained in much the same way.
src/main/resources | Contents |
---|---|
content | Asciidoctor-formatted files containing documentation contents and the header information needed to organize them. |
images | Screenshots, icons, and other image files used in documentation. |
templates | Template files used to compile the documentation for display. |
 | Properties file defining content types and other parameters. |
27.28.1. Editing Existing Documentation
Update existing content when code behavior changes, new capabilities are added to features, or the configuration process changes.
Content is organized within the content
directory in sub directories according to the audience and purpose for each document in the documentation library.
Use this list to determine placement of new content.
- Introduction/Core Concepts
-
This section is intended to be a high-level, executive summary of the features and capabilities of DDF. Content here should be written at a non-technical level.
- Quick Start
-
This section is intended for getting set up with a test, demonstration, or trial instance of DDF. This is the place for non-production shortcuts or workarounds that would not be used in a secured, hardened installation.
- Managing
-
The managing section covers "how-to" instructions to be used to install, configure, and maintain an instance of DDF in a production environment. This content should be aimed at system administrators. Security hardening should be integrated into these sections.
- Using
-
This section is primarily aimed at the final end users who will be performing tasks with DDF. This content should guide users through common tasks and user interfaces.
- Integrating
-
This section guides developers building other projects looking to connect to new or existing instances of DDF.
- Developing
-
This section provides guidance and best practices on developing custom implementations of DDF components, especially ones that may be contributed into the code baseline.
- Architecture
-
This section is a detailed description of the architectural design of DDF and how components work together.
- Reference
-
This section is a comprehensive list of features and possible configurations.
- Metadata Reference
-
This section details how metadata is extracted and normalized by DDF.
- Documentation
-
This is a collection of all of the individual documentation pages in one html or pdf file.
See the style guide for more guidance on stylistic and formatting concerns.
27.28.2. Adding New Documentation Content
If creating a new section is required, there are some minimal requirements for a new .adoc
file.
The templates scan the header information to place it into the correct place within the documentation. Different sections have different headers required, but some common attributes are always required; an example header follows the list below.
-
type
: roughly maps to the section or subSection of the documentation. -
title
: title of the section or subsection contained in the file. -
status
: set topublished
to include within the documentation, set todraft
to hide a work-in-progress section. -
order
: used in sections where order needs to be enforced. -
summary
: brief summary of section contents. Some, but not all, summaries are included by templates.
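A hypothetical header block using these attributes might look like the following; the values are illustrative only, and valid type values depend on the templates in use.
:title: Developing Example Components
:type: developingSection
:status: published
:order: 03
:summary: How to develop a custom example component.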
27.28.3. Creating a New Documentation Template
To create a new, standalone documentation page, create a new template in the templates
directory.
Optionally, this template can include
some of the internal templates in the templates/build
directory, but this is not required.
For guidance on using the freemarker syntax, see the Freemarker documentation .
27.28.4. Extending Documentation in Downstream Distributions
By mimicking the build and directory structure of the documentation, downstream projects are able to leverage the existing documentation and insert content before and after sections of the DDF documentation.
-docs
 -src
  -main
   -resources
    -content
    -images
    -templates
content
-
Contains the .adoc files that make up the content. Sub-directories are organized according to the documents that make up the main library.
images
-
Any pre-existing images, such as screenshots, to be included in the documentation.
templates
-
Template files used to create documentation artifacts. A build sub-directory holds internal templates that render specific sections rather than standalone documents.
28. Development Guidelines
28.1. Contributing
The Distributed Data Framework is free and open-source software offered under the GNU Lesser General Public License. The DDF is managed under the guidance of the Codice Foundation . Contributions are welcomed and encouraged. Please visit the Codice DDF Contributor Guidelines and the DDF source code repository for more information.
28.2. OSGi Basics
DDF runs on top of an OSGi framework, a Java virtual machine (JVM), several choices of operating systems, and the physical hardware infrastructure.
DDF is a customized and branded distribution of Apache Karaf . DDF could also be considered to be a more lightweight OSGi distribution, as compared to Apache ServiceMix, FUSE ESB, or Talend ESB, all of which are also built upon Apache Karaf. Similar to its peers, DDF incorporates additional upstream dependencies.
The DDF framework hosts DDF applications, which are extensible by adding components via OSGi. The best example of this is the DDF Catalog (API), which offers extensibility via several types of Catalog Components. The DDF Catalog API serves as the foundation for several applications and resides in the applications tier.
The Catalog Components consist of Endpoints, Plugins, Catalog Frameworks, Sources, and Catalog Providers. Customized components can be added to DDF.
- Capability
-
A general term used to refer to an ability of the system.
- Component
-
Represents a portion of an Application that can be extended.
- Bundle
-
Java Archives (JARs) with special OSGi manifest entries.
- Feature
-
One or more bundles that form an installable unit; defined by Apache Karaf but portable to other OSGi containers.
- Application
-
A JSON file defining a collection of bundles with configurations to be displayed in the Admin Console.
28.2.1. Packaging Capabilities as Bundles
Services and code are physically deployed to DDF using bundles. Bundles are Java JAR files that have additional metadata in the MANIFEST.MF that is relevant to an OSGi container.
The best resource for learning about the structure and headers in the manifest definition is section 3.6 of the OSGi Core Specification . The bundles within DDF are created using the maven bundle plug-in , which uses the BND tool .
Tip
|
Alternative Bundle Creation Methods
Using Maven is not necessary to create bundles. Many alternative tools exist, and OSGi manifest files can also be created by hand, although hand-editing should be avoided by most developers. |
28.2.1.1. Creating a Bundle
28.2.1.1.1. Bundle Development Recommendations
- Avoid creating bundles by hand or editing a manifest file
-
Many tools exist for creating bundles, notably the Maven Bundle plugin, which handles the details of OSGi configuration and automates the bundling process, including generation of the manifest file.
- Always make a distinction on which imported packages are
optional
orrequired
-
Requiring every package when not necessary can cause an unnecessary dependency ripple effect among bundles.
- Embedding is an implementation detail
-
Using the
Embed-Dependency
instruction provided by themaven-bundle-plugin
will insert the specified jar(s) into the target archive and add them to theBundle-ClassPath
. These jars and their contained packages/classes are not for public consumption; they are for the internal implementation of this service implementation only. - Bundles should never be embedded
-
Bundles expose service implementations; they do not provide arbitrary classes to be used by other bundles.
- Bundles should expose service implementations
-
This is the corollary to the previous rule. Bundles should not be created when arbitrary concrete classes are being extracted to a library. In that case, a library/jar is the appropriate module packaging type.
- Bundles should generally only export service packages
-
If there are packages internal to a bundle that comprise its implementation but not its public manifestation of the API, they should be excluded from export and kept as private packages.
- Concrete objects that are not loaded by the root classloader should not be passed in or out of a bundle
-
This is a general rule with some exceptions (JAXB generated classes being the most prominent example). Where complex objects need to be passed in or out of a service method, an interface should be defined in the API bundle.
Bundles separate contract from implementation and allow for modularized development and deployment of functionality. For that to be effective, they must be defined and used correctly so inadvertent coupling does not occur. Good bundle definition and usage leads to a more flexible environment.
28.2.1.1.2. Maven Bundle Plugin
Below is a code snippet from a Maven pom.xml
for creating an OSGi Bundle using the Maven Bundle plugin.
pom.xml
...
<packaging>bundle</packaging>
...
<build>
...
<plugin>
<groupId>org.apache.felix</groupId>
<artifactId>maven-bundle-plugin</artifactId>
<configuration>
<instructions>
<Bundle-Name>${project.name}</Bundle-Name>
<Export-Package />
<Bundle-SymbolicName>${project.groupId}.${project.artifactId}</Bundle-SymbolicName>
<Import-Package>
ddf.catalog,
ddf.catalog.*
</Import-Package>
</instructions>
</configuration>
</plugin>
...
</build>
...
28.2.1.2. Third Party and Utility Bundles
It is recommended to avoid building directly on included third party and utility bundles. These components do provide utility and reuse potential; however, they may be upgraded or even replaced at any time as bug fixes and new capabilities dictate. For example, web services may be built using CXF. However, the distributions frequently upgrade CXF between releases to take advantage of new features. If building on these components, be aware of the version upgrades with each distribution release.
Instead, component developers should package and deliver their own dependencies to ensure future compatibility. For example, if re-using a bundle, the specific bundle version that you are depending on should be included in your packaged release, and the proper versions should be referenced in your bundle(s).
28.2.1.3. Deploying a Bundle
A bundle is typically installed in one of two ways:
-
Installed as a feature
-
Hot deployed in the
/deploy
directory
The fastest way to deploy a created bundle during development is to copy it to the /deploy
directory of a running DDF.
This directory checks for new bundles and deploys them immediately.
According to Karaf documentation, "Karaf supports hot deployment of OSGi bundles by monitoring JAR files inside the [home]/deploy
directory.
Each time a JAR is copied in this folder, it will be installed inside the runtime.
It can be updated or deleted and changes will be handled automatically.
In addition, Karaf also supports exploded bundles and custom deployers (Blueprint and Spring DM are included by default)."
Once deployed, the bundle should come up in the Active state, if all of the dependencies were properly met.
When this occurs, the service is available to be used.
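For example, assuming a freshly built bundle at target/example-bundle-1.0.0.jar (an illustrative path and name), hot deployment is a single copy:
cp target/example-bundle-1.0.0.jar <DDF_HOME>/deploy/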
28.2.1.4. Verifying Bundle State
To verify if a bundle is deployed and running, go to the running command console and view the status.
-
Execute the
list
command. -
If the name of the bundle is known, the
list
command can be piped to thegrep
command to quickly find the bundle.
The example below shows how to verify if a Client is deployed and running.
ddf@local>list | grep -i example
[ 162] [Active ] [   ] [   ] [ 80] DDF :: Registry :: example Client (2.0.0)
The state is Active
, indicating that the bundle is ready for program execution.
28.3. High Availability Guidance
Capabilities that need to function in a Highly Available Cluster should have one of the two below properties.
- Stateless
-
Stateless capabilities will function in a Highly Available Cluster because no synchronization between DDF nodes is necessary.
- Common storage
-
If a capability must store data or share state with another node, then the data or shared state must be accessible to all nodes in the Highly Available Cluster. For example, the Catalog’s storage provider must be accessible to all DDF nodes.
Appendix B: Application Reference
Installation and configuration details by application.
B.1. Admin Application Reference
The Admin Application contains components that are integral for the configuration of DDF applications. It contains various services and interfaces that allow administrators control over their systems and enhance administrative capabilities.
B.1.2. Installing the Admin Application
Install the Admin application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
admin-app
feature.
B.1.3. Configuring the Admin Application
To configure the Admin Application:
-
Navigate to the Admin Console.
-
Select the Admin application.
-
Select the Configuration tab.
Name | Property | Description |
---|---|---|
Admin Configuration Policy | org.codice.ddf.admin.config.policy.AdminConfigPolicy | Admin Configuration Policy configurations. |
Admin UI | org.codice.admin.ui.configuration | Admin UI configurations. |
B.2. Message Broker Application Reference
The Message Broker application gives an administrator the ability to configure and control the behavior of the Message Broker. These configurations will include aspects like the graceful shutdown period of components, names of queues and topics, and routing of messages.
B.2.2. Installing Message Broker Application
Install the Message Broker application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
broker-app
feature.
B.2.3. Configuring the Message Broker Application
The standard installation of the Message Broker application has no configurable properties.
B.2.3.1. Configuring the Message Broker for a Highly Available Cluster
Prior to making these configuration changes, follow the instructions in Installing DDF to install DDF on two physically-separate hosts.
-
Configure each of the DDF installations to point to each other in a live/backup server configuration. One server will have an additional step to be designated as the backup.
-
Modify custom.system.properties:
The <DDF_HOME>/etc/custom.system.properties file in each of the installations needs to be updated so that the servers know about each other. The following properties need to have the values on the right side of the = updated.
artemis.live.host=<Hostname.or.ip.here>
artemis.backup.host=<Hostname.or.ip.here>
artemis.network.iplist=<Comma,separated,IPs>
artemis.cluster.password=<Common password across all nodes>
Important: Using a Non-Local IP or Host. artemis.network.iplist should contain a list of non-local IPs or host names that are not hosted on the same physical machine as either the live or backup machines. These IP addresses are pinged in the event of a network outage. If the backup cannot reach the live server but can successfully ping one of these hosts, it will then take over as the live server. If the host list is incorrectly configured with a local IP, it could break the cluster by causing both servers to go live. It is also recommended that the live server have the backup server's IP in its list and the backup server have the live server's IP in its list.
Configure a Backup Broker:
The installation that is going to be used as the backup needs to have an additional configuration change made so that it knows it’s the backup. The<DDF_HOME>/etc/org.apache.activemq.artemis.cfg
should be modified to point to the providedartemis-backup.xml
instead ofartemis.xml
. Once updated, theconfig
value should look like this:config=file:etc/artemis-backup.xml
-
Restart Servers:
The DDF instances should be restarted in the following order:
-
Live server
-
Backup server
In order to maintain connectivity to the broker during a restart, only stop and start a single server at a time. See Starting DDF for detailed steps.
-
-
Verify Cluster Replication:
Once both servers are started, the following command can be run using curl or a browser to verify that the servers have successfully synced.
Server Cluster Verification Command:
curl https://{FQDN}:{PORT}/admin/jolokia/read/org.apache.activemq.artemis:broker=%22artemis%22/ReplicaSync --user admin:admin --header "Origin: https://{FQDN}:{PORT}" --header "X-Requested-With: XMLHttpRequest" --insecure
{
"request": {
"mbean": "org.apache.activemq.artemis:broker=\"artemis\",
"attribute": "ReplicaSync",
"type": "read"
},
"value": true,
"timestamp": 1485967446,
"status": 200
}
Important
|
If LDAP has been configured then the admin user and password for the above command will need to be changed. |
Important
|
Note the |
Additionally, for more details about the health of the cluster, the following command can be run using curl, or the URL https://{FQDN}:{PORT}/admin/jolokia/read/org.apache.activemq.artemis:broker=%22artemis%22,component=cluster-connections,name=%22my-cluster%22/Topology can be opened directly in a browser.
curl https://{FQDN}:{PORT}/admin/jolokia/read/org.apache.activemq.artemis:broker=%22artemis%22,component=cluster-connections,name=%22broker-cluster%22/Topology --user admin:admin --header "Origin: https://{FQDN}:{PORT}" --header "X-Requested-With: XMLHttpRequest" --insecure
This endpoint returns diagnostic info about the cluster that can be used for troubleshooting. Values of interest in the response are the nodes=2 value, which is a count of the nodes in the cluster, and the port/host values for each node.
{
"request": {
"mbean": "org.apache.activemq.artemis:broker=\"artemis\",component=cluster-connections,name=\"my-cluster\",
"attribute": "Topology",
"type": "read"
},
"value": "topology on Topology@750c2a56[owner=ClusterConnectionImpl@228651110[nodeUUID=17b48db9-e7ee-11e6-9d56-38c986025a6f, connector=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=5672&host=10-101-3-185, address=jms, server=ActiveMQServerImpl::serverUUID=17b48db9-e7ee-11e6-9d56-38c986025a6f]]:\n\t17b48db9-e7ee-11e6-9d56-38c986025a6f => TopologyMember[id = 17b48db9-e7ee-11e6-9d56-38c986025a6f, connector=Pair[a=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=5672&host=10-101-3-185, b=TransportConfiguration(name=netty-connector, factory=org-apache-activemq-artemis-core-remoting-impl-netty-NettyConnectorFactory) ?port=5672&host=10-101-2-97], backupGroupName=null, scaleDownGroupName=null]\n\tnodes=2\tmembers=1",
"timestamp": 1485971158,
"status": 200
}
B.2.3.2. Securing the Message Broker Application
DDF can be configured to use Artemis to perform authentication and authorization against an LDAP server.
Artemis provides the ability to apply role-based security to queues based on addresses (see the Artemis documentation for details). It can be configured to use an LDAP server to perform authentication and authorization for users who connect to it.
Important
|
If you are setting up multiple DDF instances in a cluster for high availability, then you will need to perform these steps on each instance. |
The Security STS LDAP Login and Security STS LDAP Claims Handler bundles are responsible for authenticating and authorizing users with your LDAP server. To configure them for your LDAP server, follow the instructions in STS LDAP Login and STS LDAP Claims Handler.
Once the STS LDAP Login and Claims Handlers are configured, update <DDF_HOME>/etc/org.apache.activemq.artemis.cfg
to use the ldap
realm (just change domain=karaf
to domain=ldap
):
domain=ldap
DDF uses two roles in the security settings for Artemis: manager
and broker-client
.
<security-setting match="#">
<permission type="createNonDurableQueue" roles="manager,broker-client"/>
<permission type="deleteNonDurableQueue" roles="manager,broker-client"/>
<permission type="createDurableQueue" roles="manager"/>
<permission type="deleteDurableQueue" roles="manager"/>
<permission type="consume" roles="manager,broker-client"/>
<permission type="browse" roles="manager,broker-client"/>
<permission type="send" roles="manager,broker-client"/>
<permission type="manage" roles="manager"/>
</security-setting>
Users with the role manager
have full permissions, but users with the role broker-client
cannot
create or delete durable queues or invoke management operations.
Your LDAP should have groups that correspond to these roles so that members of those groups will have
the correct permissions when connecting to Artemis to send or consume messages.
Alternatively, you can choose roles other than manager
and broker-client
, which may be useful if your LDAP already
has groups that you would like to use as Artemis roles.
If you wish to use different roles, just replace manager
and/or broker-client
in the <security-setting>
in artemis.xml
with the roles you would like to use.
B.2.3.3. Artemis Broker Connection Configuration
The Artemis Broker Connection Configuration
manages the parameters for DDF’s connection to
Artemis. The username and password in the Artemis Broker Connection Configuration
need to be updated
so that they correspond to a user in your LDAP. If possible, this user should have the manager
role
(or the role that is being used in place of manager
if the default Artemis role has been changed).
To update the username and password:
-
Navigate to the Admin Console
-
Select the Broker App application.
-
Select the Configuration tab.
-
Select the Artemis Broker Connection Configuration.
-
Enter the username and password and select Save changes.
B.2.4. Using the Message Broker Application
The Message Broker app can be used through the Admin Console. See the Route Manager and the Undelivered Messages UI for more information.
B.2.4.1. Undelivered Messages UI
The Undeliverable Messages tab gives an administrator the ability to view undeliverable messages and then decide whether to resend or delete those messages.
The Undelivered Messages UI is installed as a part of the Message Broker.
To view undelivered messages, an administrator can use the "retrieve" button, which makes an immediate call to the backend and displays all the messages. Alternatively, the "start polling" button makes calls to the backend every 5 seconds and updates the display accordingly.
An administrator can select messages by clicking anywhere in the row of the message. Multiple messages can be selected simply by clicking multiple messages or by clicking the "Select all" option at the head of the table. Deselecting is done by clicking a message again or clicking the "Deselect all" option, next to the "Select all" option.
To attempt to resend messages, select the messages, and then click the "resend" button. Currently, there is no way to identify if a message was successfully redelivered.
To delete messages, select the messages, and then click the "delete" button.
Note
|
Only 200 messages can be viewed at a time, even though there may be more than 200 undelivered messages. |
Known issues with the Undelivered Messages UI:
-
If attempting to resend a message, but the listener is no longer available, the message will be "successfully" resent and removed from the UI and the Artemis DLQ but will not be successfully redelivered.
B.2.4.2. Route Manager
The Route Manager gives an administrator the ability to configure and deploy Camel routes, queues, and topics dynamically. The sjms
component is available by default. If a need arises for a new route, an administrator can easily develop a new route and deploy it to satisfy the requirement, rather than spending the time to develop, compile, and test new code.
The Route Manager is installed as a part of the Message Broker application.
The route shutdown timeout can be configured.
To deploy a new route, simply place a route .xml file in the <DDF_HOME>/etc/routes directory of DDF. To remove a route (or set of routes), delete the .xml file.
There are example routes in the <DDF_HOME>/etc/routes
directory by default.
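For illustration, a minimal route file using the sjms component might look like the following; the route id and queue names are hypothetical.
<routes xmlns="http://camel.apache.org/schema/spring">
    <route id="exampleRoute">
        <!-- Consume from one queue and forward to another (queue names are illustrative). -->
        <from uri="sjms:queue:example.input"/>
        <to uri="sjms:queue:example.output"/>
    </route>
</routes>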
B.3. Catalog Application Reference
The Catalog provides a framework for storing, searching, processing, and transforming information.
Clients typically perform create, read, update, and delete (CRUD) operations against the Catalog.
At the core of the Catalog functionality is the Catalog Framework, which routes all requests and responses through the system, invoking additional processing per the system configuration.
B.3.1. Catalog Application Prerequisites
To use the Catalog Application, the following applications/features must be installed:
-
Platform
B.3.2. Installing the Catalog Application
Install the Catalog application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
catalog-app
feature.
B.3.3. Configuring the Catalog Application
To configure the Catalog Application:
-
Navigate to the Admin Console.
-
Select the Catalog application.
-
Select the Configuration tab.
Name | Property | Description |
---|---|---|
 | ddf.catalog.federation.impl.CachingFederationStrategy | Catalog Federation Strategy. |
 | ddf.catalog.backup.CatalogBackupPlugin | Catalog Backup Plugin configurations. |
 | ddf.catalog.CatalogFrameworkImpl | Catalog Standard Framework configurations. |
 | Confluence_Federated_Source | Confluence Federated Source. |
 | org.codice.ddf.catalog.content.monitor.ContentDirectoryMonitor | Content Directory Monitor configurations. |
 | org.codice.ddf.catalog.content.impl.FileSystemStorageProvider | Content File System Storage Provider. |
 | Csw_Connected_Source | CSW Connected Source. |
 | org.codice.ddf.catalog.plugin.expiration.ExpirationDatePlugin | Catalog pre-ingest plugin to set an expiration date on metacards. |
 | ddf.catalog.ftp.FtpServerManager | FTP Endpoint configurations. |
 | ddf.catalog.history.Historian | Enables versioning of both metacards and content. |
 | org.codice.ddf.catalog.security.policy.metacard.MetacardAttributeSecurityPolicyPlugin | Metacard Attribute Security Policy Plugin. |
 | org.codice.ddf.catalog.plugin.metacard.MetacardIngestNetworkPlugin | Catalog Metacard Ingest Network Plugin. |
 | ddf.catalog.metacard.validation.MetacardValidityFilterPlugin | Metacard Validation Filter Plugin. |
 | ddf.catalog.metacard.validation.MetacardValidityMarkerPlugin | Metacard Validation Marker Plugin. |
 | Metacard_File_Storage_Route | Enable data backup of metacards using a configurable transformer. |
 | Metacard_S3_Storage_Route | Resource Download Configuration. |
 | OpenSearchSource | Catalog OpenSearch Federated Source. |
 | ddf.catalog.resource.download.ReliableResourceDownloadManager | Resource Download configurations. |
 | ddf.services.schematron.SchematronValidationService | Schematron Validation Services configurations. |
 | org.codice.ddf.catalog.plugin.security.audit.SecurityAuditPlugin | Security Audit Plugin. |
 | ddf.catalog.transformer.input.tika.TikaInputTransformer | Tika Input Transformer. |
 | ddf.catalog.resource.impl.URLResourceReader | URL Resource Reader. |
 | org.codice.ddf.catalog.content.plugin.video.VideoThumbnailPlugin | Video Thumbnail Plugin. |
 | org.codice.ddf.catalog.security.policy.xml.XmlAttributeSecurityPolicyPlugin | XML Attribute Security Policy Plugin. |
 | ddf.catalog.transformer.xml.XmlResponseQueueTransformer | Xml Response Query Transformer. |
 | ddf.catalog.transformer.input.pdf.PdfInputTransformer | PDF Input Transformer configurations. |
 | org.codice.ddf.catalog.security.CatalogPolicy | Catalog Policy Plugin. |
 | org.codice.ddf.catalog.security.ResourceUriPolicy | Resource URI Policy Plugin. |
 | org.codice.ddf.catalog.sourcepoller.StatusSourcePollerRunner | Status Source Poller Runner. |
Name | Property | Type | Description | Default Value | Required |
---|---|---|---|---|---|
Source Name | | String | | | Yes |
Confluence Rest URL | | String | The Confluence Rest API endpoint URL. Example: https://{FQDN}:{PORT}/rest/api/content | | Yes |
Username | | String | Username to use with HTTP Basic Authentication. This auth info will overwrite any federated auth info. Only set this if the Confluence endpoint requires basic authentication. | | No |
Password | | Password | Password to use with HTTP Basic Authentication. This auth info will overwrite any federated auth info. Only set this if the Confluence endpoint requires basic authentication. | | No |
Include Page Contents In Results | | Boolean | Flag indicating if Confluence page contents should be included in the returned results. | false | No |
Include Archived Spaces | | Boolean | Flag indicating if archived Confluence spaces should be included in search results. | false | No |
Exclude Confluence Spaces | | Boolean | Flag indicating if the list of Confluence Spaces should be excluded from searches instead of included. | false | No |
Confluence Spaces | | String cardinality=1000 | The Confluence spaces to include/exclude from searches. If no spaces are specified, all visible spaces will be searched. | | No |
Attribute Overrides | | String cardinality=100 | Attribute Overrides - Optional: Metacard attribute overrides (Key-Value pairs) that can be set on the results coming from this source. If an attribute is specified here, it will overwrite the metacard's attribute that was created from the Confluence source. The format should be 'key=value'. The maximum allowed size of an attribute override is 65,535 bytes. All attributes in the catalog taxonomy tables are injected into all metacards by default and can be overridden. | | No |
Availability Poll Interval | | Long | Availability polling interval in milliseconds. | 60000 | No |
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Directory Path | String | Specifies the directory to be monitored; can be a filesystem path or a WebDAV address (only supported for Monitor in place). | false | true
Maximum Concurrent Files | Integer | Specifies the maximum number of concurrent files to be processed within a directory (maximum of 8). If this number exceeds 8, 8 will be used in order to preserve system resources. Make sure that your system has enough memory to support the number of concurrent processing threads across all directory monitors. | 1 | true
ReadLock Time Interval | Integer | Specifies the time to wait (in milliseconds) before acquiring a lock on a file in the monitored directory. This interval is used for sleeping between attempts to acquire the read lock on a file to be ingested. The default value of 100 milliseconds is recommended. | 100 | true
Processing Mechanism | String | Choose what happens to the content item after it is ingested. Delete will remove the original file after storing it in the content store. Move will store the item in the content store and a copy under ./ingested, then remove the original file. (NOTE: this will double the amount of disk space used.) Monitor in place will index the file and serve it from its original location. If in place is used, then the URLResourceReader root resource directories configuration must be updated to allow downloading from the monitored directory (see URL Resource Reader). | in_place | false
Attribute Overrides | String | Optional: Metacard attribute overrides (Key-Value pairs) that can be set on the content monitor. If an attribute is specified here, it will overwrite the metacard's attribute that was created from the content directory. The format should be 'key=value'. The maximum allowed size of an attribute override is 65,535 bytes. All attributes in the catalog taxonomy tables are injected into all metacards by default and can be overridden. An example follows this table. | null | false
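For instance, a directory monitor for imagery might apply overrides such as the following (an illustrative sketch; the attribute names are examples drawn from the catalog taxonomy, and any key=value pairs could be used):
media.type=image/nitf
topic.keyword=aerial-imagery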
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Enable Versioning | Boolean | Enables versioning of both metacards and content. | true | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Create Required Attributes | String | Roles/attributes required for the create operations. Example: role=role1,role2 | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Update Required Attributes | String | Roles/attributes required for the update operation. Example: role=role1,role2 | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Delete Required Attributes | String | Roles/attributes required for the delete operation. Example: role=role1,role2 | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Read Required Attributes | String | Roles/attributes required for the read operations (query and resource). Example: role=role1,role2 | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
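As an example of these values in practice, an installation could require a dedicated role for writes while leaving reads open to guests (data-manager is a hypothetical role name):
Create Required Attributes: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=data-manager
Update Required Attributes: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=data-manager
Delete Required Attributes: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=data-manager
Read Required Attributes: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest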
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Permit Resource URI on Creation | String | Allow users to provide a resource URI when creating a metacard. | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Permit Resource URI on Update | String | Allow users to provide a resource URI when updating a metacard. | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Poll Interval (minutes) | Integer | The interval (in minutes) at which to recheck the availability of all sources. Must be at least 1 minute. WARNING: There is a maximum delay of 2* | 1 | true
B.4. GeoWebCache Application Reference
GeoWebCache enables a server providing a map tile cache and tile service aggregation. See GeoWebCache for more information.
Warning: The GeoWebCache application is currently in an EXPERIMENTAL status and should not be installed on a security-hardened installation.
This application also provides an administrative plugin for the management of GeoWebCached layers, as well as a user interface for previewing, truncating, or seeding layers at https://{FQDN}:{PORT}/geowebcache/.
B.4.2. Installing GeoWebCache
Install the GeoWebCache application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
geowebcache-app
feature.
B.4.3. Configuring GeoWebCache
GeoWebCache can be configured to cache layers locally, using the following procedures.
B.4.3.1. Adding GeoWebCache Layers
Add layers to the local cache:
-
Navigate to the Admin Console.
-
Select the GeoWebCache Application.
-
Select the GeoWebCache Layers tab.
-
Click the Add button.
-
Enter the data in the fields provided.
-
If necessary, click the Add button to add additional MIME types.
-
If necessary, click the Add button to add additional WMS Layer Names.
Name | Type | Description
---|---|---|
Name | String | Unique name assigned.
Mime Formats | String | List of mime formats used.
URL | URI | URL location of layer to add.
WMS Layer Name | String | The name(s) of WMS layers that exist at the URL specified above. If no WMS Layer names are specified, GeoWebCache will look for the Layer Name specified in the name field. Otherwise, it will attempt to find all layer names added here and combine them into one layer.
B.4.3.2. Editing GeoWebCache Layers
-
Navigate to the Admin Console.
-
Select the GeoWebCache application.
-
Navigate to the GeoWebCache Layers tab.
-
Click the Name field of the layer to edit.
B.4.3.3. Removing GeoWebCache Layers
-
Click the Delete icon at the end of the row of the layer to be deleted.
B.4.3.4. Configuring GWC Disk Quota
Storage usage for a GeoWebCache server is managed by a diskquota.xml file, whose configuration prevents image-intensive data from filling the available storage.
To view the current disk quota configuration as XML: https://{FQDN}:{PORT}/geowebcache/rest/diskquota.xml
To update the disk quota, a client can PUT a new XML configuration: curl -v -k -XPUT -H "Content-type: text/xml" -d @diskquota.xml "https://{FQDN}:{PORT}/geowebcache/rest/diskquota.xml"
diskquota.xml
<gwcQuotaConfiguration>
<enabled>true</enabled>
<diskBlockSize>2048</diskBlockSize>
<cacheCleanUpFrequency>5</cacheCleanUpFrequency>
<cacheCleanUpUnits>SECONDS</cacheCleanUpUnits>
<maxConcurrentCleanUps>5</maxConcurrentCleanUps>
<globalExpirationPolicyName>LFU</globalExpirationPolicyName>
<globalQuota>
<value>100</value>
<units>GiB</units>
</globalQuota>
<layerQuotas/>
</gwcQuotaConfiguration>
See Disk Quotas for more information on configuration options for disk quota.
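A typical update cycle, assuming curl is available and the client is authorized against the endpoint, is to retrieve the current configuration, edit it locally, and PUT it back:
curl -k "https://{FQDN}:{PORT}/geowebcache/rest/diskquota.xml" -o diskquota.xml
# edit diskquota.xml locally, e.g. raise the globalQuota value, then:
curl -v -k -XPUT -H "Content-type: text/xml" -d @diskquota.xml "https://{FQDN}:{PORT}/geowebcache/rest/diskquota.xml"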
B.4.4. Configuring the Standard Search UI for GeoWebCache
Add a new Imagery Provider in the Admin Console:
-
Navigate to the Admin Console.
-
Select Configuration tab.
-
Select Standard Search UI configuration.
-
Click the Add button next to Imagery Providers
-
Enter configuration for Imagery Provider in new textbox:
-
{"type" "WMS" "url" "https://{FQDN}:{PORT}/geowebcache/service/wms" "layers" ["states"] "parameters" {"FORMAT" "image/png"} "alpha" 0.5}
-
Set the Map Projection to EPSG:900913 or EPSG:4326. (GeoWebCache supports either of these projections.)
Note: Currently, GeoWebCache only supports WMS 1.1.1 and below. If the version number is not specified in the imagery provider, DDF will default to version
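Because of this, it can be safer to pin the WMS version explicitly in the imagery provider's parameters map, for example (a sketch following the provider format above; VERSION is the standard WMS request parameter):
{"type": "WMS", "url": "https://{FQDN}:{PORT}/geowebcache/service/wms", "layers": ["states"], "parameters": {"FORMAT": "image/png", "VERSION": "1.1.1"}, "alpha": 0.5}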
B.5. Platform Application Reference
The Platform application is considered to be a core application of the distribution, providing the fundamental building blocks that the distribution needs to run.
A Command Scheduler is also included as part of the Platform application to allow users to schedule Command Line Shell Commands.
B.5.2. Installing Platform
Install the Platform application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
platform-app
feature.
B.5.3. Configuring the Platform Application
To configure the Platform Application:
-
Navigate to the Admin Console.
-
Select the Platform application.
-
Select the Configuration tab.
Property | Description
---|---|
DDF_Custom_Mime_Type_Resolver | DDF Custom Mime Types.
org.codice.ddf.platform.logging.LoggingService | Logging Service configurations.
MetricsReporting | Metrics Reporting.
org.codice.ddf.security.response.filter.ResponseHeaderConfig | HTTP Response Security response configurations.
org.codice.ddf.platform.email.impl.SmtpClientImpl | Email Service configurations.
org.codice.ddf.distribution.landingpage.properties | Starting page for users to interact with DDF.
ddf.platform.ui.config | Platform UI configurations.
ddf.platform.scheduler.Command | Platform Command Scheduler.
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Description | String | Specifies the description to display on the landing page. | As a common data layer, DDF provides secure enterprise-wide data access for both users and systems. | true
Phone Number | String | Specifies the phone number to display on the landing page. | | true
Email Address | String | Specifies the email address to display on the landing page. | | true
External Web Site | String | Specifies the external web site URL to display on the landing page. | | true
Announcements | String | Announcements that will be displayed on the landing page. | null | true
Branding Background | String | Specifies the landing page background color. Use HTML CSS colors or #rrggbb. | | true
Branding Foreground | String | Specifies the landing page foreground color. Use HTML CSS colors or #rrggbb. | | true
Branding Logo | String | Specifies the landing page logo. Use a base64-encoded image. | | true
Additional Links | String | Additional links to be displayed on the landing page. Use the format <text>,<link>; an example follows this table. | | yes
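For example, a hypothetical Additional Links value in the <text>,<link> format: Contact Us,https://example.org/contact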
B.6. Registry Application Reference
Registry contains the base registry components, plugins, sources, and interfaces needed for DDF to function as a registry connecting multiple nodes.
B.6.1. Registry Prerequisites
To use the Registry, the following apps/features must be installed:
-
Catalog
-
Admin
-
Spatial
-
Platform
-
Security
B.6.2. Installing Registry
Install the Registry application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
registry-app
feature.
B.6.3. Customizing Registry Fields
All the fields that appear in a registry node are customizable. This is done through a JSON configuration file located at <DDF_HOME>/etc/registry/registry-custom-slots.json
that defines the registry fields. In this file there are JSON objects that relate to each part of the edit registry modal.
These objects are:
-
General
-
Service
-
ServiceBinding
-
Organization
-
Person (Contact)
-
Content (Content Collection)
Each of the objects listed above is a JSON array of field objects that can be modified. There are some other objects in the JSON file, such as PersonName, Address, TelephoneNumber, and EmailAddress, that should not be modified.
Property Key | Required | Property Value
---|---|---|
key | yes | The string value that will be used to identify this field. Must be unique within the field grouping array. This value is what will show up in the generated EBRIM XML.
displayName | yes | The string name that will be displayed in the edit node dialog for this field.
description | yes | A brief description of what the field represents or is used for. Shown when the user hovers over or clicks the question mark icon for the field.
value | no | The initial or default value of the field. For most cases this should be left as an empty array or string.
type | yes | Identifies what type of field this is. Value must be one of string, date, number, boolean, point, or bounds.
required | no | Indicates if this field must be filled out. Default is false. If true, an asterisk will be displayed next to the field name.
possibleValues | no | An array of values that could be used for this field. If multiValued=true, this list will be used for suggestions for autocomplete. If multiValued=false, this list will be used to populate a dropdown.
multiValued | no | Flag indicating if this field accepts multiple values or not. Default is false.
isSlot | no | Indicates that this field represents a slot value in the EBRIM document. If this is false, the key must match a valid EBRIM attribute for the parent object. Default is true.
advanced | no | A flag indicating if this field should be placed under the Advanced section of the edit modal UI. Default is false.
regex | no | A regular expression for validating the user's input.
regexMessage | no | A message to show the user if the regular expression test fails.
isGroup, constructTitle | N/A | These fields are used for nesting objects and should not be modified.
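As a concrete illustration, a single field object in registry-custom-slots.json might look like the following sketch (the key exampleCustomField and all values are hypothetical; only the property keys come from the table above):
{
  "key": "exampleCustomField",
  "displayName": "Example Custom Field",
  "description": "A hypothetical field added to the General section.",
  "value": "",
  "type": "string",
  "required": false,
  "multiValued": false,
  "isSlot": true,
  "advanced": true,
  "regex": "^[A-Za-z0-9 ]*$",
  "regexMessage": "Only letters, numbers, and spaces are allowed."
}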
B.6.4. Configuring the Registry Application
To configure the Registry Application:
-
Navigate to the Admin Console.
-
Select the Registry application.
-
Select the Configuration tab.
Property | Description
---|---|
Csw_Registry_Store | Registry CSW Store.
org.codice.ddf.registry.policy.RegistryPolicyPlugin | Registry Policy Plugin.
Registry_Configuration_Event_Handler | Registry Source Configuration Handler configurations.
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Registry Create Attributes | String | Roles/attributes required for Create operations on registry entries. Example: {role=role1;type=type1} | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Registry Update Attributes | String | Roles/attributes required for Update operations on registry entries. Example: {role=role1;type=type1} | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Registry Delete Attributes | String | Roles/attributes required for Delete operations on registry entries. Example: {role=role1;type=type1} | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Registry Read Attributes | String | Roles/attributes required for reading registry entries. Example: {role=role1;type=type1} | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=guest | true
Registry Admin Attributes | String | Roles/attributes required for an admin to bypass all filtering/access controls. Example: {role=role1;type=type1} | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/role=system-admin | true
Disable Registry Write Access | Boolean | Disables all write access to registry entries in the catalog. Only users with Registry Admin Attributes will be able to write registry entries. | null | false
Entries are White List | Boolean | A flag indicating whether the Registry Entry Ids represent a 'white list' (allowed - checked) or a 'black list' (blocked - unchecked) of ids. | null | false
Registry Entries Ids | String | List of registry entry ids to be used in the white/black list. | null | false
B.7. Resource Management Application Reference
The Resource Management Application provides administrative functionality to monitor and manage data usage on the system. This application allows an administrator to:
-
View data usage.
-
Set limits on users.
-
View and terminate searches that are in progress.
Components of the Resource Management application include:
- Resource Management Data Usage Tab
-
View data usage and configure users' data limits and reset times for those limits.
- Resource Management Queries Tab
-
View and cancel actively running queries.
B.7.1. Resource Management Prerequisites
To use the Resource Management Application, the following apps/features must be installed:
-
Platform
-
Security
-
Admin
-
Catalog
B.7.2. Installing Resource Management
Install the Resource Management application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
resourcemanagement-app
feature.
B.7.3. Configuring the Resource Management Application
To configure the Resource Management Application:
-
Navigate to the Admin Console.
-
Select the Resource Management application.
-
Select the Configuration tab.
Property | Description
---|---|
org.codice.ddf.resourcemanagement.usage | Data Usage configurations.
B.8. Security Application Reference
The Security application provides authentication, authorization, and auditing services for the DDF. These services comprise both a framework that developers and integrators can extend and a reference implementation that meets security requirements.
This section documents the installation, maintenance, and support of this application.
-
Security Core
-
Security Encryption
-
Security IdP
-
Security PEP
-
Security PDP
-
Security STS
B.8.1. Security Prerequisites
To use the Security application, the following applications/features must be installed:
-
Platform
B.8.2. Installing Security
Install the Security application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
security-app
feature.
B.8.3. Configuring the Security Application
To configure the Security Application:
-
Navigate to the Admin Console.
-
Select the Security application.
-
Select the Configuration tab.
Property | Description
---|---|
Claims_Handler_Manager | STS Ldap and Roles Claims Handler Configuration.
org.codice.ddf.security.interceptor.GuestInterceptor | Security SOAP Guest Interceptor.
org.codice.ddf.security.idp.client.IdpMetadata | IdP Client configurations.
org.codice.ddf.security.idp.client.LogoutRequestService | Logout Page configurations.
org.codice.ddf.security.policy.context.impl.PolicyManager | Web Context Security Policies.
org.codice.ddf.security.sts.claims.property.PropertyFileClaimsHandler | File Based Claims Handler.
org.codice.ddf.security.filter.login.Session | Session configurations.
org.codice.ddf.security.idp.client.IdpHandler | IdP Handler configurations.
ddf.security.pdp.realm.AuthzRealm | AuthZ Security configurations.
ddf.security.service.SecurityManager | SAML NameID Policy.
ddf.security.sts | STS configurations.
ddf.security.sts.client.configuration | STS Client configurations.
ddf.security.sts.guestclaims | Guest Claims Handler configurations.
ddf.security.sts.guestvalidator | Security STS Guest Validator configurations.
org.codice.ddf.security.validator.pki | STS PKI Token Validator configurations.
Name | Type | Description | Default Value
---|---|---|---|
IdP Metadata | String | Refer to metadata by HTTPS URL (https://), file URL (file:), or an XML block (<md:EntityDescriptor>…</md:EntityDescriptor>). |
Perform User-Agent Check | Boolean | If selected, this will allow clients that do not support ECP and are not browsers to fall back to PKI, BASIC, and potentially GUEST authentication, if enabled. | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Role Claim Type | String | Role claim URI. | | true
ID Claim Type | String | ID claim URI. | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier | true
User Role File | String | Location of the file which maps roles to users. | etc/users.properties | true
User Attribute File | String | Location of the file which maps attributes to users. | etc/users.attributes | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Session Timeout (in minutes) | Integer | Specifies the length of inactivity (in minutes) between client requests before the servlet container will invalidate the session (this applies to all client sessions). This value must be 2 minutes or greater, as users are warned when only 1 minute remains. If a value of less than 2 minutes is used, the timeout is set to the default time of 31 minutes. See also: Platform UI Config. | 31 | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
SAML NameID Policy | String | List of attributes that are considered for replacing the username of the logged-in user. If any of these attributes match any of the attributes within the SecurityAssertion, the value of the first matching attribute will be used as the username. (Does not apply when NameIDFormat is one of the following: X509, persistent, kerberos, or unspecified, and the username is not empty.) | http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier, uid | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
SAML Assertion Type | String | The version of SAML to use. Most services require SAML v2.0. Changing this value from the default could cause services to stop responding. | http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0 | true
SAML Key Type | String | The key type to use with SAML. Most services require Bearer. Changing this value from the default could cause services to stop responding. | | true
SAML Key Size | String | The key size to use with SAML. The default key size of 256 is fine for most applications. Changing this value from the default could cause services to stop responding. | 256 | true
Use Key | Boolean | Signals whether or not the STS Client should supply a public key to embed as the proof key. Changing this value from the default could cause services to stop responding. | true | true
STS WSDL Address | String | STS WSDL Address. | ${org.codice.ddf.system.protocol}${org.codice.ddf.system.hostname}:${org.codice.ddf.system.port}${org.codice.ddf.system.rootContext}/SecurityTokenService?wsdl | true
STS Endpoint Name | String | STS Endpoint Name. | {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}STS_Port | false
STS Service Name | String | STS Service Name. | {http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService | false
Signature Properties | String | Path to Signature crypto properties. This path can be part of the classpath, relative to <DDF_HOME>, or an absolute path on the system. | etc/ws-security/server/signature.properties | true
Encryption Properties | String | Path to Encryption crypto properties file. This path can be part of the classpath, relative to <DDF_HOME>, or an absolute path on the system. | etc/ws-security/server/encryption.properties | true
STS Properties | String | Path to STS crypto properties file. This path can be part of the classpath, relative to <DDF_HOME>, or an absolute path on the system. | etc/ws-security/server/signature.properties | true
Claims | String | List of claims that should be requested by the STS Client. | | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Attributes | String | The attributes to be returned for any Guest user. | | true
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Supported Realms | String | The realms that this validator supports. | karaf,ldap | true
B.9. Solr Catalog Application Reference
By default, DDF uses Solr for data storage.
B.9.1. Solr Catalog Prerequisites
To use the Solr Catalog Application, the following apps/features must be installed:
-
Platform
-
Catalog
B.9.2. Installing Solr Catalog
Install the Solr Catalog application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
solr-app
feature.
B.9.3. Configuring the Solr Catalog Application
To configure the Solr Catalog Application:
-
Navigate to the Admin Console.
-
Select the Solr Catalog application.
-
Select the Configuration tab.
Property | Description
---|---|
ddf.catalog.solr.provider.SolrCatalogProvider | Solr Catalog Provider.
B.10. Spatial Application Reference
The Spatial Application provides a KML transformer and a KML network link endpoint that allows a user to generate a View-based KML Query Results Network Link.
B.10.1. Offline Gazetteer Service
In the Spatial Application, the offline-gazetteer
is installed by default.
This feature enables you to use an offline source of GeoNames data (as an alternative to the GeoNames Web service enabled by the webservice-gazetteer
feature) to perform searches via the gazetteer search box in the Search UI.
By default, a small set of GeoNames data is included with the offline gazetteer. The GeoNames data is stored as metacards in the core catalog and is tagged with geonames
and gazetteer
. This collection of GeoNames metacards can be expanded or updated by using the gazetteer:update
command.
B.10.1.1. Spatial Gazetteer Console Commands
The gazetteer
commands provide the ability to interact with the local GeoNames metacard collection in the core catalog. These GeoNames metacards are used by the offline-gazetteer
feature, which is an optional feature available in this application and is explained above. Note that these commands are only available if the offline-gazetteer
feature is installed.
Command | Description
---|---|
gazetteer:update | Adds new gazetteer metacards to the core catalog from a resource. The resource argument can be one of three types. An illustrative invocation follows this table.
 | Builds the Solr suggester index used for placename autocompletion in Intrigue when using the offline gazetteer. This index is built automatically whenever gazetteer metacards are created, updated, or deleted, but if those builds fail then this command can be used to attempt to build the index again.
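For example, assuming a downloaded GeoNames data file is one of the accepted resource types, the update command could be run from the command line console as (illustrative path):
ddf@local>gazetteer:update /path/to/geonames-data.zip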
B.10.2. Spatial Prerequisites
To use the Spatial Application, the following apps/features must be installed:
-
Platform
-
Catalog
B.10.3. Installing Spatial
Install the Spatial application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
spatial-app
feature.
B.10.4. Configuring the Spatial Application
To configure the Spatial Application:
-
Navigate to the Admin Console.
-
Select the Spatial application.
-
Select the Configuration tab.
Property | Description
---|---|
Csw_Federated_Source | CSW Specification Profile Federated Source. Should be used when federating to an external CSW service.
Csw_Federation_Profile_Source | DDF's full-fidelity CSW Federation Profile. Use this when federating to a DDF-based system.
Csw_Transactional_Federated_Source | CSW Federated Source that supports transactions (create, update, delete).
org.codice.ddf.spatial.geocoding.plugin.GeoCoderPlugin | GeoCoder Plugin.
Gmd_Csw_Federated_Source | CSW Federated Source using the Geographic MetaData (GMD) format (ISO 19115:2003).
org.codice.ddf.spatial.kml.endpoint.KmlEndpoint | Spatial KML Endpoint.
org.codice.ddf.spatial.ogc.wfs.catalog.mapper.MetacardMapper | Metacard to WFS Feature Map.
Wfs_v1_0_0_Connected_Source | WFS v1.0.0 Connected Source.
Wfs_v1_0_0_Federated_Source | WFS v1.0.0 Federated Source.
Wfs_v1_1_0_Federated_Source | WFS v1.1.0 Federated Source.
Wfs_v2_0_0_Connected_Source | WFS v2.0.0 Connected Source.
Wfs_v2_0_0_Federated_Source | WFS v2.0.0 Federated Source.
org.codice.ddf.spatial.kml.style | Spatial KML Style Map Entry.
Title | Type | Description | Default Value
---|---|---|---|
Radius | Integer | The search radius from a Point in kilometers. | 10
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Style Document | String | KML Document containing custom styling. This will be served up by the KmlEndpoint. (e.g. file:///path/to/kml/style/doc.kml) | | false
Icons Location | String | Location of icons for the KML endpoint. | | false
Description | String | Description of this NetworkLink. Enter a short description of what this NetworkLink provides. | | false
Web Site | String | URL of the web site to be displayed in the description. | | false
Logo | String | URL to the logo to be displayed in the description. | | false
Visible By Default | Boolean | Check if the source NetworkLinks should be visible by default. | false | false
Max Number of Results | Integer | The maximum number of results that should be returned from each layer. | 100 | false
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Attribute Name | String | The name of the Metacard Attribute to match against, e.g. title, metadata-content-type, etc. | null | true
Attribute Value | String | The value of the Metacard Attribute. | null | true
Style URL | String | The fully qualified URL to the KML Style, e.g. http://example.com/styles#myStyle | null | true
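For example, a style map entry that renders all contact records with a custom style might use (illustrative values): Attribute Name metadata-content-type, Attribute Value contact, and Style URL http://example.com/styles#contactStyle.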
B.11. Search UI Application Reference
The Search UI is a user interface that enables users to search a catalog and associated sites for content and metadata.
B.11.1. Search UI Prerequisites
To use the Search UI application, the following applications/features must be installed:
-
Platform
-
Catalog
B.11.2. Installing Search UI
Install the Search UI application through the Admin Console.
-
Navigate to the Admin Console.
-
Select the System tab.
-
Select the Features tab.
-
Install the
search-ui-app
feature.
B.11.3. Configuring the Search UI Application
To configure the Search UI Application:
-
Navigate to the Admin Console.
-
Select the Search UI application.
-
Select the Configuration tab.
Property | Description
---|---|
org.codice.ddf.catalog.ui.security.facetwhitelist | Catalog UI Search Attribute Suggestion Whitelist.
org.codice.ddf.catalog.ui.query.monitor.email.EmailNotifier | Email Notifier.
org.codice.ddf.ui.searchui.filter.RedirectServlet | Search UI redirect.
org.codice.ddf.catalog.ui.transformer.TransformerDescriptors | Catalog UI Search Transformer Blacklists.
org.codice.ddf.catalog.ui.query.monitor.impl.WorkspaceQueryService | Catalog UI Search Workspace Query Monitor.
org.codice.ddf.catalog.ui.query.monitor.impl.WorkspaceServiceImpl | Catalog UI Search Workspace Service.
org.codice.ddf.catalog.ui | Catalog UI Search.
org.codice.ddf.catalog.ui.attribute.aliases | Catalog UI Search Attribute Aliases.
org.codice.ddf.catalog.ui.attribute.descriptions | Catalog UI Search Attribute Descriptions.
org.codice.ddf.catalog.ui.attribute.hidden | Catalog UI Search Hidden Attributes.
org.codice.ddf.catalog.ui.theme | Catalog UI Search Theme.
org.codice.ddf.catalog.ui.whitelist | Catalog UI Search Metacard Type Whitelist.
org.codice.ddf.catalog.ui.security | Catalog UI Search Workspace Security.
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Result Count | Integer | Specifies the number of results to request from each source. | | true
Export Result Limit | Integer | Specifies the maximum number of results that can be exported. | | true
Imagery Providers | String | List of imagery providers to use. The valid types are OSM, AGM, WMS, WMT, and TMS (3d map support only); multiple layers may also be configured. A WMS example appears in Configuring the Standard Search UI for GeoWebCache above. | | false
Terrain Provider | String | Terrain provider to use for height data. Valid types are: CT (CesiumTerrain), AGS (ArcGisImageServer), and VRW (VRTheWorld). | | false
Default Layout | String | The default UI layout and visualization configuration used in the Catalog UI. See http://golden-layout.com/docs/Config.html for more information. | | true
List Templates | String | Templates for users to quickly create lists that already specify an icon and a filter. Example: {"id":"Pizza Deliveries", "list.icon": "tasks", "list.cql": "(\"anyText\" ILIKE 'pizza')"} | | false
Map Projection | String | Projection of imagery providers (e.g. EPSG:900913, EPSG:4326). | | false
Bing Maps Key | String | Bing Maps API key. This should only be set if you are using Bing Maps Imagery or Terrain Providers. | | false
Connection Timeout | Integer | Specifies the client-side connection timeout in milliseconds. | | false
Source Poll Interval | Integer | Specifies the interval to poll for sources in milliseconds. | | true
Show Sign In | Boolean | Allow Sign In to Search UI and welcome notice. Enable this if the Search UI is protected. | | false
Show Tasks | Boolean | Show task menu area for long-running actions. | | false
Show Gazetteer | Boolean | Show gazetteer for searching place names. | | false
Use Online Gazetteer | Boolean | Should the online gazetteer be used? If unchecked, a local gazetteer service will be used. This only applies to the search gazetteer in Intrigue. | | false
Show Uploader | Boolean | Show upload menu for adding new records. | | false
Use External Authentication | Boolean | Use an external authentication point, such as IdP. | | false
Enable Cache | Boolean | Locally cached results will be returned in search results. | | true
Allow Editing | Boolean | Allow editing capability to be visible in the UI. | | true
Enable Web Sockets | Boolean | Enables use of Web Sockets. | | false
Enable Local Catalog | Boolean | Enables queries to the local catalog. | | true
Enable Historical Search | Boolean | Enable searching for historical metacards. | | true
Enable Archive Search | Boolean | Enable searching for archived metacards. | | true
Enable Query Feedback | Boolean | Enable the query comments option. | | true
Enable Experimental Features | Boolean | WARNING: Enables experimental features in the UI. This allows users to preview upcoming features. | | true
Show Relevance Scores | Boolean | Toggle the display of relevance scores of search results. | | false
Relevance Score Precision | Integer | Set the number of digits to display for each relevance score. The default is 5 (i.e. 12.345). | | false
Show Logo in Title Bar | Boolean | Toggles the visibility of the logo in the menu bar. | | false
Enable Unknown Error Box | Boolean | Enable Unknown Error Box visibility. | | false
Enable Metacard Preview | Boolean | Enable Metacard Preview in the Inspector. | | true
Enable Spellcheck | Boolean | Enable spellcheck for searches. | | false
Enable Similar Word Matching | Boolean | Enable phonetic and synonym matching for searches. | | false
Basic Search Temporal Selections | String | Enable Basic Search Temporal Selections. | | true
Basic Search Match Type Metacard Attribute | String | Metacard attribute used for Basic Search Type Match. | | true
Type Name Mapping | String | Mapping of display names to content types in the form name=type. | | false
Read Only Metacard Attributes | String | List of metacard attributes that are read-only. NOTE: the provided values will be evaluated as JavaScript regular expressions when matched against metacard attributes. | | false
Summary Metacard Attributes | String | List of metacard attributes to display in the summary view. | | false
Result Preview Metacard Attributes | String | List of metacard attributes to display in the result preview. | | false
Query Schedule Frequencies | Long | Custom list of schedule frequencies in seconds. This will override the frequency list in the query schedule tab. Leave this empty to use the frequency list on the Catalog UI. | | true
Auto Merge Time | Integer | Specifies the interval during which new results can be merged automatically. This is the time allowed since the last merge (in milliseconds). | | true
Query Feedback Email Subject Template | String | See Configuring Query Feedback for Intrigue for more details about Query Feedback templates. | | true
Query Feedback Email Body Template | String | See Configuring Query Feedback for Intrigue for more details about Query Feedback templates. | | true
Query Feedback Email Destination | String | Email destination to send Query Feedback results. | | true
Maximum Endpoint Upload Size | Integer | The maximum size (in bytes) to allow per client when receiving a POST/PATCH/PUT. Note: This does not affect product upload size, just the maximum size allowed for calls from Intrigue. | | true
Map Home | String | Specifies the default home view for the map by bounding box. The format is "West, South, East, North", where North, East, South, and West are coordinates in degrees. | | false
UI Branding Name | String | Specifies a custom UI branding name in the UI. | | true
Upload Editor: Attribute Configuration | String | List of attributes to show in the upload editor. See Catalog Taxonomy for a list of supported attributes. Supported entry syntax: either a bare attribute name, or an attribute name with a list of allowed values (see the sketch after this table). Using the first syntax, the editor will attempt to determine the appropriate control to display based on the attribute datatype. The second syntax will force the editor to use a dropdown selector populated with the provided values. This is intended for use with String datatypes, which by default may be assigned any value. | | false
Upload Editor: Required Attributes | String | List of attributes which must be set before an upload is permitted. If an attribute is listed as required but not shown in the editor, it will be ignored. | | false
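For example (a sketch; the exact entry syntax was elided above, so a comma-separated value list is assumed): a bare entry such as title lets the editor pick a control based on the attribute's datatype, while an entry such as topic.category=Imagery,Maps would force a dropdown limited to those two values.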
Name | Type | Description | Default Value | Required
---|---|---|---|---|
Hidden Attributes | String | List of attributes to be hidden. NOTE: the provided values will be evaluated as JavaScript regular expressions when matched against metacard attributes. | | false
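For example (hypothetical values), listing ext\..* and ^source-id$ here would hide every attribute prefixed with ext. as well as the source-id attribute, since each entry is applied as a JavaScript regular expression.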
Appendix C: Application Whitelists
Within each DDF application, certain packages are exported for use by third parties.
C.1. Packages Removed From Whitelist
In the transition of the whitelist from the ambiguous package listing to the new class listing, several errors were found. The packages originally listed that were removed either did not exist, contained experimental interfaces, or contained only internal implementations, and should never have been included in the whitelist. The following is a list of packages that were listed in error and have been removed from the whitelist.
Note: None of the packages in this list have been removed from the distribution. They may, however, be changed or removed in the future.
Admin
-
org.codice.ddf.ui.admin.api.plugin
-
org.codice.ddf.admin.configuration.plugin
Catalog
-
org.codice.ddf.admin.configuration.plugin
-
ddf.catalog.data.metacardtype
-
ddf.catalog.federation.impl
-
ddf.catalog.plugin.groomer
-
ddf.catalog.pubsub
-
ddf.catalog.pubsub.tracker
-
ddf.catalog.resource.data
-
ddf.catalog.resource.impl
-
ddf.catalog.resourceretriever
-
ddf.catalog.transformer.metacard.geojson
-
ddf.common
-
org.codice.ddf.endpoints
-
org.codice.ddf.endpoints.rest
-
org.codice.ddf.endpoints.rest.action
-
org.codice.ddf.opensearch.query
-
org.codice.ddf.opensearch.query.filter
Platform
-
org.codice.ddf.configuration.admin
-
org.codice.ddf.configuration.migration
-
org.codice.ddf.configuration.persistence
-
org.codice.ddf.configuration.persistence.felix
-
org.codice.ddf.configuration.status
-
org.codice.ddf.parser
-
org.codice.ddf.parser.xml
-
org.codice.ddf.platform.error.handler
-
org.codice.ddf.platform.util
Security
-
ddf.security.assertion.impl
-
ddf.security.common.audit
-
ddf.security.http.impl
-
ddf.security.impl
-
ddf.security.pdp.realm
-
ddf.security.permission
-
ddf.security.principal
-
ddf.security.realm.sts
-
ddf.security.samlp.impl
-
ddf.security.service.impl
-
ddf.security.settings
-
ddf.security.soap.impl
-
ddf.security.sts
-
ddf.security.ws.policy.impl
-
org.codice.ddf.security.certificate.generator
-
org.codice.ddf.security.certificate.keystore.editor
-
org.codice.ddf.security.common
-
org.codice.ddf.security.filter.authorization
-
org.codice.ddf.security.filter.login
-
org.codice.ddf.security.filter.websso
-
org.codice.ddf.security.handler.basic
-
org.codice.ddf.security.handler.guest.configuration
-
org.codice.ddf.security.handler.guest
-
org.codice.ddf.security.handler.pki
-
org.codice.ddf.security.handler.saml
-
org.codice.ddf.security.interceptor
-
org.codice.ddf.security.policy.context.impl
-
org.codice.ddf.security.servlet.logout
-
org.codice.ddf.security.validator.username
Spatial
-
org.codice.ddf.spatial.geocoder
-
org.codice.ddf.spatial.geocoder.geonames
-
org.codice.ddf.spatial.geocoding
-
org.codice.ddf.spatial.geocoding.context
-
org.codice.ddf.spatial.kml.endpoint
-
org.codice.ddf.spatial.ogc.catalog.resource.impl
C.2. Catalog Whitelist
The following classes have been exported by the Catalog application and are approved for use by third parties (a usage sketch follows these package lists):
In package ddf.catalog
-
CatalogFramework
-
Constants
In package ddf.catalog.cache
-
ResourceCacheInterface (deprecated)
In package ddf.catalog.data
-
Attribute
-
AttributeDescriptor
-
AttributeType
-
BinaryContent
-
ContentType
-
Metacard
-
MetacardCreationException
-
MetacardType
-
MetacardTypeUnregistrationException
-
Result
In package ddf.catalog.event
-
DeliveryException
-
DeliveryMethod
-
EventException
-
EventProcessor
-
InvalidSubscriptionException
-
Subscriber
-
Subscription
-
SubscriptionExistsException
-
SubscriptionNotFoundException
In package ddf.catalog.federation
-
Federatable
-
FederationException
-
FederationStrategy
In package ddf.catalog.filter
-
AttributeBuilder
-
BufferedSpatialExpressionBuilder
-
ContextualExpressionBuilder
-
EqualityExpressionBuilder
-
ExpressionBuilder
-
FilterAdapter
-
FilterBuilder
-
FilterDelegate
-
NumericalExpressionBuilder
-
NumericalRangeExpressionBuilder
-
SpatialExpressionBuilder
-
TemporalInstantExpressionBuilder
-
TemporalRangeExpressionBuilder
-
XPathBasicBuilder
-
XPathBuilder
In package ddf.catalog.filter.delegate
-
CopyFilterDelegate
-
FilterToTextDelegate
In package ddf.catalog.operation
-
CreateRequest
-
CreateResponse
-
DeleteRequest
-
DeleteResponse
-
Operation
-
OperationTransaction
-
Pingable
-
ProcessingDetails
-
Query
-
QueryRequest
-
QueryResponse
-
Request
-
ResourceRequest
-
ResourceResponse
-
Response
-
SourceInfoRequest
-
SourceInfoResponse
-
SourceProcessingDetails
-
SourceResponse
-
Update
-
UpdateRequest
-
UpdateResponse
In package ddf.catalog.plugin
-
AccessPlugin
-
PluginExecutionException
-
PolicyPlugin
-
PolicyResponse
-
PostFederatedQueryPlugin
-
PostIngestPlugin
-
PostQueryPlugin
-
PostResourcePlugin
-
PreDeliveryPlugin
-
PreFederatedQueryPlugin
-
PreIngestPlugin
-
PreQueryPlugin
-
PreResourcePlugin
-
PreSubscriptionPlugin
-
StopProcessingException
In package ddf.catalog.resource
-
DataUsageLimitExceededException
-
Resource
-
ResourceNotFoundException
-
ResourceNotSupportedException
-
ResourceReader
-
ResourceWriter
In package ddf.catalog.service
-
ConfiguredService
In package ddf.catalog.source
-
CatalogProvider
-
ConnectedSource
-
FederatedSource
-
IngestException
-
InternalIngestException
-
RemoteSource
-
Source
-
SourceDescriptor
-
SourceMonitor
-
SourceUnavailableException
-
UnsupportedQueryException
In package ddf.catalog.transform
-
CatalogTransformerException
-
InputCollectionTransformer
-
InputTransformer
-
MetacardTransformer
-
QueryResponseTransformer
In package ddf.catalog.transformer.api
-
MetacardMarshaller
-
PrintWriter
-
PrintWriterProvider
In package ddf.catalog.util
-
Describable (deprecated)
-
Maskable
In package ddf.catalog.validation
-
MetacardValidator
-
ValidationException
In package ddf.geo.formatter
-
CompositeGeometry
-
GeometryCollection
-
LineString
-
MultiLineString
-
MultiPoint
-
MultiPolygon
-
Point
-
Polygon
In package ddf.util
-
InetAddressUtil
-
NamespaceMapImpl
-
NamespaceResolver
-
WktStandard
-
XPathCache
-
XPathHelper
-
XSLTUtil
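As a sketch of how a third party might consume these exported interfaces, the following implements PostIngestPlugin from ddf.catalog.plugin to log ingest activity (method signatures paraphrased from the catalog API; verify against the DDF version in use before relying on them):
import ddf.catalog.operation.CreateResponse;
import ddf.catalog.operation.DeleteResponse;
import ddf.catalog.operation.UpdateResponse;
import ddf.catalog.plugin.PluginExecutionException;
import ddf.catalog.plugin.PostIngestPlugin;

/**
 * Hypothetical third-party plugin that logs ingest activity.
 * In a real deployment this would be registered with the Catalog Framework as an OSGi service.
 */
public class LoggingPostIngestPlugin implements PostIngestPlugin {

    @Override
    public CreateResponse process(CreateResponse input) throws PluginExecutionException {
        // Log how many metacards were just created, then pass the response through unchanged.
        System.out.println("Created " + input.getCreatedMetacards().size() + " metacard(s)");
        return input;
    }

    @Override
    public UpdateResponse process(UpdateResponse input) throws PluginExecutionException {
        return input; // no-op for updates
    }

    @Override
    public DeleteResponse process(DeleteResponse input) throws PluginExecutionException {
        return input; // no-op for deletes
    }
}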
C.3. Platform Whitelist
The following classes have been exported by the Platform application and are approved for use by third parties:
In package ddf.action
-
Action
-
ActionProvider
-
ActionRegistry
In package org.codice.ddf.branding
-
BrandingPlugin
-
BrandingRegistry
In package org.codice.ddf.configuration
-
ConfigurationWatcher (deprecated)
C.4. Registry Whitelist
The following classes have been exported by the Registry Application and are approved for use by third parties:
None.
C.5. Security Whitelist
The following classes have been exported by the Security application and are approved for use by third parties:
In package ddf.security
-
SecurityConstants
-
Subject
In package ddf.security.assertion
-
SecurityAssertion
In package ddf.security.common.util
-
Security (deprecated)
-
SecurityProperties
-
ServiceComparator
-
SortedServiceList (deprecated)
In package ddf.security.encryption
-
EncryptionService
In package ddf.security.expansion
-
Expansion
In package ddf.security.http
-
SessionFactory
In package ddf.security.service
-
SecurityManager
-
SecurityServiceException
-
TokenRequestHandler
In package ddf.security.sts.client.configuration
-
STSClientConfiguration
In package ddf.security.ws.policy
-
AbstractOverrideInterceptor
-
PolicyLoader
In package ddf.security.ws.proxy
-
ProxyServiceFactory
In package org.codice.ddf.security.handler.api
-
AuthenticationHandler
In package org.codice.ddf.security.policy.context.attributes
-
ContextAttributeMapping
In package org.codice.ddf.security.policy.context
-
ContextPolicy
-
ContextPolicyManager
C.6. Solr Catalog Whitelist
The following classes have been exported by the Solr Catalog application and are approved for use by third parties:
None.
C.7. Search UI Whitelist
The following classes have been exported by the Search UI application and are approved for use by third parties:
None.
Appendix D: DDF Dependency List
This list of DDF dependencies is automatically generated:
-
c3p0:c3p0:jar:0.9.1.1
-
ca.juliusdavies:not-yet-commons-ssl:jar:0.3.11
-
cglib:cglib-nodep:jar:3.2.6
-
ch.qos.logback:logback-access:jar:1.2.3
-
ch.qos.logback:logback-classic:jar:1.2.3
-
ch.qos.logback:logback-core:jar:1.2.3
-
com.codahale.metrics:metrics-core:jar:3.0.1
-
com.connexta.arbitro:arbitro-core:jar:1.0.0
-
com.fasterxml.jackson.core:jackson-annotations:jar:2.9.8
-
com.fasterxml.jackson.core:jackson-core:jar:2.9.8
-
com.fasterxml.jackson.core:jackson-databind:jar:2.9.8
-
com.fasterxml.jackson.datatype:jackson-datatype-jdk8:jar:2.9.8
-
com.fasterxml.woodstox:woodstox-core:jar:5.0.3
-
com.github.drapostolos:type-parser:jar:0.5.0
-
com.github.jai-imageio:jai-imageio-core:jar:1.3.1
-
com.github.jai-imageio:jai-imageio-jpeg2000:jar:1.3.1_CODICE_3
-
com.github.jknack:handlebars-jackson2:jar:1.0.0
-
com.github.jknack:handlebars:jar:1.1.2
-
com.github.jknack:handlebars:jar:2.0.0
-
com.github.lookfirst:sardine:jar:5.7
-
com.github.rvesse:airline:jar:2.1.0
-
com.google.code.gson:gson:jar:2.8.5
-
com.google.crypto.tink:tink:jar:1.2.2
-
com.google.guava:guava:jar:20.0
-
com.google.http-client:google-http-client:jar:1.22.0
-
com.googlecode.json-simple:json-simple:jar:1.1.1
-
com.googlecode.owasp-java-html-sanitizer:owasp-java-html-sanitizer:jar:20171016.1
-
com.hazelcast:hazelcast:jar:3.2.1
-
com.jayway.restassured:rest-assured:jar:2.9.0
-
com.jhlabs:filters:jar:2.0.235-1
-
com.rometools:rome-utils:jar:1.9.0
-
com.rometools:rome:jar:1.9.0
-
com.sparkjava:spark-core:jar:2.5.5
-
com.sun.mail:javax.mail:jar:1.5.6
-
com.sun.xml.bind:jaxb-core:jar:2.2.11
-
com.sun.xml.bind:jaxb-impl:jar:2.2.11
-
com.thoughtworks.xstream:xstream:jar:1.4.9
-
com.unboundid:unboundid-ldapsdk:jar:3.2.1
-
com.vividsolutions:jts-core:jar:1.14.0
-
com.vividsolutions:jts-io:jar:1.14.0
-
com.xebialabs.restito:restito:jar:0.8.2
-
commons-beanutils:commons-beanutils:jar:1.9.3
-
commons-codec:commons-codec:jar:1.10
-
commons-codec:commons-codec:jar:1.11
-
commons-collections:commons-collections:jar:3.2.2
-
commons-configuration:commons-configuration:jar:1.10
-
commons-digester:commons-digester:jar:1.8.1
-
commons-fileupload:commons-fileupload:jar:1.3.2
-
commons-io:commons-io:jar:2.1
-
commons-io:commons-io:jar:2.4
-
commons-io:commons-io:jar:2.6
-
commons-lang:commons-lang:jar:2.6
-
commons-logging:commons-logging:jar:1.2
-
commons-net:commons-net:jar:3.5
-
commons-validator:commons-validator:jar:1.6
-
de.micromata.jak:JavaAPIforKml:jar:2.2.0
-
de.micromata.jak:JavaAPIforKml:jar:2.2.1_CODICE_1
-
io.dropwizard.metrics:metrics-core:jar:3.1.2
-
io.dropwizard.metrics:metrics-core:jar:3.2.6
-
io.dropwizard.metrics:metrics-ganglia:jar:3.2.6
-
io.dropwizard.metrics:metrics-graphite:jar:3.2.6
-
io.dropwizard.metrics:metrics-jetty9:jar:3.2.6
-
io.dropwizard.metrics:metrics-jvm:jar:3.2.6
-
io.netty:netty-buffer:jar:4.1.16.Final
-
io.netty:netty-codec:jar:4.1.16.Final
-
io.netty:netty-common:jar:4.1.16.Final
-
io.netty:netty-handler:jar:4.1.16.Final
-
io.netty:netty-resolver:jar:4.1.16.Final
-
io.netty:netty-transport-native-epoll:jar:4.1.16.Final
-
io.netty:netty-transport:jar:4.1.16.Final
-
io.sgr:s2-geometry-library-java:jar:1.0.0
-
javax.annotation:javax.annotation-api:jar:1.2
-
javax.inject:javax.inject:jar:1
-
javax.mail:mail:jar:1.4.5
-
javax.servlet:javax.servlet-api:jar:3.1.0
-
javax.servlet:servlet-api:jar:2.5
-
javax.validation:validation-api:jar:1.1.0.Final
-
javax.ws.rs:javax.ws.rs-api:jar:2.1
-
javax.xml.bind:jaxb-api:jar:2.2.11
-
joda-time:joda-time:jar:2.9.9
-
junit:junit:jar:4.12
-
log4j:log4j:jar:1.2.17
-
net.iharder:base64:jar:2.3.9
-
net.jodah:failsafe:jar:0.9.3
-
net.jodah:failsafe:jar:0.9.5
-
net.jodah:failsafe:jar:1.0.0
-
net.markenwerk:commons-nulls:jar:1.0.3
-
net.markenwerk:utils-data-fetcher:jar:4.0.1
-
net.minidev:asm:jar:1.0.2
-
net.minidev:json-smart:jar:2.3
-
net.sf.saxon:Saxon-HE:jar:9.5.1-3
-
net.sf.saxon:Saxon-HE:jar:9.6.0-4
-
org.antlr:antlr4-runtime:jar:4.1
-
org.antlr:antlr4-runtime:jar:4.3
-
org.apache.abdera:abdera-extensions-geo:jar:1.1.3
-
org.apache.abdera:abdera-extensions-opensearch:jar:1.1.3
-
org.apache.activemq:activemq-all:jar:5.14.5
-
org.apache.activemq:artemis-amqp-protocol:jar:2.4.0
-
org.apache.activemq:artemis-jms-client:jar:2.4.0
-
org.apache.activemq:artemis-server:jar:2.4.0
-
org.apache.ant:ant-launcher:jar:1.9.7
-
org.apache.ant:ant:jar:1.9.7
-
org.apache.aries.jmx:org.apache.aries.jmx.api:jar:1.1.5
-
org.apache.aries.jmx:org.apache.aries.jmx.core:jar:1.1.7
-
org.apache.aries:org.apache.aries.util:jar:1.1.3
-
org.apache.camel:camel-amqp:jar:2.19.5
-
org.apache.camel:camel-aws:jar:2.19.5
-
org.apache.camel:camel-blueprint:jar:2.19.5
-
org.apache.camel:camel-context:jar:2.19.5
-
org.apache.camel:camel-core-osgi:jar:2.19.5
-
org.apache.camel:camel-core:jar:2.19.5
-
org.apache.camel:camel-cxf:jar:2.19.5
-
org.apache.camel:camel-http-common:jar:2.19.5
-
org.apache.camel:camel-http4:jar:2.19.5
-
org.apache.camel:camel-http:jar:2.19.5
-
org.apache.camel:camel-quartz2:jar:2.19.5
-
org.apache.camel:camel-quartz:jar:2.19.5
-
org.apache.camel:camel-saxon:jar:2.19.5
-
org.apache.camel:camel-servlet:jar:2.19.5
-
org.apache.camel:camel-sjms:jar:2.19.5
-
org.apache.camel:camel-stream:jar:2.19.5
-
org.apache.commons:commons-collections4:jar:4.1
-
org.apache.commons:commons-compress:jar:1.17
-
org.apache.commons:commons-csv:jar:1.4
-
org.apache.commons:commons-exec:jar:1.3
-
org.apache.commons:commons-lang3:jar:3.0
-
org.apache.commons:commons-lang3:jar:3.1
-
org.apache.commons:commons-lang3:jar:3.3.2
-
org.apache.commons:commons-lang3:jar:3.4
-
org.apache.commons:commons-lang3:jar:3.7
-
org.apache.commons:commons-math:jar:2.2
-
org.apache.commons:commons-pool2:jar:2.4.2
-
org.apache.commons:commons-pool2:jar:2.5.0
-
org.apache.cxf.services.sts:cxf-services-sts-core:jar:3.2.5
-
org.apache.cxf:cxf-core:jar:3.2.5
-
org.apache.cxf:cxf-rt-bindings-soap:jar:3.0.4
-
org.apache.cxf:cxf-rt-databinding-jaxb:jar:3.0.4
-
org.apache.cxf:cxf-rt-frontend-jaxrs:jar:3.2.5
-
org.apache.cxf:cxf-rt-frontend-jaxws:jar:3.0.4
-
org.apache.cxf:cxf-rt-frontend-jaxws:jar:3.2.5
-
org.apache.cxf:cxf-rt-rs-client:jar:3.2.5
-
org.apache.cxf:cxf-rt-rs-security-sso-saml:jar:3.2.5
-
org.apache.cxf:cxf-rt-rs-security-xml:jar:3.0.4
-
org.apache.cxf:cxf-rt-rs-security-xml:jar:3.2.5
-
org.apache.cxf:cxf-rt-transports-http:jar:3.2.5
-
org.apache.cxf:cxf-rt-ws-policy:jar:3.2.5
-
org.apache.cxf:cxf-rt-ws-security:jar:3.2.5
-
org.apache.felix:org.apache.felix.configadmin:jar:1.8.14
-
org.apache.felix:org.apache.felix.fileinstall:jar:3.6.0
-
org.apache.felix:org.apache.felix.framework:jar:5.6.6
-
org.apache.felix:org.apache.felix.utils:jar:1.10.0
-
org.apache.ftpserver:ftplet-api:jar:1.0.6
-
org.apache.ftpserver:ftpserver-core:jar:1.0.6
-
org.apache.geronimo.specs:geronimo-servlet_3.0_spec:jar:1.0
-
org.apache.httpcomponents:httpclient:jar:4.5.3
-
org.apache.httpcomponents:httpclient:jar:4.5.5
-
org.apache.httpcomponents:httpcore:jar:4.4.6
-
org.apache.httpcomponents:httpmime:jar:4.5.3
-
org.apache.httpcomponents:httpmime:jar:4.5.5
-
org.apache.karaf.bundle:org.apache.karaf.bundle.core:jar:4.2.2
-
org.apache.karaf.features:org.apache.karaf.features.core:jar:4.2.2
-
org.apache.karaf.features:standard:xml:features:4.2.2
-
org.apache.karaf.jaas:org.apache.karaf.jaas.boot:jar:4.2.2
-
org.apache.karaf.jaas:org.apache.karaf.jaas.config:jar:4.2.2
-
org.apache.karaf.jaas:org.apache.karaf.jaas.modules:jar:4.2.2
-
org.apache.karaf.shell:org.apache.karaf.shell.console:jar:4.2.2
-
org.apache.karaf.shell:org.apache.karaf.shell.core:jar:4.2.2
-
org.apache.karaf.system:org.apache.karaf.system.core:jar:4.2.2
-
org.apache.karaf:apache-karaf:tar.gz:4.2.2
-
org.apache.karaf:apache-karaf:zip:4.2.2
-
org.apache.karaf:org.apache.karaf.util:jar:4.2.2
-
org.apache.logging.log4j:log4j-1.2-api:jar:2.11.0
-
org.apache.logging.log4j:log4j-api:jar:2.11.0
-
org.apache.logging.log4j:log4j-api:jar:2.4.1
-
org.apache.logging.log4j:log4j-core:jar:2.11.0
-
org.apache.logging.log4j:log4j-slf4j-impl:jar:2.11.0
-
org.apache.lucene:lucene-analyzers-common:jar:7.4.0
-
org.apache.lucene:lucene-core:jar:3.0.2
-
org.apache.lucene:lucene-core:jar:7.4.0
-
org.apache.lucene:lucene-queries:jar:7.4.0
-
org.apache.lucene:lucene-queryparser:jar:7.4.0
-
org.apache.lucene:lucene-sandbox:jar:7.4.0
-
org.apache.lucene:lucene-spatial-extras:jar:7.4.0
-
org.apache.lucene:lucene-spatial3d:jar:7.4.0
-
org.apache.lucene:lucene-spatial:jar:7.4.0
-
org.apache.maven.shared:maven-invoker:jar:2.2
-
org.apache.mina:mina-core:jar:2.0.6
-
org.apache.pdfbox:fontbox:jar:2.0.11
-
org.apache.pdfbox:pdfbox-tools:jar:2.0.11
-
org.apache.pdfbox:pdfbox:jar:2.0.11
-
org.apache.poi:poi-ooxml:jar:3.17
-
org.apache.poi:poi-scratchpad:jar:3.17
-
org.apache.poi:poi:jar:3.17
-
org.apache.servicemix.bundles:org.apache.servicemix.bundles.poi:jar:3.17_1
-
org.apache.servicemix.specs:org.apache.servicemix.specs.jsr339-api-2.0:jar:2.6.0
-
org.apache.shiro:shiro-core:jar:1.3.2
-
org.apache.solr:solr-core:jar:7.4.0
-
org.apache.solr:solr-solrj:jar:7.4.0
-
org.apache.tika:tika-core:jar:1.18
-
org.apache.tika:tika-parsers:jar:1.18
-
org.apache.ws.commons.axiom:axiom-api:jar:1.2.14
-
org.apache.ws.xmlschema:xmlschema-core:jar:2.2.2
-
org.apache.wss4j:wss4j-bindings:jar:2.2.2
-
org.apache.wss4j:wss4j-policy:jar:2.2.2
-
org.apache.wss4j:wss4j-ws-security-common:jar:2.2.2
-
org.apache.wss4j:wss4j-ws-security-dom:jar:2.2.2
-
org.apache.wss4j:wss4j-ws-security-policy-stax:jar:2.2.2
-
org.apache.wss4j:wss4j-ws-security-stax:jar:2.2.2
-
org.asciidoctor:asciidoctorj-diagram:jar:1.5.4.1
-
org.asciidoctor:asciidoctorj:jar:1.5.6
-
org.assertj:assertj-core:jar:2.1.0
-
org.awaitility:awaitility:jar:3.0.0
-
org.awaitility:awaitility:jar:3.1.0
-
org.bouncycastle:bcmail-jdk15on:jar:1.60
-
org.bouncycastle:bcpkix-jdk15on:jar:1.60
-
org.bouncycastle:bcprov-jdk15on:jar:1.60
-
org.codehaus.groovy:groovy-all:jar:2.4.7
-
org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13
-
org.codice.countrycode:converter:jar:0.1.2
-
org.codice.geowebcache:geowebcache-server-standalone:war:0.7.0
-
org.codice.geowebcache:geowebcache-server-standalone:xml:geowebcache:0.7.0
-
org.codice.httpproxy:proxy-camel-route:jar:2.14.0
-
org.codice.httpproxy:proxy-camel-servlet:jar:2.14.0
-
org.codice.opendj.embedded:opendj-embedded-app:xml:features:1.3.3
-
org.codice.pro-grade:pro-grade:jar:1.1.3
-
org.codice.thirdparty:commons-httpclient:jar:3.1.0_1
-
org.codice.thirdparty:ffmpeg:zip:bin:4.0_2
-
org.codice.thirdparty:geotools-suite:jar:19.1_1
-
org.codice.thirdparty:gt-opengis:jar:19.1_1
-
org.codice.thirdparty:jts:jar:1.14.0_1
-
org.codice.thirdparty:lucene-core:jar:3.0.2_1
-
org.codice.thirdparty:ogc-filter-v_1_1_0-schema:jar:1.1.0_5
-
org.codice.thirdparty:picocontainer:jar:1.3_1
-
org.codice.thirdparty:tika-bundle:jar:1.18.0_1
-
org.codice.usng4j:usng4j-api:jar:0.1
-
org.codice.usng4j:usng4j-impl:jar:0.1
-
org.codice:lux:jar:1.2
-
org.eclipse.jetty:jetty-http:jar:9.4.11.v20180605
-
org.eclipse.jetty:jetty-server:jar:9.4.11.v20180605
-
org.eclipse.jetty:jetty-servlet:jar:9.4.11.v20180605
-
org.eclipse.jetty:jetty-servlets:jar:9.4.11.v20180605
-
org.eclipse.jetty:jetty-util:jar:9.4.11.v20180605
-
org.forgerock.commons:forgerock-util:jar:3.0.2
-
org.forgerock.commons:i18n-core:jar:1.4.2
-
org.forgerock.commons:i18n-slf4j:jar:1.4.2
-
org.forgerock.opendj:opendj-core:jar:3.0.0
-
org.forgerock.opendj:opendj-grizzly:jar:3.0.0
-
org.fusesource.jansi:jansi:jar:1.16
-
org.geotools.xsd:gt-xsd-gml3:jar:19.1
-
org.geotools:gt-cql:jar:13.0
-
org.geotools:gt-cql:jar:19.1
-
org.geotools:gt-epsg-hsql:jar:19.1
-
org.geotools:gt-jts-wrapper:jar:19.1
-
org.geotools:gt-main:jar:19.1
-
org.geotools:gt-opengis:jar:19.1
-
org.geotools:gt-referencing:jar:19.1
-
org.geotools:gt-shapefile:jar:19.1
-
org.geotools:gt-xml:jar:19.1
-
org.glassfish.grizzly:grizzly-framework:jar:2.3.30
-
org.glassfish.grizzly:grizzly-http-server:jar:2.3.25
-
org.hamcrest:hamcrest-all:jar:1.3
-
org.hisrc.w3c:xlink-v_1_0:jar:1.4.0
-
org.hisrc.w3c:xmlschema-v_1_0:jar:1.4.0
-
org.imgscalr:imgscalr-lib:jar:4.2
-
org.jasypt:jasypt:jar:1.9.0
-
org.jasypt:jasypt:jar:1.9.2
-
org.javassist:javassist:jar:3.22.0-GA
-
org.jcodec:jcodec:jar:0.2.0_1
-
org.jdom:jdom2:jar:2.0.6
-
org.joda:joda-convert:jar:1.2
-
org.jolokia:jolokia-osgi:jar:1.2.3
-
org.jruby:jruby-complete:jar:9.0.4.0
-
org.jscience:jscience:jar:4.3.1
-
org.jsoup:jsoup:jar:1.9.2
-
org.jvnet.jaxb2_commons:jaxb2-basics-runtime:jar:0.11.0
-
org.jvnet.jaxb2_commons:jaxb2-basics-runtime:jar:0.6.0
-
org.jvnet.jaxb2_commons:jaxb2-basics-runtime:jar:0.9.4
-
org.jvnet.ogc:filter-v_1_1_0:jar:2.6.1
-
org.jvnet.ogc:filter-v_2_0:jar:2.6.1
-
org.jvnet.ogc:filter-v_2_0_0-schema:jar:1.1.0
-
org.jvnet.ogc:gml-v_3_1_1-schema:jar:1.1.0
-
org.jvnet.ogc:gml-v_3_1_1:jar:2.6.1
-
org.jvnet.ogc:gml-v_3_2_1-schema:jar:1.1.0
-
org.jvnet.ogc:gml-v_3_2_1:pom:1.1.0
-
org.jvnet.ogc:ogc-tools-gml-jts:jar:1.0.3
-
org.jvnet.ogc:ows-v_1_0_0-schema:jar:1.1.0
-
org.jvnet.ogc:ows-v_1_0_0:jar:2.6.1
-
org.jvnet.ogc:ows-v_1_1_0-schema:jar:1.1.0
-
org.jvnet.ogc:ows-v_2_0:jar:2.6.1
-
org.jvnet.ogc:wcs-v_1_0_0-schema:jar:1.1.0
-
org.jvnet.ogc:wfs-v_1_1_0:jar:2.6.1
-
org.jvnet.ogc:wps-v_2_0:jar:2.6.1
-
org.la4j:la4j:jar:0.6.0
-
org.locationtech.jts:jts-core:jar:1.15.0
-
org.locationtech.spatial4j:spatial4j:jar:0.6
-
org.locationtech.spatial4j:spatial4j:jar:0.7
-
org.mockito:mockito-core:jar:1.10.19
-
org.noggit:noggit:jar:0.6
-
org.objenesis:objenesis:jar:2.5.1
-
org.objenesis:objenesis:jar:2.6
-
org.openexi:nagasena-rta:jar:0000.0002.0049.0
-
org.openexi:nagasena:jar:0000.0002.0049.0
-
org.opensaml:opensaml-core:jar:3.3.0
-
org.opensaml:opensaml-soap-impl:jar:3.3.0
-
org.opensaml:opensaml-xmlsec-api:jar:3.3.0
-
org.opensaml:opensaml-xmlsec-impl:jar:3.3.0
-
org.ops4j.pax.exam:pax-exam-container-karaf:jar:4.11.0
-
org.ops4j.pax.exam:pax-exam-junit4:jar:4.11.0
-
org.ops4j.pax.exam:pax-exam-link-mvn:jar:4.11.0
-
org.ops4j.pax.exam:pax-exam:jar:4.11.0
-
org.ops4j.pax.swissbox:pax-swissbox-extender:jar:1.8.2
-
org.ops4j.pax.tinybundles:tinybundles:jar:2.1.1
-
org.ops4j.pax.url:pax-url-aether:jar:2.4.5
-
org.ops4j.pax.url:pax-url-wrap:jar:2.4.5
-
org.ops4j.pax.web:pax-web-api:jar:6.0.9
-
org.osgi:org.osgi.compendium:jar:4.3.1
-
org.osgi:org.osgi.compendium:jar:5.0.0
-
org.osgi:org.osgi.core:jar:4.3.1
-
org.osgi:org.osgi.core:jar:5.0.0
-
org.osgi:org.osgi.enterprise:jar:5.0.0
-
org.ow2.asm:asm:jar:5.0.2
-
org.ow2.asm:asm:jar:5.0.4
-
org.parboiled:parboiled-core:jar:1.1.8
-
org.parboiled:parboiled-java:jar:1.1.8
-
org.quartz-scheduler:quartz-jobs:jar:2.2.3
-
org.quartz-scheduler:quartz:jar:2.1.7
-
org.quartz-scheduler:quartz:jar:2.2.3
-
org.rrd4j:rrd4j:jar:2.2
-
org.rrd4j:rrd4j:jar:3.2
-
org.simplejavamail:simple-java-mail:jar:4.1.3
-
org.slf4j:jcl-over-slf4j:jar:1.7.24
-
org.slf4j:jul-to-slf4j:jar:1.7.24
-
org.slf4j:slf4j-api:jar:1.7.12
-
org.slf4j:slf4j-api:jar:1.7.1
-
org.slf4j:slf4j-api:jar:1.7.24
-
org.slf4j:slf4j-ext:jar:1.7.1
-
org.slf4j:slf4j-log4j12:jar:1.7.12
-
org.slf4j:slf4j-log4j12:jar:1.7.24
-
org.slf4j:slf4j-log4j12:jar:1.7.7
-
org.slf4j:slf4j-simple:jar:1.7.1
-
org.slf4j:slf4j-simple:jar:1.7.5
-
org.spockframework:spock-core:jar:1.1-groovy-2.4
-
org.springframework.ldap:spring-ldap-core:jar:2.3.2.RELEASE
-
org.springframework.osgi:spring-osgi-core:jar:1.2.1
-
org.springframework:spring-core:jar:5.0.4.RELEASE
-
org.taktik:mpegts-streamer:jar:0.1.0_2
-
org.twitter4j:twitter4j-core:jar:4.0.4
-
org.xmlunit:xmlunit-matchers:jar:2.5.1
-
us.bpsm:edn-java:jar:0.4.4
-
xalan:serializer:jar:2.7.2
-
xalan:xalan:jar:2.7.2
-
xerces:xercesImpl:jar:2.11.0
-
xerces:xercesImpl:jar:2.9.1
-
xml-apis:xml-apis:jar:1.4.01
-
xpp3:xpp3:jar:1.1.4c
Appendix E: Hardening Checklist
The following checklist enumerates the mitigations required to harden a system. It is not intended as a step-by-step procedure; to harden a new system, perform each configuration as documented.
-
❏ Configure Certificate Revocation
-
❏ Deny Guest User Access (if denying Guest users)
-
❏ Allow Guest User Access (if allowing Guest users)
-
❏ Configure Guest Claim Attributes (if allowing Guest users)
-
❏ Isolate Solr Cloud and ZooKeeper (if using)
Appendix F: Metadata Reference
DDF extracts basic metadata from ingested resources. Many file types contain additional file format-specific metadata attributes. A neutral Catalog Taxonomy enables transformation of metadata into other formats. See also the list of all formats supported for ingest.
F.1. Common Metadata Attributes
DDF supports a wide variety of file types and data types for ingest. DDF's internal Input Transformers extract the necessary data into a generalized format. Commonly used file formats are supported, such as Microsoft Office products (Word documents, Excel spreadsheets, and PowerPoint presentations) as well as PDF files, GeoJSON, and others; see the complete list. Many of these file types support additional file format-specific attributes from which additional metadata can be extracted.
Note
These attributes are available for all of the specified file formats; however, values are populated only if they are present in the original document/resource.
These attributes are supported by any file type ingested into DDF (a short read-out sketch in Java follows the list):
-
metadata
-
id
-
modified (date)
-
title (filename)
-
metadata content type (mime type)
-
effective (date)
-
created (date)
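For illustration, the hypothetical helper below reads these common attributes from an ingested metacard. The getters shown are part of the ddf.catalog.data.Metacard interface; the helper class itself and its output format are assumptions, not part of DDF.

```java
import ddf.catalog.data.Attribute;
import ddf.catalog.data.Metacard;

// Hypothetical helper: prints the common attributes available on any ingested
// metacard. The getters are part of the ddf.catalog.data.Metacard interface.
public final class CommonAttributePrinter {

    public static void print(Metacard metacard) {
        System.out.println("id:        " + metacard.getId());
        System.out.println("title:     " + metacard.getTitle());           // typically the filename
        System.out.println("modified:  " + metacard.getModifiedDate());
        System.out.println("created:   " + metacard.getCreatedDate());
        System.out.println("effective: " + metacard.getEffectiveDate());
        System.out.println("mime type: " + metacard.getContentTypeName()); // metadata content type

        Attribute metadata = metacard.getAttribute(Metacard.METADATA);     // extracted XML metadata
        if (metadata != null) {
            System.out.println("metadata:  " + metadata.getValue());
        }
    }
}
```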
These media file types support additional attributes when ingested into DDF:
-
Video Types: WMV, AVI, MP4, MOV, H.264 MPEG2
-
Image Types: JPEG-2000
-
Document Types: .DOC, .DOCX, .DOTX, .DOCM, .PPT, .PPTX, .XLS, .XLSX, .PDF
These attributes are common to all of the media file types that support additional attributes:
-
media.format-version
-
media.format
-
media.bit-rate
-
media.bits-per-sample
-
media.compression
-
media.encoding
-
media.frame-center
-
media.frame-rate
-
media.height-pixels
-
media.number-of-bands
-
media.scanning-mode
-
media.type
-
media.duration
-
media.page-count
-
datatype
-
description
-
contact.point-of-contact-name
-
contact.contributor-name
-
contact.creator-name
-
contact.publisher-name
-
contact.point-of-contact-phone
-
topic.keyword
F.2. File Format-specific Attributes
Many file formats support additional metadata attributes that DDF is able to extract and make discoverable.
F.2.2. All File Formats Supported
Using the various Input Transformers, DDF supports ingest of a wide range of MIME types. While ingest is possible for these files, extracted metadata will be limited unless otherwise noted.
F.3. Catalog Taxonomy Definitions
To facilitate data sharing while maximizing the usefulness of metadata, the attributes on resources are normalized into a common taxonomy that maps to attributes in the desired output format.
Note
The taxonomy is presented here for reference only.
F.3.1. Core Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
title | A name for the resource. Dublin Core elements-title. | String | < 1024 characters |
source-id | ID of the source where the Metacard is cataloged. While this cannot be moved or renamed for legacy reasons, it should be treated as non-mappable, since this field is overwritten by the system when federated results are retrieved. | String | < 1024 characters |
metadata-content-type [deprecated] see Media Attributes | Content type of the resource. | String | < 1024 characters |
metadata-content-type-version [deprecated] see Media Attributes | Version of the metadata content type of the resource. | String | < 1024 characters |
metadata-target-namespace [deprecated] see Media Attributes | Target namespace of the metadata. | String | < 1024 characters |
metadata | Additional XML metadata describing the resource. | XML | A valid XML string per RFC 4825 (must be well-formed but not necessarily schema-compliant). |
location | The primary geospatial location of the resource. | Geometry | Valid Well Known Text (WKT) per http://www.opengeospatial.org/standards/wkt-crs | POINT(150 30)
expiration | The expiration date of the resource. | Date | |
effective [deprecated] | The effective date of the resource. | Date | |
point-of-contact [deprecated] | The name of the point of contact for the resource. This is set internally to the user’s subject and should be considered read-only to other DDF components. | String | < 1024 characters |
resource-uri | Location of the resource for the metacard. | String | Valid URI per RFC 2396 |
resource-download-url | URL location of the resource for the metacard. This attribute provides a resolvable URL to the download location of the resource. | String | Valid URL per RFC 2396 |
resource-size | Size in bytes of the resource. | String | Although this type cannot be changed for legacy reasons, its value should always be a parsable whole number. |
thumbnail | The thumbnail for the resource in JPEG format. | Base 64 encoded binary string per RFC 4648 | ≤ 128 KB |
description | An account of the resource. Dublin Core elements-description. | String | |
checksum | Checksum value for the primary resource for the metacard. | String | < 1024 characters |
checksum-algorithm | Algorithm used to calculate the checksum on the primary resource of the metacard. | String | < 1024 characters |
created | The creation date of the resource. Dublin Core terms-created. | Date | |
modified | The modification date of the resource. Dublin Core terms-modified. | Date | |
language | The language(s) of the resource. Dublin Core language. | List of Strings | Alpha-3 language code(s) per ISO 639-2 |
resource.derived-download-url | Download URL(s) for accessing the derived formats for the metacard resource. | List of Strings | Valid URL(s) per RFC 2396 |
resource.derived-uri | Location(s) for accessing the derived formats for the metacard resource. | List of Strings | Valid URI(s) per RFC 2396 |
datatype | The generic type(s) of the resource, including the Dublin Core terms-type. DCMI Type term labels are expected here as opposed to term names. | List of Strings | |
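As a minimal sketch of how these core terms are applied, the snippet below populates a metacard using the attribute names from the table above. The string keys come straight from the table; the sample values and the helper class itself are illustrative assumptions (DDF also publishes these names as constants, e.g. in ddf.catalog.data.types.Core).

```java
import ddf.catalog.data.impl.AttributeImpl;
import ddf.catalog.data.impl.MetacardImpl;
import java.util.Date;

// Illustrative only: populate core taxonomy attributes by their string names.
public final class CoreAttributeExample {

    public static MetacardImpl newSampleMetacard() {
        MetacardImpl metacard = new MetacardImpl();
        metacard.setAttribute(new AttributeImpl("title", "Sample Resource"));
        metacard.setAttribute(new AttributeImpl("description", "An account of the sample resource."));
        metacard.setAttribute(new AttributeImpl("location", "POINT(150 30)")); // valid WKT, per the table
        metacard.setAttribute(new AttributeImpl("created", new Date()));
        metacard.setAttribute(new AttributeImpl("modified", new Date()));
        metacard.setAttribute(new AttributeImpl("language", "eng"));           // ISO 639-2 alpha-3 code
        return metacard;
    }
}
```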
F.3.2. Associations Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
metacard.associations.derived | ID of one or more metacards derived from this metacard. | List of Strings | A valid metacard ID (conventionally, a type 4 random UUID with hyphens removed). | 70809f17782c42b8ba15747b86b50ebf
metacard.associations.related | ID of one or more metacards related to this metacard. | List of Strings | A valid metacard ID (conventionally, a type 4 random UUID with hyphens removed). | 70809f17782c42b8ba15747b86b50ebf
associations.external | One or more URIs identifying external associated resources. | List of Strings | A valid URI. | https://infocorp.org/wikia/reference
F.3.3. Contact Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
contact.creator-name | The name(s) of this metacard’s creator(s). | List of Strings | < 1024 characters per entry |
contact.creator-address | The physical address(es) of this metacard’s creator(s). | List of Strings | < 1024 characters per entry |
contact.creator-email | The email address(es) of this metacard’s creator(s). | List of Strings | A valid email address per RFC 5322. |
contact.creator-phone | The phone number(s) of this metacard’s creator(s). | List of Strings | < 1024 characters per entry |
contact.publisher-name | The name(s) of this metacard’s publisher(s). | List of Strings | < 1024 characters per entry |
contact.publisher-address | The physical address(es) of this metacard’s publisher(s). | List of Strings | < 1024 characters per entry |
contact.publisher-email | The email address(es) of this metacard’s publisher(s). | List of Strings | A valid email address per RFC 5322. |
contact.publisher-phone | The phone number(s) of this metacard’s publisher(s). | List of Strings | < 1024 characters per entry |
contact.contributor-name | The name(s) of the contributor(s) to this metacard. | List of Strings | < 1024 characters per entry |
contact.contributor-address | The physical address(es) of the contributor(s) to this metacard. | List of Strings | < 1024 characters per entry |
contact.contributor-email | The email address(es) of the contributor(s) to this metacard. | List of Strings | A valid email address per RFC 5322. |
contact.contributor-phone | The phone number(s) of the contributor(s) to this metacard. | List of Strings | < 1024 characters per entry |
contact.point-of-contact-name | The name(s) of the point(s) of contact for this metacard. | List of Strings | < 1024 characters per entry |
contact.point-of-contact-address | The physical address(es) of the point(s) of contact for this metacard. | List of Strings | < 1024 characters per entry |
contact.point-of-contact-email | The email address(es) of the point(s) of contact for this metacard. | List of Strings | A valid email address per RFC 5322. |
contact.point-of-contact-phone | The phone number(s) of the point(s) of contact for this metacard. | List of Strings | < 1024 characters per entry |
F.3.4. DateTime Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
datetime.start | Start time(s) for the resource. | List of Dates | |
datetime.end | End time(s) for the resource. | List of Dates | |
datetime.name | A descriptive name for the corresponding temporal attributes. See datetime.start and datetime.end. | List of Strings | < 1024 characters per entry |
F.3.5. History Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
metacard.version.id | Internal attribute identifying which metacard this version represents. | String | A valid metacard ID (conventionally, a type 4 random UUID with hyphens removed). | 70809f17782c42b8ba15747b86b50ebf
metacard.version.edited-by | Internal attribute identifying the editor of a history metacard. | String | A valid email address per RFC 5322 |
metacard.version.versioned-on | Internal attribute for the versioned date of a metacard version. | Date | |
metacard.version.action | Internal attribute for the action associated with a history metacard. | String | One of a fixed set of internal version actions. |
metacard.version.tags | Internal attribute for the tags that were on the original metacard. | String | |
metacard.version.type | Internal attribute for the metacard type of the original metacard. | String | |
metacard.version.type-binary | Internal attribute for the serialized metacard type of the original metacard. | Binary | |
metacard.version.resource-uri | Internal attribute for the original resource URI. | URI | |
F.3.6. Location Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
location.altitude-meters | Altitude of the resource in meters. | List of Doubles | > 0 |
location.country-code | One or more country codes associated with the resource. | List of Strings | ISO 3166-1 alpha-3 codes |
location.crs-code | Coordinate reference system code of the resource. | List of Strings | < 1024 characters per entry | EPSG:4326
location.crs-name | Coordinate reference system name of the resource. | List of Strings | < 1024 characters per entry | WGS 84
F.3.7. Media Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
media.format | The file format, physical medium, or dimensions of the resource. Dublin Core elements-format. | String | < 1024 characters | txt, docx, xml (typically the extension or a fuller name for the format; note that this is not the MIME type)
media.format-version | The file format version of the resource. Note that the syntax can vary widely from format to format. | String | < 1024 characters | POSIX, 2016, 1.0
media.bit-rate | The bit rate of the media, in bits per second. | Double | |
media.frame-rate | The frame rate of the video, in frames per second. | Double | |
media.frame-center | The center of the video frame. | Geometry | Valid Well Known Text (WKT) |
media.height-pixels | The height of the media resource in pixels. | Integer | |
media.width-pixels | The width of the media resource in pixels. | Integer | |
media.compression | The type of compression this media uses (per EXIF or STANAG 4559: NC, NM, C1, M1, I1, C3, M3, C4, M4, C5, M5, C8, M8). | String | One of the values defined for the EXIF Compression tag. |
media.bits-per-sample | The number of bits per image component. | Integer | |
media.type | A two-part identifier for file formats and format content. | String | A valid MIME type per https://www.ietf.org/rfc/rfc2046.txt | application/json
media.encoding | The encoding format of the media. | List of Strings | < 1024 characters per entry | MPEG-2, RGB
media.number-of-bands | The number of spectral bands in the media. | Integer | The significance of this number is instrumentation-specific, but there are eight commonly recognized bands. https://en.wikipedia.org/wiki/Multispectral_image |
media.scanning-mode | Indicates whether progressive or interlaced scanning is applied. | String | One of PROGRESSIVE, INTERLACED. |
F.3.8. Metacard Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
metacard.created | The creation date of the metacard. | Date | |
metacard.modified | The modified date of the metacard. | Date | |
metacard.owner | The email address of the metacard owner. | String | A valid email address per RFC 5322 |
metacard.tags | Collections of data that go together, used for filtering query results. NOTE: these are system tags; for descriptive tags, see Topic Attributes. | List of Strings | < 1024 characters per entry |
F.3.9. Security Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
security.access-groups | Groups granted both read and write access to the metacard. | List of Strings | < 1024 characters per entry |
security.access-individuals | Email addresses of users granted both read and write access to the metacard. | List of Strings | A valid email address per RFC 5322. |
security.access-individuals-read | Email addresses of users granted read access, but not explicitly write access, to the metacard. | List of Strings | A valid email address per RFC 5322. |
security.access-groups-read | Groups granted read access, but not necessarily write access, to the metacard. | List of Strings | < 1024 characters per entry |
security.access-administrators | Users explicitly permitted to modify the access-control values of a metacard, i.e. the security.access-groups, security.access-individuals, and security.access-administrators values. | List of Strings | A valid email address per RFC 5322. |
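To make the access-control terms concrete, here is a sketch that restricts a metacard using the security attributes above. The attribute keys are taken from the table; the group names and email address are hypothetical.

```java
import ddf.catalog.data.impl.AttributeImpl;
import ddf.catalog.data.impl.MetacardImpl;
import java.io.Serializable;
import java.util.Arrays;
import java.util.List;

// Illustrative only: apply access controls using the security taxonomy terms.
public final class AccessControlExample {

    public static void restrict(MetacardImpl metacard) {
        // Hypothetical groups with read/write access, and a hypothetical read-only user.
        List<Serializable> groups =
                Arrays.asList((Serializable) "analysts", (Serializable) "supervisors");
        List<Serializable> readers = Arrays.asList((Serializable) "alice@example.com");

        metacard.setAttribute(new AttributeImpl("security.access-groups", groups));
        metacard.setAttribute(new AttributeImpl("security.access-individuals-read", readers));
    }
}
```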
F.3.10. Topic Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
topic.category | A category code from a given vocabulary. | List of Strings | A valid entry from the corresponding controlled vocabulary. |
topic.keyword | One or more keywords describing the subject matter of the metacard or resource. | List of Strings | < 1024 characters per entry |
topic.vocabulary | An identifier of a controlled vocabulary from which the topic category is derived. | List of Strings | Valid URI per RFC 2396. |
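Because descriptive keywords are normalized to topic.keyword regardless of the source format, a client can filter on that single term across all sources. The sketch below assumes injected CatalogFramework and FilterBuilder services and a hypothetical search term.

```java
import ddf.catalog.CatalogFramework;
import ddf.catalog.filter.FilterBuilder;
import ddf.catalog.operation.QueryResponse;
import ddf.catalog.operation.impl.QueryImpl;
import ddf.catalog.operation.impl.QueryRequestImpl;
import org.opengis.filter.Filter;

// Illustrative only: query the catalog on the normalized topic.keyword term.
public final class KeywordQueryExample {

    public static void search(CatalogFramework catalogFramework, FilterBuilder filterBuilder)
            throws Exception {
        // Match any metacard whose keywords contain the (hypothetical) term "flood".
        Filter filter = filterBuilder.attribute("topic.keyword").is().like().text("flood");

        QueryResponse response =
                catalogFramework.query(new QueryRequestImpl(new QueryImpl(filter)));

        response.getResults()
                .forEach(result -> System.out.println(result.getMetacard().getTitle()));
    }
}
```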
F.3.11. Validation Attributes
Term | Definition | Datatype | Constraints | Example Value
---|---|---|---|---|
validation-warnings | Textual description of validation warnings on the resource. | List of Strings | < 1024 characters per entry |
validation-errors | Textual description of validation errors on the resource. | List of Strings | < 1024 characters per entry |