Log storage configuration

powered by Grail

Dynatrace version 1.252+ OneAgent version 1.243+

If you use a OneAgent version earlier than 1.243 and Dynatrace Cluster version earlier than 1.252, go to Log Sources and Storage.

Dynatrace allows you to include and exclude specific log sources for analysis by Dynatrace Log Monitoring. Using the Dynatrace identity and access management (IAM) framework, you can control which users can change configurations at which scope.

The configuration is based on rules that use matchers for hierarchy, log path, and process groups. These rules determine the upload of log files known to OneAgent, auto-detected log files, and custom log files defined per process group.

Supported scopes

Three hierarchy scopes are supported: host, host group, and tenant. The narrowest applicable scope takes priority: rules defined closer to the host override rules defined at broader scopes.

Log storage configuration priority

  1. Log storage rules configured for a host take precedence over log storage rules configured for a host group.
  2. Log storage rules configured for a host group take precedence over log storage rules configured for a tenant.

Host scope

The host scope can be accessed through the Host settings for a specific host.

  1. In the Dynatrace menu, go to Hosts and select your host.
  2. Select More (…) > Settings to open the Host settings page (available only on hosts assigned to a host group).
  3. On the Host settings page, select Log storage.
  4. Configure storage upload by adding rules with a set of attributes that matches the log data to be stored by Dynatrace.

Host group scope

The host group scope can be accessed via the Host page.

  1. In the Dynatrace menu, go to Hosts and select your host.
  2. In the Properties and tags section, select the Host group (available only on Hosts assigned to a Host group).
  3. On the Settings page, select Log storage.
  4. Configure storage upload by adding rules with a set of attributes that matches the log data to be stored by Dynatrace.

Tenant scope

The tenant scope is available in the settings menu.

  1. In the Dynatrace menu, go to Settings and select Log Monitoring > Log storage configuration.
  2. Configure storage upload by adding rules with a set of attributes that matches the log data to be stored by Dynatrace.

Matching rules to log data

Matching occurs in a predefined hierarchy, and rules are executed from top to bottom: if a rule higher in the list matches certain log data, the rules below it are skipped for that data. If the same log data is matched at several levels, the higher-level configuration (evaluated first) takes effect and the lower-level configurations are ignored. The matching hierarchy is as follows:

  1. Host configuration rules
  2. Host group configuration rules
  3. Tenant configuration rules
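The hierarchy can be sketched as follows (a simplified illustrative model, not the actual OneAgent implementation; the rule shape mirrors the JSON examples later on this page):

```python
# Illustrative sketch of log storage rule evaluation. Rules are gathered
# in hierarchy order -- host first, then host group, then tenant -- and
# evaluated top to bottom; the first matching rule decides.

def rule_matches(rule, log):
    # Every matcher in a rule must match (AND); within a matcher,
    # any listed value may match (OR).
    return all(
        log.get(m["attribute"]) in m["values"]
        for m in rule["matchers"]
    )

def should_store(host_rules, host_group_rules, tenant_rules, log):
    for rule in host_rules + host_group_rules + tenant_rules:
        if rule.get("enabled", True) and rule_matches(rule, log):
            return rule["send-to-storage"]
    return False  # no rule matched: nothing is stored

host_rules = []
host_group_rules = [
    {"send-to-storage": False, "enabled": True, "matchers": [
        {"attribute": "log.source", "values": ["/var/log/debug.log"]}]},
]
tenant_rules = [
    # A rule without matchers matches all log data.
    {"send-to-storage": True, "enabled": True, "matchers": []},
]

# The host group exclude wins over the tenant-wide include:
print(should_store(host_rules, host_group_rules, tenant_rules,
                   {"log.source": "/var/log/debug.log"}))  # False
print(should_store(host_rules, host_group_rules, tenant_rules,
                   {"log.source": "/var/log/syslog"}))     # True
```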

Configure log storage

  1. In the Dynatrace menu, go to Settings and select Log Monitoring > Log storage configuration.

  2. Select Add rule and provide the name for your configuration.
    By default, the Include in storage button is turned on, indicating that items configured by this rule will be stored in Dynatrace. Alternatively, you can select the Exclude from storage rule type.

  3. Expand Details of your new rule and select Add matcher to create a specific match for this rule.
    Multiple matchers can be included in one rule. Note: Other than the Log source attribute, matchers are case-sensitive.

  4. Select the matching attribute:

| Attribute | Description | Search dropdown logic |
| --- | --- | --- |
| Container name | Matching is based on the name of the container. | Attributes visible in the last 90 days are listed. |
| K8s container name | Matching is based on the name of the Kubernetes container. | Attributes visible in the last 90 days are listed. |
| K8s deployment name | Matching is based on the name of the Kubernetes deployment. | Attributes visible in the last 90 days are listed. |
| K8s namespace name | Matching is based on the name of the Kubernetes namespace. | Attributes visible in the last 90 days are listed. |
| Log content | Matching is based on the content of the log; wildcards are supported in the form of an asterisk (*). | Can be entered manually. No time limit. |
| Log source | Matching is based on a log path; wildcards are supported in the form of an asterisk (*). Autocompletion for Log source is only partial: you can either choose one of the predefined values or enter your own log source. | Can be entered manually. No time limit. |
| Process group | Matching is based on the process group ID. | Attributes visible in the last 3 days are listed. |
| Process technology | Matching is based on the technology name. | Can be entered manually. No time limit. |

Note: The wildcard is supported for any attribute value and can be used multiple times in a single value. However, some attributes, for example Process group, have a limited, predefined list of possible values that are selected from an auto-complete list.
If no wildcard is used, the matcher looks for an exact match to the value. If a wildcard is used, the matcher treats the value as a pattern. For example, the value INFO matches only log data that is exactly the string INFO, while *INFO* matches any log data that contains the INFO string in its content.
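The wildcard behavior can be approximated with Python's fnmatch (illustrative only; note that fnmatch also interprets ? and [seq], while the matcher described here supports only the asterisk):

```python
from fnmatch import fnmatchcase

# Approximation of the wildcard semantics described above: without an
# asterisk the value must match the content exactly; with asterisks the
# value acts as a pattern. fnmatchcase is case-sensitive, matching the
# note that matchers (other than Log source) are case-sensitive.

def value_matches(value: str, content: str) -> bool:
    return fnmatchcase(content, value)

print(value_matches("INFO", "INFO"))                       # True: exact fit
print(value_matches("INFO", "2023-01-01 INFO started"))    # False: no wildcard
print(value_matches("*INFO*", "2023-01-01 INFO started"))  # True
```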

  5. Select Add value and, from the Values list, select the detected log data items (log files or process groups that contain log data). Multiple values can be added to the selected attribute. For example, one Log source matcher can match both /var/log/syslog and Windows Application Log.

  6. Save changes.

Defined rules can be reordered and are executed in the order in which they appear on the Log storage page.

  7. To activate your rule, turn on the Active toggle.

Starting with OneAgent version 1.249, you can activate or deactivate your rules by turning the Active toggle on or off. To manage your rules effectively, we recommend that you upgrade OneAgent to version 1.249. Rules set on a host with a OneAgent version earlier than 1.249 cannot be deactivated; in such a scenario, you need to remove the rules by selecting Delete on the rule level or via the REST API.

List hosts and host groups with overriding rules

The table on Settings > Log Monitoring > Log storage configuration lists all log storage rules that you have set at the tenant level. However, you may want to see where you have set log storage rules for hosts and host groups that override the tenant-level rules.

To list all entities (hosts and host groups) to which more specific log storage rules are applied:

  1. In the Dynatrace menu, go to Settings and select Log Monitoring > Log storage configuration.

  2. In the upper-right corner of the Log storage configuration page, select More (…) > Hierarchy and overrides. A searchable Hierarchy and overrides panel lists all entities (hosts and host groups) on which you have set log storage rules that override the tenant-level rules listed on Settings > Log Monitoring > Log storage configuration.

  3. Select an entity name to go to that entity's Log storage configuration page.

Example upload

In this example, we configure the tenant storage upload for c:\inetpub\logs\LogFiles\ex_*.log files in two process groups: IIS (PROCESS_GROUP-3D9D854163F8F07A) and IIS (PROCESS_GROUP-4A7B47FDB53137AE). The log storage rule consists of two matchers: the first matcher finds the process groups and the second matcher matches only the defined log source.

  1. In the Dynatrace menu, go to Settings and select Log Monitoring > Log storage configuration.
  2. Select Add rule and provide the title for your configuration.
  3. Select Add matcher. This is the first matcher to match two specified process groups.
  4. From the Attribute list, select Process group.
  5. Select Add value and type IIS, and then, from the suggestion list, select IIS (PROCESS_GROUP-3D9D854163F8F07A).
  6. Select Add value again, type IIS and select the second process group from the suggestion list: IIS (PROCESS_GROUP-4A7B47FDB53137AE).
  7. Select Add matcher again. This is the second matcher to match the specified log data source.
  8. From the Attribute list, select Log source.
  9. Select Add value and enter c:\inetpub\logs\LogFiles\ex_*.log as the value.
  10. Save changes.
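The rule created by these steps corresponds roughly to the following Settings API value (a sketch based on the payload format in the REST API section below; key names and the operator field follow that example):

```json
{
  "send-to-storage": true,
  "matchers": [
    {
      "attribute": "dt.entity.process_group",
      "operator": "MATCHES",
      "values": [
        "PROCESS_GROUP-3D9D854163F8F07A",
        "PROCESS_GROUP-4A7B47FDB53137AE"
      ]
    },
    {
      "attribute": "log.source",
      "operator": "MATCHES",
      "values": [ "c:\\inetpub\\logs\\LogFiles\\ex_*.log" ]
    }
  ],
  "enabled": true
}
```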

Example exclude

In this example, we configure the tenant storage upload for all log sources except c:\inetpub\logs\LogFiles\ex_*.log files in a process group IIS (PROCESS_GROUP-4A7B47FDB53137AE).

  1. In the Dynatrace menu, go to Settings and select Log Monitoring > Log storage configuration.
  2. Select Add rule and provide the title for your configuration.
  3. Turn off Send to storage.
  4. Select Add matcher. This is the first matcher to match the specified process group.
  5. From the Attribute list, select Process group.
  6. Select Add value and type IIS, and then, from the suggestion list, select IIS (PROCESS_GROUP-4A7B47FDB53137AE).
  7. Select Add matcher again. This is the second matcher to exclude the specified log data source.
  8. From the Attribute list, select Log source.
  9. Select Add value and enter c:\inetpub\logs\LogFiles\ex_*.log as a value.
  10. Save changes.
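The resulting exclude rule, expressed in the Settings API value format from the REST API section below (a sketch; note that send-to-storage is false):

```json
{
  "send-to-storage": false,
  "matchers": [
    {
      "attribute": "dt.entity.process_group",
      "operator": "MATCHES",
      "values": [ "PROCESS_GROUP-4A7B47FDB53137AE" ]
    },
    {
      "attribute": "log.source",
      "operator": "MATCHES",
      "values": [ "c:\\inetpub\\logs\\LogFiles\\ex_*.log" ]
    }
  ],
  "enabled": true
}
```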

Migration to the new storage configuration

After you select Dynatrace menu > Settings > Log Monitoring > Log storage, automatic migration from the old storage configuration format to the new one takes place. The following changes will occur in your current configuration:

  • Host perspective
    All items configured on the Hosts perspective are migrated as a set of matchers to the corresponding host scope.

  • Process groups perspective
    Only the rules that are applied to a whole process group are migrated to the tenant scope. If a process group is enabled only for a subset of hosts, the relevant rules must be created on the host level.

After your configuration of log sources is successfully migrated, you can use new configuration items and add your matchers.

REST API

You can use the Settings API to manage your log storage configuration:

  • View schema
  • List stored configuration objects
  • View single configuration object
  • Create, edit, or remove configuration objects

To check the current schema version for log storage configuration, list all available schemas and look for the builtin:logmonitoring.log-storage-settings schema identifier.

Log storage configuration objects are available for configuration on the following scopes:

  • tenant – configuration object affects all hosts on a given tenant.
  • host_group – configuration object affects all hosts assigned to a given host group.
  • host – configuration object affects only the given host.

To create a log storage configuration using the API:

  1. Create an access token with the Write settings (settings.write) and Read settings (settings.read) permissions.

  2. Use the GET a schema endpoint to learn the JSON format required to post your configuration. The log storage configuration schema identifier (schemaId) is builtin:logmonitoring.log-storage-settings. Here is an example JSON payload with the log storage configuration:

    json
    [
      {
        "insertAfter": "uAAZ0ZW5hbnQABnRlbmFudAAkMGUzYmY2ZmYtMDc2ZC0zNzFmLhXaq0",
        "schemaId": "builtin:logmonitoring.log-storage-settings",
        "schemaVersion": "0.1.0",
        "scope": "tenant",
        "value": {
          "config-item-title": "Added from REST API",
          "send-to-storage": true,
          "matchers": [
            {
              "attribute": "dt.entity.process_group",
              "operator": "MATCHES",
              "values": [ "PROCESS_GROUP-05F00CBACF39EBD1" ]
            },
            {
              "attribute": "log.source",
              "operator": "MATCHES",
              "values": [ "Windows System Log", "Windows Security Log" ]
            }
          ]
        }
      }
    ]
  3. Use the POST an object endpoint to send your configuration.
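The POST step can be sketched in Python (hypothetical environment URL and token; the request is only constructed here, not sent):

```python
import json
import urllib.request

# Hypothetical values -- substitute your own environment URL and an
# access token with the settings.write permission.
ENVIRONMENT_URL = "https://abc12345.live.dynatrace.com"
API_TOKEN = "dt0c01.YOUR-TOKEN"

payload = [{
    "schemaId": "builtin:logmonitoring.log-storage-settings",
    "scope": "tenant",
    "value": {
        "config-item-title": "Added from REST API",
        "send-to-storage": True,
        "matchers": [{
            "attribute": "log.source",
            "operator": "MATCHES",
            "values": ["Windows System Log"],
        }],
    },
}]

req = urllib.request.Request(
    ENVIRONMENT_URL + "/api/v2/settings/objects",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Api-Token " + API_TOKEN,
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
print(req.get_method(), req.full_url)
```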

Examples

The examples that follow show the results of various combinations of rules and matchers.

Example 1: Multiple rules

In this example, there are two rules:

  • Rule 1 is an Exclude rule and has two matchers: the Process group attribute is Apache, and the Log source attribute is access.log.
  • Rule 2 is an Include rule and has one matcher: the Process group attribute is Apache.

Results: Apache's access.log is not sent, Apache's error.log is sent, and logs from other process groups are not sent.

  • access.log written by Apache matches the first rule, which has send-to-storage: false, so it is not sent.
  • access.log not written by Apache doesn't match the first rule (due to incorrect process group), and doesn't match the second rule, so it is not sent.
  • error.log written by Apache does not match the first rule (due to incorrect source), but it matches the second rule, which has send-to-storage: true, so it is sent.
  • error.log not written by Apache doesn't match the first rule (due to both incorrect process group and log source), and doesn't match the second rule, so it is not sent.
json
[
  {
    "send-to-storage": false,
    "matchers": [
      { "attribute": "log.source", "values": [ "/path/to/access.log" ] },
      { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID" ] }
    ],
    "enabled": true
  },
  {
    "send-to-storage": true,
    "matchers": [
      { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID" ] }
    ],
    "enabled": true
  }
]
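The outcomes above can be checked with a small sketch (an illustrative first-match-wins model, not the actual OneAgent implementation):

```python
# Illustrative model of first-match-wins rule evaluation for Example 1.
RULES = [
    {"send-to-storage": False, "matchers": [
        {"attribute": "log.source", "values": ["/path/to/access.log"]},
        {"attribute": "dt.entity.process_group",
         "values": ["PROCESS_GROUP-APACHEID"]}]},
    {"send-to-storage": True, "matchers": [
        {"attribute": "dt.entity.process_group",
         "values": ["PROCESS_GROUP-APACHEID"]}]},
]

def is_sent(log):
    # All matchers in a rule must match; the first matching rule decides.
    for rule in RULES:
        if all(log.get(m["attribute"]) in m["values"]
               for m in rule["matchers"]):
            return rule["send-to-storage"]
    return False  # no rule matched

apache, other = "PROCESS_GROUP-APACHEID", "PROCESS_GROUP-OTHER"
print(is_sent({"log.source": "/path/to/access.log",
               "dt.entity.process_group": apache}))  # False: rule 1
print(is_sent({"log.source": "/path/to/error.log",
               "dt.entity.process_group": apache}))  # True: rule 2
print(is_sent({"log.source": "/path/to/access.log",
               "dt.entity.process_group": other}))   # False: no rule matches
```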

Example 2: Send logs written by Apache and containing 'ERROR'

This task requires setting one rule with two matchers.

json
{
  "send-to-storage": true,
  "matchers": [
    { "attribute": "log.content", "values": [ "*ERROR*" ] },
    { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID" ] }
  ],
  "enabled": true
}

Example 3: Send logs written by Apache or containing 'ERROR'

This task requires setting two rules with one matcher each.

json
[
  {
    "send-to-storage": true,
    "matchers": [
      { "attribute": "log.content", "values": [ "*ERROR*" ] }
    ],
    "enabled": true
  },
  {
    "send-to-storage": true,
    "matchers": [
      { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID" ] }
    ],
    "enabled": true
  }
]

Example 4: Send logs written by Apache, and containing 'ERROR' and 'Customer'

This task requires setting one rule with three matchers, with one value each.

json
{
  "send-to-storage": true,
  "matchers": [
    { "attribute": "log.content", "values": [ "*ERROR*" ] },
    { "attribute": "log.content", "values": [ "*Customer*" ] },
    { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID" ] }
  ],
  "enabled": true
}

Example 5: Send logs written by Apache, and containing 'ERROR' or 'Customer'

This task requires setting one rule with two matchers: a matcher with the process group value, and a matcher with two content values.

json
{
  "send-to-storage": true,
  "matchers": [
    { "attribute": "log.content", "values": [ "*ERROR*", "*Customer*" ] },
    { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID" ] }
  ],
  "enabled": true
}

Example 6: Send logs written by Apache or MySQL

This task requires setting two rules, or one rule with one matcher having two values.
A single rule with two process group matchers will not work here: matchers within a rule are combined with AND, and a log file cannot belong to both process groups at once.

Setting two rules:

json
[
  {
    "send-to-storage": true,
    "matchers": [
      { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-MYSQL" ] }
    ],
    "enabled": true
  },
  {
    "send-to-storage": true,
    "matchers": [
      { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID" ] }
    ],
    "enabled": true
  }
]

Setting one rule with one matcher having two values:

json
{
  "send-to-storage": true,
  "matchers": [
    { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID", "PROCESS_GROUP-MYSQL" ] }
  ],
  "enabled": true
}

Example 7: Send all logs

This task requires setting a rule without any matchers.

json
{ "send-to-storage": true, "matchers": [ ], "enabled": true }

Example 8: Send all logs except Apache and MySQL logs

This task requires setting two rules.

  • The first rule is an Exclude rule with one matcher having two values.
  • The second rule does not contain any matchers.

The rules have to be executed in the order indicated below.

json
[
  {
    "send-to-storage": false,
    "matchers": [
      { "attribute": "dt.entity.process_group", "values": [ "PROCESS_GROUP-APACHEID", "PROCESS_GROUP-MYSQL" ] }
    ],
    "enabled": true
  },
  {
    "send-to-storage": true,
    "matchers": [ ],
    "enabled": true
  }
]

FAQ

Will older OneAgents work with this solution?

OneAgent versions earlier than 1.243 won't send any data; they will get an empty whitelist in response.

Why don't I see any configuration on the global page after migration from the hosts' perspective?

All host perspective configs are migrated to the corresponding host scope.

Is this change reversible?

No. After the change, all old configurations are wiped out, so be sure before you make this change.

Is log storage configuration the same as/part of the autodiscovery process?

No. Autodiscovery is a OneAgent mechanism that detects logs, but it doesn't mean that log files are sent to storage automatically. A configuration page for autodiscovery is planned for a future release. To learn more about autodiscovery, see Log content autodiscovery (Logs Classic).

How can I see the configurations from other scopes?

It is not possible to drill down from the tenant scope to a host group, and from a host group to a host. The only direction is up from a host to a host group, and from a host group to the tenant. Higher scopes are unaware of changes in the lower scopes.

Is the order of configuration items important?

Yes. Configuration items are matched from top to bottom, so rules higher in the list take precedence.

How long do I need to wait for the configuration to be applied to the host?

It is applied within 90 seconds.

Does adding a content matcher reduce the number of log events sent to Dynatrace?

Yes. A content matcher narrows down the scope of log events (log entries) according to the criteria set (for example, searching only for error logs).

Where is filtering carried out: in Dynatrace or in OneAgent?

  • Filtering (narrowing down the scope according to the criteria set) is carried out in OneAgent.
  • Setting limits (for example, the log events per minute limit or the attribute values limit) is conducted in Dynatrace.
Does filtering the content reduce DDU cost and/or network usage?

Yes. Content filtering conducted in OneAgent reduces both DDU costs and network usage. You can calculate the cost and network-use reduction by determining your total data consumption and deducting the GB size of data that was filtered out. For details on how DDU costs are calculated, see:

  • Log Monitoring DDU calculation
  • Log Management and analytics powered by Grail DDU calculation