
DQL commands

This page describes DQL commands to help you get started using Dynatrace Query Language.

fetch

Loads data from the specified resource.

Syntax
fetch table [,from:Expression][,to:Expression][,timeframe:Expression][,samplingRatio:Ratio][,scanLimitGBytes:Gigabytes]

Here is an example of the fetch command in its simplest form:

plaintext
fetch logs

Relative query timeframes

You can adjust the query timeframe on two levels:

  • On the UI level: in the timeframe selector in the upper-right corner:

    • To choose one of the existing values (for example last 72 hours or last 365 days), select Presets
    • To create your own timeframe value, select Custom
    • To select the last 2 hours, select Recent
  • On the query level

The following example uses DQL time literals with relative time ranges to query logs from the last 25 minutes:

plaintext
fetch logs, from:now()-25m

You can also use the following syntax, where the duration is automatically interpreted as an offset of the now() function:

plaintext
fetch logs, from:-25m

Using both the from and to parameters allows you to adjust the start and end timestamp of the query:

plaintext
fetch logs, from:now()-24h, to:now()-2h

Optionally, you can use the following syntax, where the duration is automatically interpreted as an offset of the now() function:

plaintext
fetch logs, from:-24h, to:-2h

You can also use absolute time ranges with the timeframe parameter.

dql
fetch logs, timeframe:"2021-10-20T00:00:00Z/2021-10-28T12:00:00Z"

Sampling
Currently, to improve query performance, sampling is applicable to Log data within the initial fetch pipeline stage. Sampling happens vertically across the data, selecting a subset of Log records according to the optional samplingRatio parameter.

The applicable value ranges for sampling are:

  • 1: Default value, resulting in no applied sampling.
  • 10
  • 100
  • 1000
  • 10000

Depending on the specified value, 1/<samplingRatio> of available raw Log records are returned.
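
For example, the following minimal sketch returns roughly 1/100 of the available raw Log records:

dql
fetch logs, samplingRatio:100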

Sampling in practice

Sampling is non-deterministic and returns a different result set with each query run. All subsequent commands work on the sampled set of input data, yielding imprecise aggregates.

dql
fetch logs, from:now()-7d, samplingRatio:100 | summarize c = countIf(loglevel == "ERROR"), by:bin(timestamp, 3h) | fieldsAdd c = c*100

In this case, the number of ERROR Logs over the last 7 days is estimated. As a final step, the count() aggregation is extrapolated by multiplying the aggregation result by the selected sampling ratio and overriding the original value.

Read data limit

The optional scanLimitGBytes parameter controls the amount of uncompressed data to be read by the fetch stage. The default value is 500 GB. If set to -1, all data available in the query time range is analyzed.
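
For example, the following sketch raises the limit for a long-range scan; the 30-day timeframe and the 1000 GB value are illustrative:

dql
fetch logs, from:now()-30d, scanLimitGBytes:1000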

fields

Keeps only the specified fields. If a field is not present in the result generated by the previous pipeline stage, a new field is created.

Syntax

| fields fieldName [= Expression] [, ...]

In the following example, only three of all available fields returned from the fetch stage are selected. The loglevel field is additionally converted to lowercase by the DQL lower function.

dql
fetch logs | fields timestamp, severity = lower(loglevel), content

Additionally, the fields command ensures that the order of fields in the query output aligns with the order specified in the query.
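
For instance, the following sketch returns the content field ahead of timestamp, mirroring the order specified in the query:

dql
fetch logs | fields content, timestamp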

fieldsAdd

Evaluates an expression and appends or replaces a field.

Syntax

| fieldsAdd fieldName [= Expression] [, ...]

The following example shows the difference between fields and fieldsAdd. While the fields command defines the result table by the fields specified, the fieldsAdd command adds new fields to the existing fields.

dql
fetch logs | fieldsAdd severity = lower(loglevel)

Added fields override existing fields

In case of identical field names, added fields override the existing fields in the processing stream. When two identical field names are specified in the DQL statement, a warning "The field <fieldName> overrides an existing field." is returned.
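
A minimal sketch of such an override, replacing the existing loglevel field with its lowercase form:

dql
fetch logs | fieldsAdd loglevel = lower(loglevel)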

fieldsRemove

Removes fields from the result.

Syntax

| fieldsRemove fieldNameOrPattern [,...]

The following example uses the fieldsAdd command to generate a new field called severity. To reduce the duplicated information still present in the loglevel field, the fieldsRemove command removes the original loglevel field.

dql
fetch logs | fieldsAdd severity = lower(loglevel) | fieldsRemove loglevel

fieldsRename

Renames a field.

Syntax

| fieldsRename fieldName = originalName [, ...]

dql
fetch logs | fieldsRename severity=loglevel, source=log.source, logmessage = content

Added fields override existing fields

If the assigned alias (for example, severity) collides with an already existing field severity, the original field is overridden. If two identical field names are specified in the DQL statement, a warning "The field <fieldName> overrides an existing field." is returned.
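
A sketch of such a collision: a severity field is first added, then overridden when loglevel is renamed to the same name:

dql
fetch logs | fieldsAdd severity = "unknown" | fieldsRename severity = loglevel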

filter

Reduces the number of records by excluding all records that don't match the specified condition.

Syntax

| filter Predicate

dql
fetch logs | fieldsRename severity=loglevel, source=log.source, logmessage = content | filter (severity == "ERROR" or severity == "SEVERE") and contains(logmessage,"failed")

filterOut

Removes records that match a specific condition.

Syntax

| filterOut Predicate

dql
fetch logs | fieldsRename severity=loglevel, source=log.source, logmessage = content | filterOut severity == "NONE" or severity == "INFO"

limit

Limits the number of returned records.
Note: If you do not specify the limit, the query returns up to 1000 results by default (this default limit is appended as the last line of the query).
You can increase or decrease the limit depending on your needs. Changing the limit has an impact on your DDU consumption and query execution time.

Syntax

| limit numberOfRecords

dql
fetch logs | limit 10

lookup

Adds (joins) fields from a subquery (the lookup table) to the source table by finding a match between a field in the source table (sourceField) and the lookup table (lookupField).

Syntax

| lookup executionBlock, sourceField, lookupField [, prefix]

Parameters

  • sourceField: the matching field on the source table (left side)
  • lookupField: the matching field on the lookup table (right side)
  • prefix: the prefix to add to the joined fields. If not specified, the default prefix is 'lookup'.

dql
fetch bizevents, from:-30m | summarize search_events = countIf(event.type == "com.easytravel.search-journey"), by:dt.entity.host | lookup [fetch logs | summarize count = countIf(loglevel=="ERROR"), by:dt.entity.host ], sourceField:dt.entity.host, lookupField:dt.entity.host, prefix:"logs.err."

If a field from the lookup table is to be added under a name that already exists in the main query, the existing field is overwritten.
If the subquery references the table only, all fields are taken over from the lookup table and each field's name is prefixed with the given prefix. This is not advised; see the sketch after the results table below.

Results

dt.entity.host    search_events    logs.err.dt.entity.host    logs.err.count
HOST-1            59               HOST-1                     15
HOST-2            234              HOST-2                     44
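
For illustration, the following sketch uses the discouraged table-only subquery: every field of the logs table is joined into the result, each field name carrying the default lookup prefix.

dql
fetch bizevents, from:-30m | lookup [fetch logs], sourceField:dt.entity.host, lookupField:dt.entity.host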

parse

Parses a record field and puts the result(s) into one or more fields as specified in the pattern.

The parse command works in combination with the Dynatrace Pattern Language for parsing strings.

Syntax

| parse fieldName, pattern

To demonstrate the command, the following query generates a field at runtime and parses it:

dql
fetch events | limit 1 | fields timestamp | fieldsAdd fieldToParse = "117.169.75.66--[14/Mar/2016:23:34:25 +0200] GET//db/scripts/setup.php HTTP/1.1 404 474 \n" | parse fieldToParse, "ipv4:ip LD httpdate:time ld:text eol"

The result shows that the parsed fields are added to the previously present set of fields and can be used for further processing.

timestamp                fieldToParse                         ip               time                     text
7/6/2022, 11:59:22 AM    117.169.75.66--[14/Mar/2016:23...    117.169.75.66    2016-03-14T21:34:25...   GET//db/scripts/se...

Added fields override existing fields

In case of identical names, fields added by the parse command override the existing fields. When two identical field names are specified in the DQL statement, a warning "The field <fieldName> overrides an existing field." is returned.

sort

Sorts the records.

Syntax

| sort fieldname [asc | desc][, ...]

dql
fetch events | sort timestamp desc, event.type asc

summarize

Groups together records that have the same values for a given field and aggregates them.

Syntax

| summarize [SummarizeParameters] [[field =] aggregation [, ...]] [,by:{[field=]GroupExpression[, ...]}]

Simple aggregations

dql
fetch events, from:now()-1m | summarize event_count = count()

Advanced aggregations

dql
fetch logs, from:now()-1m | summarize severe = countIf(loglevel=="SEVERE"), errors = countIf(loglevel=="ERROR"), warnings = countIf(loglevel=="WARNING"), by:dt.entity.host

dql
fetch logs, from:now()-10m | filterOut loglevel == "NONE" | summarize count(), by:{ bin(timestamp, 30s), loglevel, dt.entity.host }

Using aliases in the summarize command

You can set alias names to attributes within the summarize command, according to the pattern presented below:

dql
fetch logs | summarize count=count(), by:{level=loglevel} | limit 3

In this query, the count() aggregation is given the alias count, while the loglevel attribute is given the alias level.

Results (the aliases appear in the column headers):

level    count
ALERT    10
DEBUG    110085
ERROR    99332

Related topics
  • Dynatrace Query Language

    How to use Dynatrace Query Language.

  • How to use DQL queries

Find out how DQL works and what its key concepts are.

  • DQL compared to SQL and more

    See how DQL compares to other query languages.

  • DQL language reference

    Dynatrace Query Language syntax reference.

  • DQL functions

    A list of DQL Functions.

  • DQL operators

    A list of DQL Operators.

  • DQL data types

    A list of DQL data types.

  • DQL Best practices

    Best practices for using Dynatrace Query Language.