
DQL commands

This page describes DQL commands to help you get started using Dynatrace Query Language.

data

Retrieves records from a static list of records or from JSON.
The command can be used either with a list of records or with a JSON string. If you choose a JSON string, only JSON data types are available: the content of the JSON is not scanned for any DQL data type that could be parsed. If a DQL data type is required, the record expressions must contain DQL functions.

  • Syntax

data [ records ] [, json: json_string ]

  • Parameters
| Name    | Type               | Mandatory | Default | Constraints                                | Description                                                         |
| ------- | ------------------ | --------- | ------- | ------------------------------------------ | ------------------------------------------------------------------- |
| records | record expressions | N         | N/A     | Either records or json has to be specified | A list of record expressions.                                       |
| json    | string             | N         | N/A     | Either records or json has to be specified | A string that defines either a single JSON object or a JSON array.  |

In the first example, you retrieve data from a static list of records:

dql
DATA record(a=1), record(b="b"), record(c=now())

Results:

| a | b | c                          |
| - | - | -------------------------- |
| 1 |   |                            |
|   | b |                            |
|   |   | 3/2/2023, 2:14:41 PM (now) |

In the second example, you retrieve data from JSON:

dql
DATA json: """[{"a": 1}, {"b": "1"}, {"a": 2.0, "b": 2.2}]"""

Results:

| a | b   |
| - | --- |
| 1 |     |
|   | 1   |
| 2 | 2.2 |

In the third example, you combine JSON with the decodeBase64ToString function:

dql
DATA json: decodeBase64ToString("W3siYSI6IDF9LCB7ImIiOiAiMSJ9LCB7ImEiOiAyLjAsICJiIjogMi4yfV0=")

Results:

| a | b   |
| - | --- |
| 1 |     |
|   | 1   |
| 2 | 2.2 |

fetch

Loads data from the specified resource.

Syntax
fetch table [,from:Expression][,to:Expression][,timeframe:Expression][,samplingRatio:Ratio][,scanLimitGBytes:Gigabytes]

Here is an example of the fetch command in its simplest form:

dql
fetch logs

Relative query timeframes

You can define the query timeframe on two levels:

  • On the UI level: in the timeframe selector in the upper-right corner:

    • To choose one of the existing values (for example last 72 hours or last 365 days), select Presets
    • To create your own timeframe value, select Custom
    • To select the last 2 hours, select Recent
  • On the query level

This example with relative time ranges uses DQL's time literals to query logs from the last 25 minutes:

dql
fetch logs, from: now() - 25m

You can also use the following syntax, where duration is automatically interpreted as an offset of the now() function.

dql
fetch logs, from:-25m

Using both the from and to parameters allows you to adjust the start and end timestamp of the query:

dql
fetch logs, from: now() - 24h, to: now() - 2h

Optionally, you can use the following syntax, where duration is automatically interpreted as an offset of the now() function.

dql
fetch logs, from:-24h, to:-2h

You can also use absolute time ranges with the timeframe parameter.

dql
fetch logs, timeframe:"2021-10-20T00:00:00Z/2021-10-28T12:00:00Z"

Sampling
Currently, to improve query performance, sampling is applicable for Log data within the initial fetch pipeline stage. Sampling happens vertically across the data, resulting in the selection of a subset of Log records, according to the specified, optional samplingRatio parameter.

The applicable value ranges for sampling are:

  • 1: Default value, resulting in no applied sampling.
  • 10
  • 100
  • 1000
  • 10000

Depending on the specified value, 1/<samplingRatio> of available raw Log records are returned.
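
For example, with samplingRatio:10, roughly one tenth of the available raw Log records is read. A minimal sketch; the resulting count is an estimate and varies between runs:

dql
fetch logs, samplingRatio:10 | summarize count()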

Sampling in practice

Sampling is non-deterministic and returns a different result set with each query run. All subsequent commands work on the sampled set of input data, yielding imprecise aggregates.

dql
fetch logs, from:now()-7d, samplingRatio:100 | summarize c = countIf(loglevel == "ERROR"), by:bin(timestamp, 3h) | fieldsAdd c = c*100

In this case, an estimation of the occurrences of ERROR Logs happens across the last 7 days. As a final step, the count aggregation result is extrapolated by multiplying it with the selected value of the sampling parameter, overriding the original result.

Read data limit

The optional scanLimitGBytes parameter controls the amount of uncompressed data to be read by the fetch stage. The default value is 500 GB. If set to -1, all data available in the query time range is analyzed.
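
For instance, to lift the limit and analyze all data available in the query time range, a minimal sketch using the parameter described above:

dql
fetch logs, scanLimitGBytes:-1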

fields

Keeps only the specified fields. If the field is not present in the result generated by the previous pipeline stage, a new field gets created.

Syntax

| fields fieldName [= Expression] [, ...]

In the following example, only three of all available fields returned from the fetch stage are selected. The loglevel field is additionally converted to lowercase by the DQL lower function.

dql
fetch logs | fields timestamp, severity = lower(loglevel), content

Additionally, the fields command ensures that the order of fields in the query output aligns with the order specified in the query.

fieldsAdd

Evaluates an expression and appends or replaces a field.

Syntax

| fieldsAdd fieldName [= Expression] [, ...]

The following example shows the difference between fields and fieldsAdd. While the fields command defines the result table by the fields specified, the fieldsAdd command adds new fields to the existing fields.

dql
fetch logs | fieldsAdd severity = lower(loglevel)
Added fields override existing fields

In case of identical field names, added fields override the existing fields in the processing stream. When two identical field names are specified in the DQL statement, a warning "The field <fieldName> overrides an existing field." is returned.
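
For example, the following sketch specifies the already existing loglevel field as the target, so the lowercase value overrides the original field in the processing stream:

dql
fetch logs | fieldsAdd loglevel = lower(loglevel)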

fieldsRemove

Removes fields from the result.

Syntax

| fieldsRemove fieldNameOrPattern [,...]

The following example uses the fieldsAdd command to generate a new field called severity. In order to reduce the duplicated information that is still present in the loglevel field, the fieldsRemove command is used to remove the original loglevel field.

dql
fetch logs | fieldsAdd severity = lower(loglevel) | fieldsRemove loglevel
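
The fieldNameOrPattern parameter also accepts patterns. Assuming a trailing * wildcard (an illustrative assumption based on the parameter name in the syntax above), all fields sharing a prefix could be removed at once:

dql
fetch logs | fieldsRemove log.*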

fieldsRename

Renames a field.

Syntax

| fieldsRename fieldName = originalName [, ...]

dql
fetch logs | fieldsRename severity=loglevel, source=log.source, logmessage = content
Added fields override existing fields

If the assigned alias (for example, severity) collides with an already existing field of the same name, the original field is overridden. When two identical field names are specified in the DQL statement, a warning "The field <fieldName> overrides an existing field." is returned.

filter

Reduces the number of records in a list by excluding all records that don't match a specific condition.

Syntax

| filter Predicate

dql
fetch logs | fieldsRename severity=loglevel, source=log.source, logmessage = content | filter (severity == "ERROR" or severity == "SEVERE") and contains(logmessage,"failed")

filterOut

Removes records that match a specific condition.

Syntax

| filterOut Predicate

dql
fetch logs | fieldsRename severity=loglevel, source=log.source, logmessage = content | filterOut severity == "NONE" or severity == "INFO"
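
The same records can be excluded with the filter command by negating the predicate; a minimal sketch, assuming DQL's not operator:

dql
fetch logs | fieldsRename severity=loglevel, source=log.source, logmessage = content | filter not (severity == "NONE" or severity == "INFO")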

limit

Limits the number of returned records.
Note: If you do not specify the limit, the query returns 1000 results (this limit, by default, is appended as the last line of the query).
You can increase or decrease the limit depending on your needs. Changing the limit affects your DDU consumption and query execution time.

Syntax

| limit numberOfRecords

dql
fetch logs | limit 10

lookup

Adds (joins) fields from a subquery (the lookup table) to the source table by finding a match between a field in the source table (sourceField) and the lookup table (lookupField).

Syntax

| lookup executionBlock, sourceField, lookupField [, prefix]

Parameters

  • sourceField: the matching field on the source table (left side)
  • lookupField: the matching field on the lookup table (right side)
  • prefix: the prefix to add to the joined fields. If not specified, the default prefix is 'lookup'.
dql
fetch bizevents, from:-30m | summarize search_events = countIf(event.type == "com.easytravel.search-journey"), by:dt.entity.host | lookup [fetch logs | summarize count = countIf(loglevel=="ERROR"), by:dt.entity.host ], sourceField:dt.entity.host, lookupField:dt.entity.host, prefix:"logs.err."

If a field from the lookup table is to be added under a name that already exists in the main query, the existing field is overwritten.
If the subquery references the table only, all fields are taken over from the lookup table, and each field's name is prefixed with the given prefix. This is not advised.

Results

| dt.entity.host | search_events | logs.err.dt.entity.host | logs.err.count |
| -------------- | ------------- | ------------------------ | -------------- |
| HOST-1         | 59            | HOST-1                   | 15             |
| HOST-2         | 234           | HOST-2                   | 44             |
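
For illustration, a sketch of such a table-only subquery (the matching fields are illustrative; every field from events would be taken over with the default prefix):

dql
fetch logs | lookup [ fetch events ], sourceField:dt.entity.host, lookupField:dt.entity.host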

parse

Parses a record field and puts the result(s) into one or more fields as specified in the pattern.

The parse command works in combination with the Dynatrace Pattern Language for parsing strings.

Syntax

| parse fieldName, pattern

To get started, the following example generates a field at runtime to demonstrate the command's functionality.

dql
fetch events | limit 1 | fields timestamp | fieldsAdd fieldToParse = "117.169.75.66--[14/Mar/2016:23:34:25 +0200] GET//db/scripts/setup.php HTTP/1.1 404 474 " | parse fieldToParse, "ipv4:ip LD HTTPDATE:time ']' ld:text EOL"

The result shows that the parsed fields are added to the previously present set of fields and can be used for further processing.

| timestamp             | fieldToParse                      | ip            | time                   | text                   |
| --------------------- | --------------------------------- | ------------- | ---------------------- | ---------------------- |
| 7/6/2022, 11:59:22 AM | 117.169.75.66--[14/Mar/2016:23... | 117.169.75.66 | 2016-03-14T21:34:25... | GET//db/scripts/se... |
Added fields override existing fields

In case of identical names, fields added by the parse command override the existing fields. When two identical field names are specified in the DQL statement, a warning "The field <fieldName> overrides an existing field." is returned.

sort

Sorts the records.

Syntax

| sort fieldname [asc | desc][, ...]

dql
fetch events | sort timestamp desc, event.type asc

summarize

Groups together records that have the same values for a given field and aggregates them.

Syntax

| summarize [SummarizeParameters] [[field =] aggregation [, ...]] [,by:{[field=]GroupExpression[, ...]}]

Simple aggregations

dql
fetch events, from:now()-1m | summarize event_count = count()

Advanced aggregations

dql
fetch logs, from:now()-1m | summarize severe = countIf(loglevel=="SEVERE"), errors = countIf(loglevel=="ERROR"), warnings = countIf(loglevel=="WARNING"), by:dt.entity.host
dql
fetch logs, from:now()-10m | filterOut loglevel == "NONE" | summarize count(), by:{ bin(timestamp, 30s), loglevel, dt.entity.host }

Using aliases in the summarize command

You can set alias names for attributes within the summarize command, according to the pattern presented below:

dql
fetch logs | summarize count=count(), by:{level=loglevel} | limit 3

In this query, the count() attribute is given the count alias name, while the loglevel attribute is given the level alias name.

Results (the alias names appear as the column headers):

| level | count  |
| ----- | ------ |
| ALERT | 10     |
| DEBUG | 110085 |
| ERROR | 99332  |

timeseries

The timeseries command is a starting command of DQL. It combines loading, filtering and aggregating metrics data into a time series output.

Syntax

plaintext
timeseries [column =] aggregation [, default: expression] [, ...] [, by: field-list] [, filter: condition] [, interval: duration] [, bins: integer] [, from: expression] [, to: expression] [, timeframe: expression]

Simple example

dql
timeseries avg(dt.host.cpu.usage)

Advanced example

dql
timeseries min_cpu=min(dt.host.cpu.usage), max(dt.host.cpu.usage), default:99.9, by:dt.entity.host, filter:in(dt.entity.host, "HOST-1", "HOST-2"), interval:1h, from:-7d

Timeseries response

The timeseries command produces homogeneous time series of aggregated data: all series have identical start and end timestamps, time interval, and number of elements. The timeframe column holds the start and end timestamps. The interval column holds the time interval expressed as a duration. Each aggregation (min, max, sum, avg) produces a column with the specified column name or a name derived from the aggregation expression. Each aggregation cell consists of the entire array of aggregated values for each timeslot defined by timeframe and interval.

Here is an example of the result of the timeseries command. Note that:

  • the first aggregation column name has been specified in the query (min_cpu)
  • the second aggregation column name has not been specified in the query, hence the name is derived from the expression (max(dt.host.cpu.usage))
  • the first aggregation does not specify a default parameter, hence it can contain null for empty time slots
  • the second aggregation does specify a default parameter, hence the empty time slots are replaced with the default value (99.9 in this example)
| timeframe                                                   | interval | dt.entity.host | min_cpu                             | max(dt.host.cpu.usage)              |
| ------------------------------------------------------------ | -------- | -------------- | ----------------------------------- | ----------------------------------- |
| {"start":"2022-10-24T07:00:00","end":"2022-10-31T07:00:00"} | "1h"     | HOST-1         | [35.1,35.9,35.5,36.7,...,37.9,39.4] | [36.9,37.8,38.8,38.8,...,38.6,39.5] |
| {"start":"2022-10-24T07:00:00","end":"2022-10-31T07:00:00"} | "1h"     | HOST-2         | [24.9,25.1,null,25.0,...,23.8,24.5] | [30.9,31.3,99.9,32.7,...,33.1,37.1] |

Aggregation

There are four aggregations available: min, max, sum, avg. It is possible to specify several aggregations in the same command. All aggregations must use the same metric key.
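
For example, a sketch combining two aggregations of the same metric key in one command:

dql
timeseries avg(dt.host.cpu.usage), max(dt.host.cpu.usage)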

Default value for empty time slots

The timeseries command produces homogeneous time series of aggregated data: all series have identical start and end timestamps, time interval, and number of elements. If data is missing for a particular time slot, it is filled with null. Specifying a default parameter fills empty time slots with the default parameter value instead of null.
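
A minimal sketch that fills empty time slots with 0 instead of null:

dql
timeseries avg(dt.host.cpu.usage), default:0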

Time interval

The timeseries command automatically calculates an appropriate time interval derived from the query timeframe. The timeframe is divided into time slots of identical time interval, and data is then rolled up into each of these time slots, so that the number of points per time series is suitable for graphing. For instance, in order to graph a metric over a 7-day timeframe, it usually makes more sense to use 1-hour interval data (168 points) than 1-minute interval data (10,080 points).

It is possible to define a custom interval parameter which will be used instead of the calculated time interval, whenever possible.

Another option is to define a custom bins parameter specifying a desired number of time slots per time series, which will be used to calculate an equivalent time interval.

The interval and bins parameters are mutually exclusive. Both parameters are optional.
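
For example, a sketch requesting about 120 time slots over the last 7 days via the bins parameter (interval could be used instead, but not together with bins):

dql
timeseries avg(dt.host.cpu.usage), bins:120, from:-7d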

When it is not possible to have an exact match for the desired interval or bins, the closest matching time interval is retained. Reasons for an approximate match include:

  • the time interval had to be rounded to a "well-known" interval (1-minute, 2-minutes, 5-minutes, 10-minutes, 15-minutes, 30-minutes, 1-hour),
  • the time interval would have exceeded the limit of elements per series (10,080).


Related topics
  • Dynatrace Query Language

    How to use Dynatrace Query Language.

  • How to use DQL queries

    Find out how DQL works and what the key DQL concepts are.

  • DQL compared to SQL and more

    See how DQL compares to other query languages.

  • DQL language reference

    Dynatrace Query Language syntax reference.

  • DQL functions

    A list of DQL Functions.

  • DQL operators

    A list of DQL Operators.

  • DQL data types

    A list of DQL data types.

  • DQL Best practices

    Best practices for using Dynatrace Query Language.