DQL Best practices
This page describes actions you can take to improve query performance.
Narrow the query time range
A shorter analysis window yields better performance on the same data set. Use the timeframe selectors provided by the user interface, or specify the query time range directly in the fetch command.
fetch bizevents, from:-10m
Utilize available sampling options
Currently, sampling is applicable to log data and only within the initial fetch pipeline statement. Depending on the specified value, a fraction (1/<samplingRatio>) of all available raw log records is returned.
The applicable values for sampling are:
- 1: Default value, resulting in no applied sampling.
- 10
- 100
- 1000
- 10000
The following query uses sampling to improve query performance when observing an approximate count of error logs over time. Because samplingRatio:100 reads roughly 1/100 of the raw records, the final fieldsAdd multiplies the result by 100 to estimate the actual count.
fetch logs, from:-7d, samplingRatio:100
| summarize c = countIf(loglevel == "ERROR"), by:bin(timestamp, 3h)
| fieldsAdd c = c*100
Utilize options to limit the scanned amount of data
Particularly when dealing with logs, a large amount of data is ingested in short periods. Even with a narrow query time range, a query can take considerable time to complete.
To stop reading data after a specified amount, use the optional scanLimitGBytes parameter of the fetch command.
fetch logs, from:-2h, scanLimitGBytes:100
Recommended order of commands
It is recommended to use the following order of commands in your DQL queries:
- fetch
- Commands, for example filter, fields, fieldsAdd, parse
- Functions, for example summarize
- sort
- limit
Example query with the recommended order:
fetch logs
| filter isNotNull(dt.entity.host)
| filterOut loglevel == "NONE"
| summarize count = count(), by: {dt.entity.host, loglevel}
| sort loglevel asc, count desc
| limit 3
It is recommended to place sort at the end of the query. Sorting right after fetch and then continuing the query will reduce the query performance. Example:
fetch logs
| sort timestamp desc
| filter contains(content, "Error")
However, the query will run smoothly once you change the order to:
fetch logs
| filter contains(content, "Error")
| sort timestamp desc
You can repeat the same command within one query and still stick to the recommended order. In the example below, you first filter the fetched content and then filter the parsed content again, while the sort command and summarize function retain their positions:
fetch logs
| filter dt.entity.host == "HOST-1" and dt.entity.process_group_instance == "PROCESS_GROUP_INSTANCE-123AB35"
| parse content, "ipaddr:ip ld ' POST ' ld:action ' HTTP/1.1 ' long:status ld"
| filter action == "/cart" or action == "/cart/checkout"
| summarize count = count(), by:{ip, log.source}
| sort count desc
Filter early
The overall query performance is greatly improved by narrowing the dataset before further processing.
- For business event queries, filter on any ingested field.
fetch bizevents | filter event.provider == "www.easytrade.com"
- For log queries, filter on fields representing the topological context of your data.
fetch logs | filter k8s.container.name == "coredns" and (loglevel != "NONE" and loglevel != "INFO")
Commonly used fields within logs:
- log.source
- loglevel
- host.name
- dt.process.name
- event.type
- dt.kubernetes.cluster.name
- k8s.namespace.name
- k8s.deployment.name
- k8s.container.name
- k8s.pod.name
- dt.host_group.id
- dt.event.group_label
- dt.entity.host
- aws-log_group
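For example, the following minimal sketch narrows a log query on two of these fields before any further processing (the host name "my-host" is a placeholder value):
fetch logs, from:-2h
| filter host.name == "my-host" and loglevel == "ERROR" // "my-host" is illustrative; use a host name from your environment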
Use string comparisons with care
- Use == or != whenever the value of a field is known.
fetch logs | filter k8s.container.name == "coredns" and (loglevel != "NONE" and loglevel != "INFO")
- Use matchesPhrase() instead of contains() whenever the value of a field is only partly known or unknown.
fetch logs | filter matchesPhrase(content, "No files matching import")
Field names to be avoided or used in backticks
It is not recommended to use the following eight reserved keywords as field identifiers (field names) or dimensions:
- true
- false
- null
- mod
- and
- or
- xor
- not
However, you can still use these words as field names, identifiers, and dimensions if you enclose them in backticks (`).
For example, if you have a dimension named 'true':
...
| fields x = true // creates a boolean field that is always true
...
| fields x = `true` // accesses the custom dimension named 'true'
Similarly, if you need to sort by a field named 'not':
...
| sort not desc // sorts by the negated boolean value of a field named 'desc', not by the field 'not'
...
| sort `not` desc // sorts descending by a field named `not`