
Davis DQL examples

powered by Grail

These examples illustrate how to build powerful and flexible health dashboards by using DQL to slice and dice all Davis-reported problems and events.

Basic Davis problem examples

Davis problems represent results that originate from Davis root-cause analysis runs. Within Grail, Davis problems, along with their updates, are stored as events.
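
For a first look at the underlying records, a minimal sketch such as the following (the 24-hour timeframe and the record limit are arbitrary choices) returns a handful of raw Davis problem events:

dql
// List a few recent Davis problem records
fetch events, from:now()-24h, to:now()
| filter event.kind == "DAVIS_PROBLEM"
| limit 5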

  • Example 1
    Count the total number of problems in the last 24 hours.
  • Example 2
    Count the current number of active problems.
  • Example 3
    Chart the number of problems in the last 7 days to identify a trend in your environment's stability.
  • Example 4
    Identify the top 10 most problem-affected entities within your environment.
  • Example 5
    Join entity attributes with detected problems and apply a name filter.
  • Example 6
    Load the last state of a given problem.

Basic Davis event examples

Davis events represent raw events that originate from various anomaly detectors within Dynatrace or within OneAgent, for example OneAgent-detected CPU saturation events or high garbage collection time events.

  • Example 7
    Chart the number of CPU saturation and high memory events in the last 7 days.
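
Before drilling into a specific event type, an exploratory sketch like the following (it reuses only the event.kind and event.type fields shown in these examples; the 7-day timeframe is arbitrary) shows which Davis event types are reported in an environment and how often:

dql
// Count Davis raw events per event type
fetch events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_EVENT"
| summarize count = count(), by:{event.type}
| sort count, direction:"descending"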

Example 1: Count the total number of problems in the last 24 hours.

  • Davis-detected problems are distinguished by the event.kind="DAVIS_PROBLEM" field.
  • As Davis problems also include regular refresh events, the summarize DQL command with the countDistinct(event.id) aggregation is used so that each problem is counted only once.
  • The event.id field holds the unique problem ID that is kept stable across all refreshes and updates that Davis reports for the same problem.
dql
fetch events, from:now()-24h, to:now()
| filter event.kind == "DAVIS_PROBLEM"
| summarize {problemCount = countDistinct(event.id)}

Results table

problemCount
415
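
To see how much the refresh events would inflate a naive count, a variation of the query above (same filter and timeframe, using only the fields already shown) can report the raw event count and the distinct problem count side by side:

dql
// Compare raw Davis problem events with unique problems
fetch events, from:now()-24h, to:now()
| filter event.kind == "DAVIS_PROBLEM"
| summarize {rawEventCount = count(), problemCount = countDistinct(event.id)}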

Example 2: Count the current number of active problems.

  • Davis-detected problems are distinguished by the event.kind="DAVIS_PROBLEM" field.
  • As Davis problems also include regular refresh events, the query first sorts all refresh events by their timestamp.
  • The result is grouped by the unique event.id field that contains the problem ID.
  • The takeLast aggregation on the event.status field returns the latest state per problem, which is then used to filter out all problems that are no longer ACTIVE.
dql
fetch events
| filter event.kind == "DAVIS_PROBLEM"
| sort timestamp, direction:"ascending"
| summarize {event.status = takeLast(event.status)}, by:{ event.id }
| filter event.status == "ACTIVE"
| summarize {activeProblems = countDistinct(event.id)}

Results table

activeProblems
15
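
The same takeLast pattern can also break problems down by their latest status instead of filtering to ACTIVE only; the following sketch reuses the query above and only changes the final aggregation:

dql
// Count problems by their most recent status
fetch events
| filter event.kind == "DAVIS_PROBLEM"
| sort timestamp, direction:"ascending"
| summarize {event.status = takeLast(event.status)}, by:{ event.id }
| summarize problemCount = count(), by:{event.status}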

Example 3: Chart the number of problems in the last 7 days to identify a trend in your environment's stability.

  • Davis-detected problems are distinguished by the event.kind="DAVIS_PROBLEM" field.
  • The count is done in a resolution of 60-minute bins.
dql
fetch events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_PROBLEM"
| summarize count = count(), by: {`60m interval` = bin(timestamp, 60m)}

Results table

60m interval          count
5/25/2023, 3:00 PM    46
5/25/2023, 4:00 PM    339
5/25/2023, 5:00 PM    152
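
Because a single problem can contribute many refresh events to a bin, a variation based on countDistinct(event.id), as used in Example 1, charts unique problems per hour instead of raw update events:

dql
// Chart unique problems per 60-minute bin
fetch events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_PROBLEM"
| summarize problemCount = countDistinct(event.id), by: {`60m interval` = bin(timestamp, 60m)}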

Example 4: Identify the top 10 most problem-affected entities within your environment.

  • Davis-detected problems are distinguished by the event.kind="DAVIS_PROBLEM" field.
  • The array field containing all affected entity IDs is expanded into individual records.
  • Count all unique problems grouped by the affected entity ids.
  • Sort by that problem count.
  • Return the top 10 entity ids.
dql
fetch events
| filter event.kind == "DAVIS_PROBLEM"
| expand affected_entity_ids
| summarize by:{affected_entity_ids}, count = countDistinct(display_id)
| sort count, direction:"descending"
| limit 10

Results table

affected_entity_ids                        count
HOST-A9449CACDE12B2BF                      10
SERVICE-5624DD59D74FF453                   5
PROCESS_GROUP_INSTANCE-3184C659684130C7    3
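
The query above runs against the default query timeframe; to restrict the ranking to a recent window, the same from/to parameters used in the earlier examples can be added, for instance:

dql
// Top 10 most problem-affected entities in the last 7 days
fetch events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_PROBLEM"
| expand affected_entity_ids
| summarize by:{affected_entity_ids}, count = countDistinct(display_id)
| sort count, direction:"descending"
| limit 10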

Example 5: Fetch all problems for a host with name "myhost".

A lookup join with entity attributes is performed to filter all problems for a given host name.

  • Davis-detected problems are distinguished by the event.kind="DAVIS_PROBLEM" field.
  • The array field containing all affected entity IDs is expanded into individual records.
  • A topology and entity lookup is done on the affected_entity_ids field.
  • The resulting records are enriched with two entity fields that are prefixed with host., namely host.id and host.name.
  • A filter for the host name myhost is applied.
dql
fetch events
| filter event.kind == "DAVIS_PROBLEM"
| expand affected_entity_ids
| lookup sourceField:affected_entity_ids, lookupField:id, prefix:"host.", [ fetch dt.entity.host | fields id, name = entity.name ]
| filter host.name == "myhost"
| limit 3

Results table

timestamp                affected_entity_ids      host.id                  host.name    display_id
5/31/2023, 1:31:39 PM    HOST-27D70086952122CF    HOST-27D70086952122CF    myhost       P-23054243
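
Combining this lookup with the ranking from Example 4 yields the most problem-affected hosts by name; the sketch below reuses the commands and fields from those two examples, with an isNotNull filter added as an assumption to drop affected entities that did not resolve to a host:

dql
fetch events
| filter event.kind == "DAVIS_PROBLEM"
| expand affected_entity_ids
| lookup sourceField:affected_entity_ids, lookupField:id, prefix:"host.", [ fetch dt.entity.host | fields id, name = entity.name ]
// assumption: keep only records where the lookup resolved a host name
| filter isNotNull(host.name)
| summarize by:{host.name}, count = countDistinct(display_id)
| sort count, direction:"descending"
| limit 10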

Example 6: Load the last state of a given problem.

The most recent state of a given problem is loaded by filtering on its unique display ID and returning only the latest update event.

  • Davis-detected problems are distinguished by the event.kind="DAVIS_PROBLEM" field.
  • Filter by the unique display identifier of the problem.
  • Sort all problem update events by their timestamp in descending order.
  • Return the last one.
dql
fetch events
| filter event.kind == "DAVIS_PROBLEM"
| filter display_id == "P-23053506"
| sort timestamp, direction:"descending"
| limit 1

Results table

timestamp                affected_entity_ids      host.id                  host.name    display_id
5/31/2023, 1:31:39 PM    HOST-27D70086952122CF    HOST-27D70086952122CF    myhost       P-23053506
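
To keep the output focused, the fields command (already used inside the lookup in Example 5) can reduce the last state to a few selected columns; the field names below are the ones that appear throughout these examples:

dql
// Return only selected fields of the problem's last state
fetch events
| filter event.kind == "DAVIS_PROBLEM"
| filter display_id == "P-23053506"
| sort timestamp, direction:"descending"
| limit 1
| fields timestamp, display_id, event.status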

Example 7: Chart the number of CPU saturation and high memory events in the last 7 days.

  • Davis raw events are distinguished by the event.kind="DAVIS_EVENT" field.
  • The count is done in a resolution of 60-minute bins.
dql
fetch events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_EVENT"
| filter event.type == "OSI_HIGH_CPU" or event.type == "OSI_HIGH_MEMORY"
| summarize count = count(), by: {`60m interval` = bin(timestamp, 60m)}

Results table

60m interval          count
5/25/2023, 3:00 PM    146
5/25/2023, 4:00 PM    312
5/25/2023, 5:00 PM    201
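
To see which of the two event types drives the trend, the same query can additionally group by event.type; the sketch below is otherwise identical to the query above:

dql
// Chart CPU saturation and high memory events per hour, split by event type
fetch events, from:now()-7d, to:now()
| filter event.kind == "DAVIS_EVENT"
| filter event.type == "OSI_HIGH_CPU" or event.type == "OSI_HIGH_MEMORY"
| summarize count = count(), by: {`60m interval` = bin(timestamp, 60m), event.type}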