Synthetic Classic has reached end of support and is no longer available. Existing Synthetic Classic customers have been upgraded to the all-in-one Dynatrace software intelligence platform.

Synthetic Classic API version 3.2 use cases


The Synthetic Classic API response content can be integrated with other applications. When you use the trend API call and choose metrics like nwtme or uxtme, you can also choose the response content format: CSV or JSON. A spreadsheet application works well for viewing CSV, and you can view the JSON response in a web browser or code editor. It is also fairly straightforward to grab the API response content and render that data in operational intelligence applications like Microsoft Power BI and Splunk. The API data can then be correlated with other infrastructure data for better analysis and reporting.

API integration with Splunk Enterprise 6.x

These instructions assume a general familiarity with these Splunk concepts: Apps, Source Types, Indexes, Data Inputs, props.conf, Search, and events.

Following these steps will produce raw events in Splunk for the measurement data you specify in your REST poll. For instance, if you want to poll the Synthetic Classic API every 5 minutes for measurement id 12341234:


In Splunk, you can then create a time chart of nwtme performance (see Let's trend some data).

index=dynatrace_api monid=12341234 | timechart avg(nwtme)

The sub-sections that follow explain in detail the high-level steps below:

  1. Install the REST API Modular Input application so that new REST API Data Inputs can be created in Splunk  (rest_ta app).

  2. Create a new source type for the Dynatrace API in props.conf. The new dynatrace_api source type tells Splunk how to parse the incoming Dynatrace API JSON into individual events.

  3. Create a testing index to receive the API input (dynatrace_api index).

  4. Restart Splunk for props.conf changes to take effect.

  5. Create a new REST API data input (dynatrace_api) with the correct configuration and set it on a Cron schedule.

  6. Search the new testing index.

This procedure has been tested on the following product versions without using complicated Splunk features.

  • Splunk Enterprise 6.2.x
  • Splunk REST API Modular Input 1.3 and 1.4

It should work with other Splunk 6.x versions and other Splunk REST API Modular Input versions.

If you are completely new to Splunk, these instructions will work on a brand-new Enterprise trial install, just after you log in for the first time with admin/changeme.

Install the REST API Modular Input application (rest_ta)

  1. On your workstation, download the REST API Modular Input application.
    Following good security practice, verify the checksum after downloading.

  2. Log into your Splunk web interface.

  3. Go to Apps > Manage Apps.

    1. Click Install app from file.
    2. Click Choose File and locate the rest-api-modular-input_###.tgz file.
    3. Click Upload.

    The application rest_ta should now appear in the application list.

Create a new source type dynatrace_api for Dynatrace API

Edit /opt/splunk/etc/apps/search/local/props.conf (or create props.conf if it does not exist). If you are familiar with other props.conf locations, feel free to tailor this to your environment.

Add this text to props.conf to create the dynatrace_api source type, then save the file.

[dynatrace_api]
TIME_PREFIX = \"mtime\":
SEDCMD-remove_header = s/\{\"meta.+?data\":\[//g
SEDCMD-remove_footer = s/\]\}//g

This source type does the following:

  • Assigns the Splunk event timestamp from the mtime epoch + ms value in the Synthetic Classic measurement, instead of the polling time of the REST API input. (If you do not tell Splunk to do this, it defaults to the REST API poll time as the events' timestamp.)
  • Removes the response header and footer so that Splunk treats each JSON data element as a unique "event."

Source type definition

If you are a Splunk admin and know how to parse JSON differently or more efficiently, feel free to make your modifications.

This source type definition essentially parses the response data into clean events with mtime as the event timestamp.

  monid: 1234,
  mtime: 1447792560000,
  nwtme:
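The effect of the two SEDCMD rules can be sketched in Python, using a made-up, trimmed-down API response (the field values here are illustrative, not real measurement data):

```python
import re

# Hypothetical Dynatrace trend response: meta header, data array, footer.
raw = ('{"meta":{"apiversion":"3.2"},'
       '"data":[{"monid":1234,"mtime":1447792560000,"nwtme":1175},'
       '{"monid":1234,"mtime":1447792620000,"nwtme":1208}]}')

# Equivalent of SEDCMD-remove_header: strip everything up to the data array.
events = re.sub(r'\{"meta.+?data":\[', '', raw)
# Equivalent of SEDCMD-remove_footer: strip the closing "]}".
events = re.sub(r'\]\}', '', events)

print(events)
```

What remains is the comma-joined sequence of JSON data elements, which Splunk then breaks into individual events, as described above.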

Create a testing index

  1. Go to Settings > Indexes.
  2. Create a new index for testing and name it dynatrace_api.
  3. Change maximum size to 500 MB (for testing).

Restart Splunk for props.conf changes

/opt/splunk/bin/splunk restart

Create a new REST API data input (dynatrace_api)

Once you get this simple example working, feel free to add all your measurement IDs, add more metrics, etc.

  1. Click Settings > Data Inputs.

  2. Click REST.

  3. Click New and fill out the following fields.

    • REST API Input Name – dynatrace_api
    • Endpoint URL –
    • HTTP Method – GET
    • Authentication Type – none
    • URL Arguments – rltime=300000&bucket=second&group=mname,monid&metrics=count,avail,nwtme,uxtme&monid=<monid>&login=<dynatrace_login>&pass=<api_md5hash_password>
      Substitute one of your measurement IDs, your login, and the MD5 hash of your password.
    • Response type – json
    • Polling interval – */5 * * * *
    • Delimiter – &
    • Set sourcetype – Manual
    • Source type – dynatrace_api
  4. Click More settings.

    • Leave the Host field at the default value (the name of your Splunk server).
    • Change Index to dynatrace_api.
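As a sanity check, the URL arguments above can be assembled programmatically. This is only a sketch: the login, password, and measurement ID are made up, and you would substitute your own values.

```python
import hashlib
from urllib.parse import urlencode

# Hypothetical credentials and measurement ID -- substitute your own.
login = "jdoe"
password = "secret"
monid = "12341234"

args = {
    "rltime": "300000",
    "bucket": "second",
    "group": "mname,monid",
    "metrics": "count,avail,nwtme,uxtme",
    "monid": monid,
    "login": login,
    # The API expects the MD5 hash of the password, not the password itself.
    "pass": hashlib.md5(password.encode()).hexdigest(),
}

print(urlencode(args))
```

Printing the encoded string makes it easy to spot a missing argument or a stray character before pasting the arguments into the REST data input.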

Search the new testing index

Open a new search and search the testing index:

index=dynatrace_api

Events should appear like this:


Test your URL arguments in a browser

It is often helpful to validate your endpoint URL and URL arguments in a browser. A simple error in the URL arguments can be frustrating to troubleshoot.

Take your endpoint URL, add a question mark, add your URL arguments, and enter into your browser as a single URL:


For validating your REST API calls in a browser, the Firefox add-on JSONView by Ben Hollis, and its port for Chrome by gildas, are recommended.

Respect the Cron setting

Splunk will honor your Cron syntax in the polling interval.

If you told it */5, it will run on every fifth minute (12:00, 12:05, 12:10, and so on). If you save the data input at 12:01, you will not get search results until just after 12:05, because Cron has not yet hit its run schedule.
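The */5 rounding is simple arithmetic; this small sketch (not Splunk code) illustrates when the next poll fires:

```python
from datetime import datetime, timedelta

def next_cron_run(now, every_min=5):
    # Cron "*/5" fires on minutes divisible by 5; round up to the next one.
    minutes = (now.minute // every_min + 1) * every_min
    base = now.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(minutes=minutes)

print(next_cron_run(datetime(2024, 1, 1, 12, 1)))   # 2024-01-01 12:05:00
print(next_cron_run(datetime(2024, 1, 1, 12, 57)))  # 2024-01-01 13:00:00
```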

Other thoughts on Cron and rltime

If rltime is used to specify 5 minutes, or 300000 milliseconds, then Cron should be set to */5 for every 5 minutes. rltime and your Cron schedule should match so that the poll grabs data for the past X minutes every X minutes.

So if you want to poll the API every hour:

  • Change rltime to 1 hour (3600000 milliseconds).
  • Change your Cron schedule to hourly (0 * * * *).
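The rltime values are just the polling interval converted to milliseconds; a quick sketch:

```python
def rltime_ms(minutes):
    # rltime is expressed in milliseconds: minutes * 60 s * 1000 ms.
    return minutes * 60 * 1000

print(rltime_ms(5))   # 300000  -> pair with a */5 Cron schedule
print(rltime_ms(60))  # 3600000 -> pair with an hourly Cron schedule
```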

Why not use tstart and tend?

This is definitely possible and recommended if you want to backfill X days of API data.
After you get familiar with the above tutorial, do the following:

Create a new testing index for this new exercise.

Create a new REST data input and in URL arguments, do not use rltime. Instead, define tstart and tend as something like the following:

  • tend = current time, in epoch + ms format
  • tstart = tend - 30 days
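These two values can be computed like so (epoch time in milliseconds; a sketch):

```python
import time

# tend: current time in epoch milliseconds.
tend = int(time.time() * 1000)
# tstart: 30 days earlier, also in milliseconds.
tstart = tend - 30 * 24 * 60 * 60 * 1000

print(f"tstart={tstart}&tend={tend}")
```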

Do not poll at a regular interval; leave that field blank.

Send the data to the new testing index you just created.

After you save the new REST input, it will run only once.

Strictly speaking, that is not entirely true: Splunk takes some actions when it restarts that can re-run data inputs. To avoid this data input running by accident during every Splunk restart, disable or delete the REST API data input after it runs initially.

Now, create a new REST data input, but this time remove tstart and tend, use rltime=900000 (15 minutes), and set the polling interval to */15 * * * *. You should now have 30 days of backfilled data and all new data moving forward.


Troubleshooting with splunkd.log

If you are certain that your polling interval is correct and that Splunk should have polled the Dynatrace API, check splunkd.log:

tail -1000 /opt/splunk/var/log/splunk/splunkd.log

If your log file is verbose, it may be helpful to pipe the output of the tail command through grep (for example, adding | grep ERROR).

This error, for example, can indicate the delimiter was not entered in your REST API data input:

ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/"     
url_args = dict((k.strip(), v.strip()) for k,v in
ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/rest_ta/bin/"
ValueError: need more than 1 value to unpack

Dashboard ideas

NOC dashboard

This dashboard shows the last 15 minutes, 60 minutes, and 24 hours for all measurements you specify.

Show nwtme, uxtme, availability, or any metric that is important to your applications' health.

The cell highlighting is performed using custom JavaScript files in $SPLUNK_HOME/etc/apps/app_name/appserver/static.

index=dynatrace_api | eval nwtmeSec=nwtme/1000 | eval avail=avail*100 | stats
avg(nwtmeSec) as "Avg Network Delta" avg(avail) as "Availability" by mname | sort mname |
rename mname as Measurement

Let's trend some data

At-a-glance vital metrics for any measurement:

index=dynatrace_api monid=12341234 | eval nwtmesec=nwtme/1000 | timechart span=1h  

The Select Slot menu is autogenerated using:

  • Search String – index=dynatrace_api | dedup mname | sort mname | table mname, monid
  • Field for Label – mname
  • Field for Value – monid