AWS OpsWorks

Dynatrace ingests metrics for multiple preselected namespaces, including AWS OpsWorks. You can view metrics for each service instance, split metrics into multiple dimensions, and create custom charts that you can pin to your dashboards.

Prerequisites

To enable monitoring for this service, you need an AWS account that is already connected to Dynatrace for monitoring.

Add the service to monitoring

To view the service metrics, you must first add the service to monitoring in your Dynatrace environment.
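You typically add the service in the web UI under your AWS monitoring settings. If you manage configuration programmatically, the following minimal sketch illustrates the idea; it assumes the Configuration API endpoint `/api/config/v1/aws/credentials/{id}/services`, an API token with configuration write access, and a payload shape with a `services` list, so verify the exact endpoint and schema against your environment's API documentation before relying on it.

```python
# Hypothetical sketch: add OpsWorks to the supported services monitored by an
# existing AWS credentials configuration. Endpoint and payload shape are
# assumptions; check the Configuration API schema for your environment.
import requests

DT_ENV = "https://{your-environment-id}.live.dynatrace.com"  # placeholder environment URL
API_TOKEN = "dt0c01.EXAMPLE"                                  # placeholder API token
CREDENTIALS_ID = "AWS_CREDENTIALS_ID"                         # placeholder credentials ID

headers = {"Authorization": f"Api-Token {API_TOKEN}"}
url = f"{DT_ENV}/api/config/v1/aws/credentials/{CREDENTIALS_ID}/services"

# Fetch the currently monitored services, append OpsWorks if missing, write back.
current = requests.get(url, headers=headers).json()
services = current.get("services", [])
if not any(s.get("name") == "opsworks" for s in services):
    # An empty monitoredMetrics list is assumed to mean "recommended metrics only".
    services.append({"name": "opsworks", "monitoredMetrics": []})

resp = requests.put(url, headers=headers, json={"services": services})
resp.raise_for_status()
```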

Configure service metrics

Once you add a service, Dynatrace automatically starts collecting a suite of metrics for it. These are the recommended metrics.

Recommended metrics:

  • Are enabled by default
  • Can't be disabled
  • Can have recommended dimensions (enabled by default, can't be disabled)
  • Can have optional dimensions (disabled by default, can be enabled)

Apart from the recommended metrics, most services also offer optional metrics that you can enable.

Optional metrics:

  • Can be added and configured manually
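As a rough illustration, an optional metric with an extra dimension could be described by an entry like the one below, added to the monitoredMetrics list from the earlier sketch. The field names (`name`, `statistic`, `dimensions`) and values are assumptions modeled on the metrics table further down; consult the API schema for the exact structure.

```python
# Hypothetical sketch of a single metric entry with an optional dimension set.
optional_metric = {
    "name": "cpu_user",                     # metric name from the table below
    "statistic": "MULTI",                   # "Multi" statistics, per the table
    "dimensions": ["Region", "InstanceId"]  # optional dimension combination to enable
}
```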

Import preset dashboards

Dynatrace provides preset AWS dashboards that you can import from GitHub to your environment's dashboard page. Once you download a preset dashboard locally, there are two ways to import it.
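Typically this means either uploading the JSON file on the Dashboards page or pushing it through the Configuration API. The sketch below shows the API route; the dashboard file name is a placeholder for whatever you downloaded from GitHub.

```python
# Sketch: import a downloaded preset dashboard JSON via POST /api/config/v1/dashboards.
import json
import requests

DT_ENV = "https://{your-environment-id}.live.dynatrace.com"  # placeholder environment URL
API_TOKEN = "dt0c01.EXAMPLE"                                  # placeholder API token

# Placeholder file name for the preset dashboard downloaded from GitHub.
with open("aws-opsworks-dashboard.json") as f:
    dashboard = json.load(f)

resp = requests.post(
    f"{DT_ENV}/api/config/v1/dashboards",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    json=dashboard,
)
resp.raise_for_status()
print("Created dashboard:", resp.json().get("id"))
```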


Available metrics

| Name | Description | Unit | Statistics | Dimensions | Recommended |
|------|-------------|------|------------|------------|-------------|
| cpu_idle | The percentage of time that the CPU is idle | Percent | Multi | StackId | ✔️ |
| cpu_idle | | Percent | Multi | Region, InstanceId | ✔️ |
| cpu_idle | | Percent | Multi | Region, LayerId | ✔️ |
| cpu_nice | The percentage of time that the CPU is handling processes with a positive nice value, which have a lower scheduling priority | Percent | Multi | StackId | ✔️ |
| cpu_nice | | Percent | Multi | Region, InstanceId | |
| cpu_nice | | Percent | Multi | Region, LayerId | |
| cpu_steal | The percentage of time that an instance is waiting for the hypervisor to allocate physical CPU resources | Percent | Multi | StackId | ✔️ |
| cpu_steal | | Percent | Multi | Region, InstanceId | ✔️ |
| cpu_steal | | Percent | Multi | Region, LayerId | ✔️ |
| cpu_system | The percentage of time that the CPU is handling system operations | Percent | Multi | StackId | ✔️ |
| cpu_system | | Percent | Multi | Region, InstanceId | |
| cpu_system | | Percent | Multi | Region, LayerId | |
| cpu_user | The percentage of time that the CPU is handling user operations | Percent | Multi | StackId | ✔️ |
| cpu_user | | Percent | Multi | Region, InstanceId | ✔️ |
| cpu_user | | Percent | Multi | Region, LayerId | ✔️ |
| cpu_waitio | The percentage of time that the CPU is waiting for input/output operations | Percent | Multi | StackId | ✔️ |
| cpu_waitio | | Percent | Multi | Region, InstanceId | |
| cpu_waitio | | Percent | Multi | Region, LayerId | |
| load_1 | The load averaged over a one-minute window | None | Multi | StackId | |
| load_1 | | None | Multi | Region, InstanceId | |
| load_1 | | None | Multi | Region, LayerId | |
| load_5 | The load averaged over a five-minute window | None | Multi | StackId | ✔️ |
| load_5 | | None | Multi | Region, InstanceId | ✔️ |
| load_5 | | None | Multi | Region, LayerId | ✔️ |
| load_15 | The load averaged over a 15-minute window | None | Multi | StackId | |
| load_15 | | None | Multi | Region, InstanceId | |
| load_15 | | None | Multi | Region, LayerId | |
| memory_buffers | The amount of buffered memory | None | Multi | StackId | ✔️ |
| memory_buffers | | None | Multi | Region, InstanceId | |
| memory_buffers | | None | Multi | Region, LayerId | |
| memory_cached | The amount of cached memory | None | Multi | StackId | ✔️ |
| memory_cached | | None | Multi | Region, InstanceId | |
| memory_cached | | None | Multi | Region, LayerId | |
| memory_free | The amount of free memory | None | Multi | StackId | ✔️ |
| memory_free | | None | Multi | Region, InstanceId | ✔️ |
| memory_free | | None | Multi | Region, LayerId | ✔️ |
| memory_swap | The amount of swap space | None | Multi | StackId | ✔️ |
| memory_swap | | None | Multi | Region, InstanceId | |
| memory_swap | | None | Multi | Region, LayerId | |
| memory_total | The total amount of memory | None | Multi | StackId | ✔️ |
| memory_total | | None | Multi | Region, InstanceId | ✔️ |
| memory_total | | None | Multi | Region, LayerId | ✔️ |
| memory_used | The amount of memory in use | None | Multi | StackId | ✔️ |
| memory_used | | None | Multi | Region, InstanceId | ✔️ |
| memory_used | | None | Multi | Region, LayerId | ✔️ |
| procs | The number of active processes | None | Multi | StackId | ✔️ |
| procs | | None | Multi | Region, InstanceId | ✔️ |
| procs | | None | Multi | Region, LayerId | ✔️ |
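Once these metrics are flowing, you can chart them split by one of the dimensions above, for example through the Metrics API v2. In the sketch below, the metric key `cloud.aws.opsworks.cpu_user` and the dimension name are assumptions for illustration; look up the exact metric key and dimension names in the Metric browser of your environment.

```python
# Sketch: query an OpsWorks metric split by a dimension via the Metrics API v2.
# The metric key and dimension name are assumptions; verify them in the Metric browser.
import requests

DT_ENV = "https://{your-environment-id}.live.dynatrace.com"  # placeholder environment URL
API_TOKEN = "dt0c01.EXAMPLE"                                  # placeholder API token

params = {
    "metricSelector": 'cloud.aws.opsworks.cpu_user:splitBy("InstanceId"):avg',
    "from": "now-2h",
    "resolution": "5m",
}
resp = requests.get(
    f"{DT_ENV}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {API_TOKEN}"},
    params=params,
)
resp.raise_for_status()

# Print the first few data points for each dimension combination.
for series in resp.json()["result"][0]["data"]:
    print(series["dimensions"], series["values"][:3])
```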