AWS OpsWorks monitoring

Dynatrace ingests metrics for multiple preselected namespaces, including AWS OpsWorks. You can view metrics for each service instance, split metrics into multiple dimensions, and create custom charts that you can pin to your dashboards.

Prerequisites

To enable monitoring for this service, you need

  • An Environment or Cluster ActiveGate version 1.197+
  • Dynatrace version 1.201+
  • An AWS monitoring policy updated to include the additional AWS services.
    To update the AWS IAM policy, use the JSON below, containing the monitoring policy (permissions) for all supporting services.

If you don't want to add permissions for all services, but only for certain ones, consult the table below. The table contains a set of permissions that are required for all services (All monitored Amazon services) and, for each supporting service, a list of optional permissions specific to that service.

Example of a JSON policy for a single service

In this example, from the complete list of permissions, you need to select the following (a sketch of the resulting policy document appears after the list):

  • "apigateway:GET" for Amazon API Gateway
  • "cloudwatch:GetMetricData", "cloudwatch:GetMetricStatistics", "cloudwatch:ListMetrics", "sts:GetCallerIdentity", "tag:GetResources", "tag:GetTagKeys", and "ec2:DescribeAvailabilityZones" for All monitored Amazon services.
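
Assembled into a standard AWS IAM policy document, this single-service selection might look like the sketch below. The statement Sid is an illustrative name, and the broad "Resource": "*" scope is an assumption of this sketch; adjust both to match your own policy conventions.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DynatraceApiGatewayMonitoring",
      "Effect": "Allow",
      "Action": [
        "apigateway:GET",
        "cloudwatch:GetMetricData",
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:ListMetrics",
        "sts:GetCallerIdentity",
        "tag:GetResources",
        "tag:GetTagKeys",
        "ec2:DescribeAvailabilityZones"
      ],
      "Resource": "*"
    }
  ]
}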

Enable monitoring

To enable monitoring for this service, you first need to integrate Dynatrace with Amazon Web Services.

Add the service to monitoring

To view the service metrics, you must add the service to monitoring in your Dynatrace environment.

Cloud-service monitoring consumption

Beginning in early 2021, all cloud services consume Davis data units (DDUs). The amount of DDU consumption per service instance depends on the number of monitored metrics and their dimensions (each metric dimension results in the ingestion of 1 data point; 1 data point consumes 0.001 DDUs). For DDU consumption estimates per service instance (recommended metrics only, predefined dimensions, and assumed dimension values), see DDU consumption estimates per cloud service instance.
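
For example, a hypothetical service instance whose enabled metrics and dimensions produce 20 data points per minute would consume 20 × 0.001 = 0.02 DDUs per minute, or about 28.8 DDUs per day (0.02 × 1,440 minutes). The actual figures depend on which metrics and dimensions are enabled for the instance.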

Monitor resources based on tags

You can choose to monitor resources based on existing AWS tags, as Dynatrace automatically imports them from service instances. However, not all AWS services support tag-based monitoring. Expand the table below to see which supporting services can be filtered by tags.

To monitor resources based on tags

  1. Go to Settings > Cloud and virtualization > AWS and select the AWS instance.
  2. For Resource monitoring method, select Monitor resources based on tags.
  3. Enter the Key and Value of the AWS tag that you want to filter by (for example, a key of Environment and a value of production).
  4. Select Save.

Configure service metrics

Once you add a service, Dynatrace automatically starts collecting a suite of metrics for that particular service. These are called recommended metrics.

Recommended metrics:

  • Are enabled by default
  • Can't be disabled
  • Can have recommended dimensions (enabled by default, can't be disabled)
  • Can have optional dimensions (disabled by default, can be enabled)

In addition to the recommended metrics, most services also provide optional metrics that you can enable.

Optional metrics:

  • Can be added and configured manually
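
For example, in the Available metrics table below, cpu_idle (split by StackId) is collected by default as a recommended metric, whereas load_1 isn't marked as recommended and is only collected if you enable it manually as an optional metric.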

View service metrics

You can view the service metrics in your Dynatrace environment either on the custom device overview page or on your Dashboards page.

View metrics on the custom device overview page

To access the custom device overview page

  1. Go to Technologies on the Dynatrace navigation menu.
  2. Filter by service name and select the relevant custom device group.
  3. Selecting the custom device group takes you to the custom device group overview page.
  4. The custom device group overview page lists all instances (custom devices) belonging to the group. Select an instance to view its custom device overview page.

View metrics on your dashboard

After you add the service to monitoring, a preset dashboard containing all recommended metrics is automatically listed on your Dashboards page. To look for specific dashboards, filter by Preset and then by Name.
Note: For existing monitored services, you might need to resave your credentials for the preset dashboard to appear on the Dashboards page. To resave your credentials, go to Settings > Cloud and virtualization > AWS, select the desired AWS instance, and then select Save.

You can't make changes on a preset dashboard directly, but you can clone and edit it. To clone a dashboard, open the browse menu (...) and select Clone.
To remove a dashboard from the Dashboards page, you can hide it. To hide a dashboard, open the browse menu (...) and select Hide.
Note: Hiding a dashboard doesn't affect other users.

To check the availability of preset dashboards for each AWS service, see the list below.


Available metrics

Name | Description | Unit | Statistics | Dimensions | Recommended
cpu_idle | The percentage of time that the CPU is idle | Percent | Multi | StackId | ✔️
cpu_idle | | Percent | Multi | Region, InstanceId | ✔️
cpu_idle | | Percent | Multi | Region, LayerId | ✔️
cpu_nice | The percentage of time that the CPU is handling processes with a positive nice value, which have a lower scheduling priority | Percent | Multi | StackId | ✔️
cpu_nice | | Percent | Multi | Region, InstanceId |
cpu_nice | | Percent | Multi | Region, LayerId |
cpu_steal | The percentage of time that an instance is waiting for the hypervisor to allocate physical CPU resources | Percent | Multi | StackId | ✔️
cpu_steal | | Percent | Multi | Region, InstanceId | ✔️
cpu_steal | | Percent | Multi | Region, LayerId | ✔️
cpu_system | The percentage of time that the CPU is handling system operations | Percent | Multi | StackId | ✔️
cpu_system | | Percent | Multi | Region, InstanceId |
cpu_system | | Percent | Multi | Region, LayerId |
cpu_user | The percentage of time that the CPU is handling user operations | Percent | Multi | StackId | ✔️
cpu_user | | Percent | Multi | Region, InstanceId | ✔️
cpu_user | | Percent | Multi | Region, LayerId | ✔️
cpu_waitio | The percentage of time that the CPU is waiting for input/output operations | Percent | Multi | StackId | ✔️
cpu_waitio | | Percent | Multi | Region, InstanceId |
cpu_waitio | | Percent | Multi | Region, LayerId |
load_1 | The load averaged over a one-minute window | None | Multi | StackId |
load_1 | | None | Multi | Region, InstanceId |
load_1 | | None | Multi | Region, LayerId |
load_5 | The load averaged over a five-minute window | None | Multi | StackId | ✔️
load_5 | | None | Multi | Region, InstanceId | ✔️
load_5 | | None | Multi | Region, LayerId | ✔️
load_15 | The load averaged over a 15-minute window | None | Multi | StackId |
load_15 | | None | Multi | Region, InstanceId |
load_15 | | None | Multi | Region, LayerId |
memory_buffers | The amount of buffered memory | None | Multi | StackId | ✔️
memory_buffers | | None | Multi | Region, InstanceId |
memory_buffers | | None | Multi | Region, LayerId |
memory_cached | The amount of cached memory | None | Multi | StackId | ✔️
memory_cached | | None | Multi | Region, InstanceId |
memory_cached | | None | Multi | Region, LayerId |
memory_free | The amount of free memory | None | Multi | StackId | ✔️
memory_free | | None | Multi | Region, InstanceId | ✔️
memory_free | | None | Multi | Region, LayerId | ✔️
memory_swap | The amount of swap space | None | Multi | StackId | ✔️
memory_swap | | None | Multi | Region, InstanceId |
memory_swap | | None | Multi | Region, LayerId |
memory_total | The total amount of memory | None | Multi | StackId | ✔️
memory_total | | None | Multi | Region, InstanceId | ✔️
memory_total | | None | Multi | Region, LayerId | ✔️
memory_used | The amount of memory in use | None | Multi | StackId | ✔️
memory_used | | None | Multi | Region, InstanceId | ✔️
memory_used | | None | Multi | Region, LayerId | ✔️
procs | The number of active processes | None | Multi | StackId | ✔️
procs | | None | Multi | Region, InstanceId | ✔️
procs | | None | Multi | Region, LayerId | ✔️