Amazon Elastic Inference

Dynatrace ingests metrics for multiple preselected namespaces, including Amazon Elastic Inference. You can view graphs per service instance, with a set of dimensions, and create custom graphs that you can pin to your dashboards.

Prerequisites

To enable monitoring for this service, you first need to set up Dynatrace monitoring for your Amazon Web Services account.

Add the service to monitoring

To view the service metrics, you must first add the service to monitoring in your Dynatrace environment.
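If you manage configuration as code, this step can also be scripted against the Dynatrace Configuration API. The following is a minimal sketch, not the official procedure: the environment URL, token, and credentials ID are hypothetical placeholders, and the `supportingServicesToMonitor` field and the `elasticinference` service name should be verified against your environment's API schema.

```python
import requests

# Hypothetical placeholders; substitute your own environment values.
DT_ENV = "https://abc12345.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"                 # token with configuration read/write scopes
CREDENTIALS_ID = "CREDENTIALS-1234"      # ID of an existing AWS credentials config

headers = {"Authorization": f"Api-Token {DT_TOKEN}"}
url = f"{DT_ENV}/api/config/v1/aws/credentials/{CREDENTIALS_ID}"

# Fetch the existing AWS credentials configuration.
config = requests.get(url, headers=headers).json()

# Append Elastic Inference to the monitored supporting services,
# preserving whatever services are already configured.
services = config.setdefault("supportingServicesToMonitor", [])
if not any(s.get("name") == "elasticinference" for s in services):
    services.append({"name": "elasticinference", "monitoredMetrics": []})

# Write the updated configuration back.
requests.put(url, headers=headers, json=config).raise_for_status()
```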

Configure service metrics

Once you add a service, Dynatrace automatically starts collecting a suite of metrics for it. These are called recommended metrics.

Recommended metrics:

  • Are enabled by default
  • Can't be disabled
  • Can have recommended dimensions (enabled by default, can't be disabled)
  • Can have optional dimensions (disabled by default, can be enabled)

Apart from the recommended metrics, most services let you enable optional metrics (see the sketch after the list below).

Optional metrics:

  • Can be added and configured manually
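
To illustrate what manual configuration can look like via the API, here is a minimal sketch of a supporting-service entry that enables one optional metric variant. The payload shape is an assumption based on the Configuration API's AWS credentials schema; verify the field names and allowed statistic values in your environment before using it.

```python
# A minimal sketch of a manually configured optional metric entry, assuming
# the supportingServicesToMonitor schema accepts per-metric name, statistic,
# and dimensions fields (verify against your environment's API explorer).
elastic_inference_service = {
    "name": "elasticinference",
    "monitoredMetrics": [
        {
            "name": "AcceleratorHealthCheckFailed",
            # Assumption: "MULTI" selects the multi-statistic variant shown
            # in the metrics table below; "SUM" selects the recommended one.
            "statistic": "MULTI",
            "dimensions": ["InstanceId", "ElasticInferenceAcceleratorId"],
        }
    ],
}
```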

Import preset dashboards

Dynatrace provides preset AWS dashboards that you can import from GitHub to your environment's Dashboards page. Once you download a preset dashboard locally, there are two ways to import it: upload it through the Dashboards page, or send it through the Dynatrace API. The preset dashboard for this service is:

  • elastic-inference
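
For the API route, a downloaded preset can be posted to the Dashboards endpoint. A minimal sketch, assuming the file was saved locally as elastic-inference.json and the token has dashboard write access (the environment URL and token below are hypothetical placeholders):

```python
import json
import requests

# Hypothetical placeholders; substitute your own environment values.
DT_ENV = "https://abc12345.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"

# Load the preset dashboard downloaded from the Dynatrace GitHub repository.
with open("elastic-inference.json") as f:
    dashboard = json.load(f)

# POST creates a new dashboard; the response contains its new ID.
resp = requests.post(
    f"{DT_ENV}/api/config/v1/dashboards",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    json=dashboard,
)
resp.raise_for_status()
print("Imported dashboard:", resp.json()["id"])
```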

Available metrics

| Name | Description | Unit | Statistics | Dimensions | Recommended |
|------|-------------|------|------------|------------|-------------|
| AcceleratorHealthCheckFailed | Reports whether the Elastic Inference accelerator has passed a status health check in the last minute | Count | Sum | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| AcceleratorHealthCheckFailed | | Count | Multi | InstanceId, ElasticInferenceAcceleratorId | |
| AcceleratorInferenceWithClientErrorCount | The number of inference requests reaching the Elastic Inference accelerator in the last minute that resulted in a 4xx error | Count | Sum | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| AcceleratorInferenceWithClientErrorCount | | Count | Multi | InstanceId, ElasticInferenceAcceleratorId | |
| AcceleratorInferenceWithServerErrorCount | The number of inference requests reaching the Elastic Inference accelerator in the last minute that resulted in a 5xx error | Count | Sum | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| AcceleratorInferenceWithServerErrorCount | | Count | Multi | InstanceId, ElasticInferenceAcceleratorId | |
| AcceleratorMemoryUsage | The memory of the Elastic Inference accelerator used in the last minute | Bytes | Multi | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| AcceleratorSuccessfulInferenceCount | The number of successful inference requests reaching the Elastic Inference accelerator in the last minute | Count | Sum | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| AcceleratorSuccessfulInferenceCount | | Count | Multi | InstanceId, ElasticInferenceAcceleratorId | |
| AcceleratorTotalInferenceCount | The number of inference requests reaching the Elastic Inference accelerator in the last minute | Count | Sum | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| AcceleratorTotalInferenceCount | | Count | Multi | InstanceId, ElasticInferenceAcceleratorId | |
| AcceleratorUtilization | The percentage of the Elastic Inference accelerator used for computation in the last minute | Percent | Multi | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| ConnectivityCheckFailed | Reports whether connectivity to the Elastic Inference accelerator is active or has failed in the last minute | Count | Sum | InstanceId, ElasticInferenceAcceleratorId | ✔️ |
| ConnectivityCheckFailed | | Count | Multi | InstanceId, ElasticInferenceAcceleratorId | |
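
After metrics start flowing, you can query them through the Metrics v2 API. The metric key below is a hypothetical placeholder, since the exact keys under which these Elastic Inference metrics are registered vary; list them first with GET /api/v2/metrics to find the real ones.

```python
import requests

# Hypothetical placeholders; substitute your own environment values.
DT_ENV = "https://abc12345.live.dynatrace.com"
DT_TOKEN = "dt0c01.XXXX"

# Hypothetical metric key; discover the real key via GET /api/v2/metrics.
selector = "cloud.aws.elastic_inference.accelerator_utilization"

resp = requests.get(
    f"{DT_ENV}/api/v2/metrics/query",
    headers={"Authorization": f"Api-Token {DT_TOKEN}"},
    params={"metricSelector": selector, "resolution": "1m", "from": "now-2h"},
)
resp.raise_for_status()

# Each result holds one metric ID and its data series (split by dimensions).
for result in resp.json()["result"]:
    print(result["metricId"], result["data"][:3])  # show the first few series
```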