
Set up the Dynatrace GCP metric integration on a GKE cluster

Dynatrace version 1.230+

As an alternative to the main deployment, which provides Google Cloud Platform monitoring for both metrics and logs, you can set up monitoring for metrics only. In this scenario, you run the deployment script in Google Cloud Shell. The instructions depend on where you want the deployment script to run:

  • On a new GKE Autopilot cluster created automatically (recommended)

  • On an existing GKE standard or GKE Autopilot cluster

During setup, GKE will run a metric forwarder container. After installation, you'll get metrics, dashboards, and alerts for your configured services in Dynatrace.

For other deployment options, see Alternative deployment scenarios.

This page describes how to install version 1.0 of the GCP integration on a GKE cluster.

  • If you already have an earlier version installed, you need to migrate.

Limitations

Dynatrace GCP metric integration supports up to 50 GCP projects with the standard deployment. To monitor larger environments, you need to enable metrics scope. See Monitor multiple GCP projects - Large environments.

Prerequisites

To deploy the integration, you need to make sure the following requirements are met:

GCP permissions

Running the deployment script requires a specific set of GCP permissions. You need to create a custom role (see below) and use it to deploy dynatrace-gcp-monitor.

  1. Create a YAML file named dynatrace-gcp-monitor-helm-deployment-role.yaml with the following content:
dynatrace-gcp-monitor-helm-deployment-role.yaml
yaml
title: Dynatrace GCP Monitor helm deployment role
description: Role for Dynatrace GCP Monitor helm and pubsub deployment
stage: GA
includedPermissions:
- container.clusters.get
- container.configMaps.create
- container.configMaps.delete
- container.configMaps.get
- container.configMaps.update
- container.deployments.create
- container.deployments.delete
- container.deployments.get
- container.deployments.update
- container.namespaces.create
- container.namespaces.get
- container.pods.get
- container.pods.list
- container.replicaSets.create
- container.replicaSets.get
- container.replicaSets.getScale
- container.replicaSets.getStatus
- container.replicaSets.list
- container.secrets.create
- container.secrets.delete
- container.secrets.get
- container.secrets.list
- container.secrets.update
- container.serviceAccounts.create
- container.serviceAccounts.delete
- container.serviceAccounts.get
- container.services.create
- container.services.delete
- container.services.get
- container.statefulSets.create
- container.statefulSets.delete
- container.statefulSets.get
- container.statefulSets.update
- iam.roles.create
- iam.roles.list
- iam.roles.update
- iam.serviceAccounts.actAs
- iam.serviceAccounts.create
- iam.serviceAccounts.getIamPolicy
- iam.serviceAccounts.list
- iam.serviceAccounts.setIamPolicy
- resourcemanager.projects.get
- resourcemanager.projects.getIamPolicy
- resourcemanager.projects.setIamPolicy
- serviceusage.services.enable
- serviceusage.services.get
  2. Run the command below, replacing <your_project_ID> with the project ID where you want to deploy the Dynatrace integration.
bash
gcloud iam roles create dynatrace_monitor.helm_deployment --project=<your_project_ID> --file=dynatrace-gcp-monitor-helm-deployment-role.yaml

Note: Be sure to add this role to your GCP user. For details, see Grant or revoke a single role.
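
For reference, binding the custom role to your own account from Cloud Shell might look like the sketch below; the e-mail address is a placeholder and the role ID matches the one created in the previous step.

bash
# Hedged sketch: bind the custom deployment role to your GCP user.
# Replace <your_project_ID> and the e-mail address with your own values.
gcloud projects add-iam-policy-binding <your_project_ID> \
  --member="user:you@example.com" \
  --role="projects/<your_project_ID>/roles/dynatrace_monitor.helm_deployment"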

GCP settings

The location where you deploy the integration determines whether you need to change additional settings.

Deploy on a GKE Autopilot cluster

If you deploy the integration on an existing GKE Autopilot cluster or on a new Autopilot cluster that the deployment script creates automatically, no additional settings are required.

Deploy on a GKE standard cluster

If you deploy the integration on an existing GKE standard cluster, you need to:

  • Enable Workload Identity on the cluster.
  • Enable GKE_METADATA on the GKE node pools.
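
As a rough sketch of these two changes from Cloud Shell (cluster, node pool, region, and project placeholders are illustrative; check the current gcloud reference before running):

bash
# Enable Workload Identity on an existing GKE standard cluster.
gcloud container clusters update <cluster> \
  --region <region> \
  --workload-pool=<project_ID>.svc.id.goog

# Switch a node pool to the GKE_METADATA metadata server.
gcloud container node-pools update <node-pool> \
  --cluster=<cluster> \
  --region <region> \
  --workload-metadata=GKE_METADATA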

Dynatrace permissions

  • Create an API token and enable the following permissions:
    • API v1:
      • Read configuration
      • Write configuration
    • API v2:
      • Ingest metrics
      • Read extensions
      • Write extensions
      • Read extension monitoring configurations
      • Write extension monitoring configurations
      • Read extension environment configurations
      • Write extension environment configurations
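
If you prefer to create this token via the API rather than the web UI, a sketch using the Dynatrace Tokens API v2 might look like the block below. The scope names are our assumed mapping of the permissions listed above, the environment URL and authorization token are placeholders, and the call itself requires a token with the apiTokens.write scope.

bash
# Hedged sketch: create an API token with the permissions listed above.
curl -X POST "https://<your-environment-id>.live.dynatrace.com/api/v2/apiTokens" \
  -H "Authorization: Api-Token <token-with-apiTokens.write>" \
  -H "Content-Type: application/json" \
  -d '{
    "name": "dynatrace-gcp-monitor",
    "scopes": [
      "ReadConfig", "WriteConfig",
      "metrics.ingest",
      "extensions.read", "extensions.write",
      "extensionConfigurations.read", "extensionConfigurations.write",
      "extensionEnvironment.read", "extensionEnvironment.write"
    ]
  }'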

Install

Complete the steps below to finish your setup.

Download the Helm deployment package in Google Cloud Shell

Configure parameter values

Connect your Kubernetes cluster

Run the deployment script

Download the Helm deployment package in Google Cloud Shell

bash
wget -q "https://github.com/dynatrace-oss/dynatrace-gcp-monitor/releases/latest/download/helm-deployment-package.tar"; tar -xvf helm-deployment-package.tar; chmod +x helm-deployment-package/deploy-helm.sh

Configure parameter values

  1. The Helm deployment package contains a values.yaml file with the necessary configuration for this deployment. Go to helm-deployment-package/dynatrace-gcp-monitor and edit the values.yaml file, setting the required and optional parameter values as follows.

    Note: You might want to store this file somewhere for future updates, since you'll need it for redeployments. Also, keep in mind that its schema can change; in that case, use the new file and copy over only the parameter values.

Parameter reference (parameter name, description, and default value):

  • gcpProjectId (required): The ID of the GCP project you've selected for deployment. Default: your current project ID.
  • deploymentType (required): Set to 'metrics'. Default: all.
  • dynatraceAccessKey (required): Your Dynatrace API token with the required permissions.
  • dynatraceUrl (required): For SaaS metric ingestion, it's your environment URL (https://<your-environment-id>.live.dynatrace.com). For Managed metric ingestion, it's your cluster URL (https://<cluster_ID>.managed.internal.dynatrace/e/<your_environment_ID>). For Managed metric ingestion with an existing ActiveGate, it's the URL of your ActiveGate (https://<your_activegate_IP_or_hostname>:9999/e/<your_environment_ID>). Note: To determine <your-environment-id>, see environment ID.
  • requireValidCertificate (optional): If set to true, Dynatrace requires the SSL certificate of your Dynatrace environment. Default: true.
  • selfMonitoringEnabled (optional): Send custom metrics to GCP to quickly diagnose whether dynatrace-gcp-monitor processes and sends metrics to Dynatrace properly. For details, see Self-monitoring metrics for the Dynatrace GCP integration. Default: false.
  • serviceAccount (optional): Name of the service account to be created.
  • dockerImage (optional): Dynatrace GCP Monitor docker image. We recommend using the default value, but you can adapt it if needed. Default: dynatrace/dynatrace-gcp-monitor:v1-latest.
  • printMetricIngestInput (optional): If set to true, the GCP Monitor outputs the lines of metrics to stdout. Default: false.
  • serviceUsageBooking (optional): Service usage booking is used for metrics and determines a caller-specified project for quota and billing purposes. If set to source, monitoring API calls are booked in the project where the Kubernetes container is running. If set to destination, monitoring API calls are booked in the project that is monitored. For details, see Monitor multiple GCP projects - Standard environments - Step 4. Default: source.
  • useProxy (optional): Depending on the value you set for this flag, the GCP Monitor will use proxy settings for Dynatrace only (DT_ONLY), for the GCP API only (GCP_ONLY), or for both (ALL). By default, proxy settings are not used.
  • httpProxy (optional): The proxy HTTP address; use this flag in conjunction with useProxy.
  • httpsProxy (optional): The proxy HTTPS address; use this flag in conjunction with useProxy.
  • gcpServicesYaml (optional): Configuration file for GCP services.
  • queryInterval (optional): Metrics polling interval in minutes. Allowed values: 1 - 6. Default: 3.
  • scopingProjectSupportEnabled (optional): Set to true when metrics scope is configured, so metrics will be collected from all projects added to the metrics scope. For details, see Monitor multiple GCP projects - Large environments. Default: false.
  2. Choose which services you want Dynatrace to monitor.

    By default, the GCP integration starts monitoring a set of selected services. Uncomment any additional services you want Dynatrace to monitor in the values.yaml file.

Note: For DDU consumption information, see Monitoring consumption.
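
To quickly review which services are currently uncommented in your local copy of values.yaml, a simple check such as the following can help (the path matches the one used in the download step):

bash
# List the service entries that are currently active (not commented out).
grep -E '^\s*-\s*service:' helm-deployment-package/dynatrace-gcp-monitor/values.yaml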

Connect your Kubernetes cluster

  • If you want the deployment script to create a new GKE Autopilot cluster, add --create-autopilot-cluster to the script. In this case, the connection to the cluster is set up automatically and you can proceed to step 4 (Run the deployment script).
  • If you run the deployment script on an existing GKE standard or GKE Autopilot cluster, connect to your cluster from the GCP console or via the terminal, as described below.
  1. In your GCP console, go to your Kubernetes Engine.
  2. Select Clusters, and then select Connect.
  3. Select Run in Cloud Shell.

  4. Run the command below, making sure to replace

  • <cluster> with your cluster name
  • <region> with the region where your cluster is running
  • <project> with the project ID where your cluster is running
sh
gcloud container clusters get-credentials <cluster> --region <region> --project <project>

For details, see Configuring cluster access for kubectl.

Run the deployment script

  • If you run the deployment script on an existing standard GKE or GKE Autopilot cluster, the deployment script will create an IAM service account with the necessary roles and deploy dynatrace-gcp-monitor to your Kubernetes cluster.
  • If you run the deployment script with the --create-autopilot-cluster option, the deployment script will automatically create the new GKE Autopilot cluster and deploy dynatrace-gcp-monitor to it.

To run the deployment script, follow the instructions below.

If you run the deployment script on an existing cluster, the latest versions of the GCP extensions will be uploaded. You have two options:

  • Run the deployment script without parameters if you want to use the default values provided (dynatrace-gcp-monitor-sa for the IAM service account name and dynatrace_monitor for the IAM role name prefix):
bash
cd helm-deployment-package
./deploy-helm.sh
  • Run the deployment script with parameters if you want to set your own values (be sure to replace the placeholders with your desired values):
bash
cd helm-deployment-package
./deploy-helm.sh [--role-name <role-to-be-created/updated>]

Note: To keep the versions of extensions that are already present and install the latest versions only for the selected extensions that are not yet present, run the command below instead.

bash
cd helm-deployment-package
./deploy-helm.sh --without-extensions-upgrade

If you run the deployment script with the --create-autopilot-cluster option, run the command below. The latest versions of the extensions will be uploaded.

bash
cd helm-deployment-package
./deploy-helm.sh --create-autopilot-cluster

Note: To set a different name for the new cluster, run the command below instead, making sure to replace the placeholder (<name-of-new-cluster>) with your preferred name.

bash
cd helm-deployment-package
./deploy-helm.sh --create-autopilot-cluster --autopilot-cluster-name <name-of-new-cluster>

Note: To keep the versions of extensions that are already present and install the latest versions only for the selected extensions that are not yet present, run the command below instead.

bash
cd helm-deployment-package
./deploy-helm.sh --create-autopilot-cluster --without-extensions-upgrade

Verify installation

To check whether installation was successful

  1. Check if the container is running.

    Note: After the installation, it may take a couple of minutes before the container is up and running.

    plaintext
    kubectl -n dynatrace get pods
  2. Check the container logs for errors or exceptions. You have two options:

Run the following command.

plaintext
kubectl -n dynatrace logs -l app=dynatrace-gcp-monitor -c dynatrace-gcp-monitor-metrics

To check the container logs for errors in your GCP console

  1. Go to Logs explorer.
  2. Use the filters below to get metric and/or log ingest logs from the Kubernetes container:
    • resource.type="k8s_container"
    • resource.labels.container_name="dynatrace-gcp-monitor-metrics"
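
The same filters can also be applied from Cloud Shell with the Cloud Logging CLI; the limit and output format below are chosen for illustration.

bash
# Read recent logs of the metrics container using the filters above.
gcloud logging read \
  'resource.type="k8s_container" AND resource.labels.container_name="dynatrace-gcp-monitor-metrics"' \
  --limit=50 --format="value(textPayload)"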
  3. Check if dashboards are imported.

    In the Dynatrace menu, go to Dashboards and filter by Tag for Google Cloud. A number of dashboards for Google Cloud Services should be available.

Enable alerting

To activate alerting, you need to enable metric events for alerting in Dynatrace.

To enable metric events

  1. In the Dynatrace menu, go to Settings.
  2. In Anomaly detection, select Metric events.
  3. Filter for GCP alerts and turn on the On/Off toggle for the alerts you want to activate.

View metrics

After deploying the integration, you can see metrics from monitored services (in the Dynatrace menu, go to Metrics and filter by gcp).
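
If you prefer the API over the UI, you can also list the ingested metric keys with the Dynatrace Metrics API v2; the sketch below uses placeholder values and assumes a token with the metrics.read scope (not included in the ingestion token created earlier).

bash
# Hedged sketch: list metric keys whose metadata contains "gcp".
curl -s "https://<your-environment-id>.live.dynatrace.com/api/v2/metrics?text=gcp" \
  -H "Authorization: Api-Token <token-with-metrics.read>"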

View enabled services

The list of currently enabled services can be found in the cluster's ConfigMap named dynatrace-gcp-monitor-config.
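
For example, you can display it with kubectl (namespace and ConfigMap name as used by this deployment):

bash
# Show the ConfigMap that holds the list of currently enabled services.
kubectl -n dynatrace get configmap dynatrace-gcp-monitor-config -o yaml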

Update services

To add or remove services, or to update the versions of existing ones, modify the corresponding list of services and redeploy.

  1. Edit values.yaml by commenting or uncommenting the configuration blocks that correspond to specific services.

    Note: If you already deleted the deployment package and no longer have the original values.yaml file, you can use a new one. In this case, the new file will override your previous configuration, so make sure not to accidentally disable monitoring of previously monitored services.

    Terminology within the file includes:

    • service: the name of the GCP service you want to monitor. Services are grouped into extensions, but you can decide what to monitor at a lower level (featureSets).
    • featureSet: a set of metrics for a given service. default_metrics is the default featureSet with a recommended set of metrics to monitor. For more specific use cases, you can consider monitoring additional sets, such as the istio featureSet for the gae_instance service.
    • filter_conditions: a service-level filter that enables you to narrow the monitoring scope. It is based on the GCP Monitoring filters.
      Example:
      yaml
      filter_conditions: resource.labels.location = "us-central1-c" AND resource.labels.namespace_name = "dynatrace"
  2. Update monitored services by running the script below.

    Note: Version upgrade of extensions is done by default. To keep the versions of existing extensions, run the script with the --without-extensions-upgrade parameter.

    bash
    cd helm-deployment-package
    ./deploy-helm.sh
  3. If you removed services from monitoring, find the relevant extensions in your Dynatrace Hub (in the Dynatrace menu, go to Extensions) and delete them to remove service-specific assets (dashboards, alerts, etc).

Example

In the following example

  • The gae_instance service is disabled.
  • For the gce_instance service, only two feature sets are enabled: default_metrics and istio.
yaml
# Google App Engine Instance
#- service: gae_instance
#  featureSets:
#    - default_metrics
#  vars:
#    filter_conditions: ""
# Google VM Instance
- service: gce_instance
  featureSets:
    - default_metrics
    # - agent
    # - firewallinsights
    - istio
    # - uptime_check
  vars:
    filter_conditions: ""

For a complete list of the GCP supported services, see Google Cloud Platform supported service metrics.

Change deployment settings

  • To change the deployment type (all, metrics, or logs), see Change deployment type.
  • To change which services are monitored, see Add or remove services.
  • To change other settings in values.yaml, see Change parameters from values.yaml.

Change parameters from values.yaml

To load a new values.yaml file, you need to upgrade your Helm release.

To update your Helm release

  1. Find out what Helm release version you're using.

    plaintext
    helm ls -n dynatrace
  2. Run the command below, making sure to replace <your-helm-release> with the value from the previous step.

    plaintext
    helm upgrade <your-helm-release> dynatrace-gcp-monitor -n dynatrace

For details, see Helm upgrade.

Change deployment type

To change the deployment type (all, metrics, or logs)

  1. Find out what helm release version you're using.

    plaintext
    helm ls -n dynatrace
  2. Uninstall the release.

    Note: Be sure to replace <your-helm-release> with the release name from the previous output.

    plaintext
    helm uninstall <your-helm-release> -n dynatrace
  3. Edit deploymentType in values.yaml with the new value and save the file.

  4. Run the deployment command again. For details, see Run the deployment script.

Troubleshoot

To investigate potential deployment and connectivity issues

  1. Verify installation
  2. Enable self-monitoring (optional)
  3. Check the dynatrace_gcp_<date_time>.log log file created during the installation process.
  • This file will be created each time the installation script runs.
  • The debug information won't contain sensitive data such as the Dynatrace access key.
  4. If you are contacting Dynatrace ONE:
    • Make sure to provide the dynatrace_gcp_<date_time>.log log file described in the previous step.
    • Provide version information.
      • For issues during installation, check the version.txt file.
      • For issues during runtime, check container logs.
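
A quick way to capture runtime version information is to read the image tag of the running pod; the sketch below relies on the app=dynatrace-gcp-monitor label used earlier in this guide.

bash
# Print the container image (including the version tag) of the running monitor pod.
kubectl -n dynatrace get pods -l app=dynatrace-gcp-monitor \
  -o jsonpath='{.items[*].spec.containers[*].image}'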

Uninstall

  1. Find out what Helm release version you're using.
plaintext
helm ls -n dynatrace
  2. Uninstall the release.

Note: Be sure to replace <your-helm-release> with the release name from the previous output.

plaintext
helm uninstall <your-helm-release> -n dynatrace

Alternatively, you can delete the namespace.

bash
kubectl delete namespace dynatrace
  3. To remove all monitoring assets (dashboards, alerts, etc.) from Dynatrace, you need to remove all GCP extensions.

To remove an extension

  1. In the Dynatrace menu, go to Extensions and search for the GCP extensions.
  2. Select an extension you want to remove, and then select the trash icon in the Actions column to remove it.

Repeat the procedure until you remove all GCP extensions.

Monitoring consumption

All cloud services consume DDUs. The amount of DDU consumption per service instance depends on the number of monitored metrics and their dimensions (each metric dimension results in the ingestion of 1 data point; 1 data point consumes 0.001 DDUs). For details, see Extending Dynatrace (Davis data units).
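For example, ingesting 1,000 data points consumes 1,000 × 0.001 = 1 DDU; a single metric dimension reported once per minute would therefore consume about 1,440 × 0.001 = 1.44 DDUs per day.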

Related topics
  • Set up Dynatrace on Google Cloud Platform

    Monitor Google Cloud Platform with Dynatrace.