Set up the Dynatrace GCP metric integration on a GKE cluster
Dynatrace version 1.230+
As an alternative to the main deployment, which provides Google Cloud Platform monitoring for both metrics and logs, you can set up monitoring for metrics only. In this scenario, you run the deployment script in Google Cloud Shell. The instructions depend on where you want the deployment script to run:
- On a new GKE Autopilot cluster created automatically (recommended)
- On an existing GKE standard or GKE Autopilot cluster
During setup, GKE will run a metric forwarder container. After installation, you'll get metrics, dashboards, and alerts for your configured services in Dynatrace.
For other deployment options, see Alternative deployment scenarios.
This page describes how to install version 1.0 of the GCP integration on a GKE cluster.
- If you already have an earlier version installed, you need to migrate.
Limitations
Dynatrace GCP metric integration supports up to 50 GCP projects with the standard deployment. To monitor larger environments, you need to enable metrics scope. See Monitor multiple GCP projects - Large environments.
Prerequisites
To deploy the integration, you need to make sure the following requirements are met:
GCP permissions
Running the deployment script requires a list of permissions. You need to create a custom role (see below) and use it to deploy dynatrace-gcp-monitor.
- Create a YAML file named dynatrace-gcp-monitor-helm-deployment-role.yaml with the following content:
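The permission list itself ships with the Dynatrace deployment package and isn't reproduced here. The sketch below only illustrates the custom-role YAML format that gcloud iam roles create expects (title, description, stage, includedPermissions); the permissions shown are assumptions for illustration, so use the documented list instead.
# Illustrative sketch only - replace includedPermissions with the documented list.
title: Dynatrace GCP Monitor helm deployment role
description: Custom role for deploying dynatrace-gcp-monitor with Helm
stage: GA
includedPermissions:
  - resourcemanager.projects.get   # example permission (assumed)
  - iam.serviceAccounts.create     # example permission (assumed)
  - iam.roles.create               # example permission (assumed)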
- Run the command below, replacing <your_project_ID> with the project ID where you want to deploy the Dynatrace integration.
gcloud iam roles create dynatrace_monitor.helm_deployment --project=<your_project_ID> --file=dynatrace-gcp-monitor-helm-deployment-role.yaml
Note: Be sure to add this role to your GCP user. For details, see Grant or revoke a single role.
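Optionally, to confirm the role was created before proceeding, you can describe it with the standard gcloud command (same placeholders as above):
gcloud iam roles describe dynatrace_monitor.helm_deployment --project=<your_project_ID>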
GCP settings
The location where you deploy the integration determines whether you need to change additional settings.
Deploy on a GKE Autopilot cluster
If you deploy the integration on an existing GKE Autopilot cluster or on a new Autopilot cluster created automatically by the deployment script, no additional settings are required.
Deploy on a GKE standard cluster
If you deploy the integration on an existing GKE standard cluster, you need to:
Dynatrace permissions
- Create an API token and enable the following permissions:
  - API v1:
    - Read configuration
    - Write configuration
  - API v2:
    - Ingest metrics
    - Read extensions
    - Write extensions
    - Read extension monitoring configurations
    - Write extension monitoring configurations
    - Read extension environment configurations
    - Write extension environment configurations
Install
Complete the steps below to finish your setup.
Download the Helm deployment package in Google Cloud Shell
Configure parameter values
Connect your Kubernetes cluster
Run the deployment script
Download the Helm deployment package in Google Cloud Shell
wget -q "https://github.com/dynatrace-oss/dynatrace-gcp-monitor/releases/latest/download/helm-deployment-package.tar"; tar -xvf helm-deployment-package.tar; chmod +x helm-deployment-package/deploy-helm.sh
Configure parameter values
- The Helm deployment package contains a values.yaml file with the necessary configuration for this deployment. Go to helm-deployment-package/dynatrace-gcp-monitor and edit the values.yaml file, setting the required and optional parameter values as follows.
Note: You might want to store this file for future updates, since it will be needed for redeployments. Also, keep in mind that its schema can change; in that case, use the new file and only copy over your parameter values.
| Parameter name | Description | Default value |
|---|---|---|
| gcpProjectId | required The ID of the GCP project you've selected for deployment. | Your current project ID |
| deploymentType | required Set to 'metrics'. | all |
| dynatraceAccessKey | required Your Dynatrace API token with the required permissions. | |
| dynatraceUrl | required For SaaS metric ingestion, it's your environment URL (https://<your-environment-id>.live.dynatrace.com). For Managed metric ingestion, it's your cluster URL (https://<cluster_ID>.managed.internal.dynatrace/e/<your_environment_ID>). For Managed metric ingestion with an existing ActiveGate, it's the URL of your ActiveGate (https://<your_activegate_IP_or_hostname>:9999/e/<your_environment_ID>). Note: To determine <your-environment-id>, see environment ID. | |
| requireValidCertificate | optional If set to true, the SSL certificate of your Dynatrace environment must be valid. | true |
| selfMonitoringEnabled | optional Sends custom metrics to GCP so you can quickly diagnose whether dynatrace-gcp-monitor processes and sends metrics to Dynatrace properly. For details, see Self-monitoring metrics for the Dynatrace GCP integration. | false |
| serviceAccount | optional Name of the service account to be created. | |
| dockerImage | optional Dynatrace GCP Monitor Docker image. We recommend using the default value, but you can adapt it if needed. | dynatrace/dynatrace-gcp-monitor:v1-latest |
| printMetricIngestInput | optional If set to true, the GCP Monitor outputs the lines of metrics to stdout. | false |
| serviceUsageBooking | optional Service usage booking is used for metrics and determines a caller-specified project for quota and billing purposes. If set to source, monitoring API calls are booked in the project where the Kubernetes container is running. If set to destination, monitoring API calls are booked in the project that is monitored. For details, see Monitor multiple GCP projects - Standard environments - Step 4. | source |
| useProxy | optional Depending on the value you set for this flag, the GCP Monitor will use the following proxy settings: Dynatrace (set to DT_ONLY), GCP API (set to GCP_ONLY), or both (set to ALL). | By default, proxy settings are not used. |
| httpProxy | optional The proxy HTTP address; use this flag in conjunction with USE_PROXY. | |
| httpsProxy | optional The proxy HTTPS address; use this flag in conjunction with USE_PROXY. | |
| gcpServicesYaml | optional Configuration file for GCP services. | |
| queryInterval | optional Metrics polling interval in minutes. Allowed values: 1-6. | 3 |
| scopingProjectSupportEnabled | optional Set to true when metrics scope is configured, so metrics will be collected from all projects added to the metrics scope. For details, see Monitor multiple GCP projects - Large environments. | false |
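For orientation, a minimal metrics-only configuration could look like the sketch below; the parameter names come from the table above, and the values are placeholders you need to replace with your own.
gcpProjectId: "<your_project_ID>"
deploymentType: "metrics"
dynatraceAccessKey: "<your_Dynatrace_API_token>"
dynatraceUrl: "https://<your-environment-id>.live.dynatrace.com"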
- Choose which services you want Dynatrace to monitor.
By default, the GCP integration starts monitoring a set of selected services. Uncomment any additional services you want Dynatrace to monitor in the values.yaml file.
Note: For DDU consumption information, see Monitoring consumption.
Connect your Kubernetes cluster
- If you want the deployment script to create a new GKE Autopilot cluster, add --create-autopilot-cluster to the script. In this case, the connection to the cluster is set up automatically and you can proceed to step 4.
- If you run the deployment script on an existing GKE standard or GKE Autopilot cluster, you can connect to your cluster from the GCP console or via the terminal. Follow the instructions below.
- In your GCP console, go to your Kubernetes Engine.
- Select Clusters, and then select Connect.
- Select Run in Cloud Shell.
Run the command below, making sure to replace
- <cluster> with your cluster name
- <region> with the region where your cluster is running
- <project> with the project ID where your cluster is running
gcloud container clusters get-credentials <cluster> --region <region> --project <project>
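For reference, a filled-in call might look like the following (the cluster, region, and project names are purely illustrative):
gcloud container clusters get-credentials my-gke-cluster --region us-central1 --project my-gcp-project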
For details, see Configuring cluster access for kubectl.
Run the deployment script
- If you run the deployment script on an existing GKE standard or GKE Autopilot cluster, the deployment script will create an IAM service account with the necessary roles and deploy dynatrace-gcp-monitor to your Kubernetes cluster.
- If you run the deployment script with the --create-autopilot-cluster option, the deployment script will automatically create the new GKE Autopilot cluster and deploy dynatrace-gcp-monitor to it.
To run the deployment script, follow the instructions below.
If you deploy to an existing GKE standard or GKE Autopilot cluster:
The latest versions of GCP extensions will be uploaded. You have two options:
- Run the deployment script without parameters if you want to use the default values provided (dynatrace-gcp-monitor-sa for the IAM service account name and dynatrace_monitor for the IAM role name prefix):
cd helm-deployment-package
./deploy-helm.sh
- Run the deployment script with parameters if you want to set your own values (be sure to replace the placeholders with your desired values):
cd helm-deployment-package
./deploy-helm.sh [--role-name <role-to-be-created/updated>]
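For example, a parameterized run could look like this (the role name below is illustrative, not a required value):
cd helm-deployment-package
./deploy-helm.sh --role-name dynatrace_monitor_custom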
Note: To keep the existing versions of already installed extensions and install the latest versions only for selected extensions that are not yet present, run the command below instead.
cd helm-deployment-package
./deploy-helm.sh --without-extensions-upgrade
If you want the deployment script to create a new GKE Autopilot cluster:
Run the command below. The latest versions of extensions will be uploaded.
cd helm-deployment-package
./deploy-helm.sh --create-autopilot-cluster
Note: To set a different name for the new cluster, run the command below instead, making sure to replace the placeholder (<name-of-new-cluster>) with your preferred name.
cd helm-deployment-package
./deploy-helm.sh --create-autopilot-cluster --autopilot-cluster-name <name-of-new-cluster>
Note: To keep the existing versions of already installed extensions and install the latest versions only for selected extensions that are not yet present, run the command below instead.
cd helm-deployment-package
./deploy-helm.sh --create-autopilot-cluster --without-extensions-upgrade
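For example, creating a cluster with an explicit name could look like this (the cluster name is illustrative):
cd helm-deployment-package
./deploy-helm.sh --create-autopilot-cluster --autopilot-cluster-name dynatrace-gcp-monitoring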
Verify installation
To check whether installation was successful
- Check if the container is running.
Note: After the installation, it may take a couple of minutes before the container is up and running.
kubectl -n dynatrace get pods
- Check the container logs for errors or exceptions. You have two options:
Run the following command.
kubectl -n dynatrace logs -l app=dynatrace-gcp-monitor -c dynatrace-gcp-monitor-metrics
Alternatively, to check the container logs for errors in your GCP console:
- Go to Logs explorer.
- Use the filters below to get metric and/or log ingest logs from the Kubernetes container:
resource.type="k8s_container"
resource.labels.container_name="dynatrace-gcp-monitor-metrics"
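To narrow the results to problems only, you can additionally filter by severity; this is a standard Logging query clause, and adding it is optional:
resource.type="k8s_container"
resource.labels.container_name="dynatrace-gcp-monitor-metrics"
severity>=ERROR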
- Check if dashboards are imported.
In the Dynatrace menu, go to Dashboards and filter by Tag for Google Cloud. A number of dashboards for Google Cloud services should be available.
Enable alerting
To activate alerting, you need to enable metric events for alerting in Dynatrace.
To enable metric events
- In the Dynatrace menu, go to Settings.
- In Anomaly detection, select Metric events.
- Filter for GCP alerts and set the On/Off switch to On for the alerts you want to activate.
View metrics
After deploying the integration, you can see metrics from monitored services (in the Dynatrace menu, go to Metrics and filter by gcp).
View enabled services
The list of currently enabled services can be found in the cluster's ConfigMap named dynatrace-gcp-monitor-config.
Update services
Adding, removing, and updating versions of existing services is done by modifying the corresponding list of services and redeploying.
- Edit values.yaml by commenting or uncommenting configuration blocks corresponding to specific services.
Note: If you already deleted the deployment package and don't have the original values.yaml file anymore, you can use a new one. In this case, the new file will override your previous configuration, so make sure not to accidentally disable monitoring of previously monitored services.
Terminology within the file includes:
- service: the GCP service name you want to monitor. Services are grouped by extensions, but you can decide what to monitor on a lower level (featureSets).
- featureSet: a set of metrics for a given service. default_metrics is a default featureSet with a recommended set of metrics to be monitored. In more specific use cases, you can consider monitoring additional sets, such as the istio featureSet for the gae_instance service.
- filter_conditions: a service-level filter that enables you to narrow the monitoring scope. It is based on the GCP Monitoring filters.
Example: filter_conditions: resource.labels.location = "us-central1-c" AND resource.labels.namespace_name = "dynatrace"
- Update monitored services by running the script below.
Note: Extensions are upgraded to their latest versions by default. To keep the versions of existing extensions, run the script with the --without-extensions-upgrade parameter.
cd helm-deployment-package
./deploy-helm.sh
- If you removed services from monitoring, find the relevant extensions in your Dynatrace Hub (in the Dynatrace menu, go to Extensions) and delete them to remove service-specific assets (dashboards, alerts, etc.).
Example
In the following example:
- The gae_instance service is disabled.
- For the gce_instance service, only two feature sets are enabled: default_metrics and istio.
# Google App Engine Instance
#- service: gae_instance
#  featureSets:
#    - default_metrics
#  vars:
#    filter_conditions: ""
# Google VM Instance
- service: gce_instance
  featureSets:
    - default_metrics
    # - agent
    # - firewallinsights
    - istio
    # - uptime_check
  vars:
    filter_conditions: ""
For a complete list of the GCP supported services, see Google Cloud Platform supported service metrics.
Change deployment settings
- To change the deployment type (all, metrics, or logs), see Change deployment type.
- To change which services are monitored, see Add or remove services.
- To change other settings in values.yaml, see Change parameters from values.yaml.
Change parameters from values.yaml
To load a new values.yaml file, you need to upgrade your Helm release.
To update your Helm release
- Find out what Helm release version you're using.
helm ls -n dynatrace
- Run the command below, making sure to replace <your-helm-release> with the value from the previous step.
helm upgrade <your-helm-release> dynatrace-gcp-monitor -n dynatrace
For details, see Helm upgrade.
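As a sketch, assuming helm ls reported a release named dynatrace-gcp-monitor and you run the command from the helm-deployment-package directory (so the dynatrace-gcp-monitor chart folder with your edited values.yaml is in the working directory):
cd helm-deployment-package
helm upgrade dynatrace-gcp-monitor dynatrace-gcp-monitor -n dynatrace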
Change deployment type
To change the deployment type (all, metrics, or logs)
- Find out what Helm release version you're using.
helm ls -n dynatrace
- Uninstall the release.
Note: Be sure to replace <your-helm-release> with the release name from the previous output.
helm uninstall <your-helm-release> -n dynatrace
- Edit deploymentType in values.yaml with the new value and save the file.
- Run the deployment command again. For details, see Run the deployment script.
Troubleshoot
To investigate potential deployment and connectivity issues
- Verify installation
- Enable self-monitoring (optional)
- Check the dynatrace_gcp_<date_time>.log log file created during the installation process.
- This file will be created each time the installation script runs.
- The debug information won't contain sensitive data such as the Dynatrace access key.
- If you are contacting Dynatrace ONE:
  - Make sure to provide the dynatrace_gcp_<date_time>.log log file described in the previous step.
  - Provide version information.
    - For issues during installation, check the version.txt file.
    - For issues during runtime, check the container logs.
Uninstall
- Find out what Helm release version you're using.
helm ls -n dynatrace
- Uninstall the release.
Note: Be sure to replace <your-helm-release> with the release name from the previous output.
helm uninstall <your-helm-release> -n dynatrace
Alternatively, you can delete the namespace.
kubectl delete namespace dynatrace
- To remove all monitoring assets (dashboards, alerts, etc) from Dynatrace, you need to remove all GCP extensions.
To remove an extension
- In the Dynatrace menu, go to Extensions and search for the GCP extensions.
- Select an extension you want to remove, and then select the trash icon in the Actions column to remove it.
Repeat the procedure until you remove all GCP extensions.
Monitoring consumption
All cloud services consume DDUs. The amount of DDU consumption per service instance depends on the number of monitored metrics and their dimensions (each metric dimension results in the ingestion of 1 data point; 1 data point consumes 0.001 DDUs). For details, see Extending Dynatrace (Davis data units).
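As an illustrative calculation (the numbers are hypothetical): if a monitored service instance produces 20 data points per polling cycle and the integration polls every 3 minutes (the default queryInterval), it ingests 480 × 20 = 9,600 data points per day, which corresponds to 9,600 × 0.001 = 9.6 DDUs per day.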