In part one of this series, I talked through the common pain points software delivery teams face as they’re asked to support cloud adoption and modernization initiatives. Part one also provided an overview of Dynatrace’s Cloud Automation solution, Microsoft’s GitHub Actions, and open-source examples you can use and extend related to deployment and release monitoring.
This blog continues with more examples of Dynatrace’s Monitoring as Code (Monaco) and Service Level Objectives (SLOs) release validation using Dynatrace SaaS Cloud Automation. For orientation on the use cases in this blog series, refer to the picture below.
Example #3 – Automate Monitoring configuration as code
The 2021 State of DevOps report found that successful organizations enable application teams to set up and configure monitoring and alerting through self-service capabilities, removing the need for manual work from teams responsible for monitoring. Without such capabilities, configuring your environments can devolve into chaos, with losses in flexibility, speed, and stability.
The Dynatrace configuration API helps many Dynatrace customers implement this best practice and gain its benefits for Dynatrace configurations such as alerting rules, synthetic scripts, dashboards, and SLO monitors. It lets customers track and manage Dynatrace monitoring environment configurations through create, read, update, and delete endpoints for each type of configuration.
Since collections of configurations are typically required for multiple Dynatrace environments, Dynatrace has developed the Monaco toolset to help manage and execute the various configuration files within a project structure. Monaco is implemented as a command-line interface (CLI) utility. The CLI reads a specified Dynatrace environment definition and a project folder containing the collection of Dynatrace configuration files, then uses those files as the payloads when it calls the Dynatrace configuration API to update the Dynatrace configuration.
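To make the project structure concrete, here is a minimal sketch of a Monaco project. The folder names, the `application-tagging` config name, and the environment names are illustrative assumptions, not taken from the examples in this post; the `environments.yaml` layout follows Monaco's documented convention of resolving the API token from an environment variable.

```yaml
# environments.yaml -- one entry per Dynatrace environment Monaco targets
# (environment names and variable names below are illustrative)
development:
  - name: "development"
  - env-url: "{{ .Env.DEV_URL }}"
  - env-token-name: "DEV_TOKEN"   # token is read from this environment variable

production:
  - name: "production"
  - env-url: "{{ .Env.PROD_URL }}"
  - env-token-name: "PROD_TOKEN"
```

Each project folder then groups configurations by API type, e.g. `projects/infrastructure/auto-tag/auto-tag.yaml` listing named configs that point at JSON payload templates for the configuration API.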
Below is a picture illustrating the use case of using Monaco as part of a code delivery pipeline.
To automate Monaco within code delivery pipelines, the Dynatrace team has created an open-source container image bundling Dynatrace's Monaco CLI, called the “Monaco Runner”. It plugs in as a GitHub Action, since container images can be run natively by GitHub as part of an Action workflow.
The GitHub Actions workflow, called the “Dynatrace Monitoring as Code” pipeline, follows this logic:
- The “Monaco Runner” image is defined as the GitHub Action source image
- Define Dynatrace URL and API Token as environment variables
- Define the “Monaco Runner” as the action step. In this example, “deploy” the configuration.
- The “Monaco Runner” reads in the specification files. The example is taken from this collection of files
- The result is a new or updated configuration within Dynatrace. In this example, new tagging rules are shown
The image below also refers to this example.
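The steps above can be sketched as a workflow file. This is a minimal, hedged sketch: the image reference (`dynatraceace/monaco-runner`), the secret names, and the `deploy` argument syntax are illustrative assumptions standing in for the actual open-source example linked above.

```yaml
# .github/workflows/monitoring-as-code.yml (illustrative sketch)
name: Dynatrace Monitoring as Code
on:
  push:
    branches: [main]

jobs:
  monaco-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      # Run the "Monaco Runner" container image as the action step.
      # Image name and args below are assumptions for illustration.
      - name: Deploy Dynatrace configuration
        uses: docker://dynatraceace/monaco-runner:latest
        env:
          DT_URL: ${{ secrets.DT_URL }}             # Dynatrace environment URL
          DT_API_TOKEN: ${{ secrets.DT_API_TOKEN }} # Dynatrace API token
        with:
          args: deploy projects/   # folder with the Monaco configuration files
```

The runner reads the specification files from the checked-out repo and applies them via the Dynatrace configuration API, yielding new or updated configuration (such as the tagging rules shown in the example).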
Monaco brings DevOps teams a self-service model for establishing monitoring so they can focus more time on building business services, as described in more detail in this blog post. Monaco also fits the GitOps process and mindset, where the desired state of the whole system is described using a declarative specification for each environment.
Example #4 – Automated release validation
Since 2019, Dynatrace has been leading the development of an open-source initiative called Keptn to help organizations adopt cloud-native concepts for their cutting-edge microservice applications and application modernization initiatives.
Keptn eliminates the need for organizations to write custom scripts that tie together their DevOps tools of choice for delivery and operational automation. Keptn solves the custom tool-integration dilemma through an open, standard communication protocol driven through the Continuous Delivery Foundation (CDF). Keptn also automates orchestration decisions through the core project capability of evaluating SLOs between every automation sequence task.
This main use case – automated release evaluation – has also been widely adopted as a capability known as “Quality Gates” which integrates seamlessly into existing continuous delivery automation – adding data-driven decisions without manual coding for new software builds and releases.
To understand the setup for Quality Gates, refer to the picture below. On the left are the set of specific metrics to be collected, known as Service Level Indicators (SLIs). On the right are the SLOs that define the pass, fail, and warning criteria for each SLI. SLIs can come from any data provider. As described earlier for monitoring configuration, these SLI and SLO configurations are also checked into a code repo using this declarative specification for each environment and service.
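The declarative specification pairs an `sli.yaml` (metric queries) with an `slo.yaml` (criteria). The sketch below uses Keptn's file format; the SLI names and metric selectors are illustrative assumptions, while the total-score thresholds (pass at 90, warning down to 70) match the example evaluation described later in this post.

```yaml
# sli.yaml -- maps SLI names to provider queries (Dynatrace metric selectors here)
spec_version: "1.0"
indicators:
  response_time_p95: "metricSelector=builtin:service.response.time:percentile(95)"
  error_rate: "metricSelector=builtin:service.errors.total.rate:avg"
```

```yaml
# slo.yaml -- pass/warning criteria per SLI plus the total score thresholds
spec_version: "1.0"
comparison:
  compare_with: single_result
objectives:
  - sli: response_time_p95
    pass:
      - criteria:
          - "<=500"     # pass if p95 response time is at most 500 ms
    warning:
      - criteria:
          - "<=800"     # warn between 500 and 800 ms
    weight: 1
  - sli: error_rate
    pass:
      - criteria:
          - "<=2"       # pass if error rate is at most 2%
total_score:
  pass: "90%"     # overall score >= 90 passes
  warning: "70%"  # 70-89 yields a warning; below 70 fails
```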
If the SLI metrics provider is Dynatrace, SLIs and SLOs can also be configured within a custom Dynatrace dashboard versus having to manage the individual files. When the SLO evaluation is requested, the Cloud Automation service simply reads the SLOs and SLIs that were defined within the dashboard. This approach provides the benefit of having a dashboard that’s in sync with the SLO automation.
Below is an example that shows a Dynatrace dashboard and SLO evaluation results.
- On the left is the dashboard where various SLIs, SLOs, and targets are configured.
- To the right is the SLO Evaluation page where the target and actual value for each SLI are displayed along with the evaluation result. In this example, the result is a “warning” because the overall score was 80 – that’s between the passing score of 90 and the failing score of 70.
A “Quality Gate” evaluation can be triggered via the Keptn API or the Keptn CLI binary. Once triggered, the SLO evaluation service first gathers each SLI's actual value, compares it against its target, and finally aggregates the results into a total SLO score. The resulting score determines a pass/fail that allows or stops the promotion of a release.
To make it easy to integrate the SLO evaluation request into GitHub workflows, you can start with an open-source container image, called “Keptn Automation”, which calls the Keptn CLI. This is the same approach described above for the “Monaco Runner”.
The image below shows this example GitHub Actions workflow, called the “SLO evaluation” pipeline, which is carried out through these actions:
- Define the “Keptn Automation” image as the GitHub Action source image
- Define the various values like the evaluation timeframe, Keptn URL, Keptn API Token, and Keptn project as environment variables
- Define the “Keptn Automation” as the GitHub Action step. In this example, the step performs the SLO evaluation
- The “Keptn Automation” reads in the specification files, performs the evaluation, and outputs the result in a log with a URL to the SLO Evaluation web page. The example is taken from this collection of files
- Detail for each SLI and the overall result are viewable on the SLO evaluation page.
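The steps above can be sketched as a workflow as well. The container image name and secret names are illustrative assumptions; the `keptn auth` and `keptn trigger evaluation` commands and their flags follow the Keptn CLI.

```yaml
# .github/workflows/slo-evaluation.yml (illustrative sketch)
name: SLO evaluation
on: workflow_dispatch

jobs:
  evaluate:
    runs-on: ubuntu-latest
    # "Keptn Automation" image reference below is an assumption
    container:
      image: docker.io/example/keptn-automation:latest
    env:
      KEPTN_URL: ${{ secrets.KEPTN_URL }}
      KEPTN_API_TOKEN: ${{ secrets.KEPTN_API_TOKEN }}
      KEPTN_PROJECT: demo          # illustrative project/service/stage names
      KEPTN_SERVICE: catalog
      KEPTN_STAGE: dev
      EVALUATION_TIMEFRAME: 30m    # timeframe the SLIs are evaluated over
    steps:
      - name: Trigger SLO evaluation
        run: |
          keptn auth --endpoint="$KEPTN_URL" --api-token="$KEPTN_API_TOKEN"
          keptn trigger evaluation \
            --project="$KEPTN_PROJECT" \
            --service="$KEPTN_SERVICE" \
            --stage="$KEPTN_STAGE" \
            --timeframe="$EVALUATION_TIMEFRAME" \
            --watch   # stream events until the evaluation finishes
```

The evaluation result, including the per-SLI detail and the link to the SLO Evaluation web page, appears in the step's log output.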
Example #5 – Onboard service to Dynatrace Cloud Automation
The Keptn SLO evaluation in the previous example has a few prerequisites:
- A Dynatrace tenant and an instance of Keptn or Dynatrace Cloud Automation installed
- Dynatrace Keptn service installed in the Keptn environment
- Dynatrace URL and API token stored as a Secrets object read by the Dynatrace Keptn service
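The last prerequisite can itself be scripted. A minimal sketch of a workflow step that stores the Dynatrace credentials as a Keptn secret, using the Keptn CLI's `create secret` command; the secret key names `DT_TENANT` and `DT_API_TOKEN` follow the Dynatrace Keptn service's convention, and the GitHub secret names are assumptions.

```yaml
# illustrative workflow step: store Dynatrace credentials as a Keptn secret
- name: Create Dynatrace secret
  run: |
    keptn create secret dynatrace \
      --from-literal="DT_TENANT=${{ secrets.DT_URL }}" \
      --from-literal="DT_API_TOKEN=${{ secrets.DT_API_TOKEN }}"
```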
A second set of steps is required to onboard a service, and again whenever the service's SLI processing rules change:
- A Keptn project with environment stages (such as DEV, TEST, PROD)
- For each service, register it within a Keptn project as a new Keptn service
- For each service, configure the Dynatrace SLI processing to use “Dynatrace dashboard” or “SLO and SLI resource files” for the SLO evaluations.
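The SLI processing choice in the last step is expressed in a small configuration resource read by the Dynatrace Keptn service. The sketch below assumes the service's `dynatrace.conf.yaml` convention; treat the exact keys as an assumption to verify against your Keptn version.

```yaml
# dynatrace/dynatrace.conf.yaml -- stored as a Keptn resource for the service
spec_version: '0.1.0'
# "query" tells the Dynatrace Keptn service to pull SLIs and SLOs from a
# matching Dynatrace dashboard at evaluation time; omit this line to fall
# back to the sli.yaml / slo.yaml resource files checked into the repo
dashboard: query
```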
The good news is that these steps can be easily automated and leveraged by multiple teams using the same open-source GitHub Action, Keptn Automation, described in the previous section.
The “Keptn Automation” GitHub Action is simply a Docker container image containing the Keptn CLI and a set of bash scripts with the logic to call the Keptn CLI for these use cases:
- Create the Dynatrace Keptn service secret
- Onboard a Keptn service
- Perform SLO evaluation
If there’s a new version of the Keptn CLI, users simply update and rebuild the Docker container.
To “Onboard a Keptn service”, the workflow calls a set of Keptn CLI commands. Refer to the image below for this example GitHub Actions workflow that onboards a service.
- Workflow environment variables section: These variables include both strings, such as the project name, and paths to the SLI and SLO files within the repo; they are made available to all the steps in the workflow.
- Workflow job container: This section specifies the Keptn Automation image and tag to use as well as the expected environment variable values to be used by the container when the step is run.
- Workflow job steps section: There are two important steps in this section. The first step authenticates the Keptn CLI against the Keptn instance running within Kubernetes. The second step invokes the Keptn create-service command to onboard the catalog service to Keptn.
- View the onboarded service: Once the “Keptn onboard service” pipeline is run, the newly onboarded service can be viewed within the Keptn web UI along with the historical SLO evaluation results for that service as shown in the example below.
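Putting the pieces together, the onboarding workflow can be sketched as below. The image reference, project name, and file paths are illustrative assumptions; the `keptn create service` and `keptn add-resource` commands and flags follow the Keptn CLI, with `resourceUri` placing each file where the evaluation expects it.

```yaml
# .github/workflows/keptn-onboard-service.yml (illustrative sketch)
name: Keptn onboard service
on: workflow_dispatch

env:
  KEPTN_PROJECT: demo        # string value shared by all steps
  SLI_FILE: keptn/sli.yaml   # paths to the SLI/SLO files within the repo
  SLO_FILE: keptn/slo.yaml

jobs:
  onboard:
    runs-on: ubuntu-latest
    container:
      image: docker.io/example/keptn-automation:latest  # assumption
    steps:
      - uses: actions/checkout@v2

      # Step 1: authenticate the CLI against the Keptn instance
      - name: Authenticate Keptn CLI
        run: |
          keptn auth --endpoint="${{ secrets.KEPTN_URL }}" \
            --api-token="${{ secrets.KEPTN_API_TOKEN }}"

      # Step 2: onboard the service and register its SLI/SLO resources
      - name: Onboard service and add resources
        run: |
          keptn create service catalog --project="$KEPTN_PROJECT"
          keptn add-resource --project="$KEPTN_PROJECT" --service=catalog \
            --stage=dev --resource="$SLI_FILE" --resourceUri=dynatrace/sli.yaml
          keptn add-resource --project="$KEPTN_PROJECT" --service=catalog \
            --stage=dev --resource="$SLO_FILE" --resourceUri=slo.yaml
```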
Try it yourself
All the examples shared in this blog post are available as open-source and demoed on Microsoft DevRadio, so be sure to try them in your environment and let us know how you get on.
As mentioned earlier in this blog, Keptn is embedded into the Dynatrace Platform and offered commercially for customers to use within our Cloud Automation module. Any of the examples from this blog work with either open-source Keptn or the Dynatrace Cloud Automation module.
We’d love to hear your feedback on how the combination of GitHub Actions and Dynatrace helps you to:
- Reduce time spent on manual processes by simplifying and standardizing Kubernetes deployments and introducing self-service monitoring as code
- Stop the finger-pointing by adding context for environment changes and versioning
- Improve customer experiences by ensuring that only high-quality releases progress through automated SLO quality gates.
- Part 1: How Dynatrace and GitHub help you deliver better software faster
- Monitoring-as-code through Dynatrace’s Open-Source Initiative
- Answer-driven release validation with Dynatrace SaaS Cloud Automation
- Transparent and confident software delivery with Dynatrace Release Analysis
Release Validation Product Tour
Learn how Dynatrace prevents bad quality code from reaching production with continuous release validation. Automatically evaluate code against pre-defined quality criteria and only progress code when it achieves the desired quality score.
Looking for answers?
Start a new discussion or ask for help in our Q&A forum.