AWS CodePipeline is a more recent addition to Amazon Web Services, allowing development teams to push code changes from source check-in all the way into production in a highly automated way. While code pipelines like this are not new (e.g., XebiaLabs, Electric Cloud, Jenkins Pipeline), Amazon provides seamless integration options for AWS CodeCommit, S3, CodeDeploy, Elastic Beanstalk and CloudFormation, as well as integration options for popular external DevOps tools such as Jenkins, Solano CI, Apica, Runscope or Ghost Inspector. Here is a quick overview of the most common stages in an AWS CodePipeline, the integration options you have, and where Dynatrace integrates:

Common AWS CodePipeline: from Source Commit to Production Deployment safeguarded with Dynatrace
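
If you want to inspect the stages of such a pipeline programmatically, the AWS SDK exposes them directly. Here is a minimal sketch in Python with boto3; the pipeline name is a placeholder you would replace with your own:

```python
import boto3  # AWS SDK for Python

codepipeline = boto3.client("codepipeline")

# "my-app-pipeline" is a placeholder; use your own pipeline name.
state = codepipeline.get_pipeline_state(name="my-app-pipeline")

# Print each stage (Source, Build, Test, Deploy, ...) and the status
# of its most recent execution.
for stage in state["stageStates"]:
    latest = stage.get("latestExecution", {})
    print(f"{stage['stageName']}: {latest.get('status', 'not yet executed')}")
```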

I encourage you to also watch the 3-minute explainer video from Amazon at https://aws.amazon.com/codepipeline/. It does a great job of explaining the optimal workflow and usage of AWS CodePipeline and the other tools in its ecosystem.

While the video shows how easy it is to push code through a single pipeline, or even to build multiple pipelines if you have multiple teams working on separate features, services or applications, it also shows where the bottleneck will be once you really try to scale this. I summarized my thoughts in this blog post, and also recorded a quick video where I explain AWS CodePipeline and how to scale it with Dynatrace:

Slow Pipeline Phases Become Your New Bottleneck

Just as I wrote in “Scaling Continuous Delivery: Shift-Left to Improve Lead Time”, and as we heard from Capital One in our PurePerformance podcast: the biggest challenge is keeping lead times consistently fast while onboarding more teams and more developers. All of a sudden your slower pipeline stages, such as load and performance testing, integration testing or manual approvals, slow you down. I tried to visualize it with the following graphic:

The more developers check in code and trigger pipeline runs, the more likely it is that your pipeline runs longer: impacting lead time and resulting in late feedback.
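
To make that effect concrete, here is a toy back-of-the-envelope simulation in Python, with invented numbers purely for illustration: once a serialized stage such as load testing takes longer than the average gap between triggering commits, every additional commit waits longer than the previous one.

```python
# Toy model: one serialized pipeline stage, commits arriving at a fixed rate.
# All numbers are invented for illustration only.
STAGE_MINUTES = 45       # duration of the slow stage (e.g., load testing)
COMMIT_GAP_MINUTES = 30  # average time between triggering commits

stage_free_at = 0.0
for commit in range(1, 11):
    arrival = commit * COMMIT_GAP_MINUTES
    start = max(arrival, stage_free_at)  # wait while the stage is busy
    stage_free_at = start + STAGE_MINUTES
    print(f"commit {commit:2d} waits {start - arrival:5.1f} min for the slow stage")
```

Because 45 > 30 in this toy setup, the wait grows by 15 minutes with every commit: exactly the lead-time degradation sketched in the graphic above.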

Automated Pipelines Can Lead to Bad Quality

I even go a step further: having a pipeline that is fully automated and promises fast lead times might actually result in bad code quality, especially when developers start checking in bad-quality code without running local tests. Unit and functional tests might even be modified to simply “pass the pipeline”, leading to too many builds that need to be load tested (if you have such a phase) before being deployed to production. A standard AWS CodePipeline does not prevent that from happening:

Pipelines are great: but AWS doesn’t prevent you from pushing bad code more frequently through the pipeline!

Shift-Left Quality: It Starts with the Developer

Test-Driven Development is a fundamental component of successful DevOps and Continuous Delivery. If you have existing tests, Dynatrace automatically analyzes their execution before you commit code changes to AWS CodeCommit or Git. Dynatrace automatically detects problems based on well-known architectural, scalability and performance patterns, even without running large-scale performance or load tests. This is enabled through our PurePath technology and the integration into your IDE (Eclipse, Visual Studio 2015 or 2017, IntelliJ) as well as into your testing frameworks (NUnit, JUnit, xUnit, Selenium, WebDriver, Appium, JMeter).

Automatically detect problem patterns on your local workstation. This results in fewer but higher-quality code commits and a more efficient pipeline flow.
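
To give one concrete example of such a pattern: the classic N+1 query problem, where fetching a list of items triggers one additional database call per item. Dynatrace detects this automatically from PurePath data; the sketch below only illustrates, with a hypothetical `fetch_orders` function and a hand-rolled query counter, what the equivalent check would look like if you coded it yourself in a unit test:

```python
import unittest

QUERY_LOG = []  # hypothetical instrumentation: every DB call is recorded here

def run_query(sql):
    QUERY_LOG.append(sql)          # stand-in for a real database driver
    return [{"id": 1}, {"id": 2}]  # canned result, for illustration only

def fetch_orders():
    """Hypothetical data-access function with an N+1 problem."""
    customers = run_query("SELECT id FROM customers")
    return [run_query(f"SELECT * FROM orders WHERE customer={c['id']}")
            for c in customers]

class ArchitectureTest(unittest.TestCase):
    def test_no_n_plus_one_queries(self):
        QUERY_LOG.clear()
        fetch_orders()
        # Expect one query for the list plus at most one batched detail query.
        self.assertLessEqual(
            len(QUERY_LOG), 2,
            f"N+1 pattern detected: {len(QUERY_LOG)} queries executed")

if __name__ == "__main__":
    unittest.main()
```

As written, the test fails (three queries are executed), flagging the architectural regression before the code ever reaches the pipeline.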

Our mission is to enable engineers to build better code right from the start. This is why we provide you with a Lifetime Personal License of Dynatrace AppMon to be used on your local development and testing workstation: http://bit.ly/dtpersonal

Here are additional resources especially geared towards developers:

Level-Up Continuous Integration: Auto-Detect 80% of Problems Earlier

The same automatic problem detection that Dynatrace enables on local developer or tester workstations can also be applied to your Continuous Integration phase. When AWS CodePipeline kicks off a Jenkins build to execute unit, integration, functional or acceptance tests, you can simply enable our Jenkins Plugin. This plugin monitors every test execution and identifies the same architectural, scalability and performance regressions compared to previous builds. Our experience tells us that this detects up to 80% of quality issues in the faster unit, integration and functional test phases of your pipeline:

Dynatrace automatically baselines key quality metrics for your existing Unit-, Integration- or Functional Tests. This allows us to detect architectural, performance and scalability regressions early on.
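
Conceptually, this kind of regression detection boils down to baselining a metric per test (for example, the number of database calls) across previous builds and flagging outliers. The sketch below is not how the Jenkins Plugin is actually implemented; it is just a minimal Python illustration of the baselining idea:

```python
from statistics import mean, stdev

def is_regression(history, current, sigmas=3.0):
    """Flag `current` if it exceeds the baseline built from `history`.

    `history` holds the metric (e.g., DB calls per test) measured in
    previous builds; the values below are invented for illustration.
    """
    if len(history) < 2:
        return False  # not enough data to build a baseline yet
    tolerance = sigmas * stdev(history)
    return current > mean(history) + tolerance

db_calls_per_build = [12, 11, 12, 13, 12]     # builds 1 through 5
print(is_regression(db_calls_per_build, 13))  # False: within the baseline
print(is_regression(db_calls_per_build, 48))  # True: likely a regression
```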

If you want to see how to integrate Dynatrace with the Jenkins instance that you trigger from AWS CodePipeline, have a look at the following material:

Reduce Load Testing: Automate Analysis and Regression Detection

The previous two steps ensure that only “worthy” code commits make it all the way to the more costly and longer-running load and performance testing phase. Yet this phase typically takes a long time, because analyzing the root cause of badly performing code under load, and what changed from one release to the next, is not trivial if you can only look at the results from your load testing tools.

Dynatrace has its roots in load testing and therefore has the best support for automatically detecting the root cause of slow application performance. Not only that: we also automatically compare load tests from different builds to highlight the differences and regressions, which speeds up problem isolation and resolution. Dynatrace integrates with every load testing tool through our Load Testing Integration Interface, whether it is Apica, Neotys, SilkPerformer, LoadRunner, JMeter or others, and we pull the same level of automated analysis back into your AWS CodePipeline:

Besides giving you code level root cause of your performance issues, Dynatrace also automatically compares load tests across builds and identifies regressions.
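
Under the hood, such integrations typically work by tagging every simulated request with an extra HTTP header carrying test metadata (script name, virtual user, test step), which Dynatrace then uses to correlate requests with PurePaths. The Python sketch below shows the general idea with the `requests` library; the header name and its key/value fields are illustrative placeholders, so check the Load Testing Integration documentation for the exact format your Dynatrace version expects:

```python
import requests

def tagged_request(url, test_name, virtual_user, step):
    # Header name and fields are placeholders for illustration; your
    # Dynatrace version defines the exact load-test tagging format.
    headers = {"x-dynaTrace": f"TE={test_name};VU={virtual_user};NA={step}"}
    return requests.get(url, headers=headers)

# Example: tag one request of a checkout load test script.
response = tagged_request("https://example.com/checkout",
                          test_name="checkout_loadtest",
                          virtual_user="vu_042",
                          step="submit_order")
print(response.status_code)
```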

We have a lot of material around load testing and speeding up the load testing phase of your pipeline. Here are some links to blogs and videos that explain the major benefits and integration points:

Closing the Feedback Loop: Monitoring Deployments and End Users

When deploying your application with AWS CodeDeploy, CloudFormation or Elastic Beanstalk into your pre-production (Dev, QA, Staging) or production environments, you can simply deploy the Dynatrace agents (Java, .NET, PHP, Node.js, Web Server …) with your application. There are two options (a sketch for option 1 follows the list):

  1. Leverage the existing extension mechanisms, such as .ebextensions or a Procfile for Elastic Beanstalk, to inject our agents into your application
  2. Leverage our automated deployment scripts for Chef, Puppet, Ansible and PowerShell
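
As a rough illustration of option 1: the JVM agent is usually attached through an environment variable or JVM option pointing at the agent library. The boto3 sketch below sets such a variable on an Elastic Beanstalk environment; the environment name, option name and agent path are placeholders for whatever your Dynatrace installation and runtime actually require:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Placeholder values: adapt the environment name, option name and agent
# path to your own Dynatrace installation and application runtime.
eb.update_environment(
    EnvironmentName="my-app-env",
    OptionSettings=[{
        "Namespace": "aws:elasticbeanstalk:application:environment",
        "OptionName": "JAVA_TOOL_OPTIONS",
        "Value": "-agentpath:/opt/dynatrace/agent/lib64/libdtagent.so",
    }],
)
```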

When deploying a new version into production, also leverage our Deployment Incident REST API, which allows you to mark a new deployment in Dynatrace; that marker later becomes visible in your Dynatrace dashboards. If you do A/B testing or use canary releases, where different versions of your application run in production simultaneously, make sure to leverage the Dynatrace UEM Version Tracking feature. It gives you monitoring capabilities for each of your simultaneously deployed apps and also lets you capture additional metadata from each user of a specific app version:

Automatically Deploy Dynatrace with your Application and close the feedback loop to your engineering team by providing performance and end user insights.
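
Marking a deployment is a simple authenticated REST call that you can add as a post-deploy step in your pipeline. The Python sketch below shows the shape of such a call; the endpoint path, credentials and payload fields are placeholders, so consult the Dynatrace Deployment Incident REST API documentation for the actual contract:

```python
import requests

# All values below are placeholders; consult the Dynatrace REST API
# documentation for the real endpoint, authentication and payload.
DYNATRACE_SERVER = "https://dynatrace.example.com"
ENDPOINT = f"{DYNATRACE_SERVER}/api/deployments"  # placeholder path

response = requests.post(
    ENDPOINT,
    auth=("api_user", "api_password"),  # placeholder credentials
    json={
        "application": "my-app",
        "version": "1.4.2",
        "triggeredBy": "AWS CodePipeline",
    },
    timeout=10,
)
response.raise_for_status()  # fail the pipeline step if the marker fails
```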

If you want to learn more about how to deploy Dynatrace in order to monitor your applications, systems and end users, I suggest you start by watching my YouTube tutorial on What is Dynatrace and How to Get Started.

Scaling Your DevOps Deployments with Dynatrace and AWS

If you integrate Dynatrace into your DevOps pipeline you will not only see a more efficient pipeline and faster lead times. You will also see an increase in quality, starting right on the developer’s workstation. And because you are using one single monitoring solution from Dev via CI/CD into Ops, you also eliminate the communication and data-sharing gaps that normally exist when trying to correlate monitoring data from different tools used in the different stages:

Dynatrace integrates with your pipeline tool set: one monitoring solution for every phase, leading to shorter lead times and better quality.

If you want to try this on your own, follow these simple steps:

  1. Get your own Dynatrace Personal License and explore how Dynatrace works: http://bit.ly/dtpersonal
  2. Make yourself familiar with the different use cases along your delivery pipeline by watching our YouTube Tutorials: http://bit.ly/dttutorials
  3. Integrate and expand your Dynatrace footprint by using our plugins on GitHub: https://github.com/Dynatrace