Singapore is a leader in Cloud Migration, Digital Transformation and DevOps cultural change. Who says that? Besides key media outlets that covered last week’s announcements from GovTechStack 2018 (ZDNet, The Straits Times, Blogspot, …) I was fortunate enough to see for myself how fast things are changing in Singapore. My previous trip in July, which I covered in my blog on “The State of Cloud Adoption in Australia, Singapore and Malaysia”, seemed to have made a good impression with GovTech. They invited me back to present my thoughts on the “Unbreakable Delivery Pipeline” at GovTechStack in front of the target audience that these media outlets were reporting on.
One of the key announcements that was made around GovTechStack was the decision of the Singaporean Government to push towards the public clouds as part of their Smart Nation charter. Dynatrace was chosen as a trusted partner to help them monitor and manage their diverse, multi-cloud environment. This is a great honor for all of us at Dynatrace and we are proud to be part of such a major transformation!
Optimizing my trip: How much can we fit into 80 hours 😊
As it is quite a long trip from Austria to Singapore, our local Dynatrace team in Singapore made sure to arrange additional meetings with some of the top technology and transformation-leading companies in Singapore. Between my Monday 6AM touchdown at Changi Airport and my Thursday 11:30PM departure, our local team arranged 13 meetings, workshops and events where I could share my thoughts on Cloud Migration, Up-leveling Monolithic into Microservices Architectures, Continuous Delivery, Monitoring as a Service, Performance as a Self-Service, Self-Healing, DevOps and even NoOps. To round it off, I did an analyst briefing and one of my Performance Clinics just before boarding the plane 🙂
Before highlighting some of the topics I want to say THANK YOU to our local team – especially Kim Ee NG – who pulled all the strings and made this trip go as smoothly as it did!
Also, THANKS TO ALL the different teams at FWD, DBS, GovTech, BlackRock, Red Hat, NCS, JPMC, Daimler, AWS, Barclays, StarHub and our ASEAN Sales Engineering Team that I met and exchanged thoughts with this week. Here’s a little collage of pictures we took:
Across all my meetings, I noticed a handful of questions that were asked by all technology leaders:
- How to start or speed up a DevOps Transformation?
- What are the approaches to a successful Cloud Migration?
- What do you think about breaking the Monolith?
- How to integrate Performance Feedback in CI/CD?
- How to automate Performance Engineering?
- How do we get to Self-Healing Systems?
Without going too deep into every question, let me glance over each one, share my typical answer, and give you the links and screenshots I also used while onsite with these groups.
#1: How to start or speed up a DevOps Transformation?
Two years ago, when I first talked about our own Dynatrace DevOps transformation at a local event in Singapore I felt that most organizations, as well as many people, were not ready for that change. Mainly, I am talking about the cultural change that comes with DevOps.
Two years later, it seems that everyone understands that the biggest change is not about technology, but about the way we design our processes, assign responsibilities, structure our organization, and start automating manual tasks so that we have more time to learn new technologies and, with that, become more innovative.
When I was asked “Hey Andi, what’s your advice for anybody here in the room to start to change?” I often responded with these simple tips:
- Take 15 minutes out of your day and start automating a manual task that you currently do more than once per week. Benefits: You don’t need to ask for permission for 15 minutes; you will learn how to think about automation and once finished, you will save time when executing these tasks!
- Learn a new scripting language or a new tool! Automation is a key part of our transformation and there will be new tools we will be using. I personally try to learn a new scripting language on a regular basis, e.g: I decided to use Python for my Dynatrace CLI vs choosing a language I already knew!
- Once a month attend a local meetup or user group. Benefits: It allows you to learn something new; you see how others solve problems with tools and technologies; you make new friends and you typically get food and drinks as well! 😊
- Once a year, try to present at a meetup. Benefits: It forces you to think about how to share your gained knowledge in a certain space with others. Sharing is a key aspect of DevOps where the idea is that we openly share with our peers. Get started in a local meetup!
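To make the first tip concrete, here is a minimal sketch of what a 15-minute automation could look like: a script that finds stale log files you would otherwise clean up by hand every week. The directory, file pattern and age threshold are purely illustrative – swap in whatever manual chore you actually repeat.

```python
# A hypothetical 15-minute automation: find log files older than 7 days
# that would otherwise be cleaned up manually every week.
# The path, pattern and threshold below are illustrative examples only.
import time
from pathlib import Path

def find_stale_files(directory, max_age_days=7, pattern="*.log"):
    """Return files in `directory` matching `pattern` that are older
    than `max_age_days` (based on last-modified time)."""
    cutoff = time.time() - max_age_days * 86400
    return [f for f in Path(directory).glob(pattern)
            if f.stat().st_mtime < cutoff]

if __name__ == "__main__":
    # Dry run first: print what would be archived before deleting anything.
    for stale in find_stale_files("/var/log/myapp"):
        print(f"Would archive: {stale}")
```

Once a script like this works, the natural next step is wiring it into a cron job or CI task – and that is exactly the automation mindset the tip is about.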
For more information check out the DevOps-related Episodes on our PurePerformance Podcast, e.g: DevOps at Facebook, DevOps from the Frontlines, From 6 Months Waterfall to 1h Code Deploys at Dynatrace.
#2: What are the approaches to a successful Cloud Migration?
Cloud migration is a big topic – both for vendors of Cloud & PaaS solutions as well as enterprises that need to figure out what their cloud strategy looks like. I really like the following 6-R Migration Pattern visualization. I believe it was AWS who first came out with it – at least that’s where I saw it first. It explains the different strategies, which are Rehost, Replatform, Repurchase, Refactor, Retire or Retain:
I put some thoughts into how we as Dynatrace can help with all of these patterns and presented this at our executive lunch meeting we co-hosted with Red Hat at Singapore’s Suntec City:
- Rehosting: Automate your Migration
Install OneAgents on your existing infrastructure and leverage Dynatrace Smartscape and the auto dependency, auto-load and auto-resource consumption capturing to decide which parts of your existing infrastructure you should and should not rehost. For more information check out /solutions/cloud-migration/
- Replatforming: Packaged Software Monitoring
If you decide to replace your existing services with a packaged app offering, leverage Dynatrace OneAgent for automated dependency detection, automated instrumentation, automated baselining and automated suggestions on how to optimize these packaged apps. Want to learn more? /news/blog/optimizing-engineering-productivity-on-atlassian-with-addteq-and-dynatrace/
- Replatforming: Containers, Cloud Services, Logs & Serverless
If you rebuild your applications on a new platform using containers, cloud services or even Serverless, leverage Dynatrace OneAgent for automated container, microservices, log and serverless monitoring: Find out more @ /capabilities/microservices-and-container-monitoring
- Refactoring: Break the Monolith
Dynatrace can be used to virtually break the monolith and give architects better data about where, and where not, to break it. For more information check out my blog on /news/blog/breaking-the-monolith-an-8-step-recipe
- Replace: Visibility into SaaS Services
Some replace existing applications & services with state-of-the art SaaS-based offerings such as Workday, Salesforce, Office365, Concur, … – Dynatrace provides Real User Monitoring for SaaS-based solutions as well to ensure that your business workflows are not impacted by performance or user experience issues. Try Vendor SaaS RUM yourself: /capabilities/saas-vendor-real-user-monitoring
#3: What do you think about breaking the Monolith?
Instead of starting to break every monolith into microservices you should first ask the question: What is the problem with our current Monolithic architecture and how can we make this go away?
Most of the time I hear that “Time to Market” is impacted by the current monolithic architecture, but that problem is not only solvable through a Microservice architecture. What’s really slowing things down are lengthy approval processes, manual and error-prone deployment processes, very long build times and too many dependencies. These problems do not necessarily go away with Microservices. Before blindly going down the Microservice route you might want to think about:
- How can we improve our approval process? Improving and automating that process will help for Monolith and Microservice architectures.
- How can we fully automate the deployment process? How can we make it so that – in theory – everyone can deploy into any environment? The more often you deploy, the fewer problems you will see. This again will benefit any type of architecture!
- How can we bring build times down to minutes? Faster build times give developers faster feedback on their code changes and will automatically result in higher quality releases. We at Dynatrace spend lots of time and energy to keep build times low. If they get too long, the automated feedback loops we built into CI/CD become less effective because developers are already on to their next task!
- How can we reduce dependencies? The fewer dependencies you have the faster you can build and deploy. This is true for libraries you depend on in a monolith or other services in a microservice architecture.
As you can see: before you think about moving to Microservices, there are a lot of other areas you can improve. Microservices are not a silver bullet that will make all the challenges mentioned above go away!
If you have a Monolith where breaking it into Microservices makes sense, e.g: dynamic scaling, canary releases, … you should check out my blog post on Breaking the Monolith as well as the excellent blog series from my colleagues Johannes and Jürgen on Fearless from Monolith to Microservices where they walk you step-by-step through breaking a monolith app!
#4: How to integrate Performance Feedback in CI/CD?
I have been promoting earlier Performance Feedback in CI/CD over the last couple of years. In the last 12 months, I focused on figuring out how we can integrate Dynatrace with your Load Testing Tools (JMeter, Neotys, Load Runner, …) that get executed as part of the CI/CD pipeline. My goal was to automate the approval of a build pipeline based on key performance metrics.
I was inspired by Mark Tomlinson (formerly PayPal) and Thomas Steinmaurer (Dynatrace) and how they automated performance checks into their pipelines. The concept of a “Performance Signature” was born which allows developers to define which performance metrics are important to them. This “Performance Signature” can be written in JSON (=Monitoring Definition as Code) and be put into version control where it is picked up by the CI/CD pipeline for every build. For every build the pipeline can automatically query these key metrics, report back the actual values to the pipeline and even let it act as a quality gate.
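To give you a feel for the concept, here is a simplified, hypothetical sketch of such a Performance Signature and a quality-gate check against it. The actual JSON schema is defined by the respective pipeline integration; the metric names and thresholds below are made up for illustration.

```python
# Hypothetical sketch of a "Performance Signature" quality gate.
# The real JSON schema is defined by the respective CI/CD integration;
# metric names and limits here are illustrative only.
import json

SIGNATURE_JSON = """
{
  "signature": [
    {"metric": "service.responsetime.avg", "upper_limit": 200},
    {"metric": "service.failure.rate",     "upper_limit": 2}
  ]
}
"""

def evaluate_signature(signature_json, measured):
    """Compare measured build metrics against the signature.
    Returns (passed, list of violated metric names).
    A metric missing from `measured` counts as a violation, so
    incomplete monitoring data fails the gate rather than passing it."""
    signature = json.loads(signature_json)["signature"]
    violations = [
        entry["metric"]
        for entry in signature
        if measured.get(entry["metric"], float("inf")) > entry["upper_limit"]
    ]
    return (len(violations) == 0, violations)
```

Because the signature lives in version control next to the code, a threshold change goes through the same review process as any other code change – that is the “Monitoring Definition as Code” idea.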
The first implementation I did was for AWS CodePipeline as part of my AWS Unbreakable Pipeline Tutorial. After this we saw Donovan Brown and Abel Wang implement the Unbreakable Pipeline and Performance Signature concept for Azure DevOps (formerly VSTS). Last but not least – our partner T-Systems just implemented the Performance Signature with Dynatrace for Jenkins plugin which fully automates performance feedback into your CI/CD:
For more information also check out my initial blog post on the Unbreakable DevOps Pipeline: Shift-Left, Shift-Right and Self-Healing as well as Trades of a Performance Engineer in 2020!
#5: How to automate Performance Engineering?
I learned Performance Engineering in my previous job at Segue Software – which later became Borland and is now part of Microfocus. I was a tester, engineer, product manager and product evangelist for SilkPerformer, and with that ran a lot of large-scale load tests, helped our customers analyze performance hotspots and advised them on how to make their apps perform faster.
The big question in recent years was how to automate the analysis and advice giving, so that performance engineering can be provided more as a Self-Service and even as part of the delivery pipeline. At Dynatrace, the different teams focusing on automated diagnostics invested a lot of time to automate classical performance engineering tasks: crunching through the large amounts of data captured during a load test and presenting the top findings. Additionally, they invested in automatically highlighting differences between builds or between different load scenarios. This has always been a task that took a lot of time, as it requires crunching through even more data to find the differences.
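The core of that build-to-build comparison can be sketched in a few lines: take the key metric per endpoint from a baseline run and a current run, and flag everything that regressed beyond a tolerance. Endpoint names, numbers and the tolerance below are invented for illustration; the real analysis works on far richer captured data.

```python
# Hypothetical sketch of automated load-test comparison: flag endpoints
# whose average response time regressed beyond a tolerance vs. a baseline
# run. Endpoint names and tolerances are illustrative only.
def find_regressions(baseline, current, tolerance_pct=10.0):
    """Return {endpoint: pct_change} for endpoints that got slower
    than `tolerance_pct` percent compared to the baseline run."""
    regressions = {}
    for endpoint, base_ms in baseline.items():
        cur_ms = current.get(endpoint)
        if cur_ms is None:
            continue  # endpoint not hit in the current run
        change = (cur_ms - base_ms) / base_ms * 100.0
        if change > tolerance_pct:
            regressions[endpoint] = round(change, 1)
    return regressions
```

The value of a tool doing this automatically is not the arithmetic, but that it happens for every build and every endpoint, so nobody has to eyeball two sets of dashboards side by side.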
The following screenshot shows how Dynatrace enables performance engineers to automatically analyze the differences between different load testing runs. The full power of the filter capabilities is enabled through Load Testing Integrations (see doc for more):
For more information on automating performance engineering check out my blog on Load Testing Redefined and the YouTube tutorials on Dynatrace Diagnostics (listed on my YouTube playlist).
#6: How do we get to Self-Healing Systems?
When I mention the term “Self-Healing” I always see eyes sparkling, as this is the “next big thing”. To keep the discussion more realistic, I typically talk about automation of remediating actions. The idea here is that we can take manual runbooks, convert them into automation scripts and trigger those when our monitoring detects a problem with a specific use case.
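The runbook-to-automation idea can be sketched as a simple dispatcher: each manual runbook step becomes a function, and the problem type reported by monitoring selects which one to trigger. Everything below – the problem types, actions and entity names – is a made-up illustration; a real setup would be driven by something like a monitoring webhook calling into a remediation tool.

```python
# Hypothetical sketch of auto-remediation: manual runbook steps become
# functions, and a dispatcher triggers one based on the detected problem.
# Problem types, actions and entities below are illustrative only.
def restart_service(entity):
    return f"restarted {entity}"

def scale_out(entity):
    return f"added one instance to {entity}"

def rollback_deployment(entity):
    return f"rolled back {entity} to previous version"

# The "runbook as code": problem type -> remediation action.
REMEDIATION_RUNBOOK = {
    "process_crash": restart_service,
    "high_cpu": scale_out,
    "failure_rate_increase": rollback_deployment,
}

def remediate(problem):
    """Trigger the remediation action registered for this problem type;
    fall back to paging a human when no automation exists."""
    action = REMEDIATION_RUNBOOK.get(problem["type"])
    if action is None:
        return f"no automated remediation for {problem['type']}, paging on-call"
    return action(problem["entity"])
```

The important design point is the fallback: anything the runbook does not cover still goes to a human, which keeps “self-healing” realistic rather than magical.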
The following is an animation from one of the slide decks I use when talking about Auto-Remediation triggered by Dynatrace. It explains that we can trigger targeted auto-remediation actions – actions that developers can potentially define as part of their code delivery, and that the auto-remediation workflow can trigger in case a specific problem is detected in an application, service or infrastructure:
If you are interested in learning more, I suggest you read up on the blog posts from Jürgen and myself, as we have been writing about self-healing and auto-remediation for a while now:
- Auto-Mitigation with Dynatrace – or shall we call it Self-Healing?
- ServiceNow & Dynatrace – Symbiosis for self-healing Applications
- Set up Ansible Tower with Dynatrace to enable self-healing applications
- How StackStorm enables auto-remediation with Dynatrace
Conclusion: I will go back to Singapore to learn more!
There was a lot I learned in my 80 hours in Singapore and I want to say THANK YOU again for the friendly atmosphere and the willingness to share your experience and listen to my ideas. I hope the summary of those questions I received will benefit everyone that is already going or about to go through a cloud, DevOps or digital transformation.
I want to end with one of the best quotes I saw during my trip. I spotted it at the GovTech office where we posed for one of the group pictures (see below): Be Happy, Be Awesome, help others to be HAPPY & AWESOME!