Note: Scott Turner and his team from Verizon Terremark performed the tests on the Terremark Cloud Platform and other public clouds. Scott can be reached at firstname.lastname@example.org
One of the most appealing benefits of cloud deployment is ease of use and the flexibility to add or remove compute capacity. You can dynamically allocate resources based on changing workloads, which gives you flexibility in managing your compute costs.
AWS Auto Scaling enables you to closely follow the demand curve for your applications, reducing the need to provision Amazon EC2 capacity in advance. For example, if your CPU utilization goes over 70%, you can add EC2 instances; similarly, if it drops below a threshold, you can remove instances.
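As a rough illustration, a rule like that can be wired up with the AWS CLI by attaching a simple scaling policy to an Auto Scaling group and pointing a CloudWatch alarm at it. The group name, policy name, and alarm settings here are hypothetical, and the alarm action is the policy ARN returned by the first command:

# Add one instance to the (hypothetical) group my-asg when triggered
$ aws autoscaling put-scaling-policy --auto-scaling-group-name my-asg \
    --policy-name scale-out --adjustment-type ChangeInCapacity --scaling-adjustment 1
# Trigger the policy when average CPU stays above 70% for two 5-minute periods
$ aws cloudwatch put-metric-alarm --alarm-name cpu-high --namespace AWS/EC2 \
    --metric-name CPUUtilization --statistic Average --period 300 \
    --evaluation-periods 2 --threshold 70 --comparison-operator GreaterThanThreshold \
    --dimensions Name=AutoScalingGroupName,Value=my-asg \
    --alarm-actions <policy-ARN-from-previous-command>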
Rackspace auto scaling, likewise, can be schedule-based or event-based. You can prepare for a burst of traffic during specific holidays or peak hours by creating a schedule, or you can monitor specific metrics such as CPU load and provision additional capacity when they cross a threshold.
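For a sense of what that looks like, a schedule-based policy in Rackspace Auto Scale is a small JSON document posted to a scaling group's policies endpoint. This is only a sketch: the region, tenant/group IDs, cron expression, and capacity change below are placeholders, not values from our tests.

$ curl -s -X POST \
    "https://ord.autoscale.api.rackspacecloud.com/v1.0/{tenantId}/groups/{groupId}/policies" \
    -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" \
    -d '[{"name": "holiday burst", "type": "schedule",
          "args": {"cron": "0 8 24 12 *"}, "change": 2, "cooldown": 300}]'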
On a Cloud Foundry based PaaS platform, auto-scaling comes down to defining scaling rules, and scaling an application out to hundreds of instances can be as simple as a single command:
$ cf scale myApp -i 500
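Here -i sets the desired number of application instances; the platform starts the additional instances and load-balances traffic across them.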
While auto scaling flexibility is beneficial, and a required feature for any cloud deployment, it can quickly add cost to your cloud deployment. More compute resources mean more cost, and that cost can spiral quickly without addressing the real problem: is your app properly using the resources you've already provisioned? Do you even have that level of visibility? Are you closing the visibility gap?
What is actually needed is complete application visibility to address your cloud performance issues. The first step is to optimize your apps and benchmark their performance to get the visibility you need. Only then does auto scaling help your application meet the unknowns.
How do you prepare your application to meet this challenge?
To answer this question, we ran some tests and analyzed the results. To start, we deployed a standard e-Commerce store application on a LAMP stack with a couple of cloud providers. The store sells electronics and has the full functionality of an e-Commerce site. The key transactions on this store are:
- Register as user
- Place order
- Receive email
- Check status of order
- Payment type (check or money order)
This app is pre-packaged by Bitnami and runs on an open source LAMP stack, which gives us visibility into every tier of the infrastructure.
The infrastructure consisted of web/app servers and a MySQL database in multiple cloud sites on the US East and West Coasts and in Europe, behind multi-tenant firewalls and load balancers. The e-Commerce platform ran 64-bit RHEL with Apache, MySQL, PHP, and Magento. The web/application servers sat in the DMZ (1 vCPU, 4 GB RAM, 10 GB disk) and the MySQL database server on the internal network (1 vCPU, 4 GB RAM, 10 GB disk).
Next, we used Dynatrace Synthetic Monitoring 2.0 to script key transactions on the store. We generated a steady synthetic load mimicking real users from various backbone and last-mile locations and across ISPs. We scripted single-URL and multi-step tests to measure the availability and performance of the website in the cloud.
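The synthetic scripts themselves aren't reproduced here, but a single-URL availability and response-time probe of the kind those tests run can be approximated from any shell; the store URL is a placeholder:

$ curl -o /dev/null -s -w "HTTP %{http_code}, total %{time_total}s\n" https://store.example.com/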
We then instrumented the backend with the Dynatrace solution by deploying the PHP/web agents across all tiers of the application.
What are the results?
As we ramped up load against the e-Commerce site, we could see the performance and availability of the site across different geographic locations and ISPs, and could benchmark performance across cloud providers. Looking at CPU/disk/IO/network behavior, we saw differences in the providers' ability to seamlessly deliver the same level of response time and throughput.
Results and Takeaways
Clouds are elastic; you can potentially increase compute power to reach peak performance with any provider, but it comes at a cost. Prepare for the inevitable: under similar circumstances, performance varies from provider to provider.
You can use tools like those in the test scenario above to get visibility into the performance of your cloud-hosted app. Start by identifying where the app is spending time: is it the front end, the network, or the cloud hosting environment? Drill down to the end-to-end transaction flow in Dynatrace to quickly identify bottlenecks and optimize both resources and the application.
To ensure high availability and performance, make sure these steps are part of your cloud strategy:
- Proactive performance and load testing (a minimal load-generation sketch follows this list)
- Optimize your application by gaining end-to-end visibility
- Benchmarking and comparison (try deploying your app on multiple cloud service providers, run the tests previously mentioned, and see who performs better)
- Right-sizing cloud resources and capacity planning
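As a minimal sketch of the first item, ApacheBench (which ships with the Apache httpd already in this stack) can generate a steady load against a key page; the request counts and URL below are illustrative, not the settings from our tests:

# 10,000 requests, 100 at a time, against the storefront home page
$ ab -n 10000 -c 100 http://store.example.com/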
You can solve your performance problems with extra compute resources as you move your apps to the cloud, or you can perform these steps and close the visibility gap.