Introduction to Performance Monitoring

Chapter: Virtualization and Cloud Performance

Cloud computing seems to be everywhere these days. Some hail it as the future of IT, while others see it as an overhyped technology that brings nothing new to the table. Both arguments hold some truth; let's examine why.

It's unlikely that cloud computing could have been realized without its underlying technology, virtualization, which has been an integral part of computing since the ’60s. This is why many view the cloud as evolutionary rather than revolutionary.

Historically, virtualization has fulfilled two main purposes. Mainframes have been able to run multiple isolated systems on single big machines for decades, but only in the last 20 years has this form of virtualization entered the mainstream.

We can summarize the advantages of virtualization in two words: manageability and utilization.

Using virtualization, IT departments can manage a large number of systems on comparatively few physical machines. This is accomplished by adding a slim software layer, known as the hypervisor, between the guest operating system (the VM) and the physical hardware. The hypervisor acts like a traffic director: a request from a guest system is either passed straight to the hardware or queued because a request from another VM had priority and was serviced first. Because the hypervisor mediates all hardware access, much of the administration can be done remotely; VMs can even be moved to different hardware, or allocated more memory, without a restart.
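The traffic-director idea above can be sketched in a few lines of Java. This is a toy model only, assuming a simple priority scheme; the class and method names (`HypervisorSketch`, `dispatchOrder`) are illustrative inventions, not any real hypervisor's API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

// Toy model of a hypervisor dispatching guest requests to hardware by priority.
// Names are hypothetical; real hypervisors use far more sophisticated scheduling.
public class HypervisorSketch {
    record VmRequest(String vm, int priority) {}

    public static List<String> dispatchOrder(List<VmRequest> pending) {
        // Lower number = higher priority; a higher-priority request from
        // another VM is "sent to the hardware first", the rest stay queued.
        PriorityQueue<VmRequest> queue =
            new PriorityQueue<>((a, b) -> Integer.compare(a.priority(), b.priority()));
        queue.addAll(pending);
        List<String> order = new ArrayList<>();
        while (!queue.isEmpty()) {
            order.add(queue.poll().vm()); // dispatch to hardware
        }
        return order;
    }

    public static void main(String[] args) {
        List<String> order = dispatchOrder(List.of(
            new VmRequest("vm-a", 2),
            new VmRequest("vm-b", 1),   // higher priority, serviced first
            new VmRequest("vm-c", 3)));
        System.out.println(order);      // [vm-b, vm-a, vm-c]
    }
}
```

The point of the sketch is simply that guests never touch the hardware directly; everything flows through one mediating layer, which is exactly what makes remote administration and live reallocation possible.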

At the same time, virtualization allows us to achieve greater hardware utilization. From the perspective of a performance expert, this might not always be desirable: higher utilization increases the potential for negative performance impact on applications. From an operations perspective, however, higher utilization means potentially less hardware. Ultimately, this is why virtualization is so desirable and successful: less hardware and, more importantly, lower operational cost. This key point gave birth to the cloud!

Virtualization has provided IT with the ability to provision new systems quickly without buying extra hardware. In turn, this flexibility has made it possible to start, stop, and move deployments ever more quickly. A new configuration and management layer has been added to help set policies and control virtual-machine assignments and hardware allocations. While all of this is achievable with less effort than it would take without virtualization, the demands for flexibility and agility can become a burden for large IT organizations. In response, IT organizations began automating parts of the process. Some of the biggest, like Amazon, went ahead and automated it all, and thus the first private clouds were born.

But let's back up to the basics for a moment. Computing clouds provide on-demand provisioning of VMs and other resources via the network, without manual intervention or physical access to those resources. To achieve this, every cloud is governed by a set of underlying policies that ensure the best possible utilization of available resources while providing each and every VM the resources it needs to run efficiently and effectively.
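One such policy can be sketched concretely: place a new VM on the least-utilized host that still has room for it. This is a minimal illustration under assumed names (`PlacementPolicy`, `placeVm`, `Host`); real cloud schedulers weigh many more constraints (affinity, network topology, failure domains).

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Toy placement policy: pick the least-utilized host that can still fit the VM.
// All names here are hypothetical, for illustration only.
public class PlacementPolicy {
    record Host(String name, int capacityGb, int usedGb) {
        int freeGb() { return capacityGb - usedGb; }
        double utilization() { return (double) usedGb / capacityGb; }
    }

    public static Optional<String> placeVm(List<Host> hosts, int requiredGb) {
        return hosts.stream()
            .filter(h -> h.freeGb() >= requiredGb)              // the VM must fit
            .min(Comparator.comparingDouble(Host::utilization)) // spread the load
            .map(Host::name);
    }

    public static void main(String[] args) {
        List<Host> hosts = List.of(
            new Host("host-1", 64, 48),   // 75% utilized, 16 GB free
            new Host("host-2", 64, 16),   // 25% utilized, 48 GB free
            new Host("host-3", 64, 60));  // ~94% utilized, 4 GB free
        System.out.println(placeVm(hosts, 8).orElse("no capacity")); // host-2
    }
}
```

Note the built-in tension the chapter describes: a policy tuned purely for utilization would instead pack VMs onto the fullest viable host, which is cheaper for operations but riskier for application performance.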

Amazon took this cloud-computing idea from the ’60s and did something completely logical, but at the same time totally unprecedented. Having created a colossal IT infrastructure, Amazon decided to rent out some of its excess capacity during nonpeak hours to others online, turning what had been a private cloud into the first public cloud. In one blow, Amazon became the technology leader in cloud computing and created a market where none had previously existed! Not surprisingly, others, along with many of the large hosting companies, quickly followed suit.

Both the cloud and virtualization came from the relentless quest of IT operations to reduce cost and improve operational efficiency. Interestingly, by making provisioning automatic, the traditional infrastructural building blocks of IT are being demoted, which means the business importance of applications is increasing. This side effect has been embraced by the industry and has given rise to the newest trends in cloud computing: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Chapter: Application Performance Concepts

Differentiating Performance from Scalability

Calculating Performance Data

Collecting Performance Data

Collecting and Analyzing Execution Time Data

Visualizing Performance Data

Controlling Measurement Overhead

Theory Behind Performance

How Humans Perceive Performance

Chapter: Memory Management

How Garbage Collection Works

The Impact of Garbage Collection on Application Performance

Reducing Garbage Collection Pause Time

Making Garbage Collection Faster

Not All JVMs Are Created Equal

Analyzing the Performance Impact of Memory Utilization and Garbage Collection

GC Configuration Problems

The Different Kinds of Java Memory Leaks and How to Analyze Them

High Memory Utilization and Its Root Causes

Classloader-Related Memory Issues

Out of Memory, Churn Rate, and More

Chapter: Performance Engineering

Approaching Performance Engineering Afresh

Agile Principles for Performance Evaluation

Employing Dynamic Architecture Validation

Performance in Continuous Integration

Enforcing Development Best Practices

Load Testing—Essential and Not Difficult!

Load Testing in the Era of Web 2.0

Chapter: Virtualization and Cloud Performance

Introduction to Performance Monitoring in Virtualized and Cloud Environments

IaaS, PaaS, and SaaS: All Cloud, All Different

Virtualization’s Impact on Performance Management

Monitoring Applications in Virtualized Environments

Monitoring and Understanding Application Performance in The Cloud

Performance Analysis and Resolution of Cloud Applications
