More and more Web sites and applications are being moved from Apache to nginx. While Apache is still the number 1 HTTP server, running more than 60% of active Web sites, nginx has now taken over 2nd place in the ranking and relegated Microsoft’s IIS to 3rd place. Among the top 10,000 Web sites, nginx is already the leader in the field, with a market share of 40%.

And the reasons are obvious: nginx is a high-speed, lightweight HTTP server. The performance improvement is quite significant for serving static content. Especially at high load, nginx is much faster than Apache and consumes far fewer resources on the server, so concurrent requests can be handled more efficiently. As a consequence, the same tasks can be fulfilled with less hardware, and every byte of memory, CPU cycle, or even whole server you can save reduces your infrastructure costs.

I ran some load tests: 10,000 requests showed quite remarkable differences, even more distinct with more concurrent users. Note that with Apache the total execution time increases with the number of users, while nginx handles that easily. For 2,000 users nginx could process the requests almost 4 times faster!

While nginx uses event-based request handling in a small number of processes, Apache spawns new processes or threads for each request, depending on the processing mode. Apache’s default multi-process (prefork) mode creates a child process for each request. Such a process is a complete instance of Apache including all linked modules. That means that even a request for static content, like an image, causes a new process to be started and the PHP module to be loaded.

Apache can also be operated in a multi-threaded (worker) mode, which creates multiple threads in fewer processes, one thread per request. It thus consumes much less memory, but the operation is no longer thread-safe, so modules like mod_php can’t be used.
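
If you are not sure which mode your own Apache is running in, you can check it quickly on the command line; a minimal sketch, assuming Ubuntu’s apache2 package:

# Show the MPM Apache is using (prefork, worker or event)
apache2ctl -V | grep -i mpm

# On Apache 2.4 with the Debian/Ubuntu layout, the enabled MPM module is also visible here
ls /etc/apache2/mods-enabled/ | grep -i mpm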

I went through the exercise of figuring out the best way to leverage nginx on an application that runs on Apache. In this blog we will cover the actual installation steps, different deployment and migration scenarios, as well as how to measure the actual performance gain.

Installing nginx

All you have to do to start boosting your application performance is to install nginx on your machine and follow some configuration rules. In this article I will be referencing an example site running Ubuntu.

sudo apt-get install nginx
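
After the installation, a quick sanity check makes sure the server is up; a minimal sketch, assuming the default Ubuntu package, which installs an init script and serves a welcome page on port 80:

# Check the installed version
nginx -v

# Make sure the service is running
sudo service nginx start

# The default welcome page should now answer on port 80
curl -I http://localhost/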

No doubt, Apache provides much more functionality by supplying a broad range of loadable modules and many more options to be configured. A common way to adjust the behavior of a website is a combination of the virtual host setup and the .htaccess file. First of all: this file doesn’t exist in nginx, which is another performance bonus. Apache checks every single directory in the path of the requested file for an .htaccess file and evaluates its content if it exists. And, if not configured properly, keeping a config file together with your data could result in a severe security issue. Nginx keeps the configuration in a central place and loads the settings into memory at startup.
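
To illustrate, here is a sketch of how a typical per-directory .htaccess rewrite translates into the central nginx configuration; the URL pattern and parameter name are made up for this example:

# Apache .htaccess (hypothetical example):
#   RewriteEngine On
#   RewriteRule ^blog/([0-9]+)$ /blog.php?id=$1 [L]

# nginx equivalent, placed directly in the server block:
location ~ ^/blog/([0-9]+)$ {
    rewrite ^/blog/([0-9]+)$ /blog.php?id=$1 last;
}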

Even if you are not sure whether you really should replace Apache with nginx, you can always use both together. We will cover this later.

Migrating Configuration

There are quite a few similarities, but it’s important to understand the differences between the two configuration formats. Just like Apache, nginx keeps its site configuration files in /etc/nginx/sites-available. Activate a configuration by creating a symbolic link to it in /etc/nginx/sites-enabled.
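
Enabling a site then looks like this; a minimal sketch, assuming your configuration file is called mysite (a made-up name):

# Activate the configuration by linking it into sites-enabled
sudo ln -s /etc/nginx/sites-available/mysite /etc/nginx/sites-enabled/mysite

# Test the configuration and reload nginx
sudo nginx -t
sudo service nginx reload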

First of all, create a server block for each virtual host.

server {
    listen 80;

}

The basic setup for running a site is similar to Apache, with a slightly different syntax:

#
# Apache
#

<VirtualHost *:80>
    ServerName mysite.com
    ServerAlias www.mysite.com

    DocumentRoot /srv/www/mysite
    DirectoryIndex index.php index.html
</VirtualHost>

#
# nginx
#

server {
    listen 80;

    server_name mysite.com www.mysite.com;

    root /srv/www/mysite;
    index index.php index.html;
}

To add specific behavior for certain requests, define a location block inside your server block. You can use regular expressions to select the affected requests:

server {

    location / {
        try_files $uri $uri/ @notfound;
    }

    location /doc/ {
        alias /usr/share/doc/;
        autoindex on;
        allow 127.0.0.1;
        deny all;
    }

    location /images/ {
        root /media/images;
    }

    location @notfound {
        rewrite (.*) /index.php?paramstring=$1;
    }

    location ~ /\.ht {
        deny all;
    }
}

This sample configuration shows some of the setup options for servers and locations. Make sure to create a rule denying access to .ht* files, as nginx does not do that out of the box; Apache rejects direct access to these files automatically. Note that familiar options from Apache can be found in nginx as well: allow/deny, alias, rewrite, etc.

Please refer to the online documentation on nginx.org for further information.

Especially when you have multiple websites running on your server, and lots of requests causing high load, moving to nginx is a good decision. But multiple, differently configured websites can make the migration quite an effort. There are converters available that do part of that job for you, but they mostly translate a .htaccess file into an nginx config. Apache also keeps configuration in its virtual hosts – do not forget about these! Even if the configurations were converted by a tool, I recommend checking them manually before using them in a production environment!

Tip: install nginx as the primary HTTP server and leave your Apache running on a different port. Migrate your virtual servers one by one by creating nginx server configurations, and forward requests for not-yet-migrated websites to Apache. I’ll show you how:

nginx and Apache

It might be a requirement to keep your Apache up and running, and not only as a fallback server for not-yet-migrated configurations; your website could, for example, depend on customized modules that only run in Apache. However, you can still profit from nginx’s performance advantages. Configured as a reverse proxy, nginx can serve static content directly, while dynamic requests get forwarded to Apache.

This transaction flow shows a web application, where static content is served by nginx, while dynamic requests are forwarded to Apache and handled by Apache’s PHP module.
One of the powerful features of nginx is to serve static content very fast.
Requests that are not served by nginx according to the configuration are forwarded to Apache, which is located on a separate application server behind a firewall.

Best practice: change your Apache configuration to listen on a port other than 80 (for example 8000), run nginx as your default HTTP server (listening on port 80) and forward all requests not served by nginx to your Apache.
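
On the Apache side this usually means two small changes; a minimal sketch, assuming Ubuntu’s default apache2 layout (the site file name is made up):

# /etc/apache2/ports.conf
Listen 8000

# /etc/apache2/sites-available/mysite
<VirtualHost *:8000>
    # ... existing directives stay unchanged ...
</VirtualHost>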

Tip: create a default forwarding rule to Apache and migrate your websites step by step by adding new server configurations to nginx.

server {
    listen 80 default_server;
    server_name default;

    location ~ \.(js|css|gif|jpg|jpe|jpeg|png|ico)$ {
        try_files $uri $uri/ =404;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8000;
    }
}

The server block marked default_server (otherwise the first one defined for that port) is used as the default. All requests that do not match another configuration are handled by it. This example makes nginx forward all requests that are not handled in another server block to port 8000 on localhost. In our case we have Apache listening there.

server {
    listen 80;
    server_name www.website.com;

    location ~ \.(js|css|gif|jpg|jpe|jpeg|png|ico)$ {
        try_files $uri $uri/ =404;
    }

    location / {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header Host $host;
        proxy_pass http://127.0.0.1:8000;
    }
}

The server block in this example makes nginx work as a reverse proxy, serving all requests for extensions matching the regular expression in the first location block directly, and sending all other requests to port 8000 on localhost. And just as before we have Apache listening there.

Monitoring nginx

Once you’ve decided to welcome nginx as a new member of your enterprise environment, you have to take care of proper monitoring. There are quite a number of tools available for that purpose; even nginx itself provides a module for displaying a basic status (see the sketch after the list below). But you might want to get deeper insights and a complete picture of your environment, so you should keep in mind some requirements for reliable monitoring:

  • Transaction monitoring
  • Response Times
  • Throughput / Bandwidth
  • User experience
  • Host health
  • Error detection and tracking
  • Testing
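
The basic status module mentioned above is a good starting point; a minimal sketch of a status endpoint, assuming your nginx was built with the stub_status module (the standard Ubuntu package includes it):

# Expose basic connection and request counters on /nginx_status,
# restricted to the local machine
location /nginx_status {
    stub_status on;
    allow 127.0.0.1;
    deny all;
}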

The more complex your environment becomes, the more important it is to get full end-to-end visibility of your transactions – from the user’s activity in the browser to the database and back, combined with cross-referenced information, like host health at a certain time when response times were poor. Only then will you be able to find bottlenecks or performance hotspots in your application.

The transaction flow is perfect to visualize occurrences of incidents.

It’s easy to drill down to the details to get further information and insights into the root cause.

nginx and dynaTrace

An easy way to monitor nginx, and get 100% visibility into your transactions, is by using dynaTrace. The new version, 6.0 (get the free trial here), offers an nginx web server agent, which gives you full coverage of the transactions running on your HTTP server. Integrated into your enterprise environment, dynaTrace provides full end-to-end visibility of your business logic, from the browser to the database.

Unlike in Apache, where the agent is loaded as a module, in nginx it has to be linked dynamically at startup by using LD_PRELOAD:

LD_PRELOAD=/var/lib/dynatrace/agent/lib/libdtagent.so nginx

Tip: include this line in your /etc/init.d/nginx startup script so the dynaTrace agent is loaded automatically whenever nginx is started or restarted.


start)
    echo -n "Starting $DESC: "
    test_nginx_config
    # Check if the ULIMIT is set in /etc/default/nginx
    if [ -n "$ULIMIT" ]; then
        # Set the ulimits
        ulimit $ULIMIT
    fi
    LD_PRELOAD=/var/lib/dynatrace/agent/lib/libdtagent.so start-stop-daemon --start --quiet --pidfile /var/run/$NAME.pid --exec $DAEMON -- $DAEMON_OPTS || true
    echo "$NAME."
    ;;

Check the Agent Overview in your dynaTrace client to verify that your nginx agent is connected to the server.

The Agent Overview dashlet of the dynaTrace client shows the connected agents.

Once integrated into your environment with nginx, dynaTrace makes it easy to trace your transactions and find possible bottlenecks and performance hotspots. Check here for supported versions of nginx. Start monitoring your performance, and share with us your experiences in our forum.