When I started looking into Nginx, I was very impressed by the high performance of this lightweight HTTP server. Over time, however, I have come to appreciate the ease of its configuration just as much. I have successfully used Nginx to serve PHP applications for quite a while; this article is about the lessons I have learned.

I’m using Nginx on Ubuntu Linux, where the installation is straightforward:

sudo apt-get install nginx

Nginx as a reverse proxy

A common use case for Nginx is to act as an HTTP reverse proxy: it collects all incoming web requests and forwards them to different destinations, where they are processed by the respective services. Nginx allows a very flexible configuration based on a combination of server name and location to define the behavior for specific URLs.

Requests for static files like images, JavaScript or style sheets can be served directly or forwarded to another Nginx behind a firewall.
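As an illustration, the following is a minimal sketch of such a reverse proxy setup. The backend address 127.0.0.1:8080 and the /static prefix are assumptions made for this example only:

server {
  listen 80;
  server_name www.mysite.com;

  # serve static files directly from the local file system
  location /static/ {
    root /var/www;
  }

  # forward all other requests to a backend service (assumed to run on port 8080)
  location / {
    proxy_pass http://127.0.0.1:8080;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
  }
}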

PHP FastCGI process manager

As Nginx does not support loading external modules, PHP integration can’t be done the Apache way (an embedded PHP module). Instead, we run PHP in the FastCGI Process Manager (PHP-FPM), which was introduced in PHP 5.3.3.

The installation is as easy as for Nginx:

sudo apt-get install php5-fpm

PHP5-FPM can be configured to run multiple connection pools with different settings; in our example we will use just one, the default pool defined in the pool.d/www.conf file. The most important setting there is the listen parameter, which defines how php5-fpm accepts requests. Either a TCP or a Unix socket may be defined here:

listen = /var/run/php5-fpm.sock     ; listen on a Unix socket
listen = 9000                       ; listen on TCP port 9000
listen = 127.0.0.1:9000             ; listen on port 9000 on 127.0.0.1

If PHP is running on the same host as Nginx, communication via a Unix socket is recommended because it is slightly faster than TCP.
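Should you need additional pools, a second pool file could look like the sketch below; the pool name, socket path and process manager values are only example assumptions, not recommended settings:

[mysite]
; run this pool under the web server user
user = www-data
group = www-data
; dedicated socket for this pool
listen = /var/run/php5-fpm-mysite.sock
; dynamic process management with example limits
pm = dynamic
pm.max_children = 10
pm.start_servers = 2
pm.min_spare_servers = 1
pm.max_spare_servers = 3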

A basic Nginx configuration would look like:

upstream php {
  server unix:/var/run/php5-fpm.sock;
}
server {
  listen   80;
  root /var/www;
  index index.php index.html index.htm;
  server_name www.mysite.com;
  location / {
    try_files $uri $uri/ =404;
  }
  location ~ \.php$ {
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_pass php;
  }
}

URL rewrite

A typical method of passing parameters into a single index.php file is to use them as folder names in the URL. Here is an example:

instead of

http://www.mysite.com/index.php?module=news&action=browse&year=2014

we would like to use the URL

http://www.mysite.com/news/browse/2014

Apache parses HTML or PHP files directly. When a <?php … ?> section is found, that part is handed to the PHP module for processing, and the section is then replaced with the output of PHP. Therefore we have to make sure that the web request being processed points to index.php.

This is done by performing a URL rewrite, usually in the .htaccess file:

...
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-l
RewriteRule ^(.+)$ index.php
...

In this example we rewrite the original URL (/news/browse/2014) to index.php, which is then executed. $_SERVER['REQUEST_URI'] still contains the original URL that was used to call the page, so it can be split and parsed in the PHP script, as sketched below.
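A minimal PHP sketch of that parsing could look like this; the variable names are just examples and not part of any particular framework:

<?php
// split the original request URI, e.g. /news/browse/2014,
// into its path segments, ignoring any query string
$path     = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$segments = array_values(array_filter(explode('/', $path)));

// e.g. $module = 'news', $action = 'browse', $year = '2014'
$module = isset($segments[0]) ? $segments[0] : 'default';
$action = isset($segments[1]) ? $segments[1] : 'index';
$year   = isset($segments[2]) ? $segments[2] : null;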

A direct migration from Apache to Nginx would result in a configuration like:

upstream php {
  server unix:/var/run/php5-fpm.sock;
}

server {
  listen   80;
  root /var/www;
  index index.php index.html index.htm;
  server_name www.mysite.com;

  location / {
    try_files $uri $uri/ @missing;
  }

  location @missing {
    rewrite (.*) /index.php;
  }

  location ~ \.php$ {
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_pass php;
  }
}

I’m using Dynatrace Application Monitoring to trace the transactions. The PurePath for a request to a server configured like this reveals the details:

The initial web request to /news/browse/2014 triggers a new request to /index.php, which finally starts the PHP execution.

Tip: With Nginx and PHP-FPM there is a better way to do this. Nginx does not need to parse the index.php file itself, as it would not interpret the <?php … ?> sections anyway. Instead, PHP-FPM does not receive PHP code from Nginx but executes a complete file, in our case index.php. The configuration then looks like this:

upstream php {
  server unix:/var/run/php5-fpm.sock;
}

server {
  listen   80;
  root /var/www;
  index index.php index.html index.htm;
  server_name www.mysite.com;

  # add locations for static files to not forward these requests to PHP
  location /images {
    try_files $uri $uri/ =404;
  }

  location /scripts {
    try_files $uri $uri/ =404;
  }

  location / {
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param  SCRIPT_FILENAME /var/www/index.php;
    fastcgi_pass php;
  }
}

The server parameters SCRIPT_NAME, DOCUMENT_URI and PHP_SELF still contain the original URI (/news/browse/2014) and can be used for further parsing inside the script.

The execution plan now contains one less web request step:

The initial web request to /news/browse/2014 triggers the PHP execution directly, which is another step towards increased performance.

FastCGI cache

With the ngx_http_fastcgi_module Nginx offers an integrated caching mechanism. Just a couple of lines need to be added to our config file:

fastcgi_cache_path /var/nginx/cache levels=1:2 keys_zone=SPX:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

upstream php {
  server unix:/var/run/php5-fpm.sock;
}

server {
  listen   80;
  root /var/www;
  index index.php index.html index.htm;
  server_name www.mysite.com;

  # add locations for static files to not forward these requests to PHP
  location /images {
    try_files $uri $uri/ =404;
  }

  location /scripts {
    try_files $uri $uri/ =404;
  }

  location / {
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param  SCRIPT_FILENAME /var/www/index.php;
    fastcgi_pass php;
    fastcgi_cache SPX;
    fastcgi_cache_valid 200 60m;
  }
}

The fastcgi_cache_path directive defines the directory structure for the cache files. The first argument is the root path, while the levels parameter defines the subdirectory hierarchy. The name of a cache file is the MD5 hash of the configured fastcgi_cache_key, as illustrated below.
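To illustrate this with the example configuration above (the hash shown is only a placeholder, not a real MD5 result):

# cache key for http://www.mysite.com/news/browse/2014 with the key
# "$scheme$request_method$host$request_uri":
#   httpGETwww.mysite.com/news/browse/2014
#
# if the MD5 hash of that key ended in ...29c, levels=1:2 would use the last
# character and the two preceding characters as subdirectories:
#   /var/nginx/cache/c/29/<full md5 hash>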


keys_zone defines the name of the cache zone (here SPX) and the size of the shared memory area holding the keys; the zone name is then referenced in the location where fastcgi_pass is used. The fastcgi_cache directive with that zone name finally activates the cache for the given location. By defining fastcgi_cache_valid 200 60m we make sure that a successfully processed HTTP request (response code 200) is stored in the cache for 60 minutes.

When a web request can be served from cache, no call to the FastCGI process manager has to be performed at all. That process does not even have to be active. This example shows how requests for a cached resource are processed:

The first request for index.php can’t be served from cache; therefore it is sent to PHP for processing and the result is stored in the FastCGI cache.
The second request can already be served from cache. No PHP execution is required, and the response time for the request is much shorter.
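If you want to verify this behavior without a tracing tool, Nginx can expose the cache status in a response header. This is an optional addition to the location block shown above; X-Cache-Status is an arbitrary header name chosen for this example:

location / {
  fastcgi_index index.php;
  include fastcgi_params;
  fastcgi_param  SCRIPT_FILENAME /var/www/index.php;
  fastcgi_pass php;
  fastcgi_cache SPX;
  fastcgi_cache_valid 200 60m;
  # report whether the response was served from cache (HIT, MISS, EXPIRED, ...)
  add_header X-Cache-Status $upstream_cache_status;
}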

Load balancing

If you have a high number of requests for your PHP application, you might want to move PHP to a different host and change the upstream to connect via a TCP socket rather than a Unix socket. You might also consider splitting the load across several servers. This can be done very easily by just adding more servers to the upstream. By default, incoming requests are then distributed to these hosts in round-robin fashion, in the order of their appearance in the list.

upstream php {
  ip_hash;
  server 192.168.101.1:9000;
  server 192.168.101.2:9000;
  server 192.168.101.3:9000;
  server unix:/var/run/php5-fpm.sock backup;
}

By specifying the ip_hash directive you can make sure that requests from a certain IP address are always forwarded to the same host. This guarantees that sticky sessions always find their session data. Another option would be to allow requests to be sent to different hosts, but keep the session data in a commonly accessible store such as memcached, as sketched below.
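A sketch of the latter approach, using the memcached session handler in php.ini; the memcached server address is an assumption for this example, and the memcached PHP extension (php5-memcached on Ubuntu) is required:

; store sessions in memcached instead of local files
session.save_handler = memcached
session.save_path = "192.168.101.10:11211"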

The backup option allows defining a server that is only used if no other server is available.

This diagram shows PHP requests that have been distributed from Nginx to two different servers.

Performance Management

Once your environment is configured properly to run your PHP application on Nginx, be sure to trace your transactions and monitor end-to-end performance. It is important to consider all tiers, starting at the browser and going all the way down to the database.

End-to-end monitoring of your application allows you to find problem patterns and performance hotspots easily.

I have used the Dynatrace AppMon Free Trial to monitor the application. Register for your 30-day Free Trial with the option to get a lifetime license for your local machine. Learn more about our Share Your PurePath Program!