PostgreSQL is a powerful, enterprise-class database, serving companies like Skype, Instagram, and Etsy. However, as enterprise data volumes grow, PostgreSQL performance can suffer. By keeping an eye on overall database health and proactively looking for potential problems, you can resolve them before they have a chance to affect the user experience.
By following this database performance checklist, you can quickly find issues and optimize your PostgreSQL database accordingly:
By checking CPU, memory, and disk-space metrics, you ensure that your PostgreSQL processes have sufficient resources available.
CPU - PostgreSQL databases deliver better performance on faster CPUs. When monitoring virtual machines, also monitor the virtual host that the machines run on. Metrics like CPU ready time are of particular importance here.
Page faults per second - Thousands of page faults per second indicate that your hosts are running out of memory.
Disk space - For optimal PostgreSQL performance, make sure plenty of disk space remains available on your storage volumes.
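As a minimal sketch of the disk-space check above, Python's standard library can report usage on the volume that holds your data directory. The path and the 90% threshold here are assumptions; substitute your actual PGDATA location and alerting policy:

```python
import shutil

def disk_usage_percent(path: str) -> float:
    """Return the percentage of disk space used on the volume holding `path`."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total * 100

# Hypothetical check: warn when the volume is over 90% full.
# "/" stands in for your real data directory path.
if disk_usage_percent("/") > 90:
    print("WARNING: disk space is running low")
```

In practice you would point this at the PostgreSQL data directory and feed the result into whatever alerting pipeline you already use.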
Knowing which services access your PostgreSQL database is vital for finding performance bottlenecks. If a single service suffers from poor database response times, dig deeper into that service's metrics to find out what's causing the problem.
Take a deeper look into the service's communication with the database and find out which commands affect database performance the most.
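On the database side, the pg_stat_statements extension exposes per-statement call counts and timings, which is one way to see which commands cost the most. The sketch below ranks statements by total execution time; the column layout mirrors pg_stat_statements, but the rows are invented sample data rather than output from a live connection:

```python
# Each row mimics (query, calls, total_exec_time_ms) as exposed by
# the pg_stat_statements view. The data below is invented for illustration.
sample_rows = [
    ("SELECT * FROM orders WHERE id = $1", 120000, 95000.0),
    ("UPDATE inventory SET qty = qty - $1 WHERE sku = $2", 40000, 310000.0),
    ("SELECT count(*) FROM sessions", 500, 12000.0),
]

def top_statements(rows, n=3):
    """Rank statements by total execution time, descending."""
    return sorted(rows, key=lambda r: r[2], reverse=True)[:n]

for query, calls, total_ms in top_statements(sample_rows):
    print(f"{total_ms:>10.1f} ms  {calls:>7} calls  {query}")
```

Note that a statement can dominate total time either because each call is slow or because it is called very often; the call count helps you tell the two apart.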
Even if the way you query your database is perfectly fine, you may still experience poor database performance. Make sure your application's database connection pool is correctly sized.
If a database performance issue suddenly appears, process-level visibility comes in handy for identifying the failing component.
Dynatrace monitors and analyzes your PostgreSQL databases' performance across all platforms, providing visibility down to individual database statements.