Analyzing the overall performance of any large computer system is more an art than a science. Understanding the applications and their data flows, and balancing resource demands to achieve the best throughput, requires a holistic view of the IT infrastructure. Clusters add another dimension of complexity to performance analysis in the form of distributing resources among nodes and the resulting inter-node communication.
The service commences with a detailed look at the application design, the workload and the distribution of resources, since understanding all of these is key to improving performance. Cluster-wide performance measurement is followed by an on-site or off-site analysis of the measurement data. The service concludes with a performance report that shows utilization levels of all resources involved, throughput levels and response times by workload (where possible) and detailed tuning recommendations with improvement estimates.
In many environments it is advantageous to understand the system demand by workload. The generic term workload is typically used to define a grouping of system activity, e.g. the demand per application, user community or database instance. The workload analysis service uses process- or thread-based measurements and accounting logs to determine CPU, memory and I/O demands per workload (depending on the operating system's instrumentation). This information is valuable for balancing the workload of a cluster or tuning throughput and/or response times. When combined with the cluster performance analysis, this service shows the best route to balance a cluster, decrease response time or improve throughput.
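To illustrate the idea, the following sketch groups per-process accounting records into workloads and sums their resource demands. The record fields and the process-to-workload mapping are illustrative assumptions; real accounting logs (e.g. psacct/pacct output) differ per operating system.

```python
# Sketch: aggregating per-process accounting records by workload.
# All data and the workload mapping below are hypothetical examples.
from collections import defaultdict

# (process name, cpu_seconds, rss_mb, io_mb) -- sample accounting records
records = [
    ("oracle_db1", 420.0, 2048, 350),
    ("oracle_db1", 380.0, 1900, 290),
    ("httpd",       55.0,  128,  40),
    ("httpd",       61.0,  130,  42),
    ("batch_etl",  900.0,  512, 800),
]

# Map process names to workload groups (application, user community, ...)
workload_of = {
    "oracle_db1": "database",
    "httpd":      "web",
    "batch_etl":  "batch",
}

def demand_per_workload(records, workload_of):
    """Sum CPU, memory and I/O demand per workload group."""
    totals = defaultdict(lambda: [0.0, 0, 0])
    for name, cpu, rss, io in records:
        group = workload_of.get(name, "other")
        t = totals[group]
        t[0] += cpu
        t[1] += rss
        t[2] += io
    return {w: tuple(t) for w, t in totals.items()}

print(demand_per_workload(records, workload_of))
# database workload: 800.0 CPU-seconds, 3948 MB RSS, 640 MB I/O
```

The per-workload totals produced this way form the demand figures that feed the capacity model described below.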
Capacity planning is the process of estimating the growth (or shrinkage) of system resource usage over time. It allows IT management to determine the right time for upgrades before the cluster runs out of steam. A workload analysis is used to determine the demands per workload, which form the basis of a capacity model. Once verified, the capacity model allows the performance analyst to calculate the impact of changes to the configuration or applications, such as:
- faster CPUs or disks (decreasing demand per unit)
- higher throughput (increase in units/sec)
- new application functionality (increasing demand per unit)
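A minimal capacity model can be sketched with the operational laws of queueing theory: utilization is throughput times service demand per unit of work (U = X · D), and for an open single-resource queue the response time is R = D / (1 − U). The numbers below are illustrative assumptions, not real measurements.

```python
# Sketch of a capacity model based on operational laws.
# U = X * D   (utilization = throughput * demand per unit of work)
# R = D / (1 - U)   (response time of an open single-resource queue)

def utilization(throughput, demand):
    """Fraction of time the resource is busy."""
    return throughput * demand

def response_time(throughput, demand):
    """Response time at the resource; undefined at or past saturation."""
    u = utilization(throughput, demand)
    if u >= 1.0:
        raise ValueError("resource saturated: U >= 1")
    return demand / (1.0 - u)

# Baseline (hypothetical): 40 transactions/sec, 20 ms CPU demand each
X, D = 40.0, 0.020
print(utilization(X, D), response_time(X, D))    # ~0.8 busy, ~0.1 s

# What-if: a 25% faster CPU decreases the demand per unit
print(response_time(X, D * 0.75))                # ~0.0375 s

# What-if: throughput grows to 45 tx/sec on current hardware
print(response_time(45.0, D))                    # ~0.2 s
```

The what-if runs show the leverage a capacity model provides: a modest throughput increase near saturation doubles the response time, while a faster CPU cuts it sharply.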
Running a benchmark is more than putting an application on a computer and making it perform as fast as possible. A proper cluster configuration, relevant application profiles, a load generator and a sufficient data size are prerequisites for a successful benchmark. Letting us conduct the test avoids the pitfalls of benchmarketing and ensures that the achieved results are relevant in the real world.