The 5 elements of Application Performance Monitoring

Application Performance Monitoring (APM) is the umbrella term for all of the initiatives needed to get a firm grip on your application landscape. Correctly configuring all of the related aspects is essential to maintaining that grip in a world in which the IT landscape is constantly changing.

Gartner has developed a 5-dimensional model that you can use to maintain control of your application performance. The dimensions are End User Experience, Runtime Application Architecture, Business Transactions, Deep Dive Component Monitoring and Analytics / Reporting. In this blog post I’ll be discussing all of these dimensions.

[Figure: Gartner's five-dimensional APM model]

End User Experience

The best way to get a grip on the status of your application landscape is to monitor it from the end user's perspective. Measuring the end-user experience not only gives the clearest picture, it also makes it possible to respond quickly to faults and to reliably assess their impact. There are roughly two ways of analysing the end-user experience.

The first is agentless monitoring. This method taps the network traffic that passes through the load balancers and switches and analyses it with data probes. This yields information about the performance of transactions throughout the whole infrastructure, and you also learn more about the client in question: location, browser, operating system, etc. The advantage of this approach is that you can present data quickly, since there is no need to build complex scripts, and you gain a lot of information about the clients who use the application. But there are some downsides, too. First, you have no visibility into the network path between the end user and your IT landscape: that leg is simply not included in the monitoring. Secondly, you cannot monitor anything when there are no users, or when the infrastructure fails upstream of the network tap.
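To make the idea concrete, here is a minimal sketch of what a data probe does with tapped traffic: extract per-transaction timing and client details from load-balancer records. The log format, field names and 500 ms threshold are assumptions for illustration, not any specific product's format.

```python
import re

# Hypothetical access-record format from a load balancer (assumed for
# illustration): client IP, quoted user agent, request path, response time in ms.
LOG_LINE = re.compile(
    r'(?P<client>\S+) "(?P<agent>[^"]*)" (?P<path>\S+) (?P<ms>\d+)'
)

def parse_tapped_traffic(lines):
    """Extract per-transaction timing and client details from tapped traffic."""
    records = []
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            records.append({
                "client": m.group("client"),
                "agent": m.group("agent"),
                "path": m.group("path"),
                "ms": int(m.group("ms")),
            })
    return records

sample = [
    '10.0.0.7 "Mozilla/5.0 (Windows NT 10.0)" /checkout 842',
    '10.0.0.9 "Mozilla/5.0 (Macintosh)" /login 120',
]
records = parse_tapped_traffic(sample)
slow = [r for r in records if r["ms"] > 500]  # transactions over an assumed 500 ms budget
```

Note that this only ever sees traffic that reaches the tap, which is exactly the limitation described above.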

The second analysis method is synthetic monitoring. This involves using probes/robots to run scripts that simulate an end user. We often use 3 to 5 monitoring robots to simulate several users, so that there is a constant monitoring flow and you can respond quickly to possible faults even when there are no real users. The advantages of this method are that you always have data available and that you run a fixed pattern against the application, so results are not subject to outside influences. Problems in the application can be detected even before business hours start. This method is also well suited to monitoring the SLAs that are set for an application.
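The core of such a robot is small: run a scripted transaction, time it, and judge the result against a threshold. This is a minimal sketch, with the HTTP client injected as a function so it runs without a network; the URL and 2000 ms threshold are assumptions.

```python
import time

def run_synthetic_check(fetch, url, threshold_ms=2000):
    """Run one scripted transaction and judge it against a threshold.

    `fetch` is injected so the probe can wrap any HTTP client; a real
    robot would pass something built on e.g. urllib.request.
    """
    start = time.monotonic()
    status = fetch(url)                          # simulate the end user's request
    elapsed_ms = (time.monotonic() - start) * 1000.0
    return {
        "url": url,
        "status": status,
        "elapsed_ms": elapsed_ms,
        "ok": status == 200 and elapsed_ms <= threshold_ms,
    }

# A stand-in for a real HTTP call, so this sketch runs without a network.
def fake_fetch(url):
    time.sleep(0.01)
    return 200

result = run_synthetic_check(fake_fetch, "https://example.com/login")
```

Scheduling three to five of these checks from different locations gives the constant monitoring flow described above.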

This way of monitoring is indispensable: without it, you have no information about the end-user experience and risk drawing the wrong conclusions.

Runtime Application Architecture

To avoid faults, or to solve them more quickly, it is vital that the organisation understands how the IT infrastructure fits together.

Faults can be avoided by carrying out a good impact analysis beforehand of the changes being made to the IT infrastructure. If it is unclear which components are situated where and which of them communicate with each other, a change can trigger a chain reaction that presents the end user with an even more serious fault.

You can clear faults more quickly if you are able to link a monitoring signal to the underlying IT infrastructure.

At this stage of the Gartner APM model you have to ensure that you always have access to the right knowledge: what does my IT infrastructure consist of, and which components communicate with each other? It is also important to keep an eye on the change process at all times: which changes are being introduced right now, and which upcoming changes could disrupt each other?

Tools that scan the IT infrastructure at set intervals ensure that you always have the latest information, updated automatically. This rules out human error.
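Once such a tool has charted the landscape, the impact analysis described above becomes a graph traversal: starting from the component you plan to change, walk the dependency map to find everything that could be affected. The component names and map below are hypothetical, a sketch of the idea rather than any tool's actual output.

```python
from collections import deque

# Hypothetical component map, as a discovery tool might chart it:
# each component lists the components that depend on it.
DEPENDENTS = {
    "database": ["order-service"],
    "order-service": ["web-frontend"],
    "web-frontend": [],
    "cache": ["web-frontend"],
}

def impact_of_change(component, dependents=DEPENDENTS):
    """Return every component that could be affected by changing `component`."""
    affected, queue = set(), deque([component])
    while queue:
        current = queue.popleft()
        for dep in dependents.get(current, []):
            if dep not in affected:
                affected.add(dep)
                queue.append(dep)
    return sorted(affected)

impacted = impact_of_change("database")
```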

Business Transactions

In fact, this was already briefly covered in the first stage of the model: End User Monitoring. The things monitored by agentless or synthetic monitors often consist of simple URLs or database calls. These simple transactions can always be mapped back to the transactions the user actually performs, also referred to as business transactions. The SLAs drawn up between the business and IT generally cover performance and availability, specified per business transaction.

Synthetic monitors are the most suitable source for this data, because they run predefined transactions on a fixed schedule.
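Reporting per business transaction then comes down to aggregating those scheduled runs against the agreed targets. A minimal sketch, in which the transaction names, sample data and SLA targets (99% availability, 1000 ms average) are all assumptions for illustration:

```python
# Hypothetical synthetic-monitor samples per business transaction:
# (succeeded, response time in ms).
SAMPLES = {
    "place-order": [(True, 850), (True, 920), (False, 0), (True, 780)],
    "login":       [(True, 110), (True, 95), (True, 130), (True, 105)],
}
SLA = {"availability_pct": 99.0, "max_avg_ms": 1000}

def sla_report(samples, sla):
    """Summarise availability and response time per business transaction."""
    report = {}
    for name, runs in samples.items():
        ok_runs = [ms for ok, ms in runs if ok]
        availability = 100.0 * len(ok_runs) / len(runs)
        avg_ms = sum(ok_runs) / len(ok_runs) if ok_runs else float("inf")
        report[name] = {
            "availability_pct": round(availability, 1),
            "avg_ms": round(avg_ms, 1),
            "meets_sla": (availability >= sla["availability_pct"]
                          and avg_ms <= sla["max_avg_ms"]),
        }
    return report

report = sla_report(SAMPLES, SLA)
```

Because the runs follow a fixed schedule, these numbers are comparable from one reporting period to the next, which is exactly why synthetic monitors suit SLA reporting.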

Deep Dive Component Monitoring

Once we can say something about the end user's experience, we also need to detect faults in the IT infrastructure as quickly as possible and solve them. The aim is to avoid disturbing the end-user experience at all, and if a disturbance does occur, to minimise its impact.

Component monitoring involves monitoring all components of the IT infrastructure: network components, servers, operating systems, middleware and application components. These components can be monitored agentlessly, by periodically polling them, or with an agent. The advantage of agentless monitoring is that the impact on the components is low, because barely any of their resources are used. An agent, however, has more options: agents are usually built specifically for certain components in order to get the best possible result, and they often have more privileges on the components than agentless monitoring tools do.
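The simplest agentless poll is a connectivity check: can we open a TCP connection to the component at all? The sketch below starts its own listener so it is self-contained; in practice the host and port would come from the charted infrastructure.

```python
import socket

def check_tcp(host, port, timeout=2.0):
    """Agentless check: report whether a TCP connection to the component succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a listener we start ourselves, so the sketch needs no real server.
server = socket.socket()
server.bind(("127.0.0.1", 0))          # port 0: the OS picks a free port
server.listen(1)
port = server.getsockname()[1]
up = check_tcp("127.0.0.1", port)      # listener present: check succeeds
server.close()
down = check_tcp("127.0.0.1", port)    # listener gone: check fails
```

Such a check uses almost no resources on the target, which illustrates the low-impact advantage of agentless monitoring; what it cannot tell you is anything about the component's internals, which is where agents come in.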

The best result is achieved by linking component monitoring to the charted IT infrastructure and combining it with the right form of end-user monitoring.

Analytics / Reporting

The tools described above generate a huge amount of data: big data. Translating this data into information is vital to getting a good return on investment. Yes, we are now better able to predict faults, but what are the current trends? When do faults occur, what is the capacity of the IT infrastructure, which changes are on their way, and are we prepared for them?
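A simple example of turning raw measurements into trend information: compute a daily 95th-percentile response time and check whether it is creeping up. The sample data is invented for illustration, and the nearest-rank percentile is one of several common definitions.

```python
# Hypothetical response times (ms) collected by the monitors above,
# one list per day, oldest day first.
DAILY_MS = [
    [120, 130, 140, 900],
    [125, 150, 160, 950],
    [180, 210, 230, 1200],
]

def percentile(values, pct):
    """Nearest-rank percentile of a list of samples."""
    ordered = sorted(values)
    rank = max(0, int(round(pct / 100.0 * len(ordered))) - 1)
    return ordered[rank]

# A rising p95 is an early warning, even while averages still look healthy.
p95_trend = [percentile(day, 95) for day in DAILY_MS]
degrading = all(a <= b for a, b in zip(p95_trend, p95_trend[1:]))
```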

Improving the end-user experience is a continuous process. The IT organisation contains a wealth of information that the business can use to improve its service. Unfortunately, this is often neglected.

Conclusion

I hope that the model explained above will prompt you as a CIO to develop a sound monitoring strategy. During my 10-year career I’ve seen many tools come and go, and these days there are still plenty of developments, such as the cloud. But the model described above can always be applied. New tools make an APM implementation a good deal easier and less expensive, but the essence will remain the same.

Should you have any questions after reading this article please feel free to ask them below. You’re also welcome to do this on Twitter or other social media.

About Coen Meerbeek

Splunk consultant @ Blue Factory, owner and founder @ BuzzardLabs, basketball player and Xbox gamer. Read more from Coen on Launchers.nl and Twitter.
