How many times do people stand in a shop and think: "The service is a bit slow in here"? Chances are they walk out, decide not to come back, and even advise friends and colleagues to avoid the place. Then comes the blame apportionment. They may start with the shop's management - surely it could have planned better for busy times? Customers may cut it a little slack: perhaps it wasn't expecting that many patrons midweek and was understaffed.
Or perhaps they blame the staff for showing little interest in serving them properly.
An IT department can sometimes be described in exactly the same way. Its customers - business executives, external customers, internal employees and others - all want their applications to work fast and flawlessly. And another customer - the CFO - doesn't want to pick up an unnecessarily big bill at the end.
It all comes down to IT management and capacity planning, which lets a company reliably predict how much IT capacity it needs based on past application performance and organise its infrastructure accordingly. Get it right and you deliver an exceptional end-user experience, reduce costs and minimise risk. Get it wrong and you are gambling with lost business through application downtime, high capital expenditure (capex), extra costs from under-utilised infrastructure, and uncertainty over whether the infrastructure can deliver the required service levels.
Planning capacity is a big challenge. Budgets are flat, while the demand for IT services has skyrocketed, driven by the march of mobile services, cloud and the consumerisation of IT. Complexity is everywhere: a blend of physical, virtual, cloud and mainframe systems all need to be optimised to deliver business-critical applications.
Search for answers
So, what is the obvious first option?
Over-provision? Yes, that can be done: build up resources - in this case infrastructure - to meet the forecast increase in demand for IT services. But this approach could send a company out of business quite quickly. If demand doesn't materialise as predicted, the company is left with idle servers. Over-provisioned data centres not only demand higher capex, they also drive up operational costs, including maintenance, upgrades, power and licensing. And remember that the average utilisation of a virtualised server can be as low as 20%. Over-provisioning, then, is not the answer.
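As a rough, back-of-envelope illustration of what that waste can look like, consider the sketch below. The 20% utilisation figure comes from the point above; the server count and the annual cost per server are invented for illustration only.

```python
# Back-of-envelope illustration of the over-provisioning point above.
# The 20% utilisation figure is cited in the text; the server count and
# cost figure are invented examples, not real pricing.

server_count = 100                 # hypothetical virtualised estate
avg_utilisation = 0.20             # average utilisation cited above
annual_cost_per_server = 8_000     # assumed capex + opex (power, licences, upkeep)

effective_servers = server_count * avg_utilisation
idle_spend = server_count * (1 - avg_utilisation) * annual_cost_per_server

print(f"Useful work equals roughly {effective_servers:.0f} fully loaded servers")
print(f"Annual spend on idle capacity: {idle_spend:,.0f}")
```

On those assumed numbers, a 100-server estate does the work of about 20 fully loaded machines while 640,000 a year goes on capacity that sits idle.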
Most of the answers lie with application performance management (APM). It captures a company's transaction performance data from sources such as applications, end users and infrastructure, then uses integrated end-user experience information to help prioritise problem resolution and manage service level agreements. But a company still needs a way to address the capacity issue cost-effectively without increasing risk to the business.
Comprehensive solution
What's needed is unified predictive capacity planning, which blends the power of APM with the control of capacity management. Bringing the two together in a single solution makes it possible to forecast future capacity needs more reliably, so a company can mitigate risk, help ensure quality of service and right-size the application delivery environment while optimising costs.
This works in two ways. First, when a problem occurs, APM alerts IT operations to an incident - such as a server running at more than 80% utilisation. Recognising the need to act, the company drills into the server's workload data and uses the capacity management component to run 'what if?' scenarios for the affected server and the entire application delivery chain, resolving the underlying system problem that triggered the APM alert.
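To make that flow concrete, here is a minimal sketch in Python. Everything in it - the Server class, the 80% threshold modelled as a simple ratio, the what_if function - is an illustrative assumption, not the API of any real APM or capacity management product.

```python
# Minimal sketch of the alert-then-model flow described above.
# All names and numbers here are illustrative, not a real APM API.

from dataclasses import dataclass

ALERT_THRESHOLD = 0.80  # the APM alert fires above 80% utilisation


@dataclass
class Server:
    name: str
    capacity: float   # work units the server can process per hour
    workload: float   # work units it is currently handling per hour

    @property
    def utilisation(self) -> float:
        return self.workload / self.capacity


def check_alert(server: Server) -> bool:
    """Mimic the APM alert: flag a server that is running hot."""
    return server.utilisation > ALERT_THRESHOLD


def what_if(server: Server, extra_capacity: float = 0.0,
            workload_delta: float = 0.0) -> float:
    """Project utilisation under a hypothetical change to capacity or load."""
    return (server.workload + workload_delta) / (server.capacity + extra_capacity)


app_server = Server("app-01", capacity=1000.0, workload=850.0)

if check_alert(app_server):
    # Scenario 1: add a node worth 500 units of capacity.
    print(f"Add a node:  {what_if(app_server, extra_capacity=500.0):.0%}")
    # Scenario 2: shift 200 units of batch work off-peak.
    print(f"Shift batch: {what_if(app_server, workload_delta=-200.0):.0%}")
```

In this toy version, the 85%-utilised server drops to about 57% with an extra node, or 65% if batch work moves off-peak - exactly the kind of comparison a 'what if?' scenario is meant to support.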
Take the real case of a multinational food and beverage company. Its existing testing process failed to uncover bottlenecks in its large-scale SAP environment; testing was also expensive, and the company was under pressure to cut costs and improve service delivery. It deployed a predictive reliability solution to provide an early warning system during the design, test and production phases, so development teams could review the data, redesign, and avoid problems before they reached production. As a result, the company saved significant time and money by catching design problems early in the life cycle.
The second scenario for predictive capacity planning involves right-sizing the environment for future growth. Using APM performance data from production, the solution lets the company run scenario analyses that simulate different load patterns, so it can optimise its production infrastructure with the right system configurations for the planned workload. That is how a leading financial services organisation uses predictive capacity planning. The firm had to adapt to changing consumer expectations - always there, always on, always with me - while lowering the cost of building and operating applications. Model-based performance testing gave it the confidence to know what would happen for any given change to its application environments. Costs went down, performance went up, and confidence in the supporting infrastructure soared.
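The kind of scenario analysis described here can be sketched in a few lines. The sketch below assumes a per-transaction CPU cost has been measured in production (the sort of figure APM data can supply); everything else - the load patterns, server size and target utilisation - is invented for illustration.

```python
# Hedged sketch of right-sizing via load-pattern scenarios: given a measured
# per-transaction cost and projected peaks, estimate how many servers are needed.
# All figures below are assumed values for illustration.

import math

CPU_SECONDS_PER_TXN = 0.05       # per-transaction cost, as measured via APM (assumed)
SERVER_CPU_SECONDS_PER_SEC = 8   # one 8-core server's CPU budget per second (assumed)
TARGET_UTILISATION = 0.60        # headroom so peaks do not breach service levels

scenarios = {
    "today": 1_200,              # peak transactions per second
    "+25% growth": 1_500,
    "promotion-day spike": 3_000,
}

for name, peak_tps in scenarios.items():
    demand = peak_tps * CPU_SECONDS_PER_TXN   # CPU-seconds required per second
    servers = math.ceil(demand / (SERVER_CPU_SECONDS_PER_SEC * TARGET_UTILISATION))
    print(f"{name:>20}: {servers} servers at {TARGET_UTILISATION:.0%} target utilisation")
```

On these assumed figures the answers come out at 13, 16 and 32 servers respectively - the point being that the configuration is derived from measured performance and planned workload rather than guesswork.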
Predictive capacity planning lets companies reliably right-size their infrastructure based on real performance trends.