The ideal IT infrastructure is one that provides companies with all the processing power and storage capacity they need - making allowances for varying needs at different times of the month - at the lowest cost and with the least human interaction.
In addition, the optimal IT set-up will manage all backups, middleware and integration tasks itself to ensure the company always experiences optimal performance and reliability from its data centre hardware and software. All this, of course, will happen automatically, behind the scenes, with IT staff only called on to intervene in exceptional circumstances when hardware and software need to be replaced or upgraded.
"This goal is the foundation of adaptive computing, where resources are assigned to where they are most needed and made dormant when not required," says Bernard Donnelly, consulting services manager, Unisys Africa. "But no matter what computing model is chosen, optimal infrastructure is not about technology, but about management.
"To achieve the optimal infrastructure, organisations need first to examine what they have, what their requirements are and what IT services will deliver to their specifications. This is the first step in policy-based computing."
Practical policy implementation
Policy-based computing allows business and IT to work together to define, in principle, the optimal performance of the data centre in relation to business needs. To put the policy into practice at a lower cost than traditional IT, however, requires a management application that can control the whole organisation's IT infrastructure from a single interface.
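What such a policy contains will vary from business to business, but in essence it maps each workload to measurable limits and prescribed responses. A minimal sketch of how a policy might be expressed as data follows; the component names, thresholds and actions are illustrative assumptions, not any product's actual schema.

```python
# Illustrative only: a policy mapping workloads to measurable limits
# and the corrective actions the management layer may take on a breach.
# Every name and value here is an assumption made up for the example.
DATA_CENTRE_POLICY = {
    "payroll-db": {
        "priority": "critical",
        "limits": {"cpu_percent": 85, "response_ms": 200},
        "on_breach": ["reallocate_capacity", "notify_operations"],
        "peak_windows": ["month-end"],  # extra capacity reserved here
    },
    "intranet-web": {
        "priority": "normal",
        "limits": {"cpu_percent": 95, "response_ms": 1000},
        "on_breach": ["restart_service"],
    },
}
```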
The application in question is not server, application or enterprise management software, for which many solutions already exist, but an all-encompassing management application that combines all of these to effectively govern the performance of IT assets - from mainframe hardware to Microsoft data centre offerings.
"Even companies using Windows as a data centre operating environment need to be sure their systems adhere to the policy," adds Donnelly. "To achieve this requires a systems management solution able to monitor all aspects of the technical landscape, automate corrective and preventative measures to avoid a system failure (instead of raising the alarm after a failure) and increase the overall reliability, scalability and performance of the data centre."
If this management software is to meet the demands of the IT policy, it needs to offer more than load-balancing functionality; it must also provide a set of tools to manage workload, schedule resources and proactively monitor the status of the data centre. But that's only the start.
Health monitoring
Based on the predetermined policy, the software must monitor system events and take automatic action when it identifies problems. Monitoring must be continuous and in real time, providing assurance that all critical components are operating within defined limits or that deviations are being dealt with. In addition, the software must manage the diverse platforms and operating environments in the data centre.
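In practice this amounts to a continuous loop that compares live readings against the policy's limits and dispatches the prescribed response the moment a limit is breached. The sketch below builds on the illustrative policy structure above; read_metrics and dispatch are stand-ins for real agents and corrective actions, assumed here for the example.

```python
import random
import time

def read_metrics(component):
    # Stand-in for querying agents, sensors or OS counters; here it
    # simply simulates readings so the sketch runs on its own.
    return {"cpu_percent": random.uniform(50, 100),
            "response_ms": random.uniform(50, 400)}

def dispatch(action, component):
    # Stand-in for executing a corrective action named in the policy.
    print(f"{component}: executing corrective action '{action}'")

def monitor(policy, poll_seconds=10):
    """Compare live metrics against policy limits on every cycle and
    act on breaches before they become failures."""
    while True:
        for component, rules in policy.items():
            metrics = read_metrics(component)
            breached = [m for m, ceiling in rules["limits"].items()
                        if metrics.get(m, 0) > ceiling]
            if breached:
                for action in rules["on_breach"]:
                    dispatch(action, component)
        time.sleep(poll_seconds)
```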
"The management system enforcing IT policy must also have the ability to integrate existing management products, incorporating data from these systems into its management processes," says Donnelly. "Using open SNMP agents makes it simpler to interoperate and exchange vital information on the status of the systems."
Self-healing is a reality
Any software solution enforcing a well-designed computing policy will have the ability to take action automatically so the system can continue operating without waiting for human intervention. This includes, for example, the ability to automatically reconfigure hardware, re-initialise or reboot the operating environment, or restart partitions and operating system services.
It also includes hardware self-healing capabilities - as far as is physically possible. Once a fault or potential fault is detected, its source must be determined and the offending component isolated and stopped while the rest of the system continues operating. Then, with the business running as normal, the system can notify IT staff that a component needs replacing, restarting the affected services transparently to users once the component is replaced.
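One way such self-healing might be structured is as an escalation ladder: try the least disruptive recovery action first, and isolate the component and alert staff only when automation is exhausted. This is a sketch under assumed names; the actions and callbacks are not any vendor's actual API.

```python
# Escalating recovery: least disruptive action first. All action and
# callback names below are illustrative assumptions.
RECOVERY_LADDER = ["restart_service", "restart_partition", "reboot_environment"]

def self_heal(component, is_healthy, actions, notify):
    """Attempt automatic recovery; isolate the component and alert IT
    staff only once every automated option has been tried."""
    for action in RECOVERY_LADDER:
        actions[action](component)
        if is_healthy(component):
            return True                      # recovered, users unaffected
    actions["isolate"](component)            # quarantine the faulty part
    notify(f"{component} isolated - replacement required")
    return False                             # system runs on, degraded
```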
Unattended operations
Key to the effectiveness of management software, and the ultimate success of policy-based computing, is the software's ability to function without constant human interaction. The system needs to be able to refer to the corporate IT policy and make intelligent, proactive decisions in response to faults, preserving user experience and productivity.
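The decision logic itself can be simple once the policy is explicit. The sketch below encodes one hypothetical rule: during business hours, move a critical workload off a failing component rather than restart it, and reserve disruptive recovery for quiet hours. The rules and action names are assumptions for illustration.

```python
from datetime import datetime

def choose_response(priority, now=None):
    """Pick the least user-visible response the (illustrative) policy
    allows for a fault on a component of the given priority."""
    now = now or datetime.now()
    business_hours = 8 <= now.hour < 18
    if priority == "critical" and business_hours:
        return "migrate_workload"    # keep users running, defer repair
    if business_hours:
        return "restart_service"     # brief, low-impact interruption
    return "reboot_environment"      # full recovery in quiet hours
```

During the working day, choose_response("critical") favours moving the workload over interrupting it; the same fault after hours can trigger a full recovery with no user impact.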
Such unattended operation reduces the overall cost of IT by allowing the system to run in a lights-out environment, extending user productivity, maximising operational staff efficiency and significantly reducing problems introduced through operator error. Naturally, a fully automated inventory and asset management application also needs to be part of the solution, as does the ability of IT staff to access the system remotely when required.
"Policy-based computing is not something one can buy off the shelf," notes Donnelly. "It requires careful thought as to what a business expects from its IT systems, how the systems should perform and react in various situations, with the resulting policy scripted into an enterprise management application able to control the server, application or enterprise levels of every technical infrastructure.
"The result will be an improvement in the overall performance and resiliency of the system as well as a reduction in the amount of human management required. This will ensure that a business is able to continue to generate revenue and maintain high levels of client satisfaction with a data centre designed to survive failures while providing superior availability, reliability and serviceability.