One of the consequences of the broad adoption of technology in organisations of all sizes is an ever-increasing amount of data that needs to be managed and stored in an easily accessible, yet secure manner. The corporate database was designed for this purpose, but the more information a business produces, the larger databases must grow and the greater the cost of running a database and its associated technologies.
"The expense involved in database management does not only manifest in the software itself, but also in the servers required to keep these information stores running at peak performance 24x7," says Bernard Donnelly, consulting services manager at Unisys Africa.
"Large companies with very large databases for business analytics or ERP data cannot afford to have their databases unavailable; therefore they need servers that can manage a heavy-duty processing workload, while allowing users immediate, transparent and secure access to data."
Over the years, server technology has advanced to provide features needed for very large databases (VLDBs). Some of the functionality developed includes symmetric multiprocessing (SMP), massively parallel processing (MPP), synchronised scanning and bitmap indexing - to name a few.
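As a toy illustration of one of these features: a bitmap index maps each distinct column value to a bitmask with one bit per row, so that a query's WHERE clauses reduce to cheap bitwise operations. The following Python sketch uses made-up data and class names (`BitmapIndex` is illustrative, not from any specific database product):

```python
class BitmapIndex:
    """Toy bitmap index: maps each distinct column value to an
    integer bitmask, one bit per row. Equality queries then become
    bitwise AND/OR over the bitmasks."""

    def __init__(self, values):
        self.bitmaps = {}
        for row, value in enumerate(values):
            # Set bit `row` in the bitmask for this value.
            self.bitmaps[value] = self.bitmaps.get(value, 0) | (1 << row)

    def rows_matching(self, value):
        """Return the row numbers whose column equals `value`."""
        bits = self.bitmaps.get(value, 0)
        return [r for r in range(bits.bit_length()) if bits >> r & 1]

# Two low-cardinality columns, the case bitmap indexes favour:
region = BitmapIndex(["EU", "US", "EU", "US", "EU"])
status = BitmapIndex(["open", "open", "closed", "open", "open"])

# WHERE region = 'EU' AND status = 'open'  ->  AND the two bitmasks.
combined = region.bitmaps["EU"] & status.bitmaps["open"]
matches = [r for r in range(combined.bit_length()) if combined >> r & 1]
# matches is [0, 4]: rows 0 and 4 satisfy both predicates
```

Real VLDB engines compress these bitmasks and combine them with the other features listed above, but the principle, trading a little index storage for very fast set intersection, is the same.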
"While all these features made for excellent VLDB performance and reliability, the only platforms initially able to handle the processing strain were proprietary mainframes, followed later by Unix servers," Donnelly adds. "Looking back five years, companies wanting to run their databases on lower-cost Windows servers, such as those running NT4, were limited to four-processor and later eight-processor systems - suitable for most companies, but not for large corporations with VLDB processing needs.
"In addition, Microsoft's NT4 could not deliver the processing and reliability demands VLDBs make on systems. Customers' choices were Unix or mainframe, both costly options."
A new platform emerges
With the launch of Windows 2000, matters changed. The dreaded blue screen of death became a rarity (even more so with Windows Server 2003), allowing companies to rely on the underlying operating system.
While the software component of the server may be up to the task, the hardware part is another matter. A VLDB cannot run on a normal PC server, and for the best performance it is generally unwise to divide the workload and data across multiple small systems. So even with the option of a cheaper Windows server, companies have again been left with only the more costly platform options.
What corporations have needed is a high-end Windows server platform that delivers the reliability, availability and scalability (RAS) features VLDB implementations require, without the high platform costs.
Integrated operational excellence
Analysts have determined that only 20% of platform failures are due to operating system or hardware problems. The other 80% are due to the environment, including people problems. In other words, failure to implement best practices in IT environments is a leading cause of downtime across all platforms and hardware configurations. In effect, downtime is more often a consequence of poor operational procedures than of any particular architecture, and Windows servers, supported by the appropriate disciplines, can achieve high uptime.
Lowering cost of ownership
The price differential between Windows and Unix or mainframe platforms is a result of the combination of standard, non-proprietary hardware and software (such as the Windows Server family and SQL Server database). A cost comparison carried out by the Walklett Group in 2002 compared the costs of Unix hardware (a SunFire 6800) running an Oracle database with an ES7000 running SQL Server and found: "... the ES7000 with Microsoft SQL Server 2000 is significantly less expensive to operate over five years."
"The only reason left to select a Unix or mainframe solution for VLDB processing now is purely a legacy decision," Donnelly notes. "Windows systems can now match or beat the performance of Unix or mainframe solutions at a fraction of the price."