Is the database the weakest link?

Databases are often taken for granted and left to look after themselves.

By Michael de Andrade, MD of EnterpriseWorx.
Johannesburg, 03 Jul 2009

It is commonly accepted that a computer system is only as strong as its weakest link. If any one of the processor, storage system, network, memory or application is operating sub-optimally, it will tend to compromise the performance of the overall system.

One performance culprit that is often overlooked, though, is the database, which has come to be at the heart of today's corporations.

Our dependency on the relational database today is complete: it is fair to say that without these databases, which have been around for just under four decades, commerce as we know it would not exist.

The modern database, with its enormous capacity and potentially high performance, made it possible to separate logic, data and presentation layers, and centralise the database: this was the inherent design of the client/server architecture, and it propelled the spectacular growth of IT through the late '80s and '90s. In the modern era of software as a service, the need for powerful, optimal central databases is more relevant than ever.

Today's databases tend to be stable, many of them running unattended for long periods of time. Because of this stability, they are often taken for granted and left to look after themselves. Then performance across the system starts to degrade and users find themselves staring at an hourglass, while IT personnel try to pinpoint the reason for the newfound latency.

Very often, it is the database that has not been managed properly.

Start your engines

Think of a database like a car: if a car isn't serviced, its performance will slowly degrade; and if a database isn't managed, fine-tuned and optimised, it will slowly lose its edge. A lack of maintenance may lead to embarrassing, and sometimes public, failures.

Depending on how many people rely on the database for their daily processing requirements - whether online or analytical processing - an underperforming database can have significant implications. Three hundred users waiting for the hourglass to stop turning is not just costly; it also leads to end-user frustration and a loss of internal morale. It can and does impact customer service, as customers either experience poor response times through self-service channels, or have to wait while customer service personnel in turn wait for their system to respond.

Given our new way of working - millions of users accessing terabytes of data at any given time, 24x7 - proper database management is critical.

There are no shortcuts when it comes to database management and administration; indeed, it can be said that a shortcut or incorrectly applied practice at any stage of the creation of a database will manifest in problems further down the line.

DBA's checklist

It all starts with design, not just of the actual database, but also of the hardware on which it is hosted. The purpose of the database will dictate the choice of hardware.

Will the database be used for storing transactions? Then an optimal disk subsystem configuration is needed, so as to write to disk as quickly as possible.

Or is the database destined to be used for a BI application? In that case, appropriate hardware should be chosen, and the database designed for effective partitioning, dimension and fact data modelling, and efficient report generation for end-users.
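As a rough illustration of what dimension and fact modelling can look like in practice, the sketch below builds a minimal star schema: one fact table of sales keyed to date and customer dimensions. The table and column names are hypothetical, and SQLite (via Python) is used purely because it is readily available; a production warehouse would also partition the fact table, typically by date, which SQLite does not support.

```python
import sqlite3

# A minimal star-schema sketch for a hypothetical sales BI database.
# All names are illustrative; SQLite is used only because it ships with Python.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (
    date_key     INTEGER PRIMARY KEY,   -- e.g. 20090703
    calendar_day TEXT NOT NULL,
    month        INTEGER NOT NULL,
    year         INTEGER NOT NULL
);

CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    region       TEXT NOT NULL
);

-- The fact table holds only keys and additive measures; descriptive
-- attributes live in the dimensions.
CREATE TABLE fact_sales (
    date_key     INTEGER NOT NULL REFERENCES dim_date(date_key),
    customer_key INTEGER NOT NULL REFERENCES dim_customer(customer_key),
    quantity     INTEGER NOT NULL,
    amount       REAL    NOT NULL
);

-- Indexes on the foreign keys support the joins behind end-user reports.
CREATE INDEX ix_fact_sales_date     ON fact_sales(date_key);
CREATE INDEX ix_fact_sales_customer ON fact_sales(customer_key);
""")

# A typical report query: total sales per region per month.
print(conn.execute("""
    SELECT c.region, d.year, d.month, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d     ON d.date_key = f.date_key
    JOIN dim_customer c ON c.customer_key = f.customer_key
    GROUP BY c.region, d.year, d.month
""").fetchall())
```

Because the fact table carries only keys and measures, the queries behind end-user reports reduce to straightforward joins and aggregations over the dimensions.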

The vital importance of the design phase can never be over-stated: thousands of end-users can find themselves waiting unnecessary seconds while interrogating the BI application. Most will find that latency unacceptable, and rather than put up with the delay they will either under-use the application or not use it at all.

The first step in design would be to determine the purpose of the database: will this be an OLTP database or a data warehouse, for example? This determines the level of normalisation or de-normalisation, among other considerations, required in the data modelling design. The second step would be to meet with the application or BI developers to agree on a standard for database objects - their naming, usage and change management - as well as business standards such as established company hierarchies and data relationships.
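The standard agreed in that second step can be as simple as a shared list of object-name prefixes that change management then enforces. The short Python sketch below is a hypothetical example; the prefixes themselves are illustrative, not a prescribed convention.

```python
# A hypothetical naming standard of the kind a DBA team and the
# application or BI developers might agree on up front.
NAMING_STANDARD = {
    "table":            "tbl_",
    "view":             "vw_",
    "index":            "ix_",
    "stored_procedure": "usp_",
    "trigger":          "trg_",
}

def conforms(object_type: str, name: str) -> bool:
    """Return True if an object name follows the agreed prefix standard."""
    prefix = NAMING_STANDARD.get(object_type)
    return prefix is not None and name.startswith(prefix)

# Such a check can run as part of change management, flagging objects
# that slip through without the agreed prefix.
assert conforms("table", "tbl_customer")
assert not conforms("view", "customer_view")
```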

The third step would be the actual design of the database itself, bearing in mind all the points mentioned above, including foreign keys and indexing strategies. Normalisation for an OLTP system, or fact and dimension modelling with potential de-normalisation for a data warehouse, also has to be considered. The last step would be an iteration of implementation, testing, performance monitoring and design improvement.
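To make the third and fourth steps concrete, here is a minimal sketch of a normalised OLTP fragment with explicit foreign keys and supporting indexes, again using SQLite via Python purely for illustration; the customer/order schema and all names are hypothetical.

```python
import sqlite3

# A minimal normalised OLTP fragment: customers and their orders.
# The foreign keys and indexes illustrate the strategies mentioned
# above rather than prescribe a design.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")   # SQLite enforces FKs only when enabled
conn.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    email       TEXT NOT NULL UNIQUE       -- one row per customer: no repeated data
);

CREATE TABLE customer_order (
    order_id    INTEGER PRIMARY KEY,
    customer_id INTEGER NOT NULL REFERENCES customer(customer_id),
    ordered_at  TEXT    NOT NULL,          -- ISO-8601 timestamp
    total       REAL    NOT NULL
);

-- Index the foreign key so 'orders for this customer' lookups do not scan
-- the whole table; OLTP indexing should follow the actual access paths.
CREATE INDEX ix_customer_order_customer ON customer_order(customer_id);
""")

# The last step is iterative: load representative data, measure the
# queries the application really runs, and adjust the design accordingly.
conn.execute("INSERT INTO customer (name, email) VALUES (?, ?)",
             ("Acme Ltd", "orders@acme.example"))
conn.execute("INSERT INTO customer_order (customer_id, ordered_at, total) "
             "VALUES (1, '2009-07-03T09:00:00', 199.99)")
print(conn.execute(
    "SELECT c.name, o.total FROM customer_order o "
    "JOIN customer c ON c.customer_id = o.customer_id"
).fetchall())
```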

* In the next Industry Insight in this series, I'll explore the issue of database design a little further.

* Michael de Andrade is MD of EnterpriseWorx.
