There has already been a plethora of commentary regarding the cloud outages of 2011, with Amazon bearing the brunt of it.
Most of it has focused - unfairly - on cloud computing. The detailed technical descriptions of what went wrong point clearly to the real issue, one that is rarely discussed at all and even more rarely discussed in the context of cloud computing. The issue? As my storage-minded colleague Don MacVittie expounded in a recent blog: “While plenty of people have had a mouthful (or page full, or pipe full) of things to say about the Amazon outage, the one thing that it brings to the fore is not a problem with cloud, but a problem with storage.” (Once Again I Can Haz Storage as a Service, April 2011)
Let me echo Don's sentiment: there is a problem with storage. Now let me expand that to include “and it's compounded by cloud computing”.
Availability
While application and network virtualisation have enabled architectures designed for failure, ie, supportive of failover, storage virtualisation has not.
The underlying problem is that storage virtualisation is about aggregating resources for the purpose of expanding the capacity of the entire storage network, not ensuring the availability of individual files. Storage virtualisation controllers, unlike application delivery controllers, do not provide failover. If a resource, ie, a file system, becomes unavailable, it's unavailable. There is no backup, no secondary, no additional copy of that file system to which the storage virtualisation controller can redirect users. Storage virtualisation solutions simply aren't designed with redundancy in mind, and redundancy is critical for enabling availability of resources.
Redundancy is critical, but not the only technological feature required. Interfaces to the storage, too, must be normalised across redundant resources. A common interface allows transparent failover from one resource to another in the event of failure, making it possible to take advantage of redundancy.
Applications, for example, especially in cloud computing environments, generally take advantage of the ubiquitous nature of HTTP. Availability of applications is made possible by the existence of multiple copies of the application (redundancy) and by the fact that all clients are accessing the application via HTTP. If one instance fails, clients can seamlessly interact with a secondary or tertiary “copy” of the application via the same protocol.
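To see concretely why a common protocol plus redundant copies yields availability, here is a minimal Python sketch; the replica URLs are invented for illustration. Because every instance speaks the same HTTP, a client can simply walk an ordered list of copies until one answers:

```python
import urllib.request
import urllib.error

# Hypothetical replicas of the same application; all speak plain HTTP,
# so the client needs no instance-specific logic to fail over.
REPLICAS = [
    "http://app-primary.example.com/status",
    "http://app-secondary.example.com/status",
    "http://app-tertiary.example.com/status",
]

def fetch_with_failover(urls, timeout=2):
    """Try each replica in order; return the first successful response body."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # this replica is unavailable; try the next one
    raise RuntimeError("all replicas failed") from last_error

# body = fetch_with_failover(REPLICAS)
```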
Storage virtualisation solutions have addressed the problem of normalised interfaces by acting as a go-between, a proxy, to provide a single interface to clients while managing the complexity of heterogeneous storage systems in the background.
But the protocols used to manage storage resources internal to the storage architecture are not always the ones used to access storage resources across physically disparate environments, such as between the data centre and a cloud computing environment. Every provider presents its own “service interface”, requiring storage virtualisation solutions to use customised access methods to integrate such services into the enterprise architecture.
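One way to picture that go-between is as a thin adapter layer: a single normalised interface in front, provider-specific access methods behind it. The sketch below is illustrative only - the backends are invented, not any vendor's design - but it shows why every additional provider costs another hand-written adapter, which is exactly why only the most popular services tend to be supported:

```python
from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Normalised interface the virtualisation layer exposes to clients."""

    @abstractmethod
    def read(self, path: str) -> bytes: ...

    @abstractmethod
    def write(self, path: str, data: bytes) -> None: ...

class LocalNASBackend(StorageBackend):
    """In-data-centre storage reached over a file protocol (sketch only)."""
    def read(self, path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()
    def write(self, path: str, data: bytes) -> None:
        with open(path, "wb") as f:
            f.write(data)

class CloudObjectBackend(StorageBackend):
    """A cloud provider's proprietary 'service interface', wrapped to match.
    The HTTP calls are elided; each provider needs its own custom adapter."""
    def __init__(self, endpoint: str):
        self.endpoint = endpoint
    def read(self, path: str) -> bytes:
        raise NotImplementedError("provider-specific GET call goes here")
    def write(self, path: str, data: bytes) -> None:
        raise NotImplementedError("provider-specific PUT call goes here")
```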
For intra-provider redundancy this is not a problem, but for inter-provider redundancy it becomes a very serious drawback, as generally only the most popular provider services are supported.
Redundancy and interfaces
What storage services need, particularly in cloud computing environments, is the ability to provide for failover - whether across environments (DC-DC, DC-cloud, cloud-cloud) or internal to the environment.
Storage virtualisation solutions must take the next step toward availability, and ultimately, true storage as a service. That means making storage services available in a more standards-oriented way to enable inter-cloud and ultimately inter-architecture compatibility. Normalised interfaces would make it possible for storage virtualisation solutions typically deployed in large enterprises to take advantage of external storage, without the complexity and dependency on vendor whim or pure populism.
But first, storage virtualisation solutions must implement failover capabilities. They must be able not only to tier data across environments, as they do now, but to replicate and fail over from one to another, to assure availability of the aforementioned services - especially of mission-critical files.
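Continuing the earlier illustrative sketch (it reuses the hypothetical StorageBackend interface), a failover-capable virtualisation layer might replicate every write to all configured backends and walk them in order on reads, so that losing one environment no longer means losing the file:

```python
class ReplicatedStorage(StorageBackend):
    """Writes go to every backend; reads fail over to the next copy."""

    def __init__(self, backends):
        self.backends = list(backends)  # eg [DC NAS, cloud object store]

    def write(self, path: str, data: bytes) -> None:
        # Synchronous replication keeps every copy current; a real system
        # would queue and retry rather than fail the whole write.
        for backend in self.backends:
            backend.write(path, data)

    def read(self, path: str) -> bytes:
        last_error = None
        for backend in self.backends:
            try:
                return backend.read(path)
            except Exception as exc:
                last_error = exc  # this copy is unavailable; try the next
        raise RuntimeError(f"no available copy of {path}") from last_error
```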
Storage virtualisation solutions need to support redundancy in a manner similar to the network and application redundancy that has enabled highly available architectures to date. Without the ability to support redundancy, and thus failover, storage as a service remains a single point of failure that, as evidenced by the Amazon outage, can be disastrous.
Once redundancy, and with it high availability, is achieved internal to the storage architecture, it becomes possible to include external storage as a service as part of that architecture. It is at that point that standard interfaces become imperative for providing customers with choice and flexibility of services.
But first and foremost, it has to be recognised that the real issues with storage as a service are caused by inadequacies in storage technology, not the service technology. Those problems must be addressed, rather than responsibility being laid at the feet of cloud computing.
Lori MacVittie is technical marketing manager for Application Services at F5 Networks.