
The future of storage

Storage in the future could be an odd mix of intelligence and stupidity.

By Paul Furber, ITWeb contributor
Johannesburg, 02 Mar 2009

Storage looks like it could be one sector of the industry that escapes the cutbacks that will be brought on by the global downturn. Whether business is good or bad, companies will still need to store their data somewhere.

The midrange is where the most action is happening: customers want technologies traditionally reserved for the higher end in their data centres, and vendors look happy to oblige.

Management and services will be one growth area within the storage market: making it easier to keep a handle on what's stored, and to store it more efficiently, will be big drivers as costs and budgets get cut. The other driver will be virtualisation: CPUs, operating systems, servers and desktops can now be virtualised with relative ease, but storage needs more work.

According to Manfred Gramlich, storage practice lead at Sun Microsystems SA, virtualising storage is made more difficult by the fact that there are multiple techniques and that there isn't a single standard.

“I think the challenge with storage virtualisation is that there are various techniques: host-based through ISVs, network-based through hardware vendors, and controller-based - so there isn't really a standard. Yes, there is adoption, but it's being implemented at different levels. What we need is a single standard.”
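
To make the idea concrete, here is a minimal sketch, in Python, of what the host-based technique boils down to: a volume manager pools several physical devices and presents them to applications as one logical address space. Device names and block counts are invented for illustration, not taken from any vendor.

    # Sketch of host-based storage virtualisation: physical devices are
    # concatenated into one logical volume. Names and sizes are invented.
    class LogicalVolume:
        def __init__(self, devices):
            self.devices = devices  # list of (name, size_in_blocks)
            self.total_blocks = sum(size for _, size in devices)

        def map_block(self, logical_block):
            """Translate a logical block number to (device, physical block)."""
            if not 0 <= logical_block < self.total_blocks:
                raise ValueError("block out of range")
            remaining = logical_block
            for name, size in self.devices:
                if remaining < size:
                    return name, remaining
                remaining -= size

    pool = LogicalVolume([("disk_a", 1000), ("disk_b", 2000), ("nas_lun", 500)])
    print(pool.total_blocks)     # 3500 blocks presented as a single volume
    print(pool.map_block(2500))  # ('disk_b', 1500)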

The other problem is that server virtualisation is all about getting more processing out of fewer server resources. Storage virtualisation, by contrast, is about presenting more resources.

“You don't want to lose any inherent ability that storage gives you when you virtualise,” says Jan Sipsma, senior systems engineer at EMC. “Storage is much wider than just a CPU. Different vendors are going to do it different ways depending on how they work at the server level.”

Herman van Heerden, MD of Starship Systems, says the problems with storage virtualisation stem from the diversity of media and connections.

“It's much more a conglomeration of different devices: NAS devices, NFS shares - you have to take a lot of different systems and lump them together, and then get different communications channels to talk to each other,” he says.

“As soon as VMware and Microsoft allow more than one terabyte to be dynamically allocated, that will be sorted out. We'll get to the stage where virtual devices will say: 'This is your kernel and memory. Here is your storage. Go and attach to it'.”
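
Van Heerden's “here is your storage, go and attach to it” model amounts to thin provisioning: a virtual disk advertises a large size but only consumes physical space as blocks are actually written. A rough sketch, with invented sizes:

    # Sketch of thin provisioning: the disk advertises a large size but
    # only consumes physical space for blocks actually written.
    class ThinDisk:
        def __init__(self, advertised_blocks):
            self.advertised_blocks = advertised_blocks
            self.allocated = {}  # logical block number -> data

        def write(self, block, data):
            if not 0 <= block < self.advertised_blocks:
                raise ValueError("block out of range")
            self.allocated[block] = data  # space allocated on first write

        def physical_blocks_used(self):
            return len(self.allocated)

    disk = ThinDisk(advertised_blocks=1_000_000)  # guest sees the full size
    disk.write(0, b"boot sector")
    disk.write(42, b"some data")
    print(disk.physical_blocks_used())  # 2 - only what was written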

Unfair advantage

We're not there yet. Server virtualisation has had the benefit of a few years of hard-won experience, points out Vic Booysen, business unit manager at Business Connexion.

“We've learned a lot already on the server side. But, without standards, it's going to take a long time to implement it for storage. In the end, we'll get a lot of benefits, especially from a management point of view and from sweating the older assets. Using older assets in a pool for perhaps lower tiers will play a huge role. Mixing different vendor technologies and using a central point of management - that's where the benefits will come in.”

Some vendors already have virtualised storage offerings. Paul de Reuck, IBM systems and storage brand manager, says these units take resources that are available and present them to the host as capacity or storage space.

“The model we see a lot of our customers moving to is the utility model,” he notes. “Storage will have to present resources to the network so that the fabric can get access to what is available. Then there's the other issue, which is virtualisation within the storage array itself. In order to present and use storage in an efficient manner, we need to be able to control these huge arrays properly. They're becoming an absolute nightmare to manage; not just for IBM, but for other vendors too. Information life cycle management (ILM) and tiering have been a solution for that, but ILM is very difficult to manage in itself. I think the next phase will be tierless: your management software will be able to sort out tiers without someone sitting there doing it for you.”
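
What De Reuck calls “tierless” might look something like the following sketch: data is assigned to a tier purely by how recently it was accessed, with no administrator in the loop. The tier names and age thresholds here are invented for illustration.

    import time

    # Tier names and age thresholds are invented for illustration.
    TIERS = [
        (7,   "tier0_ssd"),            # touched within a week
        (90,  "tier1_fast_disk"),      # touched within a quarter
        (365, "tier2_capacity_disk"),  # touched within a year
    ]

    def choose_tier(last_access_epoch, now=None):
        """Pick a tier purely from the data's last-access age."""
        age_days = ((now or time.time()) - last_access_epoch) / 86400
        for max_age_days, tier in TIERS:
            if age_days <= max_age_days:
                return tier
        return "tier3_archive"

    # A nightly job could migrate anything whose current placement no
    # longer matches choose_tier() - no one assigning tiers by hand.
    print(choose_tier(time.time() - 3 * 86400))    # tier0_ssd
    print(choose_tier(time.time() - 200 * 86400))  # tier2_capacity_disk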

But, as Sun's Gramlich notes, even basic virtualisation could use more simplification. “Virtualisation is about trying to simplify management, but the management still doesn't set up the units for you - the vendor must do that. If you're a customer with a heterogeneous storage environment, you're still going to have to rely on the vendors to manage that storage environment. The trend today is to take all the complexity out of storage, both in the way it's deployed and managed. We still have a way to go.”

Mike Hamilton, MD of Channel Data, notes that storage virtualisation predates desktop and server-based software like VMware.

“If you look at RAID, that's a first attempt at separating the physical from the logical. But in storage, there's only been a limited adoption of standards.”
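
Hamilton's RAID example is easy to show. In RAID-0 striping, a few lines of arithmetic translate the logical block an application asks for into a physical (disk, offset) pair; the host never needs to know where the bits actually live. A sketch with arbitrary parameters:

    # Sketch of RAID-0 striping arithmetic: the logical address space is
    # spread round-robin across member disks in fixed-size stripes.
    def raid0_map(logical_block, num_disks, stripe_blocks):
        stripe = logical_block // stripe_blocks  # which stripe overall
        within = logical_block % stripe_blocks   # offset inside the stripe
        disk = stripe % num_disks                # round-robin disk choice
        physical = (stripe // num_disks) * stripe_blocks + within
        return disk, physical

    # With 4 disks and 16-block stripes, logical block 100 lands here:
    print(raid0_map(100, num_disks=4, stripe_blocks=16))  # (2, 20)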

He predicts that, in the future, there will be two kinds of technology alliance. “I think there will be a partnering of first-tier fabric providers working with fairly dumb storage technologies, and then also a partnering of first-tier storage providers with relatively dumb fabric providers. Partnering in opposite directions, in other words. I don't know how it will turn out. There have been some accelerators for this, one of them higher-speed networking. Even though we have to live with an old protocol like Ethernet, faster networks have made it easier to move data to and from the data centre. But I'm not sure who will win.”

Will storage get more intelligent or less so? Paradoxically, it takes a lot of intelligence to present storage simply. A simple-looking system may have a great deal of processing power and components behind it so that it appears simple to the wider infrastructure.

Storage OS

“If you look at what the major vendors are doing, there's already a lot of intelligence built into the storage platform,” says Business Connexion's Booysen.

“It's quite possible, in the future, that there will be a storage operating system running directly on the storage devices. All the intelligence for the different protocols and throughput will be embedded in the OS.”

Adrian Hollier, storage channel manager at Comztek, says it doesn't matter one way or the other whether the storage or the fabric is intelligent.

“Until we have a unified file system for storage and I can move things seamlessly between operating systems, what's the point of trying to virtualise? Just as we have heterogeneous operating systems, we need heterogeneous storage with simplified management, so that customers can move seamlessly between operating systems with their storage.”

Hamilton says there's a deeper point about the relationship between data and the underlying operating systems. “I draw a parallel with the network industry, which has exactly the same problem. There's data that's valuable and data that's not. There are great volumes flowing in both industries, but the problem is the same: how do you classify it? In an ideal world, the system would be smart enough to tier it properly based on age or importance, or whatever it might be, and it would migrate automatically.

“We're dealing with the storage of whatever is thrown at us, but we don't have decent single-instance storage approaches built into the operating system. We store multiple instances of the same thing and then bitch like hell that the volumes are going up all the time. And it's the unstructured content that's multiplying rapidly.

“It can't just be tackled by a brute force approach of getting faster and more efficient storage at a server level. Rather, I think we have to address the cause. File systems are still pretty stupid. Users can't classify data and it's impossible for a data admin to classify data on behalf of a user.”
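
The single-instance storage Hamilton is asking for is, in essence, content-addressed storage: content is keyed by its hash, so identical items are detected and stored exactly once, however many names point at them. A minimal sketch:

    import hashlib

    # Sketch of single-instance storage: content is keyed by its hash,
    # so identical items are stored once no matter how often they recur.
    class SingleInstanceStore:
        def __init__(self):
            self.blobs = {}  # content hash -> bytes
            self.names = {}  # filename -> content hash

        def put(self, name, data):
            digest = hashlib.sha256(data).hexdigest()
            self.blobs.setdefault(digest, data)  # stored once, ever
            self.names[name] = digest

        def get(self, name):
            return self.blobs[self.names[name]]

    store = SingleInstanceStore()
    attachment = b"quarterly report mailed to fifty people"
    for i in range(50):
        store.put(f"inbox_{i}/report.doc", attachment)
    print(len(store.names), len(store.blobs))  # 50 names, 1 stored copy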

Solid state

One new storage medium that could go a long way towards addressing speed and reliability issues is the solid state disk (SSD). SSDs have no moving parts, are very fast for both read and write operations, and are coming down in price after gaining traction in the ultra-portable market. If you're prepared to pay, you can also use SSDs in the data centre.

Comments De Reuck: “IBM has been focusing on solid state disks with very high performance for some solutions. The problem right now is price. We need utility storage and solid state is not at the point where you can present it as cheap storage. Until then, there's still a sweet spot for [traditional] spinning disks.”

But the high price of SSDs can be factored in, says EMC's Sipsma. “Although the price is still high, the speed that you get is enormous. One solid state drive is the equivalent of 20 or 30 15 000rpm disks. If you need that kind of performance, then you can weigh up the cost of 30 spinning disks versus the solid state drive and see what makes sense. Customers are looking at it, and for some, it's becoming very viable, especially from a power point of view. One SSD consumes a great deal less than 30 spinning disks.”
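
Sipsma's weigh-up is simple to put into numbers. A back-of-envelope sketch, in which every figure is an illustrative ballpark rather than vendor data:

    # Back-of-envelope weigh-up of one SSD against the 15 000rpm drives
    # it can replace on IOPS. Every figure here is an assumed ballpark.
    hdd_iops, hdd_watts, hdd_cost = 180, 15, 300    # per 15 000rpm drive
    ssd_iops, ssd_watts, ssd_cost = 5400, 8, 8000   # per enterprise SSD

    drives_needed = ssd_iops // hdd_iops            # 30 drives to match IOPS
    print(f"{drives_needed} HDDs to match one SSD on IOPS")
    print(f"HDD array: {drives_needed * hdd_watts}W, ${drives_needed * hdd_cost}")
    print(f"One SSD:   {ssd_watts}W, ${ssd_cost}")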

Gramlich agrees that consumer adoption of SSDs is picking up in laptops, but predicts the break-even point in the data centre, from a capacity-cost perspective, will only arrive around 2020.

“For instance, Texas Memory Systems is building very high-end [solid state] arrays now, but they cost a million dollars. Sun doesn't see growth in the traditional ERP and CRM enterprise markets, but rather on the 'net. Look at organisations like Google and YouTube: they don't deploy high-end storage. What do they want? Simple to manage, quick to deploy - as in 10 to 15 minutes. Also evolving are storage appliances built on ZFS, which can talk NFS, iSCSI and Fibre Channel over Ethernet. If there's an easy management platform, then that's where there will be growth.”

SSDs could also prevent a common problem with storage deployment: over-provisioning. Starship Systems' Van Heerden says that as soon as hardware manufacturers can guarantee a storage device won't fail, people will start using what they pay for - and not buying several times more.

“People who use five terabytes now say they need two terabytes for data and another three terabytes for backup. If I know I won't lose 16% of my hard drives in my NAS environment when the power goes down, then I won't need more space. I want to see the adoption of SSD quite urgently, just to remove that risk of mechanical failure. I've seen six disks in a 17TB array fail one week after purchase. Yes, you should have backups in two different places, but it's still difficult to cope with that kind of failure.”
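
Van Heerden's figures translate into a simple provisioning ratio, sketched below with the numbers from his example:

    # The over-provisioning arithmetic in Van Heerden's example:
    # 2TB of live data becomes 5TB of purchased capacity.
    data_tb = 2.0
    backup_tb = 3.0  # extra copies held against device failure
    ratio = (data_tb + backup_tb) / data_tb
    print(f"Provisioning ratio: {ratio:.1f}x")  # 2.5x
    # Remove the risk of mechanical failure and most of that 3TB of
    # headroom could be reclaimed.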

Faster? Slower? Less?

Storage technologists have plenty of ideas about where the art of keeping data secure is going, but what about customers?

“We always land up discussing technology, but what matters is what our customers want,” says Sun's Gramlich. “What's their pain point? What are the things they want from storage? If you look at storage administration, where do the skills come from? There are no universities that offer high-end storage courses, so our current initiatives involve keeping things simple: pre-packaged, high-end storage products with everything included, ready to go.”

Sun isn't the only vendor with this attitude. The midrange storage market looks to be an interesting battleground next year, with some vendors even projecting sales growth. And that's no mean feat in the current climate.
