During the first quarter of 2020, Gartner reported a 90.2% year-on-year increase in enterprise petabytes delivered.
The report noted this demand was anticipated to continue at ‘new normal’ rates through 2022 and beyond. This is hardly surprising, as businesses across the globe spent much of 2020 scrambling to keep delivering services online while simultaneously supporting remotely located staff.
IDC endorsed this trend with predictions that the collective sum of the world's data will grow from 33 zettabytes in 2018 to 175ZB by 2025, a compound annual growth rate of 61%. The 175ZB figure represents a 9% increase over the previous year's forecast of data growth by 2025.
IT teams need to adapt to this jump in data creation by updating storage strategies, but remember, it's not just about having enough space. The growth in ransomware threats, compounded by remote working models, makes it more critical than ever to ensure data is secure and accessible.
Traditional data storage methods bring challenges, including scant flexibility for expanding capacity and, of course, high hardware costs. Managing storage is time-consuming, and since conventional storage often lacks deduplication and compression, data isn't stored efficiently.
Moreover, migrating data is a massive undertaking when upgrades are factored in alongside backup and disaster recovery. IT organisations continue to struggle with the daunting task of backing up growing quantities of data within diminishing backup windows.
Too often, IT administrators respond reactively to emergency restore requests, working simply to keep the backup infrastructure abreast of demand. What they should be evaluating instead are disk-based immutable backup storage solutions.
However, many current solutions are based on a scale-up architecture with limited scalability and performance. Once the scalability limits are reached, the only options are adding another standalone array with separate management, or undertaking the arduous task of a forklift upgrade to replace the existing array.
This approach results in many islands of backup data that are complex to manage and a significant increase in the cost of ownership. Typically, these storage solutions are not immutable, making them vulnerable to ransomware attacks. Many organisations are adopting scale-out immutable storage solutions that eliminate many issues afflicting traditional data storage methods.
What is scale-out immutable storage?
Scale-out immutable storage is network-attached storage (NAS) in which disk space can be expanded by adding more drives to individual storage clusters, and more clusters, as needed – and in which data, once written, cannot be modified or deleted until a defined retention period expires.
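To make the 'immutable' half of that definition concrete, here is a minimal sketch of write-once-read-many (WORM) semantics – the behaviour an immutable backup target enforces at the storage layer. The ImmutableStore class and its retention parameter are hypothetical illustrations, not any vendor's API.

```python
import time

class ImmutableStore:
    """Toy model of WORM (write-once-read-many) backup storage.

    Each object is written exactly once and then locked for a retention
    period; overwrites and early deletes are refused - which is what
    blunts a ransomware attack that tries to encrypt or purge backups.
    """

    def __init__(self, retention_days=30):
        self.retention_seconds = retention_days * 86400
        self._objects = {}  # key -> (data, written_at)

    def put(self, key, data):
        if key in self._objects:
            raise PermissionError(f"{key!r} is immutable; overwrite refused")
        self._objects[key] = (data, time.time())

    def get(self, key):
        return self._objects[key][0]

    def delete(self, key):
        _, written_at = self._objects[key]
        if time.time() - written_at < self.retention_seconds:
            raise PermissionError(f"{key!r} is retention-locked; delete refused")
        del self._objects[key]

store = ImmutableStore(retention_days=30)
store.put("backup-2021-01-31.img", b"...backup data...")
# A second write to the same key - e.g. ransomware re-encrypting the
# backup - would raise instead of silently replacing the good copy:
# store.put("backup-2021-01-31.img", b"encrypted garbage")  # PermissionError
```

The point of the real thing is that these rules are enforced by the storage system itself, not by software that a compromised administrator account could bypass.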
Scale-out storage builds on the concept of clustering by adding features like data deduplication and compression, simplified remote management, and built-in backup and disaster recovery options. Ultimately, scale-out isn't just another way to store data; it's a better way to manage, protect and even recover it. Businesses taking this approach will, in most cases, see time savings, increased efficiency and reduced downtime.
What are the benefits of immutable scale-out storage?
The first is push-button scalability. Traditional storage isn't practical anymore; with the explosive growth of data, legacy systems are quickly hitting their limits. That leaves businesses with a few options to consider and questions to ask.
For example, should they move data to the cloud and put their trust in a third party? Or should they continue supporting their own infrastructure through expensive upgrades?
Many companies are concluding that the answer lies with making the transition to scale-out immutable storage. Specifically, with object-based scale-out storage, businesses can future-proof their storage infrastructure.
In a nutshell, instead of having storage scattered across locations and hardware, object-based storage lets companies treat all storage as one global pool. When it's time to upgrade, nodes and drives can be added to the storage cluster.
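As a rough illustration of why growing the pool doesn't force a wholesale migration, the sketch below uses consistent hashing – one common placement technique in object stores, though individual products differ – to show that adding a node remaps only a fraction of objects:

```python
import hashlib

def ring_position(name):
    """Map a node or object name onto a 32-bit hash ring."""
    return int(hashlib.sha256(name.encode()).hexdigest(), 16) % (2**32)

def owner(obj, nodes):
    """Assign an object to the first node clockwise from its ring position."""
    pos = ring_position(obj)
    ring = sorted(nodes, key=ring_position)
    for node in ring:
        if ring_position(node) >= pos:
            return node
    return ring[0]  # wrap around the ring

objects = [f"chunk-{i}" for i in range(10_000)]
before = {o: owner(o, ["node-a", "node-b", "node-c"]) for o in objects}
after = {o: owner(o, ["node-a", "node-b", "node-c", "node-d"]) for o in objects}

moved = sum(before[o] != after[o] for o in objects)
# Only a fraction of objects move to the new node - the rest stay put.
print(f"{moved / len(objects):.0%} of objects moved")
```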
One would assume that managing storage infrastructure should be easy. In reality, given the pace at which data is growing – compounded by the proliferation of data silos and the varying types and sensitivity of data – it is anything but.
By centralising the entire data infrastructure, organisations can be more efficient, create uniform policies, and even run backups and recoveries, saving IT admin a lot of time and effort.
Traditional storage is rarely used effectively. Inline deduplication and data compression can help solve that issue, enabling companies to use their storage space more efficiently.
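Here is a minimal sketch of how those two techniques interact, assuming fixed-size chunking (production systems typically use variable, content-defined chunks): each chunk is identified by its hash, stored compressed exactly once, and merely referenced thereafter.

```python
import hashlib
import zlib

CHUNK_SIZE = 4096  # fixed-size chunks, for simplicity

chunk_store = {}  # hash -> compressed chunk, each stored once

def write(data):
    """Split data into chunks; store each unique chunk compressed, once."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in chunk_store:                    # deduplication
            chunk_store[digest] = zlib.compress(chunk)   # compression on first write
        recipe.append(digest)
    return recipe  # the "file" is just a list of chunk hashes

def read(recipe):
    """Reassemble data from the chunk store."""
    return b"".join(zlib.decompress(chunk_store[d]) for d in recipe)

# Two nightly backups that share most of their content share most chunks:
night1 = write(b"A" * 8192 + b"unique tail 1")
night2 = write(b"A" * 8192 + b"unique tail 2")
assert read(night1).endswith(b"unique tail 1")
print(f"{len(chunk_store)} unique chunks stored for "
      f"{len(night1) + len(night2)} chunk references")
```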
It is also beneficial for companies to be able to buy more storage as needed. Since storage space is treated as one pool, IT admins can optimise the space used on each piece of hardware – meaning no half-full drives that are never fully utilised.
No storage infrastructure is complete without a backup and recovery plan. Companies need to schedule backups, set retention policies and replicate data to a variety of destinations: on-premises replication targets, offsite data centres or a cloud service provider.
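To illustrate what a retention policy amounts to in practice, here is a simple sketch of a grandfather-father-son schedule. The target names are placeholders, and real products expose this as configuration rather than code.

```python
from datetime import date, timedelta

REPLICATION_TARGETS = ["onprem-dr-array", "offsite-dc", "cloud-bucket"]  # placeholder names

def prune(backups, keep_daily=7, keep_weekly=4, keep_monthly=12):
    """Return the backup dates a grandfather-father-son retention policy keeps."""
    backups = sorted(backups, reverse=True)            # newest first
    keep = set(backups[:keep_daily])                   # the last N nightly backups
    sundays = [b for b in backups if b.weekday() == 6]
    keep |= set(sundays[:keep_weekly])                 # weekly copies
    firsts = [b for b in backups if b.day == 1]
    keep |= set(firsts[:keep_monthly])                 # monthly copies
    return keep

today = date(2021, 6, 30)
history = [today - timedelta(days=i) for i in range(120)]  # four months of nightly backups
kept = prune(history)
print(f"Retain {len(kept)} of {len(history)} backups; "
      f"replicate each to {REPLICATION_TARGETS}")
```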
Data protection is mission-critical for many businesses, with some constrained by compliance directives to store backups offsite. Achieving this with enhanced efficiency and cost savings may require close scrutiny of the IT infrastructure.
Scale-out architecture allows businesses to seamlessly add one drive at a time or multiple nodes in a cluster, making it ideal for large-scale unstructured data and backup targets.
Data growth is inevitable, regardless of the sector. Businesses need to question whether their current infrastructure can meet surging and unpredictable data demands and the potential impact on the IT budget.
Traditional storage is no longer an option – it simply won't cut it in today's era of surging data. Scale-out storage solutions can lower the total cost of ownership, save management time, and protect against costly data loss and downtime – all of which adds up to one conclusion: you need to examine an immutable scale-out approach.