How the new state and future path of storage virtualization will transform the enterprise

What is storage virtualization? In short, it is the pooling of multiple physical storage devices so that they can be presented and managed as a single virtual storage resource. Software-defined storage has been years in the making, and it is particularly relevant for datacenters.

In datacenters, software-controlled storage infrastructure is already commonplace; storage virtualization takes it to a whole new level. With the explosive growth in data and networked storage over the past decade, storage virtualization satisfies a critical imperative: it avoids server and storage sprawl by providing an efficient, streamlined way to access storage.

For enterprises, the parallel priority is to boost utilization and rein in the capital and operational expenditure incurred on such sprawling facilities, while continuing to meet all relevant service-level agreements (SLAs). These factors are the key drivers behind the growth of storage virtualization.

Current state and enterprise advantages

These are some of the key advantages of storage virtualization as noted by IDC in one of their research reports:

  • Storage provisioning is simplified, and capacity can be expanded across different types of storage systems.
  • Data protection is well provided for.
  • Utilization levels for storage assets increase by 20 to 70 percent.
  • Storage virtualization enables seamless movement of data across different types of systems, allowing systems administrators to automatically migrate data to less expensive storage tiers and cut costs significantly.
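As a rough illustration of that last point, the kind of policy-driven migration described here might be sketched as follows. This is a minimal sketch, not code from the IDC report: the tier names, the 90-day threshold, and the `migrate_cold_data` helper are all hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical storage tiers, ordered from most to least expensive.
TIERS = ["ssd", "sas", "archive"]

@dataclass
class DataSet:
    name: str
    tier: str
    last_access: datetime

def migrate_cold_data(datasets, max_age_days=90, now=None):
    """Demote any dataset untouched for max_age_days to the next cheaper tier."""
    now = now or datetime.now()
    moved = []
    for ds in datasets:
        idx = TIERS.index(ds.tier)
        cold = now - ds.last_access > timedelta(days=max_age_days)
        if cold and idx < len(TIERS) - 1:
            ds.tier = TIERS[idx + 1]  # move one step down the cost ladder
            moved.append(ds.name)
    return moved
```

Run periodically against a virtualized pool, a policy like this moves aging data to cheaper capacity without any administrator touching individual devices.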

Drivers for enterprise adoption

These are some of the other drivers for implementing storage virtualization, as identified by HP in one of their white papers:

  • Organizations continue to provision individual storage units manually, leaving buffer capacity trapped and unable to be shared.
  • Siloed management and data services compound the problem: each storage pool, device, and service must be handled separately, which makes management ineffective and drives up costs.

Four infrastructure gaps

There are a few gaps that need to be addressed to effectively implement storage virtualization. They are:

Explosive growth in data constrained by stranded capacity and storage silos

For many years, organizations have been experiencing extraordinary growth in data requirements. IDC has estimated that even during the 2009 recession, the annual rate of growth in networked storage was around 41 percent. As the economy has rebounded, leading to new applications and the increased adoption of digital media by enterprises, IT departments must prepare themselves for higher rates of data growth and specific data storage requirements.

In the past, IT departments have responded to such explosions in data by adding more storage devices.

However, this action led to the creation of storage silos and stranded capacity, which made it very difficult to share storage capacity, shift it to other applications as needed, or repurpose underutilized capacity. Such challenges call for storage tiering, consolidation, effective data migration, and storage virtualization.

Inability to scale or pool storage

There are several ways in which an IT infrastructure made up of stranded disks and silos of storage increases the cost of storage:

  • Separate, disconnected storage pools must each be acquired, deployed, and managed.
  • Low utilization levels waste the investment in storage.
  • Managing separate pools of storage is costly and inefficient.
  • The inability to repurpose storage capacity reduces flexibility and forces the purchase of unnecessary additional storage.

Storage virtualization reduces stranded capacity while enabling all resources to be pooled. Scaling storage then becomes as simple as attaching additional disk capacity to the pool.
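The idea that scaling a virtualized pool is "as simple as attaching additional disk capacity" can be sketched as follows. `StoragePool` and its methods are illustrative names for this sketch, not a real product API:

```python
class StoragePool:
    """Aggregate physical devices into a single logical pool of capacity (in GB)."""

    def __init__(self):
        self.devices = {}   # device name -> capacity in GB
        self.allocated = 0  # GB already provisioned to volumes

    def attach(self, name, capacity_gb):
        # Growing the pool is just attaching another device; no new silo appears.
        self.devices[name] = capacity_gb

    @property
    def capacity(self):
        return sum(self.devices.values())

    def provision(self, size_gb):
        # Volumes draw from the shared pool rather than from one physical disk.
        if self.allocated + size_gb > self.capacity:
            raise ValueError("pool exhausted")
        self.allocated += size_gb
```

Because every volume draws from the shared pool, capacity stranded on one device is automatically available to any application, which is exactly what siloed per-device provisioning prevents.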

Piecemeal approach to management inhibits automation and optimization

It’s only when the storage is managed as a single, logical pool that effective automation and storage optimization can be implemented. In the absence of this, each storage silo would need to be automated and optimized as its own separate island of storage.

This kind of disconnected approach renders futile any efforts at load balancing, dynamic capacity management, or performance tuning. Optimization can be best implemented when administrators are able to work at a higher logical level of abstraction across the entire range of storage devices and physical storage pools.

This kind of piecemeal management is also inefficient and increases costs in many ways. It is slow, because administrators must handle each device or storage silo separately.

This approach also forces systems administrators to work at the lower device level, which demands more specialized skills, so the organization ends up needing more administrators with varied skill sets. The net result is that each administrator manages far fewer terabytes.

Separate domains hinder the provision of unified data services

The presence of a unified data services capability allows administrators to operate at higher levels of abstraction, where they can logically move storage across different vendors, arrays, and storage domains. Unified data services include:

  • Replication: The ability to move data synchronously or asynchronously to other capacity by using data-mirroring techniques.
  • Snapshots: A fast, partial backup of a large dataset, usually used for interim data protection.
  • Intelligent tiering: This uses policy-driven tools to automatically store data on the right storage device based on the required level of storage and protection, age, frequency of use, and other such attributes.
  • Cloning: A fast, partial backup of a large dataset, similar in function to a snapshot. It is generally used for interim data protection or for load balancing.
  • Mirroring: An exact copy of a dataset made on a block-by-block basis, either when the data is first written to disk (synchronous mirroring) or at a later time (asynchronous mirroring).
  • Thin provisioning: A form of storage provisioning in which the physical capacity actually allocated to an application is less than the amount provisioned logically, with the specific intent that more physical capacity can be allocated later as needed.
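The thin-provisioning accounting described above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`ThinVolume`, `write`), not any vendor's implementation:

```python
class ThinVolume:
    """A volume whose logical size is promised up front, while physical
    blocks are allocated only as data is actually written."""

    def __init__(self, logical_gb):
        self.logical_gb = logical_gb  # capacity the application was promised
        self.physical_gb = 0          # nothing is backed until a write occurs

    def write(self, gb):
        if self.physical_gb + gb > self.logical_gb:
            raise ValueError("write exceeds the logically provisioned size")
        # Physical capacity grows lazily, only by the amount actually written.
        self.physical_gb += gb
```

The gap between `logical_gb` and `physical_gb` is exactly the capacity the array can lend to other volumes until the data actually arrives.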

A well-implemented set of unified storage data services enables the IT division to reduce costs and to deliver capabilities such as migration, tiering, and cross-array consolidation quickly and effectively. Overall, this results in faster storage provisioning, better data protection, and more effective storage deployment.
