Working with several enterprise clients of late, it's become apparent that a not-so-subtle shift in emphasis is taking place regarding the tiering of storage. To understand this shift, it's helpful to review the recent history of and drivers behind tiered storage.
Fundamentally, the reason we even consider tiering is simple: cost — specifically, the opportunity for savings by placing less "valuable" information on lower-cost storage. Without diving down the whole ILM/data classification rabbit hole, suffice it to say that the three primary metrics influencing storage tiering strategy have generally been performance, availability, and recoverability. In recent years, performance has come to rank behind the other two, largely because most business computing performance requirements fell into a band that could be satisfied by a wide range of available storage platforms. The factor that correlated most closely with increased storage spending was the need for advanced functionality, such as replication. However, changes in both business demands and technology capabilities are causing a shift in priority.
From a business perspective, replication functionality, once reserved for only the most critical applications, has become a default requirement for a wider range of applications. Storage vendors have responded by adding or improving replication functionality, in some cases offering essentially the same capabilities across their entire product lines. While there is not yet 100% parity, it's safe to say that replication is no longer exclusively the domain of high-end storage systems.
Consider an environment where, from a service-level requirements perspective, the majority of applications are deemed to require sub-24-hour recovery. Does this mean that a tiered storage strategy isn't applicable and all data must land on expensive tier-1 storage? In such a situation, the key differentiator among application requirements is most likely to be performance. Vendors have spotted this trend and are addressing it in several ways, including "within the box" tiered storage. By offering choices ranging from slower, high-capacity SATA storage to high-I/O solid-state devices, with several options in between, it's possible to provide comparable recoverability services across the board while still enabling cost differentiation based on performance. Of course, deploying multiple tiers within a frame isn't the only way to differentiate performance-based service tiers. Performance is also influenced by factors such as the aggregation and allocation of bandwidth.
The demand for high application availability and recoverability is not limited to the enterprise. Increased disaster-recovery awareness has caused organizations of all sizes to reassess their capabilities and needs. Likewise, performance profiles are changing with broader adoption of newer classes of applications (e.g., streaming video) and newer application designs (e.g., service-oriented architectures).
Twenty years ago, storage needs were determined primarily by capacity and performance. Distinctions in data protection and availability grew with the rise of network-based storage systems. With data protection functionality now becoming ubiquitous, it looks like performance is once more back on top.
Jim Damoulakis is chief technology officer of GlassHouse Technologies, a leading provider of independent storage services. He can be reached at firstname.lastname@example.org.