Diaxion IT Strategy & Optimisation
+61 (0)2 9043 9200


If there’s one thing that is certain in IT, it’s change. A quick review of the industry’s history shows consistent improvements in capability, capacity, and power requirements for a given workload. The sub-area of storage is no exception: flash storage, introduced in the late 1980s, heralded one such improvement, and over the past five to ten years it has become increasingly important to the data centre, culminating in the release of flash storage arrays from a large swathe of vendors. To understand the potential significance of flash storage, it is first necessary to look at the role hard disks have played in data centre storage over the past thirty years.

A standard hard disk stores data on one or more spinning platters of magnetic media. Reading or writing data involves moving the magnetic head to the right radius (a seek delay) and waiting for the required data to rotate underneath the head (a rotational delay). How quickly data can be accessed is therefore a function of how scattered it is across the disk – how much seek and rotational delay is needed to pull it off the platter. High throughput is typically achieved by grouping disks together in a RAID set (RAID 1+0 for high-performance databases; RAID 5 or 6 when performance is less important), and frequently by using only the faster, outer parts of the drives – a technique known as short-stroking – to minimise the seek delay. All of this increases cost.
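The two delays above put a hard ceiling on a single disk's random I/O rate. A back-of-envelope model, using hypothetical figures for a 10,000 RPM enterprise drive (the seek time is an assumed value, not a quoted specification):

```python
# Rough model of average random access time for one hard disk.
avg_seek_ms = 4.0                       # assumed average seek delay for a 10k RPM drive
rpm = 10_000
avg_rotational_ms = (60_000 / rpm) / 2  # half a revolution on average = 3 ms

avg_access_ms = avg_seek_ms + avg_rotational_ms
max_random_iops = 1000 / avg_access_ms  # upper bound on small random IOPS per spindle

print(f"average access time: {avg_access_ms:.1f} ms")
print(f"about {max_random_iops:.0f} random IOPS per spindle")
```

Numbers of this order – a few hundred random IOPS per spindle at best – are why large RAID sets and short-stroking are needed to make spinning disks keep up with demanding workloads.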

Flash storage, on the other hand, requires no head movement at all, significantly reducing the latency of retrieving data. Its IOPS (Input/Output Operations Per Second) exceed those of hard disks by over three orders of magnitude (depending upon the flash product), making a great many applications – virtualisation and databases are the big ones – far more responsive. But this comes at a cost: flash chips can endure only a limited number of writes, so their lifespan is finite (although write endurance, and the techniques used to minimise the wear caused by writing data, have improved as the technology has matured). The per-gigabyte cost of flash is also significantly higher than that of hard drives, though this is mitigated to an extent by eliminating the need to leave capacity unused (short-stroking) in pursuit of performance.
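The write-endurance limit can be reasoned about with a simple estimate. All figures here are hypothetical (an assumed MLC-class endurance rating and an assumed workload), purely to show the shape of the calculation:

```python
# Back-of-envelope flash endurance estimate (all figures hypothetical).
capacity_gb = 400
pe_cycles = 3_000          # assumed program/erase cycles per cell (MLC-class)
write_amplification = 1.5  # assumed internal writes per host write

# Total host data the device can absorb before wear-out, in TB.
total_host_writes_tb = capacity_gb * pe_cycles / write_amplification / 1000

daily_writes_tb = 0.4      # assumed workload: 0.4 TB written per day
lifespan_years = total_host_writes_tb / daily_writes_tb / 365

print(f"about {total_host_writes_tb:.0f} TB of host writes before wear-out")
print(f"about {lifespan_years:.1f} years at {daily_writes_tb} TB/day")
```

Note how the write amplification factor directly scales down the usable lifespan – which is why array designs that avoid unnecessary backend writes matter so much.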

To date, most flash storage devices have emulated hard disks, allowing them to act as drop-in replacements (via SATA for home systems, or PCIe for higher-performance configurations). Such configurations do relatively little to compensate for the weaknesses of flash – the write limitations and the high cost per gigabyte. This is where flash-specific designs, such as XtremIO, come into play.

XtremIO’s design is geared towards high throughput while minimising unnecessary writes to the flash backend. It achieves this by deduplicating data on the fly: the system’s RAM holds a cache of hashes (computed with the SHA-1 algorithm) representing the data already stored, allowing writes of redundant data to be intercepted before they ever reach the backend. This, in turn, reduces wear on the underlying flash. The approach comes at a cost, however, because the XtremIO device also keeps in memory the metadata chains describing how stored blocks assemble into the data seen by the client. It is therefore critical that an XtremIO device be reliably powered – generally meaning connected to an uninterruptible power supply, with a signalling connection to inform the system if mains power is lost – so that it can write its internal state to non-volatile storage in the event of a power disruption.
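The dedup mechanism described above can be sketched in a few lines. This is a minimal, hypothetical model of inline content-addressed deduplication – not XtremIO’s actual implementation – showing how a hash index lets redundant writes be intercepted before they touch the backend:

```python
import hashlib

class DedupStore:
    """Sketch of inline deduplication: an in-RAM SHA-1 index in front of a backend."""

    def __init__(self):
        self.hash_index = {}    # SHA-1 digest -> backend location (held in RAM)
        self.backend = []       # stand-in for the flash backend
        self.backend_writes = 0

    def write_block(self, data: bytes) -> str:
        digest = hashlib.sha1(data).hexdigest()
        if digest not in self.hash_index:
            # New content: write it to the backend and index it.
            self.hash_index[digest] = len(self.backend)
            self.backend.append(data)
            self.backend_writes += 1
        # Duplicate content: no backend write, just return the reference.
        return digest

    def read_block(self, digest: str) -> bytes:
        return self.backend[self.hash_index[digest]]

store = DedupStore()
d1 = store.write_block(b"block A")
d2 = store.write_block(b"block B")
d3 = store.write_block(b"block A")   # duplicate: intercepted before the backend
print(store.backend_writes)          # 2 backend writes for 3 host writes
```

The hash index and the block-to-digest mappings live in memory, which is precisely why such a design depends on reliable power to flush that state safely on shutdown.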

One consequence of this design is that cloning data – as might be done for point-in-time snapshots of virtual machine state, or for copying database data from production to development or staging systems for testing – is incredibly quick: the controller simply makes a copy of the metadata chain and presents that to the required host, rather than physically copying the data. The system is also designed for solid scale-out: double the number of X-Bricks (XtremIO’s building-block units) in the cluster, and throughput doubles.
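Metadata-only cloning follows directly from that design. In this simplified, hypothetical model (not the actual XtremIO metadata format), a volume is just an ordered chain of block digests, so a clone is a copy of the chain, never of the blocks themselves:

```python
# A volume modelled as an ordered chain of block digests (hypothetical model).
def clone_volume(metadata_chain):
    # Copying the chain of references is O(metadata), not O(data):
    # no block on the flash backend is read or written.
    return list(metadata_chain)

production = ["digest-a", "digest-b", "digest-c"]  # placeholder digests
dev_copy = clone_volume(production)

print(dev_copy == production)   # same logical contents...
print(dev_copy is production)   # ...but an independent metadata chain
```

Because the clone's cost is proportional to the metadata rather than the data, presenting a multi-terabyte copy to a development host takes effectively the same time as a small one.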

Overall, the release, growth, and increasing maturity of flash storage for the data centre promise to ease the pain of managing storage performance well into the future. Where the market will go is anybody’s guess, but based upon initial indications, flash’s place in the storage hierarchy is well and truly assured.