Wednesday, December 23, 2015

SSD, 3D Vertical NAND, or 3D XPoint?

As flash memory evolves, new issues are found, and as time progresses they get resolved. These problems cannot be identified in advance of building higher-density storage; you need to implement the higher density before you can discover the problems that come with it. The SSD problems mentioned in the article below will be addressed by improved hardware and software, and SSDs will remain in use for a while.

3D Vertical NAND faces a similar set of issues and needs a more sophisticated controller to exploit its higher density (see the November 2012 post, "3D NAND flash is coming"). In the same way, Intel's 3D XPoint will face its own set of challenges when it is finally introduced next year.


Ron
Insightful, timely, and accurate semiconductor consulting.
Semiconductor information and news at - http://www.maltiel-consulting.com/


Was 2015 the beginning of the end for SSDs?


The advent of SSDs has arguably done more to transform the experience of using a computer than any other event in the past eight years. Faster GPUs and CPUs benefit the high-end users who need such horsepower, but solid-state drives can breathe new performance into virtually any hardware. The earliest drives may have had performance issues, but once those were ironed out, it was clear that NAND flash’s ascension to the top of the storage market was a question of when, not if, and the “when” depended on questions of reliability and cost-per-bit, not fundamental speed. That argument has been accepted at every level, from individual PCs to high-end enterprise deployments.
That’s why it’s surprising to see ZDNet instead arguing that 2015 was the “beginning of the end” for NAND flash in the enterprise. The argument hinges on a number of papers published in 2015 concerning NAND reliability, performance, and suitability for datacenter applications. We covered some of these findings when the papers were new, but will summarize them collectively here:
  • Facebook and Carnegie Mellon found that higher temperatures can negatively impact SSD reliability, and that this correlates with higher bus power consumption as well. Interestingly, this study found that failure rates did not increase monotonically (i.e., steadily) with the amount of data written to NAND flash, that sparse data layouts and dense data layouts can both increase failure rates under certain conditions, and that SSDs that don’t throttle and experience high temperatures have higher failure rates.
  • A major Korean study on VM performance found that SSD garbage collection didn’t mesh well with the existing algorithms used for that purpose, leading to significant performance degradation in some cases. The paper concluded that it is currently impossible to guarantee a set number of IOPS when multiple VMs are hosted on a single drive (a toy simulation of this effect appears after the list). While the paper used consumer hardware, the flaws it found in how garbage collection is handled would apply to enterprise equipment as well.
  • A new SanDisk study found that the use of multiple layers of log-structured applications “affects sequentiality and increases write pressure to flash devices through randomization of workloads, unaligned segment sizes, and uncoordinated multi-log garbage collection. All of these effects can combine to negate the intended positive effects of using a log.” A short write-amplification sketch after this list illustrates the stacking effect.
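To make the IOPS point concrete, below is a minimal toy simulation, my own illustration rather than the study’s methodology, with invented drive parameters (DRIVE_IOPS_PER_MS, GC_TRIGGER_PER_WRITE, and GC_STALL_MS are all hypothetical). A fair scheduler splits the drive’s budget evenly between two VMs, but garbage collection triggered by one tenant’s writes stalls the whole device, so the other tenant’s achieved IOPS depends on a neighbor it cannot see:

    import random

    random.seed(1)

    DRIVE_IOPS_PER_MS = 20        # hypothetical raw budget: 20,000 IOPS
    GC_TRIGGER_PER_WRITE = 0.002  # hypothetical chance one write forces GC
    GC_STALL_MS = 5               # hypothetical blocking GC pause, in ms
    SIM_MS = 10_000               # simulate ten seconds

    def simulate(vm_a_write_fraction):
        served = {"vm_a": 0, "vm_b": 0}
        stall = 0
        for _ in range(SIM_MS):
            if stall:
                stall -= 1        # GC blocks the whole device: both VMs wait
                continue
            half = DRIVE_IOPS_PER_MS // 2
            served["vm_a"] += half               # a fair split of the budget...
            served["vm_b"] += half
            writes = half * vm_a_write_fraction  # ...but only VM A writes
            if random.random() < GC_TRIGGER_PER_WRITE * writes:
                stall = GC_STALL_MS
        return {vm: ops // (SIM_MS // 1000) for vm, ops in served.items()}

    for frac in (0.0, 0.5, 1.0):
        print(f"VM A write fraction {frac:.1f}: {simulate(frac)} IOPS per VM")

Even in this crude model, VM B loses roughly a tenth of its throughput whenever VM A writes heavily, which is exactly why a fixed per-VM IOPS guarantee is hard to make on shared flash.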
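The “log-on-log” stacking problem is easiest to see as arithmetic. The sketch below, again my own illustration with invented per-layer factors, shows how write amplification compounds when each layer of the storage stack keeps its own log and runs its own uncoordinated garbage collection:

    # Hypothetical per-layer write-amplification factors; real values depend
    # on workload, cleaning policy, and free-space headroom.
    LAYER_WA = {
        "application log (e.g. an LSM-tree store)": 1.5,
        "log-structured filesystem": 1.3,
        "FTL inside the SSD": 1.4,
    }

    total = 1.0
    for layer, wa in LAYER_WA.items():
        total *= wa
        print(f"after {layer}: {total:.2f}x writes reach the layer below")

One logical application write becomes roughly 2.7x physical flash writes in this example, and, as the study notes, each layer’s garbage collector also reshuffles data that the layers above had laid out sequentially.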
When you put these reports together, they point to issues with SSD reliability, performance, and suitability for certain workloads. But I’m much less certain than ZDNet that this adds up to NAND’s rapid retreat from the data center.

Teething problems vs. cataclysmic deficiencies

I strongly suspect that if we could rewind the clock to the beginning of the HDD era, we’d see similar comments made about the suitability of hard drives to replace tape. In the 1970s and early 1980s, tape was the proven technology, and HDDs, particularly HDDs in consumer systems, were the upstart newcomers. It’s difficult to find comparative costs (and they are highly segment-dependent), but the March 4, 1985 issue of Computerworld suggests that tape drives were far cheaper than their HDD equivalents.
The advent of 3D NAND flash has the potential to improve NAND reliability
I don’t want to stretch this analogy too far, but I think there’s a lesson here. The pace of hardware innovation is always faster than that of the software that follows it; you can’t write software to take advantage of hardware that doesn’t exist yet (at least, not very well). It’s not surprising that it has taken years to suss out some of the nuances of SSD use in the enterprise, and it’s also not surprising to discover that there are distinct best practices that need to be implemented for SSDs to perform optimally.
To cite one equivalent example: it was a 2005 paper (backed up by an amusing 2009 video) that demonstrated how shouting at hard drives could literally make them stop working. While drive OEMs were obviously aware of the need to dampen vibrations in enterprise deployments long before then, the issue bubbled up to consumer awareness in that timeframe.
Hard drives, nevertheless, continue to be sold in large numbers — even in enterprise deployments.
None of this is to suggest that NAND flash is foolproof or that it does not need a medium-to-long-term replacement; I’ve covered several such potential replacements just this year. It does, however, suggest that a bit more perspective is in order. It’s easy to promise huge gains on paper and extremely difficult to deliver those gains in a scalable, cost-effective manner.
Right now, it looks as though 3D NAND adoption will drive the further evolution of SSD technology for the next 3-5 years. That, in turn, will make it more difficult for alternative technologies to find footing: a replacement storage solution will need to match the improving density of 3D NAND, or offer multiple orders of magnitude better performance, in order to disrupt the NAND industry. Intel’s joint venture with Micron and their 3D XPoint memory could disrupt the status quo when it arrives next year, but I’ll wait for benchmarks and hard data before concluding that it will.
Far from being the beginning of the end, I suspect 2015 was the end of the beginning of NAND flash, and will mark a shift towards software-level optimization and a better understanding of best practices as the technology moves deeper into data centers.
