Tuesday, July 30, 2013

Server Storage Latency Cut 85% Without PCI Express

The article below discusses the latest advance in flash storage: changing the system architecture and the way flash storage is integrated. The potential for changes in flash storage architecture was already discussed here in May 2012.

 "Diablo’s Memory Channel Storage (MCS) architecture, expected to show up in servers shipping later this year, allows flash storage components to plug into the super-fast channel now used to connect CPUs with memory. That will slash data-access delays even more than current flash caching products that use the PCI Express bus...

Diablo estimates that MCS can reduce latencies by more than 85 percent compared with PCI Express SSDs (solid-state disks)...

The connection is designed to be used by many DIMMs (dual in-line memory modules) in parallel, so each component doesn’t have to relinquish the bus for another one to use it. That saves time, as well as CPU cycles that would otherwise be used managing the bus"


Ron
Insightful, timely, and accurate semiconductor consulting.
Semiconductor information and news at - http://www.maltiel-consulting.com/



Flash breakthrough promises faster storage, terabytes of memory

In the ongoing quest for faster access to data, Diablo Technologies has taken what could be a significant next step.
Diablo’s Memory Channel Storage (MCS) architecture, expected to show up in servers shipping later this year, allows flash storage components to plug into the super-fast channel now used to connect CPUs with memory. That will slash data-access delays even more than current flash caching products that use the PCI Express bus, according to Kevin Wagner, Diablo’s vice president of marketing.
The speed gains could be dramatic, according to Diablo, helping to give applications such as databases, big data analytics and virtual desktops much faster access to the data they need most. Diablo estimates that MCS can reduce latencies by more than 85 percent compared with PCI Express SSDs (solid-state disks). Alternatively, the flash components could be used as memory, making it affordable to equip servers with terabytes of memory, Wagner said.
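The 85 percent figure is relative, so the absolute numbers depend on the PCIe baseline. As a back-of-the-envelope sketch (the PCIe SSD latency below is an assumed illustrative figure, not one quoted in the article; only the 85 percent reduction comes from Diablo's estimate):

```python
# Back-of-the-envelope latency comparison. Only the 85% reduction is
# from Diablo's estimate; the PCIe SSD baseline is an assumed figure
# used purely for illustration.
pcie_ssd_latency_us = 50.0   # hypothetical PCIe SSD access latency, microseconds
mcs_reduction = 0.85         # Diablo's claimed latency reduction vs. PCIe SSDs

mcs_latency_us = pcie_ssd_latency_us * (1 - mcs_reduction)
print(f"Assumed PCIe SSD latency: {pcie_ssd_latency_us} us")
print(f"Implied MCS latency: {mcs_latency_us:.1f} us")  # 7.5 us under these assumptions
```

Under these assumptions a storage access drops from tens of microseconds to single digits, which is the kind of gap that matters for latency-sensitive workloads like transaction processing.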
Other than on-chip cache, the memory channel is the fastest route to a CPU, Wagner said. Not only do bits fly faster over this link, there are also no bottlenecks under heavy use. The connection is designed to be used by many DIMMs (dual in-line memory modules) in parallel, so each component doesn’t have to relinquish the bus for another one to use it. That saves time, as well as CPU cycles that would otherwise be used managing the bus, Wagner said.
The parallel design of the memory bus also lets system makers scale up the amount of flash in a server without worrying about diminishing returns, he said. A second MCS flash card will truly double performance, where an added PCIe SSD could not, Wagner said.
Diablo, which has been selling memory controllers for about 10 years, has figured out a way to use the standard DDR-3 interface and protocols to connect flash instead of RAM to a server’s CPU. Flash is far less expensive than RAM and also more compact. The MCS components, which come in 200GB and 400GB sizes, will fit into standard DIMM slots that typically accommodate just 32GB or so of memory. The only adaptation manufacturers will need to make is adding a few lines of code to the BIOS, Wagner said.
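The density gap can be put in concrete terms using the capacities quoted above, a 200GB or 400GB MCS module versus a typical 32GB DRAM DIMM in the same physical slot:

```python
# Capacity-per-slot comparison using the figures quoted in the article.
dimm_dram_gb = 32            # typical DRAM DIMM capacity cited in the article
mcs_module_gb = [200, 400]   # MCS flash module sizes cited in the article

for cap in mcs_module_gb:
    ratio = cap / dimm_dram_gb
    print(f"A {cap}GB MCS module holds {ratio:.2f}x the capacity of a {dimm_dram_gb}GB DIMM")
```

That works out to roughly 6x to 12x more capacity per slot, which is what makes the "terabytes of memory per server" scenario plausible.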
Enterprises are more likely to use MCS as high-capacity memory than as low-latency storage, said analyst Jim Handy of Objective Analysis.
“Having more RAM is something that a lot of people are going to get very excited about,” Handy said. His user surveys show most IT departments automatically get as much RAM as they can for their servers, because memory is where they can get the fastest access to data, Handy said.
“Basically, you’d like everything to be in the RAM,” Handy said. Virtualized data centers, where many servers need to share a large set of data, need a shared store of data. But in other applications, especially with databases and online transaction processing, storage is just a cheaper and more plentiful—but slower—alternative to memory. “Everything that’s on the storage is there just because it can’t fit on the RAM,” he said.
To implement the MCS architecture, Diablo developed software and a custom ASIC (application-specific integrated circuit), which it will sell to component vendors and makers of servers and storage platforms. Flash vendor Smart Storage Systems, which earlier this month agreed to be acquired by SanDisk, will be among the companies using the MCS technology, Wagner said. In addition, a tier-one server vendor is preparing about a dozen server models with the technology and will probably ship the first of them this year, Wagner said.
For the most part, Diablo doesn’t expect consumers or small enterprises to install MCS flash on their own computers. However, Diablo may work directly with enterprises that have very large data centers they want to accelerate, Wagner said.
Using MCS flash to supplement DRAM would not only dramatically reduce the per-gigabyte cost of memory but also allow for further consolidation of the servers in a data center, Wagner said. A large social networking company with 25,000 servers analyzed the MCS technology and said it would make it possible to do the same amount of work with just 5,000 servers.
That’s because the current DRAM-only servers can be equipped with just 144GB of memory, but MCS would allow each server to have 16GB of DRAM and 800GB of flash. With that much memory, each server can do more work so fewer are needed, Wagner said. Fewer servers would mean savings of space and energy, which would translate into lower costs, he said.
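The consolidation claim is roughly consistent with the numbers given. A quick sketch checking the arithmetic, treating DRAM and MCS flash as interchangeable capacity (an oversimplification, since flash is slower than DRAM, but it is the article's own framing):

```python
# Sanity check of the consolidation example, using only figures from the article.
servers_today = 25000
dram_only_gb = 144                   # memory per server, DRAM-only configuration
mcs_dram_gb, mcs_flash_gb = 16, 800  # per-server DRAM + MCS flash configuration

total_memory_gb = servers_today * dram_only_gb       # aggregate capacity to preserve
per_server_mcs_gb = mcs_dram_gb + mcs_flash_gb       # capacity of one MCS-equipped server
servers_needed = total_memory_gb / per_server_mcs_gb
print(f"Aggregate capacity today: {total_memory_gb} GB")
print(f"Servers needed with MCS:  {servers_needed:.0f}")  # ~4,412, in line with the ~5,000 cited
```

On a pure capacity basis the fleet shrinks to about 4,400 servers, so the 5,000-server figure cited above leaves some headroom.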
