
RAMCloud: The Idea of Storing All Data in RAM

Source: Stanford University | 3 comments

Researchers at Stanford University have an idea of how to overcome the latency and performance bottlenecks of hard-drive- and solid-state-disk-based storage systems.

According to a recently published paper, they believe that a RAM-based cloud system with about 1,000 servers and a total RAM capacity of 64 TB could be built today for about $4 million.

The researchers said a RAMCloud could deliver 100-1000x lower latency and 100-1000x greater throughput than a disk-based system. Replication and backup techniques would address the volatility of DRAM and the data loss that would otherwise occur if the power supply is interrupted. The approach would provide enough performance for cloud systems to solve scalability issues for web applications, enable "a new class of data-intensive applications" thanks to the extremely low latency of RAM, and give small applications a growth path to scale into large applications on demand.

The researchers estimate that latencies of only 5 to 10 microseconds should be achievable in a RAMCloud system, roughly 1000x faster than the 5-10 milliseconds that disk-based systems provide for data accessed over a network. They also estimate that a single multi-core RAM server could support at least 1,000,000 small requests per second, while disk-based systems typically max out at 1,000 to 10,000 requests per second.

Cost is the barrier to broad use of such RAMClouds. However, the scientists noted that "the cost of DRAM today is roughly the same as the cost of disk 10 years ago ($10-30/GB)", which, of course, does not help much considering today's massive storage space requirements.

This thread is closed for comments
  • santfu, 12 November 2011 04:10
    This always struck me as the ultimate way to do storage. Every other way is a compromise.
  • tulx, 12 November 2011 04:20
    Wait - aren't "in-memory computing" servers already doing this?
  • Vampyrbyte, 13 November 2011 00:18
    If we could, we would all have enough L1 cache on die to cache every piece of data that ever has and ever will exist. However, this is impossible, so we have to use cheaper storage media to store larger amounts of data, and the result is a cost/performance compromise.

    This isn't news, it's GCSE Computing.