Steve Harris has been commenting on DZone about my last post, “BigMemory: Heap Envy.” One of his comments linked to a blog post of his, “Direct Buffer Access Is Slow, Really?,” in which he says that direct access is not slow, and that one of my points was therefore invalid.
Well, folks, he’s right, for all intents and purposes. It doesn’t change my conclusions about BigMemory (it’s still for people who aren’t willing to, you know, tell the JVM how it’s supposed to manage memory), but direct access is not as slow as I first supposed.
I believed their own documentation, and my memory exaggerated the effect of the context changes.
See, the Terracotta documentation of BigMemory, in the Storage Hierarchy section, has this quote:
OffHeapStore – fast (one order of magnitude slower than MemoryStore) storage of Serialized objects off heap.
Further, in the introduction of BigMemory (again, on Ehcache.org), you find this:
Serialization and deserialization take place on putting and getting from the store. This means that the off-heap store is slower in an absolute sense (around 10 times slower than the MemoryStore)
Like an idiot, I took “one order of magnitude” and “10 times slower than the MemoryStore” at face value and thought “ouch.” Looking at Steve’s measurements and my own, direct access is slower, but not by much, and your allocation patterns have a real effect on the speed.
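If you want to see the gap (or lack of one) for yourself, here’s a minimal sketch along the lines of what Steve and I measured. The class name, buffer size, and access pattern here are mine rather than taken from either of our benchmarks, and the numbers will move around with how you read and write the buffer:

```java
import java.nio.ByteBuffer;

/** Rough micro-benchmark: sequential int writes and reads against a heap vs. a direct ByteBuffer. */
public class DirectBufferBench {
    private static final int BYTES = 64 * 1024 * 1024; // 64 MB

    static long exercise(ByteBuffer buf) {
        long start = System.nanoTime();
        for (int i = 0; i < BYTES; i += 4) {
            buf.putInt(i, i);            // absolute puts, no position bookkeeping
        }
        long sum = 0;
        for (int i = 0; i < BYTES; i += 4) {
            sum += buf.getInt(i);
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.printf("%s: %d ms (sum=%d)%n",
                buf.isDirect() ? "direct" : "heap", elapsedMs, sum);
        return sum;
    }

    public static void main(String[] args) {
        ByteBuffer heap = ByteBuffer.allocate(BYTES);
        ByteBuffer direct = ByteBuffer.allocateDirect(BYTES);
        for (int warmup = 0; warmup < 5; warmup++) { // let the JIT settle before trusting the numbers
            exercise(heap);
            exercise(direct);
        }
    }
}
```

Whatever numbers you get on your hardware, raw buffer access isn’t where the documented slowdown comes from; by their own description, that’s the serialization on put and get.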
Therefore, I don’t think it’s fair to point to direct buffer access as a problem.
That said, I haven’t seen anything that mitigates the cost of serialization, which was the primary point I was trying to make; off-heap access time wasn’t crucial.
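To make that cost concrete: every put into an off-heap store has to turn the value into bytes, and every get has to turn the bytes back into an object. Here’s a rough sketch using plain java.io serialization; Ehcache’s actual serialization machinery may well be faster than this, but the per-operation work doesn’t go away:

```java
import java.io.*;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrates the per-put/per-get serialization tax a serialized (off-heap) store pays. */
public class SerializationCost {
    static byte[] serialize(Serializable value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        }
        return bytes.toByteArray();
    }

    static Object deserialize(byte[] data) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10; i++) sb.append("some reasonably sized cached value ");
        String value = sb.toString();

        ConcurrentHashMap<Integer, String> onHeap = new ConcurrentHashMap<>();
        ConcurrentHashMap<Integer, byte[]> serializedStore = new ConcurrentHashMap<>();

        long t0 = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            onHeap.put(i, value);                     // reference copy only
            onHeap.get(i);
        }
        long onHeapMs = (System.nanoTime() - t0) / 1_000_000;

        long t1 = System.nanoTime();
        for (int i = 0; i < 100_000; i++) {
            serializedStore.put(i, serialize(value)); // full object graph walk on put
            deserialize(serializedStore.get(i));      // and again on get
        }
        long serializedMs = (System.nanoTime() - t1) / 1_000_000;

        System.out.printf("on-heap: %d ms, serialized: %d ms%n", onHeapMs, serializedMs);
    }
}
```

That’s the tax I was objecting to, and it gets paid on every access no matter how fast the underlying buffer is.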
So I’m glad Steve has been posting about this; it’s challenged the assumptions I gathered from reading their pages on BigMemory.
It hasn’t changed my initial analysis: it’s an idea that others have used, and discarded, because it hasn’t proven necessary.
Why don’t I value BigMemory? Because I ran their test and got better response times and latency with the same data set sizes and JVM heap sizes, using a basic ConcurrentHashMap.
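By “basic ConcurrentHashMap” I mean nothing more elaborate than something like this (the class and type parameters here are illustrative, not the actual test harness):

```java
import java.util.concurrent.ConcurrentHashMap;

/** The entire "competing cache": an on-heap map, no serialization, no off-heap copy. */
public class PlainMapCache<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    public void put(K key, V value) { map.put(key, value); }
    public V get(K key)             { return map.get(key); }
    public int size()               { return map.size(); }

    public static void main(String[] args) {
        PlainMapCache<Integer, String> cache = new PlainMapCache<>();
        cache.put(42, "payload");
        System.out.println(cache.get(42)); // prints "payload"
    }
}
```

No serialization, no off-heap copy, just references on the heap, which is exactly why it has less work to do per operation.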
If you have requirements that specify Ehcache with giant cache sizes, and you can’t tune your JVM through lack of access or knowledge or whatever, BigMemory might help you some. (I can’t help you if you can’t touch your JVM startup parameters, but as for the knowledge part, there’s this intarweb thing I’ve heard about…)
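And for what it’s worth, the tuning I mean is not exotic: size the heap and pick a collector. Something along these lines, where the values are made up for illustration and obviously depend on your data set and workload:

```
java -Xms2g -Xmx2g -XX:+UseConcMarkSweepGC -XX:+UseCompressedOops -jar your-app.jar
```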
I still think a side cache is a signal that your data access mechanism is too slow for your uses, and you should try to fix that rather than adding extra parts.
Author’s Note: Reposted as a clarification for “Repost: BigMemory: Heap Envy.”