Wednesday, August 22, 2007

Extreme Hibernate Performance - Delivered

Ok, I know, the title of this blog post is a bit sensational, but hang on - take a look at the results. See if you don't agree.


Operation   Type                           Results
Update      Hibernate                      ~ 1,000 ops/sec
Update      Hibernate + 2nd Level Cache    ~ 1,800 ops/sec
Update      Terracotta                     ~ 7,000 ops/sec

Operation   Type                           Results
Read        Hibernate                      ~ 1,000 ops/sec
Read        Hibernate + 2nd Level Cache    ~ 1,800 ops/sec
Read        Terracotta                     ~ 500,000 ops/sec
Yeah, that's not a typo. 500,000 read ops / sec. So how did that happen? That's the topic of a Webinar we just did, so I'll sum up the highlights, and then give you some pointers to get more info.

Hibernate Performance Strategies


Coming from a straight JDBC app, here's what you can do to improve performance:

  1. Plain JDBC

  2. Hibernate

  3. Hibernate + 2nd Level Cache

  4. Detached POJOs

As you walk the sequence of steps, you get better and better performance. But at what cost? The last two options are very problematic in a clustered environment: losing a server means losing data, and that is not an acceptable tradeoff.
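For reference, step 3 is mostly a matter of configuration. Here's a rough sketch of what enabling the second-level cache looked like in Hibernate 3 with EHCache (the entity name com.example.Data is just a placeholder):

```xml
<!-- hibernate.cfg.xml: turn on the second-level cache, backed by EHCache -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.provider_class">org.hibernate.cache.EhCacheProvider</property>

<!-- In the entity's .hbm.xml mapping, mark the class as cacheable -->
<class name="com.example.Data" table="DATA">
    <cache usage="read-write"/>
    <id name="id" column="ID"/>
    <property name="value" column="VALUE"/>
</class>
```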

Enter Terracotta


But, hang on, what if you could write the Hibernate POJOs to a durable memory store - and maintain high performance? That's exactly where Terracotta steps in.

Leveraging the power of POJOs, the combination of Hibernate and Terracotta together means your application can get some really eye-popping performance results.
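To give a feel for what "leveraging the power of POJOs" means in practice, here's a minimal sketch. The class below is plain Java; with Terracotta DSO, the map would be declared a shared "root" in tc-config.xml (not shown), at which point every JVM in the cluster sees the same instance and the synchronized blocks act as cluster-wide locks. The names here are hypothetical, not from the Webinar code:

```java
import java.util.HashMap;
import java.util.Map;

public class PojoStore {
    // Under Terracotta, this field would be declared a root in tc-config.xml,
    // making the map (and the POJOs it holds) durable and shared cluster-wide.
    private static final Map<String, Object> store = new HashMap<String, Object>();

    public static void put(String key, Object value) {
        synchronized (store) {   // a cluster-wide lock when run under Terracotta
            store.put(key, value);
        }
    }

    public static Object get(String key) {
        synchronized (store) {
            return store.get(key);
        }
    }
}
```

The point is that the application code stays ordinary Java; the clustering and durability live entirely in configuration.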

More Information


All of the resources from the Webinar are available at the Terracotta site. I invite you to run the demo and examine the code yourself.

Tuesday, August 21, 2007

Read / Write Lock Syntactic Sugar?

I'm sure this must have been discussed already, but a search of Google and the JCP turned up nothing.

It occurred to me, after reviewing the documentation for the Java 1.5 ReentrantReadWriteLock from the java.util.concurrent package, that it could benefit from some syntactic sugar a la the for loop.

If you browse the Javadocs for ReentrantReadWriteLock, you'll find that the suggested idiom is to use a try/finally block, like so:

public Data get(String key) {
    r.lock();
    try {
        return m.get(key);
    } finally {
        r.unlock();
    }
}
What if you could write something like:
public Data get(String key) {
    lock (rwl.readLock()) {
        return m.get(key);
    }
}
instead?
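For completeness, here is the try/finally idiom fleshed out into a compilable class, following the field names (r, w, m) used in the Javadoc example. I've made the value type String so the class is self-contained:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedData {
    private final Map<String, String> m = new HashMap<String, String>();
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private final Lock r = rwl.readLock();   // shared: many readers at once
    private final Lock w = rwl.writeLock();  // exclusive: blocks readers and writers

    public String get(String key) {
        r.lock();
        try {
            return m.get(key);
        } finally {
            r.unlock();   // always released, even if get() throws
        }
    }

    public String put(String key, String value) {
        w.lock();
        try {
            return m.put(key, value);
        } finally {
            w.unlock();
        }
    }
}
```

Every accessor repeats the same lock/try/finally scaffolding, which is exactly the boilerplate the hypothetical lock statement would eliminate.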

I asked this question around the water-cooler (so to speak) at Terracotta and got some good responses. The most convincing was that maybe it's to ensure read/write locks look different enough from a traditional synchronized block that it's obvious to the reader something different is going on.

This seems a reasonable argument, but I don't completely buy it. I personally think several things about Java led to its eventual success:

  1. Simplicity - embodied in things like GC, single inheritance, lack of operator overloading, etc.

  2. Built-in thread primitives and synchronization. C++ finally got these with POSIX threads, but it's never been as easy, IMO, as in Java

  3. Ubiquity. Sun's mantra "Write once, run anywhere" is a great philosophy - even if it isn't 100% true, it's pretty close


So therefore I think we deserve some (more) simplicity.

Do I think the first idiom has its place? Sure. It provides for composable synchronization, which is important if you want to decouple your synchronization from your call stack - something Java's synchronized keyword doesn't give you out of the box (but of course, you can always implement it yourself using the synchronization primitives).
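A quick sketch of what "decoupled from the call stack" buys you. With an explicit Lock object, acquisition and release don't have to live in the same method, which a synchronized block can't express (this is an illustrative example, not production code - callers would need the usual try/finally discipline around begin/end):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Transfer {
    private final ReentrantLock lock = new ReentrantLock();

    // Acquire the lock in one method...
    public void begin() {
        lock.lock();
    }

    // ...and release it in a completely different one. A synchronized
    // block must open and close in the same lexical scope, so this
    // pattern is only possible with explicit Lock objects.
    public void end() {
        lock.unlock();
    }

    public boolean inProgress() {
        return lock.isLocked();
    }
}
```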

Well, if you have seen this before, let me know in the comments. Or let me know what you think of this syntax.