Industry-standard benchmarks

The industry and the academic community continuously strive to provide generic benchmarks that emulate many common programmatic problems for Java. This is, of course, of interest to JVM vendors and hardware vendors (both to make sure that the JVM itself performs well and for marketing purposes). However, standardized benchmarks often also provide insights into tuning the JVM for a specific problem. It is worth having a look at some of the things that the industry tries to benchmark in order to understand how Java can be applied to different problem domains.

Naturally, standard benchmarks exist for anything and everything. Many software stacks are subject to performance measurement standardization, with organizations releasing benchmarks for everything from application servers down to network libraries. Finding and using industry-standard benchmarks is a relevant exercise for the modern Java developer.

Note

This section is written from a somewhat JVM-centric perspective. We are JVM developers, and it has been in our interest to make sure that the JVM performs as well as possible on many configurations. So, we have chosen to highlight many of the standard benchmarks that JVM vendors tend to use. However, a lot of the effort we have spent optimizing the JVM has had a very real impact on all kinds of Java applications. This is exactly the point that this chapter has tried to make so far: optimizing for good benchmarks that accurately represent real-world applications leads to real-world application performance.

Some of the benchmarks we mention, such as SPECjAppServer, also work well as generic benchmarks for larger software stacks.

The SPEC benchmarks

The Standard Performance Evaluation Corporation (SPEC) is a non-profit organization that strives to maintain and develop a set of benchmarks that can be used to measure performance in several runtime environments on modern architectures. This section quickly introduces the most well-known SPEC benchmarks that are relevant to Java.

None of the SPEC benchmarks mentioned in this section are available for free, with the sole exception of SPECjvm2008.

The SPECjvm suite

SPECjvm, whose first incarnation was released in 1998 as SPECjvm98 (now retired), was designed to measure the performance of a JVM/Java Runtime Environment. The original SPECjvm98 suite consisted almost exclusively of single-threaded, CPU-bound benchmarks, which said something about the quality of the code optimizations in a JVM, but little else. Within a few years, its object working sets were deemed too small for modern JVMs. SPECjvm98 contained simple problem sets such as compression, mp3 decoding, and measuring the performance of the javac compiler.

The current version of SPECjvm, SPECjvm2008, is a modification of the original SPECjvm98 suite that includes several new benchmarks and updated (larger) workloads, and it also factors in multi-core aspects. Furthermore, it tests out-of-the-box JVM behavior such as startup time and lock performance.

Several real-life applications have recently been added to SPECjvm, such as the Java database Derby, XML processors, and cryptography frameworks. Larger emphasis than before has been placed on warm-up rounds and on measuring performance from a steady state.

The venerable scientific computing benchmark SciMark has also been integrated into SPECjvm2008. The original standalone version of SciMark suffered from the on-stack replacement problem: the main function contained the main work loop for each individual benchmark, which penalized, for example, the JRockit JVM and made it hard to compare its results with those of other virtual machines. This has been fixed in the SPECjvm2008 implementation.
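To make the problem concrete, here is a minimal, hypothetical Java sketch (not actual SciMark or SPECjvm2008 code) of how moving a hot loop out of the entry method lets the JIT optimize it through the normal compilation path, without relying on on-stack replacement:

// Hypothetical illustration of the on-stack replacement (OSR) issue described
// above -- not actual SciMark or SPECjvm2008 code. When the hot loop sits
// directly in main(), a JVM without (or with limited) OSR support can only
// optimize it by swapping code under a running frame. Moving the loop body
// into a method that is called repeatedly lets the JIT compile and optimize
// it through the normal method-invocation path after a few calls.
public class KernelHarness {

    // Hot kernel extracted into its own method: eligible for ordinary JIT
    // compilation and full optimization once it has been invoked a few times.
    static double runKernel(double[] data) {
        double sum = 0;
        for (double d : data) {
            sum += d * d;
        }
        return sum;
    }

    public static void main(String[] args) {
        double[] data = new double[10_000];
        java.util.Arrays.fill(data, 1.5);

        double total = 0;
        for (int i = 0; i < 10_000; i++) {
            total += runKernel(data);  // repeated calls, not one monolithic loop in main()
        }
        System.out.println(total);
    }
}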

The SPECjAppServer / SPECjEnterprise2010 suite

SPECjAppServer is a rather complex benchmark, and a rather good one, though hard to set up. It started its life as ECPerf and has gone through several generations: SPECjAppServer2001, SPECjAppServer2002, and SPECjAppServer2004. The latest version of this benchmark has changed names to SPECjEnterprise2010, but the basic benchmark setup is the same.

The idea behind this benchmark is to exercise as much of the underlying infrastructure, hardware, and software, as possible, while running a typical J2EE application. The J2EE application emulates a number of car dealerships interacting with a manufacturer. The dealers use simulated web browsers to talk to the manufacturer, and stock and transactions are updated and kept in a database. The manufacturing process is implemented using RMI. SPECjEnterprise2010 has further modernized the benchmark by introducing web services and more Java EE 5.0 functionality.

The SPECjAppServer suite is not just a JVM benchmark; it can also be used to measure the performance of everything from server hardware and network switches to a particular brand of application server. The publication guidelines specify that the complete stack of software and hardware needs to be provided along with the score. SPECjAppServer / SPECjEnterprise2010 is an excellent benchmark in that any part of a complete system setup can be measured against a reference implementation. This makes it relevant for everyone from hardware vendors to application server developers. The benchmark attempts to measure the performance of the middle tier of the J2EE application, rather than that of the database or the data generator (driver).

The benchmark is quite complicated to set up and hard to tune. In theory, once set up, it can run self-contained on a single machine, but it is rather resource heavy, and such a setup will not produce any interesting results.

A typical setup requires a System Under Test (SUT), consisting of network infrastructure, application servers, and a database server, all residing on different machines. A driver, external to the test system, injects load into the setup. This is an example of the technique for measuring outside the system that was explained earlier in this chapter. One of the more fundamental changes in SPECjEnterprise2010, compared to earlier versions, is that the database load has been significantly reduced so that other parts of the setup (the actual application on the application server) become more relevant for performance.

In a complete setup, the performance of everything, from which network switch to which RAID solution the disk uses, matters. For a JVM, SPECjAppServer is quite a good benchmark. It covers a large Java code base and the execution profile is spread out over many long stack traces with no particular "extra hot" methods. This places strict demands on the JIT to do correct inlining. It can no longer look for just a few bottleneck methods and optimize them.


As of SPECjAppServer2004, a transaction rate (TxRate) is used for work packet injection when running the benchmark. The rate is increased as long as the system can keep up with the load; as soon as it fails, the maximum TxRate for the benchmarked system has been determined, and this is used to compute the score.
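As a rough sketch of this ramping scheme (not the actual SPEC driver; the class, method, and criterion names below are made up for illustration), a driver can step up the injection rate until a pass criterion is no longer met:

// Hypothetical sketch of TxRate-style load ramping -- not the real
// SPECjAppServer/SPECjEnterprise driver. All names here are invented.
public class LoadDriver {

    // Fraction of requests that must meet the response-time target for a
    // given rate to count as "sustained" (made-up criterion for illustration).
    static final double PASS_CRITERION = 0.95;

    // Stand-in for driving the System Under Test at 'txRate' transactions per
    // second for a fixed interval; returns the fraction of passing requests.
    static double runAtRate(int txRate) {
        return txRate <= 400 ? 1.0 : 0.80;  // placeholder for real measurements
    }

    public static void main(String[] args) {
        int txRate = 50;
        int maxSustainedRate = 0;
        while (true) {
            double passRatio = runAtRate(txRate);
            if (passRatio < PASS_CRITERION) {
                break;                     // the SUT can no longer keep up
            }
            maxSustainedRate = txRate;     // highest rate sustained so far
            txRate += 50;                  // ramp up and try again
        }
        System.out.println("Maximum sustained TxRate: " + maxSustainedRate);
    }
}

The real driver is, of course, far more elaborate, with warm-up phases, multiple driver machines, and well-defined measurement intervals, but the ramp-until-failure principle is the same.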

The realism is much better in newer generations of the benchmark. Application servers use more up-to-date standards and workloads have been increased to fit modern architectures. Support for multiple application servers and multiple driver machines has also been added. Measurements have also been changed to more accurately reflect performance.

The SPECjbb suite

SPECjbb is probably one of the most widespread Java benchmarks in use today. It is interesting because it has been used frequently in academic research, and has been a point of competition between the big three JVM providers, Oracle, IBM, and Sun Microsystems. The big three, in cooperation with hardware vendors, have taken turns publishing press releases announcing new world records.

SPECjbb has existed in two generations: SPECjbb2000 (now retired) and, more recently, SPECjbb2005, which is still in use. Like SPECjAppServer, SPECjbb emulates a transaction processing system in several tiers, but it runs on one machine as a self-contained application.

SPECjbb has done the Java world some good in that many of the optimizations that JVMs perform, especially code optimizations, have been driven by this benchmark, producing beneficial spin-off effects. The quest for SPECjbb scores has also contributed performance to many real-life applications, for example through more efficient garbage collection and locks.

Note

Here are some examples of functionality and optimizations in the JRockit JVM that are a direct result of trying to achieve high scores on SPECjbb. There are many more. All of these optimizations have produced measurable performance increases for real-world applications, outside the benchmark:

  • Lazy unlocking (biased locking)
  • Better heuristics for object prefetching
  • Support for large pages for code and heap
  • Support for non-contiguous heaps
  • Improvements to array handling such as System.arraycopy implementations, clear-on-alloc optimizations, and assignments to arrays
  • Advanced escape analysis that runs only on parts of methods

A downside of SPECjbb is that it is, in fact, quite hardware-dependent. SPECjbb is very much memory bound, and just switching architectures to one with a larger L2 cache will dramatically increase performance.

Another downside of SPECjbb is that it is possible to get away with caring only about throughput. The best scores can be achieved by occasionally stopping all execution and then letting massive amounts of parallel garbage collection run for as long as it takes to clean the heap.

SPECjbb2005 has also been used as the basis for the SPECpower_ssj2008 benchmark that utilizes the same transaction code, but with an externalized driver. It is used to quantify transactions per Watt at different levels of load, to provide a basis for measuring power consumption.

Note

Here's another benchmarking anecdote. Sometimes an optimization designed for a benchmark is only that and has no real-world application. An example from JRockit is calls to System.currentTimeMillis. This is a Java method that goes to native code and finds out what the system time is, expressed in milliseconds since January 1, 1970. Because this typically involves a system call or a privileged operation, very frequent calls to System.currentTimeMillis could be a bottleneck on several platforms.

Irritatingly enough, it turned out that calls to System.currentTimeMillis made up several percent of the total runtime of SPECjbb2000. On some platforms, such as Windows and Solaris, quick workarounds to obtain system time were available, but not on Linux. On Linux, JRockit got around the bottleneck by using its own signal-based OS timers instead. The Linux JVM uses a dedicated thread to catch an OS-generated signal every 10 milliseconds. Each time the signal is caught, a local time counter is increased. This makes timer granularity a little worse than with the system timer. However, as long as the timer is safe (that is, it cannot go backwards), this maintains Java semantics, and SPECjbb runs much faster.
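JRockit's actual mechanism is native and signal-driven, but the underlying idea can be sketched in pure Java: a background thread periodically advances a cached counter, so readers pay for a volatile load instead of a system call. The class below is a hypothetical illustration, not JRockit code.

// Pure-Java sketch of a coarse-grained "safe" timer, illustrating the idea
// described above (JRockit's real implementation is native and signal-based).
// A single daemon thread advances the counter roughly every 10 ms, so the
// value never goes backwards, at the cost of ~10 ms granularity.
public final class CoarseClock {

    private static volatile long cachedMillis = System.currentTimeMillis();

    static {
        Thread updater = new Thread(() -> {
            while (true) {
                cachedMillis += 10;       // only this thread writes the field
                try {
                    Thread.sleep(10);     // ~10 ms tick, mirroring the granularity above
                } catch (InterruptedException e) {
                    return;
                }
            }
        }, "coarse-clock");
        updater.setDaemon(true);
        updater.start();
    }

    // Cheap read: a volatile load instead of a call into the operating system.
    public static long currentTimeMillis() {
        return cachedMillis;
    }
}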

Today, in JRockit versions for Linux, the native safe timers are disabled. If you, for some weird reason, have problems with the performance of System.currentTimeMillis in your application, they can still be enabled with the hidden flag -XX:UseSafeTimer=true. We have never heard of anyone who needs this.

SipStone

SipStone (www.sipstone.org) provides a suite of benchmarks that are interesting for the telecom industry, making it possible to benchmark implementations of the Session Initiation Protocol (SIP).

Real-life scenarios for testing telecom applications are provided. One of the benchmarks used is Proxy200, where the SIP application is provided by the benchmark user, typically a SIP server provider. This benchmark is interesting in that it provides a standardized way of testing SIP performance.

The DaCapo benchmarks

DaCapo (www.dacapobench.org) is a free benchmark suite created by the DaCapo group, an academic consortium that does JVM and runtime research. The idea behind the initiative is to create a suite with more GC-intensive loads and more modern Java applications.

The suite includes, for example, a parser generator, a bytecode optimizer, a Java-based Python interpreter, and some of the non-GUI unit tests for the Eclipse IDE. It is fairly simple to run and makes for an interesting collection of benchmarks that stress some typical applications of Java.

Real world applications

Of course, one should not underestimate the usefulness of keeping a library of real-world applications around for benchmarking, provided they exercise relevant areas of your code. For example, developers of a Java web server would do well to include a couple of free Java web servers in their benchmarking matrix, to make sure that a competitive advantage is maintained over them.

The authors of this book and their JVM development teams use, with kind permission, a collection of customer applications that have caused performance issues with JRockit over the years. This enables us to better understand performance and to make sure that no regressions appear. These applications have been turned into benchmarks that are executed in known environments in nightly or weekly performance runs; smaller benchmarks are run more often.

Note

Having "thrashatons" for a development team every now and then is a fun and useful activity. Have the developers spend a few days deploying their product on every kind of relevant (and not so relevant) compatible application platform downloaded from the web. This helps weed out bugs and performance problems. While, naturally, finding thrashaton fodder for a JVM or a compiler is easy (the input is any Java program), there is still plenty of source out there for testing more specialized platforms. Also look for load generators, network tests, and other platform-agnostic products that can stress an application until unknown issues pop up.

Our recommendation is always to keep a large "zoo" of applications around for platform testing. If your platform is a J2EE application, deploy it on several application servers. If it is a mathematical package, run it with several JVMs and different java.lang.Math implementations, and so on. Storage is cheap, so never throw anything away. Test, benchmark, improve, and repeat!
