Java vs. C, a totally unscientific microbenchmark

Inspired by an interview question, I wrote a little bit of sample code that creates an array of a billion random integers and either (A) sums them as it goes, or (B) goes back over the array and sums it on a second pass.
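The Java side looks roughly like this (a minimal sketch of the idea, not the exact code I ran; a billion ints is ~4 GB, so it needs a heap flag along the lines of -Xmx5g):

    import java.util.Random;

    public class SumBench {
        static final int N = 1_000_000_000;   // one billion ints, ~4 GB

        public static void main(String[] args) {
            long t0 = System.nanoTime();
            int[] a = new int[N];             // the JVM zeroes this, unlike malloc
            System.out.printf("alloc: %.2fs%n", (System.nanoTime() - t0) / 1e9);

            Random rnd = new Random();

            // (A) fill with random ints and sum in the same pass
            t0 = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < N; i++) {
                a[i] = rnd.nextInt();
                sum += a[i];
            }
            System.out.printf("one pass: %.2fs (sum=%d)%n", (System.nanoTime() - t0) / 1e9, sum);

            // (B) fill first, then sum on a separate second pass
            t0 = System.nanoTime();
            for (int i = 0; i < N; i++) a[i] = rnd.nextInt();
            sum = 0;
            for (int i = 0; i < N; i++) sum += a[i];
            System.out.printf("two passes: %.2fs (sum=%d)%n", (System.nanoTime() - t0) / 1e9, sum);
        }
    }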

In C (cygwin, gcc 6.4.0 64-bit):
– summing as I go, about 3.7 seconds on my laptop; -O3 doesn’t make much difference
– two passes, about 6.5 seconds unoptimized, about 4.1 seconds with -O3
– time to malloc(sizeof(int) * 1000000000) < 1ms

In Java (jdk 10):
– summing as I go, about 12.1 seconds
– two passes, about 13.4 seconds
– time to new int[1_000_000_000] about 1.7 seconds

So Java sucks, right?

Replacing the random integers with consecutive integers, all in two passes:
Java: 2.7s
Unoptimized C: 6.1s
C at -O3: 2.1s

Given how much of that 2.7s is Java's equivalent of malloc zeroing the memory (new int[] is really closer to calloc, since the JVM has to zero every array it hands out), that's pretty impressively fast.

So the real issue seems to be that Random.java is WAYYYY slower than rand()?
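That theory is easy enough to test in isolation (a hedged sketch, not something I benchmarked for this post): java.util.Random updates its seed through an AtomicLong compare-and-set to stay thread-safe, while SplittableRandom (Java 8+) keeps a plain long seed and skips the synchronization entirely. Timing the two generators side by side, with no array involved, would show how much of the gap is just the RNG:

    import java.util.Random;
    import java.util.SplittableRandom;

    public class RngBench {
        public static void main(String[] args) {
            final int N = 1_000_000_000;

            // java.util.Random: thread-safe, CAS on every call
            Random slow = new Random();
            long t0 = System.nanoTime(), sum = 0;
            for (int i = 0; i < N; i++) sum += slow.nextInt();
            System.out.printf("java.util.Random: %.2fs (sum=%d)%n", (System.nanoTime() - t0) / 1e9, sum);

            // SplittableRandom: plain long seed, no synchronization
            SplittableRandom fast = new SplittableRandom();
            t0 = System.nanoTime(); sum = 0;
            for (int i = 0; i < N; i++) sum += fast.nextInt();
            System.out.printf("SplittableRandom: %.2fs (sum=%d)%n", (System.nanoTime() - t0) / 1e9, sum);
        }
    }

If SplittableRandom closes most of the gap with C's rand(), the array and the summing loop were never the problem.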

JVM performance over time

Out of curiosity, I decided to run some quick (and non-repeated, largely unscientific) benchmarks of a Gentoo (-march=native, etc.) OpenJDK/IcedTea build against actual Sun/Oracle JVM builds. Interestingly, the Gentoo builds are slower. Unsurprisingly, the JVM gets noticeably faster across versions, from 1.6 to 1.7 to 1.8.
