optimization – Is it possible to write a decent java optimizer if information is lost in the translation to bytecode?

It occurred to me that when you write a C program, the compiler knows both the source and the destination platform (for lack of a better term) and can optimize for the exact machine it is building code for.
But in Java the best the compiler can do is optimize to bytecode. That may be great, but there’s still a layer in the JVM that has to interpret the bytecode, and the farther the bytecode is, translation-wise, from the final machine architecture, the more work has to be done to make it run.

It seems to me that a bytecode optimizer couldn’t be nearly as good, because it has lost all the semantic information available in the original source code (which may already have been butchered by the Java compiler’s optimizer).

So is it even possible to ever approach the efficiency of C with a java compiler?


Actually, a bytecode JIT compiler can exceed the performance of statically compiled languages in many instances, because it can evaluate the bytecode at run time and in the actual execution context. So the app’s performance increases as it continues to run.


What Kevin said. Also, the bytecode optimizer (the JIT) can take advantage of runtime information to perform better optimizations. For instance, it knows which code executes most (hot spots), so it doesn’t spend time optimizing code that rarely executes. It can do most of what profile-guided optimization gives you (branch prediction, etc.), but on the fly, for whatever the target processor is. This is why the JVM usually needs to “warm up” before it reaches its best performance.
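The warm-up effect is easy to observe. This sketch (hypothetical class and method names) times the same hot method over several runs; on a typical HotSpot JVM the later runs are usually faster, once the JIT has compiled the method:

```java
public class WarmupDemo {
    // A simple hot method: after enough invocations the JIT will
    // compile it to native code instead of interpreting the bytecode.
    public static long sumSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        for (int run = 0; run < 5; run++) {
            long start = System.nanoTime();
            long result = sumSquares(10_000_000);
            long elapsed = System.nanoTime() - start;
            System.out.println("run " + run + ": " + elapsed / 1_000
                    + " us (result " + result + ")");
        }
    }
}
```

Timings vary by machine and JVM flags, so no particular speed-up factor is guaranteed; the point is only that performance changes as the JVM runs.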


In theory both optimizers should behave ‘identically’, since it is standard practice for C/C++ compilers to perform optimization on an intermediate representation or on the generated assembly rather than on the source code, so much of the semantic information has already been lost there too.


If you read the bytecode, you may see that the compiler doesn’t optimise the code very well. However, the JIT can optimise the code, so this really doesn’t matter.
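As an illustration (hypothetical class names): the language spec requires javac to fold constant expressions, but nearly everything else is emitted as straightforward bytecode and left for the JIT:

```java
public class FoldingDemo {
    // javac folds this constant expression at compile time; the bytecode
    // just pushes the value 60 (you can check with `javap -c FoldingDemo`).
    public static int compileTimeConstant() {
        return 6 * 10;
    }

    // This loop is compiled to naive bytecode; any unrolling or other
    // loop optimisation happens in the JIT at run time, not in javac.
    public static int runtimeSum(int n) {
        int total = 0;
        for (int i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(compileTimeConstant());
        System.out.println(runtimeSum(10));
    }
}
```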

Say you compile the code on an x86 machine and a new architecture comes along; let’s call it x64. The same Java binary can take advantage of the new features of that architecture, even though the architecture might not have existed when the code was compiled. It means you can take old distributions of libraries and still get the latest hardware-specific optimisations. You cannot do this with C/C++.

Java can inline calls to virtual methods. Say you have a virtual method with many different possible implementations, but in practice only one or two of those implementations are called most of the time. The JIT can detect this and inline up to two method implementations, while still behaving correctly if you happen to call another implementation. You cannot do this with C/C++.
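A sketch of such a call site (all names hypothetical): the loop below makes a virtual call through an interface, but at run time nearly every receiver is a `Circle`, so the JIT can inline `Circle.area()` behind a cheap type check and fall back to a real virtual call for the rare `Square`:

```java
interface Shape { double area(); }

class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    final double s;
    Square(double s) { this.s = s; }
    public double area() { return s * s; }
}

public class InlineDemo {
    // Virtual call site: the JIT profiles the receiver types seen here
    // and may inline the dominant implementation(s).
    public static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape sh : shapes) {
            total += sh.area();
        }
        return total;
    }

    public static void main(String[] args) {
        Shape[] shapes = new Shape[1000];
        for (int i = 0; i < shapes.length; i++) {
            // Mostly Circles: the call site is effectively monomorphic.
            shapes[i] = (i % 100 == 0) ? new Square(2) : new Circle(1);
        }
        System.out.println(totalArea(shapes));
    }
}
```

A C++ compiler working ahead of time cannot know which implementation will dominate, which is why this kind of speculative inlining needs runtime profiling.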

Java 7 supports escape analysis for locked/synchronised objects: it can detect that an object is only used in a local context and drop the synchronisation on that object entirely.
In current versions of Java, it can also detect when two consecutive methods lock the same object and keep the lock held between them, rather than releasing and re-acquiring it.
You cannot do this with C/C++, because there is no language-level understanding of locking.
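The classic example is `StringBuffer`, whose `append()` methods are synchronized. In the sketch below (hypothetical class and method names), the buffer never escapes the method, so escape analysis lets the JIT elide the monitor operations entirely; even without that, consecutive locks on the same object can be coarsened into one:

```java
public class LockElisionDemo {
    // sb is a purely local object that never escapes this method, so the
    // JIT can prove no other thread can contend on it and drop the
    // locking done inside StringBuffer.append() (lock elision).
    public static String join(String a, String b, String c) {
        StringBuffer sb = new StringBuffer();
        sb.append(a); // each append() is synchronized on sb...
        sb.append(b); // ...but the locks can be elided or merged
        sb.append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(join("foo", "bar", "baz"));
    }
}
```

The method behaves identically either way; the optimisation only removes locking overhead that the JVM can prove is unobservable.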
