Abuse
« Posted 2010-09-08 22:45:00 »
I've never really thought about it before now, but as I recently purchased a 6-core Phenom II it dawned on me that javac is making almost no use of these extra resources.
This seems rather archaic, and not at all in keeping with "a platform that has been designed from the ground up to support concurrent programming."
I wonder - what do enterprise customers do? Do multi-threaded Java compilers exist? Surely large code bases & multi-core processors are at the heart of their business - it'd be extraordinary if they'd been lumbered with this limitation for so long and done nothing about it!
A few Google searches have turned up some investigation into the issue back in 2008, but it looks like the work was abandoned for one reason or another. This lack of interest seems very odd to me, as from my rudimentary understanding compilation appears to be eminently suitable for distribution across any number of cores, or for that matter across distributed networks.
This liability is obviously only going to get worse with the inevitable growth of parallel computing, and the subsequent larger, more complex programs that take advantage of these resources. Are the powers that be simply too afraid to rewrite the compiler for fear of introducing bugs?
Orangy Tang
« Reply #1 - Posted 2010-09-08 23:20:20 »
I wonder if most people just don't care - compilation in java is lightning quick anyway (doubly so in Eclipse when it's doing it in the background as you type, I assume Netbeans has something similar).
In environments with multithreaded compilation it seems to be a side effect of having to compile the whole binary in one go - with java you can just divide your classes up arbitrarily and spin up multiple instances of javac. Hell, most C/C++ environments still link in a single thread, and that can take minutes - many times longer than a complete java compile.
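The "divide your classes up and spin up multiple javac instances" idea can be sketched with the compiler API that ships with the JDK (javax.tools.ToolProvider). This is a minimal sketch, not a real build tool: the partitioning is arbitrary for illustration, and a real build would have to respect cross-partition dependencies (e.g. via a shared sourcepath).

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.nio.file.*;
import java.util.*;
import java.util.concurrent.*;

public class ParallelJavac {
    // Compile each partition of source files with its own javac invocation,
    // one task per partition, spread over a thread pool sized to the CPU.
    static void compilePartitions(List<List<Path>> partitions, Path outDir)
            throws InterruptedException, ExecutionException {
        JavaCompiler javac = ToolProvider.getSystemJavaCompiler();
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        List<Future<Integer>> results = new ArrayList<>();
        for (List<Path> part : partitions) {
            results.add(pool.submit(() -> {
                List<String> args = new ArrayList<>(List.of("-d", outDir.toString()));
                part.forEach(p -> args.add(p.toString()));
                // run() returns 0 on success, like the javac command line
                return javac.run(null, null, null, args.toArray(new String[0]));
            }));
        }
        for (Future<Integer> r : results) {
            if (r.get() != 0) throw new IllegalStateException("compile failed");
        }
        pool.shutdown();
    }

    public static void main(String[] args) throws Exception {
        // Demo: generate four trivial, independent classes and compile them
        // as two partitions running concurrently.
        Path src = Files.createTempDirectory("src");
        Path out = Files.createTempDirectory("out");
        List<Path> files = new ArrayList<>();
        for (int i = 0; i < 4; i++) {
            Path f = src.resolve("A" + i + ".java");
            Files.writeString(f, "public class A" + i + " {}");
            files.add(f);
        }
        compilePartitions(List.of(files.subList(0, 2), files.subList(2, 4)), out);
        System.out.println(Files.exists(out.resolve("A0.class"))); // → true
    }
}
```

This only works cleanly when the partitions don't depend on each other; the moment a class in one partition references a class in another, the invocations need a common sourcepath or an ordering, which is exactly the dependency problem discussed below in the thread.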
Riven
« Reply #2 - Posted 2010-09-09 05:51:47 »
In Eclipse, compile times are often in the range of 0.1 seconds. If I do a full rebuild of all my projects, it takes 5 seconds. I have to admit that I run my workspace off of a RAM disk, so the bottleneck 'for everybody else' is probably disk I/O, which is known to degrade badly under multi-threaded access.
Hi, appreciate more people! Σ ♥ = ¾ Learn how to award medals... and work your way up the social rankings!
princec
« Reply #3 - Posted 2010-09-09 11:40:32 »
I switched to a solid state disk with my workspace on. Everything sped up by roughly a factor of 5. Cas
elias4444
« Reply #4 - Posted 2010-09-09 14:37:04 »
> I switched to a solid state disk with my workspace on. Everything sped up by roughly a factor of 5.

Dang, I've got to get me one of those! I just wish the prices were a bit lower.
Riven
« Reply #5 - Posted 2010-09-09 15:00:09 »
With a RAM disk, you get a ~25x speed increase.
I make a backup every hour to be safe.
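An hourly backup like that can be sketched in plain Java with a scheduled task. This is a minimal sketch, not a battle-tested tool: the paths are hypothetical, and a real setup would want incremental copying and some error handling beyond printing a stack trace.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.concurrent.*;

public class WorkspaceBackup {
    // Recursively copy src into dst, overwriting files that already exist.
    static void copyTree(Path src, Path dst) throws IOException {
        try (var paths = Files.walk(src)) {
            for (Path p : (Iterable<Path>) paths::iterator) {
                Path target = dst.resolve(src.relativize(p).toString());
                if (Files.isDirectory(p)) Files.createDirectories(target);
                else Files.copy(p, target, StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    // Schedule an hourly backup; the daemon thread runs for as long as the
    // JVM that launched it (e.g. a long-running tool process) stays alive.
    static ScheduledExecutorService scheduleHourly(Path workspace, Path backup) {
        ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "workspace-backup");
            t.setDaemon(true);
            return t;
        });
        timer.scheduleAtFixedRate(() -> {
            try { copyTree(workspace, backup); }
            catch (IOException e) { e.printStackTrace(); }
        }, 1, 1, TimeUnit.HOURS);
        return timer;
    }

    public static void main(String[] args) throws IOException {
        // Demo on temp directories; the real source would be the RAM disk
        // workspace and the destination a directory on a physical disk.
        Path ws = Files.createTempDirectory("ws");
        Files.writeString(ws.resolve("Main.java"), "public class Main {}");
        Path bak = Files.createTempDirectory("bak");
        copyTree(ws, bak);
        System.out.println(Files.exists(bak.resolve("Main.java"))); // → true
    }
}
```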
princec
« Reply #6 - Posted 2010-09-09 15:12:34 »
I find that the worry of a crash prevents me from even considering a RAM disk. One that shadow-copied to a real disk, though... now that'd work. Cas
dime
Senior Newbie
« Reply #7 - Posted 2010-09-10 06:43:53 »
SSDs are pretty fast and pretty darn safe. I have one at home, and Eclipse flies on it compared to what I have at work (even though my work machine has a better CPU and more memory).
Riven
« Reply #8 - Posted 2010-09-10 06:45:17 »
> SSDs are pretty fast and pretty darn safe.

RAM disk != SSD
Roquen
« Reply #9 - Posted 2010-09-10 07:23:36 »
What? People actually use javac?
Seriously: Eclipse spawns multiple threads to compile. I'd be amazed if NetBeans doesn't as well. The trick with parallel compiling is that you either need to know the complete dependency chains (DC) in advance, or have a bunch of "glue" logic for thread communication to deal with these dependencies. Eclipse handles the problem by knowing the DC in advance. You can do the same with the old-school UNIX make systems by having a "make depends" step build the DC and then invoking make to work in parallel (typically with 1-2 processes per core). The downside of this method is that it launches a process for each unique chain (at best... assuming your dependency generator is doing a good job). "javac" is easy to hack, so you could create a version which uses (say) an external DC file and builds everything that is out-of-date. My guess is that you'd see, at best, a 2x improvement over something like Eclipse. Creating the programs needed and switching between your editor and shell would probably end up eating much more time than you'd save.
@Orangy Tang: For many modern C/C++ compilers, the "link" stage now includes global optimizations, and that's why it sometimes takes a while. Java compiles very fast mostly because it performs the equivalent of a "debug" build.
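The "known dependency chains in advance" scheduling described above boils down to topological batching: anything whose dependencies are already built can be compiled concurrently. A minimal sketch of that idea, with a hypothetical class graph (the names and edges are made up for illustration):

```java
import java.util.*;

public class DepLevels {
    // deps maps each compilation unit to the units it depends on. Returns
    // batches that can be compiled concurrently: every unit in batch i
    // depends only on units in earlier batches (Kahn's algorithm, by level).
    static List<List<String>> parallelBatches(Map<String, Set<String>> deps) {
        Map<String, Integer> remaining = new HashMap<>();   // unbuilt deps per unit
        Map<String, List<String>> dependents = new HashMap<>();
        for (var e : deps.entrySet()) {
            remaining.put(e.getKey(), e.getValue().size());
            for (String d : e.getValue())
                dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(e.getKey());
        }
        List<String> ready = new ArrayList<>();
        remaining.forEach((u, n) -> { if (n == 0) ready.add(u); });
        List<List<String>> batches = new ArrayList<>();
        List<String> current = ready;
        while (!current.isEmpty()) {
            batches.add(new ArrayList<>(current));
            List<String> next = new ArrayList<>();
            for (String u : current)
                for (String d : dependents.getOrDefault(u, List.of()))
                    if (remaining.merge(d, -1, Integer::sum) == 0) next.add(d);
            current = next;
        }
        return batches;
    }

    public static void main(String[] args) {
        // Hypothetical dependency chain: Main -> {Util, Model}, Model -> {Util}.
        Map<String, Set<String>> deps = Map.of(
                "Util", Set.of(),
                "Model", Set.of("Util"),
                "Main", Set.of("Util", "Model"));
        System.out.println(parallelBatches(deps)); // → [[Util], [Model], [Main]]
    }
}
```

Each batch could then be handed to a thread pool; the batch boundaries are exactly the synchronization points that the "glue" logic would otherwise have to negotiate at runtime.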
markus.borbely
« Reply #10 - Posted 2010-09-11 20:43:19 »
We have Hudson to compile, test and package for us, so I almost never see javac. Of course, I have to test things locally, but it's not the building that takes time - it's restarting the application server. I used to work with IBM WebSphere Commerce; it took 5 minutes to start (I'm not kidding).
DzzD
« Reply #11 - Posted 2010-09-11 21:46:04 »
I've really never experienced any javac time problem. I just tried rebuilding all the projects I'm working on right now, and they all took less than 5 seconds to compile (on a really average laptop); those projects range from 70 to 400 classes for the biggest one (using JCreator).
Anyway, as mentioned above, multithreaded compilation may be very hard because the dependencies are unknown before compilation, no?