Thanks for that, although at first glance it looks fairly uninteresting. It seems mainly to do the basic stuff that we did years ago, e.g.
Leaving ACCEPT out of the picture for the moment, EmberIO is set up so that data flows in the following manner:
* READ events trigger physical reads into ByteBuffers (which are under control of the app). When a READ gets a full object, it puts the object on the PROCESS FIFO queue. These READ events are generally triggered by an NIO Selector.
* PROCESS events get triggered when something gets put in the PROCESS queue. This is independent of the Selector.
* WRITE events get triggered when a user calls write() on a ReadWriteEndpoint. What happens here is that we stick the object to write on a WRITE FIFO queue, and then fire the WRITE event. In practice, if we're in non-blocking WRITE mode we try the write first, and if that doesn't work (or is incomplete) we then add OP_WRITE interest to the endpoint and wake up the Selector to deal with it.
...is about the same as someone doing a 3D engine first writing a helper class to choose a screen mode intelligently: this is basic code that anyone can write, and most experienced networking people probably did the first time they seriously used NIO. Arguably, this kind of stuff ought to be part of the standard libraries (although I'm not sure it should be, personally...see below).
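To make the comparison concrete, the write path the quoted bullets describe (try the write straight away; if it doesn't complete, add OP_WRITE interest and wake the Selector) is the standard pattern most of us wrote ourselves. A minimal sketch, with names of my own invention (`WriteHelper`, `pending`) rather than EmberIO's actual API:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of the WRITE path described above; not EmberIO's code.
class WriteHelper {
    private final Queue<ByteBuffer> pending = new ConcurrentLinkedQueue<ByteBuffer>();

    // Called from the user's write(): queue the data, attempt the write now,
    // and only involve the Selector if the write was refused or incomplete.
    void write(SocketChannel channel, SelectionKey key, Selector selector,
               ByteBuffer buf) throws IOException {
        pending.add(buf);
        ByteBuffer head = pending.peek();
        channel.write(head);                 // opportunistic non-blocking write
        if (head.hasRemaining()) {
            // Incomplete: let the selector thread finish the job later.
            key.interestOps(key.interestOps() | SelectionKey.OP_WRITE);
            selector.wakeup();               // selector may be blocked in select()
        } else {
            pending.remove();                // fully written on the first try
        }
    }
}
```

The point of the wakeup() is that the Selector thread may be parked inside select() and would otherwise not notice the new OP_WRITE interest until some other event fired.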
I suspect the official reasons why it's not would include the argument "deployment is EXTREMELY sensitive to how this stuff is implemented, and unless your implementation is tunable to the Nth degree it will be useless for many people". Which is certainly true for many servers - and would make me very cautious about EmberIO (wondering how long I could use it before I was forced to wade into the source and rewrite big chunks because they'd made assumptions that don't hold in my environment...).
There's some stuff in the article that is either interesting, scary, or reason to avoid Ember (depending upon how paranoid you are):
Most people don't seem to realize that Java sockets really love to deal with just one byte at first, and then open the flood gates immediately afterwards.
EmberIO was coded with full awareness of this odd quirk of sockets, and if a non-blocking read or write does not complete on the first try it tries it again immediately. Knowing what you know now, you won't be surprised to hear that this tiny optimization boosts throughput by more than 50%.
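For reference, my reading of the "try it again immediately" trick the article describes would be something like the following (a sketch under my own assumptions; `readWithRetry` is a made-up name, not EmberIO's API):

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.ReadableByteChannel;

// Guess at the article's claimed optimization: if a non-blocking read
// leaves the buffer unfilled, issue one more read() in the same pass
// before going back to the selector, on the theory that the socket
// hands over one byte first and the rest immediately afterwards.
final class RetryRead {
    static int readWithRetry(ReadableByteChannel ch, ByteBuffer buf) throws IOException {
        int n = ch.read(buf);
        if (n > 0 && buf.hasRemaining()) {
            int more = ch.read(buf);    // second attempt, no selector round-trip
            if (more > 0) {
                n += more;
            }
        }
        return n;
    }
}
```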
Well, our server (unoptimized) is faster than Apache without any coding for this scenario; leaving me wondering whether this is:
- Platform specific (quite likely)
- VM-version specific (possible; there are precedents for this)
- Only a concern if your threads delay at certain points in their cycle (our different network architecture just might make this never appear)
I've done a lot of NIO debugging, and even have test cases which clearly demonstrate this is BS for particular deployments; this could be a good example of the "Ember makes assumptions that are not true and harmful" scenario that I outlined above as a possibility.
...but of course, I've not tried the lib, and the linked article is not official, so could be peddling BS because of the ignorance of the author (happens a lot; sigh).