  What is best: 2 Selectors or only 1?  (Read 2802 times)
Offline karmaGfa

Junior Member




Miaow


« Posted 2006-02-01 07:16:58 »

Hi guys,

Which is better to have:

one selector used only with channels registered for OP_ACCEPT, plus one selector used only with socket channels registered for OP_READ (so two selectors in total),

or

only one selector used for both?
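
For concreteness, a minimal sketch of the single-selector variant (the port number and the read handling are placeholders, not from any real project): the server channel registers for OP_ACCEPT and every accepted channel registers for OP_READ with the same selector. The two-selector variant would register the accepted channels with a second selector driven by its own thread.

Code:
import java.net.InetSocketAddress;
import java.nio.channels.*;
import java.util.Iterator;

public class SingleSelectorLoop {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(4000));
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel client = ((ServerSocketChannel) key.channel()).accept();
                    client.configureBlocking(false);
                    client.register(selector, SelectionKey.OP_READ);  // same selector handles reads
                } else if (key.isReadable()) {
                    // read from (SocketChannel) key.channel() here
                }
            }
        }
    }
}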

Le Moulin Studio (http://www.le-moulin-studio.com) - MMO Technologies and Services.
Offline Herkules

Senior Member




Friendly fire isn't friendly!


« Reply #1 - Posted 2006-02-01 14:27:57 »

Hm, HeadQuarter uses one for both... do you see any advantages in having two? It just makes shutdown harder...

You would need two threads then as well?

HARDCODE    --     DRTS/FlyingGuns/JPilot/JXInput  --    skype me: joerg.plewe
Offline karmaGfa

Junior Member




Miaow


« Reply #2 - Posted 2006-02-01 15:23:52 »

The advantage of having two of them (handled by two threads) is that under heavy load, both from processing incoming data and from connection requests, incoming connections can still be accepted quickly, keeping the server available to those who try to connect.

I have read somewhere that the backlog of pending connection requests is pretty small... 5 or 6. Can someone confirm that, or is it a myth?
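
(For reference, the backlog can at least be requested explicitly when binding; the operating system may still clamp the value. A minimal sketch, with a hypothetical port and backlog value:)

Code:
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;

public class BacklogHint {
    public static void main(String[] args) throws Exception {
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        // The second argument is the requested backlog of pending connections.
        server.socket().bind(new InetSocketAddress(4000), 128);
    }
}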

I would also like to know whether there are any performance problems when using more than one selector in a program (let's say plenty of them).

Le Moulin Studio (http://www.le-moulin-studio.com) - MMO Technologies and Services.
Offline Jeff

JGO Coder




Got any cats?


« Reply #3 - Posted 2006-02-02 00:48:21 »

Well, if you use one selector per connection you're back to the whole problem that NIO was designed to solve -- having a thread per connection.

Unless you have multiple processors, or parts of your task can be put to sleep on block, you always lose power by separating a task into more threads. You're adding context switching costs, and you still have the same amount of CPU power no matter how much you chop it up.

I agree with others that I can't see the value in using multiple selectors.

Got a question about Java and game programming?  Just new to the Java Game Development Community?  Try my FAQ.  It's likely you'll learn something!

http://wiki.java.net/bin/view/Games/JeffFAQ
Offline Mr_Light

Senior Member




shiny.


« Reply #4 - Posted 2006-02-02 11:27:38 »

The way I read it, he's using one thread to accept his connections and another to handle them, so that the server always responds in a timely manner. That's two threads for 0..n connections, not one per connection.

It's harder to read code than to write it. - it's even harder to write readable code.

The gospel of brother Riven: "The guarantee that all bugs are in *your* code is worth gold." Amen brother a-m-e-n.
Offline blahblahblahh

JGO Coder


Medals: 1


http://t-machine.org


« Reply #5 - Posted 2006-02-02 16:00:54 »

Quote
Unless you have multiple processors, or parts of your task can be put to sleep on block, you always lose power by separating a task into more threads. You're adding context switching costs, and you still have the same amount of CPU power no matter how much you chop it up.

A standard way to increase performance in server situations is finer-grained optimization. Sun is famous for boasting that Solaris has many, many more synchronization locks than any other OS, allowing it to make much more precise scheduling decisions, and hence making it more likely to extract better performance in any given situation.

In reality, if your tasks cannot be broken up like this, you've probably not designed your server properly and are never going to get good performance. However, even if that weren't the case, saying "you always lose power" is at least misleading: if you are buying an Intel processor these days, for example, it's actually hard not to end up with a multi-core CPU. Sun, IBM, and Intel have all claimed that multicore CPUs are the way forward, and at various times have hinted that all their future CPUs will be multicore. On top of that, it's currently standard for internet-hosted servers to be dual or quad CPU (which can be annoying when you want a low-CPU-power machine and, say, just want lots of RAM - you end up having to pay for CPU power you certainly don't need).

And even on single-core CPUs, at the hardware level all modern desktop CPUs are optimized for multi-threaded execution. There is silicon dedicated to this task on Intel and AMD processors, and has been for years.

Quote
I agree with others that I can't see the value in using multiple selectors.

On the contrary, I would say that if you care about performance then - in theory - it is foolish to use only one selector. That doesn't guarantee it is best in practice, but in my personal experience multiple selectors deliver performance no worse than a single selector.

To be *absolutely* clear: for most stuff people want to do, one selector is fine. It's not the best approach, but it's certainly good enough - you won't notice differences in performance, because other things in your code will have bigger effects.

However, multiple selectors give you the following advantages:

 - fine-grained control of server resources: they allow you to favour incoming connections, existing connections, or completed connections (ACCEPT, READ, WRITE). If you're doing serious server development, you will almost certainly have situations where you know exactly what balance you want between those three, and multiple selectors allow you to optimize for it.

 - cleaner, more modular code: your connection-building code is completely independent of your response-sending code, save that one has - ultimately - to call a Selector register method to transfer the work to the other (see the sketch below). You have limited the dependency to a single simple method call, making debugging, optimization, and ongoing development much easier.
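
A hedged sketch of that hand-off (class and method names here are invented for illustration): an accept thread owns one selector, a read thread owns a second one, and accepted channels are passed over via a queue plus wakeup(), since calling register() from a foreign thread can block while the owning thread is inside select().

Code:
import java.io.IOException;
import java.nio.channels.*;
import java.util.Iterator;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

class TwoSelectorServer {
    private final Selector acceptSelector;
    private final Selector readSelector;
    // Accepted channels waiting to be registered with the read selector.
    private final Queue<SocketChannel> pending = new ConcurrentLinkedQueue<SocketChannel>();

    TwoSelectorServer(Selector acceptSelector, Selector readSelector) {
        this.acceptSelector = acceptSelector;
        this.readSelector = readSelector;
    }

    // Run by the accept thread.
    void acceptLoop() throws IOException {
        while (acceptSelector.isOpen()) {
            acceptSelector.select();
            for (Iterator<SelectionKey> it = acceptSelector.selectedKeys().iterator(); it.hasNext();) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel c = ((ServerSocketChannel) key.channel()).accept();
                    c.configureBlocking(false);
                    pending.add(c);            // transfer the work to the read side...
                    readSelector.wakeup();     // ...and wake its thread so it can register the channel
                }
            }
        }
    }

    // Run by the read thread.
    void readLoop() throws IOException {
        while (readSelector.isOpen()) {
            readSelector.select();
            SocketChannel c;
            while ((c = pending.poll()) != null) {
                c.register(readSelector, SelectionKey.OP_READ);
            }
            for (Iterator<SelectionKey> it = readSelector.selectedKeys().iterator(); it.hasNext();) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isReadable()) {
                    // read from (SocketChannel) key.channel() here
                }
            }
        }
    }
}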

malloc will be first against the wall when the revolution comes...
Offline Mr_Light

Senior Member




shiny.


« Reply #6 - Posted 2006-02-02 21:23:19 »

Quote
context switching costs
What creates those costs? Is it the actual waking up of the other thread, or is it caused by other factors, like when reading resources you get a chance of more random writes versus the (usually faster) sequential writes?

(chance of) more random writes <-> (chance of) the (usually faster) sequential writes
more threads <-> fewer threads

Enlighten me please  Cool

It's harder to read code than to write it. - it's even harder to write readable code.

The gospel of brother Riven: "The guarantee that all bugs are in *your* code is worth gold." Amen brother a-m-e-n.
Offline whome

Junior Member




Carte Noir Java


« Reply #7 - Posted 2006-02-04 10:04:33 »

Thread 1: handle the NIO selector and OP_XXX events, add messages to the pending msg queue, take pending outgoing msgs and write them back to the socket.
Thread 2: take pending messages, call the appropriate handler class and write the response to the pending outgoing msg queue.

I have always used one selector running in a dedicated thread for ACCEPT, READ and WRITE operations. The reader method appends bytes to a ByteArrayOutputStream until the message terminator is found (or the full length of the msg has been read). Each SocketChannel has an attachment object where this buffer is stored.

The completed message is then wrapped in an InMessage object and added to a message queue. Another thread handles all pending InMessages from the queue. The response is wrapped as an OutMessage and put on another queue. The selector thread takes OutMessages and writes the responses to the socket channels.
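
A rough sketch of that two-queue hand-off (InMessage, OutMessage and MessageHandler are placeholder types, not the poster's actual classes; a BlockingQueue is used here just to express the synchronized put/remove idea compactly):

Code:
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class MessagePipeline {
    private final BlockingQueue<InMessage> inQueue = new LinkedBlockingQueue<InMessage>();
    private final BlockingQueue<OutMessage> outQueue = new LinkedBlockingQueue<OutMessage>();

    // Called by thread 1 (the selector thread) whenever a complete message
    // has been assembled from a channel's attachment buffer.
    void messageCompleted(InMessage msg) {
        inQueue.add(msg);
    }

    // Thread 2: take pending messages, invoke the handler, queue the response.
    void handlerLoop(MessageHandler handler) throws InterruptedException {
        while (true) {
            InMessage in = inQueue.take();
            OutMessage out = handler.process(in);
            outQueue.add(out);                  // picked up and written by thread 1
        }
    }

    // Called by thread 1 on each pass around its select loop.
    OutMessage nextPendingResponse() {
        return outQueue.poll();                 // null if nothing to write
    }
}

// Placeholder types, sketched only to make the example self-contained.
class InMessage  { final byte[] bytes; InMessage(byte[] b)  { bytes = b; } }
class OutMessage { final byte[] bytes; OutMessage(byte[] b) { bytes = b; } }
interface MessageHandler { OutMessage process(InMessage msg); }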
Offline Herkules

Senior Member




Friendly fire isn't friendly!


« Reply #8 - Posted 2006-02-04 11:07:39 »

Quote
Thread 1: handle the NIO selector and OP_XXX events, add messages to the pending msg queue, take pending outgoing msgs and write them back to the socket.
Thread 2: take pending messages, call the appropriate handler class and write the response to the pending outgoing msg queue.

This sounds pretty reasonable. HeadQuarter is currently even single-threaded, on the assumption that the server doesn't do heavy computing -- just passing messages along with minor computation. I feel many game servers are of this kind, and the CPU is still some orders of magnitude faster than the network. Once computing time grows into 'some milliseconds', the system might become less responsive under heavy load.

'Thread 2' could then be a small thread pool.

HARDCODE    --     DRTS/FlyingGuns/JPilot/JXInput  --    skype me: joerg.plewe
Offline Jeff

JGO Coder




Got any cats?


« Reply #9 - Posted 2006-02-05 00:07:05 »

Quote
context switching costs
What creates those costs? Is it the actual waking up of the other thread,

A task switch on a single processor (or core, to be more precise and satisfy BB's pedanticism) means all the state of the current task must be saved off, and the saved state of the other task loaded back into the processor.

All this takes cycles.  On some CPUs (cores) it takes more than others, but on any CPU the cost is >0.

Got a question about Java and game programming?  Just new to the Java Game Development Community?  Try my FAQ.  It's likely you'll learn something!

http://wiki.java.net/bin/view/Games/JeffFAQ
Offline Mr_Light

Senior Member




shiny.


« Reply #10 - Posted 2006-02-05 00:48:42 »

I see, it would be interesting to try and pin that down in some funky benchmark.

But aren't we heading for 128 cores on the desktop within 10 years?
I remember some blog or article by a Google employee saying that we were being too narrow-minded about the number of cores.

If you take that and combine it with the time certain projects take...

But that's all a bit too theoretical.


It's harder to read code than to write it. - it's even harder to write readable code.

The gospel of brother Riven: "The guarantee that all bugs are in *your* code is worth gold." Amen brother a-m-e-n.
Offline Jeff

JGO Coder




Got any cats?


« Reply #11 - Posted 2006-02-05 03:06:43 »

Quote
I see, it would be interesting to try and pin that down in some funky benchmark.

But aren't we heading for 128 cores on the desktop within 10 years?

Dunno, my time machine broke down last week.

The vast majority of today's desktop computers, though, are still single-core.


Got a question about Java and game programming?  Just new to the Java Game Development Community?  Try my FAQ.  It's likely you'll learn something!

http://wiki.java.net/bin/view/Games/JeffFAQ
Offline Herkules

Senior Member




Friendly fire isn't friendly!


« Reply #12 - Posted 2006-02-05 10:42:42 »

Quote
A task switch on a single processor (or core, to be more precise and satisfy BB's pedanticism) means all the state of the current task must be saved off, and the saved state of the other task loaded back into the processor.

Yes, but the scheduler itself runs regardless of whether one or more threads are active. At the CPU level, the state of the current thread has to be saved and the scheduler's context has to be restored. The scheduler determines which thread to proceed with and does so. So there actually IS a context change regardless of whether the thread changes or not? I just assume staying in the same thread is cheaper because the context of a thread is richer than the CPU context (registers)?

Side note: does anybody remember the RTX2000 CPU? The RealTime eXpress? Four cycles to save state when an interrupt comes in! Smiley

HARDCODE    --     DRTS/FlyingGuns/JPilot/JXInput  --    skype me: joerg.plewe
Offline whome

Junior Member




Carte Noir Java


« Reply #13 - Posted 2006-02-06 10:02:27 »

Quote
Thread 1: handle the NIO selector and OP_XXX events, add messages to the pending msg queue, take pending outgoing msgs and write them back to the socket.
Thread 2: take pending messages, call the appropriate handler class and write the response to the pending outgoing msg queue.

This sounds pretty reasonable. HeadQuarter is currently even single-threaded, on the assumption that the server doesn't do heavy computing -- just passing messages along with minor computation. I feel many game servers are of this kind, and the CPU is still some orders of magnitude faster than the network. Once computing time grows into 'some milliseconds', the system might become less responsive under heavy load.

'Thread 2' could then be a small thread pool.

The thread2 request controller could use a thread pool to process several InMessages in parallel, but I haven't needed it yet; two threads within the server application have done the job. After looking at my code again, I actually use the following context switch optimization:
* thread2 handles the InMessage by calling the appropriate handler class
* an OutMessage is returned from the handler class
* _thread2_ writes the OutMessage to the client socketchannel. This might write 0, 1 ... n bytes due to the nature of NIO sockets.
* If the OutMessage was completely written to the client, it is not added to the pending out messages. Smaller responses never go to the pending queue because the socket write buffer is never overrun.
* If one or more bytes were left for a later write, it is added to the pending out messages queue and the socketchannel is registered for OP_WRITE. The remaining bytes are written to the client by thread1.
* _thread1_ reads all incoming messages (OP_READ) and writes pending out messages for all channels registered for OP_WRITE

Only when out messages are larger, or the client is too slow to drain its socket write buffer before another response arrives, does an OutMessage add a bit to thread1's workload. I find this a good combination of the single-threaded and the multi-threaded server scheme.
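
A small sketch of that write path, under invented names (PendingWrite is not from the poster's code): thread2 writes as much as fits, and only the remainder is queued for thread1, which then registers the channel for OP_WRITE.

Code:
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.util.Queue;

class DirectWriteFirst {
    // Carries leftover bytes from thread2 to thread1.
    static class PendingWrite {
        final SocketChannel channel;
        final ByteBuffer buffer;
        PendingWrite(SocketChannel c, ByteBuffer b) { channel = c; buffer = b; }
    }

    // Called by thread2 after the handler has produced a response.
    static void sendResponse(SocketChannel channel, ByteBuffer response,
                             Queue<PendingWrite> pendingOut, Selector selector) throws IOException {
        channel.write(response);                       // may write 0..n bytes
        if (response.hasRemaining()) {
            pendingOut.add(new PendingWrite(channel, response));
            selector.wakeup();                         // thread1 will register OP_WRITE and finish the job
        }
        // If everything fit in the socket's send buffer, nothing is queued.
    }
}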
Offline Jeff

JGO Coder




Got any cats?


« Reply #14 - Posted 2006-02-07 04:11:37 »

Herc,

Under Linux, the scheduler is actually run as part of your thread -- it's the last thing done on any OS call.
Linux handles programs that never go into the OS with a hack -- it has a timeout interrupt that it resets on every OS call. If that fires, it forces the program into the scheduler as an interrupt routine.

Other OSs often run the scheduler simply as a periodic interrupt.

In either case, an interrupt routine does not require a full context switch. It's just a forced jump. It may have to dump some registers to the stack and pop them on return, but that's it. It works on the current thread's stack and such.

Got a question about Java and game programming?  Just new to the Java Game Development Community?  Try my FAQ.  It's likely you'll learn something!

http://wiki.java.net/bin/view/Games/JeffFAQ
Offline karmaGfa

Junior Member




Miaow


« Reply #15 - Posted 2006-02-07 14:44:48 »

...
Only when out messages are larger, or the client is too slow to drain its socket write buffer before another response arrives, does an OutMessage add a bit to thread1's workload. I find this a good combination of the single-threaded and the multi-threaded server scheme.

Your description is really useful and makes a lot of sense.
I will follow those guidelines for JNAG.

Thank you a lot  Cheesy
Karma

Le Moulin Studio (http://www.le-moulin-studio.com) - MMO Technologies and Services.
Offline karmaGfa

Junior Member




Miaow


« Reply #16 - Posted 2006-02-07 18:19:41 »

Quote
The thread2 request controller could use a thread pool to process several InMessages in parallel, but I haven't needed it yet; two threads within the server application have done the job. After looking at my code again, I actually use the following context switch optimization:
* thread2 handles the InMessage by calling the appropriate handler class
* an OutMessage is returned from the handler class
* _thread2_ writes the OutMessage to the client socketchannel. This might write 0, 1 ... n bytes due to the nature of NIO sockets.
* If the OutMessage was completely written to the client, it is not added to the pending out messages. Smaller responses never go to the pending queue because the socket write buffer is never overrun.
* If one or more bytes were left for a later write, it is added to the pending out messages queue and the socketchannel is registered for OP_WRITE. The remaining bytes are written to the client by thread1.
* _thread1_ reads all incoming messages (OP_READ) and writes pending out messages for all channels registered for OP_WRITE

Only when out messages are larger, or the client is too slow to drain its socket write buffer before another response arrives, does an OutMessage add a bit to thread1's workload. I find this a good combination of the single-threaded and the multi-threaded server scheme.


I propose a further optimization of the context switch stuff; please tell me what you think of it:

Thread 0: accepts connections (OP_ACCEPT).

Thread pool 1: a pool of threads that handles OP_READ and OP_WRITE for all accepted connections.
  • When a thread is woken by an OP_READ event, it tries to fully read a message. If it succeeds, it processes the message directly instead of putting it into a buffer.
  • The outgoing message is written to the socket if possible, and any unwritten data is put into a buffer that is handled later by a pool thread woken by an OP_WRITE event.
  • When a thread is woken by an OP_WRITE event, it tries to write that buffer to the socket.

A possible problem with this method is that the receive buffer of the socket might fill up quickly if processing a message takes a long time. So whome's approach has the advantage of draining the socket buffer quickly enough... on the other hand, it can still fill buffers handled by Java (which is not so bad, since we can handle those more efficiently and avoid maintaining a lot of empty buffers per socket).
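
A hedged sketch of that pool-based dispatch (handleRead, handleWrite and closeQuietly are just stubs marking where the application logic would go): the selector thread clears a key's interest before handing it to a pool thread, so the same channel is not selected again while it is being processed, and the pool thread restores the interest and wakes the selector when it is done.

Code:
import java.io.IOException;
import java.nio.channels.*;
import java.util.Iterator;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PooledReadWriteLoop {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);

    // Run by the selector thread that owns the read/write selector.
    void pump(final Selector selector) throws IOException {
        while (selector.isOpen()) {
            selector.select();
            for (Iterator<SelectionKey> it = selector.selectedKeys().iterator(); it.hasNext();) {
                final SelectionKey key = it.next();
                it.remove();
                if (!key.isValid()) continue;
                final int readyOps = key.readyOps();
                key.interestOps(key.interestOps() & ~readyOps);   // stop watching while a worker owns it
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            if ((readyOps & SelectionKey.OP_READ) != 0)  handleRead(key);
                            if ((readyOps & SelectionKey.OP_WRITE) != 0) handleWrite(key);
                            key.interestOps(key.interestOps() | readyOps);  // watch again
                            key.selector().wakeup();                        // make the change visible
                        } catch (IOException e) {
                            closeQuietly(key);
                        } catch (CancelledKeyException e) {
                            // channel was closed elsewhere; nothing to restore
                        }
                    }
                });
            }
        }
    }

    void handleRead(SelectionKey key) throws IOException  { /* stub: read and process a message */ }
    void handleWrite(SelectionKey key) throws IOException { /* stub: flush the pending buffer */ }
    void closeQuietly(SelectionKey key) { try { key.channel().close(); } catch (IOException ignored) {} }
}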

Please tell me if I have understood the problem correctly. I will start to implement this part in JNAG very soon, and I want to be sure that I am making the best choices.

Karma

Le Moulin Studio (http://www.le-moulin-studio.com) - MMO Technologies and Services.
Offline whome

Junior Member




Carte Noir Java


« Reply #17 - Posted 2006-02-10 09:47:36 »

You have the big picture. I still suggest _not creating_ too many concurrent threads if you use NIO sockets. The Java API does say NIO can be used in a multithreaded environment, but at least the JDK 1.4 implementation had enough bugs to make it unstable. I wanted to avoid all those problems and handle all selector operations within a single thread.

The only concurrent NIO operation I have is thread2 using the SocketChannel.write method. It tries to write the response to the client, but if one or more bytes are still pending, the message is put on the outgoing msg queue. Anything else related to NIO methods runs inside thread1, to avoid possible concurrency bugs.

The low-level NIO socket read and write methods do not take much time: due to their non-blocking nature they return immediately if nothing can be read or written. Reading bytes from the socket and appending them to the byte buffer is an almost instant call.

The only thing that might take some time is the program code that recognizes the end-of-message terminator. I use one of two usual methods as a terminator:
* NULL as the terminator byte: this is fine for string-based messages, and Flash clients always add NULL as a msg terminator. It is very easy to test: I read the socket and write bytes to the byte buffer until NULL is found. Then we know we have a complete message in the byte buffer and I create a "new InMessage(bytes)" instance.
* read a predefined number of bytes: each message writes one or two leading bytes to indicate its length. Then we read until all bytes of this message have been read.

I prefer NULL as a terminator byte because clients do not need to know the length of the message in advance; they can start streaming data to the server and write NULL as the last byte. A predefined length cannot be determined until the client has built the whole outgoing message; only then can it take the byte length and write the leading length bytes followed by the data bytes.
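
A sketch of the NULL-terminator framing (it assumes, as in the earlier post, that each channel's key attachment is a ByteArrayOutputStream holding the partial message; InMessage is the same placeholder type used in the earlier sketch):

Code:
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.SocketChannel;
import java.util.ArrayList;
import java.util.List;

class NullTerminatedReader {
    private final ByteBuffer readBuffer = ByteBuffer.allocate(4096);

    // Called by the selector thread for a readable key; returns every message
    // completed by this read (possibly none).
    List<InMessage> readFrom(SelectionKey key) throws IOException {
        SocketChannel channel = (SocketChannel) key.channel();
        ByteArrayOutputStream partial = (ByteArrayOutputStream) key.attachment();
        List<InMessage> complete = new ArrayList<InMessage>();

        readBuffer.clear();
        if (channel.read(readBuffer) == -1) {
            throw new IOException("connection closed");
        }
        readBuffer.flip();
        while (readBuffer.hasRemaining()) {
            byte b = readBuffer.get();
            if (b == 0) {                                  // NULL terminator: message complete
                complete.add(new InMessage(partial.toByteArray()));
                partial.reset();                           // reuse the buffer for the next message
            } else {
                partial.write(b);
            }
        }
        return complete;
    }
}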

I have only two synchronization blocks within the code:
* Incoming message queue: synchronized put, remove methods
* Outgoing message queue: synchronized put, remove methods

If I ever needed a thread pool, I would create it for calling the "actionHandler.process(InMessage msg)" methods. The message has a "session = msg.getClientSession()" method that gives me an internal container class. That is where I get a reference to the underlying socketchannel, if I need it, and client-scoped hashmap storage.
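
A guess at what such a container might look like (the field names here are invented; only the socketchannel reference and the client-scoped map storage are taken from the description above):

Code:
import java.io.ByteArrayOutputStream;
import java.nio.channels.SocketChannel;
import java.util.HashMap;
import java.util.Map;

class ClientSession {
    private final SocketChannel channel;                       // underlying socket channel
    private final ByteArrayOutputStream partialMessage = new ByteArrayOutputStream();
    private final Map<String, Object> attributes = new HashMap<String, Object>(); // client-scoped storage

    ClientSession(SocketChannel channel) { this.channel = channel; }

    SocketChannel getChannel()          { return channel; }
    ByteArrayOutputStream getBuffer()   { return partialMessage; }
    Object get(String key)              { return attributes.get(key); }
    void put(String key, Object value)  { attributes.put(key, value); }
}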