Java-Gaming.org
[synchronized(this) { } ?]
Offline Riven « League of Dukes »
« Reply #30 - Posted 2012-11-06 01:06:57 »

In a typical producer/consumer relationship between threads, you don't want to produce at a much higher rate than you consume. A bounded queue (one with a maximum capacity) will cause the producer to block on insert when the queue is full, which is what Spasi described as back-pressure.
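For illustration, a minimal sketch of that blocking behaviour using java.util.concurrent's ArrayBlockingQueue (the capacity, item type and thread bodies are arbitrary choices, not from this thread):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BackPressureDemo {
    public static void main(String[] args) {
        // Bounded queue: at most 64 items can be waiting at once.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(64);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 1000; i++) {
                    queue.put("work-" + i); // blocks while the queue is full -> back-pressure
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 1000; i++) {
                    String item = queue.take(); // blocks while the queue is empty
                    Thread.sleep(1);            // simulate a slow consumer
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}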

Offline ra4king
« Reply #31 - Posted 2012-11-06 01:22:12 »

Ah, OK. I don't see that as a problem for the few times I've used it, but I'll keep that in mind. :)

Offline Spasi
« Reply #32 - Posted 2012-11-06 01:27:19 »

Having an unbounded queue means that if the consumer thread(s) are slower than the producer, the whole thing might go out of control and eventually you'll run out of memory (or suffer horrible performance).

In practice, queues are either mostly empty or mostly full. For mostly full queues, you don't want an unbounded implementation, for the reason above. With a mostly empty queue and a linked-list implementation, you have this weird situation where both the head and the tail point to the same object, the same memory. When two separate threads try to concurrently update that same memory (the same cache line, to be precise), you effectively get two serialized updates; the head and tail can't be updated simultaneously.

With the ConcurrentLinkedQueue implementation in particular you have another problem: the head and tail references in the CLQ object itself sit next to each other, which means that in practice they'll both be part of the same cache line in the CLQ instance. Updating the head invalidates the tail and vice versa, every time and by any thread. This doesn't mean that every access has to go to main memory (modern CPUs handle it in the cache), but it still causes unnecessary communication across CPU cores. The Disruptor library handles this issue with dummy fields that create enough padding between fields that may be contended but would normally sit too close to each other. The JVM, though, is pretty aggressive about laying out object fields and will often remove unused fields entirely, so they had to come up with a few tricks to avoid that.
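A minimal sketch of the padding idea described above (field names and layout are illustrative, not the actual Disruptor source; a 64-byte cache line is assumed):

class PaddedHeadTail {
    volatile long head;
    // Dummy fields: ~56 bytes of padding so that 'tail' lands on a different
    // cache line than 'head' (assuming 64-byte lines), avoiding false sharing.
    long p1, p2, p3, p4, p5, p6, p7;
    volatile long tail;
    // Caveat from the post above: the JVM may reorder fields or strip unused ones,
    // which is why the Disruptor authors needed extra tricks to keep the padding alive.
}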
Offline Roquen
« Reply #33 - Posted 2012-11-06 15:59:10 »

The most commonly used concurrent data structure is a fixed-length single-producer/single-consumer circular list (wait-free). Toss in some atomic types and that covers the majority of concurrent communication.

Disruptor or something custom?
I've been aware of the Disruptor for a while and have been meaning to look at it more closely, but no... I'm talking about the trivial read counter, write counter and fixed array, which is simple and perfectly fine (assuming sequential consistency of writes; a modification can address that issue) when "a fixed size with a low probability of concurrent reads & writes" describes the problem. Not that I would suggest anyone run out and write this. DON'T WRITE CONCURRENT DATA STRUCTURES... is my main advice. Along with keeping things as simple as possible, unless you really hate yourself. This ties back in to lock-free: not only are lock-free and wait-free more efficient (in most sane real-world cases), I'm of the very strong opinion that they are simpler to use.

When two separate threads try to concurrently update that same memory (the same cache line, to be precise)...
This can't be emphasized enough. Cache thrashing murders performance, and it is something most people seem to completely ignore; you won't know it's occurring unless you're explicitly looking for it. Like I said above, I mostly use volatile for communication... I'd advise against following my lead; use the atomic wrappers instead. Remember that we're (generally) talking about multiple caches, and if any memory within a line changes, the entire affected hierarchy has to be reloaded to ensure memory consistency (the specifics are architecture dependent).
Offline delt0r
« Reply #34 - Posted 2012-11-14 14:19:36 »

Getting too paranoid about cache performance before it matters, aren't we?

Offline princec
« Reply #35 - Posted 2012-11-14 14:28:53 »

Well, there is a school of thought that says that if you've resorted to multithreading at all in the first place, you are already being extremely mindful of performance...

Cas :)

Offline Roquen
« Reply #36 - Posted 2012-11-14 14:57:49 »

Getting too paranoid about cache performance before it matters, aren't we?
Not really. I think that, in Java, using the atomic wrappers instead of volatile is sound advice if you don't have a solid understanding of caching and/or concurrency: a minimal cost to not have to worry about the issue. Or are you referring to something else?
Offline delt0r
« Reply #37 - Posted 2012-11-15 09:55:20 »

I am referring to the fact that if cache coherency is the issue with your multithreaded work, then you have much bigger problems. Multithreaded stuff like this only works where there is very little real contention, i.e. when missing a cache line every few tens of thousands of clock cycles or more is not going to matter.

Offline Varkas
« Reply #38 - Posted 2012-11-15 10:04:59 »

I was wondering if anyone could give me some advice on this, without directing me to an API, rather tips on when to use it, and when not to use it?

synchronized(this) {
}

Hope someone can help, thanks.

You need this if (and only if) you have a section of code that will be accessed by two or more threads at the same time, and which does calculations that need a strict order, so that the threads can't access it in parallel.

Try to keep such sections small.
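As a minimal sketch of the kind of small critical section meant here (class and field names invented for illustration):

class Counter {
    private int count;

    void increment() {
        // Without the lock, two threads could interleave the read-modify-write
        // of 'count++' and lose updates.
        synchronized (this) {
            count++;
        }
    }

    int current() {
        synchronized (this) {
            return count; // also guarantees we see the latest value
        }
    }
}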

Offline Riven « League of Dukes »
« Reply #39 - Posted 2012-11-15 10:06:44 »

You need this if (and only if) you have a section of code that will be accessed by two or more threads at the same time, and which does calculations that need a strict order, so that the threads can't access it in parallel.

Try to keep such sections small.
FYI: you do not have control over the order of execution, just over exclusive access to a code block by one thread at a time.

Offline Varkas
« Reply #40 - Posted 2012-11-15 10:08:32 »

This makes sure that the instructions in the section are always executed sequentially. (This is what I wanted to say, not the order in which threads access it.)

Offline Riven « League of Dukes »
« Reply #41 - Posted 2012-11-15 10:12:29 »

It's slightly more complex than that. A synchronized block enforces 'happens-before' and 'happens-after' semantics, but within the synchronized block you'll have the usual out-of-order execution of instructions.

But maybe that's what you meant.
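A minimal sketch of those happens-before semantics (hypothetical class, not from this thread): the writes inside the writer's synchronized block are guaranteed to be visible to a reader that later synchronizes on the same object, even though the writes inside the block may themselves execute out of order.

class Published {
    private int payload;
    private boolean ready;

    // Writer thread
    void publish(int value) {
        synchronized (this) {
            payload = value;   // these two writes may be reordered within the block...
            ready = true;
        }                      // ...but releasing the lock happens-before a later acquire
    }

    // Reader thread
    int read() {
        synchronized (this) {
            return ready ? payload : -1; // sees both writes if 'ready' is true
        }
    }
}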

Offline Roquen
« Reply #42 - Posted 2012-11-15 10:13:47 »

@delt0r: Then we're 100% in agreement. Isolation of tasks and minimization of communication and shared data structures is priority one (and it makes your life easier). I bring up the volatile vs. atomic point because people insist on reinventing the wheel. Take the trivial SP/SC fixed-length circular list I mentioned above and consider the two data layouts:

private volatile int rPos;
private volatile int wPos;
private final T[] data;


private final AtomicInteger rPos;
private final AtomicInteger wPos;
private final T[] data;


The volatile version is fine if concurrent reads & writes have a very low probability. The second burns some extra memory and slightly more cycles, but you don't have to worry (too much) about what that probability is. Likewise for any object instance that contains a volatile field (and again for static members).
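For context, a rough sketch of the kind of SP/SC circular list being discussed (purely illustrative, in line with the "don't write your own" advice above; class and method names are invented, and the capacity must be a power of two):

import java.util.concurrent.atomic.AtomicInteger;

// Single-producer/single-consumer bounded ring buffer (illustrative only).
final class SpscRingBuffer<T> {
    private final AtomicInteger rPos = new AtomicInteger(); // consumer position
    private final AtomicInteger wPos = new AtomicInteger(); // producer position
    private final Object[] data;
    private final int mask;

    SpscRingBuffer(int capacityPowerOfTwo) {
        data = new Object[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
        // Note: rPos and wPos sit next to each other here, i.e. the very
        // false-sharing problem discussed earlier; a serious implementation pads them.
    }

    // Called only by the single producer thread.
    boolean offer(T item) {
        int w = wPos.get();
        if (w - rPos.get() == data.length) return false; // full
        data[w & mask] = item;
        wPos.lazySet(w + 1); // release the write to the consumer
        return true;
    }

    // Called only by the single consumer thread.
    @SuppressWarnings("unchecked")
    T poll() {
        int r = rPos.get();
        if (r == wPos.get()) return null; // empty
        T item = (T) data[r & mask];
        data[r & mask] = null;            // let the slot be collected
        rPos.lazySet(r + 1);              // release the read to the producer
        return item;
    }
}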

WRT: synchronized.  Again my advice is to try to "Just say no."
Offline nsigma
« Reply #43 - Posted 2012-11-15 12:48:39 »

@Roquen - just wondering if your example actually has another issue. While the read and write positions have happens-before semantics in both cases, what about the data array itself? Does data need to be an AtomicReferenceArray?

Offline Roquen
« Reply #44 - Posted 2012-11-15 13:27:42 »

No, it doesn't. I should have mentioned that this is a purposely bad example. The only role of storing the read & write positions in atomic wrappers is to (semi-)ensure that the memory chunk shared by the two threads is read-only. The atomic operations themselves really serve no purpose at all. The positions could be stored in any manner that ensures this is true (and would thus be superior)... via a thread-local or a common worker-thread data chunk, for instance. The point being that, in general, using the atomic wrappers will tend to ensure better performance than volatiles while one is working up the learning curve of concurrency.
Offline sproingie
« Reply #45 - Posted 2012-11-15 16:08:23 »

The other upside of the atomic types is that since you don't have to make them volatile, you can't forget to do so either, and you don't have to worry about synchronizing the accessors (which would be an issue if you used long). Their type encapsulates all the responsibility for thread-safety, so less of the burden of using them right falls on you. This is the best reason for using the classes in java.util.concurrent instead of rolling your own: someone else got the semantics right so you don't have to.
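For example (a minimal sketch; the class is invented): a bare long counter would need volatile for visibility and external synchronization for the increment, while AtomicLong bundles both.

import java.util.concurrent.atomic.AtomicLong;

class HitCounter {
    // A plain 'long' field would need 'volatile' (for visibility and atomic 64-bit access)
    // plus synchronization around the increment to avoid lost updates.
    private final AtomicLong hits = new AtomicLong();

    void hit() {
        hits.incrementAndGet(); // atomic read-modify-write
    }

    long total() {
        return hits.get();      // always a fully written, up-to-date value
    }
}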