Java-Gaming.org    
  Basic Java code optimisation  (Read 8246 times)
Offline oysterman
Senior Newbie
« Posted 2013-08-20 16:01:25 »

Title says it all.
In essence, good programming habits one could (and should) adopt to make one's program faster and more stable. Or possibly, what you think are good programming habits in general.

I know there are resources for this kind of thing, but perhaps hearing it from the members of the forum might give a different light on things.

I barely know about this kind of thing, so the only two things that pop to mind immediately are bit-shifting and taking conditions out of loops whenever possible. I guess reusing the same constant values (100, 100 instead of 101, 102; that kind of thing) might help as well.

Any suggestions ?
Offline Jeremy
« Reply #1 - Posted 2013-08-20 16:06:13 »

Quote
Title says it all.
In essence, good programming habits one could (and should) adopt to make one's program faster and more stable. Or possibly, what you think are good programming habits in general.

I know there are resources for this kind of thing, but perhaps hearing it from the members of the forum might give a different light on things.

I barely know about this kind of thing, so the only two things that pop to mind immediately are bit-shifting and taking conditions out of loops whenever possible. I guess reusing the same constant values (100, 100 instead of 101, 102; that kind of thing) might help as well.

Any suggestions?

Optimization from my perspective:
1. Make a well-educated guess as to whether the module you are developing is going to be performance-intensive. If you're building a collision detection algorithm, a particle engine, or anything with many tightly packed iterations, you might need to design with optimization in mind. Otherwise, don't; just focus on proper code design.

2. If you didn't do (1) [and programmers are generally horrible at doing (1) properly] and you begin experiencing performance issues, use a profiler to determine where your bottlenecks are, and optimize those bottlenecks specifically. What you do to optimize depends on what the code is doing.
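A quick way to sanity-check a suspected hotspot before reaching for a full profiler is a crude nanoTime measurement. This is a sketch only; simulateParticles is a made-up stand-in for whatever module is under suspicion:

```java
// Sketch only: crude timing of a suspected hotspot. simulateParticles
// is a placeholder for whatever module you suspect is slow.
class HotspotTimer {
    static double simulateParticles(int n) {
        double acc = 0;
        for (int i = 0; i < n; i++) acc += Math.sqrt(i); // stand-in work
        return acc;
    }
    public static void main(String[] args) {
        // Warm up so the JIT has compiled the method before we measure.
        for (int i = 0; i < 50; i++) simulateParticles(100_000);
        long start = System.nanoTime();
        simulateParticles(100_000);
        System.out.println("took " + (System.nanoTime() - start) + " ns");
    }
}
```

For anything serious, a real profiler or a benchmark harness is more trustworthy, since the JIT can distort naive timings like this one.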

JevaEngine, Latest Playthrough (This demo is networked with a centralized server model)

http://www.youtube.com/watch?v=rWA8bajpVXg
Offline oysterman
Senior Newbie
« Reply #2 - Posted 2013-08-20 16:10:09 »

Thanks for the reply, but that is pretty general; my question is about programming habits, so I guess that would include the approach to code design too. I was thinking of performance improvements more along the lines of the basic stuff I mentioned.
Offline StrideColossus
« Reply #3 - Posted 2013-08-20 16:10:30 »

There are probably two basic principles I follow when developing:

1. KISS (Keep It Simple Stupid), i.e. avoid 'clever' or overly complex code, keep it nice and maintainable.

2. http://www.c2.com/cgi/wiki?RulesOfOptimization

Offline Jeremy
« Reply #4 - Posted 2013-08-20 16:53:38 »

Quote
Thanks for the reply, but that is pretty general; my question is about programming habits, so I guess that would include the approach to code design too. I was thinking of performance improvements more along the lines of the basic stuff I mentioned.

Well the two don't always travel in the same direction. Usually code optimizations steer away from code design to reduce indirection, simplify the execution path etc.

That said, if you're looking for common code design patterns, here is a good list of some of the more common ones. The only note I would make is that in that list you'll see singletons and service locators. Singletons are incredibly controversial (as are service locators), but the service locator is considered the lesser of the two evils; avoid both of them if you can:
http://www.javacamp.org/designPattern/

Offline Troncoso
JGO Coder
Medals: 20
« Reply #5 - Posted 2013-08-20 17:44:34 »

I hate when people say to avoid Singletons. You know what makes them "bad"? The fact that people use them when they shouldn't.

In my opinion, you should learn every technique you can, not only so you know what's available to you, but so you also know when and how to use it. Saying "avoid Singletons" is basically saying that they are obsolete or useless, which isn't true at all.

In my experience, I find that the people that suggest this the most are the people who just heard it elsewhere and are blindly passing it forward.

Anyway, there are several techniques that can be used to optimize code. A few that I learned in my time with C and code optimization:

instruction scheduling
loop unrolling
code motion
reduction in strength
reduction in code duplication
inlining

I'll leave it to you to learn about them.
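For illustration, here is a sketch of two of the techniques named above, code motion and strength reduction, on a made-up grid-summing loop. The compiler can often do this itself, so treat it as a teaching example rather than a recommended rewrite:

```java
// Sketch: code motion hoists a loop-invariant computation out of the
// inner loop; strength reduction replaces a multiplication per element
// with an addition carried across iterations. Names are illustrative.
class StrengthReduction {
    // Before: multiplies y * width on every element access.
    static int sumBefore(int[] grid, int width, int height) {
        int sum = 0;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                sum += grid[y * width + x];   // multiply every iteration
        return sum;
    }
    // After: the row offset is carried forward by addition instead.
    static int sumAfter(int[] grid, int width, int height) {
        int sum = 0;
        int row = 0;                          // hoisted row offset
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++)
                sum += grid[row + x];
            row += width;                     // strength-reduced: add, not multiply
        }
        return sum;
    }
}
```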
Offline Several Kilo-Bytes
Senior Member
Medals: 11
« Reply #6 - Posted 2013-08-20 17:51:54 »

The extent to which optimization is helpful is proportional to the complexity of your algorithms. The complexity of a program's design is in turn proportional to the knowledge of the programmer. So if you do not know what qualifies as an optimization, you probably don't need to optimize.

Look to see if your game is slow due to graphics or due to engine code. If graphics are the bottleneck, then optimizing Java source won't help.

Learning bitwise operations and using powers of two for array indexing is a simple improvement.
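As a sketch of that idea: if a table length is a power of two, wrap-around indexing and row math can use a bitwise AND and a shift instead of % and /. Identifier names here are illustrative:

```java
// Sketch: with a power-of-two table size, modulo and division collapse
// to a mask and a shift (valid for non-negative indices).
class PowerOfTwoIndex {
    static final int SIZE = 1024;                                  // power of two
    static final int MASK = SIZE - 1;                              // 0x3FF
    static final int SHIFT = Integer.numberOfTrailingZeros(SIZE);  // 10

    static int wrap(int i) { return i & MASK; }   // i % SIZE for i >= 0
    static int row(int i)  { return i >> SHIFT; } // i / SIZE for i >= 0
}
```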

If physics is a bottleneck, try replacing arrays of Rectangles with primitive arrays holding the data interleaved: [x0, y0, w0, h0, x1, y1, w1, h1, x2, y2, w2, h2, ...]. Such changes may help because sequential memory access is faster than random memory access, and objects may add an extra level of indirection. If hundreds of objects get accessed tens of thousands of times, you will be accessing memory in an irregular pattern and get poor cache performance. Making such a change improved my brute force algorithm's speed by a factor of 16. (Sometimes brute force is faster if you know what you are doing. This code took a third of my loop time, so it helped in my case.)
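A minimal sketch of that interleaved layout (the class and method names are made up, not from the poster's code):

```java
// Sketch: one packed float array [x0, y0, w0, h0, x1, y1, w1, h1, ...]
// instead of an array of Rectangle objects, so a sweep over all
// rectangles touches memory sequentially.
class InterleavedRects {
    static final int STRIDE = 4; // x, y, w, h per rectangle
    final float[] data;

    InterleavedRects(int count) { data = new float[count * STRIDE]; }

    void set(int i, float x, float y, float w, float h) {
        int base = i * STRIDE;
        data[base] = x; data[base + 1] = y;
        data[base + 2] = w; data[base + 3] = h;
    }

    // Point-in-rect test reading straight through the packed array.
    boolean contains(int i, float px, float py) {
        int base = i * STRIDE;
        return px >= data[base] && px < data[base] + data[base + 2]
            && py >= data[base + 1] && py < data[base + 1] + data[base + 3];
    }
}
```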

Do not attempt optimization beyond the scope to which you understand how a computer works at a low level. Remember that Java compilers (at least HotSpot) are very good. What C users might consider a necessary optimization might be unnecessary or detrimental for Java performance.

Edit for Troncoso posting:

instruction scheduling - Possibly helpful; the compiler can usually do this itself.
loop unrolling - Not generally useful in Java or C because it makes instructions take more RAM. The compiler is also capable of doing so automatically if it would help.
inlining and reduction in code duplication - These are opposite goals. Java can inline most methods including non-final ones. Overzealous inlining is harmful for the same reason as loop unrolling.
Offline Jeremy
« Reply #7 - Posted 2013-08-20 18:05:16 »

Quote
In my opinion, you should learn every technique you can, not only so you know what's available to you, but so you also know when and how to use it. Saying "avoid Singletons" is basically saying that they are obsolete or useless, which isn't true at all.

By saying "use singletons" you're saying you know for a fact that no one will ever need a second instance of your class, which is even less true.

Singletons aren't obsolete and they aren't useless, but they make refactoring code incredibly difficult, they make isolating your code difficult, and they hide their dependencies.

Finally, I said avoid using singletons/service locators; I didn't say never use them. The chances of a newbie using a singleton properly (and I am sure people could argue all day about whether that is actually possible, an argument I won't involve myself in) are very low.

Never say never -- wait, singletons say never.

Offline Abuse
JGO Knight
Medals: 12
falling into the abyss of reality
« Reply #8 - Posted 2013-08-20 18:21:49 »

When you do optimize, first focus on optimising the algorithm rather than the code.

Offline Troncoso
JGO Coder
Medals: 20
« Reply #9 - Posted 2013-08-20 18:28:37 »

Quote
Edit for Troncoso posting:

instruction scheduling - Possibly helpful; the compiler can usually do this itself.
loop unrolling - Not generally useful in Java or C because it makes instructions take more RAM. The compiler is also capable of doing so automatically if it would help.
inlining and reduction in code duplication - These are opposite goals. Java can inline most methods, including non-final ones. Overzealous inlining is harmful for the same reason as loop unrolling.

There is a time and a place for all of these techniques. I never said use them all at once. He wanted some actual optimization techniques, so I threw some out there. Honestly, I don't really think optimization is a big deal in Java if you understand the language and understand programming logic.

Oh. Loop tiling is another one. I wasn't a fan of it, but it's something. Can reduce cache misses. Though, you'd have to understand how cache works to implement it properly.
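For reference, a small sketch of loop tiling: a matrix transpose walked in fixed-size blocks, so the source and destination regions stay cache-resident within each tile. The block size and names are illustrative:

```java
// Sketch of loop tiling: process an n x n transpose in BLOCK x BLOCK
// tiles so both arrays are read/written in cache-friendly chunks.
class TiledTranspose {
    static final int BLOCK = 32; // tile edge, tuned per cache in practice

    static void transpose(double[] src, double[] dst, int n) {
        for (int ii = 0; ii < n; ii += BLOCK)
            for (int jj = 0; jj < n; jj += BLOCK)
                // Inner loops stay inside one tile before moving on.
                for (int i = ii; i < Math.min(ii + BLOCK, n); i++)
                    for (int j = jj; j < Math.min(jj + BLOCK, n); j++)
                        dst[j * n + i] = src[i * n + j];
    }
}
```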
Offline Several Kilo-Bytes
Senior Member
Medals: 11
« Reply #10 - Posted 2013-08-20 19:08:31 »

Quote
There is a time and a place for all of these techniques. I never said use them all at once. He wanted some actual optimization techniques, so I threw some out there. Honestly, I don't really think optimization is a big deal in Java if you understand the language and understand programming logic.

Oh. Loop tiling is another one. I wasn't a fan of it, but it's something. Can reduce cache misses. Though, you'd have to understand how cache works to implement it properly.

Not really. In 2013, hand-applying an optimization whose Wikipedia page uses the phrase "is an optimization performed by compilers" is redundant at best or counterproductive at worst.

There is no place for inlining at the source code level when it would be equally convenient to put the code in a method. A manually inlined C function is a bad code smell. Some people still put an inline keyword in a function header, knowing the compiler ignores it, to satisfy bosses; but then there are also people who know better and still try to work around the compiler to force inlining.

Loop unrolling (but not loop tiling) is also something that should be avoided. A compiler might benefit from a re-rolling optimization if you had legacy code featuring loop unrolling.

For Java at least, if your compiler does not optimize well, you can use another program as part of your build process to perform that optimization on source or byte code (like ProGuard). Un-"optimized" code is more cross-platform, more future-proof, and more optimizable.
Offline philfrei
« Reply #11 - Posted 2013-08-20 19:12:52 »

For many, the optimal code is that which is easiest to read and modify or debug as needed. Are you wanting to optimize the hours you have to spend wrangling with a piece of code?

I've been bitten by the functional programming bug. I think it helps with keeping code clean and clear and can lend itself to parallel processing when done correctly, which is a plus as things go more and more multi-core.

You will be surprised by what is or isn't handled by today's compilers. There are a surprising number of common bad habits that are dealt with automatically, such as a simple calculation inside a loop condition test. That's not to say it's okay to do this, though!

So, if it really matters, profile/verify. "Agile" programming recommends, though, not to add features until they are needed, and I think this would include attempts at optimizing that go against basic good form and clarity or common sense.

Martin Fowler is a good author on the topic of optimizing for a readable style.

I just did a search on "functional programming java" and found the following article. Looks like it might be interesting.
http://www.ibm.com/developerworks/java/library/j-fp/index.html
There's some references at the end that might be more readable.

"Greetings my friends! We are all interested in the future, for that is where you and I are going to spend the rest of our lives!" -- The Amazing Criswell
Offline Troncoso
JGO Coder
Medals: 20
« Reply #12 - Posted 2013-08-20 19:59:17 »

Quote
Not really. In 2013, hand-applying an optimization whose Wikipedia page uses the phrase "is an optimization performed by compilers" is redundant at best or counterproductive at worst. ...

Not really what? I said there is a time and place for these. Maybe the time isn't 2013. Haha. Don't look so much into it. If nothing else, I think that kind of stuff is fun to learn about.
Offline HeroesGraveDev
JGO Kernel
Medals: 245
Projects: 11
Exp: 2 years
┬─┬ノ(ಠ_ಠノ)(╯°□°)╯︵ ┻━┻
« Reply #13 - Posted 2013-08-21 06:27:50 »

If it's not broken, don't fix it.

Offline concerto49
Junior Member
« Reply #14 - Posted 2013-08-21 08:40:21 »

Quote
If it's not broken, don't fix it.

Always have to create new versions to get sales :p

Offline gimbal
JGO Knight
Medals: 25
« Reply #15 - Posted 2013-09-04 11:53:01 »


Indeed. If it ain't broke, add more stuff to it and break it all over again.
Offline philfrei
« Reply #16 - Posted 2013-09-04 18:50:38 »

I recently spent some time profiling a procedural FM synth program I've been working on, trying to find areas of improvement. This is what I came up with:

Instead of "/ 2" (integer division by 2), ">> 1" really does perform better. However, "* 2" and "<< 1" gave me identical performance times. Is this compiler dependent?

Instead of creating a new double array with each iteration of a crucial loop, clearing an existing array and reusing it was significantly faster. Probably saves on garbage collection, too.
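That reuse pattern might look like this sketch (names and sizes are made up; the point is that Arrays.fill on an existing buffer creates no garbage):

```java
import java.util.Arrays;

// Sketch: clear and refill one scratch buffer per iteration instead of
// allocating a new array each time, avoiding garbage-collector pressure.
class BufferReuse {
    final double[] mix = new double[512]; // reused for every block

    double[] renderBlock() {
        Arrays.fill(mix, 0.0);        // cheap clear, no new allocation
        for (int i = 0; i < mix.length; i++)
            mix[i] += 0.25;           // placeholder for real synthesis work
        return mix;
    }
}
```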

Instead of using Math.sin(), a lookup table of 1024 samples of a single sine wave, combined with linear interpolation, performed significantly faster. I was a bit surprised, because with sin() one can just plug in a double, but with a lookup table, for decent accuracy, you have to do two lookups and compute the linear interpolation, which is a lot more fussing. In spite of this, the lookup method wins.
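A sketch of such a table-lookup oscillator, assuming a 1024-entry table and a phase expressed as a fraction of one cycle in [0, 1); this is an illustration, not philfrei's actual code:

```java
// Sketch: 1024 precomputed samples of one sine cycle, read back with
// linear interpolation between adjacent table entries.
class SineTable {
    static final int SIZE = 1024;
    static final double[] TABLE = new double[SIZE];
    static {
        for (int i = 0; i < SIZE; i++)
            TABLE[i] = Math.sin(2 * Math.PI * i / SIZE);
    }

    // phase in [0, 1): fraction of one cycle.
    static double sin(double phase) {
        double pos = phase * SIZE;
        int i = (int) pos;
        double frac = pos - i;                 // distance past entry i
        double a = TABLE[i & (SIZE - 1)];      // wrap with a mask
        double b = TABLE[(i + 1) & (SIZE - 1)];
        return a + frac * (b - a);             // linear interpolation
    }
}
```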

Really stupid (and what caused me to go looking for the source of the performance dropoff): I came up with a totally inefficient way to do error checking, in that the call had a String concatenation (to identify the location in the code) in its parameter list. It wasn't obvious to me that this was getting executed, since it was buried within the line of code. (I hadn't made this error before; I seem to need to make every possible error at LEAST once, usually more, before I learn better.) That turned out to be the biggest culprit. If any String activity is needed at all, keep it out of the sections that require any degree of performance.
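The pitfall can be sketched like this (hypothetical names): the concatenated message in the first variant is built on every call, even when the check passes, while the second variant builds it only on the failure path:

```java
// Sketch (hypothetical names): argument expressions are evaluated before
// the method runs, so a call like checkCostly(ok, "Osc #" + i)
// concatenates Strings even when ok is true.
class CheckCost {
    // Bad: forces the caller to build the message on every call.
    static void checkCostly(boolean ok, String where) {
        if (!ok) throw new IllegalStateException(where);
    }
    // Better: pass cheap primitives; build the message only on failure.
    static void checkCheap(boolean ok, int oscIndex) {
        if (!ok) throw new IllegalStateException("Osc #" + oscIndex);
    }
}
```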

The nice thing was that this error forced me to have to profile, and I found the other stuff in the process.

Maybe these suggestions are obvious or basic for most of you. I'm admittedly a self-taught intermediate level Java programmer with a LOT to learn still.

"Greetings my friends! We are all interested in the future, for that is where you and I are going to spend the rest of our lives!" -- The Amazing Criswell
Offline Several Kilo-Bytes
Senior Member
Medals: 11
« Reply #17 - Posted 2013-09-04 20:40:00 »

Division is slow (though division by a constant may be slightly faster); multiplication is better. I do not know the exact difference between addition, subtraction, shifts, and bitwise operations (which are the fastest two-operand operations), but the gap is large enough that integer division is sometimes implemented faster (in assembly code only) using a combination of multiplication and other instructions. Multiplication by a power of two is probably optimized right off the bat. Division by two, on the other hand, cannot be blindly replaced by a shift because the two are not exactly equivalent: -1 >> 1 == -1 while -1 / 2 == 0, so the change may not be automatic.

I am a little surprised the array allocation was not optimized away, even though I would reuse the array to begin with and assume it wasn't. Table lookup may perform differently on different platforms, but it can be a good idea when you do not need a perfect drop-in replacement. The String advice is also good: Strings don't belong in most functions unless they are within an if block that ends with an exception being thrown. How did you find the division thing using a profiler? Was it just a coincidence that you noticed it?
Offline philfrei
« Reply #18 - Posted 2013-09-04 21:47:23 »

Thanks for the review! Nice to get some confirmation. I am not totally up on profiling techniques, and am wondering if anyone knows of good tutorials on the subject. I have used jconsole and jvisualvm, but don't really feel like I am using them to their true potential.

I mostly did the measurements "in context". My synth is set up to play a "note" by iterating through an "envelope". Normally, the output is directed to a SourceDataLine (a blocking queue) which limits the execution speed. Instead, I set the synth to just overwrite the output without blocking, allowing it to run at full speed.

Since the inner loops execute 44100 times for every second of sound being processed, and the envelope was set to about 6 or 7 seconds, running a single note and timing the duration seemed like a reasonable test, covering over 200,000 iterations. I'd compare execution times by changing one element. For example, using the Math.sin() function in a single oscillator gave me a total running time of about 9 millis for those "6 or 7 seconds" of audio. Using the table lookup method with interpolation, the otherwise identical code took about 300 microseconds! (I checked: when used as designed, the two methods are acoustically identical to my ear, and I have a reasonably good ear for this sort of thing. Plus, people with an actual acoustical engineering background have assured me that linear interpolation should be adequate for this.)

I was looking for my notes on the division test, and discovered I did it by putting the operation in a loop. I don't think I had any / 2's in the synth code to use for testing. The test code I wrote is below, a very simple loop; just uncomment the line you want to test. Of course every run produces a different result, and if you do two or three runs in a row, the later runs execute more quickly. Still, I found that * 2, << 1, and >> 1 were all in the same ballpark, with / 2 an order of magnitude slower for that number of iterations. I don't know how valid this sort of test is.

// assumes you already have these
long startTime;
long endTime;
@SuppressWarnings("unused")
double a;

startTime = System.nanoTime();
for (int i = 0; i < 10000000; i++)
{
    // uncomment one of the following...
    // a = i / 2;
    // a = i >> 1;
    // a = i * 2;
    // a = i << 1;
}
endTime = System.nanoTime();
System.out.println("elapsed: " + (endTime - startTime));

I hate accumulating those yellow warnings in Eclipse--hence the annotation.
Using i in the operation being tested is an attempt to lessen the potential amount of caching.

"Greetings my friends! We are all interested in the future, for that is where you and I are going to spend the rest of our lives!" -- The Amazing Criswell
Offline Several Kilo-Bytes
Senior Member
Medals: 11
« Reply #19 - Posted 2013-09-04 22:23:51 »

<whisper>My general experience is that micro-benchmarking is more valuable than profiling. Don't tell anyone though.</whisper> The problem with profiling is that you cannot get a very accurate picture of little things. Sampling misses things and full profiling creates stalls that would not normally exist. I once profiled code that involved generating random numbers and profiling did not tell me that the RNG was the problem, even though the change increased speed by 50%. Yours is an optimization story I would recommend people read.

If you are curious why the sound is no different, it is because sound can be interpreted as a superposition of sine waves. If you subtract your base frequency sine wave from your interpolated sine wave then you get a function with tiny blips. If you fill in the difference with small amplitude high frequency, you get the real (interpolated) function. If you do it with 8 points, the difference between the big low frequency amplitude and small high frequency amplitudes is like a whisper in a loud, crowded room and since they are all overtones of the original you won't notice them by ear anyway. If you double the number of points the difference is even greater. If you have thousands of points, the difference is negligible. At that point the low frequency overtones are eliminated (the first thousand frequency multiples I think) and very very high frequency overtones are much much smaller because the difference is smaller. They end up being too high and too quiet to hear or even produce through speakers, so you hear the intended frequency only.

Edit: That assumes a sound system with an infinite sampling rate and infinite-precision numbers, or that your precalculated values have a frequency that divides evenly into the sampling rate. People can't hear below 20 Hz or above 20000 Hz, so...
Offline Oskuro
JGO Knight
Medals: 39
Exp: 6 years
Coding in Style
« Reply #20 - Posted 2013-09-05 11:01:28 »

Want to try something fun? Compile your code, then use a decompiler, and take a look at the decompiled code.

The compiler will try to optimize on its own, so it will give you an idea of what is being done under the hood to improve performance.... But it will also make you realize how horribly unreadable such code is.

It is a fun experience though. I particularly like decompiling stuff to try and figure out how it works.

Offline pjt33
« Reply #21 - Posted 2013-09-05 15:12:48 »

Quote
Want to try something fun? Compile your code, then use a decompiler, and take a look at the decompiled code.

The compiler will try to optimize on its own, so it will give you an idea of what is being done under the hood to improve performance...
Not very useful with Java, though, unless you can get at the JIT output. javac leaves most of the optimisation to Hotspot.
Offline sproingie
JGO Kernel
Medals: 202
« Reply #22 - Posted 2013-09-05 17:02:11 »

Quote
Not very useful with Java, though, unless you can get at the JIT output. javac leaves most of the optimisation to Hotspot.

https://wikis.oracle.com/display/HotSpotInternals/PrintAssembly
Offline Several Kilo-Bytes
Senior Member
Medals: 11
« Reply #23 - Posted 2013-09-05 23:44:38 »

Quote
Want to try something fun? Compile your code, then use a decompiler, and take a look at the decompiled code. ...

1) Bad idea if you think it's applicable to human optimization. It can't help you and will often hurt you.
2) If "optimized" Java code is unreadable, then the person optimizing it did not write it for a computer made in the multicore era, or does not know what constitutes an optimization. There is no more excuse for optimized source code to be unreadable than for unoptimized code to be unreadable. Are these fixes unreadable: using Trove/Colt primitive collections instead of wrapper classes? Reducing the number of operations in a calculation? Replacing naively selected data structures and algorithms with good big-O behavior, like linked lists and quad trees, with faster, shorter, simpler brute force methods?
Offline Oskuro
JGO Knight
Medals: 39
Exp: 6 years
Coding in Style
« Reply #24 - Posted 2013-09-06 16:11:36 »

Quote
1) Bad idea if you think it's applicable to human optimization. It can't help you and will often hurt you.
2) If "optimized" Java code is unreadable, then the person optimizing it did not write it for a computer made in the multicore era, or does not know what constitutes an optimization.

I think you missed my point.

I'm not suggesting decompilation is a valid optimization technique; it is just a fun thing I like to do sometimes, and in some cases it can give you ideas.

The most educational part of such an exercise, in my opinion, is learning to value readable code. I agree that optimization shouldn't make code unreadable, but in practice it often does, especially when code is over-optimized.

Offline Several Kilo-Bytes
Senior Member
Medals: 11
« Reply #25 - Posted 2013-09-06 22:27:42 »

Quote
I think you missed my point.

It's educational, but it's in no way applicable to optimizing at the source code level.

If a compiler does something weird with your source code, it is because it did static analysis and determined that the two operations were equivalent. It only does so when the result is both faster and equivalent. If you imitate it in your source code, the result is either a) slower, b) not what you intended, or c) identical in effect and speed to more straightforward, cross-platform, and future-proof code. It can only hurt you to take inspiration from compiler output for your high-level source code.

I know it is unintuitive that optimization would not be unintuitive. It should be complicated. Right? It should be extra work. Right? Here is a shocking truth: If a compiler performs optimizations on your code, it does not mean your code is deficient and needs to be mucked around with until you confuse the compiler. It means your code is optimal.

Optimization and readability are also not a tradeoff. If they are, then you are using optimization techniques from the 80s and 90s that don't work on modern computers (back when several kilobytes was a lot of memory and all instructions took approximately the same long amount of time). Desktop and mobile computers today are literal supercomputers. Supercomputers are commonplace now, so a different programming style is required. They have a different architecture; they are not just faster versions of old machines. I was not kidding that certain brute force methods are better than things like linked lists and quad trees.

Optimal code nowadays is code that can run on fast hardware (either in series or in parallel) without being interrupted. Complicated code uses complicated features, which usually stall the CPU or GPU; so complicated high-level code is the opposite of optimal. (The converse does not hold: uncomplicated code is not automatically optimal.) Conflating optimized/unoptimized with unreadable/readable is long outdated. It's pretty great, because you can use another programmer's optimized code without even noticing. It's also why making small changes like iterating over an array backwards may hurt performance, even though it doesn't seem more complex and it used to be a recommended optimization when C was still young.
Offline hwinwuzhere
« Reply #26 - Posted 2014-01-21 11:04:58 »

I've found that avoiding the use of loops within the main game loop boosts performance quite a lot.

What did the boolean say to the integer? You can't handle the truth.
Offline Riven
« League of Dukes »
JGO Overlord
Medals: 781
Projects: 4
Exp: 16 years
Hand over your head.
« Reply #27 - Posted 2014-01-21 11:07:24 »

Quote
I've found that avoiding the use of loops within the main game loop boosts performance quite a lot.

Did I miss your sarcasm? The less you do, the faster it goes, so... yes.

Online Roquen
« Reply #28 - Posted 2014-01-21 12:33:10 »

Quote
My general experience is that micro-benchmarking is more valuable than profiling.

Profile real code with real data. The vast majority of people can't write a micro-benchmark that doesn't lie to them.

Quote
Instead of "/ 2" (integer division by 2), ">> 1" really does perform better. However, "* 2" and "<< 1" gave me identical performance times. Is this compiler dependent?
Division by 2 isn't the same as a right shift by one. Consider an input of -1. The compiler can only transform the division into a shift if the input is ensured to be positive, or negative and even. Multiplication by 2 and left-shift by 1 are always the same, and the compiler can brainlessly perform that transform for you.
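The difference is easy to demonstrate: in Java, integer / truncates toward zero while >> rounds toward negative infinity, so they disagree for negative odd values:

```java
// Demonstration: / 2 and >> 1 disagree for negative odd inputs;
// * 2 and << 1 always agree.
class DivVsShift {
    public static void main(String[] args) {
        System.out.println(-1 / 2);               // 0  (truncates toward zero)
        System.out.println(-1 >> 1);              // -1 (rounds toward -infinity)
        System.out.println((-4 / 2) == (-4 >> 1)); // true: negative but even
        System.out.println((-1 * 2) == (-1 << 1)); // true: always equivalent
    }
}
```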

If you want sin/cos fast, think minimax polynomial approximations. It's excessively rare that a table lookup will be a win (sound synthesis might be one case). LUTs in general are very slow, but people believe their broken micro-benchmarks that tell them lies.

Skip clearing the array if you can.

Quote
(Though division by a constant may be slightly faster.)
Integer division by a constant is always transformable into a multiplication (in the worst case with extra correction steps; I don't know if HotSpot does this or not). Floating point generally isn't.

Quote
Want to try something fun? Compile your code, then use a decompiler, and take a look at the decompiled code.
Java ahead-of-time compilers (javac, the Eclipse compiler, etc.) don't do much. They just transform source into bytecode; no optimizations occur (well, no interesting ones). You have to have HotSpot dump out the native assembly to see real optimizations.
Offline hwinwuzhere
« Reply #29 - Posted 2014-01-21 13:34:49 »

Quote
Did I miss your sarcasm? The less you do, the faster it goes, so... yes.

No sarcasm, just a really, really basic fact. Sometimes I personally tend to forget these things; that's why I 'contributed' my simple yet useful knowledge to the community.

What I meant to say is: there are a lot of different algorithms that can give you the same result. What I find challenging in Java (and what makes it more fun for me) is trying to write algorithms that don't make excessive use of loops and at the same time aren't recursive or lengthy.
