Generalized Rant Thread
Offline Roquen
« Reply #30 - Posted 2012-05-10 12:31:05 »

Optimization is the root of all evil (phrase):  A myth successfully promoted by computer science professors and teaching assistants.  The goal is to minimize (optimize) their time actually spent with students and student-related activities, such as grading assignments and tests, so that they have more free time for their real reason for being an academic.  Examples:  performing research, getting grants, scoring with undergraduates and/or drinking at the pub to drown their sorrows about not being able to get a real job.

Seriously.  Let me see a show of hands of people that think "gotos" are evil.  Now let me see a show of hands of people that think "optimizations" are evil.  If you held up your hand both times, you're being a parrot without even knowing it and there is no way that you've read the two principal papers on which these notions are based.

DONALD E. KNUTH, "Structured Programming with go to Statements", Computing Surveys, Vol. 6, No. 4 December 1974:  Which is a defense of the goto statement.  The paper is available online.

Quote
There is no doubt that the grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of non critical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified. It is often a mistake to make a priori judgments about what parts of a program are really critical, since the universal experience of programmers who have been using measurement tools has been that their intuitive guesses fail.

And another from the same paper:
Quote
The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by pennywise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs.


Copyright © 1974, Association for Computing Machinery, Inc. General permission to republish, but not for profit, all or part of this material is granted, provided that ACM's copyright notice is given and that reference is made to this publication, to its date of issue, and to the fact that reprinting privileges were granted by permission of the Association for Computing Machinery.
Offline delt0r

« Reply #31 - Posted 2012-05-10 14:49:47 »

True story.

A guy (who shall remain nameless) was working on some code that needed to run on a cluster for many months. Since the cluster was still getting built, he decided to ultra-optimize the core part of the code. He busted it down to assembler and after about 6 months, he managed to make it almost 5x faster than the original C code. When he gave a talk about the ultra cool optimizations like instruction ordering and other such stuff, someone else in the crowd (who shall also remain nameless) said that compilers are for the most part just better than humans at that stuff*. To prove it, this person took the original C and spent a day optimizing the compiler flags for gcc. After just a day, he also had almost a 5x speed increase. When he switched to the Intel compiler, it was more than 6x faster than the original. Mr Ultra Optimizer cried and went MIA for about 2 months before coming back to finish his PhD.

Sure, don't write crap code off the bat. There is no point doing a bunch of O(n^2) stuff when it could just as easily be O(n log n) (see the sketch below). But for the most part, the people I know who want to optimize, optimize, optimize are the root of all evil on the projects I have had the displeasure of working on with them.

* Of course there are exceptions. Like basic vector stuff for SSE etc. But these are typically the exception.
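To illustrate the algorithmic side of that point, here is a minimal sketch (the names are purely illustrative, not from any real project): finding a duplicate id with two nested loops is O(n^2); sorting first would make it O(n log n), and a HashSet gets it to roughly O(n) without losing any clarity.

import java.util.HashSet;
import java.util.Set;

final class DuplicateCheck {
    // O(n^2): fine for a handful of entities, painful for thousands
    static boolean hasDuplicateQuadratic(int[] ids) {
        for (int i = 0; i < ids.length; i++)
            for (int j = i + 1; j < ids.length; j++)
                if (ids[i] == ids[j]) return true;
        return false;
    }

    // Roughly O(n): the "just as easy" version using a hash set
    static boolean hasDuplicateLinear(int[] ids) {
        Set<Integer> seen = new HashSet<Integer>();
        for (int id : ids)
            if (!seen.add(id)) return true;   // add() returns false if the id was already present
        return false;
    }
}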

Offline Roquen
« Reply #32 - Posted 2012-05-10 15:13:03 »

I'd say that the problem here wasn't optimization.  The wasted time and effort was a lack of understanding, in this case of tools.  In other cases it will be the language in question, mathematics, algorithms or the actual problem itself.  I bet the nameless person will forever more pay attention to the choice of compilers and their associated options.  Lesson learned.  And they needed to be burned...dropping to assembly should virtually never be done, and doing it for large chunks of code is pure foolishness.
Offline sproingie

« Reply #33 - Posted 2012-05-10 16:09:12 »

The only reason you need to drop to hand-rolled ASM for SSE in C is that the language lacks a construct to express vector operations, so the compiler has to analyze loops, which is essentially an impossible problem for arbitrary loops.  C++ can express vector ops by defining vector types, but it still has no vector primitives for the compiler's benefit, so the "hand-rolling" would still have to take place in the class body, which is the wrong place to be making architecture-specific implementation decisions.

If Java wants high-performance vectorized operations, it could do worse than to lift them from Fortress.
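For what such vector primitives can look like in Java, the incubating Vector API (jdk.incubator.vector) expresses them directly for the compiler's benefit. A minimal sketch of element-wise addition, purely as an illustration (it needs --add-modules jdk.incubator.vector to compile and run):

import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

final class VectorAdd {
    // Preferred species picks the widest SIMD shape the hardware supports
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // c[i] = a[i] + b[i], expressed as explicit vector operations
    static void add(float[] a, float[] b, float[] c) {
        int i = 0;
        int bound = SPECIES.loopBound(a.length);
        for (; i < bound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(c, i);
        }
        for (; i < a.length; i++) {   // scalar tail for the leftover elements
            c[i] = a[i] + b[i];
        }
    }
}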
Offline ra4king

« Reply #34 - Posted 2012-05-10 18:07:40 »

I believe what we programmers collectively agree on is that premature optimizations are the root of all evil.

Offline Roquen
« Reply #35 - Posted 2012-05-11 04:45:48 »

Let's not get bogged down in "dropping to assembly".  This is the least interesting kind of optimization in terms of usefulness.  Generally a reasonable expectation is some small linear improvement with a very short shelf life.  It's the last line of defense, squeezing water from a rock, (fill in some other cliche phrases).  If you go there without having seriously considered all other options, you're officially doing it wrong.

@sproingie: While what you're saying is true for code written for scalar (SISD) execution, it should be noted that MSVC, Intel's compiler & GCC all support extensions which expose SIMD et al. instructions.  So dropping to assembly isn't really needed in that case except to manually schedule, register allocate, etc.

Quote
I believe what we programmers collectively agree on is that premature optimizations...
Unless you're using the royal "we" in this sentence, it seems to me that programmers can agree on very little.  There certainly is a large group of programmers who believe that optimization is evil.

Personally I think that "premature optimization" is an oxymoron.  You can "prematurely code" but it's impossible to "prematurely optimize".  Optimization != make this faster.   Optimization is an attempt to meet a collection of goals, with some measure of the relative success of the attempt.  Making some piece of code go faster without having any real impact on the final product is an anti-optimization.  It wasted time (almost always the most important resource), doesn't measurably move you toward any goal, and did move you away from the ultimate goal of getting the project done.

But even if you take the narrow view of optimization only being about speed, I consider the notion of "premature optimization" to be harmful, because so many people take it to mean not worrying about performance until "the end".  On the whole that simply doesn't work.  The largest speed improvements will come from design and understanding the problem.  Waiting until the end will tend to limit your options and cost you additional coding time.

I'd also like to push back on the notion that optimizing for speed makes code harder to read, write and debug.  On the whole I find that to be more the exception than the rule.  Most of the time it should be a wash, sometimes it's actually easier, and only in a very small percentage of cases is it harder.

In summary: Your most important resource is your time...don't freaking waste it.
Offline davedes
« Reply #36 - Posted 2012-05-11 06:10:46 »

Quote
In summary: Your most important resource is your time...don't freaking waste it.
Unless you're a hobbyist, and your time is spent trying to outperform your last project.. even though none of your games will ever need to render nearly that many sprites at once. Roll Eyes

Offline Roquen
« Reply #37 - Posted 2012-05-11 07:43:47 »

Wasting time in this context is only in terms of goal meeting.  It's only wasted if you're making zero or negative progress.  So if the goal is a learning experience, it's not really expected that the produced code is useful, fast, bug-free, well-designed, etc. unless any of those criteria are part of the goal set.
Offline Orangy Tang

« Reply #38 - Posted 2012-05-11 09:35:42 »

Quote from: delt0r
True story. A guy (who shall remain nameless) ... busted it down to assembler and after about 6 months, he managed to make it almost 5x faster than the original C code. [...]

Ha! I can do one like that - the first place I got a proper coding job was a massive C++ hardware control system / UI, written in the bad old days of VS6 and MFC. It ran like an absolute dog - because they *only* did debug builds. In release builds it was so crash-tastic it wouldn't even boot. No-one seemed to know (or care) what the release-only build bugs were (I think people assumed it was something wrong in the MS compiler and not their problem, but I'm certain it was just the usual uninitialised memory stuff).

Of course, performance was still an issue, and they actually attempted to optimise their debug-only code.

To make matters even weirder, they actually shipped debug builds. And because debug builds contain debug libraries from VS which you're not supposed to ship, they actually had to buy a site license of VS for every client they shipped to ($$$).

I would love to go back, knowing what I know now, and fix the release build up. It probably wasn't even any huge problems, just lots of little ones...

Offline Eli Delventhal

« Reply #39 - Posted 2012-05-11 17:31:06 »


Quote from: Roquen
Optimization is the root of all evil (phrase):  etc... etc...

It seems like you're making your point invalid with your own quotes. Also, your mindless judgmental insult of professors (both my parents are professors, by the way) further dilutes any valid point you may have had in your statement.

Nobody has ever told me "optimization is the root of all evil," in school or otherwise. I have always heard "premature optimization is the root of all evil," which, lo and behold, is exactly what Knuth said. Guess what I was often reading in school and taught had good philosophies to think about? Knuth. And who put me down that path? Professors.

Reading other people's follow-up posts, I see they are all providing examples of exactly the same point. Don't optimize prematurely. The proper development path is to write as good code as you can without stressing over performance; instead, focus on good design, readability, and modularity. Then, when you find something is too slow, you find out why and fix it. End of story. So get off your high horse, because nobody is going to raise their hand saying that optimization in general is a bad thing.

Gotos, maybe, but I find that's personal opinion, because there are a lot of ways to do the same thing you'd use gotos for.

/////////////////////////////////////
Now my true story:

I worked on an iPhone game that was very much in the prototype stage and had been in development for a couple of months. We were playing around with different ways of doing things and making it fun. Because we were short on people, we hired another engineer. To get familiar with the code, he was supposed to write a level editor. Instead, he went through the entire codebase (tens of thousands of lines long) and did these optimizations:

Change all:
for (int i = 0; i < arrayList.length; i++)

to:
for (int i = arrayList.length-1; i > -1; i--)


Why? Because we avoid calling .length more than once and therefore it's faster. Let's forget about any potential compiler optimizations there are and assume he's right. The most we'd be looping through is maybe 100 things. So, he saved maybe a fraction of a nanosecond every once in a while.

Change all:
if (i >= 0 || x >= array.length || y >= 0 || z <= n)

to:
if (i > -1 || x > array.length-1 || y > -1 || z < n+1)


Why? Also apparently faster. He decided that any LTE or GTE check required two comparisons (> and ==) and was therefore slower. This sounded totally insane to me, but I've never prided myself on knowing exactly what goes on under the hood, so I let him have his declaration for the moment. A few minutes of Googling later and I had several links telling him he was very wrong. His response was "okay", but behind his eyes it seemed like he didn't trust the links I gave him. Also, his changes are insanely difficult to read.

Change all:
float f = obj.thingy.x + obj.thingy.y;

to:
Thingy thingy = obj.thingy;
float f = thingy.x + thingy.y;


Yes, this one is actually fractionally faster. Although with compilers these days I'd question that too. And once again you're sacrificing readability by having massive line bloat.

I know I've said some pretty stupid assumptions or misunderstandings on these forums (yes nobody needs to point them out to me), so I'm not saying I'm spotless. But I would never go into an active codebase and start rewriting everything completely pointlessly instead of doing my job.

Did I mention I was the lead on that project? He didn't work there much longer.
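For completeness, here is a rough sketch of the kind of measurement that settles claims like the reversed-loop one before anyone rewrites a codebase around them. It is not a proper benchmark (no JMH, only a crude warm-up loop around System.nanoTime), and every name and number in it is illustrative:

final class LoopTiming {
    public static void main(String[] args) {
        int[] data = new int[100];                 // "the most we'd loop through is maybe 100"
        long sum = 0;

        for (int warm = 0; warm < 5; warm++) {     // crude warm-up so the JIT kicks in
            long t0 = System.nanoTime();
            for (int rep = 0; rep < 1_000_000; rep++)
                for (int i = 0; i < data.length; i++) sum += data[i];
            long t1 = System.nanoTime();
            for (int rep = 0; rep < 1_000_000; rep++)
                for (int i = data.length - 1; i > -1; i--) sum += data[i];
            long t2 = System.nanoTime();
            System.out.printf("forward %d ms, backward %d ms%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000);
        }
        System.out.println(sum);                   // keep the result live so nothing is eliminated
    }
}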

Offline Riven
« Reply #40 - Posted 2012-05-12 04:12:11 »

Quote from: Eli Delventhal
Did I mention I was the lead on that project? He didn't work there much longer.
I think this case is a failure in management. He might have been the project lead, but that doesn't mean he could do whatever he wanted. At some point he should have been told that his intended actions were not part of his assigned tasks. That it actually got to the point where he rewrote your codebase is shocking, imho.

Offline ra4king

« Reply #41 - Posted 2012-05-12 18:10:06 »

Quote from: Riven
I think this case is a failure in management. He might have been the project lead, but that doesn't mean he could do whatever he wanted. [...]
No no Eli was the project lead, not that incompetent fellow Tongue

However, that guy still shouldn't have been allowed to rewrite the entire codebase to his liking... :/

Offline princec

« Reply #42 - Posted 2012-05-12 19:55:22 »

Quite often programmers can zoom off on a tangent and do stuff so quickly nobody even has time to question what they're doing let alone stop them. That's programmers for you.

Cas Smiley

Offline pitbuller
« Reply #43 - Posted 2012-05-12 20:13:30 »

Quote from: Eli Delventhal
Change all:
float f = obj.thingy.x + obj.thingy.y;

to:
Thingy thingy = obj.thingy;
float f = thingy.x + thingy.y;


That can actually make things more readable. Let's say the obj name was randomObjFromFoo:

float f = randomObjFromFoo.thingy.x + randomObjFromFoo.thingy.y;

to:
Thingy thingy = randomObjFromFoo.thingy;
float f = thingy.x + thingy.y;



Or how about if it were a four-component vector:

float f = randomObjFromFoo.thingy.x + randomObjFromFoo.thingy.y + randomObjFromFoo.thingy.z + randomObjFromFoo.thingy.w;

to:
Thingy thingy = randomObjFromFoo.thingy;
float f = thingy.x + thingy.y + thingy.z + thingy.w;


So it means less code and it's easier to read.


But I think I am missing the point by a fair margin.
Offline sproingie

« Reply #44 - Posted 2012-05-12 22:45:13 »

float f = obj.thingy.x + obj.thingy.y;


This sort of "reaching through objects" is something you don't want to do all the time.  Obviously just one example of it doesn't mean anything one way or the other, but when you find yourself doing it frequently, it goes against an OO design principle called the Law Of Demeter.  Usually it means you wanted a method on the class of thingy rather than computing it "from the outside" as it were.

IDEA has an inspection called "feature envy" that detects this sort of thing.  Ironically, refactoring it into a temporary reference will defeat this inspection though.
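A small sketch of the refactoring being described, with hypothetical names (Obj, Thingy and componentSum are made up, not from anyone's codebase): move the computation onto the class that owns the data, so callers stop reaching through fields.

final class Thingy {
    float x, y;

    // The computation lives where the data lives
    float componentSum() {
        return x + y;
    }
}

final class Obj {
    private final Thingy thingy = new Thingy();

    // Callers never see thingy at all
    float thingySum() {
        return thingy.componentSum();
    }
}

// call site:
//   float f = obj.thingySum();    // instead of obj.thingy.x + obj.thingy.y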

Offline OttoMeier
« Reply #45 - Posted 2012-05-12 23:39:08 »

Well I guess the Law of Demeter is just another myth successfully promoted by computer science professors.  Roll Eyes
Imho pragmatism is fine, but if you start defending goto it really gets ridiculous.
Offline Riven
« Reply #46 - Posted 2012-05-12 23:53:02 »

Quote from: ra4king
No no Eli was the project lead, not that incompetent fellow Tongue
Whoops, I misread that. Anyway, I assume that a clear task description would have prevented this situation. But then again, I wasn't there...

Offline Roquen
« Reply #47 - Posted 2012-05-13 21:23:22 »

Quote from: Eli Delventhal
Also, your mindless judgmental insult of professors (both my parents are professors, by the way) further dilutes any valid point you may have had in your statement.
My definitions are attempting to use a little-known literary style known as satire.  BTW: my father and step-father are professors, a fair number of friends and family members are as well, and I spent a number of years performing university research.  The core of my definition is a paraphrase of what the chair of computer science said to me during a meeting when I was describing my plan of attack.  I described my plan and ended with "or I can apply the principle of K.I.S.S. and defer the optimizations".

Quote from: Eli Delventhal
It seems like you're making your point invalid with your own quotes.
I included them in an attempt to show what he actually said, rather than a twisted version or something completely opposite in meaning. Knuth's statements are tangential, as they are only concerned with localized micro-optimizations, which are the least interesting kind.  Generally the best you can hope for is some small linear speed increase in the specific code in question. It can go higher if you find reducible mathematical formulations or computational fast exits, but still the gains will tend to be relatively minor.  Also note that it seems you think I'm diss'in Don K.  No way.  I 100% agree with what he actually said...in the context that he said it.  And part of that context is when it was said:  1974, which is B-4 the personal computer revolution, when the kinds of programs being written changed pretty radically.  Today what he said is still good advice, say about 97% of the time.

Recall what I've said:  optimization is attempting to meet some set of goals with some metric of success, and your time is your most valuable resource.  Your time should be measured in opportunity cost.  Every hour you work on A has an opportunity cost of two hours: the hour you spend on A and the hour it will take you to catch up on B, if B would have been the better thing to be doing at the time in question.  And of course this further explodes if the lack of B has an impact on the effectiveness of others involved.  Worse if A ends up having no value and must be replaced.

Quote from: Eli Delventhal
The proper development path is to write as good code as you can without stressing over performance, instead focus on good design, readability, and modularity. Then when you find something is too slow you find out why and fix it. End of story.
The ability to come up with a reasonable design hinges on one's understanding of the problem statement.  And perhaps more importantly, on ensuring that the problem statement is really what one wants to solve.  Understanding the problem, design and modularity is where "real" optimization for speed will come from.

Quote from: Eli Delventhal
Instead, he went through the entire codebase (tens of thousands of lines long) and did these optimizations:
You mean he did these anti-optimizations.  This is the same as delt0r's example.  His person was clueless about C compilers.  Yours was clueless about compilers, HotSpot and CPU instructions.  If you want to beat your tools at their own game and/or hold their hand, then you have to understand them to some extent and understand how the hardware works.  Otherwise you're spending a bunch of time flailing in the dark and are most likely slowing things down.

I'll jump on this bandwagon with: Beware of anti-optimizations.

However, as Riven said, this is ultimately a management failure.  Any unknown should have his/her hand held for a while.  But again, hopefully the lesson was learned by whomever was responsible, who will keep an eye on team n00bs in the future.

Quote
...avoid calling .length more than once and therefore it's faster. Let's forget about any potential compiler optimizations there are and assume he's right.
He's wrong.
Quote
Any LTE or GTE checks he decided required two checks
Again wrong.

WRT dereferencing chains: personally I tend to pull them out for cosmetic purposes.  But it is a useful and easy micro-optimization in the case where the compiler cannot statically tell that one or more of the members couldn't have changed since the last dereference.  A non-inlined call in between, or the potential for an alias, are examples.
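A minimal illustration of that case, using hypothetical types and a deliberately opaque call in the middle (nothing here comes from a real codebase): in slow() the JIT may have to re-read obj.thingy after the call because it cannot prove the field is unchanged, while in fast() the local reference makes that trivial.

final class DerefExample {
    static final class Thingy { float x, y; }
    static final class Obj { Thingy thingy = new Thingy(); }

    static void log(float v) { System.out.println(v); }   // stands in for any non-inlined call

    static float slow(Obj obj) {
        float a = obj.thingy.x;
        log(a);                      // as far as the JIT knows, obj.thingy may have been reassigned here
        return a + obj.thingy.y;     // so the field may be dereferenced a second time
    }

    static float fast(Obj obj) {
        Thingy t = obj.thingy;       // read the field once into a local
        float a = t.x;
        log(a);
        return a + t.y;              // t is provably the same object
    }
}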

Quote from: princec
Quite often programmers can zoom off on a tangent and do stuff so quickly nobody even has time to question what they're doing let alone stop them. That's programmers for you.

~/src/SecretProject: svn up

"Huh, why have ALL these files changed?"

Quote from: Eli Delventhal
Se get off your high horse because nobody is going to raise their hand saying that optimization in general is a bad thing.
I'm happy up on my horse.  I can see further than the heathens at my feet.

Quote from: Eli Delventhal
Gotos, maybe, but I find that's personal opinion, because there are a lot of ways to do the same thing you'd use gotos for.
Hopefully you mean unstructured gotos.  There's never been a question about the structured kind, which sadly many people don't get.

---------------------------

Now back to the subject at hand:

I'm not really talking about micro-optimizations.  These can and almost always should be deferred, as they have little or no external impact.  This is what Knuth is talking about.  They might be needed to meet the performance requirements, but they offer little return compared to the development time cost.  And Knuth is absolutely correct that frequently programmers will not properly identify what will end up being a hotspot, so there is no reason not to wait until you're 100% sure they've been properly located.  This lowers your risks.

Similarly for small, local-only optimizations. By this I mean routines or sub-systems whose external interactions can easily be abstracted away or corrected by calling on Cas' refactoring fairy when a bad choice has been made.  The opportunity cost is a real drag but hopefully manageable. The trick here is that "small" is context dependent.  If the project has to be done in a couple of weeks, then nothing is small, and if it's a "my lifelong tinkering project" then pretty much everything is.

So what AM I talking about then?  Widespread decisions which pretty much must be made in advance so as not to skyrocket your opportunity cost.  And if you move in the wrong direction far enough, then you've painted yourself into a corner and you're SOL if you need to change.  I'll give a couple of examples and stick to things that have popped up recently on these forums.

1) Scenegraphs vs. spatial partitioning.  These two styles of world management are pretty much mutually exclusive when used as the world database representation.  I won't go into pros & cons as I'm far too biased.

2) Not storing explicit angles in 2D for orientation/rotational information.  Logically, using complex numbers instead of storing the angle makes it possible to hardly ever need trig and inverse trig functions.  This rotational information is numerically equivalent to a unit vector in the "facing" direction.  It also makes it possible to drop a fair number of matrix operations, as complex numbers trivially handle composition of rotations and rotation of vectors/points, as well as reflections (though with a different formula, whereas matrices unify the two).  A sketch of this appears after the list.

3) In theagentd's thread Random thoughts: Extreme speed 2D physics, one concern was having enough precision in coordinate information to handle the scale that he desired.  One possibility would be to move to a higher-precision (non-natively supported) format.  Doing so would have an enormous impact on the performance of every simple calculation involving a coordinate.  Additionally, to be practical, you'd have to create a class to support this non-native format, and HotSpot isn't great with small objects. If my quick math is correct (excluding the pointer), a 3D vector of doubles is 48 bytes, while a 3D vector of a non-primitive (as a class) using 128 bits per component is 120 bytes (2.5x more memory).  Toss in the lack of operator overloading to complicate implementation, and the fact that (for something like double-doubles) each simple operation would cost about 10x the number of cycles of a double.  Of course all of this would take much longer to implement.  So simply forget about all of that and just break the world up into some collection of local coordinate frames...problem solved, you're back to using plain old doubles (see the second sketch below).  And even if you do the most naive collision detection possible (n²), the fact that entities are scattered across multiple coordinate frames will lead to an exponentially faster execution time.  No downsides here.  Faster, smaller, easier and quicker to implement...move on to the next task.
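For point 2, a minimal sketch assuming a plain (cos, sin) pair is stored instead of an angle; the pair is a unit complex number, so composing rotations and rotating points needs only multiplies and adds, no trig calls (class and method names are illustrative):

final class Rot2 {
    double c, s;   // cos(theta) and sin(theta): the unit "facing" vector

    Rot2(double c, double s) { this.c = c; this.s = s; }

    // Compose two rotations: complex multiplication (c1 + i*s1)(c2 + i*s2)
    Rot2 mul(Rot2 o) { return new Rot2(c * o.c - s * o.s, c * o.s + s * o.c); }

    // Rotate a point (x, y) by this rotation
    double rotX(double x, double y) { return c * x - s * y; }
    double rotY(double x, double y) { return s * x + c * y; }

    // Occasional renormalization counters drift from repeated composition
    void normalize() { double n = Math.sqrt(c * c + s * s); c /= n; s /= n; }
}

And for point 3, a hypothetical sketch of the local-coordinate-frame idea (the cell size and all names are made up): each entity stores a small integer cell index plus a local double offset, so plain doubles keep their precision no matter how large the whole world is.

final class LocalPosition {
    static final double CELL_SIZE = 4096.0;   // arbitrary illustrative cell size, in world units

    int cellX, cellY;    // which cell of the world grid the entity is in
    double x, y;         // offset within that cell

    // Keep the local offset within [0, CELL_SIZE) after movement
    void normalize() {
        int dx = (int) Math.floor(x / CELL_SIZE);
        int dy = (int) Math.floor(y / CELL_SIZE);
        cellX += dx;  x -= dx * CELL_SIZE;
        cellY += dy;  y -= dy * CELL_SIZE;
    }

    // Offset to another position, computed without ever forming huge world-space numbers
    double relX(LocalPosition o) { return (o.cellX - cellX) * CELL_SIZE + (o.x - x); }
    double relY(LocalPosition o) { return (o.cellY - cellY) * CELL_SIZE + (o.y - y); }
}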



Offline princec

« Reply #48 - Posted 2012-05-13 22:01:16 »

Quote from: Roquen
~/src/SecretProject: svn up

"Huh, why have ALL these files changed?"
Of course, but that doesn't change the fact that it's already been done. Let us not forget the extreme speed at which coders can do their magic when on a roll. I wouldn't necessarily call it a management failure at this stage; but letting him get away with it more than once would be.

Cas Smiley

Offline philfrei
« Reply #49 - Posted 2012-05-14 08:38:53 »

@Eli -- Having some fellow making changes in the code base without agreement from the project lead seems quite out of line!

This "optimization" of his caught my eye. Having i suddenly head in the opposite direction seems like a dangerous change unless it is only being used for counting.
Quote
Change all:
  for (int i = 0; i < arrayList.length; i++) 

to:

  for (int i = arrayList.length-1; i > -1; i--) 


But what I'm curious about is that I was reading that the proper way to do this sort of thing is as follows:
  for (int i = 0, n = arrayList.length; i < n; i++) 

This way, i behaves the same for the looped code, and arrayList.length is no longer being read needlessly.

Are compilers getting smart enough to automatically fix this sort of thing now? Does this optimization matter very much (maybe only with large arrays)? Is it worth the bother? It seems to me to be a readable way to write loops. I can't recall where I first read about it.
Offline ra4king

« Reply #50 - Posted 2012-05-14 08:45:50 »

The argument for the optimization of that for loop is so silly that it makes me cry. Grin

Offline princec

« Reply #51 - Posted 2012-05-14 10:08:17 »

The thing is, I trust a professional programmer to get on with doing what he thinks is best, and I find having to agree pointless micro bullshit like this with a so-called project lead insulting. I have a general rule for people who work with me these days, which is: you don't tell me what to do, and I won't tell you how to do it. Whoever writes it and makes it work first is right. After a while of working with people you get to know who goes off on a tangent doing pointless work, and yes, sometimes it's even me, because occasionally I like to do pointless work while I'm thinking about something else or just for a change.

One unfortunate aspect of programming is the very wide disparity in understanding and ability of programmers on all levels. This creates astounding friction with the other aspect of programmers which is that they all think they know more than everyone else on their team (you can just see how aspect #2 mysteriously gives rise to aspect #1). Yes, me included. It is a remarkable achievement sometimes that software ever gets made in a collaborative manner given these two invariant truths on programming teams. It is also therefore remarkably unsurprising that lone programmers usually produce their best work, and very small teams are vastly, vastly more productive than very large teams.

Cas Smiley

Offline delt0r

« Reply #52 - Posted 2012-05-14 10:42:39 »

Quote
It is also therefore remarkably unsurprising that lone programmers usually produce their best work, and very small teams are vastly, vastly more productive than very large teams.

I don't think it's limited to programming. Smaller teams just work better for us humans; it's sort of the way we are wired. Also there is just less communication overhead.

As the old management saying goes: you can't get a baby in 1 month by getting 9 women pregnant.

Offline Roquen
« Reply #53 - Posted 2012-05-14 12:06:44 »

@philfrei: The only reason to pull out length is if the compiler can't tell if the array might have been changed behind its back.  So if the reference is a local variable and is never assigned within the loop to something else, then it will be read exactly once.
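A small illustration of that distinction, with invented names: when the array lives in a local that is never reassigned, the length read is trivially loop-invariant; when the loop keeps re-reading a field that other code might write to, the compiler has to be more conservative.

final class HoistExample {
    int[] data = new int[256];     // field: visible to other methods (and threads)

    int sumViaField() {
        int s = 0;
        for (int i = 0; i < data.length; i++) {   // 'data' is re-read from the field; harder to prove invariant
            s += data[i];
        }
        return s;
    }

    int sumViaLocal() {
        int[] a = data;            // read the field once into a local, never reassigned
        int s = 0;
        for (int i = 0; i < a.length; i++) {      // a.length is trivially loop-invariant now
            s += a[i];
        }
        return s;
    }
}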

Micro-management is madness.  Massive time cost for everyone involved, and it creates the exact opposite atmosphere from what you really want.  The team is "us" and the project is our baby that we all want to be proud of.

Quote
One unfortunate aspect of programming is the very wide disparity in understanding and ability of programmers on all levels.
On the flip side it's fantastic when the levels of knowledge are scattered across different areas.
Offline gimbal


Quote from: Roquen
On the flip side it's fantastic when the levels of knowledge are scattered across different areas.

Unless those areas are conflicting in nature. The design problems that can occur when you put an Oracle PL/SQL developer in a team with any kind of web app developer for example. You'll have one person wanting to put everything in the database and keeping the application layer as thin as possible and one person treating a database as something to put data in and nothing more.
Offline sproingie


Memoizing the loop condition after proving it never changes during the loop is a trivial optimization for most cases; hell, I bet even Dalvik manages that one. 
 
You don't have to guess what the VM does: you can tell java to dump the assembly instructions hotspot generates and see for yourself.  See the last answer on this question: http://stackoverflow.com/questions/9336704/jvm-option-to-optimize-loop-statements
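A quick sketch of doing exactly that. The flag combination below is the usual one for dumping JIT output, but it requires the hsdis disassembler plugin to be installed, and the class and loop here are purely illustrative:

// Run with something like:
//   java -XX:+UnlockDiagnosticVMOptions -XX:+PrintAssembly PrintAssemblyDemo
final class PrintAssemblyDemo {
    static int sum(int[] a) {
        int s = 0;
        for (int i = 0; i < a.length; i++) {   // check whether a.length is reloaded each iteration
            s += a[i];
        }
        return s;
    }

    public static void main(String[] args) {
        int[] a = new int[1024];
        int s = 0;
        for (int rep = 0; rep < 100_000; rep++) {   // enough calls to get the method JIT-compiled
            s += sum(a);
        }
        System.out.println(s);
    }
}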

Offline philfrei
« Reply #56 - Posted 2012-05-14 20:50:41 »

Thx Roquen & sproingie
Offline Eli Delventhal

« Reply #57 - Posted 2012-05-15 00:07:15 »

Yes, the guy needed to be better managed. Unfortunately we were a small team, I was professionally inexperienced so I didn't feel comfortable straight-up telling him he was wrong, and I didn't have time to deal with it. But he had instructions, which were to do something completely different from these micro- or anti- or whatever-you-want-to-call-them optimizations. That's why we had him leave.

He also made class names that were massively long and made ASCII graphs in the code, but hey, that's just preference. Tongue

Roquen - I know all his optimizations were wrong, that was the point of posting them. I would also continue to disagree with you on your examples for optimizations (not storing angles and storing vectors instead, scenegraphs, etc.). Write it quickly and intelligently the first time, but don't worry about stuff like that. Chances are that in 99% of situations, calculating a square root 5,000 times per frame is going to do nothing to your FPS. If you make your game and you've only got 20 FPS, figure out specifically what is taking the most time. If it's square roots (it probably won't be), then make that change.

But whatever. You can do what you want to, my man.

Offline Roquen
« Reply #58 - Posted 2012-05-16 05:25:12 »

Remember I'm talking about attempting to make reasonable (not necessarily best) design decisions based on the problem at hand and making forward progress.  If the design "tells you" that there are no big potential performance bottlenecks, space issues, etc. then you spend zero time thinking about them. On the other hand, ignoring what your design is telling you about the problem is a recipe for failure.  I'd say that 99% of failed or troubled projects are due to a combination of over- & under-design and a lack of reasonable time estimates.  No amount of duct tape and super glue at the tail end will address the problem (at least not in a reasonable amount of time).
Offline loom_weaver

« Reply #59 - Posted 2012-05-17 03:56:17 »

Quote from: philfrei
But what I'm curious about is that I was reading that the proper way to do this sort of thing is as follows:

  for (int i = 0, n = arrayList.length; i < n; i++)


Understanding assembly helps when optimizing simple loops.  Here are a couple of optimizations that should work in Java.

1. Unrolling a loop.  Each iteration of a loop requires a comparison.  If you can get rid of that comparison then it will go faster.  For example if you have a fixed array of 256 elements then 256 consecutive lines will be faster than a loop.

for (int i = 0;  i < 256;  i++) {
  array[i] = foo(i);
}

is slower than

array[0] = foo(0);
array[1] = foo(1);
array[2] = foo(2);
...
array[255] = foo(255);



2. Have the loop invariant compare against 0 if possible.  The reason is that the basic CMP instruction will see if a register contains 0 or not.  The moment you compare against something that is non-zero, a subtraction instruction is required before the compare.

for (int i = 0;  i < array.length;  i++)

is slower than

for (int i = array.length - 1;  i >= 0;  i--)


Good compilers can optimize these.  I haven't checked recently but in 1.4 the JDK did not.