  Show Posts
1  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-22 11:50:44
Alright, we'll take a look at it! Thanks for testing and including the log!
2  Discussions / General Discussions / Re: What? Your game isn't a pixelated retro masterpiece? on: 2014-07-19 05:09:05
The term seems to be used very ambiguously

Well, everything where you can still see individual pixels...
Sooo... Every single game right now since proper MSAA is too hard to do?
3  Game Development / Performance Tuning / Re: Can someone explain what is going on with this square rooting? on: 2014-07-18 20:28:53
nanoTime() is fairly inaccurate in most cases. It's better to use getTimeMillis(). (generally)
Really? I have the exact opposite experience. On some Windows machines getTimeMillis() has an actual accuracy of around 15ms, i.e. the timer only updates every 15ms, so you get horrible precision. nanoTime() has always worked reliably for me.
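For anyone who wants to check this on their own machine, here is a minimal sketch (my own illustration, not from the thread; it uses System.currentTimeMillis(), the actual name of the millisecond timer) that counts how many times each timer ticks during a ~100 ms busy loop. On machines with a coarse millisecond timer you may see only a handful of updates, i.e. roughly 10-16 ms granularity, while nanoTime() changes almost every iteration.

public class TimerGranularity {
    public static void main(String[] args) {
        long lastMillis = System.currentTimeMillis();
        long lastNanos = System.nanoTime();
        int millisTicks = 0, nanoTicks = 0;
        long end = System.nanoTime() + 100_000_000L; // busy-sample for ~100 ms
        while (System.nanoTime() < end) {
            long m = System.currentTimeMillis();
            long n = System.nanoTime();
            if (m != lastMillis) { millisTicks++; lastMillis = m; }
            if (n != lastNanos)  { nanoTicks++;   lastNanos = n; }
        }
        System.out.println("currentTimeMillis() updates in 100 ms: " + millisTicks);
        System.out.println("nanoTime() updates in 100 ms:          " + nanoTicks);
    }
}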
4  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-16 20:33:51
I wrote a small program which does raycasting through a 3D texture.

1. Find MRI image.


2. Split it up into individual frames.

3. Put frames in folder of program.

4. Profit, in the form of broccoli!
5  Java Game APIs & Engines / OpenGL Development / Re: Drawing big Objects far away on: 2014-07-15 17:48:18
What exactly is the "Outerra way"?
6  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-15 06:15:28
I wrote a rant.
7  Discussions / Miscellaneous Topics / A rant on OpenGL's future on: 2014-07-15 06:10:26
Some background

TL;DR: OpenGL has momentum. The number of games utilizing it is steadily increasing, which has forced GPU vendors to fix their broken drivers. Compared to 2 years ago, the PC OpenGL ecosystem is looking much better.

Almost everyone on this forum has at some point worked with OpenGL, either directly or indirectly. Some of us like to go deep and use OpenGL directly through LWJGL or LibGDX, while others (probably a majority) prefer the increased productivity of abstractions over OpenGL like LibGDX's 2D support, JMonkeyEngine or even Java2D with OpenGL acceleration. In essence, if you're writing anything more advanced than text-console games in Java, you're touching OpenGL in one way or another. Outside of Java, OpenGL is used mainly on mobile phones, as both Android and iOS support it, but a number of AAA PC games have recently used OpenGL as well.

  • The id Tech 5 engine is based solely on OpenGL and is used for RAGE and the new Wolfenstein game, with two more games in development using the engine.
  • Blizzard has long supported OpenGL as an alternative to DirectX in all their games, so that Mac and Linux users can play them.
  • Valve is pushing a move to OpenGL. They're also porting the Source engine to OpenGL, and some of the latest Source games (Dota 2, for example) default to OpenGL instead of DirectX.

These are relatively recent events that have essentially started a chain reaction of improvements in OpenGL support throughout the gaming industry. The push by developers to support or even move to OpenGL has had a huge impact on OpenGL driver quality and on how fast new extensions are implemented by the three graphics vendors. Some of you may remember the months following the release of RAGE, when people with AMD cards had a large number of issues with the game, and RAGE was not exactly using cutting-edge features of OpenGL. During the development of We Shall Wake I've had the pleasure of encountering a large number of driver bugs, but I've also gained a very interesting perspective on how the environment has changed.

  • Nvidia's OpenGL drivers have always been of the highest quality among the three vendors, so there was never much to complain about here. My only complaint is that it's practically impossible to report driver bugs to them, as they never respond to anything. That's a bit annoying, since when you actually do find a bug it's almost impossible to get your voice heard (at least as a non-professional graphics programmer working on a small project).
  • AMD's drivers are significantly better today compared to a year or two ago, and almost all the latest important OpenGL extensions are supported. Their biggest problem is that they lag behind slightly with OpenGL extensions, leading to some pretty hilarious situations like AMD holding a presentation for optimized OpenGL rendering techniques that are only supported by their competitors.
  • Even more impressive are Intel's advances. Around a year ago I had problems with features that dated back to OpenGL 1.3. Their GLSL compiler was a monster with bugs that should've been discovered within hours of release. Even worse, they were very quick to discontinue driver development as soon as a new integrated GPU line was released. Today, they have a respectable OpenGL 4 driver compatible with all their OGL4-capable integrated GPUs, and they also support a majority of the new important extensions. Intel also takes the prize for best developer support, as I have reported 3 bugs which have all been fixed in the next driver release.
  • OpenGL on OSX has also gotten a big improvement lately. The latest drivers support OpenGL 4.1 on all GPUs that have the required hardware, but most cutting-edge features are still missing.




What's wrong with OpenGL?

TL;DR: OpenGL is horribly bloated. For the sake of simpler, less buggy and faster drivers that can be developed more quickly, they need to get rid of unnecessary functionality.

We'll start by taking a look at OpenGL's past. OpenGL's main competitor, DirectX, has traditionally had (and still has) a large market share, and not without good reason. A big difference between the two is how they handle legacy functionality. DirectX does not carry a significant amount of backwards compatibility. The most important transition happened between DirectX 9 and 10: they completely remade the API from the ground up to better fit the new generation of GPUs with unified architectures that were emerging. This was obviously a pain in the ass for developers, and many games are still being developed with DirectX 9, but it was a very good choice. Why? Because the alternative was what OpenGL did. First of all, OpenGL 3 (the functional equivalent of DirectX 10) was delayed for several months, allowing DirectX to gain even more of a head start. Secondly, instead of starting from scratch, they decided to deprecate old functionality and eventually remove it in later versions. Sounds like a much easier transition for developers, right? Here's the catch: they also provided specifications for a compatibility mode. Since all three vendors felt obliged to support this compatibility mode, they could never actually get rid of the deprecated functionality. In essence, OpenGL 3 is OpenGL 2 with the new functionality nastily nailed to it.

The horror story continued with OpenGL 4 and the new revolutionary extensions that will make up the future OpenGL 5. OpenGL is ridiculously bloated with functionality that hasn't existed in hardware for over 10 years. Nvidia, AMD and Intel are still emulating completely worthless functionality with hidden shaders, like fixed-function multitexturing and the built-in OpenGL lighting system. Implementing and maintaining these functions for every new GPU they release is a huge waste of resources for the three vendors, and it's one of the sources of the traditionally bad driver support OpenGL has had. It was simply not worth it until more games using OpenGL started popping up. A fun fact is that Apple actually decided not to support the compatibility mode, so to access OGL3+ on OSX you need to specifically request a context without compatibility mode.

Sadly, the existence of the compatibility mode has encouraged many important industrial customers of the graphics card vendors (CAD and other 3D programs) to become dependent on it, so it's essentially here to stay. So we have three vendors, each with their own ridiculously massive, unmaintainable driver, and we just keep getting more and more functionality. Right now, between DirectX 11 and 12, we're seeing a shift in how things are done in hardware similar to the one between DirectX 9 and 10. Interestingly, OpenGL is leading here thanks to extensions that expose these new hardware features. They're available right now from all vendors with the latest beta drivers, except on Intel, which is lacking a few of them. In essence, we already have the most important features of a theoretical OpenGL 5.

Here's what's wrong with OpenGL at the moment: there are too many ways of doing the same thing. Let's say you want to render a triangle. Here are the different ways you can upload the exact same vertex data to OpenGL (a short sketch contrasting the per-frame re-upload approach with the persistent mapping one follows the list).

Fixed functionality:

  • Immediate mode with glBegin()-glEnd(). Generally slow, but easy to use. (1992)
  • Vertex arrays with data reuploaded each frame. Faster but still slow as data is reuploaded each frame. (1997)
  • Create a display list. Fastest on Nvidia hardware for static data. (1997)
  • Upload to VBO with glBufferData() each frame. Generally stable performance, but slow due to additional copies and complicated memory management in the driver. (2003)
  • Allocate VBO once, upload to VBO with glBufferSubData() each frame. Slow if you're modifying the same buffer multiple times per frame. Also requires copying of data. (2003)
  • Map a VBO using glMapBuffer() and write to the mapped memory. Avoids an extra copy of the data, but forces synchronizations between the GPU, driver thread and game thread. (2003)
  • Map a VBO using glMapBufferRange() with GL_MAP_UNSYNCHRONIZED_BIT and handle synchronization yourself. Avoids extra copy and synchronization with the GPU, but still causes synchronization between the driver thread and the game thread. (2008)
  • Allocate persistent coherent buffer, map once and handle synchronization yourself. No extra copy, no synchronization. Allows for multithreading. (2013)
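To make the contrast concrete, here is a rough LWJGL-style sketch of the per-frame re-upload approach next to the persistent mapping one. This is my own illustration, not engine code; exact method overloads differ between LWJGL versions, so treat the calls as indicative rather than copy-paste ready.

import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.opengl.GL44.*;
import java.nio.ByteBuffer;

public class VertexUpload {

    // (2003) Re-upload the whole buffer every frame. Simple and stable, but the
    // driver has to copy the data and juggle buffer memory behind your back.
    public static void uploadEachFrame(int vbo, ByteBuffer vertexData) {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STREAM_DRAW);
    }

    // (2013) Persistent + coherent mapping: allocate immutable storage once, map it
    // once, then just write into the returned ByteBuffer every frame. You are now
    // responsible for synchronization yourself (e.g. fences and multi-buffering).
    public static ByteBuffer mapPersistently(int vbo, long sizeInBytes) {
        int flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferStorage(GL_ARRAY_BUFFER, sizeInBytes, flags);
        return glMapBufferRange(GL_ARRAY_BUFFER, 0, sizeInBytes, flags, null);
    }
}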

So we literally have 8 different ways of uploading vertex data to the GPU, and the performance of each method depends on the GPU vendor and driver version. It took me years to learn which one to use for which kind of data, which ones are fast on which hardware, and which ones to avoid in which cases. Today, all but the last one are completely redundant. They simply complicate the driver, introduce more bugs in the features that matter and increase development time for new drivers. We literally have code from 1992 (the year I was born, I may add) lying right next to the most cutting-edge method of uploading data to OpenGL from multiple threads while avoiding unnecessary copies and synchronization. It's ridiculous. The same goes for draw commands. The non-deprecated draw commands currently in OpenGL 4.4 (+ extensions):

  • glDrawArrays
  • glDrawArraysInstanced
  • glDrawArraysInstancedBaseInstance
  • glDrawArraysIndirect
  • glMultiDrawArrays
  • glMultiDrawArraysIndirect
  • glDrawElements
  • glDrawRangeElements
  • glDrawElementsBaseVertex
  • glDrawRangeElementsBaseVertex
  • glDrawElementsInstanced
  • glDrawElementsInstancedBaseVertex
  • glDrawElementsInstancedBaseVertexBaseInstance
  • glDrawElementsIndirect
  • glMultiDrawElementsBaseVertex
  • glMultiDrawElementsIndirect
  • glMultiDrawArraysIndirectBindless <----
  • glMultiDrawElementsIndirectBindless <----

Only the last two functions, marked with arrows, are necessary to do everything the above commands do. This bloat needs to go. Oh, and here's another funny piece of information: GPUs don't actually have texture units the way OpenGL exposes them anymore. We could immediately deprecate texture units as well and move over to bindless textures any time we want. Imagine the resources the driver developers could spend on optimizing functionality that is actually useful instead of maintaining legacy functions, not to mention the smaller number of bugs, as there would be fewer functions to have bugs in the first place.
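For reference, here is what feeding one of those indirect functions looks like, as a hypothetical Java/LWJGL-flavoured sketch of my own (the field order follows the GL spec's DrawElementsIndirectCommand struct): you fill a buffer with one five-integer record per draw and issue a single call, instead of one glDraw* call per object.

import java.nio.IntBuffer;
import org.lwjgl.BufferUtils;

public class IndirectCommands {

    // One DrawElementsIndirectCommand record per draw:
    //   count, instanceCount, firstIndex, baseVertex, baseInstance
    public static IntBuffer build(int[][] draws) {
        IntBuffer cmds = BufferUtils.createIntBuffer(draws.length * 5);
        for (int[] d : draws) {
            cmds.put(d[0]); // count         - number of indices for this draw
            cmds.put(d[1]); // instanceCount - how many instances to draw
            cmds.put(d[2]); // firstIndex    - offset into the index buffer
            cmds.put(d[3]); // baseVertex    - value added to every index
            cmds.put(d[4]); // baseInstance  - offset for instanced attributes
        }
        cmds.flip();
        // Upload cmds into a buffer bound to GL_DRAW_INDIRECT_BUFFER, and a single
        // glMultiDrawElementsIndirect(...) call then replaces draws.length draw calls.
        return cmds;
    }
}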





Why fix what isn't broken?

Competition. Mantle is a newcomer in the API war, but it has already gained developer support for many released and upcoming games thanks to its more modern, simpler API that is a better fit for the GPU hardware of today (well, and probably a large amount of money from AMD). DirectX 12 will essentially be a cross-vendor clone of Mantle. Yes, OpenGL is ahead of DirectX by far right now thanks to extensions, but that won't last forever unless they can keep the API as simple, straightforward, fast and bug-free as the competition's. We're still behind Mantle when it comes to both functionality and simplicity. OpenGL is too complicated, too bug-ridden and too vendor-dependent. Unless OpenGL 5 wipes the slate clean and starts from scratch, it's time to start moving over to other APIs where the grass is greener.

DirectX is a construction company that builds decent buildings but tears them down every other year to build new, better ones. Mantle is a shiny new sci-fi prototype building. OpenGL started as a tree house 20 years ago and has had new functionality nailed onto it for way too long, so technically it's just as advanced as DirectX and Mantle; it's just that it's all attached to a frigging tree house, so it keeps falling apart and is basically unfixable.
8  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-14 10:50:12
Appreciate the pendulum in the gameplay haha Grin
Unfortunately I couldn't test the game as I am on Java 6 (version 7 is only available from OS X Lion onwards :c ), so I am judging it by the YouTube video.
The game looks glorious, love the art style, the whole thing looks like AAA to me Tongue
Those metal clang sound effects on hit really add immersion, they make you believe the models are really made out of metal.
The animations are awesome, are they hand-made, downloaded or recorded?
Also, will there be multiplayer Roll Eyes? And which platforms do you plan to support? Maybe go all the way for PS4 and XBone? THAT would be awesome.
Good luck!
Thanks!

The animations are handmade by SkyAphid.

We have plans for things like a 4-player co-op mode against hordes of AI robots. We only intend to support PC, as we don't have any experience with console development. Controller support and single-machine 4-player split screen are planned though.
9  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-13 19:35:45
Won't run, I assume this is a driver problem again.
http://pastebin.com/i6W86UkM

The video looks amazing, but where do I get the neural accelerator implant to be able to react fast enough?

Looks like your GPU/driver does not support OpenGL 3.2. What GPU do you have?
10  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-12 22:15:11
I've sent you a PM, SHC.
11  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-12 19:42:05
Got this error after clicking Play.

Full Log file

My GPU:
NVIDIA GeForce 210 with latest drivers.

Either your drivers are outdated, or the driver erroneously reports that it supports gather4 when it doesn't. I'll take a look at it when I get home.

EDIT: Try replacing tex2Dgather() with textureGather() in merge.frag and see if it works. Never mind, the shaders are packed into the exe/jar.
12  Games Center / Showcase / We Shall Wake demo (v6.0) on: 2014-07-12 15:53:36


Hello, everyone. I thought it was about time I put up what I've been working on for almost two years now. I'm the graphics and physics programmer for this project. There are two programmers on the project: me and SkyAphid, who's not very active on this forum anymore. We also have two people working on character modelling, a concept artist and an environmental designer.



ABOUT WE SHALL WAKE

Thousands of years after the extinction of the human race, and hundreds after the extinction of an alien race simply known as the Creators, you wake up in a desolate tower inhabited by robots incapable of true emotion, acting simply as simulations for the long gone forerunners.

You are a MORS model, the most advanced machine ever created - capable of true human emotion, and possessing a bio-mechanical body that can cause mass destruction. You must choose whose side you will take in a war between three factions...if any.

However, you are not alone - other MORS models have different ideals and philosophies they wish to enforce, and it's up to you to decide whether or not they are right.

We Shall Wake is a high-speed action game being developed by Circadian.

We're aiming for high-speed gameplay with flexible and versatile movement and combat systems.
13  Discussions / Miscellaneous Topics / Re: [Girls] How to completely block them from our lives? on: 2014-07-12 15:49:00
This thread really needs to die...
Maybe if you keep posting in it it'll die.
14  Java Game APIs & Engines / OpenGL Development / Re: Somehow using GL_REPEAT on a sub-texture in a sprite-sheet? on: 2014-07-10 00:03:52
Yes.
15  Java Game APIs & Engines / OpenGL Development / Re: Somehow using GL_REPEAT on a sub-texture in a sprite-sheet? on: 2014-07-09 23:55:19
If the hardware supports texture arrays, it also supports non-power-of-two textures with filtering.
16  Java Game APIs & Engines / OpenGL Development / Re: Somehow using GL_REPEAT on a sub-texture in a sprite-sheet? on: 2014-07-09 20:55:05
If you do this and want mipmaps to function, you'll get discontinuities in your texture coordinates, so you need to manually calculate your texture gradients from the original texture coordinates:

// Note: the sampler declaration was implied but missing in the original snippet.
uniform sampler2D mySampler; // the sprite sheet / atlas texture
uniform vec2 uvMin;          // top-left corner of the sub-texture within the atlas
uniform vec2 uvMax;          // bottom-right corner of the sub-texture

in vec2 texCoords;           // "virtual" repeating coordinates, e.g. 0..4 tiles four times

out vec4 fragColor;

void main(){
    // Wrap the coordinates into the sub-texture's region of the atlas...
    vec2 modifiedTexCoords = uvMin + fract(texCoords) * (uvMax - uvMin);
    // ...but take the mipmap gradients from the original, continuous coordinates so the
    // fract() discontinuities don't make the hardware pick the wrong mip level at the seams.
    fragColor = textureGrad(mySampler, modifiedTexCoords, dFdx(texCoords), dFdy(texCoords));
}
17  Game Development / Game Mechanics / Re: Any multithreading people out there? Multithreaded loading of a huge array. on: 2014-07-07 20:19:47
Loading files on multiple threads is a no-no, that's the fastest way to trash your performance as the disk is a serial device, best used in long continuous read/writes... skipping around on the disk is like skipping around in memory and nullifying your cache except thousands of times worse Sad

This is completely false in the first place. Both HDDs and SSDs perform better when you read multiple things from them using multiple threads. For HDDs, this is because the drive can optimize seeking by reordering the reads so that the read head takes a more efficient "path". It can also queue up and pipeline the requests better, similar to how OpenGL works.

CPU: request file-------------------------------file received, request next--------------
HD:  -----------start reading~~~~~~~~~~~~~~~done----------------start reading~~~~~~~~~~~~

As you can see, a small stall occurs once the HD has finished reading a file. If you have multiple threads, the drive immediately knows what to do next.

For an SSD, using multiple threads is even more beneficial. SSDs have the amazing ability to read from and write to multiple parts of themselves in parallel, so if the files are in different places on the SSD, you can basically read several times faster by having multiple threads request files.

Look up any hard drive or SSD benchmark and you'll see that they test with different "queue depths", i.e. how many requests are kept in flight at once when reading or writing.
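As a rough illustration (my own sketch, not from the thread; paths and thread count are placeholders), this is all it takes to give the drive several outstanding requests to reorder and overlap:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelLoader {

    // Submit every file as its own task; the OS and the drive can then reorder and
    // overlap the reads instead of idling between sequential requests.
    public static Map<Path, byte[]> loadAll(List<Path> files, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            Map<Path, Future<byte[]>> pending = new HashMap<Path, Future<byte[]>>();
            for (final Path p : files) {
                pending.put(p, pool.submit(new Callable<byte[]>() {
                    public byte[] call() throws IOException {
                        return Files.readAllBytes(p);
                    }
                }));
            }
            Map<Path, byte[]> loaded = new HashMap<Path, byte[]>();
            for (Map.Entry<Path, Future<byte[]>> e : pending.entrySet()) {
                loaded.put(e.getKey(), e.getValue().get()); // blocks until that file's read finishes
            }
            return loaded;
        } finally {
            pool.shutdown();
        }
    }
}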
18  Game Development / Newbie & Debugging Questions / Re: Ordering draw calls for alpha blending on: 2014-07-05 22:16:48
Most games don't have anything transparent except for particle effects, e.g. smoke, fire, sparks etc, which are easier to sort as they are flat quads facing the camera (so no impossible scenarios can occur). Games generally completely avoid having semi-transparent geometry due to the complexity of handling that. The closest thing available is alpha testing, which isn't really transparency. It's exactly what Longarmx wrote about, where you discard pixels that are "too transparent", effectively achieving binary transparency (either fully opaque or fully transparent).
19  Game Development / Newbie & Debugging Questions / Re: Screen Capture To Texture (OpenGL / Slick2D) on: 2014-07-04 19:41:15
I need a way to call back to the captured screen image though while I'm in a screen transition loop, so I figured having a Texture object to reference would be the best way.

Capturing the screen basically helps me avoid redrawing every object each step through the transition loop; instead I just draw the captured screen.

- Steve
You're missing my point. You don't have to "capture" the screen to a texture. You can render everything onto a texture in the first place instead of rendering to the screen.
20  Game Development / Newbie & Debugging Questions / Re: Screen Capture To Texture (OpenGL / Slick2D) on: 2014-07-04 18:44:20
Just render the scene to a texture using an FBO instead of reading the data back to RAM and then re-uploading it, which is an order of magnitude or so slower.
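A rough LWJGL-style sketch of that idea (my own illustration; the texture format and filtering are arbitrary placeholders): create a texture, attach it to an FBO, draw the scene while the FBO is bound, and then sample that texture during the transition instead of reading pixels back.

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

public class RenderTarget {

    // Creates an FBO with a color texture attachment of the given size.
    public static int createFboWithColorTexture(int width, int height) {
        int colorTex = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Allocate storage without uploading any data.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);

        int fbo = glGenFramebuffers();
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                GL_TEXTURE_2D, colorTex, 0);
        // Render the scene here while the FBO is bound, then switch back:
        glBindFramebuffer(GL_FRAMEBUFFER, 0); // default framebuffer again
        return fbo;
    }
}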
21  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-04 04:51:39
The System Idle Process is there to keep the CPU idle when the scheduler finds no threads ready to execute. That's why it's always shown as the percentage not being used, as there must always be a thread running on a CPU at all times. More information on Wikipedia.
From that link:

Quote
Because of the idle process's function, its CPU time measurement (visible through Windows Task Manager) may make it appear to users that the idle process is monopolizing the CPU. However, the idle process does not use up computer resources (even when stated to be running at a high percent), but is actually a simple measure of how much CPU time is free to be utilized. If no ordinary thread is able to run on a free CPU, only then does the scheduler select that CPU's System Idle Process thread for execution. The idle process, in other words, is merely acting as a sort of placeholder during "free time".

In Windows 2000 and later the threads in the System Idle Process are also used to implement CPU power saving. The exact power saving scheme depends on the operating system version and on the hardware and firmware capabilities of the system in question. For instance, on x86 processors under Windows 2000, the idle thread will run a loop of halt instructions, which causes the CPU to turn off many internal components until an interrupt request arrives. Later versions of Windows implement more complex CPU power saving methods. On these systems the idle thread will call routines in the Hardware Abstraction Layer to reduce CPU clock speed or to implement other power-saving mechanisms.
You're right that it is indeed a real thread (which I didn't know), but it's not exactly a normal thread. My main point was that neither the CPU nor the GPU is unnecessarily burning energy because that's supposed to be good for them. CPUs and GPUs have massive power-saving functions so they don't have to run at 100% load all the time, which includes shutting down unused parts of the processor or even complete cores and lowering the clock speed to a fraction of what they can run at. My CPU idles at room temperature and my GPUs at 35 degrees. My CPU can drop down to 800 MHz instead of running at 3.9 GHz all the time. My GPUs' cores drop down to 135 MHz instead of 1.2 GHz and their memory to 162 MHz from 1.75 GHz. Hardware makers are doing everything they can to decrease power usage and heat generation to get better battery life and smaller devices.
22  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-04 02:32:02
I think (don't quote me on it) CPUs and GPUs last longer when they're forced to always run at 100%. Something about transistor load. I don't know the details or if I am even right, I just remember reading this somewhere like a decade ago.

Windows does this as well: if you look at the task manager on older versions of Windows, there's a "System Idle Process" that's always maxed out at whatever percentage of the processor is currently not being used. Windows 7 (and possibly Vista) don't show it anymore though.
I find it hard to believe that this is true. If it were, you'd be wasting a shitload of money and/or battery life on that "idle process". The System Idle Process is simply there to show you how much of the time the CPU idles (and it's still there in Windows 7).
23  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-04 00:48:50
Implemented initial and hacky texture streaming. At start the first 4 mip levels are skipped and texture loading is almost instant (instead of seconds). When the game is started I just load one texture per frame; this uses about 10 ms extra and all textures are loaded after ~300 frames. The next task would be to move this to a worker thread. I also need to implement some kind of importance ranking and a texture memory budget limiter so it will scale to thousands of high-res textures.
Ah, so through the use of a SharedDrawable, two threads can call GL code at the same time, of course making sure no problems arise from usage of the same resources?
Relevant to both of you: http://www.java-gaming.org/topics/tutorial-stutter-free-texture-streaming-with-lwjgl/32571/view.html

Have you tested any loading heuristic based on usage, distance or texture importance? How about when you want to hard-limit the amount of texture memory used?

Not really. I only base it on the time since it was last used. I leave the VRAM usage constraint to the user in the form of a texture quality setting. It's not the end of the world if you run out of VRAM; usually the driver just swaps resources that haven't been used for a while out to RAM.
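As a rough illustration of that "most recently needed first" idea (purely my own sketch, not WSW code): keep a last-used timestamp per texture, and each frame pick at most one not-yet-resident texture to stream in, preferring the ones touched most recently.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class StreamingQueue {
    private final Map<String, Long> lastUsed = new HashMap<String, Long>();
    private final Set<String> resident = new HashSet<String>();

    // Call whenever a texture is needed for rendering this frame.
    public void touch(String texture, long frame) {
        lastUsed.put(texture, frame);
    }

    // Returns the texture to stream in this frame, or null if everything needed is resident.
    public String pickNextToLoad() {
        String best = null;
        long bestFrame = Long.MIN_VALUE;
        for (Map.Entry<String, Long> e : lastUsed.entrySet()) {
            if (!resident.contains(e.getKey()) && e.getValue() > bestFrame) {
                best = e.getKey();
                bestFrame = e.getValue();
            }
        }
        if (best != null) resident.add(best); // the caller loads the higher mip levels for it
        return best;
    }
}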
24  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-04 00:45:05
Precision? It doesn't really matter.
25  Game Development / Performance Tuning / Re: Efficient Fog of War on: 2014-07-03 23:12:52
Have you even tried this and timed it? 10 000 tiles is almost nothing.
26  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-03 20:47:14
Implemented initial and hacky texture streaming. At start the first 4 mip levels are skipped and texture loading is almost instant (instead of seconds). When the game is started I just load one texture per frame; this uses about 10 ms extra and all textures are loaded after ~300 frames. The next task would be to move this to a worker thread. I also need to implement some kind of importance ranking and a texture memory budget limiter so it will scale to thousands of high-res textures.
Ah, so through the use of a SharedDrawable, two threads can call GL code at the same time, of course making sure no problems arise from usage of the same resources?
Relevant to both of you: http://www.java-gaming.org/topics/tutorial-stutter-free-texture-streaming-with-lwjgl/32571/view.html
27  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-03 17:56:53
Your GPU is currently the limit here. When you call Display.update() the driver makes sure that the GPU hasn't fallen too far behind. If it has, then the driver forces the CPU to wait until the GPU has caught up. Most drivers seem to implement this with a busy loop that uses 100% on one CPU core.
Mine (GTX 680) even spawns an additional thread so that 2 cores are 100% busy even if I limit the FPS to 60. However, what I actually wanted to express is that looking at the CPU load while the game is running tells you nothing about the efficiency of the code. Or in other words: having one core fully loaded isn't necessarily a sign of bad coding. Cores are there to be used.
That's a feature of the Nvidia driver, not the GPU. Intel also has this feature, and I believe AMD does as well. They basically just append all OpenGL commands to a queue that the driver thread reads from and runs. It essentially makes most OpenGL commands free for the game's thread and gives you some extra CPU time to play with. The problem with this is mapping buffers. Every time you call glMapBuffer() or any of its variations (regardless of whether you use GL_MAP_UNSYNCHRONIZED_BIT or not), the game's thread has to wait for the driver's thread to finish, so most of the benefit of the extra thread is lost. This is why the new persistently mapped buffers are so awesome: you can map a buffer once and keep it mapped forever, so you never have to synchronize with the driver's thread.
28  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-03 15:41:46
As long as your code doesn't pause while waiting for a vertical sync or a Thread.sleep(), you'll always end up with at least one CPU core used to ~100%. For that, it doesn't matter how "efficient" the code is. If it's more efficient, it might output higher frame rates but that doesn't change the cpu load.
If you are walking for one hour, you are walking for one hour. It doesn't matter if you are walking pretty fast or crawling on your knees. The distance after one hour will differ, but the actual load (your body used to 100% for moving around) doesn't differ.
To clarify on this...

Your GPU is currently the limit here. When you call Display.update() the driver makes sure that the GPU hasn't fallen too far behind. If it has, then the driver forces the CPU to wait until the GPU has caught up. Most drivers seem to implement this with a busy loop that uses 100% on one CPU core.
29  Game Development / Performance Tuning / Re: Efficient Fog of War on: 2014-07-03 15:33:59
That's nothing; you're prematurely optimizing. Test it, time it, and if it's slow, fix it. The only thing you should optimize right now is possibly pooling your GridPoint2 instances so you don't generate as much garbage.
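A minimal pooling sketch for that (my own illustration, assuming LibGDX's GridPoint2; LibGDX also ships a generic com.badlogic.gdx.utils.Pool you could use instead):

import java.util.ArrayDeque;
import com.badlogic.gdx.math.GridPoint2;

public class GridPoint2Pool {
    // Simple free list: obtain() reuses an old instance when possible,
    // free() returns it when you're done, so the fog-of-war pass creates
    // (almost) no garbage per frame.
    private final ArrayDeque<GridPoint2> free = new ArrayDeque<GridPoint2>();

    public GridPoint2 obtain(int x, int y) {
        GridPoint2 p = free.isEmpty() ? new GridPoint2() : free.pop();
        p.x = x;
        p.y = y;
        return p;
    }

    public void free(GridPoint2 p) {
        free.push(p);
    }
}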
30  Game Development / Performance Tuning / Re: Efficient Fog of War on: 2014-07-03 15:16:38
How large area are we talking about here? How many tiles per actor? How many actors?