  Show Posts
1  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-07-24 01:16:13
I get that you don't think it's worth higher hardware requirements, but you're really not speaking for everyone. The primary reason I targeted OGL3 for We Shall Wake is performance. The only OpenGL 4 features I take advantage of, when they're available, are performance and memory optimizations. By limiting the game to newer hardware I reduce the performance requirements of the game. Also, we do have modelers, so our game will actually look good if we use expensive shaders and postprocessing.

The pollution argument is ridiculous. Why would you encourage people to waste energy on drawing pretty triangles at all if you care so much about pollution? Newer hardware is more energy efficient.

Finally, I couldn't care less about whether OpenGL is good enough for other applications, or even for you. It's not good enough for me, and I'm sure a lot of game developers would agree. It'd be extremely naive of me to think that you or the CAD industry will care about what I want. This is a game development forum. If WE don't even say what we want, no one else is gonna do it for us.
2  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-23 16:34:11
Freezes during the loading screen, or slightly after loading but before any gameplay.

Quote
Dungeon successfully generated.
Uncaught exception in draw (main) thread:
java.lang.NullPointerException
   at engine.tile.TileRenderer.nextFrame(TileRenderer.java:302)
   at engine.WSWGraphics.renderScene(WSWGraphics.java:579)
   at engine.WSWGraphics.access$2(WSWGraphics.java:556)
   at engine.WSWGraphics$1.run(WSWGraphics.java:285)
   at net.mokyu.threading.MultithreadedExecutor.processTask(MultithreadedExecutor.java:157)
   at net.mokyu.threading.MultithreadedExecutor.run(MultithreadedExecutor.java:131)
   at engine.WSWGraphics.render(WSWGraphics.java:544)
   at game.state.StateDungeon.draw(StateDungeon.java:144)
   at game.core.Core.render(Core.java:226)
   at game.core.framework.GameFramework.step(GameFramework.java:53)
   at game.core.Core.run(Core.java:170)
   at game.core.Core.main(Core.java:143)
Exception in thread "main" java.lang.IllegalStateException: Error ending frame. Not all tasks finished.
   at engine.profile.GPUProfiler.endFrame(GPUProfiler.java:66)
   at game.state.StateDungeon.draw(StateDungeon.java:153)
   at game.core.Core.render(Core.java:226)
   at game.core.framework.GameFramework.step(GameFramework.java:53)
   at game.core.Core.run(Core.java:170)
   at game.core.Core.main(Core.java:143)

No gamepad support. Now come on guys - you really wanna play a game like this with a keyboard? ^^'

Long-time fan of these games: DMC 3, 4, Bayonetta, Prototype, Infamous, MGS Rising. Actually did some DMC 3 and mostly DMC 4 speedruns back then, so this excites me. Would love a well-written story of course... x)
Null pointer exception bug fixed! Way too simple an error... ._. If possible, try updating your graphics drivers: the bug was in the fallback code for when a newer function isn't supported, so if the latest driver supports that function it should work.

I believe gamepad support didn't make it into the latest version. We're currently working on some bug fixing; me on a few new graphics engine bugs that emerged, and SkyAphid on the sound engine.
3  Discussions / Business and Project Management Discussions / Re: what now? on: 2014-07-23 16:28:47
Start coding on it? =D Half a year is a lot of time xD
I have coded a few tests to check that it was actually possible to get good enough performance and all. =P
4  Discussions / Business and Project Management Discussions / Re: what now? on: 2014-07-23 14:12:36
A month?? NO! God DAMMIT. Of course, right when school restarts. Maybe I'll get lucky the 5th time around.
Yes, that's right. It's happened 4 times now, always at a time when I can't do it. In a row. Grr Roll Eyes
I've had an idea for a completely EPIC game for over half a year now. Tongue
5  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-07-23 14:05:28
As you use some bleeding edge features, you know you're in the front line, you find bugs, it's not surprising.
The only bugs I've found in cutting edge features are the following:
 - Nvidia: When persistent buffers were first released, they had forgotten to remove the error check that forbids using a buffer while it is mapped (which was the whole point of persistent buffers), so they were useless. (Fixed)
 - AMD: BPTC texture compression works, but can't be uploaded with glCompressedTexSubImage() as that throws an error.
 - Intel: glMultiDrawIndirect() does not work if you pass in an IntBuffer with commands instead of uploading them to the indirect draw buffer VBO.

All other bugs were in really old features:
 - Intel: Rendering to a cube map using an FBO always made the result end up on the first face (X+), regardless of which face you bound to the FBO. (Fixed)
 - Intel: Their GLSL compiler was pretty much one huge bug. For example, "vec3 array[5];" was supported, while the equivalent "vec3[5] array;" threw unrelated errors all over the place. (Fixed)
 - Intel: 4x3 matrices multiplied with a vec4 resulted in a vec4 when they should result in a vec3. (Fixed)
 - AMD: Sometimes on older GPUs the result of FBO rendering becomes grayscale. I have no idea. (Fixed in newer GPUs at least)
 - AMD: Allocating texture mipmaps in reverse order (allocate level 10, upload level 10, allocate level 9, upload level 9, ...) hard locks the GPU. You need to allocate all mipmaps before you start uploading to them.
 - AMD: Not sure if this is a bug, but depth testing against a depth buffer with depth writing disabled and reading the same depth buffer in the shader causes undefined results on AMD cards only.

These are the ones I can remember right now anyway. This is why I like Nvidia's drivers, by the way.

Almost everyone on this forum has at least one point in their life worked with OpenGL, either directly or indirectly. Some of us like to go deep and use OpenGL directly through LWJGL or LibGDX
LibGDX is a middle-level API, and there are other actively maintained Java bindings for the OpenGL and/or OpenGL-ES APIs, including JGLFW (used in LibGDX), Android OpenGL and JOGL (JogAmp).

while others (a probable majority) prefer the increased productivity of abstractions of OpenGL like LibGDX's 2D support, JMonkeyEngine or even Java2D with OpenGL acceleration.
The built-in Java2D OpenGL acceleration isn't very well optimized, and it makes Java2D performance inconsistent. Slick2D, Golden T Game Engine (GTGE) and GLG2D are more efficient.
Those examples were not meant to be exhaustive. I'm only saying that no matter how you render stuff, OpenGL matters to you, since pretty much everything (except unaccelerated Java2D, of course) relies on OpenGL.

OpenGL is used a lot in scientific visualization and in CAD software too.

Maybe you should mention that the situation is quite different in mobile environments. Intel hardware is still far behind when it comes to offscreen rendering.
I did mention that OpenGL's use in other places is what made them decide to add so much backwards compatibility.
Sadly, the existence of the compatibility mode has encouraged many important industrial customers of the graphics cards vendors (CAD programs and other 3D programs) to become dependent on the compatibility mode, so it's essentially here to stay.

I can't say much about the mobile market since I've never developed stuff for it, but I do know that OpenGL ES 2 is essentially OpenGL 3 without compatibility mode. Is that what you meant?

I don't think about "market shares" and things like that, but I have worked as an engineer in computer science specialized in 2D/3D visualization for more than 7 years, and many of us are mostly satisfied with the way OpenGL has evolved, even though the slowness of some decision makers was really annoying. I prefer evolution with a certain continuity rather than brutal, questionable changes. Microsoft isn't a good example; it is very good at creating brand new APIs that only work with one or two versions of Windows and then abandoning them. Two programs I worked on already existed in the nineties, and not all corporations can afford to rewrite their rendering code every three years. OpenGL isn't only for games; it can't be designed only to please some game programmers.

I know that almost nobody agrees with me about that here, but I prefer fighting against planned obsolescence. What theagentd wrote isn't apolitical. "Science without conscience is but the ruin of the soul" (François Rabelais).
I'm not entirely sure I understood what you're saying. You're saying that I'm not taking all applications of OpenGL into account?

Spasi talked about Regal. JOGL provides immediate mode and fixed pipeline emulation too, in ImmModeSink and PMVMatrix, which is very useful when writing applications compatible with both OpenGL and OpenGL-ES.
This is the exact reason why we don't need driver-implemented immediate mode anymore. It took me just a few hours to write a decent clone of immediate mode with support for matrices, colors and texture coordinates. It even takes advantage of persistently mapped buffers to upload the vertex data if they're supported by the driver. Sure, it's probably not as fast as the driver-optimized immediate mode, but frankly, if you're using immediate mode you don't care about CPU performance in the first place.
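Roughly, the idea looks like this (a minimal sketch, not the actual code; the class and method names are made up, and the matrix stack and persistent-mapping paths are left out):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

// Immediate-mode clone: batch vertices on the CPU, upload once per flush.
// Requires a current GL context when constructed.
public class ImmediateBatch {
    private final FloatBuffer data = BufferUtils.createFloatBuffer(8 * 4096);
    private final int vbo = glGenBuffers();
    private float r = 1, g = 1, b = 1, u, v;
    private int vertexCount;

    public void color(float r, float g, float b) { this.r = r; this.g = g; this.b = b; }
    public void texCoord(float u, float v) { this.u = u; this.v = v; }

    // Equivalent of glVertex3f(): append one interleaved vertex to the batch.
    public void vertex(float x, float y, float z) {
        data.put(x).put(y).put(z).put(r).put(g).put(b).put(u).put(v);
        vertexCount++;
    }

    // Equivalent of glEnd(): upload the whole batch and draw it in one call.
    public void flush(int primitiveType) {
        data.flip();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, data, GL_STREAM_DRAW); // orphan + upload
        // ...set up the interleaved vertex attribute pointers here...
        glDrawArrays(primitiveType, 0, vertexCount);
        data.clear();
        vertexCount = 0;
    }
}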



I believe that the OpenGL drivers have too many features, which slows down their development and increases the number of bugs. If the OpenGL spec only contained the bare minimum required to do everything it currently does, it would shift the load away from the driver developers and let them focus on optimizing the drivers and adopting new functionality more quickly. The older "features" could easily be implemented by, for example, open source libraries that expose an API more familiar to people coming from older versions of OpenGL, and I believe it would not take a significant effort for them to implement the parts they need. The important thing here is that we'd at least get rid of the completely unused game-related features, like fixed-functionality multitexturing and lighting.

I think it's clear that the game industry is pretty open to this mentality of quickly deprecating old features; see how quickly developers jumped onto Mantle, for example. Maybe you're right though. Maybe what we really need isn't a new OpenGL version, but a version of OpenGL that is specifically made for games. But that's pretty much what OpenGL 5 SHOULD be. OpenGL 5 most likely won't have any new hardware features. If they completely drop backwards compatibility, it'd essentially be the same functionality as OpenGL 4 but with an API targeted at games. If the "game version" of OpenGL continued to build on the new OpenGL 5, we'd essentially get what we want from OpenGL 5. Non-game applications could still use OpenGL 4, with new features introduced as extensions over time. This wouldn't necessarily decrease the load on driver developers, but it would make the game version of OpenGL faster to develop and maintain.
6  Discussions / Business and Project Management Discussions / Re: what now? on: 2014-07-23 02:10:59
Here are my 2 cents.

I used to start on new projects on a whim. I'd get an idea and start coding on it the same day. I generally lasted a few weeks or maybe a month before I got stuck or bored and just dropped it. Nowadays when I get an idea I think about it for a long time. Something that sounds awesome when you come up with it but gets really meh after just a few days or weeks isn't really a good idea in the first place. I imagine the players will get bored of the concept even quicker than I will, so why settle for anything less than what you yourself would find interesting in the long run?

TL;DR: Don't jump right in on coding your "great" idea. Instead, let it "boil" for a few weeks and see if it still has you hooked. Basically, have one or two ideas cooking while you work on your current "proven" good idea(s).
7  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-22 11:50:44
Alright, we'll take a look at it! Thanks for testing and including the log!
8  Discussions / General Discussions / Re: What? Your game isn't a pixelated retro masterpiece? on: 2014-07-19 05:09:05
The term seems to be used very ambiguously

Well, everything where you can still see individual pixels...
Sooo... Every single game right now since proper MSAA is too hard to do?
9  Game Development / Performance Tuning / Re: Can someone explain what is going on with this square rooting? on: 2014-07-18 20:28:53
nanoTime() is fairly inaccurate in most cases. It's better to use currentTimeMillis(). (generally)
Really? I have the exact opposite experience. On some Windows machines currentTimeMillis() has an actual granularity of around 15ms, i.e. the timer only updates every 15ms, so you get horrible precision. nanoTime() has always worked reliably for me.
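If you want to check your own machine, a trivial spin loop shows each clock's update step (a quick sketch; the observed step depends on the OS and hardware):

public class TimerGranularity {
    public static void main(String[] args) {
        // Spin until the millisecond clock ticks over and print the step size.
        long t0 = System.currentTimeMillis(), t1;
        while ((t1 = System.currentTimeMillis()) == t0) { /* spin */ }
        System.out.println("currentTimeMillis() step: " + (t1 - t0) + " ms");

        // Same for the nanosecond clock; expect a far smaller step.
        long n0 = System.nanoTime(), n1;
        while ((n1 = System.nanoTime()) == n0) { /* spin */ }
        System.out.println("nanoTime() step: " + (n1 - n0) + " ns");
    }
}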
10  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-16 20:33:51
I wrote a small program which does raycasting through a 3D texture.

1. Find MRI image.


2. Split it up into individual frames.

3. Put frames in folder of program.

4. Profit, in the form of broccoli!
11  Java Game APIs & Engines / OpenGL Development / Re: Drawing big Objects far away on: 2014-07-15 17:48:18
What exactly is the "Outerra way"?
12  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-15 06:15:28
I wrote a rant.
13  Discussions / Miscellaneous Topics / A rant on OpenGL's future on: 2014-07-15 06:10:26
Some background

TL;DR: OpenGL has momentum. The number of games utilizing it is steadily increasing, which has forced GPU vendors to fix their broken drivers. Compared to 2 years ago, the PC OpenGL ecosystem is looking much better.

Almost everyone on this forum has at least one point in their life worked with OpenGL, either directly or indirectly. Some of us like to go deep and use OpenGL directly through LWJGL or LibGDX, while others (a probable majority) prefer the increased productivity of abstractions over OpenGL like LibGDX's 2D support, JMonkeyEngine or even Java2D with OpenGL acceleration. In essence, if you're writing anything more advanced than console games in Java, you're touching OpenGL in one way or another. Outside of Java, OpenGL is used mainly on mobile phones, as both Android and iOS support it, but a number of AAA PC games have also used OpenGL recently.

  • The id Tech 5 engine is based solely on OpenGL and is used for RAGE and the new Wolfenstein game, with two more games in development using the engine.
  • Blizzard has long supported OpenGL as an alternative to DirectX in all their games, to allow Mac and Linux users to play them.
  • Valve is pushing a move to OpenGL. They're porting the Source engine to OpenGL, and some of the latest Source games (Dota 2, for example) default to OpenGL instead of DirectX.

These are relatively recent events that have essentially started a chain reaction of improvements in OpenGL support throughout the gaming industry. The push by developers to support or even move to OpenGL has had a huge impact on OpenGL driver quality and on how fast new extensions are implemented by the 3 graphics vendors. Some of you may remember the months following the release of RAGE, when people with AMD cards had a large number of issues with the game, and RAGE was not exactly using cutting edge features of OpenGL. During the development of We Shall Wake I've had the pleasure of encountering a large number of driver bugs, but I've also had a very interesting perspective on how the environment has changed.

  • Nvidia's OpenGL drivers have always been of the highest quality among the three vendors, so there was never much to complain about here. My only complaint is that it's impossible to report driver bugs to them, as they never respond to anything. This is a bit annoying, since when you actually do find a bug it's almost impossible to get your voice heard (at least as a non-professional graphics programmer working on a small project).
  • AMD's drivers are significantly better today compared to a year or two ago, and almost all the latest important OpenGL extensions are supported. Their biggest problem is that they lag behind slightly with OpenGL extensions, leading to some pretty hilarious situations like AMD holding a presentation for optimized OpenGL rendering techniques that are only supported by their competitors.
  • Even more impressive are Intel's advances. Around a year ago I had problems with features that dated back to OpenGL 1.3. Their GLSL compiler was a monster with bugs that should've been discovered within hours of release. Even worse, they were very quick to discontinue driver development as soon as a new integrated GPU line was released. Today, they have a respectable OpenGL 4 driver compatible with all their OGL4-capable integrated GPUs, and they also support a majority of the important new extensions. Intel also takes the prize for best developer support, as I have reported 3 bugs which have all been fixed in the following driver release.
  • OpenGL on OSX has also gotten a big improvement lately. The latest drivers support OpenGL 4.1 on all GPUs that have the required hardware, but most cutting-edge features are still missing.




What's wrong with OpenGL?

TL;DR: OpenGL is horribly bloated. For the sake of simpler, less buggy and faster drivers that can be developed more quickly, they need to get rid of unnecessary functionality.

We'll start by taking a look at OpenGL's past. OpenGL's main competitor, DirectX, has traditionally had (and still has) a large market share, and not without good reasons. A big difference between the two is how they handle legacy functions. DirectX does not carry a significant amount of backwards compatibility. The most important transition happened between DirectX 9 and 10: they completely remade the API from the ground up to better fit the new generation of GPUs with unified architectures that were emerging. This was obviously a pain in the ass for developers, and many games are still being developed with DirectX 9, but it was a very good choice. Why? Because the alternative was what OpenGL did.

First of all, OpenGL 3 (the functional equivalent of DirectX 10) was delayed for several months, allowing DirectX to gain even more of a head start. Secondly, instead of starting from scratch, they decided to deprecate old functionality and eventually remove it in later versions. Sounds like a much easier transition for developers, right? Here's the catch: they also provided specifications for a compatibility mode. Since all 3 vendors felt obliged to support this compatibility mode, they could never actually get rid of the deprecated functionality. In essence, OpenGL 3 is OpenGL 2 with the new functionality nastily nailed to it. The horror story continued with OpenGL 4 and the new revolutionary extensions that will make up the future OpenGL 5.

OpenGL is ridiculously bloated with functionality that hasn't existed in hardware for over 10 years. Nvidia, AMD and Intel are still emulating completely worthless functionality with hidden shaders, like fixed-functionality multitexturing and the built-in OpenGL lighting system. Implementing and maintaining these functions for every new GPU they release is a huge waste of resources for the three vendors. This is one of the sources of the traditionally bad driver support OpenGL has had; it simply wasn't worth it until more games using OpenGL started popping up. A fun fact is that Apple actually decided not to support the compatibility mode, so to access OGL3+ on OSX you need to specifically request a context without compatibility mode.

Sadly, the existence of the compatibility mode has encouraged many important industrial customers of the graphics card vendors (CAD programs and other 3D programs) to become dependent on it, so it's essentially here to stay. So we have 3 vendors, each with their own ridiculously massive, unmaintainable driver, and we just keep getting more and more functionality. Right now, between DirectX 11 and 12, we're seeing a shift in how things are done in hardware similar to the one between DirectX 9 and 10. Interestingly, OpenGL is leading here thanks to extensions that expose these new hardware features. These are available right now on all vendors with the latest beta drivers, except on Intel, which is lacking a few of them. In essence, we already have the most important features of a theoretical OpenGL 5.

Here's what's wrong with OpenGL at the moment: there are too many ways of doing the same thing. Let's say you want to render a triangle. Here are the different ways you can upload the exact same vertex data to OpenGL.

Fixed functionality:

  • Immediate mode with glBegin()-glEnd(). Generally slow, but easy to use. (1992)
  • Vertex arrays with data reuploaded each frame. Faster but still slow as data is reuploaded each frame. (1997)
  • Create a display list. Fastest on Nvidia hardware for static data. (1997)
  • Upload to VBO with glBufferData() each frame. Generally stable performance, but slow due to additional copies and complicated memory management in the driver. (2003)
  • Allocate VBO once, upload to VBO with glBufferSubData() each frame. Slow if you're modifying the same buffer multiple times per frame. Also requires copying of data. (2003)
  • Map a VBO using glMapBuffer() and write to the mapped memory. Avoids an extra copy of the data, but forces synchronizations between the GPU, driver thread and game thread. (2003)
  • Map a VBO using glMapBufferRange() with GL_MAP_UNSYNCHRONIZED_BIT and handle synchronization yourself. Avoids extra copy and synchronization with the GPU, but still causes synchronization between the driver thread and the game thread. (2008)
  • Allocate a persistent, coherent buffer, map it once and handle synchronization yourself. No extra copy, no synchronization. Allows for multithreading. (2013) (See the sketch after this list.)
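
For reference, here's roughly what that last path looks like through an LWJGL-style binding (a minimal sketch assuming OpenGL 4.4 / ARB_buffer_storage; the fence-based synchronization you'd need before reusing a region is omitted):

import java.nio.ByteBuffer;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL30.*;
import static org.lwjgl.opengl.GL44.*;

public class PersistentVertexBuffer {
    static final long SIZE = 4 * 1024 * 1024; // 4MB of vertex storage

    public static ByteBuffer createAndMap() {
        int flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
        int vbo = glGenBuffers();
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        // Immutable storage, allocated once for the lifetime of the buffer.
        glBufferStorage(GL_ARRAY_BUFFER, SIZE, flags);
        // Map once; the mapping stays valid while the GPU reads from it.
        // Any thread can now write vertex data straight into this ByteBuffer,
        // as long as you fence (glFenceSync/glClientWaitSync) before reusing a region.
        return glMapBufferRange(GL_ARRAY_BUFFER, 0, SIZE, flags, null);
    }
}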

So we literally have 8 different ways of uploading vertex data to the GPU, and the performance of these methods depends on GPU vendor and driver version. It took me years to learn which ones to use for what kind of data, which ones are fast on which hardware, and which to avoid in which cases. Today, all but the last one are completely redundant. They simply complicate the driver, introduce more bugs in the features that matter and increase development time for new drivers. We literally have code from 1992 (the year I was born, I may add) lying right next to the most cutting edge method of uploading data to OpenGL from multiple threads while avoiding unnecessary copies and synchronization. It's ridiculous. The same goes for draw commands. The non-deprecated draw commands currently in OpenGL 4.4 (+ extensions):

  • glDrawArrays
  • glDrawArraysInstanced
  • glDrawArraysInstancedBaseInstance
  • glDrawArraysIndirect
  • glMultiDrawArrays
  • glMultiDrawArraysIndirect
  • glDrawElements
  • glDrawRangeElements
  • glDrawElementsBaseVertex
  • glDrawRangeElementsBaseVertex
  • glDrawElementsInstanced
  • glDrawElementsInstancedBaseVertex
  • glDrawElementsInstancedBaseVertexBaseInstance
  • glDrawElementsIndirect
  • glMultiDrawElementsBaseVertex
  • glMultiDrawElementsIndirect
  • glMultiDrawArraysIndirectBindless <----
  • glMultiDrawElementsIndirectBindless <----

Only the last two functions, marked with arrows, are necessary to do everything the above commands do. This bloat needs to go. Oh, and here's another funny piece of information: GPUs don't actually have texture units the way OpenGL exposes them anymore. We could immediately deprecate texture units as well and move over to bindless textures any time we want. Imagine the resources the driver developers could spend on optimizing functionality that is actually useful instead of on maintaining legacy functions, not to mention the smaller number of bugs, as there are fewer functions that can have bugs in the first place.
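
To make the "only two functions" claim concrete: a multi-draw-indirect call just consumes an array of five-uint command structs from a buffer, so one call can replace any number of the calls above. A rough sketch with an LWJGL-style binding (the Mesh bookkeeping here is made up; the command layout is from the GL spec):

import java.nio.IntBuffer;
import java.util.List;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL40.*;
import static org.lwjgl.opengl.GL43.*;

class IndirectDrawExample {
    // Hypothetical per-mesh bookkeeping: offsets into shared vertex/index buffers.
    static class Mesh { int indexCount, firstIndex, baseVertex; }

    static void drawAll(List<Mesh> meshes) {
        // Each DrawElementsIndirectCommand is five uints (GL spec layout):
        // count, instanceCount, firstIndex, baseVertex, baseInstance.
        IntBuffer cmds = BufferUtils.createIntBuffer(5 * meshes.size());
        for (Mesh m : meshes) {
            cmds.put(m.indexCount).put(1).put(m.firstIndex).put(m.baseVertex).put(0);
        }
        cmds.flip();

        int indirectBuffer = glGenBuffers();
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glBufferData(GL_DRAW_INDIRECT_BUFFER, cmds, GL_STATIC_DRAW);

        // One call draws every mesh; the GPU reads the command array itself.
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, meshes.size(), 0);
    }
}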





Why fix what isn't broken?

Competition. Mantle is a newcomer in the API war but has already gained developer support in many released and upcoming games thanks to its more modern, simpler API that is a better fit for the GPU hardware of today (well, and probably a large amount of money from AMD). DirectX 12 will essentially be a cross-vendor clone of Mantle. Yes, OpenGL is ahead of DirectX by far right now thanks to extensions, but that won't last forever unless they can keep the API as simple, straightforward to use, fast and bug-free as the competition. We're still behind Mantle when it comes to both functionality and simplicity. OpenGL is too complicated, too bug-ridden and too vendor-dependent. Unless OpenGL 5 wipes the slate clean and starts from scratch, it's time to start moving over to other APIs where the grass is greener.

DirectX is a construction company which builds decent buildings but tears them down every other year to build new, better ones. Mantle is a shiny new sci-fi prototype building. OpenGL started as a tree house 20 years ago and has had new functionality nailed to it for way too long. Technically it's just as advanced as DirectX and Mantle; it's just that it's all attached to a frigging tree house, so it keeps falling apart and is basically unfixable.
14  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-14 10:50:12
Appreciate for pendulum in the gameplay haha Grin
Unfortunately I couldn't test the game as I am on Java 6 (version 7 is only available from OS X Lion onwards :c ), so I am judging it by the YouTube video.
The game looks glorious, love the art style, the whole thing looks like AAA to me Tongue
Those metal clang sfx on hit really add immersion, they make you believe the models are really made out of metal.
The animations are awesome; are they hand-made, downloaded or recorded?
Also, will there be multiplayer Roll Eyes ? And which platforms do you plan to support? Maybe go all the way for PS4 and XBone? THAT would be awesome.
Good luck!
Thanks!

The animations are handmade by SkyAphid.

We have plans for things like a 4-player co-op mode against hordes of AI robots. We only intend to support PC, as we don't have any experience with console development. Controller support and single-machine 4-player split screen are planned though.
15  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-13 19:35:45
Won't run; I assume this is a driver problem again.
http://pastebin.com/i6W86UkM

The video looks amazing, but where do I get the neural accelerator implant to be able to react fast enough?

Looks like your GPU/driver does not support OpenGL 3.2. What GPU do you have?
16  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-12 22:15:11
I've sent you a PM, SHC.
17  Games Center / Showcase / Re: We Shall Wake demo (v6.0) on: 2014-07-12 19:42:05
Got this error after clicking Play.

Full Log file

My GPU:
NVIDIA GeForce 210 with latest drivers.

Either your drivers are outdated, or the driver erroneously reports that it supports gather4 when it doesn't. I'll take a look at it when I get home.

EDIT: Try replacing tex2Dgather() with textureGather() in merge.frag and see if it works. Never mind, the shaders are packed into the exe/jar.
18  Games Center / Showcase / We Shall Wake demo (v6.0) on: 2014-07-12 15:53:36


Hello, everyone. I thought it was about time that I threw up what I've been working on for almost two years now. I'm the graphics and physics programmer. We're two programmers on the project: me and SkyAphid, who's not very active on this forum anymore. We also have two people working on character modelling, a concept artist and an environmental designer.



ABOUT WE SHALL WAKE

Thousands of years after the extinction of the human race, and hundreds after the extinction of an alien race known simply as the Creators, you wake up in a desolate tower inhabited by robots incapable of true emotion, acting simply as simulations for the long-gone forerunners.

You are a MORS model, the most advanced machine ever created - capable of true human emotion, and possessing a bio-mechanical body that can cause mass destruction. You must choose whose side you will take in a war between three factions...if any.

However, you are not alone - other MORS models have different ideals and philosophies they wish to enforce, and it's up to you to decide whether or not they are right.

We Shall Wake is a high-speed action game being developed by Circadian.

We're aiming for high-speed gameplay with flexible and versatile movement and combat systems.
19  Discussions / Miscellaneous Topics / Re: [Girls] How to completely block them from our lives? on: 2014-07-12 15:49:00
This thread really needs to die...
Maybe if you keep posting in it it'll die.
20  Java Game APIs & Engines / OpenGL Development / Re: Somehow using GL_REPEAT on a sub-texture in a sprite-sheet? on: 2014-07-10 00:03:52
Yes.
21  Java Game APIs & Engines / OpenGL Development / Re: Somehow using GL_REPEAT on a sub-texture in a sprite-sheet? on: 2014-07-09 23:55:19
If the hardware supports texture arrays, it also supports non-power-of-two textures with filtering.
22  Java Game APIs & Engines / OpenGL Development / Re: Somehow using GL_REPEAT on a sub-texture in a sprite-sheet? on: 2014-07-09 20:55:05
If you do this and want mipmapping to function, the fract() wraparound introduces discontinuities in your texture coordinates, so you need to manually calculate the texture gradients from the original, continuous texture coordinates:

uniform sampler2D mySampler;
uniform vec2 uvMin;
uniform vec2 uvMax;

in vec2 texCoords;

out vec4 fragColor;

void main(){
    vec2 modifiedTexCoords = uvMin + fract(texCoords) * (uvMax - uvMin);
    fragColor = textureGrad(mySampler, modifiedTexCoords, dFdx(texCoords), dFdy(texCoords));
}
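
Here uvMin and uvMax are the sub-texture's corners within the atlas. The point of textureGrad() is that fract(texCoords) jumps at the tile edges, which would confuse the hardware's derivative-based mip selection, while the original texCoords vary smoothly, so their derivatives give the correct mip level across the seam.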
23  Game Development / Game Mechanics / Re: Any multithreading people out there? Multithreaded loading of a huge array. on: 2014-07-07 20:19:47
Loading files on multiple threads is a no-no, that's the fastest way to trash your performance as the disk is a serial device, best used in long continuous read/writes... skipping around on the disk is like skipping around in memory and nullifying your cache except thousands of times worse Sad

This is completely false in the first place. Both HDDs and SSDs perform better when you read multiple things from them using multiple threads. For HDDs, this is because the drive can optimize seeking by reordering the reads so that the read head takes a more efficient "path". It can also queue up and pipeline the requests better, similar to how OpenGL works.

CPU: request file-------------------------------file received, request next--------------
HD:  -----------start reading~~~~~~~~~~~~~~~done----------------start reading~~~~~~~~~~~~

As you can see, a small stall occurs once the HD has finished reading a file. With multiple threads, it immediately knows what to do next.

For an SSD, using multiple threads is even more ideal. SSDs have the amazing ability to read from and write to multiple parts of themselves in parallel, so if the files are in different places on the SSD, you can basically read several times faster by having multiple threads request files.

Look up any hard drive or SSD benchmark and you'll see that they test with different "queue depths", i.e. how many requests are in flight at once (for example, how many threads are reading or writing).
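
As a simplified example of what a queue depth greater than 1 looks like from Java, here's a sketch that submits every read to a small thread pool at once (the names and pool size are arbitrary):

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.*;

class ParallelLoader {
    // Submit every read at once so the OS and the drive can reorder/pipeline them.
    static Map<Path, byte[]> loadAll(List<Path> files) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4); // queue depth of ~4
        Map<Path, Future<byte[]>> pending = new LinkedHashMap<>();
        for (Path p : files) {
            pending.put(p, pool.submit(() -> Files.readAllBytes(p)));
        }
        Map<Path, byte[]> result = new LinkedHashMap<>();
        for (Map.Entry<Path, Future<byte[]>> e : pending.entrySet()) {
            result.put(e.getKey(), e.getValue().get()); // blocks until that read is done
        }
        pool.shutdown();
        return result;
    }
}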
24  Game Development / Newbie & Debugging Questions / Re: Ordering draw calls for alpha blending on: 2014-07-05 22:16:48
Most games don't have anything transparent except for particle effects (smoke, fire, sparks, etc.), which are easier to sort as they are flat quads facing the camera (so no impossible sorting scenarios can occur). Games generally avoid semi-transparent geometry entirely due to the complexity of handling it. The closest thing available is alpha testing, which isn't really transparency. It's exactly what Longarmx wrote about: you discard pixels that are "too transparent", effectively achieving binary transparency (either fully opaque or fully transparent).
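
For the particle case, the usual recipe is: draw opaque geometry first, then sort the transparent quads back to front by distance from the camera and draw them with blending on and depth writes off. A sketch of the sort (the Particle class here is hypothetical):

import java.util.List;

class ParticleSorter {
    // Hypothetical particle with a world-space position.
    static class Particle { float x, y, z; }

    // Sort transparent particles back to front (farthest first) before drawing.
    static void sortBackToFront(List<Particle> particles, float camX, float camY, float camZ) {
        particles.sort((a, b) -> Float.compare(
                distSq(b, camX, camY, camZ),   // b first => descending distance
                distSq(a, camX, camY, camZ)));
    }

    static float distSq(Particle p, float x, float y, float z) {
        float dx = p.x - x, dy = p.y - y, dz = p.z - z;
        return dx * dx + dy * dy + dz * dz;
    }
}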
25  Game Development / Newbie & Debugging Questions / Re: Screen Capture To Texture (OpenGL / Slick2D) on: 2014-07-04 19:41:15
I need a way to call back to the captured screen image though while I'm in a screen transition loop, so I figured having a Texture object to reference would be the best way.

Capturing the screen basically helps me avoid redrawing every object each step through the transition loop; instead I just draw the captured screen.

- Steve
You're missing my point. You don't have to "capture" the screen to a texture. You can render everything onto a texture in the first place instead of rendering to the screen.
26  Game Development / Newbie & Debugging Questions / Re: Screen Capture To Texture (OpenGL / Slick2D) on: 2014-07-04 18:44:20
Just render the scene to a texture using an FBO instead of reading the data back to RAM and then reuploading it, which is an order of magnitude or so slower.
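
Setting that up is only a handful of calls. A minimal sketch with an LWJGL-style binding (no depth attachment or completeness check, which real code should have):

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

class RenderTarget {
    static int texture, fbo;

    // Create a texture and an FBO that renders into it.
    static void create(int width, int height) {
        texture = glGenTextures();
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                GL_RGBA, GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

        fbo = glGenFramebuffers();
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texture, 0);
        // Draw the scene now and it lands in 'texture' instead of the screen;
        // then bind framebuffer 0 again and draw a quad textured with 'texture'.
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
    }
}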
27  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-04 04:51:39
The System Idle Process is there to keep the CPU idle when the scheduler finds no threads ready to execute. That's why it's always shown as the percentage not being used, as there must always be a thread running on a CPU at all times. More information on Wikipedia.
From that link:

Quote
Because of the idle process's function, its CPU time measurement (visible through Windows Task Manager) may make it appear to users that the idle process is monopolizing the CPU. However, the idle process does not use up computer resources (even when stated to be running at a high percent), but is actually a simple measure of how much CPU time is free to be utilized. If no ordinary thread is able to run on a free CPU, only then does the scheduler select that CPU's System Idle Process thread for execution. The idle process, in other words, is merely acting as a sort of placeholder during "free time".

In Windows 2000 and later the threads in the System Idle Process are also used to implement CPU power saving. The exact power saving scheme depends on the operating system version and on the hardware and firmware capabilities of the system in question. For instance, on x86 processors under Windows 2000, the idle thread will run a loop of halt instructions, which causes the CPU to turn off many internal components until an interrupt request arrives. Later versions of Windows implement more complex CPU power saving methods. On these systems the idle thread will call routines in the Hardware Abstraction Layer to reduce CPU clock speed or to implement other power-saving mechanisms.
You're right that it is indeed a real thread (which I didn't know), but it's not exactly a normal thread. My main point was that neither the CPU nor the GPU unnecessarily burns energy because it's supposed to be good for them. CPUs and GPUs have massive power-saving functions so they don't have to run at 100% load all the time, which includes shutting down unused parts of the processor or even complete cores, and lowering the clock speed to a fraction of what it can run at. My CPU idles at room temperature and my GPUs at 35 degrees. My CPU can drop down to 800MHz instead of running at 3.9GHz all the time. My GPUs' cores drop down to 135MHz instead of 1.2GHz, and their memory to 162MHz from 1.75GHz. Hardware makers are doing everything they can to decrease power usage and heat generation, to get better battery life and smaller devices.
28  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-04 02:32:02
I think (don't quote me on it) CPUs and GPUs last longer when they're forced to always run at 100%. Something about transistor load. I don't know the details or if I'm even right; I just remember reading this somewhere like a decade ago.

Windows does this as well, if you look at your taskbar on older versions of windows they have the "System idle process" that's always maxed out at whatever percentage of the processor currently is not being used. Windows 7 (and possibly vista) don't show it anymore though.
I find it hard to believe that this is true. If it were, then you'd be wasting a shitload of money and/or battery life on that "idle process". The System Idle Process is simply there to show you how much of the time the CPU idles (and it's still there in 7).
29  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-04 00:48:50
Implemented initial, hacky texture streaming. At the start, the first 4 mip levels are skipped, so texture loading is almost instant (instead of taking seconds). Once the game is running I just load one texture per frame, which costs about 10ms extra, and all textures are loaded after ~300 frames. The next task is to move this to a worker thread. I also need to implement some kind of importance ranking and a texture memory budget limiter so it will scale to thousands of high-res textures.
Ah, so through the use of a SharedDrawable, two threads can make GL calls at the same time, provided you make sure no problems arise from usage of the same resources?
Relevant to both of you: http://www.java-gaming.org/topics/tutorial-stutter-free-texture-streaming-with-lwjgl/32571/view.html

Have you tested any loading heuristic based on usage, distance or texture importance? How about when you want to hard-limit the amount of texture memory used?

Not really. I only base it on the time since a texture was last used. I leave the VRAM usage constraint to the user in the form of a texture quality setting. It's not the end of the world if you run out of VRAM; usually the driver just swaps resources that haven't been used for a while out to RAM.
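
That heuristic fits in a few lines. A sketch (the StreamedTexture class here is hypothetical; the real streaming code is in the tutorial linked above):

import java.util.Comparator;
import java.util.List;

class TextureBudget {
    // Hypothetical streamed texture: tracks when it was last used and how many
    // bytes of its mip chain are currently resident on the GPU.
    static class StreamedTexture {
        long lastUsedFrame;
        long residentBytes;
        long dropHighestResidentMip() { return 0; } // free the top mip, return bytes freed
    }

    static void evictUntilUnderBudget(List<StreamedTexture> textures, long budgetBytes) {
        // Least recently used textures first.
        textures.sort(Comparator.comparingLong((StreamedTexture t) -> t.lastUsedFrame));
        long used = 0;
        for (StreamedTexture t : textures) used += t.residentBytes;
        for (StreamedTexture t : textures) {
            if (used <= budgetBytes) break;
            used -= t.dropHighestResidentMip();
        }
    }
}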
30  Game Development / Newbie & Debugging Questions / Re: Game Inefficiencies on: 2014-07-04 00:45:05
Precision? It doesn't really matter.