Show Posts
1  Discussions / General Discussions / Re: Performance Test for the Voxel Thing on: 2014-08-31 22:53:39
245-255 FPS when the whole screen is covered with voxels. GPU load still at 50% when running at that FPS.
2  Discussions / General Discussions / Re: Performance Test for the Voxel Thing on: 2014-08-30 16:39:39
and as opposed to theagentd, my CPU is not burning :)
My CPU idles at under 20 degrees. I'm fairly sure it's not gonna burn no matter what I do... xd
3  Discussions / General Discussions / Re: Performance Test for the Voxel Thing on: 2014-08-30 16:20:03
Intel i7-4770K @ stock
8GB RAM @ 2400 MHz
Nvidia GTX 770 4GB

Minimum 165 FPS.
CPU load 100% on one core.
GPU load slightly under 50%.
4  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-08-27 10:39:07
It's much much worse than that :)

The scene is rendered in 2D with a sprite per voxel. Voxel-based models are loaded in and rendered using the per-voxel sprites in the right order. The whole scene is dynamically rendered.

Cheers,

Kev
For some reason I get the impression that all the shadow casters are slightly off the ground. The fact that the shadows start one "pixel" away from the edge makes it look a bit weird, I think.
5  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-08-22 13:48:55
Got back to programming and started looking at what kind of style I'm going for with this project.

Probably going this way :)
Needs a healthy dose of 64x antialiasing.
6  Discussions / Miscellaneous Topics / Re: League of Legends ;D on: 2014-08-21 03:41:56
I wanted to know the same thing. So far I play about 4-5 matches on school days (after homework, of course) and 8-10 on weekends and breaks. Well, unless I'm coding. ;)

EDIT: We should all do a match or something

Well, how many of us are on EU West?

The gameplay in Dota 2/LoL is decent, although I do agree that their communities (especially LoL's) can be pretty bad. I'm stuck in silver :P
Silver and Gold are f**ked. Everyone just throws themselves to the ground and screams like a two-year-old. At least in plat people don't give up immediately.
7  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-08-20 15:55:53
More pixel art + CRT shader: https://www.shadertoy.com/view/XsjSzR# 

blog post here

Hey Riven: what about allowing embedded Shadertoy links?
What the hell?!?!?! You're Timothy Lottes?!
8  Discussions / Miscellaneous Topics / Re: League of Legends ;D on: 2014-08-20 15:53:12
I'm platinum 4, but I regularly duo with my diamond 3 friend...
9  Game Development / Game Mechanics / Re: Getting into multi-threading. on: 2014-08-18 06:50:09
Writing code that can be easily multithreaded can be a bit difficult to get into, but once you get the hang of it, it's not hard at all. The problem usually lies in bug fixing, which is of course harder, but even that isn't too bad if you learn to write manageable code.
10  Game Development / Newbie & Debugging Questions / Re: opengl MSAA resolve efficient ? on: 2014-08-17 14:24:53
I was hoping one could skip the mask-blitting (the attachment-1 blit in my example):

- gl_SampleMaskIn "should" be the same for all samples of a fragment (right?)
- naively, I tried to read just the first sample of the non-blitted MSAA mask

.. but it makes sense: not all samples are processed by the fragment shader (during forward rendering), so even if the output is the same for all samples, one cannot tell in which sample the information ends up being stored. Possibly sample zero is never written.

gl_SampleID or gl_SamplePosition could be used to work around this, but using those causes the entire fragment shader to be evaluated per-sample rather than per-fragment... which defeats the purpose of all this.

Anyway, that would be just a minor optimisation.

I'm not sure what you're trying to achieve here, since detecting triangle edges like that results in supersampling for edges that don't need it (as you can see on the sphere), but anyway... The easiest way of detecting a triangle edge in the shader is to check whether gl_FragCoord.xy is at the center of the pixel:

if(fract(gl_FragCoord.xy) != vec2(0.5)){
    //edge!
}else{
    //not edge!
}

Of course this only works during scene rendering, not during the post processing fullscreen pass (that's just a fullscreen quad).


Just trying to keep everything more readable than perfect. Which branching are you referring to? The "if" or the "for" loop?

I'm referring to this code:
if(texelFetch(aamask, coord, 0).r != 0.0){
    float d = 0.0;
    for(int i = 0; i < samples; i++) d += linear0(texelFetch(aa_tex, coord, i).r);
    frag0_r = d / samples;
}else{
    frag0_r = linear0(texelFetch(aa_tex, coord, 0).r); //just grab the first sample since it's expected to be the same value for all samples
}

You have two main problems:
1. As I said before, if just a single pixel enters the if() statement (= requires per-sample processing), the whole work group has to wait for that pixel to finish.
2. You should make the "samples" variable a constant or a #define so that the for loop can be unrolled by the GLSL compiler and 1/samples can be precomputed for the division.

To improve the branching performance you can write your if-statement like this instead:
//Inject this into the shader code before compiling the shader
#define SAMPLES 4

...


frag0_r = linear0(texelFetch(aa_tex, coord, 0).r);

if(texelFetch(aamask, coord, 0).r != 0.0 ){
    for(int i = 1; i < SAMPLES; i++){
        frag0_r += linear0(texelFetch(aa_tex, coord, i).r);
    }
    frag0_r /= SAMPLES;
}


With your original shader it's essentially like this:


if(at least one pixel does NOT require MSAA){
    those pixels sample 1 sample, the rest runs no-ops while waiting
}
if(at least one pixel requires MSAA){
    those pixels sample 4 samples, the rest runs no-ops while waiting
}


In most cases (a tile has both MSAA and non-MSAA pixels) you're effectively waiting for it to sample 5 samples. With my modified version you're instead doing this:


sample 1 sample
if(at least one pixel requires MSAA){
    those pixels sample the remaining 3 samples, the rest runs no-ops
}


At worst, this samples 4 instead of 5 samples. That doesn't really matter here since you're bandwidth limited, but it's a good trick that is applicable in a large number of cases.


A nice OpenGL 3 trick is to compute a (non-MSAA) stencil mask of which pixels need per-sample shading (the above edge detection can be modified to generate a stencil texture instead).
That pretty much describes what I was thinking about this topic when it popped up.

My first attempt to write the stencil failed pretty badly. How would one create such a map without having https://www.opengl.org/registry/specs/ARB/shader_stencil_export.txt available?

I don't fully understand what you mean by modifying the edge detection... oh wait... you mean, by setting up the pipeline to write only into the stencil buffer...
glColorMask(false, false, false, false);
glDepthMask(false);
...and redrawing all triangles with the shader discarding samples... now I get lost; the stencil buffer is multisampled at this point. I get stuck on this part every time :)

.. oh wait, not redrawing the triangles: just discarding (instead of processing) pixels (the MSAA gl_SampleMaskIn output) in a fullscreen quad to generate the stencil, and then rendering with the 2-pass stencil trick (thanks for the pointer to that, very neat!) and heavy computing? I guess in my example the computing is not heavy enough to see a gain, but I can see it would work.

First you should create a (non-MSAA) renderbuffer and attach it to an FBO. I strongly recommend using GL_DEPTH24_STENCIL8 instead of GL_STENCIL_INDEX8, as the latter is only guaranteed to be supported in GL 4.3 and later. You'd attach this renderbuffer as the GL_DEPTH_STENCIL_ATTACHMENT of the FBO. There is no need to disable color writes or depth writes, as we have no color attachments and we won't enable the depth test in the first place.
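For illustration, a minimal sketch of that setup with LWJGL-style static imports (width/height and the surrounding class are assumed; the calls and constants are the standard GL3 ones):

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

//Sketch: an FBO with a single non-MSAA depth+stencil renderbuffer and no color attachments
static int createStencilMaskFBO(int width, int height){
    int fbo = glGenFramebuffers();
    int rbo = glGenRenderbuffers();

    glBindRenderbuffer(GL_RENDERBUFFER, rbo);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, rbo);

    //no color attachments, so don't draw to or read from any color buffers
    glDrawBuffer(GL_NONE);
    glReadBuffer(GL_NONE);
    return fbo;
}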

To generate a stencil mask, you'd first clear the stencil buffer to all 0. Then you'd render a fullscreen quad to the FBO with the stencil test enabled like this:
glEnable(GL_STENCIL_TEST); //Also enables stencil writes
glStencilFunc(GL_ALWAYS, 1, 0xFF); //Always succeed, ref = 1, modify all bits (not necessary, but standard)
glStencilOp(GL_REPLACE, GL_REPLACE, GL_REPLACE); //Stencil test cannot fail, depth test cannot fail, if both succeed replace stencil value at pixel with ref (=1)

The problem is that this code will mark everything as an edge, since the fullscreen quad covers all pixels. The solution is to create a custom shader which checks the MSAA samples of each pixel and discards it if MSAA isn't necessary; discard; will prevent the stencil write.
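A sketch of what that shader could look like, as the GLSL source string you'd feed to glShaderSource() (hypothetical: depthTex and samples are placeholders, and "all depth samples equal" stands in for whatever your actual needs-MSAA test is):

//Hypothetical stencil-mask fragment shader: keep the fragment (and thereby the
//stencil write) only when the pixel's MSAA samples disagree
String stencilMaskSource =
    "#version 150\n" +
    "uniform sampler2DMS depthTex;\n" +
    "uniform int samples;\n" +
    "void main(){\n" +
    "    ivec2 coord = ivec2(gl_FragCoord.xy);\n" +
    "    float first = texelFetch(depthTex, coord, 0).r;\n" +
    "    for(int i = 1; i < samples; i++){\n" +
    "        if(texelFetch(depthTex, coord, i).r != first){\n" +
    "            return; //samples differ: keep the fragment, stencil becomes 1\n" +
    "        }\n" +
    "    }\n" +
    "    discard; //all samples equal: no MSAA needed, no stencil write\n" +
    "}\n";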

So we have our stencil mask! We then attach the same stencil renderbuffer to the postprocessing FBO so it can be used by the stencil test when doing postprocessing. We set the stencil func to GL_EQUAL with ref 0, and set glStencilOp to GL_KEEP for all cases so we don't modify the stencil values. This means that only the pixels with stencil=0 will get processed. Then you change ref to 1, and only the pixels with stencil=1 (= needs MSAA) get processed.
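Put together, the two postprocessing passes could look like this (a sketch; simpleResolveShader, msaaResolveShader and drawFullscreenQuad() are hypothetical placeholders for your own plumbing):

glEnable(GL_STENCIL_TEST);
glStencilMask(0x00); //the mask is complete, don't write to it anymore
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);

//pass 1: pixels that don't need MSAA (stencil == 0), sample 0 only
glStencilFunc(GL_EQUAL, 0, 0xFF);
simpleResolveShader.bind();
drawFullscreenQuad();

//pass 2: edge pixels that do need MSAA (stencil == 1), all samples
glStencilFunc(GL_EQUAL, 1, 0xFF);
msaaResolveShader.bind();
drawFullscreenQuad();

glDisable(GL_STENCIL_TEST);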





I'm starting to question the gains from doing this, though. I'm coming from deferred shading, where that stencil mask would be used for lighting as well as postprocessing, so the cost of generating the mask (which requires sampling the depth and normals of all samples) is offset by avoiding a lot of expensive lighting. From what I can see, your postprocessing shader is so cheap that it's actually surprising you're getting any performance improvement at all, considering you're increasing the bandwidth required during scene rendering with the additional render target and also resolving that extra render target. Unless the cost of generating the mask is less than the work it saves, it's better to just brute-force it. I have a feeling that the main reason you're seeing a performance increase is that a large part of the scene is the sky. In a more realistic scene I think your current method could actually be slower than simply brute-forcing it. If a larger number of pixels require MSAA, you'd essentially be brute-forcing it anyway, but also computing and resolving the mask.
11  Game Development / Newbie & Debugging Questions / Re: opengl MSAA resolve efficient ? on: 2014-08-17 09:41:10
Okay, your optimized version is far from optimal, but it's obviously better than supersampling your post processing.

You're still supersampling too much, but it's not very noticeable with the simple geometry you have. Detecting triangle edges like you do results in redundant supersampling on internal edges of 3D models. A more tessellated 3D model would have triangle edges everywhere, but only a small fraction of those edges actually need more than 1 sample shaded. A much better approach is to get rid of the extra buffer you use for edge detection and instead run a fullscreen pass after rendering your scene. In this pass, you'd analyze the scene's depth and normals (if you don't have the normals available, you may need to write them as well to a second MSAA texture during scene rendering), check if there's a significant spread in the depth or if the normal varies a lot, and write out a mask to a non-MSAA texture. This mask texture can then be used during post processing and in other places.

Branching like that in the resolve shader is a bad idea. GPUs shade pixels in larger groups, between 8x8 and 16x16 pixels at a time (depending on your graphics card). If even a single pixel in this group requires per-sample postprocessing, the whole group has to pause while that single pixel does per-sample computations so that the group can stay in sync. The gain you're seeing right now most likely comes from the saved bandwidth of not having to read all samples for most pixels, but you should get an even bigger increase if you modify how you do the per-sample computations. A nice OpenGL 3 trick is to compute a (non-MSAA) stencil mask of which pixels need per-sample shading (the above edge detection can be modified to generate a stencil texture instead). You then do the resolve twice, the first time with the stencil test set to only process pixels that don't need MSAA, and the second time so that it processes the pixels that DO need MSAA. This avoids the problem above, as there's no branching in the shaders (you have two shaders instead), and in the second pass the GPU can pack the pixels that need per-sample resolving together into full pixel groups.

An OpenGL 4 alternative is to use a compute shader and reschedule pixels that need MSAA to a second pass. This is a bit complicated to describe, but you'd essentially postprocess the first sample of all pixels and, using an atomic counter and a shared array, build a per-work-group list of the pixels that need the rest of their samples shaded. After the first sample is shaded, you switch to processing the remaining samples of those listed pixels using the whole work group.

More information can be found in this presentation: http://dice.se/wp-content/uploads/GDC11_DX11inBF3_Public.pdf (it describes the second technique, which they use for tile-based deferred shading, but the same approach applies to your use case as well).
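For illustration, a rough sketch of that pattern as a GLSL compute shader, embedded as a Java string like the other shaders here (heavily simplified and hypothetical: aamask, aa_tex, dest and postprocess() are placeholders for your actual inputs and postprocessing, and unlike the description above, this version defers all samples of the listed pixels to the second phase):

//Hypothetical compute-shader resolve that reschedules MSAA pixels within a work group
String resolveComputeSource =
    "#version 430\n" +
    "#define SAMPLES 4\n" +
    "layout(local_size_x = 16, local_size_y = 16) in;\n" +
    "uniform sampler2D aamask;      //non-MSAA edge mask\n" +
    "uniform sampler2DMS aa_tex;    //MSAA scene texture\n" +
    "layout(rgba8) uniform writeonly image2D dest;\n" +
    "shared uint msaaCount;\n" +
    "shared uint msaaPixels[256];   //one slot per thread in the 16x16 group\n" +
    "vec4 postprocess(vec4 c){ return c; } //placeholder for the real postprocessing\n" +
    "void main(){\n" +
    "    if(gl_LocalInvocationIndex == 0u) msaaCount = 0u;\n" +
    "    barrier();\n" +
    "    ivec2 coord = ivec2(gl_GlobalInvocationID.xy);\n" +
    "    //phase 1: each thread handles sample 0 of its own pixel; pixels that need\n" +
    "    //per-sample work are instead appended to a shared list with an atomic counter\n" +
    "    if(texelFetch(aamask, coord, 0).r != 0.0){\n" +
    "        uint slot = atomicAdd(msaaCount, 1u);\n" +
    "        msaaPixels[slot] = uint(coord.x) | (uint(coord.y) << 16);\n" +
    "    }else{\n" +
    "        imageStore(dest, coord, postprocess(texelFetch(aa_tex, coord, 0)));\n" +
    "    }\n" +
    "    barrier();\n" +
    "    //phase 2: the listed pixels are redistributed over the whole work group, so\n" +
    "    //the per-sample work runs densely packed instead of scattered across groups\n" +
    "    for(uint i = gl_LocalInvocationIndex; i < msaaCount; i += 256u){\n" +
    "        ivec2 c = ivec2(msaaPixels[i] & 0xFFFFu, msaaPixels[i] >> 16);\n" +
    "        vec4 result = postprocess(texelFetch(aa_tex, c, 0));\n" +
    "        for(int s = 1; s < SAMPLES; s++){\n" +
    "            result += postprocess(texelFetch(aa_tex, c, s));\n" +
    "        }\n" +
    "        imageStore(dest, c, result / float(SAMPLES));\n" +
    "    }\n" +
    "}\n";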
12  Game Development / Newbie & Debugging Questions / Re: opengl MSAA resolve efficient ? on: 2014-08-17 01:56:03
I'll take a look at this tonight once I get back!!!
13  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-08-15 01:29:13
Interestingly, see this news story: AMD Offers Mantle For OpenGL-Next
Wow. AMD may not have the best drivers, but they sure have their hearts in the right place.
14  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-08-14 06:59:07
One of the things I look forward to the most is explicit multi-GPU programming. Being able to control each GPU in any way you want has some extremely nice use cases, instead of relying on drivers and vendor specific interfaces to perhaps kind of maybe possibly get it to work with only some flickering artifacts.
15  Game Development / Newbie & Debugging Questions / Re: Rectangle to Circle collision test? (2D) on: 2014-08-12 06:38:01
I use the following algorithm:

//find the closest point inside the rectangle to the circle's center
float closestX = clamp(circle.x, rect.x, rect.x + rect.width);
float closestY = clamp(circle.y, rect.y, rect.y + rect.height);

//find the squared distance from that point to the circle's center
float dx = closestX - circle.x;
float dy = closestY - circle.y;
float distanceSqrd = dx*dx + dy*dy;

//test the squared distance against the squared radius
if(distanceSqrd < circle.radius*circle.radius){
    //collision detected!!!
}else{
    //no collision
}


Code for clamp()
public static float clamp(float f, float min, float max){
    return f < min ? min : f > max ? max : f;
    /*
    //equivalent to:
    if(f < min){ return min; }
    if(f > max){ return max; }
    return f;
    */
}
16  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-08-12 03:27:43
After a good night's sleep, I've had the time to take a look at OpenGL NG more. It's clear that Khronos is going the same direction as DirectX 12 and Mantle, thank god.

A completely redesigned API made for modern GPUs, most likely targeting the same hardware as DX12/Mantle, which would be the Nvidia 600 series and the AMD 7000 series and up. I am unsure if Intel would be able to support it with their current generation of hardware; they may have the hardware but lack the drivers.

Which leads to the second point. As with DX12/Mantle, we're looking at a very thin and simple driver. All the old redundant features are thrown out. This should allow AMD, Nvidia and Intel to build a new, small driver for OpenGL NG, finally slowing down or halting further development of the old OpenGL. Newly released hardware would obviously still need an OpenGL 4.5 driver, but from now on we can expect OpenGL 4 to get fewer updates and new extensions, though some OpenGL NG features will presumably trickle down to OpenGL 4 through extensions... Hopefully we'll at least get much more stable OpenGL NG drivers, released faster!

With more low-level control of how the GPU and CPU work, we should be able to do some pretty cool optimizations by taking advantage of the GPU in better ways. For example, depending on what's exposed by OpenGL NG, it might be possible to render shadow maps in parallel with lighting the scene. Rendering shadow maps has a very low compute load, as the vertex shader is usually simple and there is no fragment shader; filling pixels and computing depth is handled by the hardware rasterizers on the GPU, leaving the thousands of shader cores on your GPU idle. Tile-based deferred lighting, on the other hand, is done by a compute shader which bypasses the vertex handling hardware and the rasterizer, and only uses the shader cores and a little memory bandwidth. We could essentially double-buffer our shadow map and render a new shadow map while computing lighting with the previous one in parallel.

They're also promising massively improved CPU performance. The push for direct state access (promoted to core in OpenGL 4.5) implies that this is the way OpenGL NG will work, meaning simpler, shorter and clearer code. It also means that we won't be getting the same problems with state leaking as before, which would make it easier to avoid hard-to-find bugs. We're also promised proper multithreading. Multithreaded texture streaming is nice and all, but hardly a replacement for being able to actually build command queues from multiple threads as Mantle and DX12 will allow. Games that use OpenGL NG will at least have the potential for almost linear scaling over any number of cores. FINALLY.

Precompiled shaders! An intermediate shader format is something that has been wanted for a long time. GLSL basically just got a lot more like Java. Instead of letting each vendor develop their own GLSL compiler with its own set of bugs and quirks, Khronos will (or so I assume) develop their own compiler which compiles GLSL shaders to some intermediate format, just like Java bytecode. This should result in much more predictable performance, as all GPUs and vendors will be able to take advantage of any optimizations the intermediate compiler does. This is especially good for mobile, which suffers from compilers that are bad at optimizing shaders. It also means that the GPU vendors will only have to compile the GLSL "bytecode" to whatever their GPUs can run, which should be much less bug-prone than compiling text source code. We'll only have to work with a single GLSL compiler from now on. As someone who's encountered so many broken compilers, this is a HUGE improvement. It will speed up development a lot on my end as well.

OpenGL NG will run on phones as well! Although I believe OpenGL ES does not suffer from the same bloat as OpenGL, it's still extremely nice to be able to run the same code on both PC and mobile. This is almost gift-wrapped and addressed to LibGDX.



Aaah... This is so great. Khronos basically gave CAD the boot and went full gamer. So many great things here. Obviously, OpenGL NG isn't complete yet and may even end in a second Longs Peak disaster, but there are lots of reasons to be hopeful. This time the circumstances are completely different, with Mantle pushing the development of both DX12 and OpenGL NG.


17  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-08-11 16:33:10
Quote
For the next generation of OpenGL – which for the purposes of this article we’re going to shorten to OpenGL NG – Khronos is seeking nothing less than a complete ground up redesign of the API. As we’ve seen with Mantle and Direct3D, outside of shading languages you cannot transition from a high level abstraction based API to a low level direct control based API within the old API; these APIs must be built anew to support this new programming paradigm, and at 22 years old OpenGL is certainly no exception. The end result being that this is going to be the most significant OpenGL development effort since the creation of OpenGL all those years ago.

f**k. Yes.

Interesting OGL4.5 extensions:
  • ARB_clip_control: Apart from the pure convenience it provides when porting from DX, it can also be used to slightly improve depth precision (see the sketch after this list). This could be achieved before as well, but the way to do it was a bit unclear and hacky.
  • ARB_direct_state_access: FINALLY. This should eliminate almost all binding!
  • ARB_pipeline_statistics_query: Looks cool. Should make bottleneck identification easier.
  • KHR_context_flush_control: I have no idea how this magically works. Will investigate. Multithreaded OpenGL for more than just texture streaming would be amazing. EDIT: Most likely this is not very revolutionary. The improved multithreading is expected in OpenGL NG.
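Since the depth-precision point above is easy to miss, here's what the ARB_clip_control trick boils down to (a sketch, assuming a binding that exposes the GL 4.5 entry point, a GL_DEPTH_COMPONENT32F depth buffer, and a projection matrix adjusted for the reversed [0, 1] range):

//Map clip-space Z to [0, 1] instead of [-1, 1] so no float precision is wasted remapping it
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);

//Reverse the depth range: near maps to 1.0 and far to 0.0, which spreads
//floating-point depth precision much more evenly over the view distance
glClearDepth(0.0);
glDepthFunc(GL_GREATER);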
18  Game Development / Game Mechanics / Re: Improving Texture Loading on: 2014-08-02 07:59:30
Maybe this could be useful: http://www.java-gaming.org/topics/tutorial-stutter-free-texture-streaming-with-lwjgl/32571/view.html
19  Games Center / Featured Games / Re: We Shall Wake demo (v6.0) on: 2014-07-31 11:48:00
When it is loading, the game doesn't load; this is what was at the end of log.txt:
This bug was reported above. It has already been fixed. Thanks anyway for reporting it!
20  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-07-23 23:16:13
I get that you don't think it's worth higher hardware requirements, but you're really not speaking for everyone. The primary reason I targeted OGL3 for We Shall Wake is performance. The only OpenGL 4 features I take advantage of, if they're available, are performance or memory optimizations. By limiting the game to newer hardware I keep the game's performance requirements down. Also, we do have modelers, so our game will actually look good if we use expensive shaders and postprocessing.

The pollution argument is ridiculous. Why would you encourage people to waste energy on drawing pretty triangles at all if you care so much about pollution? Newer hardware is more energy efficient.

Finally, I couldn't care less whether OpenGL is good enough for other applications, or even for you. It's not good enough for me, and I'm sure a lot of game developers would agree. It'd be extremely naive of me to think that you or the CAD industry will care about what I want. This is a game development forum. If WE don't even say what we want, no one else is gonna do it for us.
21  Games Center / Featured Games / Re: We Shall Wake demo (v6.0) on: 2014-07-23 14:34:11
Freezes during the loading screen, or slightly after loading but before any gameplay.

Quote
Dungeon successfully generated.
Uncaught exception in draw (main) thread:
java.lang.NullPointerException
   at engine.tile.TileRenderer.nextFrame(TileRenderer.java:302)
   at engine.WSWGraphics.renderScene(WSWGraphics.java:579)
   at engine.WSWGraphics.access$2(WSWGraphics.java:556)
   at engine.WSWGraphics$1.run(WSWGraphics.java:285)
   at net.mokyu.threading.MultithreadedExecutor.processTask(MultithreadedExecutor.java:157)
   at net.mokyu.threading.MultithreadedExecutor.run(MultithreadedExecutor.java:131)
   at engine.WSWGraphics.render(WSWGraphics.java:544)
   at game.state.StateDungeon.draw(StateDungeon.java:144)
   at game.core.Core.render(Core.java:226)
   at game.core.framework.GameFramework.step(GameFramework.java:53)
   at game.core.Core.run(Core.java:170)
   at game.core.Core.main(Core.java:143)
Exception in thread "main" java.lang.IllegalStateException: Error ending frame. Not all tasks finished.
   at engine.profile.GPUProfiler.endFrame(GPUProfiler.java:66)
   at game.state.StateDungeon.draw(StateDungeon.java:153)
   at game.core.Core.render(Core.java:226)
   at game.core.framework.GameFramework.step(GameFramework.java:53)
   at game.core.Core.run(Core.java:170)
   at game.core.Core.main(Core.java:143)

No gamepad support. Now come on guys - you really wanna play a game like this with a keyboard? ^^'

Long-time fan of these games: DMC 3, 4, Bayonetta, Prototype, Infamous, MGS Rising. Actually did some DMC 3 and mostly 4 speedruns back then, so this excites me. Would love a well-written story of course... x)
Null pointer exception bug fixed! Way too simple an error... ._. If possible, try updating your graphics drivers, as the bug was in the fallback code for when a newer function wasn't supported; if the latest driver has support for it, it should work.

I believe gamepad support didn't make it into the latest version. We're currently working on some bug fixing; me on a few new graphics engine bugs that emerged, and SkyAphid on the sound engine.
22  Discussions / Business and Project Management Discussions / Re: what now? on: 2014-07-23 14:28:47
Start coding on it? =D Half a year is a lot of time xD
I have coded a few tests to check that it was actually possible to get good enough performance and all. =P
23  Discussions / Business and Project Management Discussions / Re: what now? on: 2014-07-23 12:12:36
A month?? NO! God DAMMIT. Of course, right when school restarts. Maybe I'll get lucky the 5th time around.
Yes, that's right. It's happened 4 times now, always at a time when I can't do it. In a row. Grr...
I've had an idea for a completely EPIC game for over half a year now. :P
24  Discussions / Miscellaneous Topics / Re: A rant on OpenGL's future on: 2014-07-23 12:05:28
As you use some bleeding-edge features, you know you're on the front line; you find bugs, it's not surprising.
The only bugs I've found in cutting-edge features are the following:
 - Nvidia: When persistent buffers were first released, they had forgotten to remove the check to use the buffer while it was mapped (which was the whole point), so they were useless. (Fixed)
 - AMD: BPTC texture compression works, but can't be uploaded with glCompressedTexSubImage() as that throws an error.
 - Intel: glMultiDrawIndirect() does not work if you pass in an IntBuffer with commands instead of uploading them to the indirect draw buffer VBO.

All the other bugs were in really old features:
 - Intel: Rendering to a cube map using an FBO always made the result end up on the first face (X+), regardless of which face you bound to the FBO. (Fixed)
 - Intel: Their GLSL compiler was pretty much one huge bug. vec3 array[5]; was supported, while vec3[5] array; threw unrelated errors all over the place. (Fixed)
 - Intel: A 4x3 matrix multiplied with a vec4 resulted in a vec4, when it should result in a vec3. (Fixed)
 - AMD: Sometimes on older GPUs the result of FBO rendering becomes grayscale. I have no idea why. (Fixed on newer GPUs at least)
 - AMD: Allocating texture mipmaps in reverse order (allocate level 10, upload level 10, allocate level 9, upload level 9, ...) hard-locks the GPU. You need to allocate all mipmaps before you start uploading to them.
 - AMD: Not sure if this is a bug, but depth testing against a depth buffer with depth writing disabled, while reading the same depth buffer in the shader, causes undefined results on AMD cards only.

These are the ones I can remember right now anyway. This is why I like Nvidia's drivers, by the way.

Almost everyone on this forum has at some point worked with OpenGL, either directly or indirectly. Some of us like to go deep and use OpenGL directly through LWJGL or LibGDX
LibGDX is a middle-level API, and there are other actively maintained Java bindings for the OpenGL and/or OpenGL ES APIs, including JGLFW (used in LibGDX), Android OpenGL and JOGL (JogAmp).

while others (a probable majority) prefer the increased productivity of abstractions of OpenGL like LibGDX's 2D support, JMonkeyEngine or even Java2D with OpenGL acceleration.
The built-in OpenGL acceleration in Java2D isn't very optimized, and it makes Java2D performance inconsistent. Slick2D, Golden T Game Engine (GTGE) and GLG2D are more efficient.
Those examples were not meant to be exhaustive. I'm only saying that no matter how you render stuff, OpenGL matters to you, since pretty much everything (except unaccelerated Java2D, of course) relies on OpenGL.

OpenGL is used a lot in scientific visualization and in CAD softwares too.

Maybe you should mention that the situation is quite different in mobile environments. Intel hardware is still far behind when it comes to offscreen rendering.
I did mention that OpenGL's use in other places is what made them decide to add so much backwards compatibility.
Sadly, the existence of the compatibility mode has encouraged many important industrial customers of the graphics cards vendors (CAD programs and other 3D programs) to become dependent on the compatibility mode, so it's essentially here to stay.

I can't say much about the mobile market since I've never developed stuff for it, but I do know that OpenGLES 2 is essentially OpenGL 3 without compatibility mode. Is that what you meant?

I don't think about "market share" and things like that, but I have worked as an engineer in computer science specialized in 2D/3D visualization for more than 7 years, and many of us are mostly satisfied with the way OpenGL has evolved, even though the slowness of some decision makers was really annoying. I prefer evolution with a certain continuity over brutal, questionable changes. Microsoft isn't a good example; it is very good at creating brand-new APIs that only work with one or two versions of Windows and then abandoning them. Two programs I worked on already existed in the nineties, and not all corporations can afford to rewrite their rendering code every three years. OpenGL isn't only for games; it can't be designed only to please some game programmers.

I know that almost nobody here agrees with me on this, but I prefer fighting against planned obsolescence. What theagentd wrote isn't apolitical. "Science without conscience is but the ruin of the soul" (François Rabelais).
I'm not entirely sure I understand what you're saying. Are you saying that I'm not taking all applications of OpenGL into account?

Spasi talked about Regal. JOGL provides immediate mode and fixed pipeline emulation too, in ImmModeSink and PMVMatrix, which is very useful when writing applications compatible with both OpenGL and OpenGL ES.
This is the exact reason why we don't need driver-implemented immediate mode anymore. It took me just a few hours to write a decent clone of immediate mode with support for matrices, colors and texture coordinates. It even takes advantage of persistently mapped buffers to upload the vertex data if they're supported by the driver. Sure, it's probably not as fast as the driver-optimized immediate mode, but frankly, if you're using immediate mode you don't care about CPU performance in the first place.
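For scale, a stripped-down sketch of such a clone (a hypothetical class; positions and colors only, with plain glBufferData standing in for the persistently-mapped path):

import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

//Hypothetical bare-bones immediate mode clone that batches vertices client-side
//and flushes them through a VBO on end()
public class ImmediateBatch {
    private final FloatBuffer data = BufferUtils.createFloatBuffer(6 * 4096);
    private final int vbo = glGenBuffers();
    private float r = 1, g = 1, b = 1;
    private int mode;

    public void begin(int mode){ //e.g. GL_TRIANGLES
        this.mode = mode;
        data.clear();
    }

    public void color(float r, float g, float b){
        this.r = r; this.g = g; this.b = b;
    }

    public void vertex(float x, float y, float z){
        data.put(x).put(y).put(z).put(r).put(g).put(b); //3 position + 3 color floats
    }

    public void end(){
        data.flip();
        int vertexCount = data.remaining() / 6;
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, data, GL_STREAM_DRAW);
        glVertexPointer(3, GL_FLOAT, 6 * 4, 0);
        glColorPointer(3, GL_FLOAT, 6 * 4, 3 * 4);
        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_COLOR_ARRAY);
        glDrawArrays(mode, 0, vertexCount);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
}

Usage then mirrors the old API: batch.begin(GL_TRIANGLES); batch.color(...); batch.vertex(...); ... batch.end();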



I believe that OpenGL drivers have too many features, which slows down their development and increases the number of bugs. If the OpenGL spec only contained the bare minimum required to do everything it currently does, it'd shift the load away from the driver developers and let them focus on optimizing the drivers and adopting new functionality more quickly. The older "features" could easily be implemented by, for example, open source libraries that expose an API more familiar to people coming from older versions of OpenGL, and I don't believe it would be a significant effort for them to implement the parts they need. The important thing is that we'd at least get rid of the completely unused game-related features, like fixed-function multitexturing and lighting.

I think it's clear that the game industry is pretty open to this mentality of quickly deprecating old features; see how quickly developers jumped on Mantle, for example. Maybe you're right, though. Maybe what we really need isn't a new OpenGL version, but a version of OpenGL that is specifically made for games. But that's pretty much what OpenGL 5 SHOULD be. OpenGL 5 most likely won't have any new hardware features. If they completely drop backwards compatibility, it'd essentially be the same functionality as OpenGL 4 but with an API targeted at games. If the "game version" of OpenGL continued to build on the new OpenGL 5, we'd essentially get what we want from OpenGL 5. Non-game applications could still use OpenGL 4, with new features introduced as extensions over time. This wouldn't necessarily decrease the load on driver developers, but it would make the game version of OpenGL faster to develop and maintain.
25  Discussions / Business and Project Management Discussions / Re: what now? on: 2014-07-23 00:10:59
Here are my 2 cents.

I used to start new projects on a whim. I'd get an idea and start coding on it the same day. I generally lasted a few weeks or even a month before I got stuck or bored and just dropped it. Nowadays when I get an idea I think about it for a long time. Something that sounds awesome when you come up with it but gets really meh after just a few days or weeks isn't really a good idea in the first place. I imagine players would get bored of the concept even more quickly than I would, so why settle for anything less than what you yourself would find interesting in the long run?

TL;DR: Don't jump right in on coding your "great" idea. Instead, let it "boil" for a few weeks and see if it still has you hooked. Basically, have one or two ideas cooking while you work on your current "proven" good idea(s).
26  Games Center / Featured Games / Re: We Shall Wake demo (v6.0) on: 2014-07-22 09:50:44
Alright, we'll take a look at it! Thanks for testing and including the log!
27  Discussions / General Discussions / Re: What? Your game isn't a pixelated retro masterpiece? on: 2014-07-19 03:09:05
The term seems to be used very ambiguously

Well, everything where you can still see individual pixels...
Sooo... every single game right now, since proper MSAA is too hard to do?
28  Game Development / Performance Tuning / Re: Can someone explain what is going on with this square rooting? on: 2014-07-18 18:28:53
nanoTime() is fairly inaccurate in most cases. It's better to use currentTimeMillis(). (generally)
Really? I have the exact opposite experience. On some Windows machines currentTimeMillis() has an actual granularity of around 15 ms, i.e. the timer only updates every 15 ms, so you get horrible precision. nanoTime() has always worked reliably for me.
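If you want to check this on your own machine, a quick granularity test (a standalone sketch, not from the thread) is to spin until each clock ticks:

//Measures the smallest observable step of both clocks
public class TimerGranularity {
    public static void main(String[] args){
        long ms0 = System.currentTimeMillis(), ms1;
        while((ms1 = System.currentTimeMillis()) == ms0); //spin until the clock ticks
        System.out.println("currentTimeMillis() step: " + (ms1 - ms0) + " ms");

        long ns0 = System.nanoTime(), ns1;
        while((ns1 = System.nanoTime()) == ns0); //spin until the clock ticks
        System.out.println("nanoTime() step: " + (ns1 - ns0) + " ns");
    }
}

On a machine with the 15 ms timer described above, the first line should print roughly 15, while the nanoTime() step is typically far below a millisecond.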
29  Discussions / Miscellaneous Topics / Re: What I did today on: 2014-07-16 18:33:51
I wrote a small program which does raycasting through a 3D texture.

1. Find MRI image.


2. Split it up into individual frames.

3. Put frames in folder of program.

4. Profit, in the form of broccoli!
30  Java Game APIs & Engines / OpenGL Development / Re: Drawing big Objects far away on: 2014-07-15 15:48:18
What exactly is the "Outerra way"?