2D Lighting Tribulations with Shaders
Online theagentd
« Reply #30 - Posted 2011-08-27 20:13:28 »

Uh, why are you generating mipmaps for an uninitialized texture that contains random data?

For your backbuffer, you don't need mipmaps. You won't be using the lower mip levels anyway, as you'll just copy the backbuffer texture to the real backbuffer.
Some code to setup your backbuffer:
//Texture
IntBuffer id = BufferUtils.createIntBuffer(1);
GL11.glGenTextures(id);
textureID = id.get(0);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);

GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_LINEAR);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_LINEAR);

GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGB, width, height, 0, GL11.GL_RGB, GL11.GL_UNSIGNED_BYTE, (ByteBuffer) null);

//FBO
id.clear(); //reuse the same IntBuffer for the FBO handle
GL30.glGenFramebuffers(id);
fboID = id.get(0);

GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboID);
GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0, GL11.GL_TEXTURE_2D, textureID, 0);
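
(Not in the original snippet, but a quick completeness check right after attaching the texture catches most FBO setup mistakes early. A minimal sketch using the same LWJGL GL30 calls:)

int status = GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER);
if(status != GL30.GL_FRAMEBUFFER_COMPLETE){
    throw new RuntimeException("FBO is incomplete, status: " + status);
}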


I also recommend creating a bind function that also sets the viewport, as all rendering explodes in funny ways if you have a differently sized FBO bound and didn't change the viewport.

public void bind(){
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboID);
    GL11.glViewport(0, 0, texture.getWidth(), texture.getHeight());
}


Offline Rejechted
« Reply #31 - Posted 2011-08-27 20:44:12 »

Thanks, I'll fiddle with this a bit later.

EDIT: So, we have this:



Which is an improvement over a few hours ago. I'm pretty sure this is related to how we set our viewport before we draw the scene. More specifically, we don't use glViewport anymore; we use gluLookAt to get the camera to the right position before rendering our sprites, so I'm assuming somewhere along the way we're missing a step. Still, this is better than nothing. It also lags to hell after a few seconds; I'm sure that's because I haven't made an "init" feature in our game window and am doing all the FBO calls every frame, but right now my main interest is getting a normal render.

For reference, here's the final code which dumps the FBO texture to the screen:

gl.glPopAttrib(); // Restore our glEnable and glViewport states
gl.glBindFramebuffer(GL2.GL_FRAMEBUFFER, 0); // Unbind our FBO so we render to the window again

gl.glEnable(GL2.GL_TEXTURE_2D);
gl.glClearColor(0, 0, 0, 1.0f);
gl.glClear(GL2.GL_COLOR_BUFFER_BIT);
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
gl.glViewport(0, 0, OpenGLGameWindow.screenDimension.width, OpenGLGameWindow.screenDimension.height);
glu.gluLookAt(viewport.getxPosition(), viewport.getBottomLeftCoordinate().y, 1, viewport.getxPosition(), viewport.getBottomLeftCoordinate().y, 0, 0, 1, 0);
gl.glBindTexture(GL2.GL_TEXTURE_2D, fbo_texture);

gl.glBegin(GL2.GL_QUADS);
    gl.glTexCoord2d(0.0, 0.0);
    gl.glVertex2f(0, 0);       // bottom left
    gl.glTexCoord2d(1.0, 0.0);
    gl.glVertex2f(0, 1440);    // bottom right
    gl.glTexCoord2d(1.0, 1.0);
    gl.glVertex2f(1440, 900);  // top right
    gl.glTexCoord2d(0.0, 1.0);
    gl.glVertex2f(0, 900);     // top left
gl.glEnd();

gl.glBindTexture(GL2.GL_TEXTURE_2D, 0);
gl.glDeleteFramebuffers(1, fbo_intbuffer);
gl.glDeleteTextures(1, fbo_tex_intbuffer); // this was glDeleteFramebuffers; the texture should be deleted with glDeleteTextures

gl.glFlush();


It just feels like too much to me, but something tells me that's wrong.  I know a lot of the problem with this distortion has to do with this section of the code, but nothing I change makes any logical sense.

I was also intrigued by this bit:

public void bind(){
    GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fboID);
    GL11.glViewport(0, 0, texture.getWidth(), texture.getHeight());
}


Would you do this before rendering ANY texture to the FBO (sprites, backgrounds, etc.)?  If you bind the framebuffer once before going through your sprite rendering loop, you wouldn't really need to call the first line every time... I think.

Online theagentd
« Reply #32 - Posted 2011-08-28 06:55:40 »

Like I said, FBOs are tricky in this way. I recommend that you make a small FBO test program instead of trying to integrate it into your game. Do a very simple test which binds an FBO, renders a triangle or something to it, then binds the FBO's texture and draws it to the screen. If you get that working, you'll probably get the hang of it.

About the viewport stuff: just remember to set the viewport to the correct values if you change the resolution of whatever you're rendering to. What you're doing right now is fine, I guess. I'm a little bit skeptical about using gluLookAt for a 2D game, though. Wouldn't a simple glTranslate and maybe a glScale be much less error prone? Also, you do know what glOrtho is, right?
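
For reference, a minimal sketch of the glOrtho + glTranslate approach (LWJGL GL11 calls; screenWidth, screenHeight, cameraX and cameraY are made-up names for illustration):

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, screenWidth, 0, screenHeight, -1, 1); // plain pixel coordinates, origin at the bottom left

GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glTranslatef(-cameraX, -cameraY, 0); // shift the world so the camera's corner sits at the origin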

I kind of got carried away and implemented a HDR bloom effect myself. Results:



The vertex colors in RGB are (1, 1, 1) for the two left ones, and roughly (1, 100, 1) for the rest.


There are a few things that I've realized from implementing this.

This is an art. Bloom doesn't really exist IRL, so what we call the bloom effect is just something that we think looks better, so there is no right or wrong.
What tone mapping function you use affects the bloom experience a LOT. Tone mapping functions are, however, more art than science too. This means that you should decide on a tone mapping function FIRST, THEN experiment with bloom settings.

The performance of my implementation is... disappointing. I used 4 levels of bloom in this one, with a 7x7 Gaussian blur. I used HDR textures for all rendering, of course. Without bloom, the time for one frame is about 0.87 milliseconds, but with this (in my opinion small) bloom it takes 5.71 ms. That's nearly 5 ms extra just for a post process. If I use 7 levels I get 6.42 ms. I can see a number of reasons for this low performance.

(EDIT: This was at a resolution of 1600x900.)

 - I'm on a laptop. My GTX 460M isn't exactly top of the line, but it isn't exactly weak either. Desktops should get a lot better performance.
 - My card doesn't get "hot"; it only goes to about 70°C. It's obviously very texture limited. It should be: it's 16 bits per channel, and that's a lot of lookups.
 - For that reason, ATI cards should perform better, as they have more texture performance (I think?).
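
(For anyone curious what such a blur pass looks like: this is not my exact shader, just a sketch of one 7-tap pass of a separable Gaussian blur in the same old-style GLSL used elsewhere in the thread. texelSize is a uniform you'd set to (1/width, 0) for the horizontal pass and (0, 1/height) for the vertical one; the weights are plain binomial coefficients.)

uniform sampler2D sampler;
uniform vec2 texelSize; // (1.0/width, 0) for the horizontal pass, (0, 1.0/height) for the vertical one
varying vec2 tex_coord;

void main(){
    vec4 sum = vec4(0.0);
    sum += texture2D(sampler, tex_coord - 3.0*texelSize) * 0.015625;
    sum += texture2D(sampler, tex_coord - 2.0*texelSize) * 0.09375;
    sum += texture2D(sampler, tex_coord - 1.0*texelSize) * 0.234375;
    sum += texture2D(sampler, tex_coord                ) * 0.3125;
    sum += texture2D(sampler, tex_coord + 1.0*texelSize) * 0.234375;
    sum += texture2D(sampler, tex_coord + 2.0*texelSize) * 0.09375;
    sum += texture2D(sampler, tex_coord + 3.0*texelSize) * 0.015625;
    gl_FragColor = sum;
}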

If you want to test it, I'll try to hack up a stand-alone test. Would be interesting to know your specs, too.

Offline Rejechted
« Reply #33 - Posted 2011-08-28 20:12:50 »

We have a viewport class that moves with the player and tries to represent what we should see, to help convert game world coordinates to what OpenGL needs. Before, we would translate by the negative of where the viewport was, and then translate by the positive of the world coordinates of the thing we were trying to render. That felt kind of weird, so we abandoned that approach and now use LookAt? I'm not really sure; I have to talk to my partner about it, but I swear I've never really seen that used in 2D examples either. I don't really know why it should be causing a problem, but I'll do what you suggested and try to make a little sandbox where I can figure out what is actually going on here.

I'll try to keep code coming, but it might help to see your triangle example, at least the simple HDR + triangle part; I don't really need to see the shaders and such.

Thanks as always.

Online theagentd
« Reply #34 - Posted 2011-08-29 06:44:21 »

I'm pretty sure glViewport actually sets what part of the window you want to draw to. This is useful for rendering split-screen games. You seem to have gotten it right in the code, though.  Wink Your gluLookAt would (probably) just be equal to glTranslate(playerX - screenWidth/2, playerY - screenHeight/2). Also, when doing fullscreen passes, it's easier to just not use the matrices at all: load identity on all of them and send in the vertex coordinates as -1 to 1. It'll minimize these errors.
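
To illustrate the "no matrices" fullscreen pass (a sketch, not code from my renderer; it assumes the FBO texture is already bound):

GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();

GL11.glBegin(GL11.GL_QUADS);
GL11.glTexCoord2f(0, 0); GL11.glVertex2f(-1, -1); // bottom left
GL11.glTexCoord2f(1, 0); GL11.glVertex2f( 1, -1); // bottom right
GL11.glTexCoord2f(1, 1); GL11.glVertex2f( 1,  1); // top right
GL11.glTexCoord2f(0, 1); GL11.glVertex2f(-1,  1); // top left
GL11.glEnd();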
About the triangle example, what part are you interested in? I'm basically just drawing to an HDR texture attached to an FBO and then copying it with tone mapping to the backbuffer.

Offline Rejechted
« Reply #35 - Posted 2011-08-29 11:22:12 »

I'm not sure. I think I need to just play with a sandbox for a while and see exactly why the image, first of all, seems to be drawing as a triangle, and upside down (you mentioned something about that a while back).

Offline Rejechted
« Reply #36 - Posted 2011-08-30 04:25:18 »

Shameless double post:



So the HDR worked. We figured everything out with the FBOs and are now using RGBA16F for the three textures (originally we forgot and used RGBA16 and it didn't work; the F for floating point is key). We generate the int IDs for each of the different FBO elements (1 FBO, 3 textures) and store them as public static values in our game window so we can access them when needed.
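
For anyone following along, the only real difference from the earlier LDR texture setup is the internal format. A sketch in the LWJGL style used earlier in the thread (our actual code is JOGL, so treat the exact class names as an assumption):

GL11.glBindTexture(GL11.GL_TEXTURE_2D, textureID);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL30.GL_RGBA16F, width, height, 0,
        GL11.GL_RGBA, GL11.GL_FLOAT, (ByteBuffer) null);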

Next up is shadows, and then finally a bloom shader.

As an aside, we noticed that there is a distinct "ring" of color near the edge of our light. A while back in the thread I posted the shader that draws the light; I'm not sure if it is at fault, but we suspect it must be. It just feels like the light should make a smoother transition. Maybe it's because the red component of the color gets maxed out. We'll do some research.

Online theagentd
« Reply #37 - Posted 2011-08-30 06:16:23 »

Nice! That ringing is a little concerning though...
Are you using tone mapping when you finally copy the HDR backbuffer to the screen backbuffer? You shouldn't get any rings like that at all if you use proper tone mapping, as you won't get (for example) a clamped red channel.
It might also be OpenGL clamping the color values you supply and the ones it calculates in shaders. You need to disable this, or you won't really get much from the increased precision in the backbuffer.

Try to add this after creating your Display:
GL30.glClampColor(GL30.GL_CLAMP_VERTEX_COLOR, GL11.GL_FALSE); //Disables clamping of glColor
GL30.glClampColor(GL30.GL_CLAMP_READ_COLOR, GL11.GL_FALSE); //Kind of unnecessary but whatever
GL30.glClampColor(GL30.GL_CLAMP_FRAGMENT_COLOR, GL11.GL_FALSE); //Disables clamping of the fragment color output from fragment shaders


If you've already done all these, you could just try to use a different tone mapper. There are lots and LOTS of articles around with code for different tone mappers and their pros and cons.

Offline Rejechted
« Reply #38 - Posted 2011-08-30 11:32:40 »

Such as in the init method of our game window? Or is calling it once before the first render pass, at any time, sufficient?

EDIT: Adding these didn't seem to make a difference, so that clearly wasn't the problem. I'm trying to do some research on how we'd implement tone mapping. It looks to me like a lot of people do it through a shader when they finally render their backbuffer texture to the screen..? I'll post when I know more.

EDIT 2: Found an example of some fragment shaders that use tone mapping equations when they pass the final image to the screen: http://portfolio.kajon.se/Ibr/RealtimeTonemapper_rapport.pdf

EDIT 3: Also this PDF, transporter-game.googlecode.com/files/HDRRenderingInOpenGL.pdf, contains a weird tone mapping param at the end.

My question with tone mapping is: won't a function like color / (color + 1) make the dark areas of the screen much darker than we want them to be?

Online theagentd
« Reply #39 - Posted 2011-08-30 15:47:38 »

You're correct about applying tone mapping during the final copy to the back buffer. Like I said earlier, HDR doesn't accomplish much if you don't use tone mapping. Just do it in a fragment shader.
The dark parts won't get much darker. Just calculate a few values in your head:

Color -> Calculation -> Tone mapped color
0.0 -> 0.0 / 1.0 -> 0.0
0.5 -> 0.5 / 1.5 -> 0.33333
1.0 -> 1.0 / 2.0 -> 0.5
2.0 -> 2.0 / 3.0 -> 0.66666
4.0 -> 4.0 / 5.0 -> 0.8

If the value is small, it will not become very much smaller.


Here is a template tone mapping shader. It's using GLSL 3.30, but you should easily be able to figure out how to convert it to any GLSL version (fragColor -> gl_FragColor, texCoords -> gl_TexCoord[0], etc).
uniform sampler2D sampler;

in vec2 texCoords;

out vec4 fragColor;

void main(){
    vec4 color = texture2D(sampler, texCoords);

    //Insert any tone mapping function here.
    fragColor = color / (color + 1);
}



Here is the function I thought looked the best for what I was doing. You can try it if you want to. It's not even close to as simple as the above one though.
uniform sampler2D sampler;

in vec2 texCoords;

out vec4 fragColor;

const vec3 power = vec3(1/2.2, 1/2.2, 1/2.2);

void main(){
    vec3 color = texture2D(sampler, texCoords).rgb;
    color *= 0.1;  // Hardcoded Exposure Adjustment
    color = color/(1+color);
    fragColor = vec4(pow(color, power), 1);
}


(I've removed some stuff from the code (#version and #include, and some layout() lines) as they are only meant for higher GLSL versions. It would probably be too much to start with that too. xD Just note that the above code won't work copy-pasted.)

Offline Rejechted
« Reply #40 - Posted 2011-08-30 15:57:16 »

I see.  So the "higher colors" like the bright lights will be preserved but just toned down to 0-1.  I'm guessing it preserves the vibrance somehow but I'll just see what it looks like when I get home from work.

I see in the second example you're raising the color to a power, I can grasp what you are actually doing.  It's the "coming up with that" part that we'd be lost at. 

Good to know about the dark colors not getting much darker, it makes sense given what kind of math is actually happening.  As long as the vibrance produced by the HDR in the first place is preserved, it all works out, I guess.

Is the color range for HDR 0-2.0f, or can color values higher than this end up in the FBO textures?

Online theagentd
« Reply #41 - Posted 2011-08-30 16:54:50 »

No, I just didn't want to bother to write out any higher values. xD You should use 16-bit float textures as 32-bit float textures are waaaay too slow. The maximum representable value for 16-bit float (AKA half precision) is 65504. Although you only have about 3 decimals of precision, it's way more than enough to represent colors as we don't have much error buildup. In practice there shouldn't be any difference at all between 16-bit and 32-bit floats when used as render targets that are cleared each frame.
If you go over 65504, though, you might end up at the "infinity" value. If this happens, your bloom blur will blur out infinity (= more infinity), and you'll get pretty funny images. Basically, try to keep your values below 1000. That shouldn't be a problem at all if you're used to having values around 1.  Grin
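(A defensive option, my suggestion rather than something from the shaders in this thread, is to clamp the bright-pass output well below that limit, e.g. in GLSL:)

color = min(color, vec4(1000.0)); // keep well away from half-float infinity before blurring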
Fun fact 2: You can have negative values in float textures. Negative bloom, anyone? xD

Offline Rejechted
« Reply #42 - Posted 2011-08-30 17:07:00 »

Quote
No, I just didn't want to bother to write out any higher values. xD You should use 16-bit float textures as 32-bit float textures are waaaay too slow. The maximum representable value for 16-bit float (AKA half precision) is 65504. Although you only have about 3 decimals of precision, it's way more than enough to represent colors as we don't have much error buildup. In practice there shouldn't be any difference at all between 16-bit and 32-bit floats when used as render targets that are cleared each frame.
If you go over 65504, though, you might end up at the "infinity" value. If this happens, your bloom blur will blur out infinity (= more infinity), and you'll get pretty funny images. Basically, try to keep your values below 1000. That shouldn't be a problem at all if you're used to having values around 1.  Grin
Fun fact 2: You can have negative values in float textures. Negative bloom, anyone? xD

So I guess, since the HDR modifies the proportions of everything on the screen to look glowy and bright, as long as we use a decent formula for this we'll get better contrast and a nicer looking final result, even if we push the colors back under 1.0, since they are scaled a certain way now.

So... I get that we're rendering our images to an RGBA16F target. What I don't really get is this: the color information for my textures/lights/etc. is stored based on the 0-1 range, because this is what I'm used to being restricted to when calling glColor4f. How does having HDR enabled allow me to circumvent this limitation? By using a higher max value than 1 for gl_FragColor in the light rendering? And for sprites, when we import our textures with the texture loader, they are all RGBA8.

UPDATE:



This is what I achieved with the first tone mapping formula. I see what you mean by using whatever tone mapping algorithm I could find and then calling your one-liner at the end. See the update below. It definitely looks nice and smooth and still gets the point across; it eliminates that crazy HDR glow (which could look nice for explosions or effects; not sure if bloom will take care of that). Is this the right track? I know with tone mapping the point is to get every color under 1.0, but given how cool full HDR display can look, I can't help feeling weird about the result. We'll see what happens when shadows are done.

UPDATE 2:

So I'm using this cute little fragment shader.

uniform sampler2D sampler;
varying vec2 tex_coord;

// Control exposure with this value
uniform float exposure;
// Max bright
uniform float brightMax;

void main()
{
    vec4 color = texture2D(sampler, tex_coord);
    // Perform tone-mapping
    float Y = dot(vec4(0.30, 0.59, 0.11, 0), color);
    float YD = exposure * (exposure/brightMax + 1.0) / (exposure + 1.0);
    color *= YD;
    gl_FragColor = color / (color + 1);
    //gl_FragColor = color;
}


I found this in a PDF. The original shader doesn't do your trick of normalizing everything to [0,1], and everything is still all vibrant; the higher the exposure relative to the max brightness, the more vibrant everything looks. When I use your line, everything is obviously reduced to [0,1] due to the nature of the equation, but everything is smoothed out nicely.

Is the intent of bloom to "fill back in" the lost "vibrance" that we lose when we standardize everything to [0,1]? Because without the unlimited color things look somewhat boring, so I assume bloom's intent is to spruce things up a bit. Here's an example with this shader's exposure var set to 0.4f and brightMax set to 1.5f; the screenshot on the left is with your code enabled, the screenshot on the right is using their tone mapping shader with no [0,1] normalization of the color.



Based on what you've told me, the point of the tone mapping step is to get everything into [0,1] to prepare for bloom. I'm assuming that by blurring the lit scene and drawing it on top of itself, we'll artificially add 'bloomy brightness' back to the scene, and the dull look of the left screenshot will go away. Please confirm/deny Smiley I'm just confused now; apparently tone mapping happens after bloom, which means that if we are converting everything back to LDR, we'll lose the vibrance. Unless I'm missing something here.

 

Online theagentd
« Reply #43 - Posted 2011-08-31 11:40:16 »

HDR just allows you to have values over 1.0 stored on the graphics card. There is no way to display that information exactly on a computer monitor, as no monitors or TVs have the contrast/brightness to display it, or even a way to send the floating point data to them. You will have to convert everything to LDR to be able to display it. How you do that is, however, a HUGE topic.
There are way too many tone mapping algorithms out there, and there is no perfect one, as this is something that doesn't happen IRL. The only real criterion for a good tone mapping algorithm is that it looks good, i.e. reinforces the style of the game. You complain about dull colors, but in a horror game that might be desirable. You're just not using the right tone mapper for what you want.
Small note: you don't HAVE to keep the fragColor = color / (color + 1); line. There are other ways to tone map your values; that just happens to be the simplest one. A tone mapper doesn't even have to map every color from 0 to 65000 to [0-1], as long as you're okay with the clamping to 1. Like I said, forget about "real" tone mapping and go with something that looks good for your game.

I believe that the tone mapping fragment shader you posted is meant to have another feature along with it to keep values in the displayable range (0-1): dynamic exposure. As the exposure is a uniform variable, it is possible to change the exposure on the fly. You can either calculate the average luminance of the last frame (like a real camera or your eye) and adjust the exposure based on that value, or have different exposure values for different areas in your game. For example, if your character enters a cave or some other dark place, you can increase the exposure, as it's most likely going to be darker in there. Both of these methods will give you the extremely awesome effect of being blinded by light when you get out of the dark place and into sunlight. The point of such a tone mapper isn't to map every possible color to [0-1], but to map the most common range of colors to [0-1]. If colors darker or brighter than that get clamped, it'll still feel and look natural, as that is more how things look in real life. Personally, I don't think dynamically calculating the exposure would be a very good idea in a 2D game, as you are more likely to have both very dark and very bright objects on the screen at the same time, and they would both be hard to see. Let's go with the cave example again: your character might be inside the cave, but the outside might still cover a large part of the screen, which would make the cave extremely dark due to the exposure being low. I would do it some other way, to actually adjust the exposure to what is important to show.
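
For the average-luminance variant, one common trick is to read the top mip level of the HDR scene texture. This is a sketch in the same GLSL 3.30 style as the template above, not code I'm actually running: it assumes you call glGenerateMipmap on the scene texture each frame and pass its highest mip level in as topLevel; the mid-grey target of 0.5 and the clamp range are arbitrary choices.

uniform sampler2D scene;
uniform float topLevel; // log2(max(width, height)), i.e. the 1x1 mip level

in vec2 texCoords;
out vec4 fragColor;

void main(){
    // average scene color = the single texel of the highest mip level
    vec3 avg = textureLod(scene, vec2(0.5), topLevel).rgb;
    float avgLuminance = dot(avg, vec3(0.2126, 0.7152, 0.0722));

    // aim for a mid-grey average; clamp so the exposure can't explode in a pitch black scene
    float exposure = clamp(0.5 / max(avgLuminance, 0.0001), 0.1, 10.0);

    vec3 color = texture(scene, texCoords).rgb * exposure;
    fragColor = vec4(color / (color + 1.0), 1.0);
}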

Again, there is nothing forcing you to use my color / (color + 1) line. Just use what looks good.

So where does bloom come in? I posted this a long time ago:
Quote
Bloom is a way of presenting the HDR information better. In real life, because your eyes' lenses contain impurities and dust in the air also reflects a small amount of light, the brighter an object is, the "larger" it will seem. Bloom is a way of simulating this effect in games by basically adding an image containing the brightest parts of the scene, blurred, on top of the scene.
For a more concrete example, go out on a sunny day and look at the sun (well, not for too long though xD). Even though the sun is just a small circle in the sky, its light still covers your whole field of view, making it very hard to see anything. As your screen won't be able to blind you with light like that even if you draw something with extremely high HDR colors, you'll have to simulate this yourself. This is called the bloom effect/filter in games.
I said before that you should apply bloom before tone mapping. This is also just a matter of preference. If you use a very simple tone mapper (which you at the moment definitely aren't), this is the case, as bright objects will always be bright. If you, however, change the exposure (= multiply the colors by a value) in some way, I would do things in this order:
renderStuff();
applyExposureToBackbuffer();
applyBloom();
toneMap();

Let's go back to my sun example again. Take a look at this photo:

To take this kind of photo, one would use an extremely short exposure time. Is there much "bloom"? Some, but it's not enough to render the details impossible to see. Bloom should obviously be applied AFTER exposure.

(But it might also make perfect sense to apply bloom after tone mapping in some special cases where you have a very different tone mapping algorithm, for example a linear one.)

Oh, almost forgot:
glColor4f isn't clamped to 0-1 if you've called GL30.glClampColor(GL30.GL_CLAMP_VERTEX_COLOR, GL11.GL_FALSE); (or the corresponding ARB function) earlier, so feel free to supply values higher than 1.0 to it.
gl_FragColor isn't clamped if you've called GL30.glClampColor(GL30.GL_CLAMP_FRAGMENT_COLOR, GL11.GL_FALSE);
Concerning your sprites and any other RGB8 textures you may use: while they will of course only be able to hold [0-1] values, feel free to scale them in any way you want using glColor4f. Even though they have lower precision than your HDR buffer, it shouldn't be noticeable at all due to lighting, linear filtering (if you use it) and tone mapping. You could use float textures for your sprites too, but that would require the sprite source images to actually be HDR too. Just don't; it'll look fine. Do you see any color banding or bad gradients related to the textures in your game? Many 3D games even use RGB8 textures, as the higher-than-1 values appear after lighting, especially specular lighting.

And an important final note: Forget about the color 1 being "bright". There is absolutely nothing wrong with having a light that is brighter than 10.

Offline Rejechted
« Reply #44 - Posted 2011-08-31 13:30:06 »

Quote
HDR just allows you to have values over 1.0 stored on the graphics card. There is no way to display that information exactly on a computer monitor as no monitors or TVs have the contrast/brightness to display it, or even a way to send the floating point data to them. You will have to convert everything to LDR to be able to display it.

I kind of get this, but I can't wrap my head around how I can achieve that insane vibrance (when I don't use an equation that gets the values under 1 again) and actually have it displayed on the monitor, if this is true. It just feels like if I bring everything under 1, I'm again capped by the sprite's native color. With a tone mapping function that brings everything in line with [0,1], there isn't a way to avoid being restricted by the texture's native color. You then go on to say, however:

Quote
Both of these methods will give you the extremely awesome effect of being blinded by light when you get out of the dark place and into sunlight. The point of such a tone mapper isn't to map every possible color to [0-1], but to map the most common range of colors to [0-1]. If there are clamping of colors darker or brighter than that, it'll still feel and look natural, as that is more how things look in real life.

So here we have an example of a tone mapper that doesn't quite map everything to [0,1], assuming some values are left above 1 (as can clearly be seen in the picture). So is the point not necessarily to get everything to [0,1], but to apply a scaling function to the colors on the screen to ensure the best possible display on an LDR screen? Does this mean that the graphics card/OpenGL does "something" to the colors that are still higher than 1 at render time, to make them displayable? This is the only way for me to rationalize what is going on, because obviously they're being displayed, and obviously the vibrance is being retained.

Quote
renderStuff();
applyExposureToBackbuffer();
applyBloom();
toneMap();

Is the exposure in this case just a flat multiplier on the colors in the backbuffer? I.e., a camera's exposure is just letting more light come in before closing the shutter. The bright spots get brighter by a larger amount than the dark spots because more light is coming into the lens every second.

Quote
Concerning your sprites and any other RGB8 textures you may use: while they will of course only be able to hold [0-1] values, feel free to scale them in any way you want using glColor4f. Even though they have lower precision than your HDR, it shouldn't be noticeable at all due to lighting, linear filtering (if you use it) and tone mapping.

This is good to know. This should solve the problem of having to change to additive blending when I'm rendering a sprite that I want to flash white. I'm still sort of stuck in the mentality that anything related to glColor must be locked into [0,1], but I guess I'll just have to slowly get over this.

Status update on the lighting project: we have everything working correctly with multiple HDR textures, shadow geometry is being drawn (albeit in immediate mode; we still don't quite get how to do otherwise), and all is fine up until bloom. I'm applying a darkening shader (to remove the dark spots) by setting some arbitrary color threshold (say, 1.5 pre-tonemapping) and just doing fragColor = color - threshold, so colors that are below the threshold simply go to zero (or do they actually go negative?). Unfortunately, when I do this for the bloom filter and enable additive blending, the rest of the scene still ends up black somehow (where the non-bright spots are). I think the problem might be my function, because if I can have negative colors stored on my GPU and I'm using additive blending, I'm actually subtracting color from the dark areas when I add the bloom map to the scene.

Multitexturing is also causing me a bit of a headache. I don't really know what GLSL version we're using, but instead of texCoord2D[0] I have to do this somewhat hacky thing of making a vec2 called tex_coord0, 1, whatever, in my vertex shader, and use that in my texture2D call along with the sampler. I also have no idea what a uniform sampler2D is, and why it feels like sometimes I need to pass it in from the application and other times OpenGL just magically knows what that variable is.

I've tried glActiveTexture(GL_TEXTURE0) and glActiveTexture(GL_TEXTURE1) before each call to bindTexture, so that my shader sees all three textures. So far no luck. I'm not even sure what the final gl_FragColor would be; would it just be the three bloom maps added together? Even though all three textures are a different size, I'm drawing them on top of the same quad (the scene), so it shouldn't matter.

+100 internets for your post.  Thanks again.



Online theagentd
« Reply #45 - Posted 2011-08-31 14:32:34 »

Glad to see things clearing up!
Quote
So here we have an example of a tone mapper that doesn't quite map everything to 0,1; assuming that they are left above 1 (as can clearly be seen in the picture).  So is the point not to necessarily get everything to [0,1], but to apply a scaling function to the colors on the screen to ensure the best possible display on an LDR screen?  Does this mean that the graphics card/OpenGL does "something" to the colors that are still higher than 1 at render time, to make them displayable?  This is the only way for me to rationalize what is going on, because obviously they're being displayed, and obviously the vibrance is being retained.
No, nothing fancy happens. When you finally write to the LDR backbuffer, they will just be clamped to [0-1]. This isn't as bad as it might seem though.

Quote
Is the exposure in this case just a flat multiplier to the colors in the backbuffer? i.e. a camera's exposure is just letting more light come in before closing the shutter.  The bright spots get brighter by a larger amount than the dark spots because more light is coming into the lens every second.  
Yes, exposure is just a multiplier. The combination of exposure and bloom will give quite nice results. If you want to see the dark details in a scene, you have to increase the shutter time (i.e. the exposure), but if you have something bright, you'll just get a very bright image as the bright part blooms over the whole screen. It's just like a real camera! =D

Quote
This should solve the problem of having to change to additive blending when I am rendering a sprite that I want to flash white.
Indeed, just draw it with glColor3f set to a high value, maybe (10, 10, 10) or so, or just draw it with an alpha of 10, as that would just multiply each color by 10, giving the exact same result. xD

Quote
I am still sort of stuck in the mentality that anything related to glColor must be locked into [0,1] but I guess I'll just have to slowly get over this.
Get over it, man! You're too good for LDR!

Quote
Status update on the lighting project: We have everything working correctly with multiple HDR textures, shadow geometry is being drawn (albeit in immediate mode, we still don't quite get how to do otherwise),  and all is fine up until bloom.  I'm applying a darkening shader (to remove dark spots) by setting some arbitrary color threshold (say, 1.5 pre-tonemapping), and just doing fragColor = color-threshold, and colors that are below the threshold will simply go to zero (do they actually go negative?).  Unfortunately, when I do this for the bloom filter I and enable additive blending, the rest of the scene still ends up black somehow (where the non-bright spots are).  I think the problem might be my function, because if I can have negative colors stored on my GPU, and I'm using additive blending, I'm actually subtracting color from the dark areas when I add the bloom map to the scene.
Bwahahahaha! Yes, floats can be negative... xD Ahahahahaha...!
You need to clamp each color channel to 0, but doing it manually would be stupid. Just use the built-in max() function to clamp each channel individually and fast!
gl_FragColor = max(texture2D(sampler, gl_TexCoord[0].st) - threshold, vec4(0, 0, 0, 0));


Quote
Multitexturing is also causing me a bit of a headache.  I don't really know what GLSL version we're using, but instead of texCoord2D[0], I have to do this weird somewhat hacky way of making a vec2 called tex_coord0, 1, whatever, in my vertex shader, and use that in my Texture2D call along with the sampler.  I also have no idea what a uniform sampler2D is, and why it feels like sometimes I need to pass it in from the application and other times openGL just magically knows what that variable is.  
Yes, in the latest GLSL versions all non-essential built-in variables have been removed. The only ones that are left are the really necessary ones, like gl_Position and gl_PointSize in vertex shaders. Even gl_FragColor/gl_FragData have been removed and have to be replaced manually! I think this is good, as you'll just have the ones you need.
It is pretty rare to need more than one set of texture coordinates. Mostly you're sampling the same place on 2+ textures (texture mapping + normal mapping + specular mapping, for example), so there is no need for 2+ sets of texture coordinates as they would be identical. If you're sampling from two very different textures, like a color texture and a light map, you will however need two sets of texture coordinates.

Quote
I've tried glActiveTexture(GL_TEXTURE0), glActiveTexture(GL_TEXTURE1) before each call to bindTexture, so that my shader sees all three textures.  So far no luck.  I'm not even sure what the final gl_FragColor would be, would it just be the three bloom maps added together?  Even though all three textures are a different size, I'm drawing them on top of the same quad (the scene) so it shouldn't matter.
You might want to call glEnable(GL_TEXTURE_2D); on each active texture unit, but I'm 99.9% sure you don't have to when you use shaders.
The reason why single texturing works is that the samplers default to 0 (obviously), so all samplers are automatically bound to the first texture unit on shader compilation. If you want to sample from multiple textures, you need to set the samplers manually with glUniform1i() to different texture units (that's 0, 1, 2, 3, etc., NOT texture IDs!!!).
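
A sketch of what that looks like on the application side, in the LWJGL style used earlier in the thread (the uniform names, program handle and texture IDs are placeholders):

GL20.glUseProgram(program);
GL20.glUniform1i(GL20.glGetUniformLocation(program, "sceneSampler"), 0); // texture unit 0, NOT a texture ID
GL20.glUniform1i(GL20.glGetUniformLocation(program, "bloomSampler"), 1); // texture unit 1

GL13.glActiveTexture(GL13.GL_TEXTURE0);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, sceneTextureID);
GL13.glActiveTexture(GL13.GL_TEXTURE1);
GL11.glBindTexture(GL11.GL_TEXTURE_2D, bloomTextureID);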

Really, all the twisted fixed functionality was driving me crazy, so I just started working solely with OpenGL 3.0. Sure, it's a lot more code to get a single triangle on the screen, but it's much faster and more elegant for anything more advanced than that. No more glEnable crap exploding all the time! Okay, glEnable(GL_BLEND) but that doesn't count! T_T

Quote
+100 internets for your post.  Thanks again.
Jeez, thanks! I'm gonna hang them on my wall!  Grin

EDIT: I forgot one thing. You were complaining about your colors not being vibrant. In this awesome article on tone mapping there was a comment in which someone complained about his colors not being vibrant. He also linked(/made?) this article: http://imdoingitwrong.wordpress.com/2010/08/19/why-reinhard-desaturates-my-blacks-3/ It seems very interesting and relevant. Just look at the screenshots!

Offline Rejechted
« Reply #46 - Posted 2011-08-31 15:17:57 »

Quote
I forgot one thing. You were complaining about your colors not being vibrant. In this awesome article on tone mapping there was a comment in which one of them complained about his colors not being vibrant. He also linked(/made?) this article: http://imdoingitwrong.wordpress.com/2010/08/19/why-reinhard-desaturates-my-blacks-3/ It seems very interesting and relevant. Just look at the screenshots!

Damn. This suddenly makes everything seem a lot more logical. I don't quite understand the luminance of a pixel yet, but this is definitely helping me to understand what's going on under the hood of the shader we're implementing. It's preserving the saturation by converting it into the equivalent LDR colors per pixel. Unless I'm crazy or something. It's probably a bit more complicated, but I haven't gotten to read it thoroughly.

When I get home and have time to read what you wrote about multitexturing in a single shader, I'll hopefully be able to post with good results... we'll see. It's not very intuitive, and most examples I find are just GLSL and completely assume that your application code is going to be 100% correct (binding the right textures in the right order and passing the correct uniforms). As far as converting to 3.0 code goes, we're shooting for just getting this to work with what we understand instead of trying to implement 4 new technologies at once and then not being sure what's screwing us over. Optimize later, and all that. But hopefully it's not too much more of a headache before we have something good.

Online theagentd
« Reply #47 - Posted 2011-08-31 16:10:16 »

I believe the reason for colors looking "washed out" is that most tone mappers apply their function per channel. All the tone mappers I've posted so far apply a non-linear function to each color channel: R, G, and B. That means that the tone mapping will change the actual color, as each channel changes non-linearly.
Now what does that do in practice?
Let's say we use the simple tone mapper fragColor = color / (color + 1). We have the color (5, 0, 10). This would be some kind of dark purple, right? However, after tone mapping we get:

(5, 0, 10) -> (5/6, 0/1, 10/11) = (0.833, 0, 0.91)

We suddenly end up with something that looks a lot more like magenta than a bluish purple! The ratio between red and blue (originally 5:10 = 1:2) has changed a lot, as it is now about 1 : 1.1!
This tone mapper causes every color to approach white as the brightness increases. In practice, this means it grays/whites out everything, i.e. you lose color vibrancy. This is, however, similar to camera film IRL, but you shouldn't care about filmic tone mappers for games, especially if you work in 2D and use sprites. Instead, use a tone mapper that better preserves the ratio between the channels. The article I linked proposes calculating the luminance (the exact same thing as converting the color to gray scale), tone mapping the luminance, and finally scaling the colors by the ratio of the tone mapped luminance to the original luminance. This preserves the RGB ratios, and might be exactly what you want. I think an approach like this would be better for a sprite based 2D game.

EDIT:
I seem to have ****ed up the URLs in my earlier post, so I'll just do this the easy way:
Good article on tone mapping: http://filmicgames.com/archives/75
Better colors from tone mapping: http://imdoingitwrong.wordpress.com/2010/08/19/why-reinhard-desaturates-my-blacks-3/

Offline Rejechted
« Reply #48 - Posted 2011-08-31 17:18:14 »

Quote
The article I linked proposes calculating the luminancy (Exact same thing as converting the color to gray scale), tone map the luminance and finally scale the colors by this tone mapped luminance. This would preserve the RGB ratio, and might be exactly what you want. I think an approach like this would be better for a sprite based 2D game.

This makes a lot of sense, and now I'm seeing the pixel shader I found in a new light (the original one handles bloom and the tone mapping together; I took out the bloom multiplication and do it separately):

Quote
uniform sampler2D sampler;
varying vec2 tex_coord;
 
// Control exposure with this value
uniform float exposure;
// Max bright
uniform float brightMax;
 
void main()
{
    vec4 color = texture2D(sampler, tex_coord);
    // Perform tone-mapping
    float Y = dot(vec4(0.30, 0.59, 0.11, 0), color);
    float YD = exposure * (exposure/brightMax + 1.0) / (exposure + 1.0);
    color *= YD;
    gl_FragColor = color;
}

There's that intensity vector that figures out the grayscale luminance. There's... a problem though... where does it ever use the value Y again? It seems like the creator put that in there and then just never uses it, unless there's something about GLSL that I'm missing. So the steps needed would be to multiply the luminance Y by the YD tone mapping and then reverse the conversion to make that the final fragment color. I have no idea if this would work.

Upon further examination, it looks like this function IS indeed the Reinhard operator. The difference is that brightMax is not squared in this equation (easy enough). The shader I have here doesn't seem to make use of the calculated float Y, which seems to be important in preserving the color ratios by using luminance instead. In the second-to-last example in the second link, where he loses all his whites, he converts his color to grayscale, applies the color/(color+1), then converts back. However, in his final example he uses the Reinhard operator ON THE LUMINANCE-SCALED color, then changes it back.

If my shader is already doing this somehow, I'm not sure why it's not more evident in the code (since the value Y doesn't seem to be doing much of anything).

Online theagentd
« Reply #49 - Posted 2011-08-31 17:59:47 »

You're right, it's not doing anything with the Y variable. There's also something very wrong with the YD line. Currently it calculates a value based only on exposure and brightMax, which will obviously be constant unless you change exposure/brightMax manually. Right now it's just scaling the colors by a fixed value. Definitely wrong. I'm almost certain that this is how it should be:
uniform sampler2D sampler;
varying vec2 tex_coord;

// Control exposure with this value
uniform float exposure;
// Max bright
uniform float brightMax;

void main()
{
    vec4 color = texture2D(sampler, tex_coord);
    // Perform tone-mapping
    float Y = dot(vec4(0.30, 0.59, 0.11, 0), color);
    float YD = exposure * (Y/brightMax + 1.0) / (Y + 1.0);
    color *= YD;
    gl_FragColor = color;
}


It... It... It just makes sense...
EDIT: Oh, wait, it doesn't.
Why not just remove brightMax, multiply Y by exposure and replace YD with
float YD = Y / (Y+1);

? That would probably work fine.

Offline Rejechted
« Reply #50 - Posted 2011-08-31 18:06:45 »

I'd like to challenge this and propose that the correct function is this:

float YD = Y * (Y/(brightMax*brightMax) + 1.0) / (Y + 1.0);


The exposure, as I understand it, is just a scale factor. In fact, if you look at the tone mapping operator in the second article you linked me, towards the bottom, it's this:

L_d(x,y) = L(x,y) * (1 + L(x,y) / Lwhite^2) / (1 + L(x,y))

Nowhere in there does it say anything about exposure. I'm not really sure where you'd multiply in the exposure if this is the case, but it definitely seems like it's using L(x,y) in all three places in the equation (where we're using Y, the luminance). It seems like if we wanted to use "exposure", we'd have to multiply it into the original color. It seems like something you'd want to play with, but for some reason, only having L(x,y), Y, or whatever you want to call it in two of the three spots doesn't feel right when taken in the context of this article.

Maybe your way with just multiplying the entire thing through by the exposure would work, I don't know.  He also uses something like:

double L = dot(vec3(0.2126, 0.7152, 0.0722), color);
double nL = ToneMap(L);
double scale = nL / L;
color *= scale;
gl_FragColor = color;

Where his L is our Y, and ToneMap(L) returns our YD. The division nL / L seems to undo what he did in the first line, giving the right ratio to scale the original color by. I'm not sure where in this process you'd apply exposure, assuming my modified YD calculation is correct. Additionally, the luminance vector he uses is different from the one I found originally, but I'm assuming this will just result in a slightly different outcome and is a matter of preference.
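
My best guess at where an explicit exposure would slot in, if we wanted one on top of the article's operator (just a sketch, matching the renderStuff/exposure/bloom/toneMap order from earlier; toneMap() here stands for the article's luminance-only function):

vec4 color = texture2D(sampler, tex_coord) * exposure; // exposure scales the raw HDR color first
float L = dot(vec3(0.2126, 0.7152, 0.0722), color.rgb);
float nL = toneMap(L);        // the luminance-only operator from the article
color.rgb *= nL / L;          // rescale by tone mapped / original luminance
gl_FragColor = color;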

Online theagentd
« Reply #52 - Posted 2011-08-31 18:20:44 »

I reread the article, and that formula is only for tone mapping the luminance value. It should be complemented by this code:
double L = 0.2126 * R + 0.7152 * G + 0.0722 * B;
double nL = ToneMap(L);
double scale = nL / L;
R *= scale;
G *= scale;
B *= scale;

This one doesn't have any exposure variable though. I think you're meant to control the exposure indirectly through the brightMax uniform.

uniform sampler2D sampler;
varying vec2 tex_coord;

// Max bright
uniform float brightMax;

float toneMap(float luminance){
    //This is the function in the article
    return luminance * (1 + luminance / (brightMax*brightMax)) / (1 + luminance);
}

void main()
{
    vec4 color = texture2D(sampler, tex_coord);
    // Perform tone-mapping
    float luminance = dot(vec4(0.30, 0.59, 0.11, 0), color);
    float toneMappedLuminance = toneMap(luminance);
    gl_FragColor = color * toneMappedLuminance;
}


WARNING: PROGRAMMED IN POST.
Probably lots of errors.

I think this would be the tone mapper used in the article.

Offline Rejechted
« Reply #53 - Posted 2011-08-31 18:22:36 »

Yep, looks like we basically came to the same conclusion. This seems pretty solid, if it works as advertised. Can't wait to test it out. I guess if you wanted a really bright scene you'd just up brightMax (coming out of the cave, etc.).

Online theagentd
« Reply #54 - Posted 2011-08-31 18:26:04 »

Nonono, brightMax defines the LOWEST value that will be mapped to 1.0 on the screen. If you increase brightMax, the whole scene will get darker! You'd want to reduce brightMax in caves. It's like inverse exposure (if I got this right).
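
Plugging luminance = brightMax into the toneMap() function above shows why:

toneMap(brightMax) = brightMax * (1 + brightMax / (brightMax*brightMax)) / (1 + brightMax)
                   = brightMax * (1 + 1/brightMax) / (1 + brightMax)
                   = (brightMax + 1) / (1 + brightMax)
                   = 1.0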

Offline Rejechted
« Reply #55 - Posted 2011-08-31 18:28:47 »

Quote
Nonono, brightMax defines the LOWEST value that will be mapped to 1.0 on the screen. If you increase brightMax, the whole scene will get darker! You'd want to reduce brightMax in caves. It's like inverse exposure (if I got this right).

Oh, oh, oh, I see. Yeah, I remember now, actually. When I had the exposure as a uniform along with brightMax, the brightness of the scene was dependent on the exposure relative to brightMax (exposure being a higher percentage of brightMax would result in a brighter scene).

I like this better because it more accurately reflects what's actually happening, and if I want a way to control exposure directly I can just put that somewhere else. Doing the exposure within the luminance-modifying toneMap function just seems wrong, since the exposure is a scale of the actual HDR color, not of the color ratios, if I'm thinking about it right.

Online theagentd
« Reply #56 - Posted 2011-08-31 18:33:59 »

Yes, I agree. I also don't know how to implement bloom with this. You can't apply it before the tone mapping, as that would make bright objects overbloomed regardless of exposure/brightMax. You might be able to apply it after tone mapping, as this operator doesn't reduce all colors to LDR (if a pixel's luminance is over brightMax, it should end up over 1), but I don't know how that would look. You also wouldn't be able to tone map the bloom. Once again, I have no idea how that would look. I have to sleep now though, so no more posting today... I'll try this tone mapper tomorrow.

Offline Rejechted
« Reply #57 - Posted 2011-08-31 18:39:55 »

Quote
Yes, I agree. I also don't know how to implement bloom with this. You can't apply it before the tone mapping, as that would make bright objects overbloomed regardless of exposure/brightMax. You might be able to apply it after tone mapping, as this operator doesn't reduce all colors to LDR (if a pixel's luminance is over brightMax, it should end up over 1), but I don't know how that would look. You also wouldn't be able to tone map the bloom. Once again, I have no idea how that would look. I have to sleep now though, so no more posting today... I'll try this tone mapper tomorrow.

Good night, and thanks. I think applying the bloom last in this case wouldn't necessarily be terrible as long as you don't go overboard with the bloom effect... I'll play around with it. If the bloom ends up being too intense with this method, could you blend the bloom in with an alpha < 1 so that it doesn't overpower the image? I also think that if you set your threshold for capturing which parts of the screen should be bloomed, you can still collect only the brightest of the bright spots for the Gaussian shader. Again, I'll spend tonight tweaking this and post my results.

EDIT: Fixed the darkener; using three passes for stacking the bloom right now (still having problems with passing multiple textures to one shader).



The weird red effect is due to the fact that the center of the light is extremely bright and the darkness filter isn't 100% perfect. You basically set one uniform that is the threshold for the darkener; what passes it is bloomed 3 times with downsampling and smacked onto the scene with additive blending. It's pretty quick and dirty. This effect is a little extreme, and I doubt there'd be something like this in-game except for the 'coming out of a cave' example. I ended up with a modified version of the tone mapping shader we discussed earlier that actually converts the luminance back onto the color vector before outputting it.

uniform sampler2D sampler;
varying vec2 tex_coord;

// Max bright
uniform float bThresh;

float toneMap(float luminance){
    //This is the function in the article
    return luminance * (1 + luminance / (bThresh*bThresh)) / (1 + luminance);
}

void main()
{
    vec4 color = texture2D(sampler, tex_coord);
    float L = dot(vec4(0.2126, 0.7152, 0.0722, 0), color);
    float nL = toneMap(L);
    float scale = nL / L;
    gl_FragColor = color * scale;
}


This uses a combination of our code and the article's. Everything can be tweaked: the darkness threshold, maxBright, the intensities and colors of lights, you name it. There are a ton of variables, so it's a lot of trial and error to get something that looks nice, but I think we are definitely on the right train of thought. How bloom is going to interact with shadows for very bright lights is impossible to say at this point.
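
For anyone trying to reproduce this, the pass order described above boils down to roughly the following (pseudocode; the method names are just placeholders for our own helpers, not a real API):

renderSceneToHdrFbo();                            // full resolution RGBA16F scene
brightPass(sceneTexture, bloomFbo0);              // max(color - bThresh, 0), into a smaller FBO
blurAndDownsample(bloomFbo0, bloomFbo1);          // blur, then halve the resolution again
blurAndDownsample(bloomFbo1, bloomFbo2);          // and once more
addBloomToScene(bloomFbo0, bloomFbo1, bloomFbo2); // additive blending back onto the scene
toneMapToScreen(sceneTexture);                    // the luminance-based shader above, as the final pass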

Online theagentd
« Reply #58 - Posted 2011-09-01 03:57:59 »

Oh, jeez. Sorry, I forgot the scale part!!! My code quality is equal to the inverse of the time since I woke up. Yours is obviously correct! I'll try it out and post my results after school...

Offline Rejechted
« Reply #59 - Posted 2011-09-01 04:04:02 »

Quote
Oh, jeez. Sorry, I forgot the scale part!!! My code quality is equal to the inverse of the time since I woke up. Yours is obviously correct! I'll try it out and post my results after school...

Cool, let me know how it looks.

Yeah, based on the intensity of the light, the darkener threshold, and the maxBright param, we get some varied effects. I've managed to rig up the demo in such a way that it tones down that crazy reddish look quite nicely for the intensity of light we're using. I'm doing the tone mapping at the very end, right before the final render. I forget if this is wrong or right... but I haven't tried moving the bloom code after the tone map, because I assume we'd then get ridiculous over-brightness unless we tone mapped again, which just seems wrong.
