Lighting in LWJGL and OpenGL 2
obsidian_golem

Senior Newbie

 « Posted 2012-04-24 20:10:01 »

I am trying to figure out lighting in OpenGL 2 with LWJGL, and I have two problems. The first is a lack of information: I do not understand how lighting works. Could someone point me to some good information on how lighting is done? My other problem is that I am using LWJGL's built-in matrix classes. The tutorials I have found use a matrix called the normal matrix (or the OpenGL 2 equivalent) to transform the normals. As I understand it, the OpenGL 2 equivalent of the normal matrix is the transposed inverse of the top 3x3 elements of the model matrix. The functions to do this do not exist in LWJGL's Matrix class, and I do not know the math behind them. Do I really need the normal matrix, and if I do, how do I get it?
sproingie

JGO Kernel

Medals: 202

 « Reply #1 - Posted 2012-04-24 21:57:34 »

I would recommend using the matrix classes in javax.vecmath rather than the exceedingly crippled classes in org.lwjgl.util.  The former has a normalize method, the latter doesn't have much of anything.
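For reference, the inverse-transpose itself is only a few lines of arithmetic if you'd rather not pull in a library. A plain-array sketch (the method name and layout are made up; this is not an LWJGL or vecmath API):

```java
/** Sketch: normal matrix = transpose(inverse(upper-left 3x3 of the model matrix)).
    Uses a plain row-major float[9]; illustrative only, not a library API. */
public class NormalMatrix {
    /** Returns transpose(inverse(m)) for a 3x3 matrix, or null if m is singular.
        Since inverse(m) = adjugate(m)^T / det, the inverse-transpose is just
        the cofactor matrix divided by the determinant. */
    static float[] inverseTranspose3x3(float[] m) {
        // cofactors of the first row
        float c00 = m[4]*m[8] - m[5]*m[7];
        float c01 = m[5]*m[6] - m[3]*m[8];
        float c02 = m[3]*m[7] - m[4]*m[6];
        float det = m[0]*c00 + m[1]*c01 + m[2]*c02; // expansion along row 0
        if (Math.abs(det) < 1e-12f) return null;    // not invertible
        float inv = 1f / det;
        return new float[] {
            c00*inv, c01*inv, c02*inv,
            (m[2]*m[7]-m[1]*m[8])*inv, (m[0]*m[8]-m[2]*m[6])*inv, (m[1]*m[6]-m[0]*m[7])*inv,
            (m[1]*m[5]-m[2]*m[4])*inv, (m[2]*m[3]-m[0]*m[5])*inv, (m[0]*m[4]-m[1]*m[3])*inv
        };
    }
}
```

For a pure rotation the result equals the input, and for a uniform scale s it is just the identity times 1/s, which is why engines that know their model matrix has no non-uniform scaling often skip the inverse-transpose entirely.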
theagentd
 « Reply #2 - Posted 2012-04-25 12:45:59 »

I just made my own function to extract those numbers using the LWJGL library...

If you use the built-in matrices in OpenGL you can just use gl_NormalMatrix in your shader, but if you want to use only your own attributes/uniforms you'll have to calculate it yourself the way you wrote.

Diffuse lighting is very simple: you pretty much just calculate the cosine of the angle between the normal and the light direction vector, and that's it. This should be done in eye space though, so the normals have to be rotated, which is what multiplying them by the normal matrix does. There are basically 3 different kinds of lights:

- Directional lights. Example: the sun. This is the easiest one: just upload the light direction vector to your shader, do a dot product and you're done.

- Point lights. Example: a light bulb. This one is slightly more difficult. You upload the light's eye-space position, calculate a light direction vector per pixel (normalize(pixelEyePosition - lightPosition), or was it the other way around? xD) and use that instead of a constant direction as with directional lighting.

- Spot lights. Example: a flashlight. This one uses 3 variables: a position, a direction, and an angle stored as the cosine of that angle. It is obviously the most difficult one, but not by much; it's pretty much a fusion of the math done for directional and point lights. First we calculate lighting just as we did for point lights, but we also need to take the spot light's direction and the angle of the light cone into account. This is done by taking the dot product of the calculated light direction to the pixel and the direction of the spot light. Together with the cosine of the light cone (supplied from the program to the shader) we can limit the light to a cone in front of the spot light.
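The directional dot product and the spot-light cone test above boil down to a few lines of vector math each. A plain-Java sketch (illustrative names, not shader code or any LWJGL API; all direction vectors assumed normalized):

```java
/** Sketch of the per-light math described above, on plain float[3] vectors. */
public class LightMath {
    static float dot(float[] a, float[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    /** Directional light: cosine of the angle between the normal and the
        direction *to* the light, clamped so back-facing surfaces get zero. */
    static float directionalDiffuse(float[] normal, float[] toLight) {
        return Math.max(dot(normal, toLight), 0f);
    }

    /** Spot-light cone test: compare cosines instead of angles, which avoids
        an acos. Note the inequality flips (larger cosine = smaller angle).
        This is a hard cutoff with no soft edge. */
    static boolean insideCone(float[] spotDir, float[] lightToPixel, float cosCutoff) {
        return dot(spotDir, lightToPixel) >= cosCutoff;
    }
}
```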

Another thing you can add to point and spot lights is distance falloff. We can use the calculated light direction vector before it's normalized to do this, since its length is the distance. Real-life lights never disappear completely with distance; you wouldn't be able to see a lot of things if light didn't travel infinitely. The most accurate way is therefore to just divide the calculated intensity by distance^2, which is what happens in real life. However, it's pretty impractical for lights to reach infinitely far in games, since a point light, no matter how weak, will always affect everything in the world. Therefore most lighting engines use a falloff equation that does reach zero at a certain distance.
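Putting the per-pixel point-light direction and the inverse-square falloff together, a Java sketch of the math (illustrative names, not shader code):

```java
/** Point-light diffuse with inverse-square distance falloff. The unnormalized
    pixel-to-light vector supplies both the direction and the distance, as
    described above. 1/d^2 is the physically based curve; real engines often
    swap in one that reaches zero at a finite range. */
public class PointLight {
    static float shade(float[] pixelPos, float[] lightPos, float[] normal, float intensity) {
        float lx = lightPos[0]-pixelPos[0], ly = lightPos[1]-pixelPos[1], lz = lightPos[2]-pixelPos[2];
        float dist2 = lx*lx + ly*ly + lz*lz;      // squared distance, reused for falloff
        float dist = (float) Math.sqrt(dist2);
        float dx = lx/dist, dy = ly/dist, dz = lz/dist; // normalized direction to the light
        float diffuse = Math.max(dx*normal[0] + dy*normal[1] + dz*normal[2], 0f);
        return intensity * diffuse / dist2;       // inverse-square falloff
    }
}
```

Run with the worked example two paragraphs down (a pixel facing a light 4 meters away, intensity 1) this returns 1/16, matching the cos(0) / 4^2 figure.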

Oh, I forgot: of course all 3 light types have an intensity value. If you use falloff, this value can be a lot higher than 1, even though you can't store color values over 1 in the standard framebuffer. Consider a pixel 4 meters away from a point light. The pixel is perfectly facing the light, so the angle between the pixel's normal and the calculated light direction vector equals 0. The cosine of 0 degrees is 1.0, and the pixel's texture value is multiplied by this value. However, the pixel is 4 meters away from the light! With a falloff of distance^2, we end up with an intensity of cos(0) / 4^2 = 1/16. In this case, it makes perfect sense to have a light with an intensity much higher than 1.

This is pretty much the basics of the math behind lights. I don't have a working computer, so I can't give you much code, but the tutorials are out there everywhere. If you run into any problems with them, just ask. =D This is literally just scratching the surface of lighting, though. There's specular lighting, shadow mapping, different lighting equations to give the look of different materials, HDR rendering, tone mapping, bloom, deferred shading, ambient occlusion, global illumination, volumetric lighting, reflections... I mean, you can stay busy for a lifetime.

Myomyomyo.
Roquen

JGO Kernel

Medals: 517

 « Reply #3 - Posted 2012-04-25 12:50:48 »

Don't forget BRDF, BTDF, and constructive methods like Spherical Harmonics, etc. etc.  Yeah you could spend a lifetime on this stuff.
obsidian_golem

Senior Newbie

 « Reply #4 - Posted 2012-04-25 14:14:03 »


Does javax.vecmath include a function to get the top 3x3 elements of a matrix? I do not know how to do this myself.
DavidW

Junior Devvie

Medals: 3
Exp: 7 years

 « Reply #5 - Posted 2012-04-25 14:32:49 »

You said you are using lwjgl's matrix class?  You can extract those numbers like this:

```java
Matrix3f threeFromFour(Matrix4f in) { // you should use a better name than this
    Matrix3f out = new Matrix3f();
    /* You can access the individual elements of a Matrix object directly
       as matrix.mAB, where A is the column and B is the row.
       (I might have that backwards...? This will still work though.)
       Remember, indexing starts at zero! */
    out.m00 = in.m00;
    out.m01 = in.m01;
    out.m02 = in.m02;
    out.m10 = in.m10;
    out.m11 = in.m11;
    out.m12 = in.m12;
    out.m20 = in.m20;
    out.m21 = in.m21;
    out.m22 = in.m22;
    return out;
}
```

Hope this helps.

Hello!
theagentd
 « Reply #6 - Posted 2012-04-25 17:07:35 »

Exactly, they're just public variables.

Myomyomyo.
obsidian_golem

Senior Newbie

 « Reply #7 - Posted 2012-04-25 20:47:35 »

I am running into another few problems. First, when I multiply my normal matrix (obtained by the method a couple of posts back) by the current point and normalize it, the light does nothing. I am using separate model, view and projection matrices, and I am using the model matrix as the base for the normal matrix. Am I using the wrong matrix, or should I be multiplying the model and view matrices together before inverting and transposing? My other problem is that when I multiply any of the matrices by the vertex coordinate the whole thing disappears. Here is my current fragment shader:

```glsl
#version 120
varying vec3 f_color;  // color from vertex shader
varying vec3 f_normal; // normal
uniform mat4 m, v;     // just here if necessary; I prefer to do my matrix multiplication on the CPU
uniform mat3 nm;       // normal matrix
varying vec3 f_coord;

void main(void) {
    float intensity;
    vec4 color;
    // nm = first 3x3 elems of model matrix .invert().transpose(),
    // using the matrices from org.lwjgl.util
    vec3 n = normalize(nm * f_normal);
    vec3 pos = normalize(vec3(0.4, 0.8, 0.4) - f_coord);
    intensity = max(dot(pos, n), 0.0f);
    color = vec4(f_color + intensity, 1.0);
    gl_FragColor = color;
}
```
theagentd
 « Reply #8 - Posted 2012-04-26 01:19:17 »

The normal rotation should be done in the vertex shader. Your fragment shader should just normalize the interpolated normal. Also make sure your uniforms and attributes are working correctly.

Myomyomyo.
obsidian_golem

Senior Newbie

 « Reply #9 - Posted 2012-04-26 03:35:28 »

Oops... I was using glUniformMatrix4 instead of glUniformMatrix3. Thanks for the help.
obsidian_golem

Senior Newbie

 « Reply #10 - Posted 2012-04-27 01:36:43 »

Now it seems to work, but when I translate the model to the left the lighting gets brighter, and when I translate it to the right the lighting disappears. Here is my frag shader:

```glsl
#version 120
varying vec3 f_color;
varying vec3 f_normal;
uniform mat4 m, v;
uniform mat3 nm;
varying vec3 f_coord;

void main(void) {
    float diffuse;
    vec4 color;
    vec3 n = normalize(f_normal);
    vec3 udir = vec3(0.4, 0.8, 0.4) - (m * vec4(f_coord, 1.0)).xyz;
    float dis = length(udir);
    vec3 direction = normalize(udir / dis);
    diffuse = max(dot(direction, n), 0.0f);
    color = vec4(f_color + diffuse, 1.0);
    gl_FragColor = color;
}
```
theagentd
 « Reply #11 - Posted 2012-04-27 06:46:59 »

Your variable names are a bit confusing, but I think the problem is that you don't have a light position uniform. The light position has to be in EYE space, not world space, since that's where you do lighting.

Myomyomyo.
obsidian_golem

Senior Newbie

 « Reply #12 - Posted 2012-04-27 21:54:42 »

I am using a static variable for the light position since I'm just trying to learn the theory. I have cleaned up my variable names a bit and multiplied the light position by the view matrix. I am still having the problem with the left/right light intensity, and now the light also gets more intense the farther back I move from it. Here is my current shader:

```glsl
#version 120
varying vec3 f_color;
varying vec3 f_normal;
uniform mat4 m, v;
uniform mat3 nm; // normal matrix
varying vec3 f_coord;

void main(void) {
    float diffuse;
    vec4 color;
    vec3 lightpos = (v * vec4(0.4, 0.8, 0.4, 1.0)).xyz; // static light position for debugging
    vec3 n = normalize(f_normal);
    vec3 newpos = lightpos - (m * vec4(f_coord, 1.0)).xyz;
    float dis = length(newpos);
    vec3 direction = normalize(newpos / dis);
    diffuse = max(dot(direction, n), 0.0f);
    color = vec4(f_color + diffuse, 1.0);
    gl_FragColor = color;
}
```
pitbuller
 « Reply #13 - Posted 2012-04-27 22:25:09 »

A couple of notes:
For fragment lighting you don't need any matrix*vector operation at the fragment level.

Lighting can be calculated just as easily in any coordinate system that is linear. If you do it in view space you save one operation, because you get the direction-to-eye vector more conveniently, and you get a bit more precision. But world space works just as well; you just need the camera's world position as a uniform.

```glsl
float dis = length(newpos);
vec3 direction = normalize(newpos / dis);
```

This is not how you get a direction out of that vector; you are effectively dividing by the length twice. If you don't need dis, just do

```glsl
vec3 direction = normalize(newpos);
```

If you do need it, the fastest way is to use inversesqrt. (If you want to know why, just google 0x5f3759df.)

```glsl
float invLen = inversesqrt(dot(newpos, newpos));
vec3 direction = newpos * invLen;
```

Then you can use that invLen to calculate the falloff.
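For the curious, the 0x5f3759df bit trick behind fast inverse square root looks like this in Java. Purely illustrative: GLSL's inversesqrt is a built-in that the hardware implements directly, so you would never ship this in a shader.

```java
/** Classic fast inverse square root (the 0x5f3759df trick), for illustration. */
public class FastInvSqrt {
    static float invSqrt(float x) {
        int i = Float.floatToIntBits(x);
        i = 0x5f3759df - (i >> 1);            // magic constant gives a good first guess
        float y = Float.intBitsToFloat(i);
        return y * (1.5f - 0.5f * x * y * y); // one Newton-Raphson step (~0.2% max error)
    }
}
```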

But to learn how this works you should reduce everything to the minimum, understand all of it, and then start adding stuff.

This is all you need at the fragment shader level to achieve what you have now; everything else belongs in the vertex shader or on the CPU.

```glsl
gl_FragColor.rgb = lightCol * max(dot(normalize(L), normalize(N)), 0.0);
```

obsidian_golem

Senior Newbie

 « Reply #14 - Posted 2012-04-28 01:27:14 »

Could you show me a version of inversesqrt that works in GLSL 120? According to the reference, inversesqrt is not available in GLSL 120, and I am not using OpenGL 3, so I cannot use GLSL 130. Also, would this fix the problem with the left movement?

One more thing: I just noticed that the light does not seem to shift position on the model; it always seems to light the same portion of it. If I translate the model to the right, the left side should get lit, but instead the light appears to simply fade away to nothing. When I translate to the left, the light covers more and more of the model, until all but the very left of it is white. Is this because I missed a step in making the light a point light? If not, what could cause something like this to happen?
theagentd
 « Reply #15 - Posted 2012-04-28 04:05:29 »

+1 to what Pitbuller said. For god's sake... I have a shader for that 2.5 meters away from me... on my external hard drive... and no computer to use it with... X_X

Myomyomyo.
pitbuller
 « Reply #16 - Posted 2012-04-28 09:39:41 »


Just forget everything you have and start with an empty shader. InvSqrt is something you can safely ignore for now.

The easiest lighting model is a directional light at the vertex level, so let's start from that.

Let's just ignore the exact syntax; I only use GLES 2.0 GLSL.

```glsl
varying float v_intensity;
const vec3 directionToLight = vec3(0, 1, 0);

void main() {
    v_intensity = max(dot(a_normal, directionToLight), 0.0);
    gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
}
```

```glsl
varying float v_intensity;

void main() {
    gl_FragColor.rgb = vec3(v_intensity);
}
```

Yeah, the light works, but it's damn ugly. The problem here is that the normals don't follow the model, and there is no specular, but it's so simple that it's understandable.

The next step would be to do it properly.

```glsl
varying vec3 v_normal;

void main() {
    // u_normal is a 3x3 matrix: either the inverse-transpose of the model matrix,
    // or just its upper 3x3 corner if there is no non-uniform scaling
    v_normal = u_normal * a_normal;
    gl_Position = u_modelViewProjection * vec4(a_position, 1.0);
}
```

```glsl
const vec3 directionToLight = vec3(0, 1, 0);
varying vec3 v_normal;

void main() {
    vec3 normalizedNormal = normalize(v_normal);
    float intensity = max(dot(normalizedNormal, directionToLight), 0.0);
    gl_FragColor.rgb = vec3(intensity);
}
```

So you can clearly see the pattern: add simple things on top of simple things. More reading: http://www.arcsynthesis.org/gltut/Illumination/Illumination.html
If you get stuck I can give more examples, but I won't give the final answer, because that won't help in the long run.