# LWJGL Tutorial Series - Lighting

Welcome to the fifteenth part of the LWJGL Tutorial Series. In this tutorial, I'm going to show you how to implement lighting with GLSL. We are going to implement a point light, a light that emits light in all directions. I'm only going to explain the important concepts, since the underlying physics is covered in high school. If you find any mistakes, please point them out in the comments.

# Vertex Normals

Though we've been using normals for the past few tutorials, I haven't explained them in detail. Now is the time, since they are the key to lighting. Before getting into lighting, let's first learn what normals actually are and how they relate to lighting. Normals exist for faces: each face has a normal, a vector perpendicular to the face, as in this diagram.

You should note that a normal is a direction vector only. The above image shows a normal **N** which is perpendicular to the triangle formed by the vertices **V**_{1}, **V**_{2} and **V**_{3}. So it is also said that **the normal of V**_{1} **with respect to the face is N**. Note that a vertex can have several normals, one for each face it belongs to. Don't worry though, since we are using Blender to generate the normals for us and we simply read them with our model loader. All we need now is a way to do the lighting itself, and for that we are going to use the Phong Reflection Model.

# Phong Reflection Model

The Phong Reflection Model is an algorithm we can use to do **Per-Fragment Lighting**, which means we do most of the lighting work in the fragment shader. Phong shading improves upon Gouraud shading and provides a better approximation of the shading of a smooth surface. Here's an image from Wikipedia showing the difference between flat shading and Phong shading.

Now let's get into the different components of the Phong Reflection Model. This model has three components, namely the **Ambient, Diffuse** and **Specular** components. There's another image on Wikipedia which explains what they actually are.

Here, **ambient** refers to the color of a surface when no light falls on it directly. To be more precise, it's the light that is present all over the scene and whose direction is impossible to determine. **Diffuse** light comes from a specific direction (from the position of the light towards the fragment) and scatters equally in all directions. And finally, **specular** light comes from a specific direction and bounces off the surface of the face towards the viewer. Combining all three gives the final Phong reflection. Now let's learn how each component works by implementing them.

# Diffuse Component

To understand how the diffuse component actually works, examine the following image.

It shows how a light ray reflects when incident on a surface. The incident ray hits the surface and bounces off as the reflected ray, and the normal bisects the angle between them. Now assume that the incident ray started from the light and fell on the vertex. If `lightPos` is the position of the light, `vNormal` is the vertex normal and `vPosition` is the vertex position, we can find the direction towards the light, called `surfaceToLight`, by using this piece of code.

```glsl
vec3 surfaceToLight = normalize(lightPos - vPosition);
```

The expression `lightPos - vPosition` gives a vector that lies along the incident ray but points in the opposite direction, from the surface towards the light. We then normalize it because we only need the direction of the light, not the distance to it. Next, we need to calculate the diffuse coefficient. If you don't know what it is, here's a simple experiment. Hold a card or a book vertically and point a torch at it. If the torch is perpendicular to it, more light is reflected back to your eye, making the book appear brighter. Now rotate the book and you will see the brightness decrease. This is the diffuse coefficient at work. I won't go into much detail here; these are the formulas. The angle **θ** between the normal **N** and the light direction **L** satisfies

**cos(θ) = (N · L) / (|N| |L|)**

Since both vectors are normalized, there is no need to divide the dot product by the product of their magnitudes, so we can drop it from the above equation. This gives the following equation.

**cos(θ) = N · L**

So the diffuse coefficient is the **cos(θ)** in the above equation. The range of the cosine function is **[-1, 1]**, but a negative diffuse coefficient would make the dark areas even darker, which we do not want. So we use the `max` function to keep only the positive values. This can be achieved in GLSL with the following.

```glsl
float diffuseCoefficient = max(0.0, dot(vNormal, surfaceToLight));
```

And finally, we calculate the diffuse color by multiplying the vertex color with the diffuse coefficient and the intensity of light.

```glsl
vec3 diffuse = diffuseCoefficient * vColor.rgb * lightIntensity;
```

That is all for the diffuse light. We still have to look at the ambient and specular components. Don't fall asleep yet, we have more to learn.
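If you want to play with the diffuse math outside of a shader, here's a small CPU-side sketch in Java. It is not part of the tutorial's code (the class and method names are made up for illustration), but the calculation mirrors the GLSL above: normalize the surface-to-light vector, then clamp the dot product with the normal to zero.

```java
// Hypothetical CPU-side sketch of the diffuse calculation the shader performs.
// Vectors are plain double arrays; the names mirror the GLSL code above.
public class DiffuseDemo {

    static double[] normalize(double[] v) {
        double len = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
        return new double[] { v[0] / len, v[1] / len, v[2] / len };
    }

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // diffuseCoefficient = max(0, N . L), just like the GLSL max(0.0, dot(...))
    static double diffuseCoefficient(double[] normal, double[] lightPos, double[] position) {
        double[] surfaceToLight = normalize(new double[] {
            lightPos[0] - position[0],
            lightPos[1] - position[1],
            lightPos[2] - position[2]
        });
        return Math.max(0.0, dot(normalize(normal), surfaceToLight));
    }

    public static void main(String[] args) {
        double[] n = { 0, 1, 0 };   // surface facing straight up
        // Light directly above the surface: full brightness
        System.out.println(diffuseCoefficient(n, new double[] { 0, 5, 0 }, new double[] { 0, 0, 0 }));
        // Light below the surface: clamped to 0, never negative
        System.out.println(diffuseCoefficient(n, new double[] { 0, -5, 0 }, new double[] { 0, 0, 0 }));
    }
}
```

Rotating the book in the torch experiment corresponds exactly to the dot product shrinking as the angle between the normal and the light direction grows.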

# Ambient Component

Now we are going to see the second component of the Phong Reflection Model, the ambient component. This component is basically the color of the face when there is no light falling on it directly. We use it because real scenes never look completely black even where no light reaches. We calculate the ambient color by taking a percentage of the intensity of the light. We keep this percentage in a variable called `ambientCoefficient` and calculate the ambient color with a formula similar to the one used for the diffuse color. It is achieved with this code.

```glsl
vec3 ambient = ambientCoefficient * vColor.rgb * lightIntensity;
```

The field `ambientCoefficient` is stored as a constant and we define it to be 0.05, meaning that 5 percent of the light remains in places the light doesn't reach. That's all for the ambient component. The only component left is the specular component, which we are going to learn now.

# Specular Component

Now comes the specular component. This is the component that makes surfaces look shiny. Before going through it, let's look again at the image I showed earlier.

There are two things I didn't explain before when I showed the image: the angles. For a perfectly smooth surface, the angle of incidence is equal to the angle of reflection. If the surface is irregular, however, light can reflect in any direction depending on the surface. This is the key difference between the diffuse and specular components: the diffuse component models rough, matte surfaces, while the specular component models smooth, shiny ones. Now that we are ready to calculate the specular component, let's start with some equations.

These are fairly simple. The reflection vector is calculated with the `reflect` function of GLSL. Here **cos(θ)** is the cosine of the angle between the reflected ray and the direction from the surface to the camera, and the specular coefficient is obtained by raising it to the power of the shininess. Like this.

```glsl
float specularCoefficient = 0.0;
if (diffuseCoefficient > 0.0)
{
    // vPosition is in eye space, so the camera sits at the origin
    vec3 surfaceToCamera = normalize(-vPosition);
    specularCoefficient = pow(max(0.0, dot(surfaceToCamera, reflect(-surfaceToLight, vNormal))), shininess);
}
vec3 specular = specularCoefficient * vec3(1.0, 1.0, 1.0) * lightIntensity;
```

We use the if clause so the specular coefficient is only calculated for fragments that actually receive light: if the diffuse coefficient is zero, the surface faces away from the light, so it cannot produce a highlight. This wraps up the specular component; the next step is putting them all together. But before we do that, we need to learn about normal matrices.
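The `reflect` function and the specular coefficient can also be reproduced on the CPU. The following Java sketch is not part of the tutorial's code (the names are invented for illustration) and assumes the standard Phong formulation, where the reflected ray is compared against the direction from the surface to the camera.

```java
// Hypothetical CPU-side sketch of the specular term.
// reflect mirrors GLSL's reflect(I, N) = I - 2 * dot(N, I) * N.
public class SpecularDemo {

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    static double[] reflect(double[] i, double[] n) {
        double d = 2.0 * dot(n, i);
        return new double[] { i[0] - d * n[0], i[1] - d * n[1], i[2] - d * n[2] };
    }

    // specularCoefficient = max(0, R . E)^shininess, where R is the reflected
    // ray and E is the direction from the surface to the camera (all unit vectors)
    static double specularCoefficient(double[] surfaceToLight, double[] surfaceToCamera,
                                      double[] normal, double shininess) {
        double[] incident = { -surfaceToLight[0], -surfaceToLight[1], -surfaceToLight[2] };
        double[] r = reflect(incident, normal);
        return Math.pow(Math.max(0.0, dot(surfaceToCamera, r)), shininess);
    }

    public static void main(String[] args) {
        double[] n = { 0, 1, 0 };
        double[] l = { 0, 1, 0 };   // light straight above the surface
        double[] eye = { 0, 1, 0 }; // camera straight above: mirror reflection hits the eye
        System.out.println(specularCoefficient(l, eye, n, 128.0));
        // Camera off to the side: the reflected ray misses the eye entirely
        System.out.println(specularCoefficient(l, new double[] { 1, 0, 0 }, n, 128.0));
    }
}
```

The high shininess exponent (128 here, as in the shader) is what shrinks the highlight to a small, sharp spot.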

# Normal Matrices

Normals are supplied in model space, alongside the vertices, and the vertices are transformed with the view matrix. We might try transforming the normals with that same matrix, but that doesn't keep them as unit vectors, because the view matrix can contain scaling and translation. So we need a separate matrix, called the normal matrix, with the translation removed and the scaling compensated for. We therefore add a new method to the `Camera` class called `getNormalMatrix` which looks like this.

```java
public Matrix4f getNormalMatrix()
{
    Matrix4f mat = getViewMatrix();

    // Clear the translation part of the matrix
    mat.m30 = 0;
    mat.m31 = 0;
    mat.m32 = 0;
    mat.m33 = 1;

    // Inverting and transposing compensates for any scaling
    Matrix4f.invert(mat, mat);
    Matrix4f.transpose(mat, mat);

    return mat;
}
```

What it does is very simple. It first replaces the translation column of the matrix with that of an identity matrix, which removes the translation. Then inverting and transposing the matrix compensates for any scaling, so the result can safely be used to transform the normals. All you have to do now is upload it to the correct uniform, which I explained in the previous tutorial, so I'm not going through that code again here. We are now ready for the final part, putting it all together.
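To see why the inverse-transpose is needed at all, here's a small Java sketch using plain 3x3 arrays (no LWJGL types; all names are made up for illustration). Under a non-uniform scale, a normal transformed by the model matrix itself stops being perpendicular to the surface, while one transformed by the inverse-transpose stays perpendicular.

```java
// Demonstrates why the normal matrix is the inverse-transpose of the
// model/view matrix rather than the matrix itself.
public class NormalMatrixDemo {

    static double dot(double[] a, double[] b) {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Multiply a 3x3 matrix by a column vector
    static double[] mul(double[][] m, double[] v) {
        return new double[] {
            m[0][0] * v[0] + m[0][1] * v[1] + m[0][2] * v[2],
            m[1][0] * v[0] + m[1][1] * v[1] + m[1][2] * v[2],
            m[2][0] * v[0] + m[2][1] * v[1] + m[2][2] * v[2]
        };
    }

    public static void main(String[] args) {
        // Non-uniform scale: x stretched by 4
        double[][] scale = { { 4, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };
        // Its inverse-transpose (for a diagonal matrix, just invert each entry)
        double[][] invTranspose = { { 0.25, 0, 0 }, { 0, 1, 0 }, { 0, 0, 1 } };

        double[] tangent = { 1, 1, 0 };  // a direction lying in the surface
        double[] normal  = { -1, 1, 0 }; // perpendicular to that tangent

        double[] t = mul(scale, tangent); // the surface direction after scaling

        // Wrong: transforming the normal with the scale matrix breaks perpendicularity
        System.out.println(dot(mul(scale, normal), t));
        // Right: the inverse-transpose keeps the normal perpendicular (dot = 0)
        System.out.println(dot(mul(invTranspose, normal), t));
    }
}
```

Translation is a separate issue: normals are directions, not positions, which is why `getNormalMatrix` clears the translation column before inverting and transposing.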

# Putting it all together

Now it's time to write the actual shaders that do these calculations. Since I've explained all of the shader code earlier, I'm just listing it here. First, let's see the vertex shader.

```glsl
varying vec4 vColor;
varying vec3 vPosition;
varying vec3 vNormal;

uniform mat4 mView;
uniform mat4 mProjection;
uniform mat4 mNormal;

void main()
{
    vColor = gl_Color;
    vPosition = (mView * gl_Vertex).xyz;
    // w = 0.0 because a normal is a direction, not a position
    vNormal = normalize((mNormal * vec4(gl_Normal, 0.0)).xyz);
    gl_Position = mProjection * mView * gl_Vertex;
}
```

This is the vertex shader. It passes some data on to the fragment shader and transforms the vertex. Now let's see the source code of the fragment shader. It should be easy to follow since I've commented each step.

```glsl
varying vec4 vColor;
varying vec3 vPosition;
varying vec3 vNormal;

uniform vec3 lightPos;

const vec3 lightColor = vec3(1.0, 1.0, 1.0);
const float lightIntensity = 2.0;
const float ambientCoefficient = 0.05;
const float shininess = 128.0;

void main()
{
    // Direction from this fragment towards the light
    vec3 surfaceToLight = normalize(lightPos - vPosition);

    // Ambient: a small fraction of light present everywhere
    vec3 ambient = ambientCoefficient * vColor.rgb * lightIntensity;

    // Diffuse: brightest when the surface faces the light
    float diffuseCoefficient = max(0.0, dot(vNormal, surfaceToLight));
    vec3 diffuse = diffuseCoefficient * vColor.rgb * lightIntensity;

    // Specular: only for fragments that receive light
    float specularCoefficient = 0.0;
    if (diffuseCoefficient > 0.0)
    {
        // vPosition is in eye space, so the camera sits at the origin
        vec3 surfaceToCamera = normalize(-vPosition);
        specularCoefficient = pow(max(0.0, dot(surfaceToCamera, reflect(-surfaceToLight, vNormal))), shininess);
    }
    vec3 specular = specularCoefficient * vec3(1.0, 1.0, 1.0) * lightIntensity;

    // Combine the three components into the final color
    gl_FragColor = vec4(ambient + diffuse * lightColor + specular, vColor.a);
}
```

This completes the fragment shader, which does all of the hard work. You can see that I've defined the light values as constants, but you can use uniforms too. And now when we run it, we can see the model lit by our point light.

That's the end of this tutorial. In the next tutorial, we'll see how to load textured models. If you find any mistakes, please let me know in the comments.

# Source Code

- Tutorial15.java
- shader.vert
- shader.frag
- Camera.java