The OpenGL Rendering Pipeline
Welcome to another tutorial in the LWJGL Tutorial Series. In the previous tutorial, we set up a basic class which acts as a framework for our tutorials, and we've been looking forward to rendering something onto the screen. So, in this tutorial, I'm going to explain what the OpenGL rendering pipeline is and how OpenGL renders to the screen.
If you find anything wrong, or have suggestions for improvement, please notify me via the comments. Now let's start by seeing what this pipeline is.
Working of the Rendering Pipeline
To draw any shape or object, we first construct its vertices and send them to OpenGL. The process in which OpenGL converts the raw vertices into a raster image and sends it to the display is called the rendering pipeline. You can get an idea of the graphics pipeline by looking at the following image.
The above image shows only the stages required for a basic understanding of the pipeline. There is also a geometry shader, which sits between the vertex shader and rasterisation but is not required. In older versions of OpenGL, the pipeline was fixed, meaning you could only use what was provided but not modify it; modern OpenGL has a programmable pipeline, meaning that parts of the pipeline can (and should) be programmed by the developer to suit their needs. We will program the pipeline using shaders, which are programs that run on the graphics card.
It all starts with the raw vertices of the shape. We usually create the vertices as floats and send them to OpenGL, where they are passed to the vertex shader, which transforms each vertex into a new one. These transformations may include rotation, translation, scaling and various projections. The transformed vertices then go through primitive assembly, a step which connects them into primitive shapes. These shapes are then fed into the rasteriser, which converts the 3D shapes into 2D by projecting them onto a 2D plane. Finally, the fragment shader comes into play, coloring each and every fragment of the generated raster and producing the output in a framebuffer, which is then copied to the display.
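To make the idea of raw vertex data and vertex transformations concrete, here is a small sketch in plain Java (not LWJGL code, and not part of this tutorial's source): it stores a triangle's positions as floats, the same way we will later hand them to OpenGL, and applies the kind of transformation a vertex shader might perform, a uniform scale followed by a translation. The class name and the scale/translation values are made up for illustration.

```java
// Illustrative sketch: a triangle's vertices as raw floats, and the kind of
// transformation (scale, then translate) a vertex shader might apply to them.
public class VertexTransformSketch {
    public static void main(String[] args) {
        // Raw vertex positions (x, y) for a triangle, stored as floats
        float[] vertices = {
             0.0f,  0.5f,   // top
            -0.5f, -0.5f,   // bottom-left
             0.5f, -0.5f    // bottom-right
        };

        float scale = 2.0f;          // hypothetical scaling factor
        float tx = 0.1f, ty = -0.2f; // hypothetical translation

        // Transform every vertex: scale first, then translate
        for (int i = 0; i < vertices.length; i += 2) {
            vertices[i]     = vertices[i]     * scale + tx;
            vertices[i + 1] = vertices[i + 1] * scale + ty;
        }

        // Print the transformed positions
        for (int i = 0; i < vertices.length; i += 2) {
            System.out.println(vertices[i] + ", " + vertices[i + 1]);
        }
    }
}
```

In the real pipeline this per-vertex work runs on the GPU inside the vertex shader, usually expressed as a matrix multiplication rather than explicit arithmetic; the loop here only mimics it on the CPU.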
Between the vertex shader and the rasteriser, there exists another type of shader called the geometry shader, which is optional and capable of generating new geometry on the GPU from the vertices transformed by the vertex shader. Since geometry shaders are not required and are a somewhat advanced topic, I'm omitting them for now, both here and in the diagram.
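The projection step mentioned earlier, where the rasteriser maps 3D shapes onto a 2D plane, can also be illustrated with a short sketch. This is a simplified pinhole projection (just dividing x and y by the depth z); the real pipeline uses a projection matrix followed by a similar perspective divide in clip space, so treat the numbers and the `project` helper as illustrative only.

```java
// Illustrative sketch of projecting 3D points onto a 2D plane, as the
// pipeline does before rasterisation. Simplified pinhole model: divide
// x and y by the depth z to land on the plane z = 1.
public class ProjectionSketch {
    // Hypothetical helper: project a 3D point (x, y, z) to 2D
    static float[] project(float x, float y, float z) {
        return new float[] { x / z, y / z };
    }

    public static void main(String[] args) {
        // Two points at the same (x, y) but at different depths:
        float[] near = project(1.0f, 1.0f, 2.0f); // closer to the viewer
        float[] far  = project(1.0f, 1.0f, 4.0f); // farther away

        // The farther point lands closer to the centre of the image,
        // which is the familiar perspective-foreshortening effect.
        System.out.println(near[0] + ", " + near[1]); // 0.5, 0.5
        System.out.println(far[0] + ", " + far[1]);   // 0.25, 0.25
    }
}
```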
We'll discuss shaders in the next tutorial, where we'll learn in detail about the vertex and fragment shaders as well as GLSL (the OpenGL Shading Language).
There is no source code for this tutorial, since it covers only the theory necessary for understanding modern OpenGL.