Java-Gaming.org    
  OpenGL Questions  (Read 4750 times)
Offline Troubleshoots

JGO Coder


Medals: 35
Exp: 7-9 months


Damn maths.


« Posted 2013-11-28 18:37:43 »

Scroll below to see the newest Q&As. Some of the earlier questions regard later OpenGL versions and vice versa, so if you're looking at this topic to see an answer, make sure you look around. It's a little muddled up. Smiley

Old question
I wasn't really sure where to start with OpenGL so I had a look at SHC's tutorials and started there. Hopefully I'm wording this correctly (a lot of my confusion is with the wording of things).
Let me start off with what I think I know:

  • Everything is done in a line in the order you call functions
  • The modelview matrix is used to manipulate objects in the world
  • The projection matrix adjusts the view of the camera
  • You use the glMatrixMode() function to select the matrix you want to manipulate
  • The glLoadIdentity() function resets the matrix to its original state
  • You need to use glOrtho() to set up an orthographic projection to view the modelview matrix
  • You manipulate a matrix between the glPushMatrix() and glPopMatrix() functions
  • The glTranslatef() function moves the modelview matrix position
  • You draw vertices between the glBegin() and glEnd() function. The parameter of glBegin() is the shape you want to draw.
  • The glViewport() function resizes the orthographic camera

I've probably got half that wrong and I'd really appreciate if someone could tell me exactly what I've got wrong and in the simplest possible way you can think of, the correct definitions.

Also I have a question:

Why are modelview and projection both matrices? I see a lot of references to the modelview matrix, but I don't understand what makes it a matrix. I've also seen references to the matrix stacks. I understand what a stack is, but how are these processed?

Thanks in advance to anyone who helps me. This is confusing.  Huh

Why are all OpenGL tutorials written in Brainf**k?
Offline theagentd
« Reply #1 - Posted 2013-11-28 19:14:12 »

The modelview matrix is actually two matrices in one. The model matrix (AKA object matrix) is used to position a model. This is very useful when you have a 3D model with vertices in local space, since it allows you to position, scale and rotate the model to its appropriate place in world space. The view matrix does something completely different: it holds the inverse of the position and orientation of the camera. The projection matrix can be seen as the lens of the camera: it defines things like the field-of-view angle, aspect ratio and near/far planes when using 3D perspective projection, or the bounds of the orthographic projection when using orthographic projection. Finally, the viewport defines what part of the window the projected coordinates should map to.

Crash course on matrices: Multiplying a matrix with a vertex takes it from one space to another. If we want to reverse this process, we can take the inverse of a matrix. This new matrix does the same transformation backwards so we get back the original vertex.
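To make that concrete, here's a plain-Java sketch (no OpenGL involved; the class and method names are made up for illustration): a homogeneous 2D translation matrix takes a point into another space, and its inverse takes it back.

```java
// Plain-Java sketch: a 3x3 homogeneous translation matrix moves a 2D point
// from one space to another; its inverse moves it back.
public class MatrixDemo {
    // Build a 3x3 translation matrix in row-major order.
    static float[][] translation(float tx, float ty) {
        return new float[][]{{1, 0, tx}, {0, 1, ty}, {0, 0, 1}};
    }

    // Multiply matrix m by the homogeneous point (x, y, 1).
    static float[] transform(float[][] m, float x, float y) {
        return new float[]{
            m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2]
        };
    }

    public static void main(String[] args) {
        float[][] toWorld = translation(50, 50);   // local -> world
        float[][] inverse = translation(-50, -50); // world -> local (the inverse)
        float[] world = transform(toWorld, 1, 2);  // (51, 52)
        float[] local = transform(inverse, world[0], world[1]); // back to (1, 2)
    }
}
```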


1. Local space --model matrix--> World space. This is pretty easy to understand. If we call glTranslatef() on this matrix we'll move the object around in world space.

2. World space --view matrix--> Eye space. This is a bit more complicated. Let's say we have a camera position (x, y). If we do the same thing we did to the model matrix by calling glTranslatef(x, y, 0), we're actually getting a matrix that takes things in eye space and transforms them into world space. We want to do the opposite, AKA the inverse of it. The simplest fix is therefore to simply do it backwards manually by calling glTranslatef(-x, -y, 0). That's a proper view matrix.

3. Eye space --projection matrix--> Normalized device coordinates. The projection matrix takes in coordinates relative to the eye (camera) and maps all three dimensions to [-1, 1], so if the coordinates are (0, 0, 0) after transformation, they're at the center.

4. Normalized device coordinates --viewport--> screen coordinates. Finally these [-1, 1] coordinates are mapped to actual pixels using the viewport settings. If you call glViewport(100, 100, 200, 200) (that's x, y, width, height) and end up with the normalized device coordinates (0, 0, 0), they'll be mapped to (200, 200) in the actual window: the center of a 200x200 viewport starting at (100, 100).
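Step 4 can be sketched in plain Java (method names invented for illustration; the formula is the standard viewport transform):

```java
// Plain-Java sketch of the viewport transform: NDC x/y in [-1, 1] are mapped
// linearly onto the rectangle passed to glViewport(x, y, width, height).
public class ViewportDemo {
    static float toWindowX(float ndcX, int x, int width) {
        return x + (ndcX + 1f) * 0.5f * width;
    }

    static float toWindowY(float ndcY, int y, int height) {
        return y + (ndcY + 1f) * 0.5f * height;
    }

    public static void main(String[] args) {
        // glViewport(100, 100, 200, 200): NDC (0, 0) lands at (200, 200),
        // the center of the viewport; NDC (-1, -1) lands at (100, 100).
        System.out.println(toWindowX(0f, 100, 200) + ", " + toWindowY(0f, 100, 200));
    }
}
```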


Why 2 matrices when we actually need 3? This has to do with 3D. World space coordinates aren't really needed for anything special, but lighting is usually done in eye space. By premultiplying the model and view matrices together, we get a single matrix that does the same as both the original matrices, so it's basically a shortcut to eye space. We can't skip eye space though since we need to do lighting there, so in 3D we can't multiply in the projection matrix too. For 2D however, this point is moot. There's no actual need for 2 separate matrices, but since you're stuck with two matrices you might as well use them as they're supposed to be used, if only because it's a good habit once you start with basic 3D rendering.

glPushMatrix() stores the current matrix on the matrix stack. Think of the stack as a pile of matrices. Push stores the matrix on top of the pile and pop takes the matrix off the top again. This is also called Last-In-First-Out (LIFO) order. This is very useful when working with the modelview matrix:

glLoadIdentity(); //Reset matrix

glTranslatef(-cameraX, -cameraY, 0); //Set up the view matrix part


glPushMatrix(); //Store the current matrix
glTranslatef(objectX, objectY, 0); //Position object (model matrix part)
glBegin(...);
//Render object...
glEnd();
glPopMatrix(); //Restores the pushed matrix. Basically undoes the glTranslatef(objectX, objectY, 0) call.

glPushMatrix(); //Store the current matrix
//Render another object
glPopMatrix(); //Restores the pushed matrix.
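The push/pop semantics can be modeled in plain Java (a rough sketch, not real driver code; the translate() stand-in only handles pure translations, no rotation or scale):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Plain-Java sketch of what glPushMatrix()/glPopMatrix() do: push copies the
// current matrix onto a stack, pop overwrites the current matrix with the
// most recently pushed copy (last in, first out).
public class MatrixStackDemo {
    float[] current = new float[16];      // stand-in for the modelview matrix
    final Deque<float[]> stack = new ArrayDeque<>();

    void pushMatrix() {
        stack.push(current.clone());      // save a copy of the current matrix
    }

    void popMatrix() {
        current = stack.pop();            // restore the last saved copy
    }

    // Crude stand-in for glTranslatef(): only valid for pure translations.
    void translate(float x, float y) {
        current[12] += x;                 // x translation (column-major layout)
        current[13] += y;                 // y translation
    }
}
```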

Myomyomyo.
Offline Troubleshoots
« Reply #2 - Posted 2013-11-28 21:13:07 »

Thank you. Your explanation is very helpful and I understand the parts about each matrix, but I don't understand this:
1. Local space --model matrix--> World space. This is pretty easy to understand. If we call glTranslatef() on this matrix we'll move the object around in world space.

2. World space --view matrix--> Eye space. This is a bit more complicated. Let's say we have a camera position (x, y). If we do the same thing we did to the model matrix by calling glTranslatef(x, y, 0), we're actually getting a matrix that takes things in eye space and transforms them into world space. We want to do the opposite, AKA the inverse of it. The simplest fix is therefore to simply do it backwards manually by calling glTranslatef(-x, -y, 0). That's a proper view matrix.

3. Eye space --projection matrix--> Normalized device coordinates. The projection matrix takes in coordinates relative to the eye (camera) and maps all three dimensions to [-1, 1], so if the coordinates are (0, 0, 0) after transformation, they're at the center.

4. Normalized device coordinates --viewport--> screen coordinates. Finally these [-1, 1] coordinates are mapped to actual pixels using the viewport settings. If you call glViewport(100, 100, 200, 200) (that's x, y, width, height) and end up with the normalized device coordinates (0, 0, 0), they'll be mapped to (200, 200) in the actual window: the center of a 200x200 viewport starting at (100, 100).

I think the root of the problem is that I don't understand what local space, eye space or NDCs are.

Also, why in your code example do you push, pop and then re-push and re-pop the matrix? What does undoing the glTranslatef(...) call do?

Offline RobinB

JGO Knight


Medals: 37
Projects: 1
Exp: 3 years


Spacegame in progress


« Reply #3 - Posted 2013-11-28 22:15:54 »

This is why it's a matrix:

Offline theagentd
« Reply #4 - Posted 2013-11-28 22:39:31 »

Local space is the simplest, but also not very relevant for 2D. If you open up a 3D model file, you'll find vertex positions. These are relative to some origin (0, 0, 0) point chosen by the modeler. For a cube it'd most likely be the center of the cube, and for a human it's usually the point the human is standing on right between his feet. The same concept applies for a 2D sprite. Let's say you set up a sprite centered over (0, 0).

(-1, -2)        (1, -2)
       +--------+
       |        |
       |        |
       | (0, 0) |
       |        |
       |        |
       +--------+
(-1, 2)         (1, 2)

This sprite is made out of 4 vertices forming a quad, and its vertices are currently in local space. However, it should be obvious that we don't always want to draw this sprite centered at (0, 0). Let's say that this is a player sprite, so we want to move it to where the player currently is, which we'll say is at (50, 50). We'll therefore need to translate the model matrix to (50, 50) to move the object's coordinates so they're centered at (50, 50) instead of the sprite's original origin of (0, 0). For the sake of it, we also want to make the sprite twice as big, so we also scale it using glScalef(2, 2, 1). When we multiply the sprite's vertices by this matrix, we get the following sprite:
(48, 46)      (52, 46)
       +--------+
       |        |
       |        |
       |(50, 50)|
       |        |
       |        |
       +--------+
(48, 54)       (52, 54)

The sprite is now at its world position. Again, this can be in any unit you want as long as it makes sense to you. It could be pixels on the screen, millimeters in an ant strategy game, blocks in Tetris or light-years in a space game. This is a space defined by you, and it's the space all your game objects' coordinates are in.

Next we can also move around the camera in the world. Since the above sprite is the player sprite, let's say the camera is tracking a point 5 units to the left of the player, at (45, 50). As I wrote in my previous post, the view matrix is responsible for camera movement. Basically it's supposed to move vertices so they're relative to the camera instead of relative to the world's arbitrarily chosen (by you) origin. So we translate our view matrix using glTranslatef(-45, -50, 0) and transform our sprite with it:
(3, -4)          (7, -4)
       +--------+
       |        |
       |        |
       | (5, 0) |
       |        |
       |        |
       +--------+
(3, 4)           (7, 4)

As you can imagine eye space is very similar to world space, only that the objects are relative to the camera instead. In other words, in this space the camera is always at (0, 0). For 2D, this doesn't actually mean much and is usually not actually the case though. The reason lies in how most people use glOrtho(). glOrtho(0, 100, 100, 0, -1, 1) will map (0, 0) in eye space to (-1, 1) in normalized device coordinates, which is the top left corner. If you pass in (100, 100), you'll end up at (1, -1), which is the bottom right corner. These coordinates are then mapped to the screen using the viewport. What this glOrtho() call in practice does is map the area (0, 0) to (100, 100) of your eye space coordinates to the viewport. With that glOrtho() call and our viewport set to (0, 0, 100, 100), our sprite would end up at the top left corner of the screen, with the top half of it being outside of the screen.
   (3, -4)          (7, -4)
          +--------+
          |        |
(0, 0)    |        |   Screen edge
    +-----|-(5, 0)-|----------
    |     |        |
    |     |        |
    |     +--------+
    |  (3, 4)      (7, 4)


EDIT: It should be obvious that this system is supposed to work well with 3D rendering, and is vastly overcomplicated for 2D...
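The glOrtho() mapping described above can be sketched in plain Java for the 2D case (class and method names invented for illustration):

```java
// Plain-Java sketch of the 2D part of glOrtho(): eye-space (x, y) is mapped
// linearly into normalized device coordinates using the bounds passed as
// glOrtho(left, right, bottom, top, ...).
public class OrthoDemo {
    static float[] toNdc(float x, float y,
                         float left, float right, float bottom, float top) {
        return new float[]{
            2f * (x - left) / (right - left) - 1f,
            2f * (y - bottom) / (top - bottom) - 1f
        };
    }

    public static void main(String[] args) {
        // glOrtho(0, 100, 100, 0, -1, 1): bottom = 100, top = 0 flips the y axis.
        float[] a = toNdc(0, 0, 0, 100, 100, 0);     // -> (-1,  1), top left
        float[] b = toNdc(100, 100, 0, 100, 100, 0); // -> ( 1, -1), bottom right
    }
}
```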

Offline quew8

JGO Coder


Medals: 23



« Reply #5 - Posted 2013-11-29 00:16:13 »

@theagentd Whilst I was super impressed with your artwork (and it is art) I feel that an actual image might be more clear. Especially since there are a couple of excellent ones in the Red Book.





I know they're in a 3D context but I think they're just as relevant for 2D.
Offline theagentd
« Reply #6 - Posted 2013-11-29 01:55:26 »

Evidently I skipped a step: The perspective divide. It's done automatically so I didn't want to confuse you...

Offline Troubleshoots
« Reply #7 - Posted 2013-11-29 15:01:11 »

@theagentd Thank you so much. Your explanations are so brilliant. Smiley
You saved me from getting confused with all the jargon that you find on Google.

One more thing though. In your example you push then pop the matrix, then push and pop it again. Do you have to do this for every object that you want to draw in world space, or can you just do:
glPushMatrix();
glTranslatef(x, y, z);
glBegin(...);
//Render object...
glEnd();
glTranslatef(anotherX, anotherY, anotherZ);
glBegin(...);
//Render object...
glEnd();
glPopMatrix();


If so, why would you pop the matrix then push it again?

Offline theagentd
« Reply #8 - Posted 2013-11-29 18:07:56 »

@theagentd Thank you so much. Your explanations are so brilliant. Smiley
You saved me from getting confused with all the jargon that you find on Google.

One more thing though. In your example you push then pop the matrix, then push and pop it again. Do you have to do this for every object that you want to draw in world space, or can you just do:
glPushMatrix();
glTranslatef(x, y, z);
glBegin(...);
//Render object...
glEnd();
glTranslatef(anotherX, anotherY, anotherZ);
glBegin(...);
//Render object...
glEnd();
glPopMatrix();


If so, why would you pop the matrix then push it again?
Ah, sorry, forgot to answer that question in my writing frenzy. xD


The reason we have to do that is because the model and view matrices are combined into one matrix. If we had two separate matrices for this, it'd be much cleaner to do something like this each frame:
glMatrixMode(GL_PROJECTION);
glLoadIdentity(); //reset
glOrtho(...);

glMatrixMode(GL_VIEW); //THIS IS NOT A VALID LINE!!!
glLoadIdentity(); //reset
glTranslatef(-cameraX, -cameraY, 0);

glMatrixMode(GL_MODEL); //THIS IS NOT A VALID LINE!!!

for(int i = 0; i < objects.size(); i++){
    GameObject obj = objects.get(i);

    glLoadIdentity(); //reset model matrix
    glTranslatef(obj.getX(), obj.getY(), 0);
    glRotatef(obj.getAngle(), 0, 0, 1); //rotate around the z axis for 2D
    glScalef(obj.getScale(), obj.getScale(), 1);

    glBegin(...);
    ...
    glEnd();
}

Note: This code was written in this window and may have typos or minor errors. Focus on the big picture.

Basically we set up the projection and view matrices at the beginning of the frame, but the model matrix part needs to be reset for each object because all matrix-modifying commands stack up.
glTranslatef(5, 0, 0) + glTranslatef(5, 0, 0) = glTranslatef(10, 0, 0)

If we were to not reset the matrix in the object rendering loop, only our first object would render correctly while the rest would most likely end up far outside the screen somewhere.
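A toy plain-Java model of that stacking behavior (tracking only the x translation, just to show the arithmetic):

```java
// Toy model of how glTranslatef() calls accumulate on the current matrix and
// how glLoadIdentity() resets it. Only tracks the x offset for brevity.
public class StackingDemo {
    static float offsetX = 0;

    static void translate(float x) { offsetX += x; } // stacks up like glTranslatef()
    static void loadIdentity()     { offsetX = 0; }  // reset like glLoadIdentity()

    public static void main(String[] args) {
        translate(5);
        translate(5);   // offsetX is now 10: the second object would be misplaced
        loadIdentity(); // reset before the next object
        translate(5);   // offsetX is 5 again
    }
}
```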

However, the above code is as I said not valid since we actually only have two matrices. Since our view matrix doesn't change in the middle of a frame being rendered, we still want to set it up just once. We could solve this quite easily by setting up our view matrix in our modelview matrix, saving it, then modifying its "model matrix part" for our object, render that object and then reloading the saved matrix into OpenGL so we're back to our original view matrix again:

private FloatBuffer viewMatrix = BufferUtils.createFloatBuffer(16);

...

glMatrixMode(GL_PROJECTION);
glLoadIdentity(); //reset
glOrtho(...);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); //reset
glTranslatef(-cameraX, -cameraY, 0);

//glMatrixMode(GL_MODEL);
//There is no separate model matrix.
//Instead we'll retrieve the unmodified view matrix and save it in our FloatBuffer.
glGetFloatv(GL_MODELVIEW_MATRIX, viewMatrix);

for(int i = 0; i < objects.size(); i++){
    GameObject obj = objects.get(i);

    //glLoadIdentity(); //We can't reset it completely! That'd undo our camera translation!
    glTranslatef(obj.getX(), obj.getY(), 0);
    glRotatef(obj.getAngle(), 0, 0, 1); //rotate around the z axis for 2D
    glScalef(obj.getScale(), obj.getScale(), 1);

    glBegin(...);
    ...
    glEnd();

    //By now, the three object-specific transformations above aren't needed anymore,
    //and we need to get rid of them so we can render the next object. glLoadIdentity()
    //would also reset our view matrix, so let's just overwrite the matrix with our stored
    //unmodified view matrix instead!
    glLoadMatrixf(viewMatrix);
    //And voila! We've effectively reset our model matrix but left our view matrix untouched!
}


This works perfectly fine, and you could even say that this looks cleaner than using glPush/PopMatrix(). The exact same thing can be accomplished with glPush/PopMatrix() though and you wouldn't need a FloatBuffer variable to hold the view matrix since it does that for you.

glMatrixMode(GL_PROJECTION);
glLoadIdentity(); //reset
glOrtho(...);

glMatrixMode(GL_MODELVIEW);
glLoadIdentity(); //reset
glTranslatef(-cameraX, -cameraY, 0);

for(int i = 0; i < objects.size(); i++){
    GameObject obj = objects.get(i);

    //glLoadIdentity(); //We can't reset it completely! That'd undo our camera translation!
    glPushMatrix(); //Stores the current matrix on the stack, in our case the unmodified view matrix.
    glTranslatef(obj.getX(), obj.getY(), 0);
    glRotatef(obj.getAngle(), 0, 0, 1); //rotate around the z axis for 2D
    glScalef(obj.getScale(), obj.getScale(), 1);

    glBegin(...);
    ...
    glEnd();

    //Resetting time! Since we pushed the unmodified view matrix onto the matrix stack, we can pop it
    //off the stack again to get back the matrix we pushed onto it. This also removes it from the stack,
    //which is why we need to push before rendering each object. You can think of push as a kind of
    //matrixStack.add(getCurrentMatrix()) and pop as setCurrentMatrix(matrixStack.removeLast()).
    glPopMatrix();
    //And voila! We've restored the unmodified view matrix!
}


I hope that explains it.


Pushing and popping matrices is especially useful when you have hierarchical objects. Let's say you have a city object with a number of buildings in it. Each building has a position relative to the city it is in.
for(City city : cities){
    glPushMatrix(); //Save view matrix
    glTranslatef(city.getX(), city.getY(), 0);
    for(Building b : city.getBuildings()){
        glPushMatrix(); //Save modelview matrix of the city
        glTranslatef(b.getX(), b.getY(), 0); //Stacks up with the city's glTranslatef()
        glBegin(...);
        ... //Render building
        glEnd();
        glPopMatrix(); //Back to the city's matrix
    }
    glPopMatrix(); //Back to the view matrix
}


A very important thing to note though is that this can be extremely slow if you're approaching 1000 objects. It's worth noting that for this reason all the built-in matrix functions have been deprecated starting with OpenGL 3, but I still think that this is a good place to start if you're new to OpenGL. Learn how it works and then quickly try to move on to shaders. The missing matrix functionality can be replaced by a math library, like the one included in LWJGL. It's not the best one out there, but for 2D it should be more than enough.
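As a sketch of what such a replacement might look like for 2D, here is a hypothetical minimal transform class (handling only translation and uniform scale, no rotation), where composing a parent and child transform plays the role of the nested glPushMatrix()/glTranslatef() calls above:

```java
// Hedged sketch: replacing the deprecated matrix stack with your own math for
// 2D. A transform is (offset, uniform scale); compose() chains them the way
// nested glTranslatef()/glScalef() calls would.
public class Transform2D {
    final float x, y, scale;

    Transform2D(float x, float y, float scale) {
        this.x = x; this.y = y; this.scale = scale;
    }

    // Apply this (parent) transform to a child transform, like multiplying matrices.
    Transform2D compose(Transform2D child) {
        return new Transform2D(x + scale * child.x, y + scale * child.y,
                               scale * child.scale);
    }

    // Transform a local-space point into this transform's space.
    float[] apply(float px, float py) {
        return new float[]{x + scale * px, y + scale * py};
    }

    public static void main(String[] args) {
        Transform2D city = new Transform2D(100, 200, 1);
        Transform2D building = city.compose(new Transform2D(3, 4, 1));
        float[] p = building.apply(0, 0); // building's world position: (103, 204)
    }
}
```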

Offline Troubleshoots
« Reply #9 - Posted 2013-11-30 20:22:28 »

So you're saying that since the model and view matrices are combined, you have to save the modelview matrix after setting up the view matrix, and then, because translations stack up, you have to keep saving and restoring that matrix for every object, otherwise the first object's translation would still be applied when you translate for the second object, and so on?

Also I've read SHC's tutorial on textures:
  • Am I right in saying that glGenTextures() returns a unique id number for the texture?
  • Does binding a texture use that id number to select which texture to bind, and does binding a texture ensure that the currently bound texture is the only texture affected by OpenGL calls?
  • Why do you bind a texture every time you render? Is this a way of indicating which texture to render?

So essentially, what is 'binding' a texture?

Offline theagentd
« Reply #10 - Posted 2013-11-30 21:22:47 »

Concerning glGenTextures() you're right. It basically gives you a currently free texture handle and marks it as "in use" for future calls to glGenTextures().

Binding in OpenGL means that all subsequent commands will affect or use the bound object. Binding a texture both allows you to modify it with subsequent calls like glTexImage() and glTexParameter*(), and to apply it to your rendered geometry. This is a recurring concept in OpenGL and is also used for Vertex Buffer Objects (VBOs), Vertex Array Objects (VAOs), Framebuffer Objects (FBOs), etc.

It's important to understand how the target parameter works. The OpenGL specification has this to say:
Quote
When a texture is first bound, it assumes the specified target: A texture first bound to GL_TEXTURE_1D becomes a one-dimensional texture, a texture first bound to GL_TEXTURE_2D becomes a two-dimensional texture, a texture first bound to GL_TEXTURE_3D becomes a three-dimensional texture [...]
What this essentially means is that the first call to glBindTexture() also associates a texture handle you've gotten from glGenTextures() with the specified target. The spec continues:
Quote
While a texture is bound, GL operations on the target to which it is bound affect the bound texture, and queries of the target to which it is bound return state from the bound texture.
Note how the targets must match between your texture related commands! In essence, you can have both a 1D texture and a 2D texture bound at the same time since they're bound to different targets, and direct OpenGL commands to either of them using GL_TEXTURE_1D and GL_TEXTURE_2D as targets to your commands.

Offline Troubleshoots
« Reply #11 - Posted 2013-12-05 23:32:53 »

So I'm learning how to use shaders. I have a few questions:
  • What's the difference between creating a program and creating a shader?
  • Why do you have to load the shader program from a string? Why aren't they loaded from buffers like vbo's?
  • What does attaching a shader to a program do?
  • Once you've attached a shader, why do you have to link it? What is linking a shader?

Thanks. Smiley

Offline davedes
« Reply #12 - Posted 2013-12-05 23:46:37 »

A shader 'object' defines a vertex shader, a fragment shader, or even just a single function. A 'program' links together all of its attached 'objects' so you can use it. The idea was to decouple everything so that you can re-use functions / shaders across multiple programs. All good in theory but not all drivers implement that correctly and usually it isn't worth the trouble trying to share shader objects.  In ES it's not supported afaik.

LWJGL uses strings for convenience. Pretty sure you can use the direct buffer method too.

Offline Troubleshoots
« Reply #13 - Posted 2013-12-06 00:00:09 »

Thanks but I'm still unclear on what the glLinkProgram function does. The glAttachShader function attaches the shaders to the program so what does linking the program do? Also this may be a stupid question but when you say shader object do you mean shader? Are they the same thing?

Offline theagentd
« Reply #14 - Posted 2013-12-06 02:59:52 »

Thanks but I'm still unclear on what the glLinkProgram function does. The glAttachShader function attaches the shaders to the program so what does linking the program do? Also this may be a stupid question but when you say shader object do you mean shader? Are they the same thing?
Although your vertex and fragment shaders may both have successfully compiled, they still have to be linked together to form a sort of pipeline:

(vertex data) --> vertex shader --> (rasterizer generates pixels) --> fragment shader --> (pixels written to framebuffer)

Linking does further optimizations based on how the vertex shader and pixel shader interact with each other. Let's say you have color data in your VBO and your vertex shader reads this and passes it on to the pixel shader. However, the pixel shader ignores the color value and makes everything white regardless of the color value. In this case, the linking compiler will realize that generating a color value for each pixel is just wasted work since it won't be used at all, so it removes that output from the vertex shader. This in turn makes the color vertex attribute (vertex shader input) unnecessary since it's not being used either, and poof; there goes that as well, and you'll get -1 when you try to query the location of that attribute from Java (= attribute doesn't exist). GLSL always optimizes away unused uniforms and attributes.

What's the point of this? For example, you can write a massive vertex shader that does everything you'll ever need: Colors, texture coordinates, shadow map coordinates, normals, tangents, you name it. Then you can reuse this vertex shader for any number of fragment shaders that only use a small number of those output variables without having to worry about performance, since the compiler will automatically optimize away unused variables and computations that aren't needed by that specific fragment shader. The linking step allows you to mix and match vertex and fragment shaders and get optimal performance anyway. It also has uses in more advanced OpenGL.

Offline Riven
« League of Dukes »

JGO Overlord


Medals: 605
Projects: 4
Exp: 16 years


Hand over your head.


« Reply #15 - Posted 2013-12-06 07:19:09 »

It also has uses in more advanced OpenGL, like
Don't keep us waitin'! The anticipation is killing me.

Hi, appreciate more people! Σ ♥ = ¾
Learn how to award medals... and work your way up the social rankings
Online SHC
« Reply #16 - Posted 2013-12-06 07:48:33 »

Sorry for not replying before, I'm at the college hostel and I came home due to a lot of strikes today.

Thanks but I'm still unclear on what the glLinkProgram function does. The glAttachShader function attaches the shaders to the program so what does linking the program do? Also this may be a stupid question but when you say shader object do you mean shader? Are they the same thing?

Before writing about those functions, I want to say how C programs get created (just to introduce the term LINKING). There, linking means that the generated object code is (in some compilers) linked into a format the OS can understand, which is the executable file we see after compilation. It's the same concept here: glLinkProgram links the program into an executable that the GPU can run. Before linking the program, we attach the shaders to the program using the glAttachShader function. At execution time the vertex shader always runs before the fragment shader, no matter what order we attached them in our source code.

Then at the time of executing, the GPU passes the vertex data to the vertex shader.

(vertex_data) --> (vertex_shader)  // Transforms the vertices and generates the pixels

The generated pixels are passed to the fragment shader.

(pixels) --> (fragment_shader)     // Adds the colour data from the textures to the pixels and lighting

Those pixels will then be transformed to screen coordinates and displayed on the screen. This is just a basic view of shaders; you can get more info on that topic here.

Offline theagentd
« Reply #17 - Posted 2013-12-06 17:50:07 »

It also has uses in more advanced OpenGL, like
Don't keep us waitin'! The anticipation is killing me.
I thought it'd be unrelated so I decided not to write anything, but here goes:

You have to set up certain things before linking. In OpenGL 3+, there's no built-in gl_FragColor output for fragment shaders, so you have to define your output(s) yourself. When combined with MRT (rendering to multiple textures at the same time), you have to specify which output goes to which color attachment using glBindFragDataLocation(). This has to be done before linking. (Note: The output index can also be defined in your shader.)

The same is true when capturing vertex data using transform feedback. You have to tell OpenGL which outputs of your vertex or geometry shader you're interested in using glTransformFeedbackVaryings() to prevent the GLSL compiler from potentially optimizing those attributes away. Again, this has to be done before linking.


...

The generated pixels are passed to the fragment shader.

1  
(pixels) --> (fragment_shader)     // Adds the colour data from the textures to the pixels and lighting

Those pixels will be then transformed to the screen coordinates and displayed on the screen. ...
Just a small detail, but I'd like to point out that pixels are generated by transforming the geometry to screen coordinates and filling in the pixels that have their centers covered by the geometry. The screen coordinate transformation happens before the fragment shader, and the result is available to the fragment shader in the built-in variable gl_FragCoord.

Offline StumpyStrust
« Reply #18 - Posted 2013-12-06 18:09:32 »

This is my reaction to these explanations.


Offline Troubleshoots
« Reply #19 - Posted 2013-12-06 18:13:56 »

This is my reaction to these explanations.


Well said. Cheesy

@theagentd @SHC Thanks a lot, nice explanations.

Offline Troubleshoots
« Reply #20 - Posted 2013-12-08 23:46:12 »

Consider I have this field in my vertex shader:

in vec4 position;

Now consider that I've set up my vbo, created my shaders, bound my vertex attributes, etc. and now I'm rendering the vertices. My code is:

glUseProgram(program);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, false, 0, 0);

glDrawArrays(GL_TRIANGLES, 0, 3);

glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(0);

Question: in what order is everything done?
My understanding (I'm assuming things here) is that when glDrawArrays is called, the glVertexAttribPointer call formats the data in the VBO and it sends that data to the attribute index specified by the first argument. Since attribute index 0 is enabled, the position field is initialized?  Am I correct?

Offline theagentd
« Reply #21 - Posted 2013-12-09 02:44:55 »

Question: in what order is everything done?
My understanding (I'm assuming things here) is that when glDrawArrays is called, the glVertexAttribPointer call formats the data in the VBO and it sends that data to the attribute index specified by the first argument. Since attribute index 0 is enabled, the position field is initialized?  Am I correct?
glVertexAttribPointer() does not "format" the data in the VBO. It simply explains how to interpret it.

glVertexAttribPointer(0, 4, GL_FLOAT, false, 0, 0);

Arguments:
1: Which attribute location this should be put in. Basically, which shader input variable should we store this in?
2: The number of components of this attribute. If 4, then the shader input variable has to be a vec4.
3: Data type.
4: Should the data be normalized? Used when uploading bytes, shorts and ints. If true with GL_UNSIGNED_BYTE, then the byte range 0-255 is mapped to 0.0 to 1.0. If false, then treated as 0.0 to 255.0. If normalized and GL_BYTE, the signed byte range is mapped to -1 to 1.
5: The stride: the number of bytes from the start of one vertex's data to the start of the next. 0 = tightly packed, in which case OpenGL calculates the size of this attribute and uses that. In this case, that's 4 components times 4 bytes per float, so 16.
6: Offset in bytes. As with stride, useful when having more than one attribute interleaved in the VBO.

As you can see, nothing here actually modifies the VBO. It tells what to do with the data in it. When you then call glDrawArrays(), OpenGL will read vertex data from the VBO based on the vertex attribute setup.
glDrawArrays(GL_TRIANGLES, 0, 3);
renders 3 vertices. For each vertex, it goes over all enabled attributes and reads data from a VBO as specified by the corresponding call to glVertexAttribPointer(). For the first vertex (vertex 0), it looks at attribute 0 and sees that it should read 4 floats starting at byte (offset + vertexID * stride) = (0 + 0*16) = 0, so it reads bytes 0 through 15. For the second vertex it starts at (0 + 1*16) = 16, and so on.
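That address arithmetic can be checked with a few lines of plain Java; `byteOffset` is an illustrative helper, not an OpenGL call:

```java
public class AttribOffsets {
    // Byte position where vertex i's attribute starts in a VBO, the way
    // OpenGL computes it: start = offset + vertexID * stride. A stride of 0
    // means tightly packed, i.e. stride = componentCount * bytesPerComponent.
    public static int byteOffset(int offset, int stride, int components,
                                 int bytesPerComponent, int vertexID) {
        int effectiveStride = (stride == 0) ? components * bytesPerComponent : stride;
        return offset + vertexID * effectiveStride;
    }

    public static void main(String[] args) {
        // glVertexAttribPointer(0, 4, GL_FLOAT, false, 0, 0):
        // 4 floats per vertex, tightly packed -> effective stride of 16 bytes.
        System.out.println(byteOffset(0, 0, 4, 4, 0)); // vertex 0 starts at byte 0
        System.out.println(byteOffset(0, 0, 4, 4, 1)); // vertex 1 starts at byte 16
        System.out.println(byteOffset(0, 0, 4, 4, 2)); // vertex 2 starts at byte 32
    }
}
```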

Offline Troubleshoots
« Reply #22 - Posted 2013-12-17 15:14:00 »

I took a couple of days' break to clear my head and decided to try to learn the matrix maths behind all the translations etc., which I realised I should have done earlier. Anyway, I read this page: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/. Before I plod on, let me re-post this image that RobinB posted previously.



The first few lines confused me.
Quote
If w == 1, then the vector (x,y,z,1) is a position in space.
If w == 0, then the vector (x,y,z,0) is a direction.

So first off, what are some scenarios where we'd need a direction vector? Is this related to the direction of scaling an object or something? Also, I thought that W could be between 0 and 1. What happens if the W component is defined as, let's say, 0.25?

Translation:
Let's now refer to this image that the tutorial provides.


Let's say we changed that matrix to:

1, 1, 0, 10
0, 1, 0, 0
0, 0, 1, 0
0, 0, 0, 1

We'd get (30, 10, 10, 1). We get a change to the X coordinate, but we're multiplying it by our Y coordinate. Can someone explain what use this is? Do any of the OpenGL functions modify that part of the matrix? Also, what is the translation column actually for? Couldn't you do translations in the X/Y/Z columns?

Offline Riven
« League of Dukes »

JGO Overlord


Medals: 605
Projects: 4
Exp: 16 years


Hand over your head.


« Reply #23 - Posted 2013-12-17 19:10:14 »

When you rotate over the Z axis:
    the incoming X affects the outgoing Y
    the incoming Y affects the outgoing X

That is when 'OpenGL uses that part of the matrix'.
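The w = 1 vs. w = 0 behaviour asked about above can also be verified with a tiny plain-Java matrix multiply (illustrative code, not tied to any OpenGL API):

```java
public class MatVec {
    // Multiply a row-major 4x4 matrix by a column vector (x, y, z, w).
    public static float[] mul(float[] m, float[] v) {
        float[] r = new float[4];
        for (int row = 0; row < 4; row++) {
            r[row] = m[row * 4] * v[0] + m[row * 4 + 1] * v[1]
                   + m[row * 4 + 2] * v[2] + m[row * 4 + 3] * v[3];
        }
        return r;
    }

    public static void main(String[] args) {
        // Translation by (10, 0, 0): the offset sits in the fourth column,
        // so it is multiplied by the vector's w component.
        float[] t = {
            1, 0, 0, 10,
            0, 1, 0, 0,
            0, 0, 1, 0,
            0, 0, 0, 1
        };
        // A position (w == 1) is moved by the translation...
        System.out.println(java.util.Arrays.toString(mul(t, new float[]{10, 10, 10, 1}))); // [20.0, 10.0, 10.0, 1.0]
        // ...but a direction (w == 0) is unaffected, because 10 * 0 = 0.
        System.out.println(java.util.Arrays.toString(mul(t, new float[]{10, 10, 10, 0}))); // [10.0, 10.0, 10.0, 0.0]
    }
}
```

This is exactly why the tutorial calls (x, y, z, 1) a position and (x, y, z, 0) a direction: only the former is affected by the translation column.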

Hi, appreciate more people! Σ ♥ = ¾
Learn how to award medals... and work your way up the social rankings
Offline Troubleshoots
« Reply #24 - Posted 2013-12-23 17:11:07 »

When you rotate over the Z axis:
    the incoming X affects the outgoing Y
    the incoming Y affects the outgoing X

That is when 'OpenGL uses that part of the matrix'.

Ahh thanks, I found a page which explains everything.

Next Question:
How do the stride and offset parameters of glVertexPointer() and the like work? I've tried searching around, and I can only find explanations like "defines the byte offset between data", which I don't understand, and C++-related explanations that use sizeof() as a parameter, which I don't understand either. Examples would be appreciated.

Say I have data packed as VCVCVC:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(
      GL_ARRAY_BUFFER,
      (FloatBuffer) BufferUtils
         .createFloatBuffer(18)
            .put(new float[] { -0.5f, -0.5f, 0, 1f, 1f, 1f, -0.5f,
                  0.5f, 0f, 1f, 0f, 0f, 0.5f, -0.5f, 0f, 0f, 0f,
                  1f }).flip(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);

.....

glVertexPointer(3, GL_FLOAT, 6 << 2, 0 << 2);
glColorPointer(3, GL_FLOAT, 6 << 2, 3 << 2);


That would translate to:
glVertexPointer(3, GL_FLOAT, 24, 0);
glColorPointer(3, GL_FLOAT, 24, 12);

That doesn't really make sense to me. I only have 18 pieces of data in my buffer but I'm defining the stride as 24 and one of the offsets as 12.  Huh

Now let's consider I have the data packed as VVVCCC.
glVertexPointer(3, GL_FLOAT, 3 << 2, 0 << 2);
glColorPointer(3, GL_FLOAT, 3 << 2, 9 << 2);

Would I be correct in saying that the stride is
(pieces of data * proportional gap between data) << 2
and the offset is
starting position of the data << 2
? Why do you left-shift everything by 2?

And finally, how is the below used?
Quote
public static void glVertexPointer(int size, int stride, java.nio.FloatBuffer pointer)

Is it used when the offset can be 0, i.e. tightly packed, non-interleaved buffers? Everything I've seen so far binds the buffer and uses
Quote
public static void glVertexPointer(int size, int type, int stride, long pointer_buffer_offset)

EDIT: Never mind I'm pretty sure that I'm correct in thinking that that function is used for VAOs. The difference between VBOs and VAOs is that the data for a VBO is placed on the GPU and you access it via a handle whereas with a VAO you have to keep creating the buffer. Am I correct?

Offline Danny02
« Reply #25 - Posted 2013-12-23 18:11:56 »

About the stride and offset thing: the stride defines how big all the attributes of one vertex are in bytes, taken together.

In your example you have two vertex attributes. Each of them consists of 3 floats, and a float has a size of 4 bytes, so you need 24 bytes (2*3*4 == 6 << 2) for one vertex. (That's also why everything is left-shifted by 2: x << 2 == x * 4, the size of a float in bytes.)
So that OpenGL knows where it can find each single attribute, you define an offset into this vertex data block. In your example the position attribute is at the beginning of each block, so its offset is 0. The colour attribute is placed right after the position attribute, so its offset is equal to the size of the position attribute (3*4 = 12 bytes).

When you don't want an interleaved data structure (i.e. VVVCCC), you would bind the same buffer but with different pointer offsets.
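These numbers can be double-checked in plain Java by packing the example's VCVCVC floats into a ByteBuffer and reading them back with offset + i * stride, the same arithmetic OpenGL uses. All names here are illustrative:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class Interleaved {
    static final int FLOAT_BYTES = 4;
    // VCVCVC layout: 3 position floats followed by 3 colour floats per vertex.
    public static final int STRIDE = (3 + 3) * FLOAT_BYTES;  // 24 bytes per vertex
    public static final int COLOUR_OFFSET = 3 * FLOAT_BYTES; // colours start at byte 12

    // Pack the example's 18 floats into bytes, as glBufferData would receive them.
    public static ByteBuffer demoBuffer() {
        float[] data = { -0.5f, -0.5f, 0f,  1f, 1f, 1f,   // vertex 0: position, colour
                         -0.5f,  0.5f, 0f,  1f, 0f, 0f,   // vertex 1
                          0.5f, -0.5f, 0f,  0f, 0f, 1f }; // vertex 2
        ByteBuffer buf = ByteBuffer.allocate(data.length * FLOAT_BYTES)
                                   .order(ByteOrder.nativeOrder());
        for (float f : data) buf.putFloat(f);
        return buf;
    }

    // Read the red component of vertex i exactly the way OpenGL would:
    // byte position = offset + i * stride.
    public static float red(ByteBuffer vbo, int i) {
        return vbo.getFloat(COLOUR_OFFSET + i * STRIDE);
    }

    public static void main(String[] args) {
        ByteBuffer vbo = demoBuffer();
        System.out.println(red(vbo, 1)); // 1.0: vertex 1's colour is pure red
        System.out.println(red(vbo, 2)); // 0.0: vertex 2 has no red component
    }
}
```

The 18 floats only fill two and a half strides' worth of the third vertex's block, which is fine: the stride describes the spacing between vertices, not the total buffer size.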
Offline Troubleshoots
« Reply #26 - Posted 2013-12-24 00:17:15 »

@Danny02 Thanks.
Next Question: (I think two in a day is my record  Roll Eyes)

Let's make this short and sweet. You can define an offset with glVertexPointer(), so what's the point of the first (not literally) parameter in
glDrawArrays(mode, first, count)
?

Offline ra4king

JGO Kernel


Medals: 322
Projects: 2
Exp: 4 years


I'm the King!


« Reply #27 - Posted 2013-12-24 01:13:27 »

Beware, glVertexPointer and glColorPointer are deprecated and not part of core OpenGL.

Concerning your question, "first" is the first index to start at, as defined in the docs (which you should most likely try to reference more often Wink). If you want to render everything, you start at 0 and "count" is how many vertices there are.

Offline Troubleshoots
« Reply #28 - Posted 2013-12-24 16:44:01 »

Beware, glVertexPointer and glColorPointer are deprecated and not part of core OpenGL.

Concerning your question, "first" is the first index to start at, as defined in the docs (which you should most likely try to reference more often Wink). If you want to render everything, you start at 0 and "count" is how many vertices there are.

So you're saying that the first parameter is used for newer versions of OpenGL because you cannot define an offset for a VBO?

Offline PandaMoniumHUN

Junior Member


Medals: 4



« Reply #29 - Posted 2013-12-24 22:04:11 »

Well, there are some weird parameters in OpenGL that you're likely to never use, although having options never hurts. Smiley

In modern OpenGL you fill up buffers just as you do now (glBufferData(...)/glBufferSubData(...)), but to send data to your shaders you have to use glVertexAttribPointer(...), which lets you set offsets and strides so that a single buffer can feed multiple attributes. This means you can store your vertex, normal and texture coordinate (and even more) data in a single buffer/VBO and then render from it using your shaders.

To answer your question: you can use offsets in the pointer calls to tell OpenGL where your attribute begins in the buffer, but you shouldn't use this to "skip over" vertices as you're thinking right now.
If you want to skip over, for example, the first 5 vertices, you should render with glDrawArrays()'s first parameter set to 5.
I know it possibly sounds a bit overwhelming right now, but if you have any questions just ask; after all, that's what this topic is for. Roll Eyes
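The relationship between first and count can be sketched in plain Java (verticesRead is an illustrative helper, not an OpenGL call): glDrawArrays(mode, first, count) simply reads vertices first through first + count - 1.

```java
public class DrawRange {
    // The vertex indices that glDrawArrays(mode, first, count) will read:
    // first, first + 1, ..., first + count - 1.
    public static int[] verticesRead(int first, int count) {
        int[] ids = new int[count];
        for (int i = 0; i < count; i++) ids[i] = first + i;
        return ids;
    }

    public static void main(String[] args) {
        // Skip the first 5 vertices of a 9-vertex buffer and draw the rest.
        System.out.println(java.util.Arrays.toString(verticesRead(5, 4))); // [5, 6, 7, 8]
    }
}
```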
