Java-Gaming.org
OpenGL Questions (Read 6775 times)
Offline Troubleshoots
« Reply #30 - Posted 2013-12-24 21:49:26 »

Quote
Well, there are some weird parameters in OpenGL that you're likely to never use, although having options never hurts. Smiley

In modern OpenGL you fill up buffers just as you do now (glBufferData(...)/glBufferSubData(...)), but to feed data to your shaders you have to use glVertexAttribPointer(...), which lets you set offsets and strides so that a single buffer can hold multiple attributes. This means you can store your vertex, normal and texture coordinate data (and even more) in a single buffer/VBO and then render from it using your shaders.

To answer your question: you can use offsets in pointers to tell OpenGL where an attribute begins in the buffer, but you shouldn't use this to "skip over" indices as you're thinking right now.
If you want to skip, for example, the first 5 vertices, you should render with the `first` parameter of glDrawArrays() set to 5.
I know it possibly sounds a bit overwhelming right now, but if you have any questions just ask; after all, that's what this topic is for. Roll Eyes
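
(As a concrete sketch of the two points above, with LWJGL-style calls; vboId and vertexCount are placeholder names:)

// One VBO with interleaved data per vertex: x,y,z, nx,ny,nz  (6 floats = 24 bytes)
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 6 * 4, 0);     // position: offset 0, stride 24
glVertexAttribPointer(1, 3, GL_FLOAT, false, 6 * 4, 3 * 4); // normal: offset 12, stride 24
// Skipping the first 5 vertices is done with glDrawArrays' `first` argument:
glDrawArrays(GL_TRIANGLES, 5, vertexCount - 5);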

Ahh, I understand. It's going a little off the buffer-object topic, but when do you use shaders in modern OpenGL? I'd have thought you'd use a vertex shader for the camera, for example, and various fragment shaders for different visual effects; however, I've seen forum posts in the past saying that shaders aren't used very often, but also posts saying that they're an essential part of modern OpenGL. After a little peek around the LibGDX source I found that it uses no shaders and only goes up to OpenGL 2.0, but runs pretty fast. Are shaders only really essential for 3D?

Offline davedes
« Reply #31 - Posted 2013-12-24 22:23:45 »

Shaders are an essential part of the OpenGL 2.0+ pipeline. That's why it's called the "programmable pipeline."

To get a triangle on screen, you need to upload its vertex data. That's what a VBO is for. A VAO sets up attributes for that VBO. Then when it gets rendered, the vertices first pass through a vertex shader. There is no default shader -- so you have to create one and bind it. During rasterization, the pixels that make up your triangles (aka fragments) go through the currently bound fragment shader.
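
(A minimal sketch of that setup with LWJGL-style bindings; the shader sources here are placeholders and error checking is omitted:)

String vertexSrc =
    "#version 110\n" +
    "attribute vec3 position;\n" +
    "void main() { gl_Position = vec4(position, 1.0); }";
String fragmentSrc =
    "#version 110\n" +
    "void main() { gl_FragColor = vec4(1.0); }"; // plain white

int vs = glCreateShader(GL_VERTEX_SHADER);       // there is no default shader,
glShaderSource(vs, vertexSrc);                   // so compile and link one yourself
glCompileShader(vs);

int fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, fragmentSrc);
glCompileShader(fs);

int program = glCreateProgram();
glAttachShader(program, vs);
glAttachShader(program, fs);
glBindAttribLocation(program, 0, "position");    // attribute location 0 = "position"
glLinkProgram(program);

glUseProgram(program);                           // bind it before drawing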

All of these questions would be answered with a book or some reading. Smiley

http://www.arcsynthesis.org/gltut/
http://www.opengl-tutorial.org/
http://open.gl/
http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Table-of-Contents.html

Offline Troubleshoots
« Reply #32 - Posted 2013-12-24 22:32:49 »

Quote from: davedes
Shaders are an essential part of the OpenGL 2.0+ pipeline. That's why it's called the "programmable pipeline." [...] All of these questions would be answered with a book or some reading. Smiley

Quote
also seen forum posts saying that they're an essential part of modern OpenGL

I didn't ask how to use a shader program or what a shader program does. I asked when shaders are used. I've looked at all of those books and started reading the arcsynthesis one, but my difficulty is with the language used, both the programming language and the wording. I've downloaded ra4king's port of the code for the arcsynthesis book, but I don't learn by reading code. Currently I'm focusing on VBOs; then I'll continue reading the book and do some experimentation.

Offline Danny02
« Reply #33 - Posted 2013-12-24 22:40:59 »

You need shaders to even get a single triangle on screen. OpenGL doesn't know anything about the vertex data you provide in a VBO; you have to tell it through a shader what and where to draw something.
Offline Troubleshoots
« Reply #34 - Posted 2013-12-24 22:48:37 »

Quote from: Danny02
You need shaders to even get a single triangle on screen. OpenGL doesn't know anything about the vertex data you provide in a VBO; you have to tell it through a shader what and where to draw something.

Well I know that. I guess my original question wasn't too clear. Let me re-word it.
How would you split up the use of shaders in a game? Would you use a vertex shader for the camera and several fragment shaders for textures, visual effects etc.? What other things would you need a shader for in, for example, a simple 2D platformer?
I had a peek around the LibGDX source and noticed that it doesn't use the programmable pipeline. Why is this? Is it because the fixed-function pipeline is good enough for 2D games? Are shaders and the modern OpenGL functions that are part of the programmable pipeline only essential for content-rich 3D games?

Offline davedes
« Reply #35 - Posted 2013-12-25 00:52:33 »

You need a vertex and fragment shader to render anything. LibGDX definitely does use shaders and the programmable pipeline. It also has some fixed-function fallbacks so that your game will still render on older versions of Android.

A 2D game probably just needs one shader program, to get a textured quad on screen in an orthographic projection. Certain effects like normal mapping use different shaders, but most 2D games don't do this.
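
(For reference, a minimal sketch of such a shader pair, GLSL 1.10 written as Java strings; the names projection/tex/texCoord are made up for the example:)

String vert =
    "#version 110\n" +
    "attribute vec2 position;\n" +
    "attribute vec2 texCoord;\n" +
    "uniform mat4 projection;\n" +            // orthographic projection matrix
    "varying vec2 vTexCoord;\n" +
    "void main() {\n" +
    "    vTexCoord = texCoord;\n" +
    "    gl_Position = projection * vec4(position, 0.0, 1.0);\n" +
    "}";
String frag =
    "#version 110\n" +
    "uniform sampler2D tex;\n" +
    "varying vec2 vTexCoord;\n" +
    "void main() { gl_FragColor = texture2D(tex, vTexCoord); }"; // sample the texture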

Here's a tutorial that goes into depth on shaders and 2D effects. I wrote it -- so let me know if you have questions or find the language difficult.
https://github.com/mattdesl/lwjgl-basics/wiki/Shaders

Offline ra4king
« Reply #36 - Posted 2013-12-25 06:48:10 »

Also, another quick note: LibGDX uses OpenGL ES, so its version numbers are different from desktop OpenGL's. GL ES 1.x was fixed-function; GL ES 2.0 is programmable. There's now GL ES 3.0, which adds more goodies to the programmable pipeline.

Offline PandaMoniumHUN
« Reply #37 - Posted 2013-12-25 08:59:06 »

Quote from: Troubleshoots
Well I know that. I guess my original question wasn't too clear. Let me re-word it. [...] Are shaders and the modern OpenGL functions that are part of the programmable pipeline only essential for content-rich 3D games?

I think you don't really understand what a shader is and what shaders are used for.

In modern OpenGL (3.1+ counts as modern IMO, because that's when all the deprecated stuff became unsupported) you HAVE TO use shaders to get anything on the screen. Pointing
There are different kinds of shaders (vertex, fragment (sometimes called pixel), geometry), each serving its own purpose.
In the vertex shader you usually calculate the vertex position (multiplying the vertex attribute input by your own projection, view, model, etc. matrices) and pass incoming attributes, e.g. normals, over to the fragment shader, while in the fragment shader you calculate a single fragment's (or pixel's, although I don't think that's really the right word) color.

This is not too complicated a process once you understand what's going on under the hood. However, I really suggest you pick up a book on modern OpenGL, since immediate mode isn't really viable for rendering anything more than a few triangles, and it won't let you do any cool effects like lighting (other than the built-in crap that's practically useless).
There are also a few good tutorials on the internet, but honestly it's extremely hard to find decent tutorials on modern OpenGL (and by modern I mean something like GLSL 3.30 and definitely not GLSL 1.20).

You should check out davedes's shader tutorials (linked a post or two above); that's a good point to start from. Even though it's not really modern either, it's way better than using immediate mode without any shaders. Wink
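
(A sketch of that vertex/fragment split in GLSL 3.30, as Java strings; the matrix and attribute names are made up for the example:)

String vert =
    "#version 330\n" +
    "layout(location = 0) in vec3 position;\n" +
    "layout(location = 1) in vec3 normal;\n" +
    "uniform mat4 projection, view, model;\n" +
    "out vec3 vNormal;\n" +
    "void main() {\n" +
    "    vNormal = normal;\n" +              // pass the attribute on to the fragment shader
    "    gl_Position = projection * view * model * vec4(position, 1.0);\n" +
    "}";
String frag =
    "#version 330\n" +
    "in vec3 vNormal;\n" +
    "out vec4 fragColor;\n" +
    "void main() { fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); }"; // visualize normals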

Offline Troubleshoots
« Reply #38 - Posted 2013-12-29 15:41:58 »

Regarding my previous questions about shaders: never mind. I was meaning to ask how shaders affect the design of a game, i.e. how you structure and split up the use of shaders in the game. Let's ignore that question for now, though.

I've started to re-read the start of the arcsynthesis book. I've noticed that in ra4king's LWJGL code, when the buffer object is filled with data,
glBindVertexArray(glGenVertexArrays())
is called. If this is commented out an error is logged, although rendering appears to work without the call. Why is this? I don't really understand what the specification says about it, so I'd be grateful for a brief explanation.

Also, I'm unclear on how the connection is made between input variables in a shader and the code.
If I have a position vec3 input variable bound to attribute location 0, how is the connection between the attribute and the code below made?

glUseProgram(programId);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 3 << 2, 0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glUseProgram(0);


I understand that glEnableVertexAttribArray(0) enables the use of attribute index 0. Is there a reason it has the word "array" in the function name? Now for the function glVertexAttribPointer(). I know what all the arguments mean, but I'm unclear about how data is passed from a buffer object to the vertex shader executable in the shader program. Does it pass each x, y, z component individually to the attribute because of the stride and the size specified?

Offline Danny02
« Reply #39 - Posted 2013-12-29 17:06:23 »

You have to know that OpenGL is a huge state machine. Think of a program written only with global variables. Each of the function calls in question sets some specific state/global variable. Let's go over them one by one.
  • glBindBuffer just sets the GL_ARRAY_BUFFER state to some id
  • glEnableVertexAttribArray sets the enabled state of some attribute (e.g. 0) to true (glDisableVertexAttribArray sets it to false)
  • glVertexAttribPointer sets all other needed state of some attribute (e.g. 0) and also sets the buffer-object state of the attribute to the current value of the GL_ARRAY_BUFFER state
  • glDrawArrays just gives the command to draw X shapes with the currently bound shader and enabled attributes

So this code would still work if you pasted the line
glBindBuffer(GL_ARRAY_BUFFER, 0);
in front of the glDrawArrays command.
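
(In code, that last point looks like this; vboId and vertexCount are placeholder names:)

glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0); // copies the current GL_ARRAY_BUFFER binding into attribute 0's state
glBindBuffer(GL_ARRAY_BUFFER, 0);                   // harmless: attribute 0 already remembers vboId
glDrawArrays(GL_TRIANGLES, 0, vertexCount);         // still draws from vboId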

Now about the
glBindVertexArray(glGenVertexArrays())
thing:
First of all, these two functions have nothing to do with what is also called vertex arrays (drawing directly from some client-side array). They create and use something called a vertex array object (VAO), which was introduced in OpenGL 3.
What this OpenGL object does is keep its own copy of the global vertex-attribute state (glVertexAttribPointer, glEnableVertexAttribArray). When you bind such an object, all following calls that change attribute state change the VAO's state and not the global state (global == VAO with id 0).

So you can create one VAO for each model (your VBO, for now) and only have to set the attributes once. After that, to render one specific model you only have to set the shader and the VAO, so only 3 function calls are needed to render something.

So in the end,
glBindVertexArray(glGenVertexArrays())
is quite pointless, because the id from the Gen call isn't saved and can't be reused.
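
(A sketch of that VAO-per-model pattern; the ids are placeholder names, and note the Gen result is saved this time:)

// Setup, once per model:
int vao = glGenVertexArrays();      // keep the id so the VAO can be rebound (and deleted) later
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);
glBindVertexArray(0);

// Rendering one model is then just three calls:
glUseProgram(programId);
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);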
Offline PandaMoniumHUN
« Reply #40 - Posted 2013-12-29 17:10:19 »

Well, it may sound harsh, but try to think before you ask a question.
Last time, when you were asking about the game-design implications of shaders, we answered your question: they're used for everything.
And yes, "everything" includes everything from a simple cube to real-time shadows and extreme lighting effects.
You can ask for examples, but the answer is that they're everywhere. I've told you that the vertex shader is used for vertex position calculations and passing data to the fragment shader, while the fragment shader tells OpenGL the color of the actual fragment it runs on.

glBindVertexArray(glGenVertexArrays()) simply binds an empty vertex array object; as far as I know this has no effect on rendering whatsoever, and you definitely should not get an error if it's missing (assuming that you do everything correctly). Also, it's bad practice, since you should always store your generated object's id so you can delete it later when you don't need it anymore, freeing up VRAM.
Edit: Actually, now I remember what vertex array objects do. As explained above by Danny02, they save the vertex attribute modifications and buffer bindings, so if you use them, all you have to do at render time is bind the vertex array object and it'll set all the buffers and vertex attribute pointers for you again; then you just render using glDrawArrays(...) or glDrawElements(...).

The glVertexAttribPointer(...) in your example tells OpenGL that the data should be sent to attribute array 0 (so basically to location 0), that the attribute has 3 components (so it will become a vec3 in the shader), that the type is float, that the stride between instances of the attribute in the buffer is 3<<2 bytes (I don't really get why you do this; see my explanation of this parameter below), and that the data starts right at the first byte of the buffer.

You say that you understand every parameter of glVertexAttribPointer(...) and then you ask how data gets passed to the shader, which suggests you don't understand what glVertexAttribPointer(...) does.
Using glVertexAttribPointer(...) you tell OpenGL what data it should send to the shader, and in what format.
One really important fact is that the vertex attribute pointer will always send data from the currently bound array buffer.
The parameters are already explained here, but I will try to rephrase them for you:
index - The index of the vertex attribute array (you will also have to enable it using glEnableVertexAttribArray(index)).
size - The number of components you're going to pass. It can only be 1, 2, 3 or 4, and OpenGL will automatically treat the data as a float, vec2, vec3 or vec4 accordingly.
normalized - If set to true, OpenGL will normalize the value on access. Otherwise it should be set to false.
stride - The stride of the attribute, IN BYTES. This is the trickiest value: not only is it given in bytes, you also have to set it so that it points from the first component of one instance of this attribute to the first component of the next. My explanation is probably crap because of my English skills, but you should look it up once you start using interleaved arrays (also remember that 1 float = 4 bytes). For now, all you have to know is that if you set it to 0, OpenGL assumes the buffer is tightly packed and reads the data contiguously. TL;DR: set it to 0 for now and look it up later when you do more complex stuff.
offset - The offset before the attribute's first appearance in the buffer, specified IN BYTES. This one's easier to explain than the stride: if your buffer looks like [1, 2, 3, 4, 5, 6, ...] and you want OpenGL to start reading at the number 4, you set the offset to 12, because that equals 3x4, so it skips the first 3 floats in the buffer. Until you fill your buffer with other stuff or use interleaved arrays, you shouldn't really worry about this; just set it to 0.
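
(To make the stride/offset arithmetic concrete: a hypothetical interleaved buffer with a vec3 position followed by a vec2 texcoord per vertex would be set up like this:)

// Buffer layout per vertex: [x y z u v] -> 5 floats = 20 bytes
int stride = 5 * 4;                                           // in bytes (1 float = 4 bytes)
glVertexAttribPointer(0, 3, GL_FLOAT, false, stride, 0);      // position starts at byte 0
glVertexAttribPointer(1, 2, GL_FLOAT, false, stride, 3 * 4);  // texcoord starts 12 bytes in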

Some of the information I wrote here might not be 100% theoretically correct, since I'm a hobbyist and not a professional (even though I'm planning to become one soon Grin), but most of it should be right. I know OpenGL can be hard to learn, but never give up and you'll become good at it in no time. Smiley

Offline ra4king
« Reply #41 - Posted 2013-12-29 21:35:04 »

Quote from: Danny02
So in the end, glBindVertexArray(glGenVertexArrays()) is quite pointless, because the id from the Gen call isn't saved and can't be reused.

I want to clarify why this is in my code. In core OpenGL, there's no more global VAO: it is now required to bind a VAO before any draw calls. Since the Arcsynthesis tutorial doesn't cover VAOs until Chapter 5, it just binds one and doesn't use it yet.

This should also explain why you get an error when you remove it Wink

Offline Danny02
« Reply #42 - Posted 2013-12-29 23:45:17 »

Interesting, I didn't know that. One more reason to read up on OpenGL > 3.0.
Offline ra4king
« Reply #43 - Posted 2013-12-30 00:33:04 »

Yup, and the other 99% of the questions in this thread are answered in the first 7 chapters, so read thoroughly OP Smiley

Offline Troubleshoots
« Reply #44 - Posted 2013-12-30 10:42:37 »

Quote from: Danny02
Now about the glBindVertexArray(glGenVertexArrays()) thing: [...] So you can create one VAO for each model (your VBO, for now) and only have to set the attributes once. After that, to render one specific model you only have to set the shader and the VAO, so only 3 function calls are needed to render something.

I see, thanks. Does calling glGenVertexArrays() when the buffer is bound associate the buffer with the VAO, or is the association made another way?

Quote from: PandaMoniumHUN
Can be only 1, 2, 3 or 4 and OpenGL will automatically convert it to float (probably), vec2, vec3 or vec4.

I did previously know what the parameters were for but I was wondering how data was passed to the shader. You answered it here for me. Thanks.

Quote from: ra4king
I want to clarify why this is in my code. In core OpenGL, there's no more global VAO. [...] This should also explain why you get an error when you remove it Wink

I see. May I suggest adding a comment explaining this, especially since someone could confuse VAOs with VAs.

Quote from: ra4king
Yup, and the other 99% of the questions in this thread are answered in the first 7 chapters, so read thoroughly OP Smiley

I'd find it a lot easier if the book didn't say "this is explained in a later tutorial". I prefer to understand why I'm writing what I am.

Offline ra4king
« Reply #45 - Posted 2013-12-30 11:28:55 »

Quote from: Danny02
Now about the glBindVertexArray(glGenVertexArrays()) thing: [...]

Quote from: Troubleshoots
I see, thanks. Does calling glGenVertexArrays() when the buffer is bound associate the buffer with the VAO, or is the association made another way?
Hehe, another thing the Arcsynthesis tutorial makes sure to be clear about (in Chapter 5):
- The association of the currently bound ARRAY_BUFFER with the currently bound VAO only occurs at the glVertexAttribPointer(...) call.
- The association of an ELEMENT_ARRAY_BUFFER with the currently bound VAO occurs as soon as you bind the buffer.

For either one, once the association is made you can freely unbind the buffers, as the VAO now has a copy of the pointer.
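
(A sketch of that ordering; vao/vbo/ibo/indexCount are placeholder names:)

glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, false, 0, 0);  // <- ARRAY_BUFFER association recorded here
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);          // <- ELEMENT_ARRAY_BUFFER recorded at bind time
glBindVertexArray(0);                                // unbind the VAO first,
glBindBuffer(GL_ARRAY_BUFFER, 0);                    // then the buffers can be unbound freely

// Later, to draw:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);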

Quote from: Troubleshoots
I see. May I suggest adding a comment explaining this, especially since someone could confuse VAOs with VAs.
You're right. I'll go through and add that comment to each example that has that line. In fact, I think my code just needs to be more commented. Arcsynthesis's code is weird sometimes.

Quote from: Troubleshoots
I'd find it a lot easier if the book didn't say "this is explained in a later tutorial". I prefer to understand why I'm writing what I am.
Well it's quite difficult to explain everything at once. Patience, my friend. Smiley

Offline Troubleshoots
« Reply #46 - Posted 2014-01-03 21:09:08 »

New Question:
I've been reading more of the book and I'm currently on chapter 4. I understand pretty much the first half of the page, then I get lost. What's confusing me is what it says after it explains the perspective divide. It says:

Quote
You might notice that the scaling can be expressed as a division operation (multiplying by the reciprocal). And you may recall that the difference between clip space and normalized device coordinate space is a division by the W coordinate. So instead of doing the divide in the shader, we can simply set the W coordinate of each vertex correctly and let the hardware handle it.

What I've always been confused about, and what I still don't understand, is the meaning of the W coordinate. What does it represent? Up until now I've had the impression that the W coordinate is just a normalized Z coordinate, but that doesn't really make sense. All I can find on Google is "dividing by W converts clip space to NDCs". Maybe it's explained further on in the book, but I want to understand the maths behind all the projections before continuing.

I continued to read on through the section about camera space and got even more confused. It says:

Quote
The volume of camera space will range from positive infinity to negative infinity in all directions. Positive X extends right, positive Y extends up, and positive Z is forward. The last one is a change from clip space, where positive Z is away.

How can it be an infinite space? Why is +Z further away when in clip space -Z is further away? On the next three lines it says:

Quote
Our perspective projection transform will be specific to this space. As previously stated, the projection plane shall be a region [-1, 1] in the X and Y axes, and at a Z value of -1. The projection will be from vertices in the -Z direction onto this plane; vertices that have a positive Z value are behind the projection plane.

I get totally lost. It all seems very confusing. I'd appreciate it if someone could try to explain camera space and possibly throw in a diagram or two. The depth computation equation doesn't shed any light on it.

Offline ra4king
« Reply #47 - Posted 2014-01-03 22:22:10 »

Quote from: Troubleshoots
What I've always been confused about, and what I still don't understand, is the meaning of the W coordinate. What does it represent? [...]
That's really all you need to know, I promise. The 'W' coordinate will always be 1 as far as you're concerned (except for directional lights, where it will be 0, but that's for the Lighting chapters Cheesy). Multiplying the camera-space positions by the perspective matrix sets a "special" 'W' value in clip space that, divided into the XYZ components, produces the correct NDC coordinates.
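
(A quick worked illustration of that divide, a sketch using the fact that the standard perspective matrix produces $w_{clip} = -z_{eye}$, and ignoring the frustum scale factor:)

\[ x_{ndc} = \frac{x_{clip}}{w_{clip}}, \qquad w_{clip} = -z_{eye} \]
\[ x_{clip} = 4: \quad \frac{4}{10} = 0.4 \ \text{at}\ z_{eye} = -10, \qquad \frac{4}{20} = 0.2 \ \text{at}\ z_{eye} = -20 \]

A point twice as far away is divided by twice as much, so it lands half as far from the screen centre: that division is exactly the perspective foreshortening.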

Quote from: Troubleshoots
How can it be an infinite space? Why is +Z further away when in clip space -Z is further away?
In mathematical terms, it is an infinite space because nothing limits where you place your objects. Technically it is finite, as it extends to the maximum and minimum you can fit into a 32-bit float Smiley

And again, don't worry about clip space and NDC. You will never deal with them directly; that chapter just likes being complete and explaining how things work under the hood. Just remember that in camera space and the layers above it: -Z is forward.

EDIT: I found a nice little flowchart that shows the steps your vertices take to get from your vertex data to the screen:

[flowchart: Object Space -> (Model Matrix) -> World Space -> (View Matrix) -> Eye Space -> (Projection Matrix) -> Clip Space -> (Perspective Divide) -> NDC -> (Viewport Transform) -> Window Space]
Your vertex data first gets transformed by your Model-View matrix (the Model matrix multiplied by the View matrix): the Model matrix takes a vertex from Object Space to its place in the world (World Space), and the View matrix then positions that World Space vertex relative to the camera, aka Camera/Eye Space (where (0,0,0) is your camera). The Camera Space vertices are then transformed by the Projection matrix into Clip Space (which you should never need to deal with), the perspective divide takes them into NDC space, and finally NDC space is stretched onto the GL viewport.
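
(Compressed into one line, as a sketch; P, V, M are the projection, view and model matrices:)

\[ v_{clip} = P\,V\,M\,v_{object}, \qquad v_{ndc} = \frac{(x_{clip},\ y_{clip},\ z_{clip})}{w_{clip}} \]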

Offline Troubleshoots
« Reply #48 - Posted 2014-01-03 22:49:51 »

Quote from: ra4king
That's really all you need to know, I promise. The 'W' coordinate will always be 1 as far as you're concerned [...]

I hate not knowing how something works. Sad
I forgot to ask something else, so while I'm at it, here goes:

Why are the frustum scale, zNear and zFar variables defined as uniform? I thought that uniform variables were used when the variable should be changed regularly. Why aren't they just defined as attributes, i.e. as in variables?

Quote from: ra4king
Just remember that in camera space and the layers above it: -Z is forward.

When you say above layers you mean local space, world space and camera space?

Offline ra4king
« Reply #49 - Posted 2014-01-03 22:55:45 »

Quote from: Troubleshoots
I hate not knowing how something works. Sad

Me too! However, knowing how the W/perspective divide works requires studying matrix theory and linear algebra. You can probably find resources online if you really, really want to Smiley

Quote from: Troubleshoots
Why are the frustum scale, zNear and zFar variables defined as uniform? I thought that uniform variables were used when the variable should be changed regularly. Why aren't they just defined as attributes, i.e. as in variables?
Uniform variables are variables that are applicable to all vertices rendered together, for example your matrices are the same for every vertex in some object.

Meanwhile, attributes are per-vertex data.

Quote from: Troubleshoots
When you say above layers you mean local space, world space and camera space?
Yup.

Offline Troubleshoots
« Reply #50 - Posted 2014-01-03 23:06:36 »

Quote from: ra4king
Uniform variables are variables that are applicable to all vertices rendered together, for example your matrices are the same for every vertex in some object.

Meanwhile, attributes are per-vertex data.

Though wouldn't attributes apply to all vertices rendered together? I thought the only difference was that attributes can only be changed with every execution of the program, whereas uniform variables can be changed before each render call. Is there something else I need to know? Undecided

Offline ra4king
« Reply #51 - Posted 2014-01-04 00:14:58 »

You seem to be confusing attributes and uniforms.

Attributes are per-vertex data, for example: position, color, texcoords, etc., anything that is unique per vertex. If you are drawing an object that has 50 vertices, you upload all 50 positions, colors, texcoords, etc. to your VBO and tell glVertexAttribPointer which index each attribute is assigned to, as well as enough info for it to know how big each of those 50 vertices is. When rendering with any glDraw* command, your currently bound vertex shader is called 50 times, once for each vertex. After that is done and all the clipping/perspective division/rasterization happens, your fragment shader is called for each fragment (without anti-aliasing, you get 1 fragment per pixel rendered).

Uniforms are, well, uniform across the draw call. Before you call any glDraw* command, you can change your uniforms however you like. Once it's called, every vertex/fragment shader invocation sees the same uniforms. That's how you have the same matrices across all 50 vertices, and the same texture bound each time the fragment shader runs, for example.
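
(A sketch of the difference with LWJGL-style calls; vertexData, matrixBuffer, vboId and programId are placeholder names, and glUniformMatrix4fv is the LWJGL 3 name, LWJGL 2 calls it glUniformMatrix4:)

// Attributes: per-vertex data, uploaded once into the VBO.
glBindBuffer(GL_ARRAY_BUFFER, vboId);
glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW); // 50 positions/colors/texcoords...

// Uniforms: set before the draw call, identical for every one of the 50 vertices.
glUseProgram(programId);
int mvLoc = glGetUniformLocation(programId, "modelView");
glUniformMatrix4fv(mvLoc, false, matrixBuffer);
glDrawArrays(GL_TRIANGLES, 0, 50);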
