Show Posts
1  Game Development / Newbie & Debugging Questions / depth cull not working with FBO on: 2011-10-13 10:10:12
I have a problem with FBOs and depth in OpenGL. I am passing projection, view and model matrices to a shader that writes to the G-buffer. When I unbind the FBO and write to gl_FragColor, the scene displays as it ought. But when I write to gl_FragData[0] and then draw the accompanying texture to a screen-aligned quad, objects are drawn in reverse processing order rather than by depth... I can see through objects processed first to objects processed after. Has anyone had the same problem, and do they know a fix? Alternatively, could someone show how to handle the depth test manually in the fragment shader, i.e. reading depth values, querying the current depth, and writing to the depth buffer depending on a comparison?
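For reference, one common cause of exactly this symptom is an FBO that has colour attachments but no depth attachment: with nothing to test against, depth testing silently does nothing while the FBO is bound and triangles land in submission order. A rough sketch of a G-buffer style FBO with a depth renderbuffer, using LWJGL 2 bindings (JOGL exposes the same entry points); the sizes and variable names here are illustrative only:

import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL14;
import org.lwjgl.opengl.GL30;

int width = 800, height = 600;               // illustrative sizes

// colour texture that the shader's gl_FragData[0] will write into
int colourTex = GL11.glGenTextures();
GL11.glBindTexture(GL11.GL_TEXTURE_2D, colourTex);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);
GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
GL11.glTexImage2D(GL11.GL_TEXTURE_2D, 0, GL11.GL_RGBA8, width, height, 0,
                  GL11.GL_RGBA, GL11.GL_UNSIGNED_BYTE, (java.nio.ByteBuffer) null);

// the FBO itself
int fbo = GL30.glGenFramebuffers();
GL30.glBindFramebuffer(GL30.GL_FRAMEBUFFER, fbo);
GL30.glFramebufferTexture2D(GL30.GL_FRAMEBUFFER, GL30.GL_COLOR_ATTACHMENT0,
                            GL11.GL_TEXTURE_2D, colourTex, 0);

// without a depth attachment the depth test cannot run while the FBO is bound
int depthRb = GL30.glGenRenderbuffers();
GL30.glBindRenderbuffer(GL30.GL_RENDERBUFFER, depthRb);
GL30.glRenderbufferStorage(GL30.GL_RENDERBUFFER, GL14.GL_DEPTH_COMPONENT24, width, height);
GL30.glFramebufferRenderbuffer(GL30.GL_FRAMEBUFFER, GL30.GL_DEPTH_ATTACHMENT,
                               GL30.GL_RENDERBUFFER, depthRb);

if (GL30.glCheckFramebufferStatus(GL30.GL_FRAMEBUFFER) != GL30.GL_FRAMEBUFFER_COMPLETE)
    throw new IllegalStateException("FBO incomplete");

GL11.glEnable(GL11.GL_DEPTH_TEST);            // depth test must also be enabled for the pass
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);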
2  Game Development / Game Mechanics / Re: reflection/refraction on: 2011-08-14 09:30:41
Thanks for the reply Roquen. I was thinking of starting with clay tiles and concrete.
3  Game Development / Game Mechanics / reflection/refraction on: 2011-08-14 05:08:24
I am playing around with GLSL and would like to be able to scale the amount of light reflected from a material by the amount of light that will be refracted into the material, given the incidence angle and the materials' refractive indices. My understanding is that the refract and reflect functions in GLSL only return the normalized direction vector for the reflected/refracted ray, not the fraction of the incident light that was reflected or refracted. Is this mathematically doable?
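It is; the fraction being asked about is what the Fresnel equations describe, and a common real-time shortcut is Schlick's approximation. A sketch in Java (the parameter names are mine; the same expression works in GLSL):

// Schlick's approximation to the Fresnel reflectance.
// n1, n2: refractive indices either side of the surface (e.g. 1.0 for air).
// cosTheta: cosine of the incidence angle, dot(-incident, normal) for unit vectors.
static float fresnelSchlick(float cosTheta, float n1, float n2) {
    float r0 = (n1 - n2) / (n1 + n2);
    r0 = r0 * r0;                                   // reflectance at normal incidence
    float x = 1.0f - cosTheta;
    return r0 + (1.0f - r0) * x * x * x * x * x;    // fraction of light reflected
}

The refracted fraction is then 1 - fresnelSchlick(...), ignoring absorption, so the reflected and refracted contributions can be scaled against each other as described.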
4  Game Development / Game Mechanics / Re: unit normal math problem on: 2011-04-27 20:36:05
Moving lightDir into object space means multiplying the vector by the same transformation matrix that is applied to the model: object space is world space with the transformation matrices applied. That you wish to consider the space as a logical entity in its own right after that fact is fine, but irrelevant to the discussion, except insofar as both approaches use orthogonal base frames (rotation matrices) to switch between spaces.

The reason I belaboured the relationship between a vector and the dot products against its base frame is that this is the fundamental working aspect of tangent space. Tangent space, be it the base frames I have talked about or texture space, is a matter of moving vectors between orthogonal base frames so that they come to share the same base frame; only then can the lighting calculations work.

Here's a ponderer: given that Photoshop does not read model data, it can have no idea of the tangent space attributed to each vertex, therefore the normal map filter cannot be encoding normals into the model's tangent space... it must be encoding the normals into world space based upon the assumption that all of the model's smoothed vertex normals point out along the z axis. Tangent space is used to bend lightDir, etc., from how the light direction actually falls upon the transformed model into the same space where all normals point out along the z axis.

 
5  Game Development / Game Mechanics / Re: unit normal math problem on: 2011-04-27 10:05:38
Definitely, if we are talking about a complex model. But if we are talking of a flat floor where all the model's vertex normals point up the y axis (not the mapped, perturbed normals), then my system only needs one tangent space (base frame) for all vertices that make up the floor. A texture-based system still requires a tangent/bitangent per vertex.
6  Game Development / Game Mechanics / Re: unit normal math problem on: 2011-04-27 04:53:03
Just thought I'd drop this in here for comment, and because it helps me think it through. Basically it is a cheaper version of tangent space useful for floors, walls, and ceilings, none of which need a tangent matrix per vertex if they are flat. Unfortunately, I cannot find an image program that creates world space normal maps; normals are defined in tangent space by the Gimp and Photoshop normal plugins etc., so a tangent matrix of some kind is still needed.

Let's say we have a normalized normal N=(0.2, 0.3, 0.87), generated in Gimp from a height map and stored in a normal map. As you can see, the greater part of the normal (0.87) points out of the screen toward the viewer, who is sitting up the positive Z axis. Were this normal map to be applied to a south-facing wall (one that faces the viewer), we could read the normal from the map and use it directly in lighting calculations because, for all intents and purposes, it would be the same as a world/object space normal map.
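One practical detail when reading such a map (just the usual convention, not anything specific to Gimp): the texel components are stored in the [0,1] colour range and need remapping to [-1,1] before being used as a vector, roughly:

// Remap a normal-map texel from the [0,1] colour range back to a [-1,1] vector.
static float[] unpackNormal(float r, float g, float b) {
    return new float[] { r * 2f - 1f, g * 2f - 1f, b * 2f - 1f };
}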

As a quick aside, it is important to understand the difference between object and world space. While it is said that models are defined in object space, they are really defined in world space, just not yet moved: object space is world space before any movement has been applied. As soon as the object is moved, a transformation matrix is applied to it (glTranslate/glRotate etc. do this); the object now consists of its object space coordinates plus a transformation of those coordinates that changes the location/orientation of the model in world space. If you reversed the transformation, the model would return to its object space coordinates.

The normal map situation becomes problematic if we want to use this normal map on the floor, for there we want the greatest extent of the normals to point up the y axis. Intuitively, however, it seems that all we need do is bend the coordinate frame in which the original vertex normal and the perturbed normal (N`) sit through 90 degrees about the X axis... that is, we want to glue the perturbed normal to the Z axis and rotate the Z axis till it points up; this will drag the perturbed normal to where we want it, pointing up the y axis but correctly offset from it.

We can do that by providing a base frame and multiplying the perturbed normal by the inverse of that frame. What is a base frame? It is a set of vectors that define the x, y, and z axes of a 3D coordinate frame. World space defines the standard base frame:
x=(1.0, 0.0, 0.0)
y=(0.0, 1.0, 0.0)
z=(0.0, 0.0, 1.0)


Any vector is only meaningful if it is specified in a coordinate frame. The vector vec=(0.2, 0.3, 0.87) is actually:
vec dot the standard base frame, or:
vec.x=(1.0, 0.0, 0.0).(0.2, 0.3, 0.87)=0.2
vec.y=(0.0, 1.0, 0.0).(0.2, 0.3, 0.87)=0.3
vec.z=(0.0, 0.0, 1.0).(0.2, 0.3, 0.87)=0.87

which is to say, 0.2 units along the x axis, 0.3 units up the y axis, and 0.87 units out along the z axis of the base frame.

If we create a new base frame and do the dot products using the same vector but with the inverse of the new base frame, that vector will be rotated as if it were glued to the standard base frame and the standard base frame were rotated to align with the new one.

The base frame we need takes the world space y axis and uses it as the zbase of the new frame; the x axis remains unchanged, and the ybase now points down the negative Z axis. The new base frame is:
xbase=(1.0, 0.0, 0.0)
ybase=(0.0, 0.0, -1.0)
zbase=(0.0, 1.0, 0.0)


We need the inverse of this base frame, which, because the frame is orthogonal (the axes are perpendicular to each other), we can get by transposition (basically swapping the rows and columns):
xbase`=(1.0, 0.0, 0.0)
ybase`=(0.0, 0.0, 1.0)
zbase`=(0.0, -1.0, 0.0)


Using dot products (rather than a matrix multiplication, though with identical effect), we can convert the perturbed normal to point up the y axis:
x`=xbase`.N`=(1.0, 0.0, 0.0).(0.2, 0.3, 0.87)=(1.0*0.2+0.0+0.0)=0.2
y`=ybase`.N`=(0.0, 0.0, 1.0).(0.2, 0.3, 0.87)=(0.0+0.0+1.0*0.87)=0.87
z`=zbase`.N`=(0.0, -1.0, 0.0).(0.2, 0.3, 0.87)=(0.0+(-1.0*0.3)+0.0)=-0.3


Which gives us the normal we are looking for:
N``=(0.2, 0.87, -0.3)


For the hell of it, and because it is extremely important for what follows, let's dot the new vector with the non-inverted new base frame:
x=xbase.N``=(1.0, 0.0, 0.0).(0.2, 0.87, -0.3)=(1.0*0.2+0.0+0.0)=0.2
y=ybase.N``=(0.0, 0.0, -1.0).(0.2, 0.87, -0.3)=(0.0+0.0+(-1.0*-0.3))=0.3
z=zbase.N``=(0.0, 1.0, 0.0).(0.2, 0.87, -0.3)=(0.0+1.0*0.87+0.0)=0.87

leaving us with the original perturbed normal (0.2, 0.3, 0.87). The inverse of the matrix moves a vector into the new base; the non-inverse moves a vector out of the new base and into world space or the standard base frame.
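To make the two directions concrete, here is a small Java sketch of the dot-product frame change described above; the helper names are mine and nothing here is library API. The frame's three base vectors are stored as rows.

// Dot product of two 3-vectors.
static float dot(float[] a, float[] b) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// Dot v against each row of the frame: the "non-inverted" direction,
// moving a vector out of the new base and back to the standard frame.
static float[] applyFrame(float[][] frame, float[] v) {
    return new float[] { dot(frame[0], v), dot(frame[1], v), dot(frame[2], v) };
}

// Dot v against each row of the transposed frame: the inverse direction,
// moving a vector into the new base (valid because the frame is orthonormal).
static float[] applyInverse(float[][] frame, float[] v) {
    return new float[] {
        frame[0][0]*v[0] + frame[1][0]*v[1] + frame[2][0]*v[2],
        frame[0][1]*v[0] + frame[1][1]*v[1] + frame[2][1]*v[2],
        frame[0][2]*v[0] + frame[1][2]*v[1] + frame[2][2]*v[2] };
}

With frame rows xbase=(1,0,0), ybase=(0,0,-1), zbase=(0,1,0), applyInverse reproduces the (0.2, 0.87, -0.3) worked above, and applyFrame on that result gives back (0.2, 0.3, 0.87).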

Now let's consider the light vector. The light has a position in world space, e.g. lightPos=(10.0, 10.0, 5.0). Let's say the pixel the fragment shader is currently working on is at pos=(2.0, 0.0, 1.5), i.e. it is part of the floor we have been considering... we assume pos is the vertex position multiplied by the model matrix (the world space position), as interpolated into the fragment shader. lightDir would be:
lightDir=normalize(pos-lightPos)=normalize(2.0-10.0, 0.0-10.0, 1.5-5.0)=normalize(-8.0, -10.0, -3.5)=(-0.6, -0.75, -0.26)


Note that lightDir points from the light toward the pixel. Note also that the perturbed normal we calculated above would work correctly with lightDir; that is, the normal points in the general direction of the light. The lightDir vector can be envisaged as a vector in the same new base frame: its tail attaches to the point defined by pos (the pixel being worked on), and it heads in a direction somewhat opposite the normal vector.

But if we could return N`` to N` using the non-inverted new base matrix, and if the light vector can be said to be defined within the same new base frame as N``, then we can also turn lightDir:
l.x=xbase.lightDir=(1.0, 0.0, 0.0).(-0.6, -0.75, -0.26)=-0.6
l.y=ybase.lightDir=(0.0, 0.0, -1.0).(-0.6, -0.75, -0.26)=0.26
l.z=zbase.lightDir=(0.0, 1.0, 0.0).(-0.6, -0.75, -0.26)=-0.75

Giving us lightDir`=(-0.6, 0.26, -0.75)


This is kind of difficult to envisage, so grab a pen and hold it up representing the y axis (which is the z axis of the new base frame). Now grab another pen and stick its tail at the bottom of the y pen representing lightDir; it will point left, down, and away from you. Now rotate both pens as if welded together so that the y-axis pen points directly at you: the lightDir pen will now point left, up, and away. Especially note that the y component moves from negative to positive.

OK, so what does all this mean? It means that the base frame can be used to convert lightDir, and by extension eyeDir, into the correct position relative to the perturbed normal read from the normal map, ready to perform the required lighting calculations. Essentially, it acts like a texture space matrix, but we have had no need of uv coords to construct the tangent space, nor any need to attach a T vector (and possibly a B vector) as attributes to each vertex; we need only pass one T vector as a uniform variable for the entire floor, wall, ceiling, etc. Furthermore, because the z (b) values in normal maps are mapped differently to the x (r) and y (g) values, it might be better to recalculate z from x and y and use the z channel to store some other goodie.
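As an illustration of that last conversion step (a sketch only, with made-up variable names, reusing the dot helper from the earlier sketch): turn lightDir with the non-inverted frame and feed it straight into a Lambert term against the perturbed normal read from the map.

float[] xbase = {1f, 0f, 0f};
float[] ybase = {0f, 0f, -1f};
float[] zbase = {0f, 1f, 0f};
float[] lightDir = {-0.6f, -0.75f, -0.26f};
// turn lightDir with the non-inverted frame, as in the worked numbers above
float[] lightDirTurned = {
    dot(xbase, lightDir),    // -0.6
    dot(ybase, lightDir),    //  0.26
    dot(zbase, lightDir) };  // -0.75
float[] nPerturbed = {0.2f, 0.3f, 0.87f};    // perturbed normal read from the map
// lightDir points toward the pixel, so negate the dot product for the diffuse term
float diffuse = Math.max(0f, -dot(nPerturbed, lightDirTurned));   // roughly 0.69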

As far as I can see, what I have said is right, but before I continue I just want to open what has been said to any mathemagicians who might be lurking, ready with mathemagical spells that would undo all my intuitions on this matter. I have not extensively tested this hypothesis, and quite frankly getting this far finds me several fishes short of a bicycle, so criticism about the truth of these intuitions is appreciated.

If the mathemagicians have all been purged (“Huzzah” to quote Hiro), I would just like to end by considering some of the shortfalls that strike me about texture space and expand on how base frames as I have suggested above might be implemented over complex models.

Texture space tangent mapping seems to face several problems:

1) one often-submitted method for generating the texture space matrix is to calculate T, use N from the model, and cross(T, N) to get B. The problem here is that N is most often exported from the modelling program as a smoothed vertex normal, not an unsmoothed face normal, so it mostly will not be perpendicular to T, the matrix will not be orthonormal, and without a pet mathemagician I do not understand how far out a non-orthonormal frame would throw a vector during rotation

2) producing an orthonormal frame using the face normal and calculating both T and B is also mostly problematic, because any stretch introduced to the texture during the unwrapping of the model equates to non-perpendicular T and B vectors, and many parts of complex models suffer thus

3) another problem, which I suspect explains the difficulty of getting texture-spaced models to behave coherently around seams, is that different models will have different texture spaces. Thus if you have a model of a head that you want to place on the model of a shoulder, then even if the normals in the maps cohere at the pixels, I do not know whether they should be expected to return the same lightDir values if the texture spaces are different.

4) maybe the seam problem is a mixture of the above and this: another problem can be seen by exposing the misnomer of calling texture space tangent space. A tangent to a 2D circle is a line perpendicular to a normal on the circumference of the circle. A tangent to a 3D sphere is a plane perpendicular to a normal on the surface of the sphere. The normals being talked about here are smoothed vertex normals. Textures mostly do not lie on tangents to models for exactly the same reason as there is a difference between smoothed and unsmoothed normals. Now, if lightDir is considered a vector in texture space, and if texture space differs from one model to the next, including what is taken to be the tangent and therefore the light vector relative to the pixel, then I'll leave the rest for Socrates.

This brings me to the final section: how could the above be exploited on complex models? I haven't tried it, but intuitively I should think every vertex could have an orthonormal base frame constructed about it. Take the smoothed normal as the zbase. Cross the zbase with world y to get the xbase. Cross the zbase with the xbase to get the ybase. If zbase == world y, then create the ybase first by crossing zbase and world x. This would give a true orthonormal frame universal to all models (a sketch of the construction follows below).
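A Java sketch of that per-vertex construction, following the recipe just given (the helper names are mine, the near-parallel threshold is arbitrary, and the resulting handedness is only one of the possible choices):

// Cross product of two 3-vectors.
static float[] cross(float[] a, float[] b) {
    return new float[] {
        a[1]*b[2] - a[2]*b[1],
        a[2]*b[0] - a[0]*b[2],
        a[0]*b[1] - a[1]*b[0] };
}

// Scale a 3-vector to unit length.
static float[] normalize(float[] v) {
    float len = (float) Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
    return new float[] { v[0]/len, v[1]/len, v[2]/len };
}

// Build an orthonormal base frame about a vertex from its smoothed normal:
// zbase = normal, xbase = cross(zbase, world y), ybase = cross(zbase, xbase),
// falling back to world x when the normal is (anti)parallel to world y.
static float[][] buildFrame(float[] smoothedNormal) {
    float[] worldX = {1f, 0f, 0f};
    float[] worldY = {0f, 1f, 0f};
    float[] zbase = normalize(smoothedNormal);
    float[] xbase, ybase;
    if (Math.abs(zbase[1]) > 0.999f) {
        // normal is (anti)parallel to world y: build ybase from world x first
        ybase = normalize(cross(zbase, worldX));
        xbase = normalize(cross(ybase, zbase));
    } else {
        xbase = normalize(cross(zbase, worldY));
        ybase = normalize(cross(zbase, xbase));
    }
    return new float[][] { xbase, ybase, zbase };
}

With a floor normal of (0, 1, 0) this reproduces the xbase=(1,0,0), ybase=(0,0,-1), zbase=(0,1,0) frame used throughout the worked example.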

7  Game Development / Game Mechanics / Re: unit normal math problem on: 2011-04-19 21:01:36
I am playing around with storing normals in maps. As a trade-off, it might be worth encoding object space normals as the x, y components of the unit normal vector and reconstructing the z component in the shader.
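A sketch of that reconstruction (the method name is mine). One caveat: the rebuilt z is always non-negative, which is fine for tangent-space maps but means object space normals would need the sign of z stored or implied somewhere.

// Rebuild the z component of a unit normal from its x and y components.
// The max() guards against float error making x*x + y*y slightly exceed 1.
static float reconstructZ(float x, float y) {
    return (float) Math.sqrt(Math.max(0f, 1f - x*x - y*y));
}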
8  Game Development / Game Mechanics / unit normal math problem on: 2011-04-12 03:05:14
Probably a stupidly simple math problem, but could anyone tell me how to calculate the z component of a unit vector if I already have the x and y components? The length would normally be calculated by:
unitNorm = sqrt(xSquared + ySquared + zSquared)

Since I know unitNorm is 1, and I know x and y, z should be discoverable, but I do not know how to handle the math. Thanks for any help.
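(For the record, the rearrangement is: squaring both sides of 1 = sqrt(x*x + y*y + z*z) gives z*z = 1 - x*x - y*y, so z = sqrt(1 - x*x - y*y), with the sign of z chosen from context.)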
9  Game Development / Newbie & Debugging Questions / SOLVED normals refuse to go outside on: 2011-03-20 07:31:14
I have a textured box. When I (the player) am outside the box, the texture flickers in and out of existence whenever the player is moved or rotated. When I am inside the box everything is stable. I have tried gl.glFrontFace with both options to no avail. The problem could lie in the exporter, except that the box appears as it ought in LWJGL with comparable code. Also, for some reason models imported into JOGL seem to be scaled down even though no glScale has been applied... they are definitely of a smaller size than the same models brought into LWJGL.


I had glu.gluPerspective(60.0, ratio, 0, 20); the 0 defining the near plane caused the textured model to flicker. By setting the near plane to 0.1 the issue was fixed. I do not know why; perhaps somebody could tell me.
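One explanation consistent with the math: the projection built by gluPerspective contains the depth terms C = -(far+near)/(far-near) and D = -2*far*near/(far-near). With near = 0 these become C = -1 and D = 0, so every fragment's normalized depth works out to (C*z_eye + D)/(-z_eye) = 1 regardless of distance; the depth buffer can then no longer order the faces and they z-fight. Any small positive near value restores a usable depth range.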
10  Discussions / General Discussions / The point of tangent space on: 2011-03-01 00:50:00
Can anyone explain why lighting calculations must occur in tangent space rather than model space? I keep hearing that object (model) space normal maps will not remain accurate if the model is rotated, but if lightPos and eyePos are multiplied by the inverseModelViewMatrix then they are brought into the same coordinate frame as the vertex and, by interpolation, the fragment. The other thing I hear is that animation that distorts a mesh's triangles throws object space lighting calculations out, but every fragment normal is tightly defined within the triangle's vertices because these pin the normal map at the uv coords.
11  Discussions / General Discussions / Re: first post on: 2011-02-20 22:06:00
The link is http://www.sjonesart.com/gl.php. I have also posted in the Shared Code section.

The tutorials are already on the JogAmp wiki.
12  Game Development / Shared Code / JOGL Tutorials on: 2011-02-20 22:03:45
Hi community. Here is a series of tutorials I created while figuring stuff out. They include:
keyboard/mouse polling
an FPS-style camera implemented using a matrix
VBOs, including a Blender 2.5 exporter and a parser to read the .txt data into VBOs
GLSL shader setup
texturing via the shader
multi-texturing and vertexAttribs to hand tangents and bitangents to the shader.

The Blender script also exports tangents and bitangents.

The link is http://www.sjonesart.com/gl.php
13  Discussions / General Discussions / first post on: 2011-02-20 09:42:53
According to the setup blurb I have to have a few posts under my belt before I can post links. I have a series of JOGL tutorials I would like to share:
keyboard and mouse polling
first person shooter by constructing a matrix
VBO including a Blender 2.5 exporter and code to parse the data straight into VBOs
shader setup
texturing using shaders
multi-texturing and setting up vertexAttributes to hand tangents and bitangents to the vertex shader.

But I guess it will have to wait.

