SSAO in LibGDX sans Deferred Rendering?
Offline Ecumene



« Posted 2017-09-23 04:46:40 »

Hi All!

I'm working on a game that requires SSAO in LibGDX. Is there any way to implement SSAO with just the depth buffer and still get favourable results? I understand that straight edge detection produces artifacts, but how bad are they really? Can a blur filter or other tricks help with this?

I could go the route of developing a deferred rendering pipeline in LibGDX using OpenGL 3, but I'd much prefer to stick with the forward rendering + post-processing I have now for the library's sake. I did look into how to go about deferred rendering in LibGDX, and I believe it would take some time to get everything going, since parts of the pipeline would be using GLSL for 2.0 and other parts 3.0.

Thanks!

Offline theagentd
« Reply #1 - Posted 2017-09-23 07:33:36 »

Traditional SSAO doesn't require anything but a depth buffer. However, normals help quite a bit in improving quality/performance. You should be able to output normals from your forward pass into a second render target. It is also possible to reconstruct normals by analyzing the depth buffer, but this can be inaccurate if you have lots of depth discontinuities (like foliage).

EDIT: Technically, SSAO is <ambient> occlusion, meaning it should only be applied to the ambient term of the lighting equation. The only way to get "correct" SSAO is therefore to do a depth prepass (preferably outputting normals too), compute SSAO, then render the scene again with GL_EQUAL depth testing while reading the SSAO of the current pixel. If you already do a depth prepass, this is essentially free. If not, maybe you should! It could improve your performance.
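For illustration, a minimal sketch of that two-pass setup with LibGDX's raw GL bindings; renderScene() is a hypothetical method standing in for your scene's draw calls, not anything from this thread:

Gdx.gl.glEnable(GL20.GL_DEPTH_TEST);

// Pass 1: depth only. Color writes off, depth writes on.
Gdx.gl.glColorMask(false, false, false, false);
Gdx.gl.glDepthMask(true);
Gdx.gl.glDepthFunc(GL20.GL_LESS);
renderScene();

// (Compute SSAO from the depth buffer here.)

// Pass 2: full shading. Only fragments whose depth exactly matches the
// pre-pass survive, so each pixel is shaded at most once.
Gdx.gl.glColorMask(true, true, true, true);
Gdx.gl.glDepthMask(false);
Gdx.gl.glDepthFunc(GL20.GL_EQUAL);
renderScene();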

Offline basil_




« Reply #2 - Posted 2017-09-23 09:40:04 »

You can approach SSAO with depth only in a simple way, where you apply simple unsharp masking.

https://en.wikipedia.org/wiki/Unsharp_masking

You can then cut off values <50% or >50% to achieve shadows/darkening or glowing/halos.
Adding a depth-range check to fall off the effect can deal with high discontinuities in the depth buffer, to avoid "leaking" or false shadows.

This can look very good on static images, but as soon as the camera turns you get perspective-incorrect darkening. It depends on the scene; this is the point where adding a normal buffer helps a lot.

Also, with bilateral upsampling (say your SSAO pass runs at 50% resolution), image quality will profit a lot from normal tests, though testing depth only works OK too. Again, it depends on the scene you draw.
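For reference, a minimal sketch of that depth-only unsharp-mask AO in GLSL; the uniform names and falloff constants are illustrative (not basil_'s actual code), and u_blurredDepth is assumed to hold a blurred copy of the depth buffer:

uniform sampler2D u_depth;         // scene depth
uniform sampler2D u_blurredDepth;  // blurred copy of the same depth buffer
varying vec2 v_texCoords;

void main() {
   float d    = texture2D(u_depth, v_texCoords).r;
   float blur = texture2D(u_blurredDepth, v_texCoords).r;
   // unsharp mask: pixels deeper than their blurred neighbourhood sit in creases
   float diff = d - blur;
   // keep only the darkening half (diff > 0), and fade the effect out across
   // large discontinuities so distant geometry doesn't "leak" false shadows
   float ao = clamp(diff * 50.0, 0.0, 1.0) * (1.0 - smoothstep(0.01, 0.05, diff));
   gl_FragColor = vec4(vec3(1.0 - ao), 1.0);
}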
Offline Ecumene



« Reply #3 - Posted 2017-09-23 11:59:05 »

Quote from: theagentd
EDIT: Technically, SSAO is <ambient> occlusion, meaning it should only be applied to the ambient term of the lighting equation. The only way to get "correct" SSAO is therefore to do a depth prepass (preferably outputting normals too), compute SSAO, then render the scene again with GL_EQUAL depth testing while reading the SSAO of the current pixel. If you already do a depth prepass, this is essentially free. If not, maybe you should! It could improve your performance.

This is interesting! As I understand it, a depth prepass means rendering the scene to a depth buffer, similar to a shadow map, before rendering the scene for real, yes? I could put the normal values in the color pixels and the depth in the z coordinate.

Is there any benefit to the z-prepass other than performance and 'correct' SSAO? There must be some cool things I can do with it too :)

Quote from: basil_
This can look very good on static images, but as soon as the camera turns you get perspective-incorrect darkening. It depends on the scene; this is the point where adding a normal buffer helps a lot.
Also, with bilateral upsampling (say your SSAO pass runs at 50% resolution), image quality will profit a lot from normal tests, though testing depth only works OK too. Again, it depends on the scene you draw.

Okay, I'll see what I can do about generating the scene normals + depth in a prepass. Thanks for the info, guys! I'll post back later.

Offline theagentd
« Reply #4 - Posted 2017-09-23 12:59:11 »

The traditional purpose of doing a depth pre-pass is to avoid shading pixels twice. By rendering the depth first, the actual shading can be done with GL_EQUAL depth testing, meaning each pixel is only shaded once. The depth pre-pass also rasterizes at twice the speed, as GPUs have optimized depth-only rendering for shadow maps, so by adding a cheap pre-pass you can eliminate overdraw in the shading pass.

To also output normals, you need to have a color buffer during the depth pre-pass, meaning you'll lose the double-speed rasterization, but that shouldn't be a huge deal. You can store the normal XYZ in the color target, while the depth can be read from the depth buffer itself and doesn't need to be stored explicitly.

If you have a lot of vertices, rendering the scene twice can be very expensive. In that case, it's possible to do semi-deferred rendering, where you do lighting as you currently do but also output the data you need for SSAO afterwards. This requires using an FBO with multiple render targets, but it's not that complicated. The optimal strategy depends on the scene you're trying to render.
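The fragment-shader side of that semi-deferred output could look roughly like this (gl_FragData-style MRT; an FBO with two color attachments and a matching glDrawBuffers call is assumed, and litColor/normalView stand in for values computed earlier in the shader):

gl_FragData[0] = vec4(litColor, 1.0);                // attachment 0: normal forward shading
gl_FragData[1] = vec4(normalView * 0.5 + 0.5, 1.0);  // attachment 1: view-space normal for SSAO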

Offline Ecumene



« Reply #5 - Posted 2017-09-23 13:35:37 »

Quote from: theagentd
To also output normals, you need to have a color buffer during the depth pre-pass, meaning you'll lose the double-speed rasterization, but that shouldn't be a huge deal. You can store the normal XYZ in the color target, while the depth can be read from the depth buffer itself and doesn't need to be stored explicitly.

A few questions here: in the final shader, how do you read depth from a uniform texture? Currently I have two textures going into the last pass: one with scene normals, the other with the original scene colors (textures and all). Here's how the scene looks.



uniform PRECISION sampler2D u_texture0;   // scene
uniform PRECISION sampler2D u_texture1;   // normalmap
varying vec2 v_texCoords;
void main() {
   gl_FragColor = mix(texture2D(u_texture0, v_texCoords), texture2D(u_texture1, v_texCoords), 0.5);
}


The code for rendering objects:
gl_FragColor.xyz = ((normal + 1.0) / 2.0).xyz;
gl_FragColor.w = gl_FragCoord.z;


Should I encode the fragment z value in the alpha channel? How do I read depth pixels from a color-buffer texture?

As you can see, a lot of the geometry is very simple; most scenes should stay below 2,000 or so polygons, if that helps!

Offline theagentd
« Reply #6 - Posted 2017-09-23 14:07:54 »

You do not need to store depth in a color texture. You can simply take the depth texture you use as the depth buffer and bind it like any other texture. The depth value, between 0.0 and 1.0, is returned in the first color channel (the red channel) when you sample the texture with texture() or texelFetch().
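In shader terms, something like this (u_depth is an illustrative name for the bound depth texture):

uniform sampler2D u_depth;   // the depth attachment, bound like any color texture
varying vec2 v_texCoords;

void main() {
   float depth = texture2D(u_depth, v_texCoords).r; // 0.0 (near) .. 1.0 (far)
   gl_FragColor = vec4(vec3(depth), 1.0);           // e.g. visualize it
}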

Offline Ecumene



« Reply #7 - Posted 2017-09-24 02:28:40 »

Quote from: theagentd
You do not need to store depth in a color texture. You can simply take the depth texture you use as the depth buffer and bind it like any other texture. The depth value, between 0.0 and 1.0, is returned in the first color channel (the red channel) when you sample the texture with texture() or texelFetch().

It looks like LibGDX uses depth renderbuffer objects. How does this change how I bind the texture the way you describe?
Gdx.gl.glActiveTexture(GL20.GL_TEXTURE2);
Gdx.gl.glBindTexture(GL20.GL_TEXTURE_2D, prepass.getDepthBufferHandle());


Prepass is the framebuffer in question. Binding getDepthBufferHandle() just binds a random texture loaded in the game.

Offline theagentd
« Reply #8 - Posted 2017-09-24 13:14:04 »

Is LibGDX using a renderbuffer?

Offline Ecumene



« Reply #9 - Posted 2017-09-24 14:21:05 »

Take a look

https://github.com/libgdx/libgdx/blob/master/gdx/src/com/badlogic/gdx/graphics/glutils/FrameBuffer.java
https://github.com/libgdx/libgdx/blob/master/gdx/src/com/badlogic/gdx/graphics/glutils/GLFrameBuffer.java

I've had trouble in other games with reading depth in GLSL, and I've always gotten away with using the alpha channel. Do you have any info on how to read depth from this?

Here's the relevant code for binding the depth buffer:
if (hasDepth) {
   gl.glFramebufferRenderbuffer(GL20.GL_FRAMEBUFFER, GL20.GL_DEPTH_ATTACHMENT, GL20.GL_RENDERBUFFER, depthbufferHandle);
}


Here's how FrameBuffer.java initialises the color texture per framebuffer:
@Override
protected Texture createColorTexture () {
   int glFormat = Pixmap.Format.toGlFormat(format);
   int glType = Pixmap.Format.toGlType(format);
   GLOnlyTextureData data = new GLOnlyTextureData(width, height, 0, glFormat, glFormat, glType);
   Texture result = new Texture(data);
   result.setFilter(TextureFilter.Linear, TextureFilter.Linear);
   result.setWrap(TextureWrap.ClampToEdge, TextureWrap.ClampToEdge);
   return result;
}


Here's how I initialise a framebuffer and use it, if that helps with reading the depth:
// Arguments are: Format, Width, Height, useDepth (If it will attach a depth component or not, this is used in GLFramebuffer.java)
prepass = new FrameBuffer(Pixmap.Format.RGB888, width, height, false);


// This binds and sets the framebuffer's viewport 
prepass.begin();
// This resets the viewport and sets everything back to "defaultFramebufferHandle" (GLFramebuffer.java)
prepass.end();


EDIT: According to this article, you can't read from renderbuffers... Looks like I'm going to have to make my own framebuffer that skips the depth renderbuffer and attaches a texture for the depth component instead... Oh boy.
https://stackoverflow.com/questions/9850803/glsl-renderbuffer-really-required

Offline theagentd
« Reply #10 - Posted 2017-09-24 18:18:23 »

Renderbuffers are a bit of a legacy feature. They were meant for exposing formats the GPU can render to but that can't be read in a shader (read: multisampled stuff). The thing is that multisampled textures are supported by all OGL3 GPUs, so renderbuffers no longer fill any real purpose. If you do the FBO setup yourself, you can attach a GL_DEPTH_COMPONENT24 texture as the depth attachment and read it in a shader.
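A minimal sketch of that manual setup with LibGDX's raw GL bindings (desktop GL3 path; width/height are assumed to be defined, and error checking plus the color attachment are omitted):

int fbo      = Gdx.gl.glGenFramebuffer();
int depthTex = Gdx.gl.glGenTexture();

// A depth texture the SSAO shader can read directly.
Gdx.gl.glBindTexture(GL20.GL_TEXTURE_2D, depthTex);
Gdx.gl.glTexParameteri(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MIN_FILTER, GL20.GL_NEAREST);
Gdx.gl.glTexParameteri(GL20.GL_TEXTURE_2D, GL20.GL_TEXTURE_MAG_FILTER, GL20.GL_NEAREST);
Gdx.gl.glTexImage2D(GL20.GL_TEXTURE_2D, 0, GL30.GL_DEPTH_COMPONENT24, width, height, 0,
      GL20.GL_DEPTH_COMPONENT, GL20.GL_UNSIGNED_INT, null);

// Attach it as the FBO's depth attachment instead of a renderbuffer.
Gdx.gl.glBindFramebuffer(GL20.GL_FRAMEBUFFER, fbo);
Gdx.gl.glFramebufferTexture2D(GL20.GL_FRAMEBUFFER, GL20.GL_DEPTH_ATTACHMENT,
      GL20.GL_TEXTURE_2D, depthTex, 0);
// ... attach color texture(s), check glCheckFramebufferStatus(), render the
// pre-pass, then bind depthTex as a plain sampler2D in the SSAO pass.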

Offline Ecumene



« Reply #11 - Posted 2017-09-25 01:45:47 »

f**k yeah! All done!



One issue:


Any idea what this is and how to get rid of it? Thanks!

Offline theagentd
« Reply #12 - Posted 2017-09-25 14:11:52 »

To fix the SSAO going too far up along the cube's edges, you need to reduce the depth threshold.

I can also see some banding in your SSAO. If you randomly rotate the sample locations per pixel, you can trade that banding for noise instead, which is much less jarring to the human eye.

Offline Ecumene



« Reply #13 - Posted 2017-09-25 23:29:30 »

I reduced the threshold, and I tried adding the random texture, but it looks like I'm still getting some banding issues. Here's my code and some screenshots; take a look.

Here's the random texture:
Pixmap pixmap = new Pixmap(4, 4, Pixmap.Format.RGB888);
for(int x = 0; x < 4; x++)
    for(int y = 0; y < 4; y++)
        pixmap.drawPixel(x, y, Color.rgb888(random.nextFloat(), random.nextFloat(), random.nextFloat()));
noiseTexture = new Texture(pixmap);
noiseTexture.setFilter(Texture.TextureFilter.Nearest, Texture.TextureFilter.Nearest);
noiseTexture.setWrap(Texture.TextureWrap.Repeat, Texture.TextureWrap.Repeat);


Kernel calculation (the problem is probably here):
for(int i = 0; i < kernels32.length/3; i++){
    temp.set(random.nextFloat(), random.nextFloat(), (random.nextFloat() + 1.0f) / 2.0f);
    temp.nor();
    float scale = (float)i/32f;
    temp.scl(Math.max(0f, Math.min(1, scale*scale)));
    kernels32[i * 3 + 0] = temp.x;
    kernels32[i * 3 + 1] = temp.y;
    kernels32[i * 3 + 2] = temp.z;
}




Offline theagentd
« Reply #14 - Posted 2017-09-26 04:05:01 »

I'm not sure what your "kernel" is. Are those the sample locations for your SSAO? I'd recommend precomputing some good sample positions instead of randomly generating them, as you're going to get clusters and inefficiencies from a purely random distribution. JOML has some sample-generation classes in the org.joml.sampling package that may or may not be of use to you.

It doesn't look like you're using your noise texture correctly. A simple way of randomly rotating the samples is to place random normalized 3D vectors in your noise texture, then reflect() each sample against that vector. I'm not sure how you're using your random texture right now, but it doesn't look right at all. If you let me take a look at your GLSL code for that, I can help you fix it.
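As a sketch, the reflect() variant could slot into the existing sample loop like this (names follow the shader Ecumene posted; the noise texture is assumed to hold random unit vectors remapped to 0..1):

vec3 rnd = normalize(texture(u_texture3, v_texCoords * u_rotationNoiseScale).xyz * 2.0 - 1.0);
// inside the loop: reflect each precomputed sample against the random vector
vec3 sampleVectorView = kernelMatrix * reflect(u_kernel[i], rnd);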

Offline Ecumene



« Reply #15 - Posted 2017-09-26 10:01:34 »

I've got to go to school, but take a look!

#define PRECISION
precision mediump float;

//uniform PRECISION sampler2D u_texture0;// scene
uniform PRECISION sampler2D u_texture1;   //  normalmap
uniform PRECISION sampler2D u_texture2;   //  depthmap
uniform PRECISION sampler2D u_texture3;   //  randommap

#define KERNEL_SIZE 32
#define CAP_MIN_DISTANCE 0.0001
#define CAP_MAX_DISTANCE 0.0005

uniform float u_radius;
uniform vec2 u_rotationNoiseScale;
uniform vec3 u_kernel[KERNEL_SIZE];
uniform mat4 u_inverseProjectionMatrix;
uniform mat4 u_projectionMatrix;

varying vec2 v_texCoords;

vec4 getViewPos(vec2 texCoord)
{
   float x = texCoord.s * 2.0 - 1.0;
   float y = texCoord.t * 2.0 - 1.0;
   float z = texture(u_texture2, texCoord).r * 2.0 - 1.0;
   vec4 posProj = vec4(x, y, z, 1.0);
   vec4 posView = u_inverseProjectionMatrix * posProj;
   posView /= posView.w;
   return posView;
}

void main()
{
    float occlusion = 0.0;
    if(texture(u_texture2, v_texCoords).r != 1.0){
        vec4 posView = getViewPos(v_texCoords);
        vec3 normalView = normalize(texture(u_texture1, v_texCoords).xyz * 2.0 - 1.0);
        vec3 randomVector = normalize(texture(u_texture3, v_texCoords * u_rotationNoiseScale).xyz * 2.0 - 1.0);
        vec3 tangentView = normalize(randomVector - dot(randomVector, normalView) * normalView);
        vec3 bitangentView = cross(normalView, tangentView);
        mat3 kernelMatrix = mat3(tangentView, bitangentView, normalView);
        for (int i = 0; i < KERNEL_SIZE; i++)
        {
            vec3 sampleVectorView = kernelMatrix * u_kernel[i];
            vec4 samplePointView = posView + u_radius * vec4(sampleVectorView, 0.0);
            vec4 samplePointNDC = u_projectionMatrix * samplePointView;
            samplePointNDC /= samplePointNDC.w;
            vec2 samplePointTexCoord = samplePointNDC.xy * 0.5 + 0.5;
            float zSceneNDC = (texture(u_texture2, samplePointTexCoord).r) * 2.0 - 1.0;
            float delta = samplePointNDC.z - zSceneNDC;
            if (delta > CAP_MIN_DISTANCE && delta < CAP_MAX_DISTANCE)
            {
                occlusion += 1.0;
            }
        }
        occlusion = 1.0 - occlusion / (float(KERNEL_SIZE) - 1.0);
    } else occlusion = 1.0;

    gl_FragColor = vec4(occlusion, occlusion, occlusion, 1.0);
}

Offline KaiHH




« Reply #16 - Posted 2017-09-26 11:05:06 »

Your sample generation is indeed very odd. Why do you generate only vectors within the range ([0..1], [0..1], [0.5..1.5])?
And why do you weight/scale the vectors by (i/32)^2?

As @theagentd suggested, you could use some sample pattern generators from JOML, such as "Best-Candidate" sampling, like so:
long seed = 12345L; // <- to seed the PRNG
int numSamples = 32; // <- number of samples to generate
int numCandidates = numSamples * 4; // <- increase this number to improve sample distribution quality
FloatBuffer fb = ByteBuffer.allocateDirect(numSamples * 3 * 4).order(ByteOrder.nativeOrder()).asFloatBuffer();
new BestCandidateSampling.Sphere(seed, numSamples, numCandidates, (x, y, z) -> fb.put(x).put(y).put(z));
fb.rewind();

Here is an image of what that typically produces:
http://www.java-gaming.org/topics/joml-1-9-0-pre-release/37829/msg/361955/view.html#msg361955
Offline Ecumene



« Reply #17 - Posted 2017-09-26 12:45:29 »

Quote from: KaiHH
Your sample generation is indeed very odd. Why do you generate only vectors within the range ([0..1], [0..1], [0.5..1.5])? And why do you weight/scale the vectors by (i/32)^2?
As @theagentd suggested, you could use some sample pattern generators from JOML, such as "Best-Candidate" sampling.

https://github.com/McNopper/OpenGL/blob/master/Example28/

I followed this example to a tee. I'll be sure to fix my samples when I get home! Thank you so much for the help.

Offline theagentd
« Reply #18 - Posted 2017-09-26 18:29:47 »

A couple of tips:

 - The code you have is using samples distributed over a half sphere. Your best bet is a modified version of best candidate sampling over a half sphere, which would require some modification of the JOML code to get.

 - I'd ditch the rotation texture if I were you. Just generate a random angle using this snippet that everyone is using, then use that angle to create a rotation matrix around the normal (you can check the JOML source code for how to generate such a rotation matrix that rotates around a vector). You can then premultiply the matrix you already have with this rotation matrix, keeping the code in the sample loop exactly the same.

 - To avoid processing the background, enable the depth test, set depth func to GL_LESS and draw your fullscreen SSAO quad at depth = 1.0. It is MUCH more efficient to cull pixels with the depth test than an if-statement in the shader. With an if-statement, the fragment shader has to be run for every single pixel, and if just one pixel in a workgroup enters the if-statement the entire workgroup has to run it. By using the depth test, the GPU can avoid running the fragment shader completely for pixels that the test fails for, and patch together full workgroups from the pixels that do pass the depth test. This massively improves the culling performance.

 - You can use smoothstep() to get a smoother depth-range test of each sample at a rather small cost (see the sketch after this list).

 - It seems like you're storing your normals in a GL_RGB8 texture, which means you have to transform them from (0.0 to 1.0) to (-1.0 to +1.0). I recommend using GL_RGB8_SNORM, which stores each value as a normalized signed byte, allowing you to write out the normal in the -1.0 to +1.0 range and sample it like that too. Not a huge deal of course, but it gives you better precision and a little better performance.
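For the smoothstep() tip, a sketch of what the range test could become, using the constants from the shader posted above (the exact ramp bounds are illustrative):

float delta = samplePointNDC.z - zSceneNDC;
// ramp in over 0..CAP_MIN_DISTANCE, fade out towards CAP_MAX_DISTANCE, instead
// of the hard if (delta > CAP_MIN_DISTANCE && delta < CAP_MAX_DISTANCE) test
occlusion += smoothstep(0.0, CAP_MIN_DISTANCE, delta)
          * (1.0 - smoothstep(CAP_MIN_DISTANCE, CAP_MAX_DISTANCE, delta));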

Offline Ecumene



« Reply #19 - Posted 2017-09-27 01:33:34 »

Quote from: theagentd
- The code you have is using samples distributed over a half sphere. Your best bet is a modified version of best candidate sampling over a half sphere, which would require some modification of the JOML code to get.

How would this look? Do you have any info on producing half-sphere samples? Sorry I'm asking so many questions.

Quote from: theagentd
- I'd ditch the rotation texture if I were you. Just generate a random angle using this snippet that everyone is using, then use that angle to create a rotation matrix around the normal (you can check the JOML source code for how to generate such a rotation matrix that rotates around a vector). You can then premultiply the matrix you already have with this rotation matrix, keeping the code in the sample loop exactly the same.

Here's what I've come up with:
mat3 rotationMatrix(vec3 axis, float angle)
{
    axis = normalize(axis);
    float s = sin(angle);
    float c = cos(angle);
    float oc = 1.0 - c;

    return mat3(oc * axis.x * axis.x + c,           oc * axis.x * axis.y - axis.z * s,  oc * axis.z * axis.x + axis.y * s,
                oc * axis.x * axis.y + axis.z * s,  oc * axis.y * axis.y + c,           oc * axis.y * axis.z - axis.x * s,
                oc * axis.z * axis.x - axis.y * s,  oc * axis.y * axis.z + axis.x * s,  oc * axis.z * axis.z + c);
}


float randomAngle = rand(v_texCoords);
mat3 rotationMat3 = rotationMatrix(normalView, randomAngle);

vec3 randomVector = vec3(0, 1, 0);
vec3 tangentView = normalize(randomVector - dot(randomVector, normalView) * normalView);
vec3 bitangentView = cross(normalView, tangentView);
mat3 kernelMatrix = mat3(tangentView, bitangentView, normalView);
kernelMatrix *= rotationMat3;


This produces... unfavourable results ::)
What am I supposed to do with the random vector...?

Other than that, your instructions were clear enough for me to implement! I got the smoothstep, GL_RGB8_SNORM, and GL_LESS to work! Thanks!

Offline theagentd
« Reply #20 - Posted 2017-09-27 14:23:37 »

I think the reason you're getting wrong results is that you're doing the matrix multiplication the wrong way around. Remember that matA*matB != matB*matA. However, I've been thinking about this, and I think it's possible to simplify it.

What we really want to do is rotate the samples around the Z-axis. If we look at the raw sample offsets, this just means rotating the XY coordinates and leaving Z intact. Such a rotation matrix is much easier to construct:
   float angle = rand(texCoords) * PI2;
   float s = sin(angle);
   float c = cos(angle);
   mat3 rotation = mat3(
      c, -s, 0,
      s,  c, 0,
      0, 0, 1
   );
   //We want to do kernelMatrix * (rotation * samplePosition) = (kernelMatrix * rotation) * samplePosition
   mat3 finalRotation = kernelMatrix * rotation;


This should be faster and easier to get right!

Offline KaiHH




« Reply #21 - Posted 2017-09-27 18:29:20 »

Quote from: theagentd
- The code you have is using samples distributed over a half sphere. Your best bet is a modified version of best candidate sampling over a half sphere, which would require some modification of the JOML code to get.
Quote from: Ecumene
How would this look? Do you have any info on producing half-sphere samples? Sorry I'm asking so many questions.

The current JOML snapshot version 1.9.5-SNAPSHOT (the latest version on GitHub) adds the ability to generate best-candidate samples on the unit hemisphere around the +Z axis, with Z in [0..+1]. The API has also changed a bit towards a more builder-like pattern:
float[] samples = new float[numSamples * 3];
new BestCandidateSampling.Sphere()
  .seed(seed)
  .numSamples(numSamples)
  .numCandidates(numCandidates)
  .onHemisphere(true)
  .generate(samples);
Offline Ecumene



« Reply #22 - Posted 2017-09-27 23:21:18 »

Alright! I got the best-candidate samples to work. I need a little more info, theagentd:

The rotation matrix is calculated correctly, but I'm curious what I'm supposed to exchange the random vector for now that the noise texture is gone? I set it to (0, 1, 0) to test in my post from yesterday, and I hoped you'd pick up on it. Here's a sample:

float angle = rand(v_texCoords) * 6.28318;
float s = sin(angle);
float c = cos(angle);
mat3 rotation = mat3(
  c, -s, 0,
  s,  c, 0,
  0, 0, 1
);

vec3 randomVector = vec3(0, 1, 0);
vec3 tangentView = normalize(randomVector - dot(randomVector, normalView) * normalView);
vec3 bitangentView = cross(normalView, tangentView);
mat3 kernelMatrix = mat3(tangentView, bitangentView, normalView) * rotation;

Offline Ecumene



« Reply #23 - Posted 2017-09-28 20:59:14 »

Sorry about the bump, but the only way I get any output other than solid 1.0 is when I remove the rotation matrix. Take a look at this.



This is using the best-candidate sampling; here's the output of that:

[-0.7227158, -0.6058225, 0.33265734,
 0.7488476, 0.6624017, 0.021243215,
 -0.61065674, 0.7918451, 0.008934975,
0.57487714, -0.49303505, 0.65301824,
 0.03831022, 0.5855623, 0.80972165,
 0.9759306, -0.21806261, 0.0028834343,
 -0.72111094, 0.086945206, 0.6873424,
-0.0080795605, -0.99995875, 0.0041467547,
 -0.13685125, -0.649704, 0.7477677,
0.67209387, 0.35918146, 0.6475172,
 -0.97659266, 0.21375246, 0.024014235,
 0.087515585, 0.9748422, 0.20499706,
 -0.5405089, 0.6260205, 0.5620929,
0.15146911, -0.13660306, 0.97897744,
 0.6733053, -0.7337369, 0.09105086,
 0.90603316, -0.025514243, 0.42243695,
 0.5041927, 0.7524756, 0.42375714,
 -0.23089148, 0.08173944, 0.96954,
 -0.9509538, -0.29908586, 0.07895911,
 0.32695338, -0.8306433, 0.45070308,
 -0.11253592, 0.8298539, 0.54651463,
 0.5440647, -0.0063848053, 0.83901894,
 -0.8152317, 0.3753412, 0.44104004,
0.9406799, 0.3066772, 0.14515686,
 -0.3857175, -0.8732388, 0.29778516,
 -0.5603924, -0.29158565, 0.77520204,
 0.220142, -0.57200706, 0.7901553,
 0.33273157, 0.2870272, 0.89827895,
 0.8156582, -0.46193165, 0.34831142,
 -0.9124656, -0.0972097, 0.3974378,
 -0.8566877, 0.49673495, 0.13907069,
 -0.4933554, -0.65016884, 0.57782435]


This may be incorrect; I modified the code very slightly to work with LibGDX's vectors (but the functions are identical...).

Any help on my previous post, plus suggestions, would be appreciated!

Offline theagentd
« Reply #24 - Posted 2017-09-30 01:12:59 »

I've been super busy, sorry.

I didn't realize the random vectors essentially fill the same purpose as the random rotations. You can drop the rotation matrix I gave you and just use the random-vector texture you had. Please post how you sample from it. I recommend a simple texelFetch() with the coordinates AND-ed to keep them in range.
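Something along these lines (GLSL 1.30+; the 4x4 size comes from the noise texture posted earlier, so & 3 wraps the coordinates into range):

ivec2 noiseCoord = ivec2(gl_FragCoord.xy) & 3;  // & 3 == mod 4 for a 4x4 texture
vec3 rnd = normalize(texelFetch(u_texture3, noiseCoord, 0).xyz * 2.0 - 1.0);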

Offline Ecumene



« Reply #25 - Posted 2017-09-30 02:57:48 »

I've reverted to the old code! Here's a screenshot of the current results, and the code that comes with it:

https://imgur.com/a/6UT4q

Here's the raw output of the random texture:


Here's how I sample it:
// Setting the uniform
setParamsv(Param.RotationNoiseScale, new float[]{(float) Gdx.graphics.getWidth()/4f, (float)Gdx.graphics.getHeight()/4f}, 0, 2);
// Sampling
normalize(texture(u_texture3, v_texCoords * u_rotationNoiseScale).xyz * 2.0 - 1.0);


Here's the entire shader:
http://pastebin.java-gaming.org/7c4de38495c1c

Offline Ecumene



« Reply #26 - Posted 2017-09-30 23:36:30 »

I'm almost there!!! It turns out I had a combination of problems caused by the view-normal calculation and the random-texture input to the GLSL shader. One last problem:



Any idea how to make this fade more? Thanks!

Offline basil_




« Reply #27 - Posted 2017-10-01 09:25:56 »

If occlusion is in 0.0 .. 1.0, then occlusion *= occlusion will fade more.
Offline KaiHH




« Reply #28 - Posted 2017-10-01 09:50:42 »

Since you are already using samples on a normal-oriented hemisphere, you should also use cosine-weighted occlusion factors.

Currently, the contribution of light along every direction is the same, so the occlusion is just a linear function of how many AO samples indicated occlusion. But in the real world a surface receives more light (irradiance) from light sources directly facing it and less light from sources at an angle to it. Since with AO we assume that light comes in equally from everywhere, blocking light along the surface's normal should darken the surface much more than blocking light from a direction at an angle to the normal, since that angled light would not have contributed much to the irradiance anyway. The factor by which light at an angle contributes less to the irradiance than light directly along the normal is exactly cos(angleOfLightDirectionToNormal), which can be computed via the dot product when you have both the normal vector and the light direction vector, which in our AO case is the sample direction vector. So you do not just do occlusion += 1.0, but instead occlusion += weightedOcclusionFactor.

For this to work, you must make sure that when all AO samples indicate occlusion, the weighted occlusions sum to 1.0 in total. This can be done by simply computing the cosine weight factors of all AO sample directions beforehand, summing them up, and then dividing each factor by that sum to normalize them.
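A sketch of that precomputation on the Java side, assuming samples is the flat array of unit-length hemisphere directions from the JOML generator (so the cosine against the +Z tangent-space normal is simply each sample's z component):

float[] weights = new float[numSamples];
float sum = 0f;
for (int i = 0; i < numSamples; i++) {
   weights[i] = Math.max(0f, samples[i * 3 + 2]); // cos(angle to normal) == z
   sum += weights[i];
}
for (int i = 0; i < numSamples; i++) {
   weights[i] /= sum; // now the weights add up to exactly 1.0
}
// Upload as a uniform array; in the shader, replace occlusion += 1.0 with
// occlusion += u_kernelWeights[i];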