Pages: 1 ... 212 213 [214] 215
  What I did today  (Read 3492594 times)
Offline SugarBlood
« Reply #6390 - Posted 2019-09-17 16:51:03 »

@dime26, Hello, I tried your game, it's very nice for 48 hours Cheesy But on level 19 you have only 1 second to clear it, is there something I don't understand? Huh

Hi, @VaTTeRGeR, I remember I saw some interview about a VR archery game, and they said something like: you don't really have a bow in your hands, but you can feel it when its string is triggered Cheesy
Offline philfrei
« Reply #6391 - Posted 2019-09-17 20:28:38 »

VR is pretty fun, when it works. But I think it is going to have to become easier to use before it really catches on. A couple months ago, I brought my wife to a shop that supports gaming, where they had a corner roped off for VR with VIVE and there were all sorts of problems just setting up and getting programs to run correctly. The store employees rolled back the clock a couple of times (we were renting time) while trying to figure it out. Also there is a definite learning curve, trying to get used to the controllers, even with simple things like picking stuff up, and the programs themselves often don't do a good job of introducing themselves to novices. Somehow, I think it is easy to expect it to be a lot more intuitive (and thus leave disappointed but hopeful for improvements), but maybe any time one experiences a new medium, there is going to be a lot to learn in order to use it. My wife was gaga about an underwater adventure she took where a massive whale sidles up and looks her in the eye. One really can get a sense of space from these illusions.

My day: (after a couple days offloading data from my desktop PC), I have made a bootable USB with Ubuntu Server, but made the mistake(?) of running chkdsk on my Windows 10 machine. We've been sitting at "Scanning and repairing drive (C:): 12% complete" for 4 hours so far. Am heading out to run some errands. Maybe it will finish before I come back?

How long to give it before pulling the plug?

I don't want to damage the disk.
A prior, read-only run of chkdsk revealed three items to fix (and only took 5 or 10 minutes to execute).
At this point, I only hear the fan. I'm not hearing hard drive activity, as I was earlier.

I'm keeping my laptop (Windows 10) as my main dev environment, but I really want to learn more about working "full stack" and especially about the server level.

Am reading this: "Linux Administration: A Beginner's Guide, 7th Edition"
The "beginners" word is a bit deceptive. They are assuming that the reader knows a fair bit about Windows at the OS level and are just new to Linux. I know a bit about Windows OS, but am not an expert. Still, it's the closest book I found that matches my current level of knowledge.

music and music apps: http://adonax.com
Offline VaTTeRGeR
« Reply #6392 - Posted 2019-09-17 21:34:04 »

@SugarBlood Yep, that pretty much describes it, your mind fills in the gaps.

@philfrei I think/hope it will go the way of graphics cards with unified APIs like OpenGL for rendering/input and the most successful hardware designs becoming the common denominator. My experience with the Rift S was surprisingly painless but also only around 5 minutes long. I definitely won't be buying any VR headset anytime soon though, the combination of GPU and headset is just too expensive right now but that'll change at some point.

Can't really help you with that disk though. I would just force Windows to shut down if your goal is to install Linux on that disk anyway  Huh
Offline philfrei
« Reply #6393 - Posted 2019-09-18 15:35:26 »

The decision on whether to shut down was made for me. Right before leaving for a cowork space, our power went out! Good old CA-PG&E power grid. I just ran a read-only chkdsk, and the results are worse than before: a dozen corrupt entries instead of 3.

I asked about this at the cowork space, and the consensus was that the hard drive should be replaced. I probably should have done this months ago, when the "disk" reading from the Windows TaskManager started routinely pinning 100% for the first 5 then 10 then 15 minutes when starting up. Always trying to save a buck, kept putting it off and just shifted more work to the laptop.

EDIT: Just sent off for a new SSD. Meanwhile, a chkdsk with /f only (fix, don't bother to try and recover) managed to execute and not hang. So maybe I will go ahead and try installing Ubuntu-Server today. Not a big deal having to do it over again in a week.

music and music apps: http://adonax.com
Offline dime26

JGO Wizard


Medals: 90
Projects: 7
Exp: 12 years


Should traffic wardens be armed?


« Reply #6394 - Posted 2019-09-20 22:34:50 »

@dime26, Hello, I tried your game, it's very nice for 48 hours Cheesy But on level 19 you have only 1 second to clear it, is there something I don't understand? Huh

Hi, @VaTTeRGeR, I remember I saw some interview about a VR archery game, and they said something like: you don't really have a bow in your hands, but you can feel it when its string is triggered Cheesy

You have limited time to complete the game; if you run out of it, then you need to go back to the start and do better on some of the levels to reclaim time.

You start with, I think, 300 seconds and use up time on each level; only your fastest time on each level is taken from the total. I probably should have had no time limit per level, but kept a total and shown which levels were done too slowly.
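The rule described above boils down to simple arithmetic; a guessed sketch (the 300-second budget is dime26's number, the class and method names are mine):

```java
// A guessed sketch of the timing rule described above: a fixed budget
// (300 s), with only the fastest recorded time per level counted against it.
public class TimeBudget {
    public static float remaining(float budget, float[] bestPerLevel) {
        float used = 0;
        for (float t : bestPerLevel) used += t;
        return budget - used; // <= 0 means: go back and beat your slow levels
    }
}
```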
Offline SugarBlood
« Reply #6395 - Posted 2019-09-22 16:16:22 »

@dime26

Oh, got it, almost cleared it, but stuck on level 21
Offline KaiHH

JGO Kernel


Medals: 745



« Reply #6396 - Posted 2019-09-26 22:50:07 »

Had fun adding "Swept AABB/AABB" collision detection and response into a small demo, as I want to develop the render/OpenGL demos more into being an actual playable game.
Doing collision detection robustly was quite challenging, including not colliding with the "side" between two adjacent voxels. The nice thing is that I could reuse the already existing kd-tree, which was used for ray-tracing, for the broadphase collision detection to quickly collect potential candidate voxels given the swept/extended player AABB.
http://www.youtube.com/v/cGQQFSXdCxI?version=3&hl=en_US&start=
(sorry for the weird colors, it's just a quick debug render)
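The swept-AABB test KaiHH mentions can be sketched per axis: compute when the moving box starts and stops overlapping the static box on each axis, then intersect those time intervals. This is a generic 2D sketch with made-up names, not code from his demo:

```java
// Hedged sketch of swept AABB vs. AABB: box A moves by (vx, vy) over one
// step against a static box B. Boxes are given as (min, max) per axis.
public class SweptAabb {
    /** Returns time of first contact in [0,1], or 1 if no hit this step. */
    public static float sweep(float ax0, float ay0, float ax1, float ay1,
                              float bx0, float by0, float bx1, float by1,
                              float vx, float vy) {
        float tEnter = Math.max(timeEnter(ax0, ax1, bx0, bx1, vx),
                                timeEnter(ay0, ay1, by0, by1, vy)); // last axis to start overlapping
        float tExit  = Math.min(timeExit (ax0, ax1, bx0, bx1, vx),
                                timeExit (ay0, ay1, by0, by1, vy)); // first axis to stop overlapping
        return (tEnter < tExit && tEnter >= 0f && tEnter <= 1f) ? tEnter : 1f;
    }
    private static float timeEnter(float a0, float a1, float b0, float b1, float v) {
        if (v > 0) return (b0 - a1) / v;
        if (v < 0) return (b1 - a0) / v;
        return a1 > b0 && a0 < b1 ? Float.NEGATIVE_INFINITY : Float.POSITIVE_INFINITY;
    }
    private static float timeExit(float a0, float a1, float b0, float b1, float v) {
        if (v > 0) return (b1 - a0) / v;
        if (v < 0) return (b0 - a1) / v;
        return a1 > b0 && a0 < b1 ? Float.POSITIVE_INFINITY : Float.NEGATIVE_INFINITY;
    }
}
```

The broadphase kd-tree query then only has to supply candidate voxels whose AABBs intersect the player's AABB extended along its velocity.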
Offline philfrei
« Reply #6397 - Posted 2019-09-29 19:27:17 »

Been slowly working through a textbook and tutorials on Ubuntu Server, which I installed on my desktop this last week. Today's first task is a tutorial on compiling the GNU "Hello World" program. Got to start somewhere! Also, my first SSD is in the mail and should arrive any day. Need to look at the vendor's site/info on prepping the hard drive for the switchover.

I got a super nice audio distortion algorithm working on my theremin on Thurs/Fri! It turns out the Java Math.tanh function sounds awesome and is super easy to use. I'm feeding it with an open fifth with a lot of built-in phasing, so the source sound has some nice internal movement to keep the sound alive, as well as the FM equivalent of a low-pass filter (sounds like a middling/mild Q, not super funky). Am calling this theremin voice/setting "PowerChord". Next up is figuring out how to cope with the aliasing the distortion function causes. (Plan: oversample plus filter, then listen to hear how that works. But the effect will mostly be used at lower pitches, I assume.)
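The tanh waveshaper described here is tiny in code; a minimal sketch with an assumed "drive" parameter (not from philfrei's actual theremin code), normalized so full-scale input still peaks near +/-1:

```java
// Minimal tanh waveshaping sketch: nearly linear for small inputs,
// soft-clipping/saturating for large ones. "drive" is a made-up parameter.
public class TanhShaper {
    public static double shape(double sample, double drive) {
        // divide by tanh(drive) so a full-scale input (sample == 1) maps to 1
        return Math.tanh(drive * sample) / Math.tanh(drive);
    }
}
```

As the post notes, the saturation adds harmonics above the input's, which is exactly what aliases without oversampling.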

music and music apps: http://adonax.com
Offline mudlee
« Reply #6398 - Posted 2019-09-30 02:23:04 »

I've been in game dev for 3 years now. I made three small games for jams with Unity and Godot, and tried out Unreal. I like them all, but still, the joy I get when running my own rendering code is bigger... so yesterday I finally managed to update the rendering and OBJ loading logic to load OBJ files with submeshes and materials. It worked on the first run, and the tests were green on the first try too!
Offline orange451

JGO Kernel


Medals: 544
Projects: 8
Exp: 8 years


Your face? Your ass? What's the difference?


« Reply #6399 - Posted 2019-09-30 13:37:48 »

We're gunna need some pics persecutioncomplex

First Recon. A java made online first person shooter!
Games published by our own members! Check 'em out!
Legends of Yore - The Casual Retro Roguelike
Offline mudlee
« Reply #6400 - Posted 2019-09-30 19:30:51 »

Here it is Smiley https://drive.google.com/file/d/1YKeQBPKglP0nARTYqaRzsTT8z5nSYh05/view?usp=drivesdk

Today I managed to refactor my camera into something more general (a Camera interface + an AbstractCamera parent), and now with a small code change I have an RPGCamera with the right view angle and movement. Initially I wanted to create a camera arm that holds the camera itself. I thought it would be easier to rotate the camera and move the arm, but I use ECS, and when I started to think about parent-child relationships between transforms I realised that ECS is not meant for this Smiley So I ended up with a camera refactor.
Offline KaiHH
« Reply #6401 - Posted 2019-09-30 22:06:39 »

Today: Greedy Meshing

Up until now I only used actual cube and cuboid primitives to ray trace the scene, but a triangle mesh allows me to:
1. do a raster pre-pass to compute "primary rays" giving the hit point position and normal to start any secondary rays from that point
2. use Nvidia Vulkan Ray Tracing, which gives a massive performance boost compared to a non-RT compute shader on Turing hardware!

Greedy Meshing Java Code here: https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/util/GreedyMeshing.java
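As a toy illustration of the greedy idea in the linked GreedyMeshing.java (which handles the full 3D/6-face case), here is a 2D sketch that merges solid cells of one face slice into maximal rectangles; the names and the int[]{x, y, w, h} output format are mine:

```java
import java.util.ArrayList;
import java.util.List;

// 2D greedy meshing sketch: grow each unconsumed solid cell right, then
// down, and emit one quad per merged rectangle instead of one per cell.
public class Greedy2D {
    public static List<int[]> mesh(boolean[][] solid) {
        int h = solid.length, w = solid[0].length;
        boolean[][] used = new boolean[h][w];
        List<int[]> quads = new ArrayList<>();
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++) {
                if (!solid[y][x] || used[y][x]) continue;
                int qw = 1; // grow right as far as possible
                while (x + qw < w && solid[y][x + qw] && !used[y][x + qw]) qw++;
                int qh = 1; // grow down while the whole row strip is solid and unused
                outer:
                while (y + qh < h) {
                    for (int i = 0; i < qw; i++)
                        if (!solid[y + qh][x + i] || used[y + qh][x + i]) break outer;
                    qh++;
                }
                for (int yy = y; yy < y + qh; yy++)
                    for (int xx = x; xx < x + qw; xx++) used[yy][xx] = true;
                quads.add(new int[]{x, y, qw, qh});
            }
        return quads;
    }
}
```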
Offline KaiHH
« Reply #6402 - Posted 2019-10-02 00:11:33 »

Today I quickly threw a simplex-noise-generated and greedy-meshed chunk onto NV_ray_tracing with a simple path tracing shader and got this:
http://www.youtube.com/v/LhN3w46kKmc?version=3&hl=en_US&start=
(16 samples per pixel, 4 bounces, 1440p, 60Hz)
One might think this is pre-rendered in e.g. Blender/Cycles, but it's actually realtime with a smooth camera rotation animation. Smiley
Offline KaiHH
« Reply #6403 - Posted 2019-10-05 16:09:03 »

Today I implemented hybrid ray tracing (rasterization + ray tracing) in Vulkan, so that the first ray (starting from the eye) is not ray traced but actually rasterized/rendered as you would normally do, because rasterization still is much faster.
Normal output (rasterized):

Depth output (rasterized - linearized depth buffer):

1 sample-per-pixel 1 bounce (ray traced):

Code: https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/vulkan/NvRayTracingHybridExample.java
Online SHC
« Reply #6404 - Posted 2019-10-05 16:50:39 »

I started learning Spring framework. P.S.: That syntax theme is my own port of the Ayu Dark colour scheme from VIM.


Offline KaiHH
« Reply #6405 - Posted 2019-10-07 20:42:41 »

Blue noise sampling is so superior to white noise for the second-bounce ray, where the 2D sample vector in screen space maps linearly onto the blue noise function/image domain.
Essentially, the blue noise function/image is sampled based on the X/Y screen-space position, and the sample position is shifted by a random/hash function taking the bounce index and frame index (or frame/elapsed time) as input. This is what is commonly called "Cranley-Patterson rotation" in the literature.
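A hedged sketch of that lookup-and-shift: sample a tiling blue-noise image by pixel coordinates, then apply a Cranley-Patterson shift derived from a hash of bounce and frame index, wrapping back into [0,1). The noise array and the hash here are stand-ins for a real blue-noise texture and hash function:

```java
// Sketch of blue-noise sampling with Cranley-Patterson rotation as
// described above. All names are illustrative, not from the linked code.
public class CpRotation {
    public static float sample(float[][] blueNoise, int px, int py,
                               int bounce, int frame) {
        int size = blueNoise.length;
        float n = blueNoise[py % size][px % size];  // screen-space lookup (tiled)
        float shift = hash(bounce * 7919 + frame);  // per-bounce/per-frame offset
        float s = n + shift;
        return s - (float) Math.floor(s);           // toroidal wrap into [0,1)
    }
    private static float hash(int x) {              // cheap integer hash -> [0,1)
        x = (x ^ 61) ^ (x >>> 16);
        x *= 0x27d4eb2d;
        x ^= x >>> 15;
        return (x >>> 8) / (float) (1 << 24);
    }
}
```

The wrap is what preserves the blue-noise distribution per pixel while decorrelating successive bounces and frames.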

White noise:


And blue noise:


The blue noise pattern is very amenable to low-pass spatial filtering (such as with a Gaussian filter) because there is very little low-frequency noise.
Java: https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/vulkan/NvRayTracingHybridExample.java
GLSL: https://github.com/LWJGL/lwjgl3-demos/blob/master/res/org/lwjgl/demo/vulkan/raygen-hybrid.glsl

By the way: This video shows the current state of the art in sample generation (pretty tough stuff!)

EDIT: Images with 4 samples per pixel (left white noise, right blue noise):


EDIT2:
Also implemented https://eheitzresearch.wordpress.com/762-2/ today in OpenGL/GLSL, giving a big improvement to sample quality over white noise. Comparison (1spp multiple-importance sampling with single rectangular light source and 3 bounces):
(left is white noise, right is blue noise):



(you need to open these images in a separate tab or download them. the browser's downsampling destroys the effect)

Bottom line: NEVER use white noise (simple rand()) when generating samples! Smiley
Offline orange451
« Reply #6406 - Posted 2019-10-13 05:00:38 »

Played around with a dark theme in my IDE:


Also re-implemented in-IDE project testing, so you don't HAVE to test in a separate window anymore. This was done to make the program more prototype-friendly. If I want to open it up and test how to write something in Lua, I will no longer be required to save my work to a temporary file.

I also added undo/redo support.

[EDIT]
I forgot to mention @Guerra24 has joined me, and implemented his rendering engine in my game engine. So now it looks a lot prettier Smiley

[EDIT2]
Here's a gif of internal testing
Click to Play

Click to Play

First Recon. A java made online first person shooter!
Offline philfrei
« Reply #6407 - Posted 2019-10-17 20:21:18 »

Been plugging away at setting up a Linode for my own website, with the goal of migrating fully to it before I have to pay the renewal fee for my current ISP (Oct. 28!).

I've managed to install Jetty as my webserver (instead of the recommended Apache) as I want to be able to play around with Servlets and JSP. It now works as a service, and is hosting a replica of my website. That all went fairly easily.

Now diving into learning about hosting a mail server. Linode recommends Postfix, and I will likely end up going that route. There's also a Java-based mail server project from Apache.org called JAMES. I'm trying to give it a look, too. There is a lot to learn in this realm! And low-grade migraines don't make studying any easier.

I was telling my brother about the travails of packaging Java projects (where jlinking is involved). He asked if writing a tool to generate the needed shell command steps would be a useful project/product. Thoughts? I've been so occupied with learning server skills that I haven't been able to formulate the task requirements or even think about feasibility, or if this duplicates an existing tool that I don't know about.

music and music apps: http://adonax.com
Offline Dave_

Senior Newbie


Medals: 1
Projects: 1



« Reply #6408 - Posted 2019-10-17 21:47:47 »

Played around with a dark theme in my IDE:


Seems to be inspired by RobloxStudio, looks really good! Will this engine be publicly available to mess around in?
Offline orange451
« Reply #6409 - Posted 2019-10-17 23:31:14 »

Seems to be inspired by RobloxStudio, looks really good! Will this engine be publicly available to mess around in?
That’s the idea. I really like how Roblox structured their engine, but severely dislike how managed all content is within it. So this engine will be designed similarly but offer much more control to the user. And yes it’ll be available, it’s already on my GitHub, I just don’t want to start advertising it because it’s not in a state that I’m totally comfortable with.

First Recon. A java made online first person shooter!
Offline Guerra2442

JGO Coder


Medals: 75
Exp: 3 years


Guerra24


« Reply #6410 - Posted 2019-10-18 04:37:59 »

Hey! I'm back, this time working with @orange451. After spending hours investigating and debugging a problem that turned out to be simply the wrong context being current when certain code ran, I continued my task of integrating the rendering code into the ECS and finally finished the dynamic sky.

Click to Play


Working on Anarchy Engine.
Offline KaiHH
« Reply #6411 - Posted 2019-10-20 12:46:59 »

The past days I've been researching and implementing algorithms for efficient chunk/voxel generation and rendering, including:
1. Iterating chunks from front to back based on distance to camera
2. Computing "connectivity" between a chunk's faces (can a face X see a face Y?) with flood filling starting from one of three faces and seeing whether the flood reaches any of the other three faces. This will be used for CPU-side occlusion culling. Article about the algorithm: https://tomcc.github.io/2014/08/31/visibility-1.html
3. "Dense" Octree based on "mipmapped" arrays for storing chunks
4. Greedy meshing

I've read a few articles about chunk/voxel management and rendering, and concluded with the following design:
1. Use a "rolling"/sliding 3D array representing the maximum visible world
2. Each array item stores a tree of chunks as a dense "array-mipmapped" octree (so, no pointers but just array index computations, because we expect all of the octree leaf nodes to be filled eventually, either with a non-empty or empty chunk)
3. This octree stores 1 chunk per leaf node
4. The octree is used for combined frustum and occlusion culling as well as determining the next chunk to generate
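The "array-mipmapped" octree in points 2 and 3 can be reduced to pure index arithmetic; a guessed minimal sketch (the level/coordinate convention is my assumption, not KaiHH's actual layout):

```java
// Dense octree sketch: each level l is a flat array of (2^l)^3 nodes, so
// parent/child links are index computations instead of pointers.
public class DenseOctree {
    /** Index of node (x,y,z) within level l's flat array; level l has side 2^l. */
    public static int nodeIndex(int level, int x, int y, int z) {
        int side = 1 << level;
        return (z * side + y) * side + x;
    }
    /** Index of the child at offset (cx,cy,cz), each in {0,1}, one level down. */
    public static int childIndex(int level, int x, int y, int z,
                                 int cx, int cy, int cz) {
        return nodeIndex(level + 1, 2 * x + cx, 2 * y + cy, 2 * z + cz);
    }
}
```

Dense storage only pays off because, as the post says, every leaf is expected to be filled eventually (with either an empty or a non-empty chunk).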

Especially number 4. requires some thought: What we want is a very efficient algorithm to drive chunk generation. This needs to cooperate with CPU-side frustum and occlusion culling in order to avoid generating chunks which we know will not be visible. It also needs to generate chunks in front-to-back order starting from the camera's position to generate the best potential occluders first.

About point 1.: The purpose of this rolling array is for the managed chunks to "move" along with the player, so we can always generate chunks around the player. Another alternative that has also been proposed is a simple hashmap, hashed by world position. I'll go with a rolling array for now.
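The rolling array from point 1 can be sketched with floorMod: world chunk coordinates wrap into a fixed-size grid, so moving the player just overwrites the cells that scrolled out of range. The grid size and linearization below are assumptions:

```java
// Rolling/sliding 3D chunk array sketch: a world chunk coordinate maps to a
// stable slot as long as it stays within SIZE chunks of the player.
public class RollingGrid {
    static final int SIZE = 16; // managed chunks per axis (assumed)
    public static int slot(int worldChunkX, int worldChunkY, int worldChunkZ) {
        int x = Math.floorMod(worldChunkX, SIZE); // floorMod handles negatives
        int y = Math.floorMod(worldChunkY, SIZE);
        int z = Math.floorMod(worldChunkZ, SIZE);
        return (z * SIZE + y) * SIZE + x;
    }
}
```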

About the octree: We mainly need a datastructure with spatial hierarchy to accelerate frustum and occlusion culling and efficient k-nearest neighbor queries to determine the chunks to generate and render next. We could also just use an array/grid here, because chunks are basically everywhere and k-nearest neighbor in this case will simply be iterating in a spiral around the player, but: This is only true for initial chunk generation. When a chunk has been generated (and is possibly empty because it only consists of air) we can and should use hierarchical culling instead.
Also, the CPU-based occlusion culling with the "is face X visible from face Y" algorithm is very conservative, so it is only good for an initial estimate of the potentially visible set of chunks; we would really like to combine it with GPU-side hierarchical Z-buffer occlusion culling/queries.

The hierarchy of the rolling array and the contained octrees is also necessary because I've not found an efficient way to "slide" an octree along with the player, except removing and re-inserting chunks.

There's still a lot to do, such as a potentially visible set algorithm for the CPU-side occlusion culling algorithm, which goes like this: Whenever it is determined that a chunk face entered by the view direction exits another face in this same chunk, we want to "narrow" the possible chunks being visited after that by the frustum made by the chunk faces with the current camera frustum. A glimpse of this idea is also demonstrated here: https://tomcc.github.io/frustum_clamping.html

EDIT: What this little Javascript demo doesn't show, however (because a single frustum is immediately updated after placing a block), is that we don't have just a single frustum which we narrow and filter chunks by, but we need actually multiple such frusta. Imagine the initial view/camera frustum and one completely opaque chunk rendered in front of the camera blocking the view in the center, whereas on the left and right side of the view, we can see further away into the world. In this case we actually have two frusta by which we filter further visited chunks. Since this can amount to possibly thousands of sub-frusta, an optimal solution to this problem would be an interval tree. We simply compute the left and right end of each frustum along the screen-space X and Y directions by determining the distance of the view frustum's planes to a chunk. This can either narrow down a single interval or split an interval into two intervals, if the chunk is opaque and culls everything behind it.
EDIT2: I think, I will go with a software rasterization approach for occlusion culling instead.
Here is a simple depth-only rasterizer without vertex attribute interpolation, which I am going to use for CPU-side occlusion culling: https://github.com/LWJGL/lwjgl3-demos/blob/master/src/org/lwjgl/demo/util/Rasterizer.java
It is tailored for my vertex format (unsigned 8-bit vertex positions and 16-bit indices).
Here are two images (the left/first is rasterized with OpenGL showing linear depth, the right/second image is showing the linear depth rasterized with the software rasterizer):

(the difference of both images is exactly black/zero at the actual rasterized triangles)
Offline KaiHH
« Reply #6412 - Posted 2019-10-23 18:59:12 »

Spent yesterday and today evening on researching how to more efficiently render a discrete voxel grid from front-to-back (for occlusion culling) and back-to-front (for transparency) without explicitly sorting the voxels by view distance.

There are some papers about a simple slices/rows/columns algorithm, which looks at the view vector, sorts its components by their absolute lengths, and defines the nesting and direction of three for-loops based on the different cases (component lengths, and whether each component points into the negative or positive half-space):
- "Back-to-Front Display of Voxel-Based Objects" - Gideon Frieder et al., 1985
- "A Fast Algorithm to Display Octrees" - Sharat Chandran et al., 2000

This however only works under orthographic projection, as is also mentioned by:
- "Improved perspective visibility ordering for object-order volume rendering" - Charl P. Botha, Frits H. Post, 2005

which presents an improvement of an ordering algorithm under perspective projection presented by:
- "Object-Order Rendering of Discrete Objects" - J. Edward Swan II, 1997
   notably in Chapter "2.3 The Perspective Back-to-Front Visibility Ordering"

So in essence, the slices/rows/columns algorithm can be used but the iteration direction needs to swap when the direction to the voxel swaps the side relative to the vector that is perpendicular to the voxel plane and starts at the eye location. This is also pretty obvious, when imagining the camera looking at a wall of voxels and looking very slightly to the right. When we wanted to render voxels from back to front with the orthographic projection slices/rows/columns-algorithm we would simply iterate the voxels from right to left, since we look slightly to the right. This will be correct as long as the voxels are to the right of the view vector. But when we reach the left-hand side, then under perspective projection, we would render voxels nearest to the viewer first, which would be incorrect.
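The orthographic-only core of the slices/rows/columns idea can be sketched by flipping each loop's direction against the sign of the view vector's components. This deliberately omits the papers' magnitude-based loop nesting and, as noted above, is not sufficient under perspective projection:

```java
import java.util.ArrayList;
import java.util.List;

// Back-to-front cell ordering for a size^3 grid under an ORTHOGRAPHIC view
// direction (vx, vy, vz): looking along +axis means far cells have large
// coordinates on that axis, so iterate that axis descending.
public class BtfOrder {
    public static List<int[]> order(int size, float vx, float vy, float vz) {
        List<int[]> cells = new ArrayList<>();
        int x0 = vx >= 0 ? size - 1 : 0, dx = vx >= 0 ? -1 : 1;
        int y0 = vy >= 0 ? size - 1 : 0, dy = vy >= 0 ? -1 : 1;
        int z0 = vz >= 0 ? size - 1 : 0, dz = vz >= 0 ? -1 : 1;
        for (int z = z0; z >= 0 && z < size; z += dz)
            for (int y = y0; y >= 0 && y < size; y += dy)
                for (int x = x0; x >= 0 && x < size; x += dx)
                    cells.add(new int[]{x, y, z});
        return cells;
    }
}
```

The perspective fix described above then amounts to flipping an axis's iteration direction at the cell where the eye's perpendicular crosses the voxel plane.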
Offline KaiHH
« Reply #6413 - Posted 2019-10-26 19:33:26 »

Today was spent figuring out how to cheaply combat T-junction issues that arise with greedy meshing when faces share an edge but do not share the vertices of that edge, leading to visible and distracting errors when those faces are rasterized, because interpolated vertex positions do not always cover every pixel on that edge.
Since producing a proper mesh without any T-junctions is complicated and such an algorithm will likely be way slower than greedy meshing, I went for the simple hack of expanding/scaling faces a tiny bit so that those rounding errors will not occur/be visible anymore, increasing the potential for pixel overdraw just a tiny bit at those edges. But having a 100% correct rasterization without any holes in it for occlusion queries is more important.
Here is one of the debug images of using the vertex shader to offset the vertex positions based on the view distance (more precisely, the w component of the clip space vertex for inverse view/z distance to have constant offset in screen-space to avoid errors creeping up in more distant vertices):

In the image I used an exaggerated negative offset to test the view-dependent offset scale calculation.
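The face-expansion hack can be sketched in 2D within a face's plane: push each corner away from the quad's center by a small epsilon. This is done on the CPU for clarity; the post does it in the vertex shader, scaled by the clip-space w component. The epsilon value and quad layout are assumptions:

```java
// Expand an axis-aligned quad's corners outward from its center so adjacent
// greedy-meshed faces overlap slightly instead of leaving sub-pixel cracks.
public class FaceExpand {
    /** corners: [x0,y0, x1,y1, ...] of a quad in its face plane. */
    public static float[] expand(float[] corners, float eps) {
        float cx = 0, cy = 0;
        int n = corners.length / 2;
        for (int i = 0; i < n; i++) { cx += corners[2*i]; cy += corners[2*i+1]; }
        cx /= n; cy /= n;
        float[] out = new float[corners.length];
        for (int i = 0; i < n; i++) {
            float dx = corners[2*i] - cx, dy = corners[2*i+1] - cy;
            float len = (float) Math.sqrt(dx*dx + dy*dy);
            out[2*i]   = corners[2*i]   + eps * dx / len; // push outward
            out[2*i+1] = corners[2*i+1] + eps * dy / len;
        }
        return out;
    }
}
```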
Offline philfrei
« Reply #6414 - Posted 2019-10-30 17:38:51 »

Continuing to learn about servers and hosting.

I've managed to get Jetty functional using my domain name on the new Linode system, on port 80.

It's been... interesting. The Ubuntu repository installs Jetty as a systemd service, but the current Jetty.org documentation has no mention at all of systemd and assumes one has downloaded Jetty via wget and set things up in what seems to me a much cleaner fashion.

About yesterday: Jetty defaults to run on port 8080 and with Ubuntu, root permissions are required for access to 80. The Jetty.org docs advise using ipchains, iptables or Jetty's SetUID feature. Another tutorial I found prefers installing either apache or nginx and using one or the other to relay incoming 80 to jetty on 8080. (But the point of my picking jetty was to allow dynamic web serving without requiring a relay via apache, as in apache-tomcat.)

I've not been able to find documentation, in the classic sense, for the Ubuntu repository Jetty build. However, there are bread crumbs. A comment in a template start.ini file describes the use of AUTHBIND and the assumed location of configuration files.

From this one can also infer something useful about how the Ubuntu Jetty separates the service from the application, in anticipation of application updates.

Today's task: having installed Certbot and successfully generated keys (for https), I get the following for the next step: "You'll need to install your new certificate in the configuration file for your webserver." That's it.  Roll Eyes

Jetty.org's SSL section is NOT an easy read.  Tongue

music and music apps: http://adonax.com
Offline KaiHH
« Reply #6415 - Posted 2019-11-01 16:16:56 »

Just implemented the SIGGRAPH 2016 paper Real-Time Polygonal-Light Shading with Linearly Transformed Cosines into my OpenGL test scene code:
(image only shows the specular GGX contribution)

The lighting calculation is completely analytic and very cheaply done in the shader without any stochastic elements like Monte Carlo integration - so no noise.
But note the lack of shadows from the table. Eric Heitz has a recent solution for this as well: Combining Analytic Direct Illumination and Stochastic Shadows
The solution of that paper is to calculate the direct lighting analytically (without shadows) and then use stochastic ray tracing to compute the occlusion factor, which is then blurred/denoised. The advantage of doing it this way is that this completely avoids any noise/variance whenever the light is completely visible from the sample point.
So, when you have polygonal light sources (such as a rectangle), or basically any light shape for which an analytic solution exists (sphere, ellipsoid, disk, line, ...), you would no longer just sample the light-source area or the solid angle, but use the closed analytic solution and perform stochastic sampling only to compute the amount of shadow (hence the name "shadow ray").
I just love people like Eric Heitz who contribute to the research on readily applicable real-time rendering techniques.
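The shadowed/unshadowed split described above is, in essence, a ratio estimator; a minimal per-channel sketch (the names are mine, and a real implementation denoises the stochastic terms before dividing):

```java
// Ratio-estimator sketch for "analytic direct illumination + stochastic
// shadows": shade analytically (noise-free, unshadowed), then scale by the
// ratio of ray-traced shadowed to ray-traced unshadowed lighting.
public class RatioEstimator {
    public static float shade(float analyticUnshadowed,
                              float stochasticShadowed,
                              float stochasticUnshadowed) {
        // fully visible light: the ratio is 1, so the result is exactly the
        // analytic value and carries no Monte Carlo noise at all
        if (stochasticUnshadowed <= 0f) return 0f;
        return analyticUnshadowed * (stochasticShadowed / stochasticUnshadowed);
    }
}
```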

EDIT: Here is a video showing GGX with varying roughness:
http://www.youtube.com/v/-g1USekNpmU?version=3&hl=en_US&start=
Here is a very nice explanation of the "Linearly Transformed Cosines" technique: https://blog.magnum.graphics/guest-posts/area-lights-with-ltcs/
Offline mudlee
« Reply #6416 - Posted 2019-11-01 16:44:00 »

In the last few free evenings I've been working on an RPG camera and the input system in my framework, to be prepared for controlling characters from a StarCraft-like view. Both are ready, except for some Linux problems with mouse clicks: http://forum.lwjgl.org/index.php?topic=6958.0 (please check it if you have time Smiley).

I hope to have a small video of the progress soon.
Offline elect

JGO Knight


Medals: 71



« Reply #6417 - Posted 2019-11-01 18:05:34 »

@Kai, ever considered a blog?

All your valuable information could be stored and accessed much more conveniently.

Here it kind of gets lost in the numerous pages...
Offline princec

« JGO Spiffy Duke »


Medals: 1116
Projects: 3
Exp: 20 years


Eh? Who? What? ... Me?


« Reply #6418 - Posted 2019-11-01 20:31:44 »

What luck, someone has made a brand new forum in which he might start a thread of his own.

Cas Smiley

Offline CommanderKeith
« Reply #6419 - Posted 2019-11-02 07:40:00 »

Just implemented the SIGGRAPH 2016 paper Real-Time Polygonal-Light Shading with Linearly Transformed Cosines into my OpenGL test scene code
Very cool demo video. Can the technique be integrated with normal shadow casting? I note that the table legs do not cast shadows in the video.
Cheers,
Keith
