Java-Gaming.org
 1 
 on: 2018-10-19 12:29:15 
Started by GustavXIII - Last post by GustavXIII
If I create a randomly generated map with a 2-dimensional array like [50][50], how could I add random houses?
I thought about putting a random number into every array field, and if the number is, say, 7, then a house is generated there.
I would then have to edit the surrounding array fields to give it a house shape.

But the problem is: what if many houses spawn close together? Then it would look weird, with houses inside each other.
Is there some other solution?

int[][] mapArray = new int[50][50];

public void createMap() {
    Random rand = new Random();

    for (int x = 0; x < mapArray.length; x++) {
        for (int y = 0; y < mapArray[x].length; y++) {
            int i = rand.nextInt(200);
            if (i == 199) mapArray[x][y] = 2;
            else if (i > 150) mapArray[x][y] = 1;
            else mapArray[x][y] = 0;
        }
    }
}


2 means a house should be built; 0 is a grass tile and 1 is earth.


I draw the map like this:

public void drawMap(Graphics g) {
    for (int x = 0; x < mapArray.length; x++) {
        for (int y = 0; y < mapArray[x].length; y++) {
            switch (mapArray[x][y]) {
            case 0: g.drawImage(grass, x * 64, y * 64);
                break;
            case 1: g.drawImage(earth, x * 64, y * 64);
                break;
            case 2: g.drawImage(stone, x * 64, y * 64);
                break;
            }
        }
    }
}
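One common way to avoid houses spawning inside each other (just a sketch of an alternative, with hypothetical names, not code from the post) is rejection sampling: pick random spots for a fixed number of houses and reject any spot that is too close to an already placed one.

```java
import java.util.Random;

public class HousePlacer {
    // Sketch: place `count` houses (value 2, as in the post), rejecting any
    // spot within `minDist` tiles (Chebyshev distance) of an existing house.
    public static int[][] placeHouses(int[][] map, int count, int minDist, long seed) {
        Random rand = new Random(seed);
        int placed = 0, attempts = 0;
        while (placed < count && attempts++ < 10_000) { // give up eventually
            int x = rand.nextInt(map.length);
            int y = rand.nextInt(map[0].length);
            if (farEnough(map, x, y, minDist)) {
                map[x][y] = 2;
                placed++;
            }
        }
        return map;
    }

    // true if no house lies within minDist tiles of (x, y)
    private static boolean farEnough(int[][] map, int x, int y, int minDist) {
        for (int dx = -minDist; dx <= minDist; dx++) {
            for (int dy = -minDist; dy <= minDist; dy++) {
                int nx = x + dx, ny = y + dy;
                if (nx >= 0 && nx < map.length && ny >= 0 && ny < map[0].length
                        && map[nx][ny] == 2) return false;
            }
        }
        return true;
    }
}
```

This guarantees a minimum spacing by construction, so houses can never end up inside each other.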

 2 
 on: 2018-10-19 12:14:34 
Started by BurntPizza - Last post by KaiHH
Having seen that on Nvidia drivers glGetProgramBinary also outputs the ARB_vertex_program/ARB_fragment_program/NV_gpu_program4/5 assembly text, I began learning the NV_gpu_program4 assembly language and reading about low-level shader optimizations (MAD instead of MUL and ADD, using negation on instruction operands instead of negating the result, etc.) to squeeze the most performance out of it.
EDIT: Hit exactly this GLSL compiler bug now: https://devtalk.nvidia.com/default/topic/952840/bug-report-linker-error-when-indexing-dvec-in-an-unbounded-loop/
Plus, whenever I use bounded loops, the time the compiler takes to compile the GLSL code is directly proportional to the upper bound of the loop. So if I have for (int i = 0; i < 1024; i++) ..., glCompileShader() does not finish in any foreseeable time... This makes me want assembly even more.

 3 
 on: 2018-10-19 09:51:32 
Started by GustavXIII - Last post by ral0r2
I can recommend the maven assembly plugin, which takes care of building a .jar (only makes sense if you use Maven, though).
I wasn't able to create a fat .jar with either JarSplice or Launch4J in combination with Slick2D either.
That's why I use the maven assembly plugin.
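For reference, a minimal configuration for building a fat jar with the assembly plugin usually looks something like this (a sketch; the main class name is a placeholder, not from the post):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-assembly-plugin</artifactId>
  <configuration>
    <descriptorRefs>
      <descriptorRef>jar-with-dependencies</descriptorRef>
    </descriptorRefs>
    <archive>
      <manifest>
        <!-- placeholder main class -->
        <mainClass>com.example.game.Main</mainClass>
      </manifest>
    </archive>
  </configuration>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>single</goal></goals>
    </execution>
  </executions>
</plugin>
```

Running `mvn package` then produces an additional `*-jar-with-dependencies.jar` containing all dependencies.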

 4 
 on: 2018-10-19 08:46:40 
Started by GustavXIII - Last post by GustavXIII
Hi.

When I put everything into a folder myself, it works:


But I have to add something to my bat file or else it won't work, because it can't find lwjgl64.
Quote
-Djava.library.path=.\lwjgl\native\windows
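The full launch line in such a .bat then typically looks something like this (a sketch; `MyGame.jar` is a placeholder name, not from the post):

```bat
java -Djava.library.path=.\lwjgl\native\windows -jar MyGame.jar
```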



Now I want to make just one jar file or exe. I tried JarSplice and Launch4J.
But both times I get an error message:



I remember I had this same problem some years ago, but I forgot how I solved it.
My project looks like this:



 5 
 on: 2018-10-19 08:42:11 
Started by Zemlaynin - Last post by VaTTeRGeR
You don't have to make them, just grab some textures from *INSERT TEXTURE SITE HERE* and mix them with your current texture in your shader.

For example: https://www.textures.com/browse/grass/238
I used this site a lot before; they also have tiled textures.

 6 
 on: 2018-10-19 04:01:55 
Started by Zemlaynin - Last post by Zemlaynin
Thanks for the comments. The textures do look flat, but I can't make such textures myself, as I'm not an expert in this. If you help me with this I will be very grateful.

 7 
 on: 2018-10-18 22:28:10 
Started by KaiHH - Last post by landocalrissian
I'm wondering if you have looked into voxel splat rendering similar to what Media Molecule has done in Dreams?
http://media.lolrus.mediamolecule.com/AlexEvans_SIGGRAPH-2015-sml.pdf

 8 
 on: 2018-10-18 20:36:15 
Started by BurntPizza - Last post by philfrei
I got a "Power Chord" patch working as an option for the PTheremin yesterday.

Basically two sines, a perfect fifth apart [hmm, guitars generally use equal temperament instead? maybe, maybe not if they tune by using harmonics], with simple clipping, and a parameter available for real-time interpolation between the clipped and the clean signal. It sounds decent, not ideal.
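A minimal numeric sketch of such a patch (my illustration with made-up parameter names, not the actual PTheremin code): two sines at a 3:2 frequency ratio, hard clipping, and a mix parameter interpolating between the clean and clipped sum.

```java
public class PowerChord {
    // One output sample at time t (seconds): two sines a perfect fifth
    // (3:2 frequency ratio) apart, hard-clipped at clipLevel, then mixed
    // with the clean sum. mix = 0 is fully clean, mix = 1 fully clipped.
    public static double sample(double t, double f0, double clipLevel, double mix) {
        double clean = 0.5 * (Math.sin(2 * Math.PI * f0 * t)
                            + Math.sin(2 * Math.PI * f0 * 1.5 * t));
        // clamp, then renormalize so the clipped signal also peaks at 1
        double clipped = Math.max(-clipLevel, Math.min(clipLevel, clean)) / clipLevel;
        return (1 - mix) * clean + mix * clipped;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++)
            System.out.println(sample(i / 44100.0, 220.0, 0.3, 0.8));
    }
}
```

Hard clipping like this generates unlimited upper harmonics, which relates to the aliasing issue mentioned later in the post; oversampling plus a low-pass filter before decimation is the usual remedy.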

I tried a couple of other nonlinear equations given by DspRelated and CCRMA, but neither was radical enough.

Not finished, though, as the aliasing is over the top. I'm going to work on implementing a low-pass filter. My audio engineering friend points out I may need to oversample to get enough filtering without also compromising the basic tone.

 9 
 on: 2018-10-18 20:07:57 
Started by KaiHH - Last post by KaiHH
I just wanted to find some time to do a quick write-up of all the rendering techniques I've been exploring over the last month for voxel rendering and light transport in such scenes.

My approach to voxel rendering can be decomposed into two parts:
1. Solving light transport in the scene (how much light does each voxel receive (i.e. irradiance))
2. Actually efficiently rendering the voxels onto the screen

Let's start with 1.:

I started out with irradiance caching techniques, basically digesting all the papers/presentations from http://cgg.mff.cuni.cz/~jaroslav/papers/2008-irradiance_caching_class/ and others. This basically means that light transport is not constantly recomputed for each eye ray when it hits a surface; instead those computations are cached in some form of spatial data structure which is then indexed into during rendering.
Since I only store irradiance on the surface of voxels and not at all possible (regular) positions within the scene, for my purposes, this data structure can actually be the voxels themselves.
First, I used a single sample per voxel side, but that quickly and obviously resulted in ugly light discontinuities between nearby voxels and very unrealistic (as in "not at all") shadows at edges between two adjacent voxels. Even Minecraft looked better than that. So, to facilitate nice shade interpolation, including soft shadows at edges between two voxels, I ended up storing individual irradiance values at the four corners of each of the six faces of a voxel.
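With four corner values per face, shading a point on a face then amounts to bilinear interpolation; an illustrative sketch (my code, not from the post):

```java
public class FaceIrradiance {
    // corners holds the cached irradiance at the four face corners,
    // in the order (u,v) = (0,0), (1,0), (0,1), (1,1).
    // (u, v) is the position on the face, each in [0, 1].
    public static double sample(double[] corners, double u, double v) {
        double bottom = corners[0] * (1 - u) + corners[1] * u; // along v = 0
        double top    = corners[2] * (1 - u) + corners[3] * u; // along v = 1
        return bottom * (1 - v) + top * v;                     // blend rows
    }
}
```

Evaluating this per fragment is what gives the smooth shading gradients and soft edge shadows between adjacent voxels.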
In my implementation, the irradiance is also not computed along with on-screen rendering, but is computed completely off-screen and then only sampled during rendering. Soon came the realization that there needed to be some form of LOD (level of detail) to avoid computing all those irradiance samples at equal frequencies. For a typical 256x256x64 voxel cluster, which in the end could have around 32,000 visible voxels (yes, in the voxel generation phase I already removed voxels having adjacent voxels on all sides), that meant computing 32,000 * 6 * 4 = 768,000 irradiance samples. And that is only one sample per vertex per face per voxel. A believable scene needed around 32 samples per vertex, so a total of around 25 million samples, and it was still very noticeable when another sample was accumulated over time.
But clearly, there was no need to spend equal amounts of compute resources on all voxels alike. We could sample voxels nearer to the viewer more frequently, so this is what I did as well. But this was still too much work per frame and required some seconds of pre-processing in which irradiances were sampled only for the voxel cluster the player would start in.
Next came other LOD mechanisms which simply distributed the cost of a single dispatch compute over more frames:
- sampling only every N-th voxel
- sampling only a single voxel face
- sampling only visible faces (faces not adjacent to another voxel) (I will come later to why this actually is not a performance improvement when done naively!)
While this decreased frame time significantly, it also effectively slowed down irradiance sampling and made changes in lighting very noticeable, since new samples accumulated slowly. We needed other ways to reduce computation. But more about that later.

Let's just quickly continue with part 2. from above: (actually rendering the voxels):

In addition to optimizing irradiance caching there was also the other part of rendering, which was... well... actually rendering the voxels. This started off with completely ray-traced cubes using an AABB-based bounding volume hierarchy with Morton code sorting as the spatial acceleration structure, then switching to a kd-tree with surface-area heuristics, which improved rendering time.
Then came the JCGT paper "A Ray-Box Intersection Algorithm and Efficient Dynamic Voxel Rendering", which made me change the method of generating primary rays from ray tracing to a hybrid "point-sprite rendering plus AABB/ray intersection test" approach, which yet again improved rendering performance. Other experiments were done using the geometry shader to generate the convex hull polygon of a screen-space projected box, which again improved rendering times for close voxels.
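For context, the baseline such optimized methods improve on is the classic slab-method ray/AABB test (a standard textbook version shown here, not the paper's variant):

```java
public class RayAabb {
    // Slab method: intersect the ray (origin o, inverse direction invDir)
    // with the three axis-aligned slabs of the box [min, max]. The ray hits
    // the box iff the intersection of the three slab intervals is non-empty
    // and not entirely behind the origin.
    // Caveat: can produce NaN when the origin lies exactly on a slab plane
    // of an axis with zero direction component.
    public static boolean intersect(double[] o, double[] invDir,
                                    double[] min, double[] max) {
        double tNear = Double.NEGATIVE_INFINITY, tFar = Double.POSITIVE_INFINITY;
        for (int i = 0; i < 3; i++) {
            double t1 = (min[i] - o[i]) * invDir[i];
            double t2 = (max[i] - o[i]) * invDir[i];
            tNear = Math.max(tNear, Math.min(t1, t2));
            tFar  = Math.min(tFar,  Math.max(t1, t2));
        }
        return tNear <= tFar && tFar >= 0;
    }
}
```

Precomputing the inverse direction once per ray avoids three divisions per box test, which matters when traversing thousands of BVH/kd-tree nodes.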
Since rendering now was done using a rasterization algorithm, there were other typically rasterizer-friendly optimizations that became applicable now, such as frustum culling (which is really easy to do when already having a spatial acceleration structure) and occlusion culling.
The former I implemented with the kd-tree and glMultiDrawArrays. The latter is more involved and I started with a Hi-Z occlusion culling pass and a GPU-driven rendering approach using glMultiDrawArraysIndirectCount, using a compute shader to perform hierarchical culling driven by the kd-tree nodes using the previously generated Hi-Z mip chain to do occlusion culling. That part is currently still in work.

Now let's get quickly back to the first part: irradiance caching. Distributing the work of sampling irradiance for all voxels over multiple frames, and doing that more often for nearby voxels, is all good and well, but still too much to do. The optimal solution would be to determine the actual set of visible voxels, which by experimentation in a typical scene with the player standing on the ground can be around 10 to 6,000 voxels, depending on the view distance and the terrain. That on average is far better than the initial potentially visible 32,000 voxels of the whole voxel cluster. So, how do we determine the set of visible voxels?

The idea is to use the rasterizer and a simple render pass (at possibly half the resolution - assuming a visible voxel needs to conservatively account for at least 4 fragments) to first render the scene with depth write/testing enabled and using a simple uint render buffer to store the ID of the voxel for each fragment. Z-testing will ensure that only the IDs of the closest visible voxels will remain in the render buffer. Then, we do a full-screen quad render pass with a simple fragment shader sampling the voxel ID from the render buffer/texture and writing a flag/byte into a Shader Storage Buffer Object (SSBO) at the position/index of the sampled ID. As the voxel ID we simply use the gl_VertexID to avoid having to drag in an additional uint vertex attribute for each voxel.
Since the SSBO we write to only needs to be as large as there are actual voxels in the voxel cluster currently being processed, we only need roughly a 32KB SSBO.
After the full-screen quad pass, and a memory barrier of course, we end up with an SSBO which contains a 1 at every index of a visible voxel and a 0 everywhere else. Here you could ask: wait, why do we need a full-screen pass to finally sample and write the visible voxels' IDs? Can't we do that in the initial voxel rendering fragment shader by using early fragment tests and letting the fragment shader only be executed for visible voxels? Actually, no. First, we would need to sort the voxels from front to back, which would be doable, and then the fragment shader would only execute for actually visible fragments. But sadly, the point sprite rasterization approach needs to kill/discard fragments and also write to gl_FragDepth, so early fragment tests cannot be enabled. This is sad in particular because it also means we cannot use the Turing extension GL_NV_representative_fragment_test to have the fragment shader execute for only a single fragment per voxel. So, we end up with a final full-screen quad pass which fills the SSBO with the visibility flags.

This can then be used to efficiently answer the question: "Is voxel with ID X visible or not?"
But we need an efficient answer to another question: "What is the set of voxels that are visible?"
Why do we pose the question this way? We could just loop over all 0..N voxel IDs and ask the first question at every such index.
This has to do with how we are going to dispatch a compute grid to sample irradiance only for those voxels. While we _could_ launch a compute grid over all voxels in the cluster and branch out for invisible voxels (which we query from the SSBO using the first question), in the worst case this does not improve performance at all compared to "just compute irradiance for all voxels", because of reduced occupancy of the thread groups. If, in a warp of 32 threads, only 1 thread is used to compute one visible voxel, then due to the lock-step SIMT execution, the thread group will finish in the same time as it would had all 32 threads been doing work. And since we blocked 31 of the 32 threads of the warp from doing other useful work, such as computing irradiance for other actually visible voxels, this approach is bad.

A better way is to collect the IDs of the voxels in a compact way into a buffer. So, we need to filter our list of all voxels to only retain the visible voxels. Given a "predicate" list of flags that tell whether a voxel at index X is visible (in our SSBO), and given the list of all voxels, filter the list of voxels to only contain those voxels for which predicate(X) == true/1.
There is a popular algorithm that allows us to do this in parallel, which is called Parallel Prefix Sum or simply "Scan". For more information on that and the article which I used to base my implementation off of, see: https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch39.html
Actually, my implementation is an adaptation of the ideas mentioned in the section "39.2.5 Further Optimization and Performance Results" optimized for filtering with 8-bit inputs (loading 32 elements per thread instead of only 8 as has been done by David Lichterman) together with the ideas mentioned in "39.2.4 Arrays of Arbitrary Size" to allow computations over multiple thread groups.

Parallel Prefix Sums can be used to implement filtering and stream compaction. I use two compute shaders for that where the predicates buffer is filled by the mentioned full-screen rendering pass writing a byte to the SSBO at the index of a visible voxel. The first compute shader pass then computes the prefix sum for all thread groups, writing the partial per-thread-group results into a temporary SSBO and writing the total per-thread-group sum into another temporary SSBO. The second compute program then performs another prefix sum over the total per-thread-group sums and writes the final filtered voxels at their corresponding positions into an output result SSBO.
This result SSBO then contains only the visible voxels tightly packed. This allows us to launch a compute grid over only those visible voxels!
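The net effect of the two scan passes can be illustrated sequentially on the CPU (illustration only; the real version is the pair of compute shader passes described above): an exclusive prefix sum over the visibility flags directly yields each visible voxel's write position in the compacted output.

```java
public class StreamCompaction {
    // ids: all voxel IDs; visible: the 0/1 predicate flags (the SSBO).
    // Returns only the visible IDs, tightly packed, in order.
    public static int[] compact(int[] ids, int[] visible) {
        int[] scan = new int[ids.length];            // exclusive prefix sum
        for (int i = 1; i < ids.length; i++)
            scan[i] = scan[i - 1] + visible[i - 1];
        int total = ids.length == 0
                ? 0 : scan[ids.length - 1] + visible[ids.length - 1];
        int[] out = new int[total];
        for (int i = 0; i < ids.length; i++)
            if (visible[i] == 1)
                out[scan[i]] = ids[i];               // scan value = write index
        return out;
    }
}
```

On the GPU, the same scan is computed per thread group, a second pass scans the per-group totals, and each thread adds its group's offset before writing, which is exactly the "arrays of arbitrary size" scheme from the GPU Gems 3 chapter.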

This might have been a pretty quick fly-over of the ideas and algorithms so far, and a lot of important stuff hasn't even been mentioned, such as:
- how to actually allocate and order the voxels for efficient glMultiDrawArrays rendering when traversing the kd-tree
- how to make use of vendor-specific OpenGL extensions to improve memory footprint and shader runtime
- how I actually used directional irradiance cached Hemispherical Harmonics to implement view-dependent specular lighting
- how to do ray tracing against a kd-tree without a stack
- how to build a kd-tree with surface-area heuristics
- why atomic bitwise operations are not a good idea to save bandwidth
I will try to detail this more in followup posts.

 10 
 on: 2018-10-18 15:56:29 
Started by BurntPizza - Last post by KaiHH
Reloading of shaders at runtime sounds nice, definitely. With static (constant pool) strings this will of course not work. Most changes I had to do in the shaders also involved host/shader interface changes, especially when trying to find an optimal SSBO memory layout for BVH and kd trees. I had to play around with that a lot. So runtime reloading of shaders was simply not something I needed to look into in the past.
