  Show Posts
Pages: [1] 2 3 ... 119
1  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL3 and Android Redux on: 2017-01-19 20:31:21
Err, no OpenGL module in that list?

Also, I assume there's no hope for Vulkan support on Android?

EDIT: And even if Vulkan is a no-go, I'd still use the sh*t out of LWJGL3 with OGL ES on Android.
2  Discussions / Miscellaneous Topics / Re: What I did today on: 2017-01-19 20:29:32
today i learned - tex-coords stored in half-floats is a very bad idea.
If you don't have any wrapping, then normal unsigned normalized shorts could work up to a certain resolution. 16 bits give you decent-ish tex coord precision up to a resolution of 4096x4096 or so. You definitely don't want half floats, as their precision is uneven (it gets worse as the value approaches 1.0).
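A rough back-of-envelope sketch of why that is, in plain Java (the class and method names here are made up for illustration; the numbers are approximations):

```java
// Compare the addressable texel precision of two 16-bit formats at 4096x4096.
// unorm16: evenly spaced steps of 1/65535 across [0, 1].
// half float (fp16): 10 mantissa bits, so the step size grows with the value.
class TexCoordPrecision {
    // Smallest representable step of a 16-bit unsigned normalized value.
    static double unormStep() {
        return 1.0 / 65535.0;
    }

    // Step size (ulp) of a half float near the value x: 2^(exponent - 10).
    static double halfFloatStepNear(double x) {
        int exp = (int) Math.floor(Math.log(x) / Math.log(2));
        return Math.pow(2, exp - 10);
    }

    // How many texels one representable step spans at the given resolution.
    static double texelsPerStep(double step, int resolution) {
        return step * resolution;
    }
}
```

At 4096x4096, a unorm16 step covers about 1/16th of a texel, while a half-float step near 1.0 covers a full 2 texels, which is where the artifacts come from.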
3  Discussions / Miscellaneous Topics / Re: What I did today on: 2017-01-15 23:39:03
Finished my last exam! Now I just have a small essay left and I'm freeeeeee... until school starts again on Tuesday...
4  Discussions / Miscellaneous Topics / Re: What I did today on: 2017-01-04 16:31:28
@basil_: How did you solve the light bleeding?
5  Java Game APIs & Engines / OpenGL Development / Parallel/concurrent shader program loading on: 2016-12-26 02:19:32
TL;DR / SPOILER ALERT: You don't seem to be able to do stutterless shader program "streaming" in OpenGL. ARB_parallel_shader_compile is completely broken. The solution with the least amount of stuttering was using cached binaries.

Today I investigated parallel shader loading. Basically, the idea is to speed up loading time by compiling multiple shaders in parallel, ideally while the game is running and without causing stuttering.

In OpenGL, compiling a Shader object is actually a very cheap thing to do. This step pretty much just does some rough validation and makes sure the GLSL code is valid, but doesn't actually generate any runnable code. For a very big shader (shaders with massive loops to unroll for example), this usually takes <0.5ms per shader, ~0.2ms on average. Even more amazingly, if you compile a shader on a separate thread with a shared context it doesn't seem to interrupt rendering at all. This is nice, because it means we can do it in parallel to the game rendering without causing any stuttering.

The reason why Shader objects are so cheap to compile is that the actual generation of binary code is done when linking a Program object. This allows the driver to, for example, optimize away outputs of the vertex shader that aren't used by the fragment shader. This part is significantly more expensive, often costing more than 100ms per program to link. Therefore it makes a lot of sense to try to do this concurrently while the game is running, showing loading screens and information while linking the shaders. This is however difficult. Each call to glLinkProgram() seems to freeze ALL OpenGL contexts, with the rendering context getting stuck for long durations when swapping buffers. This causes a significant amount of stuttering.

An interesting note I made was that the driver does some EXTREMELY aggressive caching behind the scenes. If the same shader code has been compiled before, or a set of shaders has been linked together before, then compiling/linking the same code again is much faster. This speed-up persists even when recreating shader objects, and even after restarting the program! Compiling a cached shader takes less than 0.1ms, while linking a cached program takes around 2ms (50x faster!). This caching helps a lot in improving load times when the same shaders are loaded over and over, like a player would. It helps less for developers who modify their shaders a lot, but obviously that's not that big of a problem. However, even when the caching kicks in there is still significant stuttering caused by glLinkProgram().

There's an extension called ARB_parallel_shader_compile which gives you a means of hinting to the driver that you want it to compile and link using multiple threads. Only Nvidia "supports" this extension (I'll explain the quotes below) so far, but it was worth trying out. It only adds two things: the ability to hint how many threads the driver should use to compile/link, and a way of checking whether a shader/program has completed compiling/linking without actually waiting for the result. This should in theory allow us to compile much faster by using multiple threads, and possibly also remove the stuttering. However, my findings really sucked.

First of all, ARB_parallel_shader_compile should be "enabled" by default when supported (the default is "implementation chooses number of threads"), but Nvidia seems to require you to set the number of threads before it actually starts doing anything (even if you just set it to the default value). This was a bit annoying but easy to work around. What this extension seems to do in the Nvidia implementation is make glLinkProgram() no longer block. Instead, the program will only block if you do something that requires the result of the link (checking the compile result, getting the log, getting uniform locations, binding the program, etc.) before it is ready. The idea is to fire off all your glLinkProgram()s without querying the results, and then wait (drawing loading screens or whatever) until the new completion query says they're done. However, this is broken beyond belief:
 - It's unreliable. Half the time the driver ignores the preferred number of threads and just blocks in glLinkProgram() until the compilation is complete.
 - The driver doesn't actually start compiling anything until you actually do something that needs the result of a program link. In other words, you can fire off all glLinkProgram()s, sleep for 10 seconds and then try to check the link result (if it failed or succeeded) only to see the driver actually starting to link at that point instead of when glLinkProgram() is called, giving you the same stuttering but from glGetProgrami() instead.
 - You can trick the driver to start linking all "queued" programs by getting the link result of the first program you linked, causing the other programs to continue linking in the background. If you wait until the rest are done before querying any more programs, you won't get any noticeable amount of stuttering, besides the first program you linked...
 - ... except that's impossible to do, because the new function for querying whether linking is done is broken in Nvidia's implementation. It always returns false, so it's impossible to determine if a program is done linking without getting the link result. Since the time it takes to link a program differs so much depending on caching, it's also impossible to predict how long linking will take, making sleeping for a fixed amount of time infeasible as well.

In other words, it's completely useless.

Another thing I tried was getting program binaries and loading those instead. This removes the need for compiling Shader objects and attaching them to Programs, allowing you to "link" a full program by just throwing a binary file at it instead. When using this function, the ARB_parallel_shader_compile extension seemed to be completely ignored, with all "linking" happening in glProgramBinary(). However, you always get the same performance as if the driver had fully cached the program (i.e. ~2ms/program) instead of 100ms+, without the risk of the program getting evicted from the cache or anything like that. There was still a considerable amount of stuttering in the rendering while loading with it, though. Still, at a cost of only ~2ms per program it'd be possible to load a couple of programs per frame without dropping any frames at 60 FPS.
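A binary cache like that needs a reliable cache key so an edited shader never picks up a stale binary. Here's a minimal sketch of just the keying logic, with the GL side (glGetProgramBinary/glProgramBinary) left out; the class name and the "shadercache/" directory are made up for this sketch:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Key the program-binary disk cache by hashing the exact shader sources
// (include any #defines in the strings), so changed code misses the cache.
class ShaderCacheKey {
    static String cacheKey(String... shaderSources) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (String src : shaderSources) {
                md.update(src.getBytes(StandardCharsets.UTF_8));
            }
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest()) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Where the binary for this vertex+fragment pair would live on disk.
    static String cachePath(String vertSrc, String fragSrc) {
        return "shadercache/" + cacheKey(vertSrc, fragSrc) + ".bin";
    }
}
```

On load you'd check whether the file exists and feed it to glProgramBinary(), falling back to a normal compile+link (and writing the binary out) on a miss or a GL_FALSE link status, since drivers may reject binaries after an update.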
6  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL3 and Android Redux on: 2016-12-20 23:38:53
I will support you as much as I can, but my time and skills are a bit limited in this area. I can however provide pretty big test programs once my Vulkan abstraction makes some more progress.
7  Game Development / Performance Tuning / Re: Most optimal collision detection algorithms on: 2016-12-14 16:50:34
Can confirm, uniform grids are awesome for collision detection!
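For the curious, a minimal sketch of what a uniform-grid broad phase for circles can look like (all names here are made up for illustration; a real implementation would clear and rebuild or incrementally update the grid each frame):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Uniform-grid broad phase: insert each circle into every cell its bounding
// box overlaps, then only test pairs that share at least one cell.
class UniformGrid {
    final float cellSize;
    final int cols, rows;
    final List<List<Integer>> cells = new ArrayList<>();
    final List<float[]> circles = new ArrayList<>(); // {x, y, radius}

    UniformGrid(float width, float height, float cellSize) {
        this.cellSize = cellSize;
        this.cols = (int) Math.ceil(width / cellSize);
        this.rows = (int) Math.ceil(height / cellSize);
        for (int i = 0; i < cols * rows; i++) cells.add(new ArrayList<>());
    }

    int clamp(int v, int max) { return Math.max(0, Math.min(v, max)); }

    void insert(float x, float y, float r) {
        int id = circles.size();
        circles.add(new float[]{x, y, r});
        int x0 = clamp((int) ((x - r) / cellSize), cols - 1);
        int x1 = clamp((int) ((x + r) / cellSize), cols - 1);
        int y0 = clamp((int) ((y - r) / cellSize), rows - 1);
        int y1 = clamp((int) ((y + r) / cellSize), rows - 1);
        for (int cy = y0; cy <= y1; cy++)
            for (int cx = x0; cx <= x1; cx++)
                cells.get(cy * cols + cx).add(id);
    }

    // All colliding pairs, deduplicated across cells via a packed-long set.
    Set<Long> collidingPairs() {
        Set<Long> pairs = new HashSet<>();
        for (List<Integer> cell : cells) {
            for (int i = 0; i < cell.size(); i++) {
                for (int j = i + 1; j < cell.size(); j++) {
                    int a = cell.get(i), b = cell.get(j);
                    float[] ca = circles.get(a), cb = circles.get(b);
                    float dx = ca[0] - cb[0], dy = ca[1] - cb[1];
                    float rr = ca[2] + cb[2];
                    if (dx * dx + dy * dy <= rr * rr)
                        pairs.add(((long) Math.min(a, b) << 32) | Math.max(a, b));
                }
            }
        }
        return pairs;
    }
}
```

The cell size is the main tuning knob: roughly the diameter of your average object works well, since bigger cells mean more pairwise tests and smaller cells mean more cells per object.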
8  Game Development / Performance Tuning / Re: Has anyone tried creating an optimized HashMap implementation? on: 2016-12-14 00:45:39
Yeeeaaah, I just realized that in the use case I was thinking about, a simple array would be good enough and solve all my problems. Embarrassing... Still, it's an interesting problem. I may take a stab at optimizing it at some point.
9  Game Development / Performance Tuning / Re: Has anyone tried creating an optimized HashMap implementation? on: 2016-12-13 22:31:25
Hmm. I feel like the horrible linked list implementation that HashMap uses should be even worse for cache hits though. Is there any way of improving that part?
10  Game Development / Performance Tuning / Has anyone tried creating an optimized HashMap implementation? on: 2016-12-13 22:14:48
Hey, everyone.

As the title says, has anyone tried writing a HashMap implementation that is faster than the one included in Java? How much faster was it? I am mainly interested in getting elements out and don't really modify the hashmap a lot.
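The usual alternative people reach for is open addressing with linear probing over flat arrays, which avoids HashMap's per-entry node objects and the pointer-chasing they cause. A minimal, resize-free sketch for int keys and values (all names made up; the reserved EMPTY key and the fixed power-of-two capacity are simplifications — keep the load factor under ~0.7 or lookups of absent keys degrade):

```java
// Open-addressing (linear probing) int->int map: keys and values live in
// flat arrays, so a lookup walks contiguous memory instead of linked nodes.
class IntIntMap {
    private static final int EMPTY = Integer.MIN_VALUE; // reserved key
    private final int[] keys;
    private final int[] values;
    private final int mask;

    IntIntMap(int capacityPow2) { // capacity must be a power of two
        keys = new int[capacityPow2];
        values = new int[capacityPow2];
        mask = capacityPow2 - 1;
        java.util.Arrays.fill(keys, EMPTY);
    }

    private int indexOf(int key) {
        int i = (key * 0x9E3779B1) >>> 16 & mask; // cheap integer hash
        while (keys[i] != EMPTY && keys[i] != key) i = (i + 1) & mask;
        return i;
    }

    void put(int key, int value) { // no resizing in this sketch
        int i = indexOf(key);
        keys[i] = key;
        values[i] = value;
    }

    int get(int key, int defaultValue) {
        int i = indexOf(key);
        return keys[i] == key ? values[i] : defaultValue;
    }
}
```

Since you mostly read and rarely modify, this is close to the best case for open addressing: a get is typically one hash, one or two array reads, and no allocation.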
11  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL uses wrong graphics card on: 2016-12-13 01:00:06
Can confirm that the Nvidia DWORD works. Presumably the AMD DWORD works too.
12  Game Development / Performance Tuning / Re: Renderer Optimization on: 2016-12-12 12:02:28
Rendering can be bottlenecked by different things. The key is to optimize the bottleneck or you won't actually see a performance increase.

CPU optimizations:

>Avoid garbage generation in your entire game. Routine operations should not generate garbage or you will get regular stuttering.

>If your bottleneck is your game logic, then the rendering performance isn't very relevant to increasing performance, so focus on optimizing the game logic in that case.

>The OpenGL driver has quite a bit of CPU overhead. The cost of a draw call (glDrawArrays/Elements() and their variations) is proportional to the OpenGL state you changed since the previous draw call. An FBO bind is very expensive, shader binds are pretty expensive, while texture, uniform and VAO binds are cheap. The frequency of said binds should be inversely proportional to the cost of them: You usually just bind a handful of FBOs each frame, you may have a few shaders used for each FBO, and you could be using a crapload of textures, uniforms and VAO binds for each shader.

>Batch draw calls together. Each draw call has a cost, so reducing the number of draw calls (especially reducing them to a constant number regardless of the number of objects) is an extremely powerful optimization.

>If your CPU performance is the bottleneck, consider offloading work to the GPU. Better balance = better overall performance. This is especially useful for 2D games as they generally have very low GPU load and very high CPU load. For example, instead of drawing a tilemap as quads (2 triangles each), you can draw a huge quad with a shader that chooses a texture for each tile. This requires less work for the CPU by having the GPU do slightly more work.

>Precompute things that you can precompute, for example the vertex data of tile maps and voxel worlds. If you need to dynamically update stuff, chunk together data so you only need to regenerate the affected chunks when a change happens (instead of the entire world).
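The chunking idea in that last point can be sketched like this (the actual vertex/VBO rebuild is stubbed out with a counter, and all names are made up):

```java
// Split the tile map into fixed-size chunks; editing a tile only marks its
// chunk dirty, so only that chunk's vertex data is rebuilt, not the whole map.
class ChunkedTileMap {
    static final int CHUNK_SIZE = 16; // tiles per chunk side
    final int chunksX, chunksY;
    final int[] tiles;
    final boolean[] dirty;
    final int tilesX;
    int rebuilds = 0; // counts chunk rebuilds, for illustration only

    ChunkedTileMap(int tilesX, int tilesY) {
        this.tilesX = tilesX;
        chunksX = (tilesX + CHUNK_SIZE - 1) / CHUNK_SIZE;
        chunksY = (tilesY + CHUNK_SIZE - 1) / CHUNK_SIZE;
        tiles = new int[tilesX * tilesY];
        dirty = new boolean[chunksX * chunksY];
    }

    void setTile(int x, int y, int tile) {
        tiles[y * tilesX + x] = tile;
        dirty[(y / CHUNK_SIZE) * chunksX + (x / CHUNK_SIZE)] = true;
    }

    // Called once per frame: regenerate vertex data only for dirty chunks.
    void update() {
        for (int i = 0; i < dirty.length; i++) {
            if (dirty[i]) {
                rebuilds++; // real code would rebuild this chunk's VBO here
                dirty[i] = false;
            }
        }
    }
}
```

Two edits inside the same chunk cost one rebuild, and an untouched map costs zero, which is the whole point.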

GPU optimizations:

>The GPU has lots of dedicated hardware that can bottleneck your game. Figuring out where the bottleneck lies is key to improving GPU performance. Use OpenGL timer queries to figure out how much time each operation takes instead of playing guessing games.

>Using shaders is not slow. GPUs nowadays emulate all fixed functionality features with shaders anyway, so it's not like they're "not used".

>If applicable, make sure you use indexed rendering. Indexed rendering gives the GPU a chance to reuse vertices instead of running the vertex shader for the same vertex twice, which can save you a lot of performance. For example, in a simple grid each vertex is reused an average of 5 times.

>If your fragment shader simply reads a single texture and multiplies together some colors, you're most likely bottlenecked by the ROPs (which write the result of the shader to the FBO), so don't be afraid to use more complicated shaders if it can save you work somewhere else. Adding more code to the fragment shader is free up to a certain point.

If you gave us more information on your general use case, then we could give you more specific tips. At this point, the only real tips we can give you is 1. find bottleneck and 2. fix bottleneck.
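The indexed-rendering numbers above are easy to check: an N x N quad grid has (N+1)^2 unique vertices but its index buffer references 6*N^2 of them (two triangles per cell, three indices each). A small sketch, with made-up names:

```java
// Build the index buffer for an N x N quad grid and measure how many times
// the average vertex is referenced (i.e. how often the post-transform cache
// can reuse it instead of re-running the vertex shader).
class GridIndexing {
    static int[] gridIndices(int n) {
        int[] idx = new int[n * n * 6];
        int p = 0;
        for (int y = 0; y < n; y++) {
            for (int x = 0; x < n; x++) {
                int v0 = y * (n + 1) + x; // top-left vertex of this cell
                int v1 = v0 + 1;          // top-right
                int v2 = v0 + (n + 1);    // bottom-left
                int v3 = v2 + 1;          // bottom-right
                idx[p++] = v0; idx[p++] = v2; idx[p++] = v1; // triangle 1
                idx[p++] = v1; idx[p++] = v2; idx[p++] = v3; // triangle 2
            }
        }
        return idx;
    }

    static double averageReuse(int n) {
        return gridIndices(n).length / (double) ((n + 1) * (n + 1));
    }
}
```

For a 16x16 grid that's 1536 index references over 289 vertices, about 5.3 references per vertex, approaching 6 as the grid grows.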
13  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-12-11 19:56:38
The GPU essentially seems to show the same artifacts as if it was overclocked, so I don't think it's the VRAM.
14  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-12-11 18:18:57
Which model/chip is that? If it's broken anyway, you might consider to bake it in the oven. There's a slight chance that this helps.
It's a Gigabyte GTX 770 4GB card. I hope you can understand my skepticism at putting a graphics card in an oven.

>What is it supposed to solve? What is supposed to happen?
>Time, temperature in the oven?
15  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-12-10 16:23:15
Summary of last week or so:
 - Blender plugins for exporting models and animations essentially feature complete.
 - Experimented with motion blur stuff, but no good results.
 - Finished my school assignments for the year. Just one interview/presentation thing left... and two massive exams and an essay right after Christmas. "Who needs a winter break, eh?" - Swedish education.
 - Seemingly fried one of my GPUs. It starts to artifact and driver crash under load. My second one seems fine though. I really don't have the cash to replace, well, anything at all at the moment. My frigging fridge broke the other day too and I lost a huge amount of food there too... FML

16  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL 3 - Assimp bindings on: 2016-11-30 20:27:09
@Elect: I'm not trying to discourage you when it comes to jAssimp; I'm just saying that I don't think the benefits (no native code, smaller file size) outweigh the drawbacks (porting effort, maintenance as new features come out, etc).

I will be evaluating Assimp and jAssimp at some point decently soon, presumably mid-to-late January next year. I'm completely drowned in school work ATM and my exams are in January, but once they're done I'm gonna try to sit down, look through the documentation and get some simple stuff working. I'm fairly sure that jAssimp will be easier to use than the huge amount of interaction with native code and data that the Assimp binding will need, so I may try jAssimp first. However, do realize that if we were to adopt jAssimp for our model importing we'd need pretty speedy bug fixes and possibly feature requests (not likely though).

I really don't envy your project, man. 3D modeling formats are a frigging nightmare. That's why I'm worried about how long you'll be able to "last" after a couple of months of debugging why that FBX file doesn't look like the spec says it should look, or why that .blend file is completely different when a new version of Blender comes out, or why the hell you have to implement an entire programming language parser to decode a .x 3D model file. It's nothing about you personally; I just know that *I* wouldn't be able to do that for long. >___> If you seem to be able to handle it and maintain it, jAssimp would be my first choice.
17  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL 3 - Assimp bindings on: 2016-11-30 19:15:42
A rant worthy of a medal.

As for the Java vs native discussion - there is no such thing as time "wasted" on a Java implementation. I like things being in Java because they're easy to debug, easy to build, easy to maintain, easy to fix, easy to steal bits of code from. Native code adds a thick layer of impedance over all of that. And native code is pretty frequently fraught with the exact sorts of bugs that we switched to Java to avoid in the first place.

Cas

Thanks. =P

That's a good point, and I agree with that opinion 100%. However, in this case we're talking about replicating existing functionality in Java. I'm assuming that Assimp has its fair number of users already, so most bugs should be fixed by now. Manually converting the whole thing to Java would just add an extra layer of possible bugs on top of the original code, and cause my engine to rely on even more people that I have no real influence over. I see no real benefit in this situation from a Java version of jAssimp.
18  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL 3 - Assimp bindings on: 2016-11-30 18:46:49
You are clearly not the target, theagentd
Do you mean Assimp or jAssimp?

Assimp itself is perfect for my use. My problem has always been that 3D model formats contain way too much useless information that I have to sift through, and all the actually useful information is in a wrong format and extremely hard to decode unless your engine/tool uses the exact same data/object structures as the 3D model format assumes.

Example: I don't want to have 30 different light types (or any lights at all), crazy quaternion interpolation bézier parameters, complete scene graphs with interleaved mesh "nodes" and skeleton "nodes" and texture "nodes" with 637 different matrices in each node used for different purposes with no clear explanation for what does what, support for four different animation systems, baked in textures in the model file, and even more shit. I just want a number of meshes grouped by material (1 mesh per material), simple key framed skeleton animation support and that's it.

Unless you've written a rendering system on the level of Blender or Maya or something like that, 75% of all 3D file formats are so much overkill and/or store the data in a completely different, much more complicated way that they'll make you cry yourself to sleep for months after trying (and failing) to implement them (assuming implementing the rendering system itself didn't already break you), and such a rendering system would perform so badly for a game that it'd be completely useless. The other 25% of 3D formats don't support skeleton animation and are useless to me.

Since all 3D model formats are overkill and/or not immediately compatible with my rendering systems, I need to do a lot of conversion. This means implementing a crapload of very complicated functionality: Flattening a scene graph and merging mesh nodes based on material but not the ones that are separately animated, make up material names, figuring out how this particular model format f**ked up bone matrices, bone-vertex mappings, bone indices, etc, baking the IK-joints into the normal joints, implementing advanced joint/quaternion interpolation that exactly matches whatever the program used to generate the file format uses, etc. This takes months if not years to implement for a complete model format, and it's slow as f**k to do every time you import something. If you are ever planning on releasing anything to the general public, you NEED to use a custom file format that only contains what you need in the right format already, or you're looking at several minutes of load times for each level.

Nobody has unlimited time or money, no matter what their goal is. This is not something that will improve my games if I do it "right". I just want something that works (as in does everything I need it to do) with the minimal amount of work invested by me. Investing more time isn't gonna make my games better. Currently, my by far simplest solution was to skip working with existing 3D model formats completely and just writing a Blender plugin that extracts what I need and generates a file in my custom model format. It took me around 2 weeks to get the essentials working.

My worry here is that if Assimp can support all of these completely crazy Blender, Maya and Autodesk formats, in which you can fit the entirety of the Toy Story 1-53 movies, their PC games, console ports of said games, music files, autographs and dinosaur fossil discoveries since the medieval ages, then Assimp must support essentially everything that every single model format has ever supported. This would essentially mean that Assimp's internal objects are by far the hardest thing ever to port to your own minimal custom format. If I need a team of 10 people and 2 years to extract 5 pieces of information from it, then it doesn't actually solve any real problems. In the end, all you need is a single solid workflow from one 3D modeling program to your game, and everything else can be funneled through that path.

In light of all this, I cannot agree that jAssimp is significantly more useful than Assimp in any regard. On Android, the performance/memory usage/load time argument is 10x more important, so a custom file format is even more necessary. This means that (j)Assimp's only real use is in engine tools/editors running on beefy PCs used to export stuff into your own custom formats, at which point the convenience of having a pure Java implementation is far outweighed by having an automatically generated binding to a maintained project with no manual work/maintenance.

EDIT: Please be aware that this is my opinion from an indie game developer perspective. If you are just making a small game for fun where you just want to be able to throw in whatever 3D model format you have lying around........ you'll still be doing 99% of the job of writing your own custom format. No matter what you use, you will be taking the data returned by (j)Assimp when it loads the file and extract the stuff you need into your own custom Model object that you can actually render. From there on, it's a very small step to add binary serialization so you can save that Model object to disk and load it directly again.
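That last step really is small: once the data is in your own Model object, plain DataOutputStream/DataInputStream round-trips it to disk with no parsing at load time. A sketch, where the Model layout (positions + indices) and all names are made up for illustration:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;

// Binary-serialize a minimal Model: length-prefixed flat arrays, nothing else.
// Loading is just reading the arrays straight back in.
class ModelIO {
    static class Model {
        float[] positions;
        int[] indices;
    }

    static byte[] toBytes(Model m) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            DataOutputStream out = new DataOutputStream(bytes);
            out.writeInt(m.positions.length);
            for (float f : m.positions) out.writeFloat(f);
            out.writeInt(m.indices.length);
            for (int i : m.indices) out.writeInt(i);
            return bytes.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static Model fromBytes(byte[] data) {
        try {
            DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
            Model m = new Model();
            m.positions = new float[in.readInt()];
            for (int i = 0; i < m.positions.length; i++) m.positions[i] = in.readFloat();
            m.indices = new int[in.readInt()];
            for (int i = 0; i < m.indices.length; i++) m.indices[i] = in.readInt();
            return m;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

A real format would add a magic number and version int at the front so old files fail loudly instead of deserializing garbage.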
19  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL 3 - Assimp bindings on: 2016-11-30 16:40:54
Assimp is slow. Converting the same thing to Java is still slow. If you're loading random file formats into your final game build, you're doing it wrong. Just convert your models to your own internal format and load those instead. Much much faster, no unnecessary data, no library asset --> game asset converting overhead, etc. IMO, we don't need a pure Java implementation of an offline tool.
20  Java Game APIs & Engines / Engines, Libraries and Tools / Re: LWJGL 3 - Assimp bindings on: 2016-11-28 21:15:28
Looks nice, but forgive my skepticism... 3D models are a nightmare. I'm worried that writing a converter to my own format will essentially be the same amount of work as writing an importer for one of the big formats, like .blend, .fbx or something. That's impossible enough to do for one format, but I'll look into it when I have the time, AKA not soon. =/
21  Game Development / Newbie & Debugging Questions / Re: Updating VBO bogging down system on: 2016-11-21 04:39:34
How much data are we talking about here?
22  Game Development / Newbie & Debugging Questions / Re: Starcraft 2 Map and Path Finder. on: 2016-11-20 21:22:29
Hmm. I guess they could be using a 2D navmesh. Most games have simple heightmap-based terrain, which fits very well with grid-based pathfinding; navmeshes are mostly useful for more complicated problems, where you have multiple overlapping layers/floors or need to pathfind in a general 3D environment. My intuition tells me that there's no way SC2 uses one, but I guess it is possible.

Again, what SC2 uses for finding the path from point A to B is really not that important. The overall approach/algorithm is the same; the rest is just changing the data or optimizing performance. Examples: grid vs navmesh, A* vs Dijkstra's, hierarchical pathfinding, etc. You shouldn't be aiming to copy SC2, but rather to code the pathfinder that you need. In most cases a simple Dijkstra's/breadth-first-search is good enough. A* is just an optimization. If you have extremely long paths, hierarchical pathfinding can improve performance a lot.

The actual complexity and problem that SC2 solved in a new way was keeping the units grouped together as they follow their paths. This really has nothing to do with HOW you calculate the path. It's simply a different AI for following the path while maintaining formation.
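For completeness, the "good enough" baseline mentioned above, a plain breadth-first search on a walkable grid, fits in a few lines; A* is the same skeleton with a priority queue and a heuristic. All names here are made up:

```java
import java.util.ArrayDeque;

// Breadth-first search on a walkable grid. Returns the shortest path length
// in 4-directional steps, or -1 if the target is unreachable.
class GridBfs {
    static int pathLength(boolean[][] walkable, int sx, int sy, int tx, int ty) {
        int h = walkable.length, w = walkable[0].length;
        int[][] dist = new int[h][w];
        for (int[] row : dist) java.util.Arrays.fill(row, -1);
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        dist[sy][sx] = 0;
        queue.add(new int[]{sx, sy});
        int[] dx = {1, -1, 0, 0}, dy = {0, 0, 1, -1};
        while (!queue.isEmpty()) {
            int[] c = queue.poll();
            if (c[0] == tx && c[1] == ty) return dist[c[1]][c[0]];
            for (int d = 0; d < 4; d++) {
                int nx = c[0] + dx[d], ny = c[1] + dy[d];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h
                        && walkable[ny][nx] && dist[ny][nx] == -1) {
                    dist[ny][nx] = dist[c[1]][c[0]] + 1;
                    queue.add(new int[]{nx, ny});
                }
            }
        }
        return -1;
    }
}
```

Recovering the actual path is the standard trick of walking back from the target through strictly decreasing dist values; the formation-keeping behaviour discussed above would then layer on top of whichever search you pick.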
23  Game Development / Newbie & Debugging Questions / Re: Starcraft 2 Map and Path Finder. on: 2016-11-20 03:56:22
They do NOT use a navmesh. That would make zero sense in a game with 2D gameplay (everything is technically confined to a 2D heightmap after all).

The article has a few dead image links, but it's still correct. Basically, they use a modified version of pathfinding that helps with flocking. If you remember SC1, you'll remember how it was impractical to send units on long trips, because after pathfinding around a couple of corners your massive zergling rush would now be a long line of zerglings walking into the enemy base one by one and getting picked off easily. This is due to each zergling pathfinding independently, bumping into each other and then forming a line as they pretty much all have identical paths and need to wait for each other after a couple of corners. SC2 solved this by adding flocking behaviour to the pathfinding, so that the units maintain their "formation" as they follow their paths. This prevents a line of units from forming. How they actually find paths is not really a big deal. The simplest solution, A* on a grid, will be more than good enough for you.
24  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-11-17 16:18:22
More texturing.

25  Discussions / Miscellaneous Topics / Re: What I did today on: 2016-11-17 03:59:20
Terrain shenanigans.

Simple linear texture blending. Looks blurry as hell.

Advanced heightmap blending. Note how the grass fills in the cracks as it fades in instead of being blended in.

Automatic PN-triangle tessellation. Notice how nice and round the edges look instead of the obvious sharp edges from before.
26  Game Development / Newbie & Debugging Questions / Re: LWJGL GL30.glGenVertexArrays() returns duplicate ids on: 2016-11-09 13:42:09
Did you read my first post? You are using a shared OpenGL context (SharedDrawable) to load things on a background thread, then using those things in the main thread later. The problem is that not everything is shared between contexts. The rule of thumb is that objects that contain data (textures, buffers, samplers, renderbuffers, queries) are shared, but objects that contain references to other objects (framebuffer objects, VAOs, a couple of others) are NOT shared. This means that although the data objects you've loaded in the background (VBOs, textures) are visible to the main context, the VAOs and FBOs you use to actually read and write to them are NOT.

In other words:
 - If you create a buffer on the background thread with ID 1, that exact same buffer can be used on the main thread as well.
 - If you create a VAO on the background thread with ID 1, it is NOT shared with the main thread. If the main thread then creates its own VAOs, it is free to return ID 1 "again", because ID 1 refers to two different VAOs depending on the OpenGL context.

The solution is to load your buffers and textures on the background thread, then create VAOs and FBOs on the main thread afterwards.
27  Game Development / Newbie & Debugging Questions / Re: LWJGL GL30.glGenVertexArrays() returns duplicate ids on: 2016-11-09 12:47:42
glGenVertexArrays(IntBuffer) fills the given buffer with new VAO IDs. If your buffer can fit 5 values, glGenVertexArrays(IntBuffer) will create 5 VAOs for you.

The point of this is to improve performance. If you're creating thousands of VAOs you can avoid an expensive JNI call and involving the driver for each and every one of those VAOs, but in reality there is a VERY minor difference in performance. Since it increases the complexity of your code a lot, it is much easier to use the simplified
int glGenVertexArrays()
method that returns a single new VAO as an int directly.
28  Java Game APIs & Engines / Java 2D / Re: Anything new on vsync in window mode? on: 2016-11-05 22:25:48
Chill, it's a good question.

As far as I know, if you use BufferStrategy you'll get V-sync enabled if it's available. There's no explicit control for it AFAIK.
29  Game Development / Newbie & Debugging Questions / Re: Polygon corners with GL_LINE on: 2016-11-03 12:24:44
Argh, I suck at this typing thing. xD

If you're missing V1, you can just set L1 to L2 for example.

EDIT: Also, I strongly STRONGLY recommend JOML. JOML is the best thing that has happened to Java game development in a decent while. It's fast, KaiHH, who develops it, is super responsive and helpful if you're missing some feature, it generates zero garbage when used correctly and it has pretty much anything you could ever ask for. However, in this case you're probably better off just using raw floats, as the vectors are kinda unnecessary here.
30  Game Development / Newbie & Debugging Questions / Re: Polygon corners with GL_LINE on: 2016-11-03 12:11:17
That looks 100% correct! Good job! I'm glad you got it working.

Note that there are a couple of special cases you should be aware of.

 - As the angle between L1 and L2 gets smaller, x tends to infinity. This makes sense if you want a pointy end to the line (as is the case for your box rendering), but it may not always give the best result. In essence, it can cause the line to be drawn as much longer than it really is, which could be problematic.
 - Since you need four points in total to draw a single line between the middle two points, it can become difficult to draw line strips, where the first and last lines are missing one neighbor. In this case, you can either introduce a special case or generate the missing points from the ones you have.

Here is some more information:

 - Since you have 100% control of the vertices, it actually becomes possible to pick a line width per vertex. This allows you to draw continuous lines with varying thickness, which can look really nice.
 - Again, since you're emulating lines with triangles, there is no limit on the line width. Usually it's limited to between 1 and 16 or something like that.
 - This system does not work with line smoothing.
 - You can however use both MSAA and shader-based anti-aliasing to achieve extremely high-quality anti-aliasing.
 - Although the line width can be under 1 pixel, doing so will cause bad aliasing. It's a good idea to clamp the line width to 1 pixel and instead reduce the alpha of thin lines. Example: line width is 0.5 ---> make line width 1.0 and multiply alpha by 0.5.
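The miter math and the sub-pixel-width trick from the list above can be sketched like so (assuming normalized segment directions d1 and d2; the class and method names are made up for illustration):

```java
// Two helpers for triangle-based line rendering: the miter offset at a join
// between two segments, and clamping sub-pixel widths into alpha instead.
class LineJoin {
    // Returns {offsetX, offsetY}: displace the shared vertex by +/- this to
    // get the two outer points of the join. d1 and d2 must be normalized.
    // Note: as the angle between the segments shrinks, the offset length
    // tends to infinity (the pointy-end case discussed above).
    static float[] miterOffset(float d1x, float d1y, float d2x, float d2y,
                               float halfWidth) {
        // Average direction of the join, then take its normal as the miter.
        float tx = d1x + d2x, ty = d1y + d2y;
        float len = (float) Math.sqrt(tx * tx + ty * ty);
        tx /= len;
        ty /= len;
        float mx = -ty, my = tx;
        // Scale so the line edges stay parallel to their segments: project
        // the miter onto the normal of d1 and divide the half width by it.
        float n1x = -d1y, n1y = d1x;
        float scale = halfWidth / (mx * n1x + my * n1y);
        return new float[]{mx * scale, my * scale};
    }

    // Width under 1 px: draw at 1 px and move the lost coverage into alpha.
    static float[] clampWidthToAlpha(float width, float alpha) {
        if (width < 1.0f) return new float[]{1.0f, alpha * width};
        return new float[]{width, alpha};
    }
}
```

For a straight continuation the offset is just the segment normal times the half width, and for a 90-degree corner it grows by a factor of sqrt(2), which matches the geometric picture in the earlier posts.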