Per poly texture?
JeramieHicks « Posted 2004-09-30 19:09:02 »

I'm sort of new to Xith, but our last 3D system could put a unique texture on each polygon of a mesh object. All the Xith demos I've seen so far apply a single texture to an entire object. Is there any way to do the former?
kevglass « Reply #1 - Posted 2004-10-01 05:09:25 »

You'll either need to create a special texture that contains all your textures and map the texture coordinates appropriately

or

Create a Shape3D for each polygon.


There is a good reason for this restriction, honest. Smiley Well, actually I believe it's due to the ultimate aim of having the eventual OpenGL calls draw all polygons that share a texture in one fell swoop.
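(For reference, a minimal sketch of the coordinate remapping behind the first option, assuming the combined texture is a simple N x N grid of equally sized tiles; the AtlasMapper class is invented for illustration, not part of Xith3D.)

Code:
// Remaps texture coordinates defined against a single tile into the
// corresponding sub-rectangle of a packed atlas laid out as an N x N grid.
public final class AtlasMapper {
    private final int tilesPerSide;

    public AtlasMapper(int tilesPerSide) {
        this.tilesPerSide = tilesPerSide;
    }

    /** Converts a (u, v) pair defined against one tile into atlas space. */
    public float[] toAtlas(int tileIndex, float u, float v) {
        float tileSize = 1.0f / tilesPerSide;
        float offsetU = (tileIndex % tilesPerSide) * tileSize;
        float offsetV = (tileIndex / tilesPerSide) * tileSize;
        return new float[] { offsetU + u * tileSize, offsetV + v * tileSize };
    }
}

With that, all polygons can stay in a single Shape3D sharing one texture; only their texture coordinates differ.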

Kev

Bombadil « Reply #2 - Posted 2004-10-01 09:53:36 »

Quote
You'll either need to create a special texture that contains all your textures and map the texture coordinates appropriately

Yes.
This is what many good 3D artists do anyway: pack as many different textures of a model as possible into one texture page to avoid texture context switching.
java « Reply #3 - Posted 2004-10-01 10:47:15 »

Quote
This is what many good 3D artists do anyway: pack as many different textures of a model as possible into one texture page to avoid texture context switching.
True to a certain degree for models, but certainly not for the level geometry. And splitting an indoor level or even a terrain into a bunch of shapes by their textures is a really bad approach IMHO, because it makes a lot of tasks (like collision detection and response) unnecessarily difficult. Why won't Xith let the programmer decide how many textures a shape should use!?
Mithrandir « Reply #4 - Posted 2004-10-01 14:18:01 »

Actually, you're wrong about the collision detection. Having everything as a single big lump of geometry makes collision detection horribly slow. With a lot of separate objects, you can quickly cull almost everything before getting down to the per-triangle intersection tests. BSP, cells and portals, octrees, etc. all rely on splitting the geometry into small, spatially localised sets of data to reduce the number of tests needed.
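(A small sketch of that broad-phase idea, using a hand-rolled axis-aligned bounding box; the Aabb and BroadPhase names are invented for illustration. Only objects whose bounds overlap the query volume ever reach the expensive per-triangle tests.)

Code:
import java.util.ArrayList;
import java.util.List;

// Axis-aligned bounding box used as a cheap broad-phase filter.
final class Aabb {
    final float minX, minY, minZ, maxX, maxY, maxZ;

    Aabb(float minX, float minY, float minZ, float maxX, float maxY, float maxZ) {
        this.minX = minX; this.minY = minY; this.minZ = minZ;
        this.maxX = maxX; this.maxY = maxY; this.maxZ = maxZ;
    }

    boolean overlaps(Aabb o) {
        return minX <= o.maxX && maxX >= o.minX
            && minY <= o.maxY && maxY >= o.minY
            && minZ <= o.maxZ && maxZ >= o.minZ;
    }
}

final class BroadPhase {
    // Returns only the objects whose bounds touch the query volume;
    // everything else is culled without touching a single triangle.
    static List<Aabb> cull(List<Aabb> objects, Aabb query) {
        List<Aabb> hits = new ArrayList<Aabb>();
        for (Aabb box : objects) {
            if (box.overlaps(query)) {
                hits.add(box);
            }
        }
        return hits;
    }
}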

The site for 3D Graphics information http://www.j3d.org/
Aviatrix3D JOGL Scenegraph http://aviatrix3d.j3d.org/
Programming is essentially a markup language surrounding mathematical formulae and thus, should not be patentable.
JeramieHicks « Reply #5 - Posted 2004-10-01 14:44:53 »

Quote
all rely on splitting the geometry into small, spatially localised sets of data


Yeah, but having to split your dataset based on different textures is NOT splitting your dataset based on spatial proximity. The two are mutually exclusive.
java « Reply #6 - Posted 2004-10-01 15:23:19 »

Quote
Actually, you're wrong about the collision detection. Having everything as a single big lump of geometry makes collision detection horribly slow. With a lot of separate objects, you can quickly cull almost everything before getting down to the per-triangle intersection tests. BSP, cells and portals, oct trees etc, all rely on splitting the geometry down to small sets of data spatially located to reduce the number of tests needed.
That's not exactly what I was talking about. Storing all the level geometry in one shape doesn't mean you can't use spatial subdivision on it. It's even easier IMO. If you split your level into a lot of texture-separated objects, you either have to store the corresponding object of each polygon in your octree (for example) or (even worse) calculate an octree for each one.
Allowing just one texture per shape really is a bad decision IMHO. Imagine a Doom3 level that had been built that way. How many shapes would that require? Gazillions? And what for? Just to minimize texture state changes?
abies « Reply #7 - Posted 2004-10-01 16:32:30 »

Quote

Allowing just one texture per shape really is a bad decision IMHO. Imagine a Doom3 level that had been built that way. How many shapes would that require? Gazillions? And what for? Just to minimize texture state changes?


It all depends on the definition. A Shape in Xith3D/Java3D is a 'collection of geometries with the same appearance'. You cannot make it contain multiple appearances - that's against the definition.

What you want is a different kind of object with a many-to-many relationship between geometries and appearances. Let's call it CompositeShape. You would probably need some kind of index for each polygon, pointing to the correct appearance of the given CompositeShape, and the engine would split the polygons into separate shapes internally and group them itself to minimize state changes. Things get more complicated for dynamic shapes - it would have to be done in a smart way to avoid copying data on every update.

Now, the question is, do you really need a CompositeShape? So far you have used two arguments: level geometry modelling and level geometry collisions.
For modelling, if your 3D editor mixes all textures in one big shape, think about a loader/converter that splits it into separate shapes per texture. How much work can that be? One page of code? Anyway, I doubt you will put much of a level into one shape, because of culling. You should partition your level anyway to avoid swamping the GPU with non-visible objects.
As for collisions, there is no requirement to use the same shapes for collision that you use for rendering. If you have some kind of super-optimized collision representation of your level geometry, use it for collisions as a whole - there is no need to split it per texture.
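(A rough sketch of that loader/converter idea, assuming the imported model is just a list of triangles that each carry a texture index assigned by the modelling tool; SourceTriangle and PerTextureSplitter are invented names, and building an actual Shape3D per group is left out.)

Code:
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// One triangle of the imported model, carrying the texture index the
// modelling tool assigned to it.
final class SourceTriangle {
    final float[] vertices;   // 9 floats: x, y, z for each of the 3 corners
    final int textureIndex;

    SourceTriangle(float[] vertices, int textureIndex) {
        this.vertices = vertices;
        this.textureIndex = textureIndex;
    }
}

final class PerTextureSplitter {
    // Groups triangles by texture so each group can become one shape with
    // a single appearance, which is what the renderer wants.
    static Map<Integer, List<SourceTriangle>> split(List<SourceTriangle> triangles) {
        Map<Integer, List<SourceTriangle>> groups = new HashMap<Integer, List<SourceTriangle>>();
        for (SourceTriangle tri : triangles) {
            groups.computeIfAbsent(tri.textureIndex, k -> new ArrayList<SourceTriangle>()).add(tri);
        }
        return groups;
    }
}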

Artur Biesiadowski
JeramieHicks « Reply #8 - Posted 2004-10-01 19:37:06 »

Our system is based on user-created content from the average computer user of today, similar to how the Web made it possible for anyone to be a publisher. This means our content isn't ultra-optimized by professional 3D artists familiar with texture packing, etc. We're finding our users work with applications like TrueSpace which allow per-polygon textures, and I don't want it to be my fault that their content isn't acceptable to our system. Additionally, there's a future design that includes painting directly within our system, where we provide the 3D mesh and they can paint the polygons individually to taste; so it's not just a matter of file loading, it's also a dynamic reconstruction issue.

So I guess in such a case I just make each polygon a Shape3D?

Is there more overhead in switching between hundreds of individual sub-objects than in checking an if-then switch per polygon? Is it possible to create a custom object at the application level that feeds its polygons directly to Xith, or is that too low-level for the application layer?
abies « Reply #9 - Posted 2004-10-01 20:22:55 »

Will you vary only textures, or also other states, per polygon? For example, can single polygons in a shape be wireframe, lit/unlit, have different shaders, etc.? If yes, then one shape per polygon is probably a good choice - it is going to be painfully slow anyway. If you vary _only_ textures but share all other properties, then it will probably be better to have a specialized object type for that.

If we are talking about painting on objects dynamically, maybe per-object textures are the answer? Prerender everything needed into one big texture (a single one per object) and then apply all updates to it according to the UV mapping of each polygon. For painting with a brush it is probably the only choice anyway, unless you mean 'select one of several predefined textures for each polygon'.
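(A minimal sketch of painting into one per-object texture, assuming the texture image is held in a java.awt.image.BufferedImage and the hit polygon's UV position is already known; uploading the updated image back to the scene graph is not shown. TexturePainter is an invented name.)

Code:
import java.awt.Color;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

final class TexturePainter {
    private final BufferedImage texture;

    TexturePainter(BufferedImage texture) {
        this.texture = texture;
    }

    // Paints a filled circle onto the texture at the given UV position,
    // which shows up on every polygon mapped to that region of the image.
    void paint(float u, float v, int brushRadius, Color color) {
        int x = Math.round(u * (texture.getWidth() - 1));
        int y = Math.round(v * (texture.getHeight() - 1));
        Graphics2D g = texture.createGraphics();
        g.setColor(color);
        g.fillOval(x - brushRadius, y - brushRadius, 2 * brushRadius, 2 * brushRadius);
        g.dispose();
    }
}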

Artur Biesiadowski
java « Reply #10 - Posted 2004-10-02 10:48:55 »

@abies: I don't think it's a very good argument to say that something is against the definition and therefore not possible. Maybe it's not the feature but the definition that is questionable in that case?!
Anyway, I agree that there are workarounds. As you mentioned, I could write my own loader to split my level into separate shapes, and I can also use different geometry and spatial subdivision for my collision detection than I do for rendering. But I don't think that's the point of using a 3D engine like Xith. It should offer such things and not force me to reinvent the wheel here.
Another example: imagine a 3D editor where the user can load textures and assign them to any polygon he wants (i.e. he's texturing an untextured level). If I understand the current definition correctly, this is almost impossible with Xith, because it would require every polygon to be a single shape (or you split and create shapes every time he changes a texture, which sounds even worse). For a level with 20,000 polygons (which is not much), this would require up to 20,000 shapes. Am I right?
I'm sorry, but if I am (even to a degree), I don't think the current approach is a very good one. Other engines I know do it differently and, IMHO, better. I think this is something that should be rethought.
abies « Reply #11 - Posted 2004-10-02 13:10:40 »

Quote
@abies: I don't think it's a very good argument to say that something is against the definition and therefore not possible. Maybe it's not the feature but the definition that is questionable in that case?!


Xith3D tries to stay mostly compatible with Java3D as far as class concepts are concerned. Shape3D is well defined in Java3D - so IMHO, if you want something different, it should be a different class, instead of putting very different functionality into the old class.

Quote
For a level with 20,000 polygons (which is not much), this would require up to 20,000 shapes. Am I right?

Inside an editor - yes. Inside a game - they could be grouped into bigger entities.

Quote

I'm sorry, but if I am (even to a degree), I don't think the current approach is a very good one. Other engines I know do it differently and, IMHO, better.


Can you tell me which 3D engines allow assigning a different texture to each polygon in the same shape for models? I know it happens for levels - but for models?

I think Shape3D is good enough for most models. You just need a different entity for representing the world geometry.

Artur Biesiadowski
java « Reply #12 - Posted 2004-10-02 15:39:57 »

OK, I now understand that Shape3D behaves the way it does for compatibility reasons with Java3D. However, a class that allows different textures per polygon would be a very valuable addition IMO.
JPCT (http://www.jpct.net/forum/viewtopic.php?t=88) is something I'm using from time to time, and it offers support for both ways. By default you assign textures to the whole object, but the 3DS loader can handle multiple textures per object, and the API lets you obtain a PolygonManager from each object which offers this option too. That's very convenient for coloring picked polygons, for example. You need only about two lines of code to highlight the polygon under the mouse pointer by simply changing its texture.
Mithrandir « Reply #13 - Posted 2004-10-03 01:09:24 »

It's not just compatibility with Java3D that's the issue here. It's compatibility with the graphics card as well as the rendering API. Even if Xith3D did have some object type that allowed per-polygon texturing, it would still have to break the entire thing up into a lot of sub-objects, each with its own separate geometry and texture. That's the way both OpenGL and Direct3D work. So saying it is that way by definition is absolutely correct.

It's going to be very inefficient doing per-polygon texturing, as that work would potentially have to be done every frame. There'll be a lot of data replication, due to the need to create multiple copies of each vertex for each polygon that uses it with a different texture, and so forth. You're far, far better off doing that at the application level, where you can control the entire process and do it most efficiently for your application's requirements. To give you an example, my .3ds loader takes the per-polygon texturing model and converts it to the same setup that Xith3D uses - a single shape per texture. That takes about 300 lines of code. Now think about how that would affect performance if it had to be executed for every polygon, every frame.

Also, saying that spatial locality is not a problem shows a fundamental lack of understanding of how geometry optimisation is used to gain massive performance increases through standard algorithms. It's pretty clear that what you are calling a shape and what everyone else knows a shape to be are very different. What you're confusing is the difference between a content development tool/environment and a realtime graphics rendering API. The requirements and abstractions are very different. Saying that a content developer needs this and thus a programmer's API should support it is like saying chalk and cheese are both a nice-tasting after-dinner snack. They're very different beasts. Your job as the tool writer is to work between those two worlds and map the content developer's worldview into a realtime 3D graphics worldview in the most efficient manner possible for your particular application. What you need and what I need, given the same data set, are going to be very different when it comes to the optimised rendering code.

Mithrandir « Reply #14 - Posted 2004-10-03 01:18:35 »

Quote
Yeah, but having to split your dataset based on different textures is NOT splitting your dataset based on spatial proximity. The two are mutually exclusive.


They are not exclusive by any stretch of the imagination. Objects using the same texture usually are spatially located in the same place. Think about the walls inside a building - you'll have a heap of polys all using the same set of textures located together. Off that you'll have another room with another set of textures - possibly the same, possibly different. If they're the same, you could keep them in the same shape object if you wanted to, but it's more efficient not to, as they can't be seen from this room, so culling them before they ever get to rendering is a good strategy. If your graphics card is not transformation-bound in your application, then leaving all the polygons that share a single texture in a single shape object (and thus a single vertex array) may well be the higher-performance option than spatially separating objects with the same texture. You can use either technique based on your own application and hardware needs, but they are not exclusive.

JeramieHicks « Reply #15 - Posted 2004-10-03 07:46:32 »

Quote
Objects using the same texture usually are spatially located in the same place.


I don't see how you can make this assumption. By that rationale, every tree in a forest could be contained in a single object (since all trees could share the same texture), but it's silly to assume that every tree would be close to another.

Likewise, my company logo texture is used on a few dozen different avatar models, and it doesn't make sense to create a 1-polygon object just for the logo on their backs when there are hundreds of avatars spread over several thousand square kilometers...

Quote
Saying that a content developer needs this and thus a programmer's API should support it


My job is to fulfill the product requirements. If the requirement is that typical users can paint per-polygon textures onto their avatars, then it's my job to figure out how. If the Xith answer is "make each of the 2000 polygons into 2000 separate objects" then that, to me, is not a feasible answer. You're telling me there's less overhead in switching between 2000 Shape3Ds than in any other possible solution?

Our current engine sorts the polys by texture in each object at load time. To draw, it switches to the first texture, calls OpenGL to draw polys A through B, switches to the second texture, calls OpenGL to draw polys C through D, and so on. Best case, we switch to one texture and make a single OpenGL call to draw polys A through Z, and I fail to see how anything could be more efficient than that. Worst case, we make NumPoly calls, each with a texture switch and a single-polygon draw, and I hardly see how anything else could do better than that either.
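(A hedged sketch of that draw loop with JOGL-style calls; glBindTexture and glDrawArrays are real GL entry points, but the TextureRange bookkeeping is invented for illustration and the package name of the GL interface has changed between JOGL releases.)

Code:
import javax.media.opengl.GL;

// One contiguous run of vertices that share a texture after the
// load-time sort; first/count index into the already-bound vertex arrays.
final class TextureRange {
    final int textureId;
    final int first;
    final int count;

    TextureRange(int textureId, int first, int count) {
        this.textureId = textureId;
        this.first = first;
        this.count = count;
    }
}

final class SortedDrawer {
    // Binds each texture once and draws its whole run of triangles, so the
    // number of state changes equals the number of textures, not polygons.
    static void draw(GL gl, TextureRange[] ranges) {
        for (TextureRange r : ranges) {
            gl.glBindTexture(GL.GL_TEXTURE_2D, r.textureId);
            gl.glDrawArrays(GL.GL_TRIANGLES, r.first, r.count);
        }
    }
}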
William Denniss « Reply #16 - Posted 2004-10-03 09:17:52 »

Quote

Our current engine sorts the polys by texture in each object at load time. To draw, it switches to the first texture, calls OpenGL to draw polys A through B, switches to the second texture, calls OpenGL to draw polys C through D, and so on. Best case, we switch to one texture and make a single OpenGL call to draw polys A through Z, and I fail to see how anything could be more efficient than that. Worst case, we make NumPoly calls, each with a texture switch and a single-polygon draw, and I hardly see how anything else could do better than that either.


It is more efficient because you are only dealing with one texture instead of n. OpenGL only needs to load and store one texture. I believe you will find this approach much faster even when using raw OpenGL calls.

Will.

java « Reply #17 - Posted 2004-10-03 09:42:59 »

Quote
To give you an example, my .3ds loader takes the per-polygon texturing model and converts it to the same setup that Xith3D uses - a single shape per texture. That takes about 300 lines of code. Now think about how that would affect performance if it had to be executed for every polygon, every frame.
Sorry, but I really don't get what you are trying to tell me here. That changing the texture state requires 300 lines of code? Surely not.

Quote
It's pretty clear that what you are calling a shape and what everyone else knows a shape to be are very different. What you're confusing is the difference between a content development tool/environment and a realtime graphics rendering API...
I'm calling a shape whatever Xith calls a shape, and I couldn't care less what the programmer of the API does with this shape/model/whatever internally. I'm just interested in a feature that makes sense to me (and not just me). Maybe it's hard to implement and doesn't fit nicely into the current code, but should I really care about that as the "user" of the API? Back to my example: if your loader loads a level with a single texture, it will make it either one shape containing all the polys, or one shape for every poly because I somehow told it to do so. The latter solution is totally out of the question for me. That's far away from the optimized state you are talking about. The former solution explodes when I try to change the texture of a single polygon. I would have to reorganize the whole shape and split it into two separate ones, and so on and so on.
So I think you are basically telling me that I can use Xith for writing a game where almost everything is static (texture-wise), but not for writing the tools for creating it!?
In my opinion, an engine's task is to abstract away from the underlying rendering layer and its requirements. If it forces me to build weird workarounds to get what I want (if it's reasonable, which it is in this case), it has failed in this respect IMHO.
abies « Reply #18 - Posted 2004-10-03 11:52:21 »

Quote

So I think you are basically telling me that I can use Xith for writing a game where almost everything is static (texture-wise), but not for writing the tools for creating it!?


You can dynamically change a texture by painting on it. You can add decals with bullet holes. You just cannot randomly change the texture of single polygons without making them separate shapes. I understand that you need this functionality, but I have yet to see any game which would use such functionality. I even played some kind of childish point-and-color game a few years ago, but it allowed you to color an entire shape with one color/texture - not specific polygons (you would probably have a problem explaining to a child why a sphere is not a sphere but a bunch of polygons).

The problem is that your use case is so strange to most people that there is trouble grasping why exactly it is needed. Can you explain the exact cases where it is needed?

Artur Biesiadowski
java « Reply #19 - Posted 2004-10-03 15:25:58 »

Quote
The problem is that your use case is so strange to most people that there is trouble grasping why exactly it is needed. Can you explain the exact cases where it is needed?
Well, "needed" is a bit too much because it's not something i'm making money with. Not even something that will evolve into a real game. It's just a fun project that i'm working on from time to time to learn things about 3d. The idea is this: There are birds hidden in a tree (with around 1000-2000 leafs) and you have to hit them using a fireball  Grin
I started this little project using the mentioned jpct engine and i wanted the leafs to burn down when hit by the fireball. In the earlier version of the engine, i had to create an object for every leaf too. Just like i would have to in xith . That's because the engine was able to detect the collision itself but it couldn't tell me which leafs were affected when i stored them all in one big object. That was quite slow. According to my profiling, most of the time was spent in the collision detection between my fireball and all the leafs. A newer version introduced the possibility to get the list of affected polygons (i.e. the leafs) from a collision. With this, i can easily maintain my own list of burning leafs, let them burn some time by changing the texture to an animated fire and finally i change the texture to a "burned leaf" one. (You can spot the birds better through the burned leafs  Wink )
That's what i'm doing and that's what i'm using this feature for. I don't really need xith to implement it, because i don't plan to use xith ATM. I was just wondering why something so obviously needed (to me at least) isn't possible with this engine.
And finally, albeit i'm not writing one, i think it's very usefull for texturing work in an editor. But i already mentioned that.
abies « Reply #20 - Posted 2004-10-03 15:51:34 »

Quote
There are birds hidden in a tree (with around 1000-2000 leaves) and you have to hit them using a fireball  Grin
I started this little project using the mentioned jPCT engine, and I wanted the leaves to burn down when hit by the fireball. [...] With this, I can easily maintain my own list of burning leaves, let them burn for some time by changing the texture to an animated fire, and finally change the texture to a "burned leaf" one. (You can spot the birds better through the burned leaves  Wink )


Create a texture with multiple stages of leaf burning in various places and then change the texture coordinates of the particular leaves. All leaves will be drawn in a single OpenGL call, and animating the 'burning' is as simple as changing the tex coords on a few vertices. I would personally think about some kind of particle system for the leaves - they could whirl from the blast or just the wind, fall down, etc.

As you can see, there is no need for per-polygon textures here, and with the current classes you will get much better performance than with any texture-switching solution.
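(A small sketch of that idea, assuming the burn stages sit side by side in one horizontal strip texture and each leaf is a quad whose four (u, v) pairs live in a flat float array in bottom-left, bottom-right, top-right, top-left order; the LeafBurner class and the array layout are invented for illustration.)

Code:
// Advances one leaf quad to a given burn stage by sliding its texture
// coordinates into the matching tile of a horizontal strip atlas.
final class LeafBurner {
    private final float[] texCoords;   // 2 floats (u, v) per vertex, 4 vertices per leaf
    private final int stages;          // number of burn stages packed side by side

    LeafBurner(float[] texCoords, int stages) {
        this.texCoords = texCoords;
        this.stages = stages;
    }

    void setStage(int leafIndex, int stage) {
        float tileWidth = 1.0f / stages;
        int base = leafIndex * 8;       // 4 vertices * 2 components per leaf
        for (int corner = 0; corner < 4; corner++) {
            // corners 1 and 2 sit on the right edge of the tile in the
            // assumed bottom-left, bottom-right, top-right, top-left order
            boolean rightEdge = (corner == 1 || corner == 2);
            texCoords[base + corner * 2] = stage * tileWidth + (rightEdge ? tileWidth : 0f);
            // the v coordinate at texCoords[base + corner * 2 + 1] is untouched
        }
    }
}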

Artur Biesiadowski
JeramieHicks « Reply #21 - Posted 2004-10-03 15:55:15 »

Well, the first thing we have is that we're displaying the MOLA data of the surface of Mars. There are 1,095,761,920 polygons representing a virtual surface area of 222,534,366 sq km. We break the data into 64x64-poly plates that are roughly 30 km square in size, and load 16 plates at a time, for a total of 65K quads (for the world alone) loaded at any given time. Each quad is roughly half a kilometer in size. We subdivide into 131K tri polys.

I'd like to break the plates into smaller sizes, but there are already 250K plates.

Now you can't tell me that you can load a single texture covering 30 km of area (the size of one grid plate) at a resolution that won't make you sick looking at it from 5 feet away when you stand on the surface.

So we paint each half-kilometer-square polygon individually to obtain the necessary texture resolution. We use different textures (sometimes generated at runtime to ensure randomness) on each polygon so you don't see pattern repetition. I wouldn't think the correct solution is to create 131K separate objects, one per tri poly, or a single texture 16K x 16K pixels in size (a 256x256 texture times 64x64 polys).

We also need to be able to highlight a specific polygon. This isn't just "draw a decal on the spot", it's "here's exactly the polygon represented by these 3 data points", because we're after data visualization accuracy. For that, I just mapped a highlight texture onto it.

We're already dealing with 131K polygons just for the background, and even that isn't enough for direct visualization, so we've "sacrificed" to get down that low. That doesn't yet include the 50 avatar models representing the professor and his students as they stand on Mons Olympus, either.

We also use the same 3D engine for our MMORPG, our virtual shopping center, a virtual physics and chemistry lab, and a 3D game development platform. So this isn't a "just use Java3D for large-scale visualization instead" problem.

Quote
You can add decals with bullet holes... but I have yet to see any game which would use such functionality.


Well, at least that explains the resistance I've met to every suggestion so far. I'm also the one who asked whether it would be possible to have a simple callback for the application to provide textures to the model loaders, rather than the model loaders taking it upon themselves to assume how to load textures directly (since we don't have, or even know of, the textures at the time of model loading), but I was met with equal resistance of "why in the world would you ever not have a texture on disk available immediately when the model is being loaded?" (besides, of course, needing to obtain the texture from a delayed source [i.e., streamed from the network at runtime], needing to procedurally generate the texture at runtime, having the texture already in application memory from a previous operation, keeping stats on texture usage frequency given an arbitrary model set, ...). Not everything written in 3D is Quake, but it seems like we get resistance for any suggestion that doesn't pertain to a game.

The mentality here isn't "Sure, we can figure out how to provide that capability for your needs"... it's "Why in the world would you ever possibly need to do that in a game"... and that's one of the big reasons why we haven't moved to Xith yet. We're also looking at jME and jPCT. I know you guys aren't paid, that we're not paying you for support, that this is all a volunteer effort, and I understand. But on most projects of this nature we deal with, the devs are eager to see their system used in fields beyond their original aspirations, and giddy to include new capabilities that they themselves never imagined. We shouldn't have to justify why we need certain capabilities.

Quote
It is more efficient because you are only dealing with one texture instead of n. OpenGL only needs to load and store one texture. I believe you will find this approach much faster even when using raw OpenGL calls.


Sure, and your two-seater car runs faster than my 50-person passenger bus. Speed comparisons are moot when we're talking about systems with two different capabilities. Saying "it's faster if you just drop that capability" doesn't say much when you need that capability in the first place, now does it? This is the equivalent of "our program runs faster because it only prints Hello World".
abies « Reply #22 - Posted 2004-10-03 16:48:36 »

Quote
Well, the first thing we have is that we're displaying the MOLA data of the surface of Mars. There are 1,095,761,920 polygons representing a virtual surface area of 222,534,366 sq km. We break the data into 64x64-poly plates that are roughly 30 km square in size, and load 16 plates at a time, for a total of 65K quads (for the world alone) loaded at any given time. Each quad is roughly half a kilometer in size. We subdivide into 131K tri polys.


We could start with that. You have very specific requirements - and now we can start to think about how to solve the problem.

Quote

The mentality here isn't "Sure, we can figure out how to provide that capability for your needs"... it's "Why in the world would you ever possibly need to do that in a game"... and that's one of the big reasons why we haven't moved to Xith yet. We're also looking at jME and jPCT. I know you guys aren't paid, that we're not paying you for support, that this is all a volunteer effort, and I understand. But on most projects of this nature we deal with, the devs are eager to see their system used in fields beyond their original aspirations, and giddy to include new capabilities that they themselves never imagined. We shouldn't have to justify why we need certain capabilities.


Well, you should. Xith3D is a game engine, not a 'visualise-everything-and-a-bit-more' engine. This doesn't mean other things cannot be added, but there has to be a specific request for them, together with an explanation of why it is needed - because maybe a different solution can be found, one which fits well into the engine AND solves your problem (see my solution to the burning leaves).

Now, back to your problem. Do you have 131K different textures in the system at once? And you perform a texture context switch 130 thousand times per frame, not to mention uploading all these textures to the GPU each time if needed? How do you store the textures in main memory?

On a side note, I suppose you have already investigated this possibility, but just in case - have you tried a big main texture with a detail texture on top of it?

Artur Biesiadowski
JeramieHicks « Reply #23 - Posted 2004-10-03 17:16:31 »

We've tried everything from 1 texture per plate (which, given a 256x256 texture, results in each texture pixel being 100+ meters in size) up to 1 texture per poly (which is 131K textures and completely infeasible). So we looked into using 4 textures, 8 textures, etc. Now, to prevent pattern repetition, you have to distribute the different textures accordingly, like the colored squares of a chessboard. As a result, if we lump all the polys with a similar texture into one object, it's like lumping all the red squares of a chessboard together... they collectively occupy a huge surface area with equally massive holes in between, which plays havoc with collision detection and picking. Since the textures are evenly distributed, each "texture set" eventually occupies the entire plate (just as the "set of red squares" covers the entire surface area of the chessboard, equally with the black squares). That's why, when somebody said "objects with similar textures are usually spatially close to each other", I can say that, given our project, a polygon may have 2000 "texture twins", each of which may be 50 kilometers away with five different 10-kilometer holes between them.

As for choosing the number of textures (which affects how many polys use each texture), we haven't settled on a good value yet. Basically we'd like to push as close to 1 texture per poly as the user's machine can handle (for maximum resolution), but naturally we don't get anywhere near that in actuality.

I'd understand resistance if I were asking for something unreasonable, far beyond feasibility, etc. But the few things I've asked for (a texture-loading callback, a capability for per-poly textures, etc.) seem like fairly simple issues, even if they aren't for the mainstream folks writing Quakes. The static seems to be far more along the lines of "you'd never need that in a Quake" rather than "that's only a few lines of code, we can add it even if we don't use it ourselves". If only game-specific features are going to be considered, then you need to bill Xith as a "game API" rather than as a "lean scenegraph renderer".
nuntius « Reply #24 - Posted 2004-10-03 18:30:02 »

Note: In Open Source projects, change usually happens when it fits within the currently-conceived framework, and you are willing to do it yourself.  Mailing lists are good for finding ways to work with the current system.  The same holds for corporate projects, but they have a well-established system to pay for both changes and support.

Regarding the specified problem...
Problem: The desire to load a palette of numerous discrete textures and then pseudo-randomly map them onto a terrain map of Mars. A reasonably sized texture map for all of Mars looks horrible when zoomed in, while one that looks good up close exceeds the GPU's capabilities when zoomed out.

Proposed solution: Load a discrete texture per polygon.  Load N textures of X*Y resolution.  Pseudo-randomly map them to discrete polygons in an object.  Restructure the Xith API to allow individual polygons in an object to have unique, separately-loaded textures.

Xith solution: Use texture coordinates to map each polygon.
Load 1 texture of N*X*Y resolution (so (N*X)*Y or X*(N*Y) or some other arrangement).  Pseudo-randomly map texture coordinates to discrete polygons in an object.

Analysis:
In either method, the same amount of data has to be loaded to display the same amount of detail.  If either method fails due to excessive texture size, then the other method would also fail.

For this specific problem, it sounds like level-of-detail (LOD) features should be used to manage the texture-mapping problem. Using this concept, a system can be set up to fractally (or recursively) map the terrain as one gets closer, hence eliminating the need to trade off between 130K textures and decent resolution. You will probably want to define different texture sets for each level. Using mipmap levels may help. Simply divide "big" polygons into smaller ones as you get closer...

Conclusion: The need for 1 texture per polygon is caused by a questionable design decision (wanting 130K unique plates at once) rather than a fundamental limitation in the Xith API.



*****
Regarding the "delayed texture loading/texture callback" situation...  The purpose of this is to allow for the creation of Xith objects before their texture is loaded/determined, correct?  How about creating dummy objects with a standard texture, and then fixing them with the correct texture later?  com.xith3d.scenegraph.Texture.setImage may help.

In this way, the programmer has more control over how and when to update textures than any cookie-cutter callback/delayed display routine could provide.

As for a paint-by-color system, a 256x256 pixel rainbow image could provide the palette, and the x-y coordinates of each color would be specified as the three TextureCoordinates for each polygon as it is painted. This scheme makes all 16-bit colors available. It might be better to use a GIF-like indexed color table, though. Offhand, I don't remember how big a texture can become before it slows down standard graphics hardware.
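(A tiny sketch of that palette idea, assuming a 256x256 palette image; the whole trick is mapping a chosen palette pixel to the (u, v) pair assigned to every vertex of the painted polygon. PalettePicker is an invented name.)

Code:
// Maps a pixel position in a 256x256 colour palette texture to the (u, v)
// pair assigned to each vertex of a polygon painted with that colour.
final class PalettePicker {
    private static final int SIZE = 256;

    static float[] toTexCoord(int paletteX, int paletteY) {
        // sample the centre of the chosen palette pixel
        float u = (paletteX + 0.5f) / SIZE;
        float v = (paletteY + 0.5f) / SIZE;
        return new float[] { u, v };
    }
}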
JeramieHicks « Reply #25 - Posted 2004-10-03 19:16:43 »

Thank you for a reasoned analysis of my problem. However, I'm not quite sure I understand.

A single polygon of the Mars data is roughly half a kilometer in size. Drawn at full size (i.e., the user is standing on the surface), even a 256x256 texture on that single poly is just barely sufficient resolution. If we assume we allow a 5% repetition factor, that's still 20 separate images, and that only barely covers the fact that the 16 neighboring polys won't have the same graphic as the current one. 20 images at 256x256 each, packed, makes for one seriously large single texture, somewhere between 1024x1024 and 1280x1280. If we assume that UV mapping from a single texture is the solution to this problem, what's the largest texture that Xith can handle? Which would be easier on the graphics card, one _insanely_ large texture with UV mapping, or 20 smaller textures with per-poly mapping?

As far as the delayed texture loading... I'm not sure what you mean.
1) I don't know which textures to download/generate unless the model loaders tell me what they need.
2) The model loaders don't tell me which textures are needed.
3) The model loaders fail to load because the textures are unavailable.

My recommendation seemed simple enough: make it an option for the application to be responsible for providing the textures, and have the model loaders simply request them from the application. The default can be that the model loaders attempt to load from disk directly, for backwards compatibility. The implementation would be as simple as a model loader saying: "if I have a TextureProvider assigned, ask it for the texture I need; else, try to load from disk myself". I think it's an error on the part of a model loader to make assumptions about where, when, and how textures are made available at runtime.
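(A sketch of that proposed hook; the TextureProvider interface and the surrounding loader code are hypothetical, not an existing Xith3D API.)

Code:
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;
import javax.imageio.ImageIO;

// Hypothetical callback a model loader could consult instead of reading
// texture files from disk itself.
interface TextureProvider {
    /** Returns the image for the named texture, or null to fall back to disk loading. */
    BufferedImage getTexture(String textureName);
}

// Hypothetical loader-side lookup: ask the provider first, then fall back
// to the old disk-based behaviour for backwards compatibility.
final class LoaderTextureLookup {
    private final TextureProvider provider;   // may be null

    LoaderTextureLookup(TextureProvider provider) {
        this.provider = provider;
    }

    BufferedImage resolve(String textureName) {
        if (provider != null) {
            BufferedImage img = provider.getTexture(textureName);
            if (img != null) {
                return img;
            }
        }
        return loadFromDisk(textureName);
    }

    private BufferedImage loadFromDisk(String textureName) {
        try {
            return ImageIO.read(new File(textureName));
        } catch (IOException e) {
            return null;
        }
    }
}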

The application, once it knows which textures are needed, can always provide a temporary dummy texture and update it later with the real thing.
nuntius « Reply #26 - Posted 2004-10-03 20:37:43 »

FWIW: The reason for Xith's "change texture coords" instead of "change polygon textures" is that the first is usually accelerated in the GPU while the second isn't.


> Which would be easier on the graphics card, one _insanely_ large texture with UV mapping, or 20 smaller textures with per-poly mapping?

Either one would be bad. Which one is worse depends on hardware details and the exact sizes involved. In other words, as long as the texture fits into the proper cache, the large UV-mapped texture is faster. When it becomes too big, the smaller textures will be faster.

I remember seeing benchmarks/guidelines for various texture sizes, but I don't remember the numbers.  It should be a fairly simple benchmark to code, though I don't have the time right now.  A conservative estimate is probably around 32x32 to 64x64 pixels, depending on the graphics card generation.


>  A single polygon of the Mars data is roughly half a kilometer in size.

Herein lies the problem.  Assume the user's monitor is 1024 pixels wide, and they are looking straight down with a field of view of +/- 45 degrees.  This is convenient since sin(45)=cos(45)=0.71; thus (viewable width)=2*(viewing height).

At 256 km up, the view is 512 km wide; each 500 m polygon covers a single pixel.  At 500 m up, the view is 1km wide; each polygon covers half the screen.   From 1 m (roughly human height), the view is 2 m, or 2/500=0.004 times the size of your base polygon.

Photo-realism at a 1 m height therefore requires your texture for a 500 m polygon to be 1024*500/2 = 256,000 pixels wide. Yet at 256 km up, photo-realism only requires a single pixel per polygon texture.

Thus fixing your polygon size at 500 m forces a nasty trade-off between excessive texture size and unacceptable image quality. Therefore, your only solution is to make your polygon mesh finer as the view gets closer to it, and coarser as the viewer moves away.
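(A tiny helper reproducing that arithmetic, under the same assumptions: viewer looking straight down with a +/-45 degree field of view, so the viewable width is twice the viewing height.)

Code:
final class TextureBudget {
    // Required texture width in pixels for one polygon, assuming a
    // straight-down view with a +/-45 degree field of view, where the
    // viewable width equals twice the viewing height.
    static double requiredTexturePixels(double polygonSizeMetres,
                                        double viewerHeightMetres,
                                        int screenWidthPixels) {
        double viewWidthMetres = 2.0 * viewerHeightMetres;
        double pixelsPerMetre = screenWidthPixels / viewWidthMetres;
        return polygonSizeMetres * pixelsPerMetre;
    }

    public static void main(String[] args) {
        // 1024 * 500 / 2 = 256,000 pixels, matching the figure above
        System.out.println(requiredTexturePixels(500, 1, 1024));
    }
}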

> Each quad is roughly half a kilometer in size.
Your current scheme is to assign 2 triangles per quad.  An improved scheme is to further subdivide it into 2, 8, 32, 128, ... triangles dynamically, based on the height above the surface.  With some clever coding, this can be made to happen rather seamlessly.  (e.g. match the general light/dark/color patterns whenever you do a split, and split before the viewer is close enough to be bothered by the change)

There are several approaches to doing this, depending on your specific needs.  Look at terrain demos for inspiration.
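(A bare-bones sketch of distance-driven subdivision, assuming square patches and a simple "split while the patch is large compared to its distance from the viewer" rule; real terrain LOD schemes such as ROAM or CLOD are considerably more involved. TerrainQuad and the 0.5 threshold are invented for illustration.)

Code:
import java.util.List;

// A square terrain patch; splitting replaces it with four half-size children.
final class TerrainQuad {
    final double centerX, centerZ, size;

    TerrainQuad(double centerX, double centerZ, double size) {
        this.centerX = centerX;
        this.centerZ = centerZ;
        this.size = size;
    }

    // Collects the quads that should actually be rendered: a quad is split
    // further while it is still large compared to its distance from the viewer.
    void collect(double viewerX, double viewerZ, double minSize, List<TerrainQuad> out) {
        double dx = centerX - viewerX;
        double dz = centerZ - viewerZ;
        double dist = Math.sqrt(dx * dx + dz * dz);
        if (size > minSize && size > dist * 0.5) {
            double h = size / 4;   // offset of the child centres from this centre
            new TerrainQuad(centerX - h, centerZ - h, size / 2).collect(viewerX, viewerZ, minSize, out);
            new TerrainQuad(centerX + h, centerZ - h, size / 2).collect(viewerX, viewerZ, minSize, out);
            new TerrainQuad(centerX - h, centerZ + h, size / 2).collect(viewerX, viewerZ, minSize, out);
            new TerrainQuad(centerX + h, centerZ + h, size / 2).collect(viewerX, viewerZ, minSize, out);
        } else {
            out.add(this);
        }
    }
}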


*****
I misunderstood the delayed texture loading problem. I agree that this seems like a limitation of the current model-loading interface. However, I don't have enough experience with the model loaders to comment. If things are as you say, you're probably stuck downloading the whole model before using it (the easy solution) or implementing the fixes yourself.

I'd recommend starting a new thread on "delayed images and model loaders" or some such to see what others have to say.
Mithrandir « Reply #27 - Posted 2004-10-04 02:19:19 »

Quote
I'm calling a shape whatever Xith calls a shape, and I couldn't care less what the programmer of the API does with this shape/model/whatever internally. I'm just interested in a feature that makes sense to me (and not just me). Maybe it's hard to implement and doesn't fit nicely into the current code, but should I really care about that as the "user" of the API?


Yes. What makes a good non-realtime system does not make a good realtime system. The two objectives are almost diametrically opposed. Non-realtime is about handling as much detail as possible, in as configurable a way as possible. Realtime is about doing as little as possible between the user code and the graphics hardware. Anything that has to be calculated had better result in a net increase in performance, not a decrease. The API is there to provide speed optimisations so that an end user does not have to write the same thing over and over every time they want to write a 3D application - for example, view frustum culling, picking and state sorting.

Quote
I don't see how you can make this assumption. By that rationale, every tree in a forest could be contained in a single object (since all trees could share the same texture), but it's silly to assume that every tree would be close to another.


You don't have to make that assumption at all. There is a single Texture object that can be shared between all the trees. Then the trees can be spatially separated. So long as you use the same texture object, state sorting still works to your benefit, and the culling algorithms will remove useless data, eliminating a large percentage of those tree geometries from view. By placing all those trees into a single geometry/shape, you've caused a great number of problems from a performance perspective. You cover a very large spatial area, so that no matter which direction you face, the scene graph can never cull that geometry away. So, instead of rendering only 10K vertices, you now have to render 100K.

Quote
If the Xith answer is "make each of the 2000 polygons into 2000 separate objects" then that, to me, is not a feasible answer. You're telling me there's less overhead in switching between 2000 Shape3Ds than in any other possible solution?


Quite simply - yes. It is going to gain you far, far greater performance benefits than doing it the other way. The difference in performance grows at an exponential rate as you increase the number of objects in the scene. There's a darn good reason why every game engine since Doom I has been running spatial partitioning algorithms. It's certainly not for the programmer's or content developer's ease of use. Besides, there is no need to have a single shape for every polygon. As others have pointed out, the standard way of solving this problem is to use texture coordinates. Group each object into a single shape (e.g. a tree model) and then use the texture coords to modify the tree on a per-object basis. It's not that hard to do, and pretty much any book about game development talks about how to implement these strategies.

Quote
Sorry, but I really don't get what you are trying to tell me here. That changing the texture state requires 300 lines of code? Surely not.


If you are going to run multiple textures per object, then yes, it will take that many lines of code. That's precisely the model the .3ds file format uses internally. Each Object chunk consists of material lists that link to the texture. To work out how to turn this into something suitable for OpenGL to draw, you have to iterate through each object list, breaking the coordinate array apart into smaller arrays, set the texture and material state, then send the array to OpenGL - lather, rinse, repeat for every texture on the object. It's a horrible, inefficient process because of all the loops that need to be executed per object, per frame.

So what I see here is a case of the Having a Hammer problem: everything looks like a nail. It also appears that JeramieHicks does not have any experience rendering geospatial data. These problems have been solved time and again by the big geospatial engines out there. There's nothing new in his requirements by any means. You're using your knowledge of 3DS Max, which is designed for non-realtime graphics, and assuming that the same techniques are used for realtime, which they're not.

If you really want to do large-scale terrain rendering, I suggest you wander over to the Virtual Terrain Project (commonly known as VTP) at http://www.vterrain.org and read through the hundreds of links to the various large-scale terrain rendering algorithms they have there. You'll most likely want to look at ROAM or one of the CLOD algorithms. But, in general, what I am seeing here from both of the parties wanting this is a lack of knowledge about fundamental graphics techniques. Do yourself a favour and grab a few books on game engine design or visualisation design and get familiar with the various algorithmic options available. It will save you a heap of time asking questions like this and getting the same "lack of interest" responses.

j3d.org has an implementation of ROAM available in a generic form, and a specific implementation on top of Java3D. Porting it to work with Xith3D should take very little work. There are a few bugs in it that are not sorted out, but it will solve all the questions you're already working on. Managing texture resources and managing polygonal resources can be separated into two orthogonal systems. That's the way the big rendering engines like Performer work. What you're asking for is above the design scope of what Xith3D and other scene graphs are aiming for. You can implement these techniques on top of them, but they are not part of the core API for a very good reason - the technique to use is highly application-specific.

As a side note, and the Xith3D guys are probably going to be cranky at me for mentioning this here: Xith3D probably will not be the engine you'll want to use if you need to deal with anything more than a single-CPU machine. If you're really doing large-scale terrain visualisation, then you'll want to make use of my project, Aviatrix3D, which is specifically designed for the visualisation crowd: multithreaded internals, pluggable rendering pipeline strategies, scales from PC to CAVE with only 2-3 lines of code change, etc. It still uses JOGL internally for the rendering.

EgonOlsen « Reply #28 - Posted 2004-10-04 14:56:56 »

@java: After reading your posts about why Xith3D doesn't support multiple textures per shape where jPCT does, I think I (as the author of jPCT) can help clarify some things.
Basically, you are right: jPCT can do this while Xith3D can't. But there are reasons for this. I think the Xith guys have already done a good job of explaining why their baby doesn't support this feature. Maybe you can live with that, maybe you can't... it all depends on your needs.
Now for the reason why jPCT can do this: it would be stupid not to... for jPCT. Unlike Xith3D, jPCT is a software/hardware hybrid engine (just like the Unreal 1 / Unreal Tournament engine was), i.e. it can do both software and hardware rendering with a similar feature set. Therefore, it can't do what Xith3D does: let the graphics card do all the transformation and lighting work. It has to provide its own T&L pipeline written in good old pure Java. It can't rely on the graphics card for that... it IS the graphics card! For such an engine, there is no speed penalty for changing textures. You can do that thousands of times in a frame... it simply doesn't matter. That's for software rendering. When using hardware rendering, it does matter somewhat. Anyway, jPCT's hardware renderer has to be seen as an addition to the software renderer. The pipeline that every triangle passes through is basically the same for both renderers (with some optimizations for the hardware one). jPCT doesn't really care whether you use software or hardware for rendering until the very end of the process (in fact, you can use both at the same time). Therefore, it can't do a lot of the things Xith3D does to speed things up... on the other hand, it can do a lot of things that Xith3D can't, simply because Xith3D hands control over to the GPU where jPCT keeps everything in its own hands. That's the reason why its hardware-accelerated performance is still quite good compared with other, more hardware-oriented engines. (In fact, jPCT can render the Quake 3 level taken from a Xith3D demo faster than the Xith demo does, and with better collision detection... Tongue)
Long story short: multiple textures per object is a no-go for Xith3D according to its design. For jPCT, it still affects performance, but not that much, and because the software renderer supports it, the hardware renderer has to support it too (that's the basic idea behind the whole engine). On the other hand, you'll find a lot of things that Xith3D can do that jPCT can't. Again, it all depends on your needs.

BTW: for killing birds with a fireball, both engines should do... Grin
