I'm working on an application for prototyping interactive and non-interactive 3D animations. Currently the app is focused on OpenGL, but I have designed it from the beginning to also export Maya ASCII files. While designing the scene graph and geometry I ran into a little design issue and would love some commentary:
Given that my app will ultimately render to OpenGL in real time, and also export scene data to ray tracers or other non-real-time renderers, how would you store the internal representation of shapes? I have been wavering between two approaches:
Option 1: an abstract shape (ellipse, plane, cube, etc.) holds the basic parameters used to derive renderable geometry; a plane, for example, would have a width and a height. Each shape could then reference a hi-res polyMesh, a low-res polyMesh (for wireframes), and possibly cubic or parametric representation data such as uSpan, vSpan, tessellation preference, etc.
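To make that concrete, here's a minimal sketch of what I mean (all class and method names are just placeholders I made up, not from any real API): the shape owns its defining parameters and lazily derives whatever representation a given renderer asks for.

```cpp
#include <memory>

struct PolyMesh { /* vertices, normals, indices, ... */ };

struct ParametricRep {
    int uSpan = 8, vSpan = 8;  // tessellation preferences
};

class Shape {
public:
    virtual ~Shape() = default;

    // Derived representations, built on demand from the shape parameters.
    virtual const PolyMesh& hiResMesh()  = 0;  // for final render / export
    virtual const PolyMesh& lowResMesh() = 0;  // for wireframe preview
    virtual const ParametricRep* parametric() const { return nullptr; }
};

class Plane : public Shape {
    double width_ = 1.0, height_ = 1.0;  // the defining parameters
    std::unique_ptr<PolyMesh> hiRes_, lowRes_;
public:
    const PolyMesh& hiResMesh() override {
        if (!hiRes_) hiRes_ = tessellate(/*divisions=*/32);
        return *hiRes_;
    }
    const PolyMesh& lowResMesh() override {
        if (!lowRes_) lowRes_ = tessellate(/*divisions=*/2);
        return *lowRes_;
    }
private:
    std::unique_ptr<PolyMesh> tessellate(int divisions) {
        // would build a divisions x divisions grid of quads
        // covering width_ x height_ (stubbed out here)
        return std::make_unique<PolyMesh>();
    }
};
```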
Option 2: a traditional class hierarchy where each shape and type is explicit. Anyone familiar with Maya will recognize this style: instead of general forms attached to different representations (as above), you have very specific ones. For example, polySphere and nurbsSphere would be completely independent objects subclassed from something like polyPrimitive and nurbsPrimitive.
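A sketch of that style for comparison (again, illustrative names mirroring Maya's conventions, not Maya's actual API): each shape/representation pair becomes its own concrete node type.

```cpp
class GeometryNode {
public:
    virtual ~GeometryNode() = default;
    // transform, name, parent pointer, etc.
};

class PolyPrimitive  : public GeometryNode { /* owns a PolyMesh */ };
class NurbsPrimitive : public GeometryNode { /* owns CVs, knots, degree */ };

class PolySphere : public PolyPrimitive {
    double radius = 1.0;
    int subdivisionsAxis = 20, subdivisionsHeight = 20;
};

class NurbsSphere : public NurbsPrimitive {
    double radius = 1.0;
    int sections = 8, spans = 4;
};
```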
In both cases the objects would of course be organized and managed in a scene graph.
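For context, something like this bare-bones node (just a sketch, with placeholder names), where the OpenGL renderer and the Maya ASCII exporter would both walk the same tree, each pulling whatever representation it needs:

```cpp
#include <functional>
#include <memory>
#include <vector>

class Shape { public: virtual ~Shape() = default; };  // stand-in for either option above

struct Transform { /* 4x4 matrix */ };

class SceneNode {
    Transform localTransform_;
    std::unique_ptr<Shape> shape_;                     // optional payload
    std::vector<std::unique_ptr<SceneNode>> children_;
public:
    void addChild(std::unique_ptr<SceneNode> child) {
        children_.push_back(std::move(child));
    }
    // Depth-first traversal; the visitor decides what to do at each node
    // (draw with OpenGL, emit Maya ASCII, hand geometry to a ray tracer...).
    void traverse(const std::function<void(SceneNode&)>& visit) {
        visit(*this);
        for (auto& child : children_) child->traverse(visit);
    }
};
```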
I hope this makes some sense. In summary: when both real-time and non-real-time rendering capabilities are required, what is a well-designed approach to organizing geometry?