I would like to take a shot at it. There are a few design decisions which need to be resolved first.
1) Do we use standard Java serialization together with some trickery with readObject/writeObject and transient fields, or create a totally custom serialization protocol?
Java serialization has the main benefit of being foolproof at the protocol level. It is also quite robust as far as future changes are concerned - it can read old versions of objects unless a really major change happens. On the other hand, a custom serialization protocol would be a lot faster to read, plus it would allow some tricks with sharing components between separately serialized branches.
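Just to make the first option concrete, here is a minimal sketch of the readObject/writeObject + transient approach. SerializableShape3D and its fields are made up for illustration; a real wrapper would have to walk the actual xith3d geometry objects.

import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializableShape3D implements Serializable {

    private static final long serialVersionUID = 1L;

    private String name;

    // Heavy runtime data is marked transient and written by hand
    // in a compact form instead of going through default serialization.
    private transient float[] vertices;

    private void writeObject(ObjectOutputStream out) throws IOException {
        out.defaultWriteObject();          // writes 'name'
        out.writeInt(vertices.length);     // then the raw vertex data
        for (float v : vertices) {
            out.writeFloat(v);
        }
    }

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        int count = in.readInt();
        vertices = new float[count];
        for (int i = 0; i < count; i++) {
            vertices[i] = in.readFloat();
        }
    }
}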
2) How should Link nodes be managed?
To be honest, I have no idea if links are supported in xith3d at the moment, but it is something which needs to be thought about now anyway. I don't think that serializing the nodes which are linked to is acceptable. I see two choices - either require the user to manually reattach broken links, or provide some standardized way of managing/resolving references - probably through names. This way the user would just have to put a 'GenericTree' reference into some map (a sketch of this idea follows below). But the question is, if a SharedGroup is referenced ONLY through links (as it probably should be), how can it be serialized? Explicitly?
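Here is roughly what I mean by the name-based map, as a sketch only - none of these class names come from xith3d, and I use plain Object so as not to assume any particular SharedGroup API:

import java.util.HashMap;
import java.util.Map;

public class SharedGroupResolver {

    private final Map<String, Object> sharedGroups = new HashMap<>();

    // The user registers live shared subgraphs up front,
    // e.g. under the name "GenericTree".
    public void register(String name, Object sharedGroup) {
        sharedGroups.put(name, sharedGroup);
    }

    // Called for every Link read from the stream; returns the live target,
    // or null, in which case the user has to reattach the link manually.
    public Object resolve(String name) {
        return sharedGroups.get(name);
    }
}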
3) What about sharing components between separate streams?
This is a harder case of the above. What if some geometry components are shared (the same instances) in a few different objects/parts of the scenegraph and then serialized into different streams? Should they be merged somehow? From Paul Byrne's quote it seems that the java3d SceneGraph IO somehow managed to do it:
There are also some key features of the SceneGraph IO API which you would loose by just using Serialization.
The main feature is that the API preserves the sharing of Node Components between BranchGraphs which are sent over the IO Stream in separate operations. This is not possible with serialization.....
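One way this could be approached (pure speculation on my part, all names invented): the writer keeps an identity-based registry for the lifetime of the whole file, writes each component in full only on its first encounter, and writes just a short id afterwards - even when the branches are written in separate operations.

import java.util.IdentityHashMap;
import java.util.Map;

public class ComponentRegistry {

    private final Map<Object, Integer> idsByComponent = new IdentityHashMap<>();
    private int nextId = 0;

    // True if this component was already written; callers serialize the
    // full component only on the first encounter and write just the id later.
    public boolean isKnown(Object component) {
        return idsByComponent.containsKey(component);
    }

    // Stable id for this component, assigned on first use.
    public int idFor(Object component) {
        return idsByComponent.computeIfAbsent(component, c -> nextId++);
    }
}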
4) Textures
Should they be serialized together with the rest of the stream? Inline (in the same file) or in a separate file? Using what compression? (configurable?) Should there be a way to provide a fallback for resolving textures from a separate file anyway?
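If all of that ends up configurable, I imagine something along these lines - again just a sketch, every name here is invented for the sake of discussion:

public class TextureWriteConfig {

    public enum Placement { INLINE, SEPARATE_FILE, EXTERNAL_REFERENCE_ONLY }

    public enum Compression { NONE, PNG, JPEG }

    private Placement placement = Placement.INLINE;
    private Compression compression = Compression.PNG;

    // Optional fallback: if an external texture file cannot be found at
    // load time, this base directory (or URL) is tried instead.
    private String fallbackTexturePath = null;

    public Placement getPlacement() { return placement; }
    public void setPlacement(Placement p) { placement = p; }

    public Compression getCompression() { return compression; }
    public void setCompression(Compression c) { compression = c; }

    public String getFallbackTexturePath() { return fallbackTexturePath; }
    public void setFallbackTexturePath(String path) { fallbackTexturePath = path; }
}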
5) Memory-mapped optimization
In the case of uncompressed textures and vertex data, it would be possible to align this data and use a memory-mapped file to put this buffer directly into the GeomContainer (as long as it is read-only, of course). This would keep the number of copies of the same data down (1 on disk and 1 in the GPU, instead of having another one in main memory).
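The mechanics would look something like this - the offset/length values are placeholders for whatever the file format records for a given geometry block, and MappedGeometryLoader is a made-up name:

import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedGeometryLoader {

    // Maps an aligned, uncompressed vertex block read-only and exposes it
    // as a FloatBuffer, without copying it into the Java heap.
    public static FloatBuffer mapVertices(String path, long offset, long byteLength)
            throws IOException {
        try (RandomAccessFile file = new RandomAccessFile(path, "r");
             FileChannel channel = file.getChannel()) {
            MappedByteBuffer mapped =
                    channel.map(FileChannel.MapMode.READ_ONLY, offset, byteLength);
            // The mapping stays valid after the channel is closed;
            // use native byte order so the data can go straight to the GPU.
            mapped.order(ByteOrder.nativeOrder());
            return mapped.asFloatBuffer();
        }
    }
}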
We are talking only about preserving the scenegraph from Locale/BranchGroup down. I'm not concerned about View/VirtualUniverse/etc. - please say if you see any reason to worry about them.