Emerald, I see what you are up to, but I fear you can't gain control over the ObjectStore the way you want. There is only one ObjectStore, which is accessed by all the GLEs in the cluster, and access to it is transparent to the server application (only the SGS backend has direct access to the ObjectStore). There is not even a fixed model for how the ObjectStore is actually implemented or handled (i.e. which database is used; it does not need to be the current one, AFAIK).
It is true that the ObjectStore is under the control of the system and not directly accessible from the coding model. We have *talked* internally about opening up the ObjectStore interface and making the store pluggable in order to support third-party ObjectStore vendors, but we haven't done that yet.
I'm still not quite following the issue. Is the issue that you are trying to synchronize clusters on different continents so they all reflect a single view of the world? This is a hugely difficult problem due to the latencies you will encounter going trans-continental.
I also wonder at which level the transactional services are provided: inside the SGS, or does the SGS use database services to ensure transactional integrity?
Some of both, depending on the ObjectStore implementation. From the POV of the GLE, the ObjectStore provides the transactional context and is responsible for it. However, we build on top of the databases we currently use in order to implement our deadlock-avoidance scheme (Timestamp Ordering).
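For readers unfamiliar with Timestamp Ordering: the textbook rule avoids deadlock because conflicting transactions abort instead of waiting. A minimal sketch of that basic rule (my own toy names and types, not the SGS implementation):

```java
// Basic Timestamp Ordering conflict check for a single data item.
// A transaction that fails the check aborts and restarts with a new,
// larger timestamp -- nobody ever blocks, so deadlock cannot occur.
class TsoItem {
    long readTs = 0;   // largest timestamp that has read this item
    long writeTs = 0;  // largest timestamp that has written this item

    // returns false -> requesting transaction must abort
    boolean read(long ts) {
        if (ts < writeTs) return false;        // version it needs was overwritten
        readTs = Math.max(readTs, ts);
        return true;
    }

    boolean write(long ts) {
        if (ts < readTs || ts < writeTs) return false; // a younger txn already saw/wrote it
        writeTs = ts;
        return true;
    }
}
```

The key property is that every conflict is resolved immediately by the timestamps, which is why it is an avoidance scheme rather than deadlock detection.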
I don't believe in the last, because PEEK is non-reread-safe:
Actually, if you are talking about within one event (which means within one transactional context), PEEK is re-read safe in that all PEEKs on GLO references that reference the same GLO will return a reference to the same object, EXCEPT in the case where a PEEK is followed by a GET. The GET will return a new object, and every PEEK after that will return the same object as the GET. This turns out to be necessary to ensure ACID properties.
a new PEEK into the ObjectStore could be a problem, but maybe I am wrong and this is already handled, and the term "non-reread-safe" is misleading here.
Yes, this is already handled. When we say "non-repeatable read" we are referring to reads across tasks/transactions. Within any one task a PEEK is a repeatable read, with the caveat above.
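The per-task identity rule described above can be sketched as follows (a toy model with made-up names; the real SGS API differs, and plain `String`s stand in for GLOs):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of per-task PEEK/GET semantics: repeated PEEKs of the same
// GLO id return the same object, until a GET creates a new (writable)
// object that all later PEEKs in the same task then return.
class TaskContext {
    private final Map<Long, String> store;                    // simulated committed state
    private final Map<Long, String> peeked = new HashMap<>(); // read cache
    private final Map<Long, String> locked = new HashMap<>(); // write copies from GET

    TaskContext(Map<Long, String> store) { this.store = store; }

    String peek(long id) {
        if (locked.containsKey(id)) return locked.get(id);    // GET already happened
        return peeked.computeIfAbsent(id, k -> new String(store.get(k)));
    }

    String get(long id) {
        // new String(...) forces a distinct object, mirroring "GET returns a new object"
        return locked.computeIfAbsent(id, k -> new String(store.get(k)));
    }
}
```

Across tasks no such guarantee holds, which is the "non-repeatable read" case being discussed.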
Also, I am curious whether the ObjectStore is actually stored on the same hardware or farmed out onto the servers in the cluster.
Our current "big" implementation is based on HADB, which is a separate set of servers with their own fail-over and disaster recovery. Having said that, we are also looking at the moment at some intermediate layers to deal with some huge-scale scaling issues.
There is a high risk of GLOs which are no longer referenced by any other valid GLO in the ObjectStore and are therefore inaccessible, but still remain in the ObjectStore. This might be caused by bugs, and we need some way to discover and handle those cases.
GLOs are "real objects" in a simulation sense, so they never go away unless explicitly destroyed. Any named object can be found without any other reference, so those need to be treated as "root" objects. Having said this, having some tools to trace from the named objects to all referenced objects and find those no longer referenced is not a bad idea.
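Such a tracing tool is essentially a reachability sweep over the reference graph, starting from the named roots. A minimal offline sketch (hypothetical helper; it assumes you can dump the store as an id-to-references map, which is not an existing SGS facility):

```java
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Offline sweep: mark everything reachable from the named "root" GLOs,
// then report the ids of GLOs nothing reaches (candidates for cleanup).
class GloSweep {
    static Set<Long> unreachable(Map<Long, List<Long>> refs, Collection<Long> namedRoots) {
        Set<Long> seen = new HashSet<>();
        Deque<Long> work = new ArrayDeque<>(namedRoots);
        while (!work.isEmpty()) {
            long id = work.pop();
            if (seen.add(id)) {                              // first visit only
                work.addAll(refs.getOrDefault(id, List.of()));
            }
        }
        Set<Long> dead = new HashSet<>(refs.keySet());
        dead.removeAll(seen);                                // everything never marked
        return dead;
    }
}
```

An admin tool would report these ids for inspection rather than delete them automatically, since "unreferenced" GLOs might still be intentionally parked.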
We're still designing all the admin tools; I'll add it to the list of ideas for the team to discuss.