Mail Archive Home | massiv-dev List | August 2008 Index
Hi, thank you for your interest in Massiv.
Have there been any changes or other progress since the initial release?
No, except for minor maintenance changes (see below).
I believe there was mention of not being able to guarantee simulation/world validity due to the ability to have servers located remotely. Is this the only case where validity is not ensured? If all servers were located together, would that alleviate the problem, or is it innate to the design that allows for remotely located servers?
Where did you find that mentioned? If you mean consistency, then yes: the core library ensures that the simulation state remains consistent under all circumstances, even while the state is being archived (and when the simulation is subsequently restarted from an archive).

What you probably read is that, since the servers do not run on a LAN, specific design decisions had to be made to make the system robust and to allow the requested set of features to be implemented. For example, it is technically impossible to "freeze" all servers at a single global time instant, so special support is required to archive the simulation state consistently. While the state is being archived, the system ensures that events that causally happen *after* the snapshot (events caused by objects that have already been archived) are not reflected in the archived state.

For example, consider a scenario with an object BankAccount1 on server1 and BankAccount2 on server2. If archiving starts on server1, BankAccount1 is archived, and a money transfer from BankAccount1 to BankAccount2 is then requested, the effects of the transfer must not be reflected in the state of BankAccount2 until BankAccount2 has been archived on server2.
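To make the BankAccount scenario concrete, here is a minimal, hypothetical sketch (not Massiv's actual API; all names are invented) of the rule above: an event caused by an already-archived object is deferred at its target until the target has taken its own snapshot, so the snapshot is causally consistent.

```python
# Hypothetical sketch of the archive-cut rule described above.
# An event whose cause lies "after" the cut (its sender was already
# archived) must not appear in the receiver's archived state.

class Obj:
    def __init__(self, name, balance):
        self.name, self.balance = name, balance
        self.archived_state = None   # snapshot taken during archiving
        self.pending = []            # events deferred past the cut

    def archive(self):
        # Snapshot first, then apply deferred events to the live state:
        # those events causally happen after the cut.
        self.archived_state = self.balance
        for delta in self.pending:
            self.balance += delta
        self.pending.clear()

def transfer(src, dst, amount):
    src.balance -= amount
    if src.archived_state is not None and dst.archived_state is None:
        # src is past the cut but dst is not: defer the credit so that
        # dst's snapshot will not reflect an event caused after the cut.
        dst.pending.append(amount)
    else:
        dst.balance += amount

acc1 = Obj("BankAccount1", 100)   # lives on server1
acc2 = Obj("BankAccount2", 100)   # lives on server2

acc1.archive()                    # server1 archives first
transfer(acc1, acc2, 30)          # transfer requested after the cut
acc2.archive()                    # server2 archives later

print(acc1.archived_state, acc2.archived_state)  # 100 100 (transfer in neither)
print(acc1.balance, acc2.balance)                # 70 130 (transfer in both)
```

The invariant is that the transfer appears either in both archived balances or in neither; here the snapshot excludes it entirely, while the live state includes it.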
Would it be a major change to allow for load balancing that did not have this issue?
A form of load balancing (migrating objects to less loaded servers) has already been implemented.
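As a rough illustration of that idea (this is not Massiv's actual code; the function and threshold are invented), migration-based balancing amounts to repeatedly moving an object from the busiest server to the idlest one until the load gap is small:

```python
# Hypothetical sketch of migration-based load balancing: move objects
# from the most loaded server to the least loaded one until the gap
# between them falls below a threshold.

def rebalance(servers, threshold=2):
    """servers: dict of server name -> list of hosted objects."""
    while True:
        busiest = max(servers, key=lambda s: len(servers[s]))
        idlest = min(servers, key=lambda s: len(servers[s]))
        if len(servers[busiest]) - len(servers[idlest]) < threshold:
            return
        # "Migrate" one object; in a real system this would involve
        # serializing the object and re-routing references to it.
        servers[idlest].append(servers[busiest].pop())

servers = {"server1": ["a", "b", "c", "d", "e"], "server2": ["f"]}
rebalance(servers)
print({s: len(objs) for s, objs in servers.items()})  # {'server1': 3, 'server2': 3}
```

A real implementation would of course weigh objects by actual CPU/message load rather than by count, and would have to keep remote references valid across the migration.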
Was there anything left you would change and/or improve over the current implementation?
Management of archives might be the biggest "issue". Each server keeps its local archives and there is no central authority that could manage them. When a new server is added to the simulation, an empty archive has to be installed on it manually during the installation process. When a server dies, its archives have to be moved manually to a different server and merged with that server's archives. It would be nice if there were a global manager that could automate these steps: servers would upload their archives to the manager whenever the simulation state gets archived, and the manager would distribute them back to the servers when the simulation is restarted. Also, archives are stored in a proprietary format ("volume"); using a standalone database engine (running on the archive manager) might be a better idea.
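The proposed manager could be sketched roughly as follows (hypothetical; no such component exists in Massiv, and all names here are invented):

```python
# Hypothetical sketch of the global archive manager suggested above:
# servers upload archives after each snapshot, the manager redistributes
# them on restart, and a dead server's archives are merged into a
# successor's instead of being moved and merged by hand.

class ArchiveManager:
    def __init__(self):
        self.archives = {}            # server name -> latest archive

    def upload(self, server, archive):
        self.archives[server] = archive

    def retire(self, dead_server, successor):
        # Automate the manual move-and-merge step for a dead server.
        self.archives.setdefault(successor, {}).update(
            self.archives.pop(dead_server, {}))

    def distribute(self, server):
        # Called when the simulation is restarted from the archives.
        return self.archives.get(server, {})

mgr = ArchiveManager()
mgr.upload("server1", {"BankAccount1": 100})
mgr.upload("server2", {"BankAccount2": 100})
mgr.retire("server2", "server1")   # server2 died
print(mgr.distribute("server1"))   # {'BankAccount1': 100, 'BankAccount2': 100}
```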
What is the current status of this project? Its future?
I am doing maintenance work to make sure the project compiles cleanly with newer compilers. I am going to roll up these changes (which have accumulated since 2004) into another release soon. Apart from that, I have not done anything and, given the current level of interest from the community, do not plan to work on the project further. If we could find new maintainers/developers, that would be nice.
Do you have any good references in this area? Looking over projects like this one, OpenNel, and other open-source massively distributed frameworks seems like a good basis; I just have to wrap my head around it all. I would eventually like to create a good overall set of tools and frameworks to aid work in this area.
I do not know if anything has changed recently in this regard but I think that Massiv's architecture is pretty unique. The other systems I know of are supposed to run as clusters over LAN and hence offer different features (basically just synchronous RPC).
Hope this helps. -Markoid