Phantasmal MUD Lib for DGD

DGD's State Dumps

Statedumps provide a way to dump DGD's memory space to disk, much as the swapfile does, and restore it at a later time. All program state is maintained: pointers continue to work, pending call_out()s are still scheduled, and objects you haven't upgraded remain un-upgraded until you recompile them in the standard DGD way. The only differences after restoring from a statedump are that network connections are broken (of course), and that you can alter the DGD server itself before restoring, perhaps by upgrading versions or changing the configuration file. Statedumps are almost invariably forward compatible with new stable DGD versions, and several other upgrades, such as increasing the maximum number of objects, will be transparently detected and permitted when an old statedump file is loaded.

A statedump makes the old save_object() and restore_object() functions redundant for saving your MUD's state as a whole, though they can occasionally be useful for saving an individual object or three. As with save_object(), the resulting statedump file can contain security-sensitive information, so it should ideally be stored where only highly-privileged code can read it.
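
In LPC, saving or restoring a single object looks something like the sketch below. This isn't Phantasmal code: the file path and the saved variable are illustrative, and the kfuns save_object() and restore_object() simply write and read that object's variables.

    string owner_name;    /* example data worth keeping across reboots */

    void save_me()
    {
        /* Pick a path only privileged code can read; this one is
           purely illustrative. */
        save_object("/usr/System/data/example.o");
    }

    int restore_me()
    {
        /* Returns nonzero if the file existed and was restored. */
        return restore_object("/usr/System/data/example.o");
    }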

Statedumps are a necessary part of persistence in a MUD Library. Skotos Tech's Castle Marrach has used them since early development, keeping a 'virtual uptime' stretching from 1999 into 2004 and beyond.

Statedumps, like shutdowns and object recompilation, take place immediately after the thread that requested them has ended. From the point of view of the saved state, the thread that would otherwise immediately follow still occurs, but not until the server has been restarted from the just-saved statedump.
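
If you want to dump state and then halt the server in one go, a sketch like the following works, assuming code privileged enough to call the dump_state() and shutdown() kfuns (under the Kernel Library both are restricted to privileged code). Both take effect only once the thread ends, with the dump written before the shutdown, so the saved file reflects the state at the end of this thread.

    void save_and_halt()
    {
        dump_state();   /* statedump is written once this thread ends */
        shutdown();     /* the server then shuts down */
    }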

Statedumps can handle DGD's normal policy of having far more LPC data than there is available memory. Normally, the swapfile is written incrementally as objects are swapped out. You can tell DGD when to put the finishing touches on it and declare it done, and that becomes your statedump; the swapfile then begins writing out what will become the next incremental statedump. By telling DGD approximately how often you'll be dumping state, you make it much easier for the server to write the statedump in a timely way when you request it, since nearly all the work has been done in advance. However, it can be very difficult to guarantee the time taken to write a statedump for a large MUD, depending on which objects have or have not been swapped out recently.
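
For reference, here is a fragment of a DGD configuration file showing the relevant settings. The swap_file and dump_file parameters are standard; the dump_interval line is an assumption about how recent DGD versions take the "how often will you dump?" hint, so check the documentation for your DGD version before relying on it. Paths and values are illustrative.

    swap_file     = "../state/swap";   /* incremental swapfile */
    dump_file     = "../state/dump";   /* where the statedump is written */
    dump_interval = 3600;              /* assumed: expect a dump roughly hourly */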

Statedumps are, generally speaking, the fastest way to do backups in DGD. You should use them regularly for that purpose, in case your game crashes. Castle Marrach's statedump, circa 2004, was nearly 2GB in size, yet writing it usually took only a few seconds, only occasionally reaching the 10-20 second range. That's not much time, so backing up (say) every hour won't cause your MUD a lot of lag.
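
An hourly backup can be as simple as a call_out loop that requests a statedump. This is a minimal sketch, assuming an object with the privilege (and, under the Kernel Library, the callout resources) to do this; the object layout and the one-hour interval are illustrative.

    #define BACKUP_INTERVAL 3600    /* seconds between statedumps */

    static void create(varargs int clone)
    {
        call_out("do_backup", BACKUP_INTERVAL);
    }

    static void do_backup()
    {
        dump_state();                              /* written after this thread ends */
        call_out("do_backup", BACKUP_INTERVAL);    /* schedule the next backup */
    }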

Restoring

When you restore from a statedump, all your previous objects are exactly where they were. However, your network connections, if any, have been closed. If your library needs to destruct the corresponding connection objects explicitly, that should happen now. See the Kernel Library's code for details, or use the Kernel Library to make even that part of restoration fully transparent.
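
After a restore, DGD calls restored() in the driver object. The sketch below is illustrative rather than the Kernel Library's actual code; it just shows where connection cleanup and a console message would naturally go.

    static void restored()
    {
        /* Connections from before the dump are gone; reset or destruct
           any per-connection bookkeeping your driver keeps here. */
        send_message(ctime(time()) + " ** State restored.\n");
    }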

Cleanup

A certain amount of object fragmentation is removed, and other cleanup tasks are performed, when objects are swapped out. For that reason, it's often good to make sure that DGD is permitted to do at least some swapping rather than running your game entirely in RAM.

From: DGD Mailing List (Noah Gibbs)
Date: Thu Jan  8 14:43:00 2004
Subject: [DGD] Persistance

--- Robert Forshaw wrote:
> When you eventually do reboot, how will it reconnect the rooms?
> Doing it by file name makes this easy, but by object reference,
> well, for a start every room object will have to be loaded, and
> then how is it going to figure out what rooms connect to where,
> if not by file name?

  Here's a clarification that may help.  Under Unix,
there's something called a core dump.  Conceptually,
it's kind of like a DGD statedump.  The idea is that
the process's memory is all written out, along with
*all* its state, into a big file.  That file contains
the entire application in exactly the state it was in
when it core dumped.

  Usually, coredumps are used for debugging.  But
certain very clever applications use them for other
purposes.  For instance, emacs and some Perl
applications use them for compiling.  They go through
a very long startup sequence (several hours for
emacs), and then they dump core *without* crashing.

  When you run emacs, or run a precompiled Perl
binary, what happens is that core dump gets loaded
right back into memory, in the exact state where it
dumped core.  From the newly-loaded app's point of
view, it just dumped core and now it's time to run. 
Every time you run the app, you're restoring from its
just-after-build application state, without having to
wait for hours of startup while it compiles many
megabytes of code.

  DGD statedumps work a lot like coredumps.  All your
current running code is dumped.  All your objects are
dumped.  All your data is dumped.  All the stuff in
the swapfile is dumped.

  And when you put it back, it's as though it had
never been gone.  If you have a chunk of code which
contains a statedump, you won't know on the line after
that statedump if you just breezed through it, or if
you've just been restored, minutes or weeks or decades
later, from that statedump.  Maybe all your net
connections just went down (they do that when you
restore years later, alas).  Maybe your system clock
just advanced by a couple of weeks.  It's hard to
tell.  Because your function will continue just fine
ten years later, and may *not even realize* that you
just dumped state and restarted.

  I'm exaggerating slightly.  I think the statedump
happens just after the thread exits.  But when the
call_out that you scheduled ten years ago happens, it
won't know that it was scheduled ten years ago.  It'll
just know that it's time to run again :-)