Copyright © 1999, 2002 by Michael J. Roberts. Preliminary version 15 March 1999.

T3 Machine Model

This section discusses the "machine model," which is how a programmer writing T3 byte code views the execution environment. The machine model defines the datatypes that a program can use, the way a program manages memory, and the mechanisms for controlling program execution.

Table of Contents

Storage Conventions
Text and Unicode
Datatypes
Memory Model
Object Model
Transient Objects
Metaclass Identifiers
Value Comparisons
Data Conversions
Machine Registers
Stack Organization
Method Headers and Function Organization
Pre-defined Objects and Properties
Intrinsic Functions
External Native Functions
Exceptions
Input/Output and the Display
Garbage Collection
Finalizers
Virtual Memory and Swapping
Undo
Image Loading
Restarting
Saving and Restoring




Storage Conventions

Among the goals of the T3 virtual machine are binary portability of program files and saved state files, fast initial program loading, and fast saving and restoring of machine state.

In order to achieve binary portability, program files and saved state files must use a canonical format that is invariant by machine. The only thing that we can without reservation rely upon is the bit; this is unfortunately too unwieldy for our purposes, so we will presume that all platforms that can run the T3 VM are capable of working with bytes (octets of bits). In other words, we will presume that we can manipulate data in memory and from files in units of 8-bit bytes, and that a byte written to a file on one computer will have exactly the same bit pattern when read from a copy of the file on a different type of computer. Practically every type of computer in widespread use today can work efficiently with 8-bit bytes.

We cannot make any assumptions about common data formats beyond 8-bit bytes; modern computers have a variety of word sizes and byte orderings, so no datatype larger than a byte has a single representation that is used by all computers. Hence, we must define our own standard format, built out of 8-bit bytes, for each datatype. We will then interpret data stored in these standard formats such that a particular value in a particular standard format will always be interpreted the same way by the T3 VM on all platforms.
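
For illustration, here is a minimal C sketch of decoding two such canonical formats back into native integers. The least-significant-byte-first ordering used here is an assumption made for the example, not a statement of the canonical ordering, which is defined along with each datatype's format.

    /* Illustrative only: decode 16-bit and 32-bit unsigned values from a
       canonical byte format, assuming for this example that the least
       significant byte is stored first. */
    typedef unsigned char  t3_byte;
    typedef unsigned short t3_uint16;
    typedef unsigned long  t3_uint32;

    static t3_uint16 read_uint2(const t3_byte *p)
    {
        return (t3_uint16)(p[0] | (p[1] << 8));
    }

    static t3_uint32 read_uint4(const t3_byte *p)
    {
        return (t3_uint32)p[0]
             | ((t3_uint32)p[1] << 8)
             | ((t3_uint32)p[2] << 16)
             | ((t3_uint32)p[3] << 24);
    }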

The basic T3 datatypes and their storage representations are:

The byte arrays used to store these values do not observe any alignment; they may appear at any byte offset within a file, and any byte offset within a memory structure.

In order to facilitate fast loading, saving, and restoring, the T3 VM uses the same representation for certain structures in memory as it uses in a file. All data loadable from a program file, and all data that can be stored to a saved state file, are stored in portable format in memory. This allows load, save, and restore operations to be performed as bulk transfers between memory and disk; no computation is needed to transform between the file storage format and the in-memory format, because the formats are identical. The trade-off is that the cost of transforming data between the portable format and the native representation is incurred each time the VM manipulates data.

In-memory structures that use the portable file format include the following:

Structures that are kept only in memory and are neither loaded from a program file nor stored in a saved state file are kept in native format. These include stack elements, local variables, and machine registers.


Text and Unicode

UTF-8 Encoding Details

All text is stored internally as 16-bit Unicode characters. Text strings are encoded in UTF-8 format, which encodes each 16-bit Unicode character as follows:

    From      To        Binary Encoding
    0x0000    0x007f    0bbbbbbb
    0x0080    0x07ff    110bbbbb 10bbbbbb
    0x0800    0xffff    1110bbbb 10bbbbbb 10bbbbbb

The bits of the 16-bit value are encoded into the b's in the table above with the most significant group of bits in the first byte. So: 0x671 has the 16-bit Unicode binary pattern 0000011001110001, and is encoded in UTF-8 as 11011001 10110001.
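
As an illustration of the rules in the table, here is a minimal C sketch that encodes a single 16-bit character; with ch = 0x671 it produces the two bytes 11011001 10110001, matching the example above.

    /* Sketch: encode one 16-bit Unicode character into UTF-8, following
       the table above.  Writes 1 to 3 bytes into buf and returns the
       number of bytes written. */
    static int utf8_encode_char(unsigned int ch, unsigned char *buf)
    {
        if (ch <= 0x007f)
        {
            buf[0] = (unsigned char)ch;                       /* 0bbbbbbb */
            return 1;
        }
        else if (ch <= 0x07ff)
        {
            buf[0] = (unsigned char)(0xC0 | (ch >> 6));       /* 110bbbbb */
            buf[1] = (unsigned char)(0x80 | (ch & 0x3f));     /* 10bbbbbb */
            return 2;
        }
        else
        {
            buf[0] = (unsigned char)(0xE0 | (ch >> 12));          /* 1110bbbb */
            buf[1] = (unsigned char)(0x80 | ((ch >> 6) & 0x3f));  /* 10bbbbbb */
            buf[2] = (unsigned char)(0x80 | (ch & 0x3f));         /* 10bbbbbb */
            return 3;
        }
    }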

UTF-8 has the desirable property that a 7-bit ASCII character can be represented in one byte (because the 7-bit ASCII character set is assigned to code points 0x0000 through 0x007f in 16-bit Unicode), and characters from most European languages can be encoded in two bytes (because most European language characters that are not in 7-bit ASCII are assigned to code points 0x0080 through 0x07ff in 16-bit Unicode).

The UTF-8 encoding has the undesirable property that it uses a varying number of bytes per character, which makes certain operations on strings more difficult (finding the number of characters in the string, for example, requires examination of all of the bytes of the string, even if the byte length is known, and finding the next character in a string requires incrementing a byte index into the string by one, two, or three positions, depending on the type of character encoded there).

However, the savings in space for 7-bit ASCII text seem to justify the added complexity of manipulating this format, and compared to other varying-length encodings, UTF-8 is unusually convenient:

Control Sequences

The T3 VM treats all character strings as opaque; the VM itself does not attempt to interpret the contents of character strings. All rendering and other interpretation is up to the host application environment.

For an example of special character interpretation, refer to TADS Special Characters.

Character Set Translation

UTF-8 is the internal character set that the T3 VM uses to store text strings in program image files and saved state files, and is also the format that the VM uses to represent strings in memory. However, it is likely that few (if any) operating systems on which the VM runs will use this same encoding for file, display, or keyboard input/output operations. Therefore, it is necessary to translate between UTF-8 and the native character set used by each operating system when performing I/O.

Note that file I/O operations relating to program image files and saved state files do not require any character set translation, because in-memory data structures represent text in UTF-8 just as these files do. The only file I/O operations that require translation are those involving text files.

The VM, as part of its I/O interfaces, automatically translates the text character set. Each I/O interface that sends data to the display or a file, or receives data from the keyboard or a file, is defined to use text in the native character set. The VM will translate to and from the native character set when invoking these interfaces.

In order to perform translations, the VM uses a character set translator. Four character set translator classes are defined in the basic set:

The Unicode Consortium provides tables that specify mappings between 16-bit Unicode and numerous widely-used character sets. These mapping tables can be mechanically converted to a form that the character set translators can use directly. The VM will be provided with a set of these converted mappings, so the operating system code for a given port will in most cases merely need to identify the correct current mapping to the VM, and will not otherwise need to be aware of any character set issues.


Datatypes

The following primitive datatypes are supported:

Note that the representation of constant string and list values differs from dynamically-constructed values of these types.

A string constant is a value with primitive type string, and a 32-bit offset into the constant pool. In the constant pool, the string's data bytes consist of a two-byte length prefix giving the number of bytes in the string, not counting the prefix bytes themselves, followed immediately by the bytes of the string.

A list constant is a value with primitive type list, and a 32-bit offset into the constant pool. In the constant pool, the list's data bytes consist of a two-byte length prefix giving the number of bytes in the list, not counting the prefix bytes themselves, followed immediately by the list's contents. The list consists of a series of elements concatenated together. Each element consists of a one-byte type prefix, followed immediately by the value. When a constant list element is a constant string or constant list, the value is an offset into the constant pool.
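
The following C sketch illustrates how a string constant's length prefix and text bytes might be located from a constant pool offset. The helper and type names are hypothetical, and the two-byte prefix is read least significant byte first purely as an example of the portable format.

    #include <stddef.h>

    /* hypothetical helper: locate a string constant's text given its
       constant pool offset */
    typedef struct
    {
        size_t               len;   /* byte length, excluding the prefix */
        const unsigned char *text;  /* the UTF-8 bytes of the string */
    } const_string;

    static const_string get_const_string(const unsigned char *const_pool,
                                         unsigned long ofs)
    {
        const unsigned char *p = const_pool + ofs;
        const_string s;

        s.len  = (size_t)(p[0] | (p[1] << 8));   /* two-byte length prefix */
        s.text = p + 2;                          /* string bytes follow */
        return s;
    }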

Rationale: why have distinct constant and object representations of lists and strings? In other words, why not always use the dynamic (object) representation?

Using a single representation would indeed simplify the VM design in many areas and would be entirely practical. However, because the target application of the T3 VM is text-based games, and because this application makes heavy use of constant strings and lists, it seems desirable to minimize the overhead involved in using these types.

Dynamic objects require slightly more memory than constant string and list values; considering that large numbers of these constants are likely to be present in a program, the more compact constant format should provide substantial memory savings in most cases. In addition, the total number of root-set objects affects garbage collector performance; using constants rather than dynamic objects for lists and strings known at compile-time should improve overall run-time performance of most programs by greatly reducing the number of root-set objects that the garbage collector must analyze on each garbage collection pass.


Memory Model

The T3 VM does not expose a simple byte-array memory map to programs, the way most physical computers do. Instead, the VM organizes memory into specific collections of values, which are accessible through typed references. Since all references are typed, the VM can prevent programs from performing invalid conversions or address arithmetic; all memory access can be validated, hence the VM can prevent many common programming errors that occur with more free-form addressing schemes.

Memory is divided into the following sections, each of which is managed separately:


Object Model

Generic Objects

The VM provides a "generic object," which can contain any type of data that conforms to the generic object interface. Generic objects are used for all types that are allocated dynamically at run-time; this allows a single memory manager to handle all allocation and garbage collection.

A specific implementation of the generic object interface is called a "metaclass." Each generic object has a metaclass. Metaclasses are written in native code that is linked into the application executable containing the VM.

Because the generic object interface doesn't contain information about any specific metaclasses, new metaclasses can be added through the interface. This VM specification includes a set of pre-defined metaclasses, but individual VM implementations can extend this set through the generic interface. For example, a VM could provide a high-level object type for a language other than TADS, such as for C++ or Java.

The generic object interface contains the following methods, which reflect the operations that the byte-code interpreter performs.

Object References

An object is referenced by its ID, which is a 32-bit value.

The object ID is an index into the object header table. The object table is conceptually an array of object descriptors. The object table is structured as multiple subarrays ("pages") in order to enable efficient dynamic growth of the table; each page contains a fixed number of object descriptors, and a master page table contains an array of pointers to the pages. Thus, given an object ID, the object descriptor's address can be found by this formula:

    page_number = (object_ID / DESCRIPTORS_PER_PAGE);
    page_offset = (object_ID % DESCRIPTORS_PER_PAGE);
    object_descriptor = &page_table[page_number][page_offset];

An object descriptor is a fixed-size data structure that describes an object. It contains a combination free list entry and object header, described below, plus some additional information for memory management: a forward link pointer to the next object in the garbage collector queue, a bit indicating whether the object is allocated or free, a bit indicating whether the object is part of the root set or not, and one bit giving the garbage collection state of the object. The purpose of this bit is described in detail in the garbage collection section.

The combination free list entry and object header is expressed in C code as a union of these two values. If the "free" bit in the object descriptor is set to true, then the free list entry is valid; otherwise, the object header is valid. The free list entry is simply the object ID of the next object in the free list.

An object header describes an active object. The header contains the vtable for the object's metaclass, which holds function pointers to the native C code that implements the system object interface, and a pointer to the variable-size data block associated with the object.

The variable-size data block is stored in a separate memory area. The purpose of separating the fixed-size portion and the variable-size portion is to simplify memory management. Since the variable part may grow and shrink dynamically, we may need to relocate the variable part to a new memory location from time to time. When we relocate the variable part, we need only update the single pointer to the variable part in the fixed part; since this is the only direct pointer to the variable part, we do not need to update any other references to the object. Similarly, this design allows us to compact the memory storing the variable parts, closing holes created by moving or deleting objects, with only a single pointer update when we move each variable block.

The use and interpretation of the variable-size data block is up to the object metaclass.
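
As an illustration of the descriptor layout just described, here is a hedged C sketch. The field and type names are assumptions made for the example; a real implementation may pack the flags, links, and union differently.

    /* Illustrative object descriptor: a union of the free list entry and
       the object header, plus memory-management flags and links. */
    typedef unsigned long vm_obj_id;

    typedef struct object_header
    {
        const struct metaclass_vtable *vtable;    /* metaclass native methods */
        char                          *var_part;  /* variable-size data block */
    } object_header;

    typedef struct object_descriptor
    {
        union
        {
            vm_obj_id     next_free;   /* valid when the free bit is set */
            object_header hdr;         /* valid when the object is allocated */
        } u;

        struct object_descriptor *gc_queue_next;  /* garbage collector queue link */
        unsigned int is_free     : 1;  /* object is on the free list */
        unsigned int in_root_set : 1;  /* object is part of the root set */
        unsigned int gc_state    : 1;  /* reachability state during collection */
    } object_descriptor;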

Metaclass Construction

In addition to the virtual methods in the generic object interface, each metaclass must provide certain static construction methods. These methods are static rather than virtual, because they construct new objects (a virtual method by its nature can only be called for an existing object; constructors are inherently static since they must obviously be callable before an object instance exists). These static methods must conform to a common interface; using a registration mechanism, the VM maintains a table that associates a metaclass's identification string with its construction methods. When the VM needs to construct an object given the metaclass's identifier string, it looks up the appropriate constructor in the registration table and invokes the method, which creates a new instance of the metaclass.
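
A registration table entry might look something like the following C sketch. The field names and construction function signatures are purely illustrative assumptions; the actual standard construction interface is defined by the required methods described below.

    /* Illustrative registration entry associating a metaclass identifier
       string with its static construction functions. */
    typedef unsigned long vm_obj_id;

    typedef struct
    {
        const char *metaclass_id;     /* e.g. "tads-object/030005" */

        /* hypothetical standard construction entrypoints */
        vm_obj_id (*create_from_stack)(unsigned int argc);
        vm_obj_id (*create_from_image)(vm_obj_id self,
                                       const unsigned char *data,
                                       unsigned long len);
    } metaclass_reg_entry;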

There are several standard ways to construct any metaclass; individual metaclasses may provide additional construction techniques that they use privately or that the VM may use for specific purposes, but each metaclass must minimally provide the standard construction methods:


Transient Objects

By default, all objects are "persistent," which means that they automatically participate in the built-in Save, Restore, Restart, and Undo mechanisms. However, objects can be specifically marked as "transient," which means that they do not participate in these mechanisms.

Transient objects are marked in the image file with a special flag in the object block data. In addition, a dynamically-created object can be marked as transient via a special form of the NEW instructions.

The VM must keep a flag for each object record indicating whether the object is transient or persistent.


Metaclass Identifiers

Programs written for the T3 VM must have some way to refer to the system metaclasses so that they can specify the metaclass used by each static object and can create new objects dynamically. To provide a means for T3 VM programs to make these specifications, we specify a fixed set of metaclass identifier values.

A metaclass identifier is a string. The set of metaclass identifiers is defined centrally; a given metaclass identifier string value always refers to the same metaclass for all implementations of the T3 VM. Metaclass identifier strings are "universally unique": two different metaclasses will never share the same name, and a given metaclass always uses the same name, for all implementations of the T3 VM.

A given VM implementation may not provide all of the defined metaclasses, for two reasons. First, new metaclasses may be defined as the VM evolves, so a VM implementing an earlier version of the specification will not provide metaclasses added in a newer specification. Second, some metaclasses may be specific to a particular application domain; a VM built for one application domain may not implement metaclasses specific to a different domain. The image file format contains information on the metaclasses that a program requires, so the VM can detect when loading a program image if the program requires metaclasses that are not provided by the implementation.

For code size reduction, the image file contains each metaclass string that it uses only once, in its metaclass dependency table. This table establishes a correspondence between a metaclass string and a small integer; each entry in the table is numbered, starting at zero and incrementing by one for each additional entry. The image file refers to metaclasses everywhere else using this table index. For example, if the table lists (in order) "tads-object", "string", and "list" as the three metaclasses it uses, these are assigned index values 0, 1, and 2, respectively; to create a new TADS Object metaclass instance dynamically in byte code, the NEW1 instruction would be coded with operand 0, since the TADS Object metaclass was listed first in the metaclass dependency table.

The compiler is responsible for constructing the metaclass dependency table and for generating the appropriate metaclass index values in the image file.

Refer to the Metaclass Identifier List Appendix for a list of the defined metaclasses.

Metaclass Versioning

Some metaclasses may evolve over time. In particular, metaclasses that provide access to functionality via get/set property calls must be able to extend the set of calls they provide over time by adding new properties they recognize. To allow this, metaclasses can be versioned.

When a metaclass is versioned, it is required that each new version is a complete superset of the functionality of the previous version. This means that all of the methods provided by the previous version must be identically provided in the new version, with the same method table layout. Since each new version is a complete superset of previous versions, an image file that requires an older version of a metaclass will be able to use an implementation of that version or any later version with complete transparency.

A metaclass's version information is encoded in the metaclass name string (the universally unique identifier stored in the image file that specifies the metaclass implementation). A metaclass identifier string is structured into two parts: the metaclass name, which is the registered, universally unique identifier; and the metaclass version string. The two parts are separated by a virgule (a forward slash, "/", ASCII and Unicode 0x002F).

A metaclass version is a string of six digits encoded as ASCII characters. The digits are given in descending order of significance (the first digit is the most significant, the last digit is the least significant). For example, consider these two identifier strings:

    dictionary/030005
    dictionary/030102

The second string's version ("030102") is greater than the first string's version ("030005"): the strings are identical in the first three digits, but the fourth digit is higher in the second string than in the first ("1" vs. "0").
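
Because the version part is a fixed-width string of decimal digits, an ordinary string comparison of the version parts yields the correct ordering. The following C sketch illustrates this; the function name is hypothetical, and it assumes the two identifiers have already been found to name the same metaclass.

    #include <string.h>

    /* returns <0, 0, or >0 as id1's version is lower than, equal to, or
       higher than id2's version */
    static int compare_metaclass_versions(const char *id1, const char *id2)
    {
        const char *v1 = strchr(id1, '/');
        const char *v2 = strchr(id2, '/');

        /* treat a missing version suffix as version "000000" */
        v1 = (v1 != 0 ? v1 + 1 : "000000");
        v2 = (v2 != 0 ? v2 + 1 : "000000");

        /* fixed-width digit strings compare correctly with strcmp */
        return strcmp(v1, v2);
    }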

When loading the metaclass dependency table, the system should follow this algorithm for each metaclass string in the image file:

Note that compilers should take care to include each metaclass only once in the dependency table. Compilers should follow the same matching procedure as the loader when adding a new metaclass to the dependency table, and should always keep the highest version number they see for a given metaclass.


Value Comparisons

The VM defines two types of value comparisons: test for equality, and compare magnitude. These two types of comparisons behave differently, primarily in that tests for equality always yield meaningful results, whereas magnitude comparisons are meaningful only for certain combinations of types.

Testing for Equality

In general, two values are not equal unless they have the same type. However, because certain types have both constant and object representations, there are cases where a value of object type can be equal to a value of constant type.

To compare two values, we inspect the type of the first value, and compare the value accordingly:

Magnitude Comparisons

To compare the magnitude of two values, we inspect the type of the first value, and compare the value accordingly:

In general, when a magnitude comparison is attempted and the types involved make the comparison meaningless, the VM throws an exception (INVALID_COMPARISON).

Object Subclass Implementations

By default, an object returns true for an equality test only when the second value has the same object ID as the first object (in other words, two object references are equal only if they refer to the same instance), and magnitude comparisons are not meaningful.

The String and List object subclasses override the equality test so that they behave the same way as would a string or list constant with the same underlying value.

The String subclass overrides the magnitude comparison so that it behaves the same way as would a string constant with the same underlying value.


Data Conversions

The VM performs implicit data conversions in the course of certain operations. For example, if a value is used as the second operand of an ADD operation, and the first operand is a string, the VM automatically converts the second operand to a string.

The defined conversions are listed below.

Conversion to String

Several operations implicitly convert values to strings. Several of the built-in datatypes can be converted to strings; other types cannot, and a run-time exception results from an attempt to convert any other type to string.


Machine Registers

The T3 VM has several "registers" which control the state of the machine.

Data Register 0 (R0): this register is used for temporary storage of data values. This register can contain any value that can be stored in a stack location. The Return Value (RETVAL) instruction, for example, stores the return value of a function in this register.

Instruction pointer (IP): this register points to the next byte of byte-code to be interpreted by the VM execution engine.

Entry pointer (EP): this register points to the entrypoint of the current function. This allows us to calculate the offset of any given instruction within the function from the start of the function, which allows us to find any given instruction within certain tables that describe the function; in particular, exception tables and debugging tables specify information on ranges of instructions, stored relative to the start of the function.

Stack pointer (SP): this register points at any given time to the next free element of the machine stack. A "push" operation stores a value at the stack location to which this register points, and then increments the register. A "pop" operation decrements this register, then retrieves the value at the stack location to which it points.

Frame pointer (FP): this register points to the current stack frame. See the section on stack organization for details on how the frame pointer is used.

Current Savepoint: this register is used for undo operations.

Savepoint Count: this register is used for undo operations.


Stack Organization

The T3 VM is a stack-based machine: most operations are performed through the stack, rather than through a set of machine registers. For example, to add two values, a program pushes the pair of values to be added onto the stack, then executes an ADD instruction, which computes the sum, discards the two values from the stack, and pushes the result back onto the stack.

The stack is organized as an array of value holders. Each value holder contains a type code and a value.

A stack pointer register (SP) points to the next available element of the stack at any given time.

A separate frame pointer register (FP) points to the current stack frame. The stack frame specifies the reference point for local variables and parameters. Local variables are at locations above the frame pointer, and actual parameters are at locations below the frame pointer (because they are pushed onto the stack by the caller, before the current function's frame was activated). The frame pointer always points to a stack location that contains the frame pointer of the enclosing frame; immediately below the enclosing frame pointer in the stack is the enclosing frame's entry code offset; below this is the return address (i.e., a pointer to the next instruction to execute when the current function returns), and immediately below the return address are the actual parameters, if any, to the current function.
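
As a quick visual restatement of this layout (newest elements toward the top; this adds no requirements beyond the description above):

    SP ->  next free element
           intermediate results and other working values
           local variables
    FP ->  enclosing frame pointer
           enclosing frame's entry code offset
           return address
           actual parameters (pushed by the caller)
           ...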

Pushing a value: a "push" operation stores a value at the stack location to which SP points, and then increments SP.

Popping a value: a "pop" operation decrements SP, then retrieves the value at the stack location to which SP points.
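
Expressed as a minimal C sketch (the vm_value holder type and the fixed stack size are illustrative assumptions):

    /* Illustrative push and pop on a stack of type-code/value holders. */
    typedef struct
    {
        int typ;                                         /* datatype code */
        union { long i; unsigned long obj, ofs; } val;   /* the value */
    } vm_value;

    static vm_value stack[4096];
    static vm_value *sp = stack;     /* SP: points to the next free element */

    static void push(vm_value v)
    {
        *sp++ = v;                   /* store at SP, then increment SP */
    }

    static vm_value pop(void)
    {
        return *--sp;                /* decrement SP, then fetch the value */
    }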

Calling a function or method: the compiler will generate byte-code that pushes the arguments onto the stack, then invokes the function or method. In a method call, the object whose method is to be called may be encoded in the instruction as immediate data, or may be at the top of the stack (as the result of an intermediate evaluation); likewise, the target property to be invoked can be encoded as immediate data or as a stack value.

Method invocation is the same as property evaluation, so the byte-code instruction to call a method is a GETPROP instruction. This instruction performs the following operations:

Note that invoking a function differs from invoking a method only in that a function has no "self" or target property. However, the stack frame arrangement is identical for methods and functions, so in a function call, the VM pushes nil values for "self" and for the target property pointer.

Returning from a function or method: this effectively reverses the sequence of events from calling a function or method.


Method Headers and Function Organization

A "function" is a block of data containing byte code and associated metadata. Each method is associated with a function; functions are stored in the code pool.

A given function must be stored entirely within a single code page. A function cannot span pages. Note that this means that an instruction can never span pages (that is, all of the bytes of a given instruction, including all operand bytes, are guaranteed to be on the same page), because an instruction must always be contained within a function.

Each function starts with a method header, which is a block of data that describes the function. The method header contains:

Here's a more compact picture of the header:

UBYTE parameter count (high bit set indicates variable arguments, in which case the value bitwise-AND 0x7f is the minimum number of parameters)
UBYTE reserved (currently must be 0x00)
UINT2 local variable count
UINT2 maximum number of stack slots required by the function
UINT2 offset from start of method header of exception table; 0x0000 if there is no exception table
UINT2 offset from start of method header of debugging records; 0x0000 if there are no debugging records

The first byte-code instruction for the method immediately follows the method header.
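
Here is an illustrative C sketch of decoding these header fields. It assumes, purely for the example, that the UINT2 fields are stored least significant byte first; the structure and helper names are not normative.

    /* Decode the method header fields listed above from a pointer to the
       start of a method. */
    typedef struct
    {
        unsigned int param_count;      /* minimum count if varargs is set */
        int          varargs;          /* high bit of the first byte */
        unsigned int local_count;
        unsigned int max_stack_slots;
        unsigned int exc_table_ofs;    /* 0 if no exception table */
        unsigned int debug_table_ofs;  /* 0 if no debugging records */
    } method_header;

    static unsigned int rd2(const unsigned char *p)
    {
        return (unsigned int)(p[0] | (p[1] << 8));
    }

    static method_header read_method_header(const unsigned char *p)
    {
        method_header h;

        h.varargs         = (p[0] & 0x80) != 0;
        h.param_count     = p[0] & 0x7f;
        h.local_count     = rd2(p + 2);   /* byte 1 is the reserved byte */
        h.max_stack_slots = rd2(p + 4);
        h.exc_table_ofs   = rd2(p + 6);
        h.debug_table_ofs = rd2(p + 8);
        return h;
    }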

Following the last byte of executable code is the exception table. The exception table consists of a count, which indicates how many elements are in the table, followed by an array of elements. Each element specifies:

Following the exception table are the debug records. The debug records contain information for source-level symbolic debugging: the names and scopes of the local variables, and the byte code position of the first instruction of each line of executable code. Debug records are optional; if debug records are not present, the debug table offset in the method header is set to zero. Debug records are not required to execute a program; they're needed only to provide information to an interactive debugging utility so that it can show source-level information on the program as it executes.


Pre-defined Objects and Properties

The VM must at certain times make reference to certain objects and properties to which the loaded program must also be able to refer. In general, the VM uses these objects and properties to call code in the user program in response to VM events. The VM and the user program must agree on the ID's given to these pre-defined objects and properties, because the meanings of these ID's must be the same for the VM and for the user program code if the VM's operations with these ID's are to invoke the correct user code.

Hypothetically, one way to ensure that the VM and the user program agree on the ID's of the pre-defined objects and properties would be to define these ID's in this spec. For example, we could assert that object #1 is always the RuntimeError class object; the compiler would be obligated to generate the appropriate object under this ID value. This approach would work, but has a substantial problem: it is inflexible with respect to future additions to the specification. Any object ID's not assigned in the spec would be available to the compiler to use for other objects; as a result, if a new object were added to the spec, it might conflict with an ID that the compiler assigns to a different object, hence the program produced would not be compatible with future versions of the VM.

To ensure forward and backward compatibility, the VM assigns each pre-defined object and property a symbolic name. The compiler is free to choose any ID for each of the pre-defined objects; once the compiler chooses these values, it stores in the load image file a symbol table, which provides the correspondence between the symbolic names and the compiler-assigned ID's. The compiler is only obligated to store in the symbol table those objects and properties that are pre-defined in the version of the VM specification for which the compiler is designed.

During loading, the VM scans the symbol table for each of the pre-defined symbols, notes each one, and caches the information in global variables (or in a structure associated with the load image, if the VM is capable of loading multiple image files simultaneously). It's important to cache these values, because the VM may need to refer to some of them quite frequently. During program execution, the VM uses the cached values to refer to the pre-defined objects and properties.

The symbol table mechanism is reasonably flexible for future changes. When loading a program compiled with an older compiler into a newer VM, some symbols may be missing; the VM can simply allocate new values (unused by the load image), since the absence of the symbols means that the user code will not be aware of the new objects or properties. When loading a newer program with an older VM, the VM will simply ignore any symbols that it does not recognize.

The compiler is free to provide its own names for the pre-defined objects and properties, or not to name them at all. The compiler is also free to use any means to determine the contents of the pre-defined objects; for example, some compilers may use hard-coded definitions that are built into the compilers themselves, while others may obtain the definitions from source code provided by the user or in standard header or library files.

Note that each object referenced by a pre-defined symbol must always be part of the root set loaded from the image file. (This is probably obvious, since the compiler would otherwise have no way to assign these ID's statically in the load image in the first place.)

The current set of assigned symbols is shown below.

RuntimeError: The object ID of the exception class to use for run-time errors. When a run-time error occurs, the VM will construct an instance of this class and invoke its constructor with the VM error number (an integer value), and optionally additional arguments describing the error. The VM then throws the new object in the same manner that the program code would throw an exception.

exceptionMessage: The property ID that exception objects use to store the text of a message describing the exception. The VM uses this property for two purposes. First, whenever the VM creates a RuntimeError instance in response to a run-time exception condition, the VM will store a default message for the error in the new instance's exceptionMessage property. Second, if program execution terminates due to an unhandled exception, the VM will look in this property of the unhandled exception object for a message to display.

Constructor: The property ID to use for notifying new TADS objects of their own creation. Whenever a new object is created programmatically (for example, via the NEW instruction), the VM evaluates this property of the new object.

Destructor: The property ID to use for notifying TADS objects of their imminent destruction. Whenever the garbage collector determines that a TADS object is unreachable, the VM invokes this method on the object prior to actually destroying the object.

LastProp: The last property ID value that the load image itself uses. The VM is free to use any property ID value greater than this number for its own internal purposes. In particular, the VM may use this value in future versions to assign values to pre-defined properties that are not provided by the load image file, to ensure that the values assigned do not conflict with any used in the loaded program.

ObjectCallProp: The property ID that the PTRCALL instruction invokes if an object is used as the function operand to this instruction.

propNotDefined: The property ID that the GETPROP instruction and related instructions invoke when a property of an object is evaluated and the object does not define the property.


Intrinsic Functions

The T3 VM is designed to be independent of the source language used to prepare user code, and is also meant to be independent of the host application and operating environment. In order to accomplish this independence, the VM does not specify a fixed set of built-in functions, but instead allows multiple sets of built-in functions.

An intrinsic function is simply a native-code routine that can be called from byte code, and which is part of the VM in the sense that it can access internal VM methods and data structures. Intrinsic functions can perform computational operations, such as extracting a substring from a string, and can also provide access to the host application and operating environment, such as displaying output.

Intrinsic functions are provided in "function sets." Each function set has one or more unique identifiers (i.e., names), and consists of a vector of native code function pointers. A function set is essentially equivalent to an interface, as the term is used in CORBA or COM: it provides a fixed set of entrypoints with specific invocation protocols, all packaged together and assigned a universally unique identifier. Given a particular identifier, a program can rely on obtaining a specific set of services with a particular invocation protocol.

A single function set may have more than one identifier, because each version of a function set gets a new identifier. Each new version can, if it is desired, incorporate previous versions as subsets, simply by placing the same functions in the old versions at the same locations in the vector. Since each old version is a subset of the new version, the function set can be used equally well as an old version or as the new version. This scheme allows safe evolution of a function set over time; older programs will continue to run unchanged on newer VM's, since their older function sets will still be available as subsets of new function sets, but new programs will correctly detect when the newer function sets they rely upon are not available in an older VM.

Each program image file contains a function set selector. When the VM loads the program image, it reads the function set selector, which consists of one or more function set names (i.e., a universally unique identifier for each selected function set). The VM creates a function set selector vector, with one entry for each function set listed in the program image file. For each entry in the file, the VM attempts to find a function set with the same identifier; if found, the VM puts a pointer to the function set into the function set selector vector at the index specified in the file.

The function set mechanism allows the VM to be independent of the underlying intrinsics. In fact, in a dynamically linked configuration, where the intrinsics are in separate dynamic link libraries, new function sets could be installed without making any changes to the VM executable binary. Furthermore, compilers can likewise be designed to be independent of the built-in functions by allowing programs to specify function sets as input (via declarative syntax in the source code, for example), thus allowing access to new versions of the run-time function set without making any modifications to the compiler.

To call an intrinsic function, the program uses the BUILTIN byte code instruction. This instruction specifies the function set number, which is the index into the function set selector vector, and the function number, which is the index into the function set vector itself. The VM retrieves the function set vector from the selector vector based on the function set number, and retrieves the function pointer from the function set vector based on the function number.
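
A minimal C sketch of this two-level dispatch might look like the following; all names and the fixed selector vector size are illustrative assumptions.

    /* Illustrative BUILTIN dispatch: the function set number selects a
       set from the selector vector, and the function number selects the
       native function within that set. */
    typedef void (*intrinsic_fn)(unsigned int argc);

    typedef struct
    {
        const char         *name;    /* universally unique identifier */
        const intrinsic_fn *funcs;   /* the function set vector */
        unsigned int        count;
    } function_set;

    static const function_set *selector_vector[16];

    static void call_builtin(unsigned int set_idx, unsigned int func_idx,
                             unsigned int argc)
    {
        const function_set *fs = selector_vector[set_idx];
        fs->funcs[func_idx](argc);
    }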

A special form of the instruction, BUILTIN0, calls an intrinsic in the first (zeroth) function set in the selector vector. This is simply an optimization; most programs will rely mainly on a single intrinsic function set, so their byte code size can be reduced slightly by using this form of the instruction for the most frequently invoked function set, since this form omits the function set number operand (the function set is implicitly number zero).

Note that configurations of the VM may have different function sets available. The function set selection process will ensure that a program prepared to expect a particular function set will only load into a VM that has been configured with that function set. However, one function set is required in all T3 implementations: the "t3vm" function set.


External Native Functions

The VM is designed in such a way that it can be extended in the future to provide a mechanism for invoking functions implemented in native code but not linked directly into the VM. This feature, if and when implemented, would allow users to write VM programs that access external hardware or operating system features that are not available through the intrinsic functions.

This feature will depend upon the dynamic linking or loading capabilities of each operating system, and the specific implementation will necessarily vary by operating system. The VM will specify an interface that must be provided by the host application environment for loading and invoking external functions.

The VM will provide a set of interfaces to external functions that allow an external function to call back into the VM to perform certain operations. For example, these functions will allow an external function to retrieve its actual parameters, return values, and manipulate objects in the VM.

To facilitate addition of this feature in future versions, this specification reserves the opcode CALLEXT (0xB7).

At the current time, there is no specific plan for implementing the external native code invocation feature. This feature will be given further consideration in the future if sufficient demand arises among the system's users.


Exceptions

Exception handling is a structured method of performing non-local jumps within user code. The T3 VM supports exceptions as a native feature.

Exception handling has two parts: throwing exceptions and catching exceptions.

Throwing an exception is simple. User code creates an exception object, initializes it with any data that describes the exception, and then throws the exception. A VM instruction, THROW, performs this operation. (Note that the exception object does not actually have to be newly created, since a pre-existing exception object would serve just as well. In practice, most programs will create a new exception object to describe each exception condition that arises, but an exception object is often re-thrown several times by separate nested exception handlers.)

To catch an exception, VM code must specify a block of "protected" code, and one or more exception handlers which catch exceptions for the block. An exception handler can handle a specific type of exception (including subclasses of the exception type), or can be a "finally" block, which is executed unconditionally for a given block of code and hence implicitly catches all exceptions that occur within the corresponding protected block.

An exception is specified by an exception object, which is simply any TADS object instance. The TADS object inheritance mechanism is used to determine whether a particular exception instance is a subclass of a given exception type.

The compiler must build an exception table for each function that contains exception handlers; the compiler stores the location of the exception table (as an offset from the start of the function) in the method header for the function. When an exception is thrown, the VM locates the exception table by reading its location from the method header at the EP register.

The exception table consists of an array of handlers. Each handler specifies the range of code that it protects (expressed as the offset from the start of the function's method header of the beginning of the protected range, and the offset from the start of the function's method header of the last byte of the protected range), the exception class that it catches (given as an object ID), and the offset within the function of the first byte of the handler. A "finally" clause, which catches any exception, is simply represented as a handler for an invalid object ID; this causes the handler to be invoked for any exception.

When protected code is nested within other protected code, the compiler is responsible for generating the exception table with the innermost exception handler appearing first in the table; the VM simply scans the table for the first handler for the exception, so nesting is handled simply by ensuring that the innermost handler for a range appears earlier in the table than any enclosing handlers for the same range.

When an exception is thrown, the VM starts out by scanning the exception table for the current frame; the VM locates the table by getting its location from the method header at the EP register. If no suitable handler is found in the exception table, the VM restores the enclosing frame (using the same procedure that it would to return from the current function, although a few steps can be omitted due to the lack of a return value) and tries again with the exception table of the enclosing frame. This continues until a suitable exception handler is located, or the outermost frame is reached. If the outermost frame is reached and no handler is found, the VM terminates and returns to the host environment. (The host environment may at this point restart the VM at a particular entrypoint, or may display the exception and abort the application, or take whatever other action is appropriate.)
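
The per-frame search can be pictured with the following illustrative C sketch. The record layout mirrors the description above; 0 stands in here for the invalid ("finally") object ID, and is_instance_of() is an assumed stand-in for the TADS object inheritance test.

    typedef unsigned long vm_obj_id;

    typedef struct
    {
        unsigned long start_ofs;    /* first protected byte, from method header */
        unsigned long end_ofs;      /* last protected byte, from method header */
        vm_obj_id     exc_class;    /* class caught; 0 (invalid) means "finally" */
        unsigned long handler_ofs;  /* handler's first byte, from method header */
    } exc_handler;

    extern int is_instance_of(vm_obj_id obj, vm_obj_id cls);  /* assumed helper */

    /* returns the first (innermost) matching handler, or 0 if none */
    static const exc_handler *find_handler(const exc_handler *table,
                                           unsigned long count,
                                           unsigned long throw_ofs,
                                           vm_obj_id exc_obj)
    {
        unsigned long i;
        for (i = 0 ; i < count ; ++i)
        {
            const exc_handler *h = &table[i];
            if (throw_ofs >= h->start_ofs && throw_ofs <= h->end_ofs
                && (h->exc_class == 0 || is_instance_of(exc_obj, h->exc_class)))
                return h;
        }
        return 0;
    }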

If the VM finds a suitable handler, it pushes the exception object onto the stack, then transfers control to the handler. An exception handler's first instruction thus normally removes the item at the top of the stack and stores it in a local variable (via a GETLCLn instruction).

The exception table is stored after all of the executable instructions in the function. The compiler is expected to construct the exception table in the course of generating byte code for the function; when the compiler has finished generating byte code for the function, it sets the method header's exception table offset field to the offset of the next available byte, then writes out the exception table. (Because the exception table's offset is stored in the method header, it doesn't actually matter whether the table immediately follows the last byte-code instruction or follows other intermediate tables, so the compiler is free to put other information, such as debugging records, between the last byte-code instruction and the exception table. However, the exception table is considered part of the method, in the sense that it is required to be located in the same code pool page as the method; this ensures that the exception table is always loaded in memory when the method's code is loaded in memory, and that its location can be calculated as a simple offset from the physical location of the method header in memory.)


Input/Output and the Display

The T3 VM does not specify a display model or an input event processing model. All display and input event processing are handled by user code in cooperation with the host application environment, via the intrinsic function set.

The T3 VM has one set of primitive operations related to displaying text, however. The VM must handle the "self-printing string" datatype by displaying the text of such a string whenever it is evaluated (in most contexts). In order to accomplish this, the VM needs a default display function that it can call whenever a self-printing string must be displayed.

The VM actually defines two default string display mechanisms. The first is the default display method. This is simply defined as a property pointer. Any time a string is to be displayed, and a valid "self" object is active, and a default string display method is defined, and the "self" object defines or inherits this method, this method is invoked with the string to be displayed as the method's single argument. The second mechanism is the default display function, which is simply a function pointer. Any time a string is to be displayed, and the conditions aren't right to invoke the default display method, the default display function is called instead, passing the string as the function's argument.
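
The dispatch rule can be sketched as follows; every name here is a hypothetical stand-in, since the actual registration of the default method and function is described below.

    typedef unsigned long vm_obj_id;
    typedef unsigned long vm_prop_id;

    extern vm_prop_id default_display_method;   /* registered by the program */
    extern void      *default_display_func;     /* registered by the program */
    extern int  object_defines_or_inherits(vm_obj_id obj, vm_prop_id prop);
    extern void call_method(vm_obj_id self, vm_prop_id prop, const char *str);
    extern void call_function(void *func, const char *str);

    /* display a self-printing string per the rule above */
    static void display_sstring(vm_obj_id self, const char *str)
    {
        if (self != 0
            && default_display_method != 0
            && object_defines_or_inherits(self, default_display_method))
            call_method(self, default_display_method, str);   /* method form */
        else
            call_function(default_display_func, str);         /* function form */
    }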

In order to avoid any dependencies on a particular display model, the VM requires the program image file itself to define the default display method and function. The user program must specify these by calling an intrinsic function in the "t3vm" function set (this function set must be supported by all T3 VM implementations).

Because the user program itself handles all self-printing strings through its defined default display function, self-printing strings are nothing more than a short-hand for displaying strings through other, more general means. Given that the target application of the VM is interactive fiction games, however, it is highly desirable to make this operation as compact to represent in an image file as possible, hence this special short-hand mechanism.


Garbage Collection

This discussion is meant as an aid to implementation. The details of implementing the garbage collector are up to each implementation, so the details here are not meant to require a particular design.

Algorithm

The garbage collector uses a simple "mark-and-sweep" algorithm, which runs synchronously; in other words, all other VM operations must be suspended to perform garbage collection.

The garbage collector operates by starting with the "root set" of objects, and marking each of these objects as reachable. Then, it goes through all of those objects, and marks all of the objects to which they refer as reachable; this continues until every object that is reachable from the root set, directly or indirectly, has been marked. Finally, the garbage collector scans through all of the allocated objects and deletes each one that has not been marked as reachable.
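
The following toy C sketch illustrates only the general mark-and-sweep idea; it is not the VM's data layout or algorithm. It uses recursion and a simple per-object root flag for brevity, whereas the collector described below uses a work queue and also treats the stack and machine registers as roots.

    #define MAX_OBJS 1024
    #define MAX_REFS 8

    typedef struct
    {
        int in_use;            /* object is allocated */
        int root;              /* object is part of the root set */
        int marked;            /* reachability mark for this pass */
        int ref_count;         /* number of valid entries in refs[] */
        int refs[MAX_REFS];    /* indices of objects this object references */
    } toy_obj;

    static toy_obj heap[MAX_OBJS];

    static void mark(int id)
    {
        int i;
        if (!heap[id].in_use || heap[id].marked)
            return;
        heap[id].marked = 1;
        for (i = 0 ; i < heap[id].ref_count ; ++i)
            mark(heap[id].refs[i]);              /* trace outgoing references */
    }

    static void collect_garbage(void)
    {
        int i;

        /* mark phase: trace everything reachable from the root set */
        for (i = 0 ; i < MAX_OBJS ; ++i)
            if (heap[i].in_use && heap[i].root)
                mark(i);

        /* sweep phase: reclaim unmarked objects; clear marks for next pass */
        for (i = 0 ; i < MAX_OBJS ; ++i)
        {
            if (heap[i].in_use && !heap[i].marked)
                heap[i].in_use = 0;              /* unreachable: delete */
            heap[i].marked = 0;
        }
    }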

The "root set" is the set of objects that are directly referenced from a global pointer of some kind. First, the entire active machine stack is always reachable, because machine stack locations correspond to local variables, parameters, and intermediate results that the program can reference. Second, all of the objects that have a symbol name in the program are always reachable, because code, constant lists, and image-loaded TADS objects can always refer to these objects directly. All of the objects that are directly loaded from an image file are considered part of the root set; we mark all of these objects as part of the root set by setting the appropriate bit in the object header when we load them. Third, machine registers that contain values (data register R0) are also always reachable.

The garbage collector requires some per-object state information:

The mark-and-sweep algorithm assumes the following initial conditions. These conditions must be in effect at all times when the garbage collector is not running.

The loader ensures that the above conditions are true initially, and the garbage collector ensures that the conditions hold after each garbage collection pass. So, the garbage collector can assume these conditions hold at the beginning of each pass. The garbage collection algorithm proceeds with these steps:

The reference_object(object, state) routine consists of these steps:

The mark_all_references(state) routine consists of the following procedure:

Performance Discussion

The algorithm described above scales linearly in the total number of objects allocated (reachable plus unreachable).

Several garbage collection algorithms of considerably greater sophistication have been developed for other virtual machines. Most notably, several GC algorithms can run concurrently with other VM operations, allowing GC to be performed continuously in a background thread.

The T3 VM specifies a synchronous GC algorithm for two reasons: first, it's very simple; second, the T3 VM's targeted application provides an ideal environment for a synchronous GC. The added complexity (both in terms of the GC algorithm and of the overall VM architecture) of concurrent garbage collection does not seem warranted for this application.

The T3 VM is designed for an interactive environment with frequent and slow user input. We expect to spend the vast majority of wall-clock time waiting for a human user to enter input, and we do not expect to perform very much computation at a stretch between input events. These user input pauses provide a perfect opportunity to perform garbage collection.

Three configurations are possible for the garbage collection algorithm described above during user input. The first configuration is trivial, but the other two configurations can be used to take maximum advantage of the delay waiting for user input, by allowing the user to work on input while the garbage collector is running.


Finalizers

The T3 memory management system provides a mechanism that notifies an object of its pending deletion. T3 notifies an object via its "finalizer" method; this is a special method that the VM invokes directly.

The VM invokes an object's finalizer method at some point between the time that the garbage collector determines that the object is "finalizable" and the time that the garbage collector makes the object's memory available for re-use.

An object becomes finalizable when the object is not directly reachable from the executing program and the object's finalizer method has never been called before. See the garbage collection discussion above for details on the finalization state and reachability.

The VM invokes the finalizer method with no arguments. If a finalizer method throws an exception that it doesn't catch within its own code (or invokes a method that throws an exception that the finalizer doesn't catch), execution of the finalizer terminates, but the exception has no further effect; the VM internally catches and ignores the exception.

Finalizers can be invoked at any time after the garbage collector determines that an object has become finalizable. The garbage collector can never free or re-use an object's memory until after the object's finalizer method returns.

The effect of the finalizer method varies by metaclass. The String and List metaclasses ignore the finalizer method. The TADS Object metaclass invokes the property designated by the symbolic name "Destructor" in the predefined properties in the image file, if any.


Virtual Memory and Swapping

The T3 Virtual Machine supports a form of memory re-use that allows a small machine to run a program whose image is larger than can fit into available physical memory. We refer to this as "swapping," because the VM accomplishes this task by keeping only a portion of the program image in memory at any given time, thus allowing physical memory to act as a window onto the program image.

The T3 VM does not allow swapping of any memory that can be written at run-time. Hence, objects cannot be swapped out of memory, nor can any part of the stack.

The VM does allow swapping of certain constant data. Since only constant data can be swapped, the VM does not need to maintain any sort of swap file; since the data cannot change at run-time, the data can always be reconstructed simply by loading data from the program image file.

In particular, the VM can swap the "code" and "constant" segments. All references to data in these segments are made through offset values, not physical pointers, so segment loading can be performed in the course of offset translation: if the segment containing an offset being translated is not present, it is loaded at that time.

Simplifying assumptions:

Swapping is optional: The swapping mechanism is an optional subsystem. Two implementations of the memory pool manager are available: a swapping version and a non-swapping version. The non-swapping version simply loads the entire program image; translating an offset to a pointer is simply a matter of looking up the offset in the appropriate segment table to yield a physical address. The swapping version maintains an LRU cache of segments; each time an offset is translated, the swapping version checks to see if the required segment is loaded, and if not loads the segment, discarding the oldest segment if necessary to re-use its space.
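
A non-normative C sketch of the swapping version's offset translation follows, with hypothetical helper names and an arbitrary segment size.

    #define SEG_SIZE 4096u

    typedef struct
    {
        unsigned char *mem;        /* 0 if the segment is not resident */
        unsigned long  last_use;   /* timestamp for LRU replacement */
    } segment;

    extern segment segments[];                       /* one per pool segment */
    extern unsigned long clock_tick;
    extern unsigned char *evict_lru_segment(void);   /* frees and returns a buffer */
    extern void load_segment_from_image(unsigned int seg_no, unsigned char *dst);

    /* translate a code/constant pool offset to a physical pointer, loading
       the segment from the read-only image file if it isn't resident */
    static unsigned char *translate_offset(unsigned long ofs)
    {
        unsigned int  seg_no  = (unsigned int)(ofs / SEG_SIZE);
        unsigned long seg_ofs = ofs % SEG_SIZE;
        segment *seg = &segments[seg_no];

        if (seg->mem == 0)
        {
            seg->mem = evict_lru_segment();
            load_segment_from_image(seg_no, seg->mem);
        }
        seg->last_use = ++clock_tick;
        return seg->mem + seg_ofs;
    }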

TADS 2 and Performance Discussion

The TADS 2 VM used a swapping system, since it was written at a time when small machines were common and games frequently exceeded the available memory on most machines. The TADS 2 swapping system was quite costly in terms of performance and VM complexity; given that typical computer memories today can easily accommodate even an extremely large game, the performance cost argues against implementing a similar swapping scheme today.

However, small machines are still in use, so it would be desirable to make some allowance for running T3 VM games in limited-memory environments. The T3 swapping system described above provides this capability, and its cost in performance and VM complexity is slight in comparison to the TADS 2 system.

Some important differences between the two systems:

Debugger Interaction

Debuggers need write access to code segments, because they need to insert breakpoints into the byte code. Since the code-swapping model relies on the fact that code segments are read-only (in that the backing store is the image file itself, which we must treat as read-only), we cannot swap out a code segment that has been modified, hence we cannot swap out a code segment that contains a breakpoint.

Two possible solutions to this complication come to mind.

First, we could simply require that debuggers use a completely non-swapping configuration. This is the simplest approach, and is probably acceptable in nearly all cases: debuggers are needed only for program development, and most program development will take place on relatively powerful machines on which a non-swapping configuration will work well (in fact, non-swapping configurations will generally be preferred on larger machines because of the lower overhead involved).

Second, the swapping system could allow a means for the debugger to pin a page in memory so that it cannot be discarded. The debugger would be responsible for pinning any page that it had modified with a breakpoint, and unpinning the page when all breakpoints had been cleared from the page. This is a more complicated solution than simply turning off swapping entirely, but it would make it possible to debug some programs on smaller machines where swapping is required. However, debugging large programs might not be possible on small machines, because there may not be sufficient memory to keep enough pages pinned in memory after setting several breakpoints.


Undo

Overview

The T3 VM has an integrated "undo" system that allows changes to be reversed. Undo applies to all state stored in objects; other state (such as the machine registers and the stack) is not part of the undo system. Since the undo feature is part of the VM, programs written for the VM automatically take advantage of the undo capability, without any need to pay attention to this feature in the design of the user code.

The undo mechanism records changes relative to "savepoints." A savepoint is effectively a snapshot of the system at a given time. Once a savepoint is created, all changes made are recorded with the savepoint, so that all changes since the savepoint can be undone as a group.

Transient objects are not affected by undo. There is no need to save undo information for changes made to transient objects.

The undo mechanism allows only a limited number of savepoints. If a new savepoint is created, and the number of existing savepoints equals the maximum number allowed, the undo mechanism automatically discards state associated with the oldest savepoint.

Programmatic Access

The undo system is controlled through a set of intrinsic functions accessible to a byte-code program through the "t3vm" system interface function set (which must be supported by all T3 VM implementations):

Implementation

Savepoints and Machine Registers

Savepoints are identified by a number from 1 to 255. Each time a savepoint is created, the number is incremented; if the result is 256, the number wraps to 1. Note that this imposes an architectural limit on the number of savepoints; in practice, however, the actual number of savepoints kept will probably be much smaller, since more than twenty or thirty savepoints is of little practical value and would not justify the amount of memory needed to store that much undo information.

The VM defines two global register values associated with undo. The Last Savepoint register contains the savepoint number of the most recent savepoint that was created. The Savepoint Count register contains the number of active savepoints. If Savepoint Count is zero, it means that there are no active savepoints.

Undo Records

The undo mechanism uses a double-ended queue of undo records. The queue is double-ended because we must add records to one end (when saving a record, it's added at the "newest" end), but remove records from both ends (when applying undo, records are removed from the "newest" end; when discarding an obsolete savepoint, records are removed from the "oldest" end).

Note that an efficient implementation of the double-ended queue uses an array treated as a circular buffer of undo records. Two indices must be kept into the array: the index of the next available record, and the index of the oldest record in use. To allocate a record, we simply advance the next available record index; if the next available record index meets the oldest record index, the queue is full and we must discard one or more savepoints.

Each undo record contains the object associated with the undo; an identifying key, used by the associated object to identify the change; and a data holder, used to store the original value of the data changed by the operation for which undo is being kept.

The identifying key is meaningful only to the object that created the undo record. The key may be a property ID or an integer. TADS Objects, for example, store undo for property changes, so they use property ID's as undo keys; arrays use integer keys to store undo for element changes.
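
The following sketch illustrates one way the undo record and the circular-buffer queue might look. The names and types are invented for illustration; in particular, tagging each record with its savepoint number is only one possible way of associating records with savepoints (a real implementation might instead insert boundary markers into the queue), and the "data holder" is reduced to an empty placeholder.

    /* Sketch of the undo record and its double-ended queue, implemented as
     * a circular buffer (hypothetical names and simplified types).  One
     * array slot is always left unused so that a full queue can be
     * distinguished from an empty one. */
    #include <cstddef>
    #include <cstdint>

    typedef uint32_t vm_obj_id_t;            /* object ID (placeholder) */
    struct vm_val_t { /* ... */ };           /* "data holder" (placeholder) */

    struct UndoRecord {
        vm_obj_id_t obj;       /* object that created the record */
        uint32_t    key;       /* property ID or integer index */
        vm_val_t    old_val;   /* original value, restored on undo */
        int         savept;    /* savepoint this record belongs to */
    };

    template <size_t N> class UndoQueue {
    public:
        UndoQueue() : next_(0), oldest_(0) { }

        bool empty() const { return next_ == oldest_; }
        bool full() const  { return (next_ + 1) % N == oldest_; }

        /* add a record at the "newest" end; the caller must discard a
         * savepoint first if the queue is full */
        bool push_newest(const UndoRecord &r) {
            if (full()) return false;
            buf_[next_] = r;
            next_ = (next_ + 1) % N;
            return true;
        }

        /* remove the record at the "newest" end (applying undo);
         * the caller checks empty() first */
        UndoRecord pop_newest() {
            next_ = (next_ + N - 1) % N;
            return buf_[next_];
        }

        /* remove the record at the "oldest" end (discarding an obsolete
         * savepoint); the caller checks empty() first */
        UndoRecord pop_oldest() {
            UndoRecord r = buf_[oldest_];
            oldest_ = (oldest_ + 1) % N;
            return r;
        }

        /* peek at the newest record without removing it */
        const UndoRecord &newest() const { return buf_[(next_ + N - 1) % N]; }

    private:
        UndoRecord buf_[N];
        size_t next_;     /* index of next available slot ("newest" end) */
        size_t oldest_;   /* index of oldest record in use */
    };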

Creating a Savepoint

To create a savepoint, we increment the Last Savepoint register, wrapping back to 1 if the value was already 255. Next, if the Savepoint Count register has already reached the maximum number of savepoints allowed (this value is determined through the host application environment, and may be set using, for example, configuration options such as preference settings or command line switches in the host application; in any case, this value is never more than 255), we must discard the oldest savepoint, as described below. Finally, we increment the Savepoint Count register to count the new savepoint.

To discard the oldest savepoint, we subtract the Savepoint Count from the Last Savepoint value, adding 255 to the result if the result is less than 1. (Because the count has not yet been incremented for the new savepoint, this yields the savepoint number of the oldest active savepoint, which is the savepoint we will discard.) We then remove from the "oldest" end of the undo record queue all of the records associated with that savepoint, and decrement the Savepoint Count register to reflect the discarded savepoint.
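
A sketch of this bookkeeping follows, using invented names; MAX_SAVEPTS stands for the host-configured limit, and the value 32 is only an example.

    /* Sketch of savepoint creation and discarding (illustrative only).
     * last_savept and savept_cnt correspond to the Last Savepoint and
     * Savepoint Count registers. */
    #include <cassert>

    static int last_savept = 0;           /* Last Savepoint register */
    static int savept_cnt  = 0;           /* Savepoint Count register */
    static const int MAX_SAVEPTS = 32;    /* example host configuration */

    void discard_savept_records(int /*savept*/)
    {
        /* remove all records for this savepoint from the "oldest" end of
         * the undo record queue (see the queue sketch above); omitted */
    }

    void create_savepoint()
    {
        /* advance the Last Savepoint number, wrapping from 255 back to 1 */
        last_savept = (last_savept == 255 ? 1 : last_savept + 1);

        /* if we're already at the limit, drop the oldest savepoint first */
        if (savept_cnt >= MAX_SAVEPTS) {
            /* oldest active = Last Savepoint - Savepoint Count, wrapped
             * into the 1..255 range */
            int oldest = last_savept - savept_cnt;
            if (oldest < 1)
                oldest += 255;

            discard_savept_records(oldest);
            --savept_cnt;
        }

        /* count the new savepoint */
        ++savept_cnt;
        assert(savept_cnt <= MAX_SAVEPTS);
    }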

Applying Undo

When asked to apply undo, we first check to make sure undo is available. If the Savepoint Count register is zero, no undo information is available, so the operation fails. Otherwise, we proceed with the undo.

Starting with the undo record at the "newest" end of the undo record queue, we remove from the queue each record associated with the savepoint and apply that record. This allows us to apply records in reverse order of creation, which ensures that we restore state correctly by essentially running the clock backwards.

To apply an undo record, we find the object associated with the undo record, and call its method to apply the record. The object is responsible for interpreting the identifier in the record and restoring its state accordingly.

After applying all undo records for the savepoint, we decrement the Last Savepoint register, wrapping to 255 if the result is below 1, and we decrement the Savepoint Count register.
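
Combining the register and queue sketches above, applying undo might look roughly like this (illustrative only; apply_undo_record stands in for the per-metaclass dispatch, and the types and registers are the hypothetical ones defined earlier):

    /* Sketch of applying undo back to the most recent savepoint.  Builds on
     * the UndoQueue/UndoRecord and savepoint-register sketches above. */

    /* dispatch to the owning object so it can interpret the key and restore
     * the saved value; the real per-metaclass dispatch is omitted here */
    void apply_undo_record(vm_obj_id_t /*obj*/, uint32_t /*key*/,
                           const vm_val_t & /*old_val*/) { }

    bool apply_undo(UndoQueue<4096> &undo_queue)
    {
        /* if there are no active savepoints, no undo is available; fail */
        if (savept_cnt == 0)
            return false;

        /* pop and apply every record belonging to the current savepoint,
         * newest first, so state is restored in reverse order of change */
        while (!undo_queue.empty()
               && undo_queue.newest().savept == last_savept) {
            UndoRecord rec = undo_queue.pop_newest();
            apply_undo_record(rec.obj, rec.key, rec.old_val);
        }

        /* retire the savepoint: back up Last Savepoint (wrapping 1 -> 255)
         * and decrement the Savepoint Count register */
        last_savept = (last_savept == 1 ? 255 : last_savept - 1);
        --savept_cnt;
        return true;
    }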

Undo Handling by Metaclass

Lists and Strings: these metaclasses do not need to store any undo information at all, since their state cannot change.

Weak reference lists: this metaclass does not store any undo information. The only way the state of a weak reference list can change is when the garbage collector deletes a weakly referenced object, at which time we delete the entry containing that object from the list; this operation cannot be undone, so it generates no undo information, and hence this metaclass requires no undo storage at all.

Arrays: the array metaclass must store undo information on each element changed in the array. In order to avoid storing redundant information, the array keeps one bit per array element indicating whether undo has been stored for the element in the current savepoint; when this bit is set for an element, the array does not create another undo record when changing the element, since the record would be redundant with the record already saved for the element in the same savepoint. If, when setting an element's value, we find that the element does not already have undo saved in the current savepoint, we create a new undo record, set the record's integer key to the element's index, and save the element's original value in the undo record.
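
For example, an array's set-element operation might record undo along these lines. This is only a sketch with invented names: the undo-record call and the value type are placeholders, and the per-savepoint reset of the bit vector is shown as an explicit callback for clarity.

    /* Sketch of per-element undo tracking for the array metaclass
     * (hypothetical names; vm_val_t and the undo-record creation call are
     * placeholders for the real mechanisms). */
    #include <cstddef>
    #include <vector>

    struct vm_val_t { /* ... */ };

    /* append a record with the given integer key and original value to the
     * global undo queue; omitted in this sketch */
    void add_undo_record(void * /*obj*/, size_t /*int_key*/,
                         const vm_val_t & /*old_val*/) { }

    class ArraySketch {
    public:
        explicit ArraySketch(size_t n) : elements_(n), undo_saved_(n, false) { }

        void set_element(size_t idx, const vm_val_t &new_val) {
            /* save undo only once per element per savepoint */
            if (!undo_saved_[idx]) {
                add_undo_record(this, idx, elements_[idx]);
                undo_saved_[idx] = true;
            }
            elements_[idx] = new_val;
        }

        /* called when a new savepoint is created: every element becomes
         * eligible for a fresh undo record in the new savepoint */
        void on_new_savepoint() {
            undo_saved_.assign(undo_saved_.size(), false);
        }

    private:
        std::vector<vm_val_t> elements_;
        std::vector<bool> undo_saved_;   /* one bit per element */
    };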

Weak reference arrays: this works the same as normal arrays, except that we keep no undo when setting an array entry to nil due to the deletion of a referenced object by the garbage collector. Since such deletions cannot be undone, there is no need to store any undo when they occur. In addition, when a weak reference stored in one of our undo records is deleted, we set the undo record's saved value to nil, since we would not want to restore the deleted object reference upon applying the undo record.

TADS Objects: the TADS object metaclass must store undo information on each property changed in the object. This metaclass maintains a bit array, with one bit per property table slot; each bit indicates whether undo has been saved for the property slot in the current savepoint. Whenever we set a property value, we check the undo bit for the property slot: if the bit is set, we do nothing, since the original value for this savepoint has already been saved; otherwise, we add an undo record, setting the key to the property ID being changed and storing the old value of the property. If the property did not previously exist, we must store the value "empty" in the undo record, so that undoing the change will delete the property.

Interaction with Garbage Collection

A subtle question about saving and applying undo information is: how do we undo a change that made an object unreachable, and thus eligible to be deleted during garbage collection? In other words, what happens if we restore a reference to an object, and the object has already been deleted by the garbage collector?

The simple answer is that we never delete any objects referenced by undo information in the first place. The way we accomplish this is equally simple: when the garbage collector asks an object to mark all of the objects it refers to as reachable, we mark not only the objects to which we directly refer, but we also mark the objects to which any active undo records refer.

For example, suppose object A has a property P that contains a reference to object B (A.P := B). During garbage collection, assuming A is reachable, then A will mark B as reachable because one of A's properties refers to B. Now suppose that we set a savepoint, and then execute code that changes A.P to nil. Now, A no longer directly refers to B. However, A does have an undo record saving the fact that A.P formerly contained a reference to B. So, during garbage collection, A will still mark B as reachable, because it can reach B through an undo record, even though it can't reach B directly. If we then apply the undo record, restoring A.P to a reference to B, we don't have to worry about whether B still exists, because the presence of the undo record prevents B from being deleted.
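
In sketch form, an object's mark-reachable callback might look like this (illustrative only; the collector interface and the data layout are invented placeholders):

    /* Sketch of reachability marking that includes undo records
     * (illustrative; mark_object_reachable and the structures stand in for
     * the real collector interfaces). */
    #include <cstdint>
    #include <vector>

    typedef uint32_t vm_obj_id_t;

    struct PropEntry { vm_obj_id_t ref; };      /* simplified: object refs only */
    struct UndoEntry { vm_obj_id_t old_ref; };  /* saved original reference */

    /* collector bookkeeping omitted in this sketch */
    void mark_object_reachable(vm_obj_id_t /*id*/) { }

    struct ObjectSketch {
        std::vector<PropEntry> props;         /* current property values */
        std::vector<UndoEntry> undo_records;  /* active undo for this object */

        /* garbage collector callback: mark everything we can reach */
        void mark_refs() const {
            /* mark objects referenced by current state */
            for (const PropEntry &p : props)
                mark_object_reachable(p.ref);

            /* also mark objects referenced only through undo records, so
             * applying undo can never resurrect a deleted object */
            for (const UndoEntry &u : undo_records)
                mark_object_reachable(u.old_ref);
        }
    };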

But, isn't this terribly inefficient, since we can't collect objects that are effectively unreachable except through undo? Yes, it does take up memory, but only to the extent that undo takes up memory anyway. Consider the alternative: if we allow the garbage collector to delete B when A.P is nil but A still has an undo record that refers to B, then applying undo will set A.P to B again, and hence we need some way to un-delete B. The only way to un-delete B is to store a copy of B somewhere when we deleted it in the first place, and then create a new object from this saved copy. We clearly need to keep the saved copy of B for as long as we have any undo records that refer to the original B. So, in this alternative scheme, we need to keep a copy of B around for exactly as long as the first scheme keeps the original B around -- in other words, it uses exactly the same amount of memory, but with considerably greater complexity! Keeping around the original is simple, because it fits into the main garbage collection scheme naturally.

Undo and Save/Restore

We do not save undo information in saved state files. Saving has no effect on the current undo state, so it is still possible after saving to undo to a savepoint prior to the save operation.

When restoring a saved state file, we simply discard all accumulated undo information. Undo information currently in memory obviously has no meaning after we've restored a saved state file, and the saved state file itself has no undo information stored, so we simply cannot perform undo after restoring.


Image Loading

The image file format is the subject of a separate document. However, we'll discuss here the details of how the VM loads image files into memory.

Image files have three main sections: constants, code, and objects.

Constants and code are arranged into "pool" structures. A pool is an array of fixed-size pages. Data within a pool is referenced by a 32-bit offset value; the specific page containing a given offset can be obtained by dividing the offset by the page size to get the page index (the first page is at index 0), and the offset within that page is the remainder of this division. The image file contains a header for each pool that describes the structure of the pool (specifically, the number of pages and size of each page); the VM uses this information to load the pages into memory, either at start-up or on demand, depending on the memory management configuration of the VM implementation.
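
As a concrete illustration, assuming (purely as an example) a page size of 4096 bytes, offset 10000 falls on page 2, at byte 1808 within that page:

    /* Worked example of pool offset translation, assuming a hypothetical
     * page size of 4096 bytes. */
    #include <cassert>
    #include <cstdint>

    int main()
    {
        const uint32_t page_size = 4096;      /* from the pool's image header */
        const uint32_t ofs = 10000;           /* 32-bit pool offset */

        uint32_t page_idx = ofs / page_size;  /* first page is index 0 */
        uint32_t page_ofs = ofs % page_size;  /* remainder of the division */

        assert(page_idx == 2 && page_ofs == 1808);
        return 0;
    }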

Objects are arranged into variable-size blocks. The VM always loads all object data at start-up, since all objects must be resident in memory throughout VM execution. The image file contains header information giving the number of object data blocks and the size of each block; within each block, the image file contains information describing each object within the block. Each object must be entirely contained within a single object block; an object may not span multiple blocks.

The arrangement of objects into blocks is purely a convenience for certain smaller environments where the size of a single memory allocation block is limited (for example, on MS-DOS, blocks cannot exceed 64k, even when more than 64k of total memory is available to be allocated). Hence, we place an upper limit of 64k on the size of a single object data block. The compiler should use as large a block size as possible within this limit, since each block has some space overhead (both in the image file and in memory at run-time). The recommended algorithm for the compiler is to fill up each block as much as possible (up to the 64k limit), creating a new block whenever adding another object to an existing block would exceed the limit.
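
A sketch of the recommended packing algorithm follows (illustrative only; it ignores per-block and per-object header overhead, and assumes that the objects have already been grouped by metaclass and that no single object exceeds the block limit, since an object may not span blocks):

    /* Sketch of the compiler's recommended object-block packing: greedily
     * fill each block, starting a new block whenever the next object would
     * push the current block past the 64k limit.  Illustrative only. */
    #include <cstdint>
    #include <vector>

    static const uint32_t BLOCK_LIMIT = 64 * 1024;   /* 64k upper bound */

    /* group object sizes into blocks; each inner vector is one block */
    std::vector<std::vector<uint32_t>> pack_objects(
        const std::vector<uint32_t> &obj_sizes)
    {
        std::vector<std::vector<uint32_t>> blocks;
        uint32_t cur_size = 0;

        for (uint32_t sz : obj_sizes) {
            /* start a new block if this object won't fit in the current one */
            if (blocks.empty() || cur_size + sz > BLOCK_LIMIT) {
                blocks.emplace_back();
                cur_size = 0;
            }
            blocks.back().push_back(sz);
            cur_size += sz;
        }
        return blocks;
    }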

Each block has an associated object metaclass. Hence, every object in a particular block is of the same metaclass. (This saves space in the image file, by saving us the need to specify a metaclass with each object.)

To load objects, the VM loader performs the following steps at start-up:

Note that all objects in an image file are implicitly part of the root set.

One important feature of the loading model is that it allows a memory-mapped file to be used directly, without duplicating any of the data in the file separately in memory. This is especially important for diskless platforms (such as handheld computers), since all files are inherently memory-mapped on such devices. With a memory-mapped file, the memory that holds the image file is directly used without duplication as the memory containing the executing program.


Restarting

Restarting the machine involves simply restoring the machine state to its initial state. Given that the VM is designed to make initial program load very fast, restarting can be accomplished simply by reloading all objects.

Note that transient objects (i.e., objects specifically marked with "transient" status) are not affected by a Restart operation. These objects should simply be ignored during Restart processing.

When restarting, all undo state must be dropped, since it can no longer be meaningfully applied after objects have reverted to their initial state.

Note that only objects in the root set will be reverted to their initial state by a reload. We can simply leave all other objects in their current state; we can't simply delete all non-root set objects, since stack elements may still refer to non-root set objects. (Transient objects in particular could continue referring to non-root set objects.)


Saving and Restoring

The T3 VM has the ability to save the machine state to a file, and later restore the machine state from a previously saved file. Because this capability is built into the VM, programs written for the VM can take advantage of the save/restore mechanism without any need to pay attention to this feature in the design of the user code.

Saving

Saving the machine state involves writing the state of all non-transient objects to a file. Only object state is saved; the state of the stack and machine registers is not saved. In addition, it is unnecessary to save constant data and code, since these do not change in the course of execution and can simply be reloaded from the program image.

Because objects are maintained in memory in a format suitable for portable file representation, we can save objects simply by writing out their internal state directly to the file, as a direct binary transfer.

All non-transient objects, both root set objects and non-root set objects, are written to the saved state file. The state of the objects effectively captures the state of the machine.

Note that we do not save or restore the state of the stack or machine registers. This allows execution to continue normally after a restore, without suddenly jumping into a new context, so that user code that restores a saved state can proceed directly with post-restore processing. If we restored the stack and machine registers, the code immediately following the original save would suddenly be executing after a restore (giving the same effect as a setjmp/longjmp in C).

There is one complication, which is that a non-transient object can contain a reference to a transient object. The non-transient object will be saved, but the transient object will not be. We address this by saving, for each transient object, a record that the object is transient; we don't save the transient object itself, but merely a placeholder indicating that a transient object was there. When we restore, we can use these placeholders to set to "nil" each reference to a transient object.

Restoring

Restoring a saved state essentially reverses the serialization process of saving the objects. However, there is an added dimension: we must "fix up" references among objects in the saved state with the new ID's we assign to the newly-loaded instances of the saved objects.

In order to restore saved state, we must first delete all non-root set objects. This is necessary because the objects in the saved state file must be restored using the same object ID that they had when they were saved, since the file may contain references to objects by ID. In addition, we must set to "nil" any references from a saved object to an object that was transient at the time of saving.

The typical Restore implementation starts by loading the "table of contents" from the saved state file, which lists the ID's of all saved objects in the file's numbering scheme. We then assign a new ID for each object, remembering the association between the file's numbering scheme and the new ID numbering scheme. Then we load each object, deserializing the data saved for the object. Finally, we must fix up the deserialized data by translating each object reference in the data from the file's numbering scheme to the new numbering scheme, by looking up the old object number in our table of contents and substituting the new ID number.
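
In sketch form, the fix-up pass might look like this (illustrative only; the names and data structures are invented, and the nil substitution for transient placeholders described under Saving is folded in):

    /* Sketch of restore-time object ID fix-up (hypothetical names and
     * simplified data structures).  file_id values come from the saved
     * state's numbering scheme; each is translated to the newly assigned
     * run-time ID, and references to objects recorded as transient at save
     * time are replaced with nil. */
    #include <cstdint>
    #include <unordered_map>
    #include <unordered_set>
    #include <vector>

    typedef uint32_t vm_obj_id_t;
    static const vm_obj_id_t VM_NIL_OBJ = 0;   /* "nil" reference */

    struct RestoreMap {
        std::unordered_map<vm_obj_id_t, vm_obj_id_t> file_to_new;  /* table of contents */
        std::unordered_set<vm_obj_id_t> transient_at_save;         /* placeholders */

        /* translate one saved reference to its post-restore value */
        vm_obj_id_t fix_up(vm_obj_id_t file_id) const {
            /* references to transient placeholders become nil */
            if (transient_at_save.count(file_id))
                return VM_NIL_OBJ;

            /* otherwise map the file's ID to the newly assigned ID */
            auto it = file_to_new.find(file_id);
            return it != file_to_new.end() ? it->second : VM_NIL_OBJ;
        }
    };

    /* fix up every object reference in one deserialized object's data */
    void fix_up_object_refs(std::vector<vm_obj_id_t> &refs, const RestoreMap &map)
    {
        for (vm_obj_id_t &r : refs)
            r = map.fix_up(r);
    }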

Interaction with Undo

Saving has no effect on undo. However, undo information is not saved to the state file.

When restoring, all undo information must be discarded, since the undo information is not meaningful in the context of the restored state.





Copyright © 2001, 2002 by Michael J. Roberts. Preliminary version 15 March 1999.