So I've been thinking for quite a while about the feasibility of creating an always-running, compiled, strongly typed, hot-swappable programming environment with minimal runtime overhead (within reason). And I wonder what people here think.
I do have more questions than answers.
First off: every function would always have the same fixed memory address.
This makes hot-swapping functions trivial. No jumptables, and no patching function pointers or anything like that.
I guess that makes the memory address of the function the identifier for it (rather than just the associated label).
That would probably mean grabbing a big chunk of virtual memory address space and dividing it up, so as to give *enough* space for each function to *grow*.
Now there are a few questions associated with this:
1. How big a function could *realistically* get, how big the gaps between functions should be, and what to do if the address of a function ever has to move.
2. Is the addressable virtual address space big enough for it to never be an issue?
3. What to do with inlined functions? Maintaining some sort of function dependency graph might still be needed.
4. Is the overhead of using jumptables significant enough for it to really matter in most use-cases?
To be honest, hot-swapping functions is the easy part.
The hard part is editing data-structures at runtime.
And this is where it hits the hardest: just how ill-suited the traditional file-based textual representation is. I know exactly what I'm doing - which function I'm editing, which fields in a struct I'm removing, renaming or adding. The text editor, in broad strokes, has no idea what I'm doing.
And making the text editor aware of what I'm doing seems unnecessarily complicated.
Recompiling the changed files and figuring out what has *happened* since the last invocation feels wrong for some reason. Especially since you can explicitly tell the computer to add field X to struct Y, and that simplifies things tremendously.
The Unity approach of script hot swapping is just serializing/deserializing the state (with some caveats and limitations).
Now, adding field X to struct Y also triggers a recompile of all the functions whose code works with struct Y. So struct Y keeps a list of dependents (a list of functions, a list of structs).
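A hypothetical sketch of that dependents idea (the fixed sizes and function-id scheme are invented for illustration): each struct carries the ids of the functions that touch it, and a struct change just marks those functions for recompilation.

```c
#include <stddef.h>

/* Made-up limits to keep the sketch simple. */
enum { MAX_DEPS = 32, MAX_FUNCS = 128 };

typedef struct {
    int dependents[MAX_DEPS]; /* ids of functions whose code touches this struct */
    int ndeps;
} StructInfo;

static int needs_recompile[MAX_FUNCS];

static void add_dependent(StructInfo *s, int fn_id) {
    if (s->ndeps < MAX_DEPS)
        s->dependents[s->ndeps++] = fn_id;
}

/* Called when e.g. a field is added to the struct. */
static void on_struct_changed(const StructInfo *s) {
    for (int i = 0; i < s->ndeps; i++)
        needs_recompile[s->dependents[i]] = 1;
}
```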
With DLLs, each call external to the DLL jumps to a page filled with jump instructions, each of which jumps to the correct function; the loader patches that table when the DLL is loaded, based on the identifier of the function. To hot-swap, you just need to update the jump instruction in every loaded binary that uses it.
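The same indirection can be sketched in portable C with a table of function pointers standing in for the page of jump thunks (the names and versions here are made up): every call goes through the table, and hot-swapping is a single store.

```c
/* One-entry "import table" standing in for the DLL's page of jump thunks. */
typedef int (*fn_t)(int);

static int add_one_v1(int x) { return x + 1; }
static int add_one_v2(int x) { return x + 1000; } /* the "recompiled" version */

static fn_t import_table[1] = { add_one_v1 }; /* patched by the loader */

/* Call sites jump through the table, never to the function directly. */
static int call_add_one(int x) { return import_table[0](x); }

/* Hot-swap: one store, and every future call lands in the new code. */
static void hotswap_add_one(fn_t new_fn) { import_table[0] = new_fn; }
```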
With 64-bit addressing you will have enough virtual address space to hold every recompilation you will ever need. The real issue is making sure you free the old pages so you don't run out of physical RAM/swap space.
To bring in another example: in Java, all method calls on objects are virtual, i.e. indirect; however, the code might be optimized at runtime, which I believe can involve de-virtualizing method calls and also inlining small methods. But Java has only limited hot-swapping support.
Another option would be to load the new code for the function into as-yet-unused memory, then replace the old code with a jump to the new code. This way, only patched functions require a jump.
Editing data structures at runtime is much harder. Basically, the address of each object might change. Not only does your environment need to handle the change correctly (if objects can be allocated on the stack, you might need to rewrite the stack in addition to patching up the heap), but the programs running inside it must be robust against such changes at any time. This makes low-level memory access unsafe, and it should probably not be allowed in your environment.
Regarding your last point: there have been some efforts to build text editors, or rather AST editors, which support (or sometimes exclusively rely on) higher-level commands such as 'add field', 'add parameter to function', etc. However, these editors have not caught on yet.
I'm writing some code in an old codebase, and now it needs to serialize out a part of the game state. Which reminded me of this maxim - POINTERS ARE EVIL. No, seriously.
Everything has to be either a) an INDEX or b) a HANDLE.
If it is a handle, then it's backed by some sort of FreeList/PackedArray implementation.
FreeLists can be compacting (+ one additional indirection), non-compacting (+ some gaps in otherwise linear memory), or carry id/generation info per item (+ some memory usage) if it's necessary to detect use-after-delete scenarios.
Now with handles and indices there are no more problems with invalidating/patching pointers.
If a data type is changed, all the arrays using it have to be rearranged. And all the types which include it (by composition) have to do the same. And that's it.
And realistically within game context it's basically one Array per entity type.
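A minimal sketch of the non-compacting free list with per-item generation counters described above (the `Entity` type and sizes are made up): `resolve` returns NULL for stale handles instead of a dangling pointer, which is exactly the use-after-delete detection.

```c
#include <stdint.h>

enum { MAX_ENTITIES = 256 }; /* made-up capacity */

typedef struct { float x, y; } Entity; /* placeholder payload */
typedef struct { uint16_t index; uint16_t generation; } Handle;

static Entity   items[MAX_ENTITIES];
static uint16_t generations[MAX_ENTITIES];
static int16_t  free_list[MAX_ENTITIES]; /* stack of freed indices */
static int      free_top = -1;
static int      next_unused = 0;

static Handle alloc_entity(void) {
    int i = (free_top >= 0) ? free_list[free_top--] : next_unused++;
    return (Handle){ (uint16_t)i, generations[i] };
}

static void free_entity(Handle h) {
    generations[h.index]++; /* invalidates every outstanding handle to this slot */
    free_list[++free_top] = h.index;
}

/* Stale handles resolve to NULL instead of a dangling pointer. */
static Entity *resolve(Handle h) {
    return (generations[h.index] == h.generation) ? &items[h.index] : 0;
}
```

Note that a 16-bit index plus a 16-bit generation is 4 bytes, half the size of the 8-byte pointer complained about below.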
Pointers essentially make everything complicated (and take 8 BYTES to boot) for no real added benefit.
Is there a performance benefit from dereferencing `*pointer` over `*(pointer + offset)`? Well, no.