So, a computer has K bytes of physical memory. When a process starts, the OS loads the code into physical memory, at an address unknown to us. It then uses a function (usually implemented in hardware) to map the addresses that we (the process) see to physical ones and vice versa. This function is used for instructions, stack memory, heap memory, etc; every single address goes through this function.
This means that the reason we can't write at address 0 is not because some other crucial piece of data is stored at it, and the OS is preventing us from overwriting it; when we try to write at 0, that 0 goes through the mapping function, and we are actually trying to write to a perfectly regular address in physical memory.
Is this correct so far? If I take a look at this image:
I can see that the OS leaves a gap between address 0 and the lower limit of the stack; we can't ever write to this gap "because the OS said so", probably for security reasons if I had to guess.
This also means that the reason the same piece of data (instruction or variable) is stored at different locations on different runs of the same program is also a totally arbitrary OS decision. It is not, as I thought for a long time, because the addresses used on the previous run are now being used by someone else. The OS chooses to make us start in a different place, again "because security".
But what, exactly, does this mean? Just that the gap at the start is bigger or smaller, and everything else shifts accordingly? It's not that the stack and the heap are swapped around or something, that can't happen, right?
The final question is about something Casey did in the early days of HH:
```c
// Inside WinMain() ...
void *DesiredAddress;
#if HANDMADE_INTERNAL
    DesiredAddress = (void *)Terabytes(2);  // cast added: Terabytes(2) is an integer
#else
    DesiredAddress = 0;
#endif
void *Memory = VirtualAlloc(DesiredAddress, ...);
```
Why 2 terabytes? If process layouts are randomized, how can you be sure that what you're asking for is heap memory? Maybe the randomization is limited, and that is an address that he just knew was going to be ok. But also, while watching the video (Day 14 if I'm not mistaken), I had the impression that he just chose it arbitrarily, and any other address would've worked just as well... which isn't really true, since you have to pick an address that's in the heap section, right?
When you talk about the "function" that maps the addresses: the result does not need to be an actual physical memory address. The function is allowed to return "not mapped", which means the CPU cannot read or write the contents of such a location - it does not exist in physical memory.
Address 0 in a user-space process is such a region - not mapped to any physical region. That's why code crashes when you try reading/writing it. The CPU knows whether it can access each page, and how it can access it (whether it can read, write, or execute from it). Technically nothing prevents the OS from mapping address 0 to a valid region, and then your code would be able to read and write it. But due to complications in the C language (NULL pointer dereference is undefined behavior) it will never do that.
The different location of addresses when your process runs is called Address Space Layout Randomization (ASLR). This is an optional feature; you can disable it for your binary, and then it will be loaded at a known address (code/data segments). ASLR is a security feature, so that foreign/malicious code injected into your process does not know the location of things. The way the OS implements it is that it shifts some of the things it loads - like the code and data segments - by a fixed offset (chosen randomly at startup). Or it assigns more or less random locations to virtual memory allocations whenever they are allocated. They can be anywhere in the virtual address space, as long as there is nothing else there already.
That image does not really represent the order of stack/heap/code in a modern OS. It may have been valid in the earlier 32-bit days, but nowadays locations are very random.
As to why Casey chose the 2TB address - it is a completely arbitrary decision. It is just very unlikely that, in a 2^48-byte address space, this specific address will already be taken.
Also note that when talking about virtual memory mappings, saying "heap" is wrong. Heap memory is a very different thing: it is the malloc implementation, which has nothing to do with this virtual memory mapping "function". The heap is just an extra data structure the C runtime implements on top of virtual memory, for performance reasons - doing a virtual memory allocation syscall for every single allocation would be too slow and too wasteful (as the minimum page size is 4KB).
To complement mmozeiko's answer: you can find the minimum and maximum address the user can allocate with VirtualAlloc by calling GetSystemInfo and looking at lpMinimumApplicationAddress and lpMaximumApplicationAddress. The first 64KB of memory can't be used.
In the virtual address space of the process there are some pages that are shared between the OS and the process, so you can't VirtualAlloc those addresses. For example: Why doesn't Windows use the 64-bit virtual address space below 0x00000000`7ffe0000?
And here is a two-part video with a lot of information about Windows memory management: Mysteries of Memory Management Revealed, with Mark Russinovich. If I remember correctly, the first part is about virtual memory and the second part about physical memory. Some information might be a bit outdated (I think this was before Windows 8.1 and 10, which allow 128TB of user virtual address space instead of 8TB).