Thoughts on defer and memory ownership

So after watching Jon Blow's 2014 programming language video again, as well as an article refuting a lot of his points, I have been thinking about how to properly handle memory ownership in my own code and what approach seems best. For example, right now I have an Animation class with some dynamic arrays that I need to allocate when the class is initialized, and, as with most classes that allocate memory, I will (most likely) eventually need to free that memory. Currently I'm using an ad-hoc defer macro, which I invoke at the construction site of a given Animation like so:

Animation animation { heap };
defer { CleanUpAnimation(animation); };

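(For context, a defer with this call-site syntax is usually built in C++ as a scope guard behind a macro. Here is a minimal sketch of that machinery; all the names are illustrative, not the actual macro from my code.)

```cpp
#include <utility>

// Minimal scope-guard "defer" sketch: the lambda captured at the defer
// site runs when the enclosing scope ends (names are illustrative).
template <typename F>
struct Deferred {
    F fn;
    ~Deferred() { fn(); }   // run the deferred body at scope exit
};

struct DeferTag {};
template <typename F>
Deferred<F> operator+(DeferTag, F fn) { return Deferred<F>{std::move(fn)}; }

#define DEFER_CAT2(a, b) a##b
#define DEFER_CAT(a, b) DEFER_CAT2(a, b)
#define defer auto DEFER_CAT(deferred_, __LINE__) = DeferTag{} + [&]()

// Returns 1 if the deferred body ran when the inner scope closed.
int run_demo() {
    int cleaned = 0;
    {
        defer { cleaned = 1; };   // same call-site syntax as above
    }                             // scope ends -> lambda runs here
    return cleaned;
}
```

The `DeferTag{} + lambda` trick is what lets the macro end in `[&]()`, so the call site can supply the `{ ... };` body itself.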

You'll notice that I'm explicitly indicating in the Animation constructor that memory will be allocated on the heap, and I do this to signal to myself that I need to think about releasing this memory. With that cue I can add a defer { cleanup() } right under it, which gives me the same functionality as RAII if that's what I want, or I can place the defer somewhere else (say, an outer scope) if for some reason I don't want to release the memory at this particular function's end.

Right now this way seems more explicit and more flexible than something like RAII, which to me is a good thing. However, I haven't worked on many serious, large projects yet, and I know that one thing C++ and other languages laud is making memory/resource management the responsibility of the class that allocates the memory, thus ensuring resources are released more reliably and the burden of resource management isn't placed on the programmer. To me, though, making resource acquisition and release explicit is preferable: the programmer should know when memory is being allocated and freed, and it doesn't seem that difficult a burden to think about.

What are some of your opinions on the matter? I just hear so many different things from different experts and find it difficult to come to any conclusions.


Edited by Jason on Reason: Initial post
In this example, if you are using a C++ class constructor, then I don't see why CleanUpAnimation couldn't simply be placed inside its destructor. It would be less error prone and less to type.
I would agree that it gives you more visibility, but the only flexibility I can think of is that you might decide not to free the allocated memory.
So I would say it's a tradeoff between "more typing" and "more visibility" that you have to decide for yourself :)
If you are not the only one working on the codebase, you should probably stick to the "classical" approach, but I don't know.
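Concretely, the destructor version might look something like this. This is only a sketch: the field names and the allocation strategy are stand-ins, since the real Animation class isn't shown in the thread.

```cpp
#include <cstdlib>

// Sketch of moving CleanUpAnimation into the destructor (field names
// and malloc/free are stand-ins for the real Animation's allocations).
struct Animation {
    float *frames;
    int    count;

    explicit Animation(int n)
        : frames(static_cast<float *>(std::malloc(n * sizeof(float)))),
          count(n) {}

    ~Animation() { std::free(frames); }   // replaces the explicit defer

    // A type that owns heap memory should not be silently copied
    // (otherwise two destructors would free the same pointer).
    Animation(const Animation &) = delete;
    Animation &operator=(const Animation &) = delete;
};
```

With this, `Animation animation{64};` needs no paired defer: the free happens at scope exit automatically, which is the "less to type, less error prone" point above.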

Also sorry for being the "stack overflow guy" and suggesting option three of two, but I think this is one of the problems that linear allocators / arena allocators solve very cleanly.
void do_some_rendering_or_something(memory_arena *scratch){
    // save the state of the allocator so we can rewind to this point
    temporary_memory temp = begin_temporary_memory(scratch);

    // do a bunch of allocations, might call functions and whatever
    for(int i = 0; i < 100; i++){
        Animation animation = push_animation(scratch);
        // ...
    }

    // and they are all gone
    end_temporary_memory(temp);
}


So instead of thinking about memory ownership, which I always find confusing, you purely think about the lifetime of the allocations.
Here you might also do defer { end_temporary_memory(temp); } or have a destructor for temporary_memory, so you can early-out.
If you don't know the articles by gingerBill about memory allocation I think they are a good read :) link.
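For anyone unfamiliar with the pattern, the begin/end pair above can be sketched as a save/restore of a bump pointer. This is an illustration of the idea, not the actual memory_arena implementation from the snippet:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative bump allocator with temporary-memory save/restore.
struct memory_arena {
    unsigned char *base;
    size_t         size;
    size_t         used;   // bump pointer: everything below `used` is live
};

void *arena_push(memory_arena *arena, size_t bytes) {
    assert(arena->used + bytes <= arena->size);  // out of arena space?
    void *result = arena->base + arena->used;
    arena->used += bytes;
    return result;
}

struct temporary_memory {
    memory_arena *arena;
    size_t        saved_used;   // arena state at begin_temporary_memory
};

temporary_memory begin_temporary_memory(memory_arena *arena) {
    return temporary_memory{arena, arena->used};
}

void end_temporary_memory(temporary_memory temp) {
    // Rewinding the bump pointer "frees" everything pushed since begin,
    // no matter how many allocations happened in between.
    temp.arena->used = temp.saved_used;
}
```

This is why the loop in the snippet can push 100 animations and release them all with a single call: nothing is tracked per-allocation, only the high-water mark.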

mmozeiko
In this example, if you are using a C++ class constructor, then I don't see why CleanUpAnimation couldn't simply be placed inside its destructor. It would be less error prone and less to type.


Ya, I suppose for this particular scenario there shouldn't be much issue with just placing the CleanUp call in the destructor. I guess what I'm trying to get at is what my default memory-handling mindset should be, and whether the memory bugs/performance issues that crop up are better prevented by keeping things more explicit.

Recyrillic
Also sorry for being the "stack overflow guy" and suggesting option three of two, but I think this is one of the problems that linear allocators / arena allocators solve very cleanly.
void do_some_rendering_or_something(memory_arena *scratch){
    // save the state of the allocator so we can rewind to this point
    temporary_memory temp = begin_temporary_memory(scratch);

    // do a bunch of allocations, might call functions and whatever
    for(int i = 0; i < 100; i++){
        Animation animation = push_animation(scratch);
        // ...
    }

    // and they are all gone
    end_temporary_memory(temp);
}


Actually, I have seen this sort of thing in the 4coder code base. This could be a decent solution, as it still keeps things explicit but saves a bit of typing in cases where I have a bunch of different objects that require memory allocation. So instead of:

Object1 thing(heap);
defer { cleanup(thing); };

Object2 thing2(heap);
defer { cleanup(thing2); };

Object3 thing3(heap);
defer { cleanup(thing3); };


I could write:

Temporary_Memory temp = begin_temp_mem(scratch);

Object1 thing(scratch);
Object2 thing2(scratch);
Object3 thing3(scratch);

// Just have temp's destructor release everything at end of scope
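That destructor idea could be sketched like this. Memory_Arena and the field names here are stand-ins to make the example self-contained, not code from 4coder or any real codebase:

```cpp
#include <cstddef>

// Sketch of the comment above: Temporary_Memory rewinds the arena in
// its destructor, so the rewind happens automatically at end of scope.
struct Memory_Arena {
    unsigned char *base;
    size_t         size;
    size_t         used;
};

struct Temporary_Memory {
    Memory_Arena *arena;
    size_t        saved_used;   // arena state when this scope began

    explicit Temporary_Memory(Memory_Arena *a)
        : arena(a), saved_used(a->used) {}

    ~Temporary_Memory() { arena->used = saved_used; }  // rewind on scope exit

    // Copying would rewind twice; forbid it.
    Temporary_Memory(const Temporary_Memory &) = delete;
    Temporary_Memory &operator=(const Temporary_Memory &) = delete;
};
```

This is essentially RAII applied to the allocation *lifetime* rather than to each object, which is a nice middle ground between the two approaches discussed above.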