Reply to polymorphism in "Handmade Hero Chat 014 - CRTP and Library Design"

There is a way in C++ to find out, from a pointer to a base class, which derived class the object actually is. Whenever a class in C++ has at least one virtual method (it can be the destructor), the first field in the object will be a pointer to the vtable. That pointer is enough to distinguish the different subtypes. It is used internally by dynamic_cast, as shown in the example below:


struct entity {
  float P[3];
  float Radius;

  virtual ~entity(){}
};

struct entity_ghost : public entity {
  float Spookiness;
};

void Foo(entity *Entity) {
  Entity->P[0] += 1;
  if(entity_ghost* Ghost = dynamic_cast<entity_ghost*>(Entity)) {
    Ghost->Radius = 0.0f;
    Ghost->Spookiness += 1.0f;
  } else {
    Entity->Radius += 1.0f;
  }
}

void Bar(void) {
  entity_ghost Ghost;
  Foo(&Ghost);
}
Except the entire point of CRTP is to avoid having any virtual functions in the first place.

I feel that the point of CRTP is to get an abstract superclass with state and member functions, instead of only the duck-typed interface you get from plain templates.
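
Something like this minimal sketch (hypothetical names, not code from the stream): the base class template holds the shared state and calls into the derived type through a static_cast, so there is no vtable and no virtual dispatch.

struct vec3 { float x, y, z; };

// CRTP: the base knows the concrete type at compile time.
template <typename derived>
struct entity_base {
  vec3 P;
  float Radius;

  void Update(float dt) {
    // Resolved statically; derived::Move is a plain member function.
    static_cast<derived*>(this)->Move(dt);
  }
};

struct ghost : entity_base<ghost> {
  float Spookiness;

  void Move(float dt) {
    P.x += Spookiness * dt;
  }
};

void Tick(ghost &Ghost, float dt) {
  Ghost.Update(dt); // no dynamic_cast, no vtable lookup
}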

I feel that a lot of the idioms with janky syntax in C++ are really workarounds to get behavior that people actually (thought they) needed.
I thought the purpose of CRTP and other template *tricks* was to utterly and completely butcher compile times.

Casey could have finished with just that.

Honestly, it doesn't matter what kind of excuses C++ apologists come up with: if using a language feature sends compile times into seconds and minutes, the feature is USELESS.

Language features are supposed to bring down development time, not increase it. It's insane.

I remember some time ago I tried Rust, and a hello world program took 500 ms to compile.
That's it right there, it doesn't matter anymore what kind of *features* Rust has.

If Rust has a slow compiler, the language as a whole is USELESS.

Imagine if you could say, "hey, C++ has lots of warts and questionable design, but at least it compiles super fast". No, it's terrible in almost every dimension except for code-gen.

It's also one of the main reasons JAI has the potential to be the best language out there - no hyperbole! It's a combination of a blazing fast compiler with sane, pragmatic design.

I cried like a little bitch when Jon demoed the compiler speed.


Edited by pragmatic_hero on
Languages like Rust typically have a very slow constant part of compilation, due to how they are designed. Basically, compile time = SlowLongPartConstant + k*CodeSize. When you benchmark compiling a hello-world program, this SlowLongPartConstant is a significant factor. But the more code you add, the less important it becomes. The k*CodeSize part starts to dominate at some point. It's like how O(n) can be slower than O(n^2) for some n, because maybe the O(n) algorithm has a huge constant multiplier, but the O(n^2) one a very small one.
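
To put made-up numbers on that (purely for illustration): if SlowLongPartConstant were 450 ms and k were 1 ms per KB of source, a 1 KB hello world would take ~451 ms, almost all of it the constant, while a 10 MB project would take ~10.5 s, almost all of it k*CodeSize. The hello-world timing tells you very little about the second case.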
This code is just an edited version of the code Casey showed in the stream, and it is supposed to show that what he said is not 100% correct. It is possible in C++ to know the subtype when the compile-time type information only knows the base type. dynamic_cast is also not perfect, because I heard it has performance problems, but it allows the exact same code structure that Casey wants. No reason to complain about that.
krux02
This code is just an edited version of the code Casey showed in the stream, and it is supposed to show that what he said is not 100% correct. It is possible in C++ to know the subtype when the compile-time type information only knows the base type. dynamic_cast is also not perfect, because I heard it has performance problems, but it allows the exact same code structure that Casey wants. No reason to complain about that.


And the people that use CRTP are exactly the ones that don't want to have anything to do with virtual calls or dynamic_cast.

So they invent that kind of stuff so that they can use the existing code structure (in this case heavily OOP-inspired) without the penalties of virtual calls.
mmozeiko
Languages like Rust typically have a very slow constant part of compilation, due to how they are designed. Basically, compile time = SlowLongPartConstant + k*CodeSize. When you benchmark compiling a hello-world program, this SlowLongPartConstant is a significant factor. But the more code you add, the less important it becomes. The k*CodeSize part starts to dominate at some point. It's like how O(n) can be slower than O(n^2) for some n, because maybe the O(n) algorithm has a huge constant multiplier, but the O(n^2) one a very small one.

So what's being done in the SlowLongPartConstant?
500 ms is a looong time. Reminder: it's a hello world app.

It would be a reasonable trade-off if the SlowLongPartConstant significantly reduces the k in
compile time = SlowLongPartConstant + k*CodeSize

Does it?

I don't think compilation speed has ever been a priority for the Rust devs. It's just bad, C++ levels of bad.

And unless a language is designed with that in mind from day one, it's highly unlikely to ever get to be fast.


pragmatic_hero
It would be a reasonable trade-off if the SlowLongPartConstant significantly reduces the k

That's exactly my point. "k" is much more important once you start compiling real code instead of hello world. That's why making compilation speed decisions based on hello world is kind of silly.

It's the same as complaining about why a cout << "hello world" binary takes 600 KB or more (of course, only if STL is your thing).

Edited by Mārtiņš Možeiko on
I wish I could say that Rust and C++ had O(n) complexity in the amount of source code read; however, that's hardly true :)

In C++ with clang, which is what I looked into, there seems to be a rough O(n) relationship between naive C-like code and the resulting optimized binary object bytes. However, as soon as you bring templates to the table, the front-end part scales very non-linearly. Which is what people want, in a way, since they want low line counts for "more value"; however, this also means that the throughput (output-object-bytes / input-size-bytes) varies quite a bit.

And by "vary quite a bit" I mean: on the large C/C++ application I was studying, I measured throughputs in input-size-bytes/time:

min: 10.7KB/s (172x slower!)
max: 1850KB/s
average: 793KB/s

Unfortunately I did not compute the median.

* By the way, the input size in bytes I use is the raw, preprocessed input size. When it comes to the user's input size, you get into scary territory, like a 600-byte file that took 12 s to compile.
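
As a made-up illustration of that effect (not code from the application measured above): the source size stays essentially constant while the front-end work grows with the template recursion depth.

// Hypothetical snippet: a few hundred bytes of source, but the front end must
// instantiate Sum<500>, Sum<499>, ..., Sum<0> before generating any code.
// Raising N (and -ftemplate-depth) grows compile time without growing the file.
template <unsigned N>
struct Sum { static constexpr unsigned long long value = N + Sum<N - 1>::value; };

template <>
struct Sum<0> { static constexpr unsigned long long value = 0; };

int main() { return (int)(Sum<500>::value & 0xff); }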

Edited by Nicolas Léveillé on Reason: (Added numbers)
The worst thing is once you make the metaprogramming Turing complete. Now you have no idea whether the compilation can actually finish.

Granted, there are ways to get the compiler stuck with other undecidable problems. But Turing completeness is so easy to accomplish with minimal metaprogramming.
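
A tiny sketch of why the halting question is real (hypothetical snippet): a recursive template with no base case. Conceptually the instantiation never finishes; in practice the compiler only gives up because it enforces an instantiation depth limit.

// Hypothetical snippet: no base case, so instantiating Loop<0> conceptually
// never terminates; compilers bail out only via -ftemplate-depth style limits.
template <int N>
struct Loop { static constexpr int value = Loop<N + 1>::value; };

int main() { return Loop<0>::value; }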
ratchetfreak
The worst thing is once you make the metaprogramming Turing complete. Now you have no idea whether the compilation can actually finish.

Granted, there are ways to get the compiler stuck with other undecidable problems. But Turing completeness is so easy to accomplish with minimal metaprogramming.

Is that really a problem though?

If compile time > some cut-off threshold, something is wrong either way. At that point it doesn't even matter if the compilation could actually finish.

One of the major problems with C++ templates is that it's very hard to reason about how they're going to affect compile times without knowing compiler internals and whatnot.
That, multiplied by the C preprocessor include crazy-town. And all the utter madness.

JAI-style compile-time execution, on the other hand, is basic imperative code. Comparatively speaking, it's fairly easy to eyeball the performance. Moreover, I'm sure Jon will include some sort of way to profile how much time is spent doing CTE in each file/module or something like that.