Great essay on Types

Thanks for the link! As you say, very interesting article. I didn't quite understand the bit about Idris though. I don't understand how its compiler can verify at compile time that a number will be smaller than another. Obviously this is easy if we're dealing with literals, as was the case in the example, but how would that work with variables?
This is answered in comments:
in any programming language, we usually verify statically unknown data with runtime checks, and do some error throwing/handling if our data is not OK. We do the same in Idris/Agda, except we have runtime checks that return proofs of properties. If the data is OK, the checks return proofs of OK-ness, and we pass the proofs along to functions that require them. If the data is not OK, the checks return proofs of not-OK-ness.
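To make that concrete, here is a rough Haskell sketch of the same "checks return evidence" pattern (the names `NonEmpty`, `checkNonEmpty`, and `safeHead` are made up for illustration, and a real Idris/Agda proof carries much more information than this): the runtime check is the only way to produce the evidence value, so downstream functions can demand it in their types.

```haskell
-- A rough analogue of "runtime checks that return proofs":
-- the only way to obtain a NonEmpty value is to pass the check,
-- so any function taking NonEmpty can rely on the property.
newtype NonEmpty a = NonEmpty [a]  -- constructor would be hidden in real code

-- The runtime check: returns "evidence" on success, Nothing on failure.
checkNonEmpty :: [a] -> Maybe (NonEmpty a)
checkNonEmpty [] = Nothing
checkNonEmpty xs = Just (NonEmpty xs)

-- This function cannot be called with an unchecked list.
safeHead :: NonEmpty a -> a
safeHead (NonEmpty (x:_)) = x
safeHead (NonEmpty [])    = error "unreachable: checkNonEmpty rules this out"

main :: IO ()
main = case checkNonEmpty [1, 2, 3 :: Int] of
  Just ne -> print (safeHead ne)
  Nothing -> putStrLn "empty input"
```

In Idris the evidence would be a genuine proof term (e.g. that an index is in bounds), not just an opaque wrapper, but the workflow is the same: validate once, then pass the evidence along.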
Very nice rant indeed. In the spirit of handmade, I had also commented asking if anyone would elaborate on why almost nothing useful is written in the great language that is Haskell: https://gist.github.com/garybernh...a6cd674795a9c#gistcomment-1856879

I truly want to understand. My guess is that the development is too slow ("fiddling with types") to be fun and enjoyable? Or that the type system limits expressivity too much...

A similar language that I used a little bit a decade ago - OCaml - had a major problem: you couldn't print a data structure easily.

Edited by hugo on
@mmozeiko Ah thanks, I'd missed that in the comments.

@hugo Interesting question. I think there are a lot of reasons that better products, be they programming languages or anything else, can lose out to inferior ones, so I hesitate to write off Haskell for that reason alone. However I must admit that when I tried to learn Haskell, I didn't get on with it at all. I've never tried OCaml. So do you think that the type system of these languages is their main problem, rather than the fact that they are towards the pure end of functional programming languages?
Quite a good overview of different type systems.

Idris seems quite compelling with its ability to express and typecheck arbitrary knowledge at compile time. The problem with dependent typing is that you somehow have to provide the necessary type information, and because of its versatility, you often have to write a step-by-step proof. As far as I know, Idris cannot even automatically use the commutativity law for addition.
Especially when you just want to try something quickly, this is quite annoying.

Personally I am more interested in refinement typing. The type checker contains an SMT solver that understands basic arithmetic and set theory in a far more convenient way, but unlike dependent typing it can no longer prove everything.
LiquidHaskell in particular seems promising, because it is just an optional extension to Haskell, so you can switch it off during prototyping.


John_Uskglass
I didn't quite understand the bit about Idris though. I don't understand how its compiler can verify at compile time that a number will be smaller than another. Obviously this is easy if we're dealing with literals, as was the case in the example, but how would that work with variables?


For obvious reasons you can't assume any properties of yet-unknown data, e.g. input data, but in practice you often first validate the data and then hand it to a subsystem that assumes those properties, probably additionally checked with asserts. Refinement typing allows you to retain that information at the type-system level, and therefore it is provable at compile time. Even when new values are computed, you can retain some information. For example, in a binary search both boundaries never cross the middle, so you can derive the input condition for the next step.
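Here is a small sketch of that "validate once, carry the fact in the type" idea using LiquidHaskell-style refinement annotations (the `{-@ ... @-}` blocks; the names `checkPos` and `invDiv` are invented for the example). Because the annotations live in comments, plain GHC still compiles and runs the file; only the LiquidHaskell checker reads them.

```haskell
-- A refinement type: the set of Ints greater than zero.
{-@ type Pos = {v:Int | v > 0} @-}

-- The validation step. Under LiquidHaskell, the refined signature says
-- that a successful result really is positive.
{-@ checkPos :: Int -> Maybe Pos @-}
checkPos :: Int -> Maybe Int
checkPos n = if n > 0 then Just n else Nothing

-- A consumer that may assume its second argument is positive,
-- so division by zero is ruled out at compile time.
{-@ invDiv :: Int -> Pos -> Int @-}
invDiv :: Int -> Int -> Int
invDiv x d = x `div` d

main :: IO ()
main = print (fmap (invDiv 10) (checkPos 2))
```

The key point is that the check happens once, at the boundary, and every function downstream gets the guarantee for free instead of re-validating or asserting.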


hugo
In the spirit of handmade, I had also commented asking if anyone would elaborate why almost nothing useful is written in the great language that is Haskell: https://gist.github.com/garybernh...a6cd674795a9c#gistcomment-1856879

I truly want to understand. My guess is that the development is too slow ("fiddling with types") to be fun and enjoyable? Or that the type system limits expressivity too much...


A language's popularity does not directly relate to usability. Adopting a new language takes a lot of time and makes old code, and quite a lot of knowledge, useless. Additionally there need to be tools, libraries, a build system, documentation, etc. I think Haskell would be a great language for many tasks, but changing the production process would take years for many companies.

Edited by Marthog on
John_Uskglass
the fact that they are towards the pure end of functional programming languages?


You're right - that's probably the main issue (i.e. making mutation hard), especially for games. Jon must have talked about it in his Jai videos - I just haven't watched all of them yet.

Edited by hugo on
The thing about mutability is that both ends of the spectrum are very powerful. Being stateless is powerful in certain ways, as is having state. My main gripe about functional languages is the "purity". A lot of the time, I want statefulness and in languages such as Haskell, it is nigh impossible.

Some languages go to the other end and have only mutability (e.g. Go, or C, since const isn't really "constant"). But they do not suffer from those problems, as you can treat mutable variables as if they were immutable. Some languages meet in the middle (e.g. Rust) and say: immutability is the default, but mutability is easy to do too.

I think that may be part of the problem with "pure" functional languages: state is extremely useful, and those languages cannot leverage it.
This gives a great breakdown of how Idris works. To be clear, you will get a compile-time error if it is possible for your code to pass data violating the type constraints:

https://news.ycombinator.com/item?id=12350415
hugo
OCaml - had a major problem: you couldn't print a data structure easily.


OCaml is not a pure functional language like Haskell; it should be much easier for someone used to C to pick up.
You can make things mutable and do IO as normal if you want. Maybe back in the day they just had a bad printf?
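For comparison, Haskell's answer to the printing problem is `deriving Show`: the compiler generates a printer for any data type you ask it to (the `Shape` type here is just a made-up example). Historically, getting something equivalent in OCaml meant writing the printer by hand or using external tooling.

```haskell
-- `deriving Show` asks the compiler to generate a printing function
-- for this type; no hand-written printer needed.
data Shape
  = Circle Double
  | Rect Double Double
  deriving (Show, Eq)

main :: IO ()
main = print [Circle 1.0, Rect 2.0 3.0]
```

So the complaint about printing is less about purity and more about whether the language has compiler-derived (or generic) instances for common operations.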

hugo
I had also commented asking if anyone would elaborate why almost nothing useful is written in the great language that is Haskell



So there are a few barriers to entry. First, and probably biggest, is that once you are in a pure functional language things are just very, very different. You will maybe start with some idea you want to try in Haskell, like "how do I do an in-place quicksort on an array?" Well, you just don't do that. You have to relearn your approach to everything, and that is hard, and takes time. If you started out at 13 it wouldn't be as big of a problem. It might still be a little harder to wrap your head around, though, even starting tabula rasa.

The other issue is that there are performance penalties to pure functional approaches. Not always, but often, and probably very often with games. On the other hand there are also some optimizations that become easier with a pure functional language; you can take advantage of lazy evaluation automatically, for instance. Sorting a linked list in F# can't be done efficiently in place, because the list is immutable; but since F# isn't pure, you just copy it to an array, sort the array, and then build a new linked list. With Haskell you can actually sort the linked list pretty efficiently, because lazy evaluation can be optimized so well. You can also more easily take advantage of multiple cores with a pure functional language. Though sometimes this point is absurd (i.e. a function takes 8 times longer to run, but you can run it on 4 cores at once!).
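The laziness point can be seen with the classic two-line "quicksort" on a linked list: because Haskell only evaluates what is demanded, asking for just the head of the sorted list forces only the partitions on the path to the minimum, roughly O(n) work on average instead of the full sort.

```haskell
-- Lazy "quicksort" on a linked list. Not an in-place sort, but
-- `head (qsort xs)` only evaluates the partitions needed to reach
-- the minimum element, thanks to lazy evaluation.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p]
            ++ [p]
            ++ qsort [x | x <- xs, x >= p]

main :: IO ()
main = do
  print (qsort [3, 1, 4, 1, 5, 9, 2, 6 :: Int])
  print (head (qsort [3, 1, 4, 1, 5, 9, 2, 6 :: Int]))  -- partial evaluation
```

(This is the textbook illustration rather than a production sort; the point is that the consumer, not the producer, decides how much sorting work actually happens.)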

Another issue is that it just takes a lot more work to get something working. The counterpoint to this is that Haskell is often just forcing you to do work you should have been doing anyway, if you want to be sure your program is correct.

Lastly, Haskell's goal is not to be industrially useful, but rather to be an experiment in what a pure functional language should be. So there isn't a team cranking out super-high-quality standardized libraries for the things people in industry commonly do.


Edited by Jack Mott on
To return to the original topic of types, I wonder what you guys think of this article by Robert Martin about the "type wars"? (For the record, I do read other technical blogs; it's a coincidence that I'm linking to him again, honest!) His argument is that the checks offered by statically typed languages are only a subset of those offered by following TDD, but that they tend to flag up more false positives, and therefore if you're following TDD, you may be better off with a dynamically typed language.

What do you think? Is there anything to this? I think this is particularly interesting in regard to languages like Rust, which have a pretty strong type system and built-in static analysis, and yet also provide a lot of well-integrated tools for unit testing. If you're using Rust's unit testing methodically, do you still need its type system and static analysis too?
And then again, game developers don't usually develop with TDD, right?

Edited by hugo on
One thing I find a bit absurd about the TDD argument, is they are basically saying:

1. Having static type checking happen automatically for you in the language is not useful.

2. Because you should be building static type checking yourself with tests.


But these arguments are just rhetorical, and if we want to call ourselves computer scientists, then to really know if dynamic typing is good or bad, we need to *count* (i.e. do scientific studies).

Andreas Stefik is one guy who does a lot of studies on this topic, in his opinion there is solid evidence that static typing is helpful. One of his papers is here: http://dl.acm.org/citation.cfm?id=2674548

One of the interesting things he has found is that productivity is best when you have type annotations on function declarations, but *not* in the body of functions. Basically, he sees that type annotations on function declarations prevent you from having to jump to the function definition as often to see what it does, while being able to say "var x" or "auto x" or whatever in the function body just saves you time. So that would imply you want static typing with type inference, and mandatory type annotations on function signatures (which I've found to be my preference when working with F#, where they aren't mandatory, and it drives me crazy when they are not annotated!).
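Haskell happens to illustrate exactly that style (even though signatures are optional there too): an explicit type on the declaration, inference inside the body. The `meanAbs` function below is a made-up example; note the locals carry no annotations.

```haskell
-- Explicit signature on the declaration: a reader never has to
-- look inside to know what goes in and what comes out.
meanAbs :: [Double] -> Double
meanAbs xs =
  let total = sum (map abs xs)        -- type inferred: Double
      n     = fromIntegral (length xs) -- type inferred: Double
  in  if n == 0 then 0 else total / n

main :: IO ()
main = print (meanAbs [-2, 2, 4])
```

The signature documents the interface; inference keeps the body uncluttered, which is the trade-off the study points at.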



But Dan Luu (who has a great blog, by the way) has a nice article that quickly reviews some of the available research on the topic, and his conclusion is "nobody knows!":
https://danluu.com/empirical-pl/


Yeah, I'm not convinced by the TDD argument: 100% code coverage isn't a guarantee of anything, and it's even less of a guarantee in dynamic languages. TDD really is great when it's appropriate, but there's no reason to make an ideology out of it.

MandleBro, you'll be interested in Microsoft Research's Empirical Software Engineering Group. It would be nice if the most relevant link on that page weren't broken, though! Cheers, Microsoft.