I'm especially intrigued by Idris!
In any programming language, we usually verify statically unknown data with runtime checks, and throw/handle errors if the data is not OK. We do the same in Idris/Agda, except the runtime checks return proofs of properties: if the data is OK, a check returns a proof of OK-ness, which we pass along to the functions that require it; if the data is not OK, it returns a proof of not-OK-ness.
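A minimal Idris sketch of that pattern, using the prelude's `isLTE : (m, n : Nat) -> Dec (LTE m n)`: a `Dec p` is either `Yes` with a proof of `p` or `No` with a refutation, so the runtime check hands us an actual proof term we can pass to a function that demands one (the names `needsProof`/`safeDiff` here are just for illustration).

```idris
-- A function that demands evidence that m <= n:
needsProof : (m : Nat) -> (n : Nat) -> LTE m n -> Nat
needsProof m n _ = n `minus` m

-- The runtime check: isLTE returns Yes with a proof, or No with a
-- refutation. Only in the Yes branch do we hold the evidence that
-- needsProof requires.
safeDiff : Nat -> Nat -> Maybe Nat
safeDiff m n = case isLTE m n of
                 Yes prf => Just (needsProof m n prf)
                 No _    => Nothing
```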
I didn't quite understand the bit about Idris though. I don't understand how its compiler can verify at compile time that a number will be smaller than another. Obviously this is easy if we're dealing with literals, as was the case in the example, but how would that work with variables?
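For what it's worth, with variables the answer is exactly the runtime-check-as-proof pattern: the compiler does not verify the comparison at compile time; a runtime check produces evidence that the rest of the code can rely on. A sketch (Idris 1 syntax) using the library function `natToFin : Nat -> (n : Nat) -> Maybe (Fin n)`, which compares at runtime and, on success, returns a `Fin n`, a number bundled with the proof that it is below `n`:

```idris
import Data.Fin
import Data.Vect

-- i is a plain runtime variable, so the compiler cannot know i < n.
-- natToFin performs the runtime comparison; a successful result is a
-- Fin n, which is precisely what the statically-safe index requires.
tryIndex : Nat -> Vect n a -> Maybe a
tryIndex {n} i xs = case natToFin i n of
                      Just fin => Just (index fin xs)
                      Nothing  => Nothing
```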
In the spirit of handmade, I had also commented asking if anyone would elaborate why almost nothing useful is written in the great language that is Haskell: https://gist.github.com/garybernh...a6cd674795a9c#gistcomment-1856879
I truly want to understand. My guess is that the development is too slow ("fiddling with types") to be fun and enjoyable? Or that the type system limits expressivity too much...
Or is it the fact that they sit towards the pure end of the functional programming language spectrum?
OCaml had a major problem: you couldn't print a data structure easily.