
Goodbye, Object Oriented Programming

I guess a worse case is when you want to do something like:

for (int i = 0; i < 100; i++)
{
    int x = bar(i);
    if (x == SPECIAL_CASE)
        return foo;
    // stuff
}
return foobar;



I don't know for sure about OCaml, but it is probably the same as in F#, where you just can't do that directly. I'm not sure any of the workarounds are as efficient, and certainly none are as straightforward to type. Again, Rust lets you do this.
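For illustration, one common F# workaround is a sketch like this (reusing the hypothetical bar, foo, foobar, and SPECIAL_CASE placeholders from the snippet above, and ignoring the per-iteration "stuff"), which expresses the early exit as a search:

let result =
    match seq { 0 .. 99 } |> Seq.tryFind (fun i -> bar i = SPECIAL_CASE) with
    | Some _ -> foo      // first special case found: bail out early
    | None -> foobar     // scanned the whole range: fall through

It does stop at the first hit, but it is a different shape of code from the plain loop.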


Edited by Jack Mott on
I realized that people have a different definition of what "Functional Programming" means, and that really changed the conversation.

When I said that FP was terrible and wasn't going to solve any problems, just create more, I was thinking of the syntax of functional languages. As a fun example:

// (assume zero(), one(), add(), subtract(), multiply(), divide(),
//  sqrt(), and equal() are defined in the obvious varargs way)

public int getFibonacciNumber(int n) {
    return (int) divide(subtract(exponentiate(phi(), n), exponentiate(psi(), n)),
        subtract(phi(), psi()));
}

public double exponentiate(double a, double b) {
    if (equal(b, zero())) {
        return one();
    } else {
        return multiply(a, exponentiate(a, subtract(b, one())));
    }
}

public double phi() {
    return divide(add(one(), sqrt(add(one(), one(), one(), one(), one()))),
        add(one(), one()));
}

public double psi() {
    return subtract(one(), phi());
}


But many of you are not talking about Lisp or Scheme and all the crazy parentheses or the
(car(car(cdr(cdr(list)))))
but just the idea that pure functions, which don't change or maintain state, can be better.
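A minimal sketch of that distinction, in F#:

// Impure: reads and mutates hidden state, so two identical calls
// can return different results.
let mutable counter = 0
let nextId () =
    counter <- counter + 1
    counter

// Pure: the result depends only on the input, so it is trivially
// testable and safe to call from anywhere.
let nextIdPure current = current + 1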

I've been doing this for years, and in fact I am currently working on a server that uses Akka Actors to process network commands, and the Actors all follow the stateless, functional idea.

So I guess it's just the syntax of some pure functional languages that bugs me. Huh.

That kind of syntax drives me crazy too, which is why I love the |> operator, which you can create in many languages if it isn't there by default.

So instead of something like

let x = [| 1 .. 10000 |] //an array

sum(map(filter(x, odds), square))


which can be pretty hard to read, if you have an operator like |>, which is just defined as:

let (|>) x f = f x


then it can become:

x |> filter odds |> map square |> sum  //filter out only odds, square each value, then sum all values


So it reads the same way it actually operates. F# has this by default; in OCaml you can define it with that one line of code. In Rust you can kind of approximate it with a macro, but it seems more common there to use a method-chaining style:

x.filter(odds).map(square).sum()



Edited by Jack Mott on
My problem with all that |>-ing is that naively we'd define map as
map :: (a -> b) -> [a] -> [b]

which, again naively, appears to produce a new list every time it is called, which we would rather not do when it is part of a larger chain of intermediate steps. In Haskell this is supposed to just work because everything is evaluated lazily. But sometimes it doesn't just work, maybe because of some terrible graph-reduction edge case that requires a master's degree to understand and 800 megabytes of memory to evaluate*, or because [] is a linked list internally and maybe you really want some other abstraction pulled from category theory to better express what you are doing. And then later you discover that Real Haskell Programmers pepper their code with strictness annotations and know never to define map in such an amateurish way anyway.

But that's just Haskell. F#, I gather, is similar with its lazy Seq functions, and Rust of course does everything with iterators, so what you get back from map and filter is not a new array but an iterator that has yet to have its lever cranked. Clojure solves this problem yet another way with transducers, which is really the same thing but uses higher-order functions instead of iterators to earn some brownie points for being stateless.
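A sketch of what that looks like on the F#/Seq side (Seq.filter and Seq.map return lazy sequences, so no intermediate array is allocated and nothing runs until Seq.sum pulls values through):

let total =
    [| 1 .. 10000 |]
    |> Seq.filter (fun n -> n % 2 = 1)   // lazy: no new array
    |> Seq.map (fun n -> n * n)          // lazy: fused with the filter
    |> Seq.sum                           // evaluation happens here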

Basically, by and large, in the functional world your classic examples of functional code--by which I more or less mean expressing data transformations as morphisms, so x.map().filter().andSoOn() can become compose(map, filter, andSoOn)(x) for 'free'--do not do what they purport to do; they also, and sometimes only, prepare the machinery required to make the transformation happen. And it's a lot of machinery, and it's different for every language, and it will bite you in the ass if you don't understand it (and web programmers by and large don't). So it's just No Silver Bullet all over again. (Oh, and one claim I hear, about programming 'in the problem domain and not the solution domain', has already collapsed for FP, let alone OOP, which has since moved from being the problem domain to the new solution domain.)

But! This is all very little compared to what I wanted to get to in the first place, which is that I do not mind the difference between
sum(map(square, filter(odd, x)))

and
x |> filter odds |> map square |> sum

simply because the first is much closer to the natural-language way of expressing it, "sum the square of each odd number in x," than the second, which is a sequence of high-level instructions that happens to create the desired output. Something of a taboo in FP as I understand it, the frisson of the imperative, ah.

Anyway, with currying you can see at a glance that the last term, x, can be pulled out to make it point-free. |> just complicates things in that context, although it obviously has its uses.
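For instance (a sketch, assuming odd and square are the predicate and function from the snippets above):

// Currying lets the trailing x drop out: compose the stages directly.
let sumOfOddSquares = Seq.filter odd >> Seq.map square >> Seq.sum

// usage: sumOfOddSquares x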

*Understanding seq in Haskell is probably a bit like understanding pointers in C.

Edited by graeme on
timothy.wright
I realized that people have a different definition of what "Functional Programming" means, and that really changed the conversation.


Definitely. There is functional programming as a mindset, functional programming as a first-class citizen (i.e. functional programming in a language designed with it in mind), and functional programming as a second-class citizen (i.e. in a language not designed with that in mind). The mindset is what's valuable.

Having said that, one of the things I love about Haskell is that it is absolutely no-compromise when it comes to "pure" functional programming. This would normally be a disadvantage in a programming language; however, Haskell's unofficial motto is "avoid success at all costs". Haskell is deliberately designed to be the language of the future, in that (unless you are a researcher or work for a European bank) you will never use Haskell for your day job today, but the features developed in Haskell today will find their way into the language that you will use for your day job in a few years' time.

Haskell is probably the first mainstream programming language whose explicit purpose is to be a test lab/proving ground. If there isn't a known theoretically pure way to get some feature, Haskell will not get that feature until someone has done the research to find a theoretically pure way to do it.

If you're a closet theoretician (as I am), that's an extremely valuable thing to have. Long may Haskell remain unsuccessful.

timothy.wright
When I said that FP was terrible and wasn't going to solve any problems, just create more, I was thinking of the syntax of functional languages. As a fun example:

At the risk of stating the obvious, this "fun example" is clearly not written in a "functional language". This code is what you'd find in a book called "Teach Yourself Guide to the Black Art of Functional Java in 21 Days for Idiots" or words to that effect.

Edited by Andrew Bromage on
I have been looking at Haskell as something new to learn. I think I'll give it a try.
Good luck with learning Haskell. I must admit I bounced right off Haskell, and quite a few other functional languages, when I was trying to get into FP. For me, the language that finally made FP "click" was Erlang. However, I can't really say whether that was because of something about Erlang or simply because, by that point, I had already tried to learn FP a few times and was at a stage where I was ready to grasp it.

Returning to the original topic of the Goodbye OOP article, have any of you read this blog post by Robert C. Martin? I'm not sure if it's a direct response to that article (it was written around the same time) or just a response to the general anti-OOP feeling around at the moment, but I think it makes some valid points. What do you reckon?

Edited by Laurie on
Uncle Bob
It's time to simply get down to work. We need to choose a language, or two, or three. A small set of simple frameworks. Build up our tools. Solidify our processes. And become a goddam profession.


I may agree with that, but it's very common for people to nerd rage when feeling cheated by the promise of something never delivered, so I sympathize with the "Goodbye, OOP" developers.

Possible root causes of not feeling like a profession are (1) programming knowledge is shared in biased, opinionated ways, and (2) we're scaring the new generation of programmers into thinking they're not good enough to improve upon frameworks or language design (e.g. we encourage almost blind use of existing things above all else). I don't think anyone who uses libraries on a daily basis can say we've gotten to where we need to be. However, we do need to have a conversation about how to improve things with more standardized metrics, not just another conceptual agenda to justify a new language.

EDIT: To add to #2, the ability to think about how to improve our profession relies on trusting developers to freely dig deep into the inner workings of their platforms. If we're stuck at the surface level, our thought process will be pretty shallow.

Edited by Abner Coimbre on
Things from the article I would like to discuss:

1. All we are really doing is reinventing the wheel, over, and over again. And we're wasting massive amounts of time and effort doing it. I see waste. Massive, incalculable, waste. Waste, piled upon waste, piled upon even more waste.

Casey Muratori mentioned in a Handmade Hero episode how it is wrong to compare the rewriting of software to "reinventing the wheel". I agree. A wheel and axle is one of six simple machines that form the building blocks of all other machines. No one is reinventing the wheel.

The six simple machines are: lever, wheel/axle, pulley, inclined plane, wedge, and screw.
In computers, the simple machines are: and, or, not, nand, nor, xor, and xnor. (or some subset of those)

No one is reinventing these. But I see cars continue to evolve, and so will software. Such an egotistical view, to compare Java to the wheel.

2. OO isn't dead. OO was never alive. OO is a technique; and a good one. Functional programming is not "better" than Object Oriented programming. Functional Programming is a technique, and a good one, that can be used alongside Object Oriented programming.

If either one of these technologies was good, then we wouldn't be here, where everything is broken, nothing works, systems crash all the time, and everyone just pretends things are better because the hardware people have been learning to use the tools better, making computers faster, to hide all the problems.

3. Progress in software has followed a logarithmic growth curve. In the early years, progress was stark and dramatic. In later years the progress became much more incremental. Now, progress is virtually non-existent.
Every year, though we apply massive effort, we make less progress than the year before; because every year we get closer and closer to the asymptote.

A logarithmic curve is what happens when you continually try to improve a flawed technology. Hardware has an exponential curve. Why is that?

4. Have you ever used IntelliJ or Eclipse to program Java? Those are incredibly powerful tools.

IntelliJ and Eclipse (which we use at my work every day) are some of the most broken, bloated pieces of crap software I have ever used. This opinion alone tells me that this is just another theoretical article from an out-of-touch computer scientist.

5. We need to realize that we have hit the asymptote. It's time to stop the wasteful churning over languages, and frameworks, and paradigms, and processes. It's time to simply get down to work. We need to choose a language, or two, or three. A small set of simple frameworks. Build up our tools. Solidify our processes. And become a goddam profession.

I agree of course, but first we have to figure out what the "six simple tools" are for software. So far, that still remains an elusive mystery.

Edited by Timothy Wright on
While new languages could be improving the wheel, a lot of them are just slight rehashes of an old thing.

There are like a dozen variants of ML right now that are "the hot new thing", for instance F#, OCaml, and Reason.

A couple Lisps

At least 2 Javas (Java and C#)


It would be nice if we condensed these efforts a bit. Maybe then Java would have had value types for years and C# would have had auto vectorization for years.

By the way, you might be interested in IntelliJ's zero-latency typing mode:
https://blog.jetbrains.com/idea/2...y-typing-in-intellij-idea-15-eap/


Interesting points about the article Timothy.

Timothy Wright
it is wrong to compare the rewriting of software to "reinventing the wheel"

I think this is an important insight, and I'd certainly not thought about it this way before. However, I wonder if this isn't at risk of being a bit pedantic? I mean, certainly you have demonstrated that the wheel is a poor analogy, but in other industries they reuse a lot more than just the six simple machines you describe, right? I am guessing here, because I've never worked in these industries, but I always thought that car manufacturers would extensively reuse large existing components (not just wheels), and computer hardware manufacturers would reuse large blocks of circuitry that are working well. Maybe this is what the article is trying to draw a comparison with, rather than a literal wheel/axle-level building block?

Timothy Wright
If either one of these technologies was good, then we wouldn't be here, where everything is broken, nothing works, systems crash all the time, and everyone just pretends things are better because the hardware people have been learning to use the tools better, making computers faster, to hide all the problems.

I'm not quite sure I follow this. I don't accept that everything is broken, but I'm assuming you mean that rhetorically rather than literally, so fair enough. However, is your argument here that had OO and FP simply never been used, then software would be less "broken"?

Timothy Wright
A logarithmic curve is what happens when you continually try to improve a flawed technology. Hardware has an exponential curve. Why is that?

Again, could you elaborate on this? Is your argument that the natural course of things is for exponential improvement, and that falling short of that means you're doing something wrong? My impression was that part of the point of this article was to argue that exponential improvement isn't actually to be expected as the norm, and this isn't a new idea. For example, the No Silver Bullet article does a good job of explaining why we're unlikely to see orders-of-magnitude improvements in software engineering in the future, and it doesn't need the misguided use of OOP to explain it.

Timothy Wright
IntelliJ and Eclipse (which we use at my work every day) are some of the most broken, bloated pieces of crap software I have ever used.

I won't comment on IntelliJ or Eclipse because although I've used both, I don't work with them day in day out so I can't really comment on their power or lack thereof.

Timothy Wright
This opinion alone tells me that this is just another theoretical article from an out-of-touch computer scientist.

I find that quite hard to accept. Robert C. Martin is really not an aloof academic computer scientist. He's spent his life working in the software industry. That doesn't mean he's right of course, but I simply can't dismiss him as someone lacking relevant experience or background.

Timothy Wright
we have to figure out what the "six simple tools" are for software. So far, that still remains an elusive mystery.

You're absolutely right on this last point; what he never says in this article is what those simple tools are, or perhaps more importantly, who gets to decide what that core toolset should be. If you asked programmers to list their core toolsets, I imagine it would be difficult to find two programmers who would draw up the same list!

Lots of great stuff here. I'm thinking about it (and also trying to work on two different projects).

...
On the "basic machines" metaphor, I think we do understand at least some of the basic pieces that software is built from, at least in the sense that we are now convinced that these pieces were discovered, not invented.

For example: The tuple (also known as "record", "struct", and "product"), and the discriminated union (also known as "variant record", "coproduct", or "sum") are two basic machines from which software is made. Discriminated unions are a bit of an odd thing, because fewer programming languages have them than you might hope. However, they are not only basic (in category theory-speak, they are universal objects), they are also very closely tied together (in category theory-speak, they are dual).
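To make the pair concrete, here is a minimal sketch in F# (one of the languages in this thread with native discriminated unions):

// Product: a tuple bundles values together; you always have all of them.
let point : int * int = (3, 4)

// Sum (discriminated union): a Shape is exactly one of these cases.
type Shape =
    | Circle of radius: float
    | Rect of width: float * height: float

// Pattern matching must account for every case, which is part of what
// makes sums and products such well-behaved building blocks.
let area shape =
    match shape with
    | Circle r -> System.Math.PI * r * r
    | Rect (w, h) -> w * h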

Another basic machine is the "closure", which is often realised as a "function pointer", "function object", or related notion.
Pseudonym
On the "basic machines" metaphor, I think we do understand at least some of the basic pieces that software is built from, at least in the sense that we are now convinced that these pieces were discovered, not invented.

For example: The tuple (also known as "record", "struct", and "product"), and the discriminated union (also known as "variant record", "coproduct", or "sum") are two basic machines from which software is made. Discriminated unions are a bit of an odd thing, because fewer programming languages have them than you might hope. However, they are not only basic (in category theory-speak, they are universal objects), they are also very closely tied together (in category theory-speak, they are dual).

Another basic machine is the "closure", which is often realised as a "function pointer", "function object", or related notion.


A closure is really just a combination of a tuple and dynamic dispatch, where dynamic dispatch is a pure stateless function pointer.
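A sketch of that view in F#, hand-desugaring a closure into an explicit environment plus a stateless function:

// A real closure capturing n:
let addN n = fun x -> x + n

// The same thing by hand: a tuple of captured state plus a stateless
// function that takes its environment explicitly.
let addEnv (env: int, x: int) = x + env
let makeAdder n = (n, addEnv)        // "closure" = (environment, code pointer)
let apply (env, f) x = f (env, x)

// apply (makeAdder 3) 4 = 7, same as (addN 3) 4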

Going further down the road of "basic machines" I can see 3 broad categories of operations:

  1. Control flow: everything that manipulates which bit of code runs next.
  2. Variable selection: everything that decides which variables the code runs on, like getting a member of a struct, indexing into an array, dereferencing a pointer, ...
  3. Computation: all manipulation of the values of the variables.


In many esoteric Turing-complete languages (as long as they aren't too deep into the Turing tarpit) you can see those three categories in the instruction set.
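A quick sketch labelling each category in an ordinary loop (F#, for consistency with the snippets above):

let sumPositives (xs: int[]) =
    let mutable total = 0
    for i in 0 .. xs.Length - 1 do   // control flow: the loop picks what runs next
        let v = xs.[i]               // variable selection: indexing into the array
        if v > 0 then                // control flow: branching
            total <- total + v       // computation: manipulating values
    total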