What, if anything, is OOP?
Is object-oriented programming bad? Is it good? What even is it, anyway? This fishbowl explores OOP more carefully: what is the essence of it, what are the good parts, why did it take over the world, and why do we criticize it so much?
This is a fishbowl: a panel conversation held on the Handmade Network Discord where a select few participants discuss a topic in depth. We host them on a regular basis, so if you want to catch the next one, join the Discord!
bvisness
Topic: What, if anything, is OOP?
Welcome to yet another fishbowl, everyone! Just setting things up here. The main conversation is here in this thread, and the #fishbowl-audience conversation starts here: https://discord.com/channels/239737791225790464/708458209131757598/980155512412864552
The pinned messages are some "checkpoints" in the discussion; chapter markers of sorts. Check them out if there's a specific aspect of the discussion you'd like to see.
So, this fishbowl is a long time coming. Since the dawn of the network, many people have asked the question: Is object-oriented programming bad, or what?
That is, unfortunately, not a very nuanced question.
Our conversations about OOP often veer in a lot of different directions, and the subject is hard to discuss productively because it means so many different things to different people.
There are many different ideas that are bundled into the modern concept of "object-oriented programming". There are also many languages that have taken on these ideas with different decisions and tradeoffs. Our goal in this fishbowl is to break this subject down and discuss these different concepts with more nuance.
To that end, we're doing things just a little bit differently for this one. Each of our esteemed participants have picked a particular aspect of "OOP" to present, and we'll be working through those over the course of the conversation here.
On that note, let me introduce our participants:
- @demetrispanos, general wise elder of the community with particular expertise in machine learning and artificial intelligence
- @gingerBill, creator of the Odin programming language
- @Kartik Agaram, creator of creative and very low-level programming systems / operating systems such as Mu and Teliva (look them up!)
- @SirWhinesalot, a community member whose comments were instrumental to the planning process for this fishbowl (if there are projects you've worked on that I'm not aware of, let me know and I'll update this introduction)
none public
but my expertise is in Domain-Specific Languages and Constraint Solving
Before we get into the various modern conceptions of OOP, I think it would be good to take a look back at the history of object-oriented programming, and the path that we took to get to where we are today. @Kartik Agaram has graciously volunteered to walk us through that history, so - take it away, Kartik!
Is this thing on? Ok, let me paste in some prepared remarks. A quick, opinionated timeline of the major ideas that make up OO.
1958-1966: A few different people gradually come up with the notion of structured data. Conventions for managing compound data made of multiple words in memory. If you have two points, you want their x and y coordinates close together and in the same relative order. Seems obvious, but wasn't! See http://akkartik.name/sketchpad-oo.png from the Sketchpad thesis [1].
1960-1962: Ivan Sutherland works on Sketchpad. It presages many OO ideas, but they're all in the programmer's mind and prose (thesis) because the code is all machine code.
1961-1962: Simula provides language support for many OO ideas: objects, classes, inheritance, dispatch based on the precise class of an object (virtual methods).[2]
1960-1975: Data first. This was mostly a mindset thing, but led eventually to notation like obj.method. Had spread fairly wide by around 1975, when Fred Brooks said, "show me your code and I'll be mystified. Show me your data structures and the code will be obvious." [3]
1973-1975: Abstract data types by Barbara Liskov. Ignore internal details of how objects are laid out in memory. Focus instead on a small vocabulary of operations that can be performed using them. Interfaces, basically.[4]
1966-1975: Alan Kay coins OO after working on Smalltalk. (The first chapter of http://gagne.homedns.org/~tgagne/contrib/EarlyHistoryST.html is better than my history above.) "Did not have C++ in mind." However, we're only halfway done.
1970-1975: Lexical scope comes into its own, and leads naturally to closures.[5]
1980-1994: Lisp has been evolving largely in parallel, but now starts to cross-pollinate with OO. MIT Flavors (1982), CommonLOOPS (1986), CLOS (1994). OO is big on late binding. Lisp takes late binding to 11, with before/after advice and dispatch based on the types of multiple arguments. In 1994, Common Lisp decides to adopt lexical scope by default.
1992: It becomes clear that there's a yin-yang duality between objects and closures.[6] Though objects have the benefit of putting data first.
1994: The "gang of four" writes the Design Patterns book, based on ideas by the architect Christopher Alexander. Kicks off the field of OO design.
2001: Joel Spolsky coins "architecture astronauts,"[7] peak of hype cycle for design patterns. (OO design remains in vogue to this day, though.)
2007: Clojure arrives. Built on an OO foundation (the JVM) but determinedly anti-OO. Peak of the hype cycle for OO? Functional programming is now ascendant, impelled by the value of immutability.
2008-2014: Growing anti-OO sentiment[8], anti-design-patterns sentiment[9], awareness that composition is superior to inheritance[10].
2020: Richard Feldman crystallizes the case against OO[11]: OO wants to make every object a little computer. But why do we want to make all programs distributed?
Ok I'm done
Summary: there's a lot of ideas baked into "OO". Most of them are great. Some of them are showing their age. And unbundling them all feels more useful than treating them monolithically. End prepared remarks
Excellent, this is a wonderful place to start. I appreciate that you included a lot of the recent takes on OO; I think we'll especially be revisiting a lot of the 2001-present takes as we go along.
For some more context, what languages were in use around the 60s-70s when Brooks said that quote about data structures, and when Liskov was pioneering abstract data types?
In general I'm curious how some of these early ideas manifested in the tools that we now use today. (I know Java was a big one along the way, and we'll have a nice long section dedicated just to that...)
I know CLU (Barbara Liskov's language) came out in 1973
Interestingly it was the first language with built-in Tagged Unions as far as I know
Oh gosh. I wasn't around then
but my weak sense is that while there were a few different programming languages being researched, the vast majority of people were programming in machine code and Assembly back then. Big ones: Fortran, COBOL, Algol (still getting refined), Lisp 1.5.
@SirWhinesalot Yes, I think so! Too bad it took so long for them to go "mainstream".
Going back to that Brooks quote. I think he was mostly talking about how programmers communicate. Languages and compilers weren't really the dominant gatekeepers of notation back then.
It was paper, in some form.
That makes sense, I suppose that would be the case
The sense that I get (which may not be very accurate) is that in the 60s and 70s, there was a lot of pioneering work in the field happening mostly in parallel, but that the ideas hadn't really taken hold in "industry" yet. At what point would you say that began to occur?
Certainly from what I've seen of e.g. FORTRAN, the "practical tools" of the day didn't have any such concepts.
That's a good question. I have a hard time thinking of descriptions of production systems and languages used by industry. Even though Bell Labs and PARC came from industry, I think a lot of their work was fed by govt. grants and so tightly coupled with govt. rather than commercial considerations.
The software industry took off around late 70s? And then it took a while to rediscover the value of programming languages as the PC revolution took off and computers got fast enough to manage HLLs.
Turbo Pascal was 1983. That was really my introduction to high-level languages.
I know @gingerBill is quite a Pascal fan; I don't know if he would know any more about the history there
or at least a Wirth fan
The first Pascal was created by Niklaus Wirth in 1970 as a teaching tool.
It mostly became popular because of its simplicity and how easy it was to learn, especially amongst many of the minicomputer buyers in the 1970s.
Pascal (along with C) is part of a language family called ALGOL, specifically ALGOL 60.
ALGOL is the basis of pretty much every imperative (procedural) programming language to date.
It's honestly pretty crazy to me to look at the timeline here and see that nearly all these fundamental OOP-y designs were all developed independently at around the same time.
ALGOL 60 was created by many designers with many famous names too:
John Backus, Peter Naur (yes, those two of Backus-Naur Form), Friedrich Bauer (discovered the stack data structure), Charles Katz (you'll see his name in many papers), John McCarthy (AI, Lisp, etc), Alan Perlis (the first Turing Award recipient), and many many more.
And then the whole academic side runs out of steam in the 2000s, in my mind..
There's the basic history lesson for the roots of all imperative programming languages, as well as some others like Lisp and APL.
The 1980s to early 1990s is when OOP really starts picking up steam: Object Pascal, Objective-C, C++, COM, all culminating in Java in 1995
I think I can see traces of all these historical ideas in the languages we use today, and perhaps we can refer back to this list as we go
What Bill is saying about imperative programming languages may accidentally be a good segue into the first major concept of OOP that I have on my list...
Which is that OOP is when you associate behavior with data, in a sense just the idea of a "method". (I am not saying this is a completely correct concept, but it is one that I see frequently discussed.)
One really important thing that I would love to bring up before we dive deep into the philosophy questions is the distinction between Object-Oriented Programming (OOP) and Methods, and why I think this is at the heart of the discussion.
I'd argue that 99% of the reasons people want methods (and what many mean by "OOP", in a weird sense) come down to the following:
(1) Organizing procedures under a data type
(2) Allowing methods as a form of syntactic sugar for writing calls in a subject-verb-object manner, e.g. do_thing(x, y) vs x.do_thing(y)
(3) Searching for procedures/methods by a data type, especially with the aid of tools such as an IDE
Regarding (1), I personally believe this is much better solved with a well-designed package/module/library system as part of the language. Most OOP languages do not have such a system and usually use classes as a means to organize code. For my programming language Odin, packages have really aided us in terms of organization, a lot better than classes would have.
Regarding (2), this is a weird linguistic typology thing which I doubt most people will even know unless they speak multiple languages. If you want to learn more about word orders in different languages, I recommend reading the following:
* https://en.wikipedia.org/wiki/Subject%E2%80%93verb%E2%80%93object_word_order
* https://en.wikipedia.org/wiki/Verb%E2%80%93subject%E2%80%93object_word_order
Regarding (3), this is purely a tooling issue. It comes down to two main aspects:
* Many people just want to use the tools that they have and how they currently work (there is nothing wrong with that, per se)
* Many people cannot think outside of the current paradigm and think what is being asked for is not possible
Doing foo. and waiting for the IDE to autocomplete what is available for a value of a specific data type is extremely useful to many people, but there is no reason it could not work for something purely procedural; it is just that many languages nowadays do support some form of methods, and thus the general need has not arisen.
Like Common Lisp's generic functions.
So in your opinion, is it not correct to say that "associating behavior with data" is essential to OOP, because all functions are associated with some form of data? (Regardless of syntax?)
"mere methods" (Lua is a good example of this, where they're just sugar for passing in the "subject" pointer) are IMO roughly analogous to for-loops or if-blocks, i.e. a syntactic standardization of a widespread common practice
"procedural programming" could be seen as just standardizing a handful of assembly language idioms into language constructs
"mere methods" are, for me, just more of this
they standardize the pattern:

struct mything {
    // whatever
};

void mything_do_cool_stuff(struct mything *m, int x) {
    // whatever
}
I would agree with @demetrispanos and @gingerBill: methods (as in the little . notation) and OOP are independent concerns
demetrispanos
they standardize the pattern:

struct mything {
    // whatever
};

void mything_do_cool_stuff(struct mything *m, int x) {
    // whatever
}
importantly, note how in the pattern above the function is not tied to the struct; you can keep adding as many functions as you want
in OOP, classes have a fixed set of methods, usually
SirWhinesalot
importantly, note how in the pattern above the function is not tied to the struct; you can keep adding as many functions as you want
yes exactly, that's part of what I mean by "mere" (i.e. they're not wound up in some other concept like a class)
Lua is an example of something I'd like to call an Emergent OO Programming Language, but I'll leave that for a little later to reduce confusion.
Lua after all is a table oriented language.
There's a few of those
SirWhinesalot
in OOP, classes have a fixed set of methods, usually
"open to extension" is another wonderful idea we should discuss at some point. ❤️ Ruby
Smalltalk as well
demetrispanos
yes exactly, that's part of what I mean by "mere" (i.e. they're not wound up in some other concept like a class)
Would you elaborate on what extra ideas a "class" adds to the picture?
I'd say a "mere class" is a struct with a fixed set of "mere methods" (echoing some comments from above)
but classes as practiced add more ideas like visibility, inheritance, etc.
I'd note that while "mere methods" are IMO definitely useful, it's not clear to me what you get from a "mere class"
I think a mere class only becomes useful when subtype polymorphism is involved (i.e., a vtable of some sort)
The evolution I tend to see is:
* Putting the primary data type first is a nice way to organize vocabularies in people's minds. (I think this is what y'all mean by "mere methods")
* Once you do that, you can also generalize and allow different types to do different things. That pulls in inheritance and static dispatch.
* Putting the primary data type first is a nice way to organize vocabularies in people's minds. (I think this is what y'all mean by "mere methods")
yes
but importantly, the generalization of mere methods has consequences
because the set of methods is usually fixed
for implementation reasons
there are alternative formulations but they tend to have severe performance penalties
this is why I think mere methods and OOP are separate concerns. Having a nice . notation is one thing, introducing the extra machinery is another (and OOP only starts manifesting when the extra machinery comes into play)
I guess re: methods and subtypes, we should also mention prototype-style ideas because that gives a related view on what "mere classes" would do
(and of course this carried over into the original version of Javascript OO)
Yeah. There's definitely class- and prototype-based OO out there. Also extensible vs closed to extension.
Trying to demarcate precisely the boundaries of "OOP" might be a time sink. I'd much rather chop OOP up into its component pieces, and then we can discuss each separately. And then the question of, "is there anything we missed?" becomes fruitful.
In a game you can have a performance-focused substrate that is closed to extension, and a command language like Lua that is open to extension. Both use OO ideas.
I think it's important to address how people think about OO though
because just chopping up the pieces leads to talking about various pieces in isolation and that's not what gets people confused (or irate)
Let's talk about some of those pieces though, because getting a more nuanced idea of them will allow us to come back to what does get people confused and irate
SirWhinesalot
this is why I think mere methods and OOP are separate concerns. Having a nice . notation is one thing, introducing the extra machinery is another (and OOP only starts manifesting when the extra machinery comes into play)
I think we should start discussing this "extra machinery". I'm not sure what angle to discuss first; there's the more nuts-and-bolts ideas such as inheritance and polymorphism, but there's also the very hairy subject of object-oriented design. @SirWhinesalot is our point person for that, but I'd like to make sure we feel we've covered our bases first.
Right, might be worth posting that meme making fun of GeeksForGeeks here
So what we're talking about here is the idea that "OOP models the real world".
that's indeed the most important meme to address IMO
Should we tackle that now? or after we split OOP up first?
and it links up to almost everything else
We can branch from there into everything else, like Demetri says
What does it mean for OOP to model the real world? Here's what Wikipedia has to say (technically from the Domain-Driven Design page, but the two are intimately related):
In terms of object-oriented programming it means that the structure and language of software code (class names, class methods, class variables) should match the business domain. For example, if software processes loan applications, it might have classes like LoanApplication and Customer, and methods such as AcceptOffer and Withdraw.
I would like to focus on that Customer class. Think of a website like Amazon, what does a Customer do? They can login/logout, purchase items, review items, contact customer support, etc. They also have a lot of data about them like their user-info, preferences, purchases, support tickets, etc.
What data and methods should the Customer class have?
Really think about this and the consequences it has on the structure of the codebase. All of the above? Classes should have a "single-responsibility" (supposedly). What should the single responsibility of the Customer class be?
You see how we immediately start getting into trouble?
Then there's the issue of structuring your code into class hierarchies based on real-world concepts, e.g. Customer : Person, Seller : Person, Employee : Person.
The problem of course is that the same person can not only be all three, they can be different subsets at different times! Even real-world taxonomies need to be updated from time to time, and this modeling style is imposing that kind of rigid structure, with far less certainty, on your data and behavior.
This style is considered bad practice these days, but even without the taxonomy nonsense the core issue remains, which is trying to model real-world concepts as bundles of encapsulated data + behavior. It just doesn't work well in practice; that idea is the fundamental mistake.
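To make the taxonomy problem concrete, here is a rough C sketch (all names are illustrative, not from any real codebase) of the alternative: keep the roles as plain data keyed by a person id instead of baking them into a class hierarchy, so the same person can gain and lose roles freely:

#include <stdbool.h>
#include <stdint.h>

// One record per person; identity only.
typedef struct Person {
    uint64_t id;
    const char *name;
} Person;

// Roles are separate data keyed by person id, not subclasses of Person.
typedef struct CustomerRole { uint64_t person_id; int loyalty_points; } CustomerRole;
typedef struct SellerRole   { uint64_t person_id; int listing_count;  } SellerRole;
typedef struct EmployeeRole { uint64_t person_id; int department;     } EmployeeRole;

// A person "is" a customer only while a matching role record exists.
static bool is_customer(const CustomerRole *roles, int count, uint64_t person_id) {
    for (int i = 0; i < count; i++)
        if (roles[i].person_id == person_id) return true;
    return false;
}

Gaining or losing a role is then just an insert or delete in the corresponding array/table, rather than a change to a type hierarchy.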
My personal view is that thinking "OOP models the real world" is wrong because it is a category error[1]. You cannot treat programming objects as "real objects" since "real objects" and "programming objects" have little connection to each other ontologically. This is why I've previously called OOP a form of misinterpreted and misapplied Aristotelian metaphysics applied to a domain it was never meant to model[2].
By this statement, I mean that artificially conforming any/all relations between data and types to an artificial hierarchy of agency is a form of naïve-Aristotelian metaphysics. Since there is no actual agency in the programming objects, it is a partial fallacy (and, as previously stated, a category error). When trying to force a program into a particular structure it does not naturally have, remember that the absence of structure in a program is more useful than a bad structure.
[1] https://en.wikipedia.org/wiki/Category_mistake
[2] https://www.gingerbill.org/article/2020/05/31/progamming-pragmatist-proverbs/
There is also the other aspect that in an actual Aristotelian metaphysics, an object can belong to an infinite number of "classes" at any moment, and these can even change depending on the situation.
When we discuss an object, we are speaking of it in terms of what is relevant to us in that situation.
Let's take a random object off my table right now: a lens cap.
A lens cap could be in the class of objects that cover a lens.
But it could also be used as a coaster for my drink.
It could also be used as an eye patch.
Or one of an infinite number of things with infinite uses.
The question is realizing what is relevant for the situation.
Now you've got me thinking about what a computer that can fluidly move between ontologies would look like..
This is why, in my personal opinion, forcing something to be an "object"/"class"/etc. is such a huge issue. Things belong to more than just one class, and trying to force them into one category can be very bad, especially when there is no "object" to speak of in the first place, only algorithms and data structures.
The relational/logical model tends to be much better at this, since you can add/retract facts about an object
Some people think they are a form of OO, that's incorrect
I think whether or not it is wrong, "OO solves problems by modeling the real world" is both believed and practiced, and indeed it is widely used to advertise and teach OO
Indeed, that's how I was taught in university
this is what I mean about talking about how people think about it
The best defense I've heard for OO design is to limit your ambitions to modeling solutions, not problems. That avoids the issues @gingerBill raises, but it's also a much more modest goal. OO design books seem to pay this idea lip service, but quickly go back to thinking symbols are the thing they represent. It's a very easy mental trap to fall into.
modeling solutions is antithetical to OO best practices (as taught), because the right solution often involves some sort of "Manager" class that handles the data, and these are "frowned upon" because they don't bundle the data and behavior
Kartik Agaram
The best defense I've heard for OO design is to limit your ambitions to modeling solutions, not problems. That avoids the issues @gingerBill raises, but it's also a much more modest goal. OO design books seem to pay this idea lip service, but quickly go back to thinking symbols are the thing they represent. It's a very easy mental trap to fall into.
The irony of that statement is that it is kind of the opposite of the point. Programming is a form of problem solving. If you have already solved the problem, then redesigning it to be OOP is really weird, and borderline masturbation.
limit your ambitions to modeling solutions, not problems.
I would phrase this as "[90s style C++/Java] OO offers the ability to form hierarchies of data structures and functions but that this is usually misinterpreted to mean that it will be useful to model hierarchies of concepts in the world outside the program"
gingerBill
The irony of that statement is that it is kind of the opposite of the point. Programming is a form of problem solving. If you have already solved the problem, then redesigning it to be OOP is really weird, and borderline masturbation.
The idea is that once you've solved a problem, OO Design is a good way to continue to maintain the solution without forgetting why you did what you did. Software systems can last a long time!
I've found no advantage to an OO design vs any other form of design with a clear structure, unless the features provided by OO (like subtype polymorphism) were necessary
In fact quite the opposite, since OO design can impose a lot of rigidity that is hard to get rid of
Kartik Agaram
The idea is that once you've solved a problem, OO Design is a good way to continue to maintain the solution without forgetting why you did what you did. Software systems can last a long time!
See the trick? If it needs to be continuously improved, it's not a solution, yet.
I would say I almost never use inheritance, but I almost always use methods
and if someone told me I could never use inheritance again I would not consider it a big loss
I'm actually not familiar with any non-OO design methodology
They kinda sucked the oxygen out of the room, didn't they?
Kartik Agaram
I'm actually not familiar with any non-OO design methodology
They kinda sucked the oxygen out of the room, didn't they?
there are many architectures you can use. Things like publish/subscribe, model-view-update, etc.
Certainly though the term "design patterns" has come to exclusively mean "object-oriented design patterns", and really the gang of four stuff
Kartik Agaram
I'm actually not familiar with any non-OO design methodology
They kinda sucked the oxygen out of the room, didn't they?
"JUST SOLVE THE PROBLEM YOU HAVE" is my preferred one.
bvisness
Certainly though the term "design patterns" has come to exclusively mean "object-oriented design patterns", and really the gang of four stuff
and that persists to this day, thanks probably to universities?
yes I think the fact that OO has the superficial appearance of a "theory of programming" is important here
universities had rapidly increasing needs to "produce programmers" but universities loathe doing anything that might look like vocational school
OOP looked like it had some ideas from type theory, had some kind of taxonomy scheme, had some claims to provable properties
and so it was compatible with what a university might be persuaded to adopt
and it also had the claim of "intuitively models the real world, so businesses like it"
My university thankfully presented all paradigms (procedural, oo, functional, logical)
a good question there is if OO is an actual paradigm
Alan Kay certainly thought so, but Smalltalk OO is a very different beast from C++ or Java or whatever
I think Kay has sufficiently lost that battle that there's not much point talking about it as a peer
when I want to talk about his stuff I say "Kay style OO" or "message passing OO"
I personally think Kay-style OO could be seen as a unique programming paradigm (when you include the full reflexivity and the IDE as the OS sort of programming style Smalltalk has), but regular OO is not, it's just procedural programming with a pointless structure on top
the folk understanding of OO, which IMO is like 90+% of people, is something like
- your program is a collection of classes
- one class per file
- each class represents some concept from the real world
- each class has a single responsibility (this is at odds with above but whatever :P)
- there are "good patterns" of assembling classes into programs to solve problems
I (luckily?) don't have any programming education, let alone an academic one, so I cannot comment on what is or is not commonly taught in universities beyond what I have heard. From what I can tell, they seem to vary extremely widely compared to, say, harder sciences such as Physics or Chemistry. If you go to one university for Physics, it'll be similar enough to another. But for Computer Science (Informatics), it will be very, very different
yes, a very important essay in this history
I love that post. When Demetri showed it to me for the first time years ago, I was shocked by how much it resonated with me
This seems like as good a time as any for Demetri to talk about Java and what it did to the world of software
which he volunteered to do as someone who lived through the Great Javafication
To give some context for that, though, my understanding is that Java really took over the software industry in the mid-90s for a variety of reasons I'm not entirely familiar with
And much of what he laid out in his "folk understanding of OO" above is, as far as I know, a direct line from Java
for example, one class per file
Java uses classes for everything. Namespacing, modeling the solution, modeling the world, file system organization...
Yes I believe other languages aren't as restrictive, but the pattern is visible in a lot of C# codebases for example
because they sort of act like modules as well, specially when it's a "static" class with "static" methods
(read: a bog standard procedural thing with extra keywords)
Java was the first "real" programming language I learned, and it took me a long time to broaden my understanding enough to unlearn a lot of the decisions it made. static is certainly an example of that; a great way to obscure a very obvious concept.
It was the first question I asked my teacher: "What does static mean?"
I had been exposed to Pascal before, but not in an educational setting
So public static void main() inside a class was very confusing
Btw his answer was "I'll explain that by the end"
"Ignore it for now, just know it needs to be there"
it was much worse for my partner; her teacher decided explaining the difference between static and non-static was a great discussion topic for class 2 of newbie bioinformatics students learning to program
static is a great scapegoat example for the problems with Java's design decisions, but I suspect there is a lot more to it than that
For example, I'm curious if from @demetrispanos's perspective, the design of Java's standard library had an impact here (or if that was just an outflow of the language design)
the standard library specifically, I'm not sure ... though Java definitely pioneered the idea that the standard library would have a canned solution for almost everything
I remember buying "Java in a Nutshell" in the late 90s and being genuinely shocked at its size
I think the large standard library overlapped with the idea of programs as "connecting objects to each other"
Well perhaps it would be best to just step back then and talk about the shift you saw in the industry when Java hit the scene
Java was the front runner but C++ in the 90s was also a major participant (indeed Design Patterns came from the C++ community)
I think the biggest change was just how much marketing effort was behind "90s OOP", led primarily by Java (in terms of marketing)
Java promised everybody a pony
universities got to teach a thing that looked like a theory of programming
businesses got programmers that could program in "intuitive real world concepts"
new programmers were told they didn't have to worry about details their predecessors used to sweat over, and could just draw object/class hierarchy diagrams
(there were also other Java promises re: security and portability that aren't relevant here)
I wonder how much of Java being OO was because of the marketing efforts predating it..
well, to ask a question I think I know the answer to...how did Java live up to the hype :)
and indeed Java was conceptualized as a kind of "C++, the good parts"
and it's telling what they thought were "the good parts"
you don't just have the option of classes, everything is a class (even your program itself)
I would say that by the end of the 90s the thing we now know as "Java/C++ OOP" was cemented as the responsible way to write software
or rather, cemented the reputation as such
One reason it took so long to notice that OO didn't always work well was that it worked best in the lower levels where people designed and implemented languages like Java. After they put in all the effort to build the language over a decade, it took a decade of using OO methodologies in anger to crystallize its limitations.
well also it genuinely did work reasonably well with widget hierarchies in GUIs
but that was IMO an accident of overlap between problem and solution domain
the widgets are code objects, and having hierarchies of code objects is something that OOP delivers as it says on the tin
and indeed "textbox with a slider" is a reasonable conceptual descendant of "plain textbox"
so I think many people saw that there was a good solution for an ascendant problem (the 90s saw the rise of GUIs to dominance)
and they figured this would just be the first major success of many
but widget hierarchies are much more usefully modeled by OOP than, say, customer/person hierarchies
That might be an interesting example to break down further
(Except when widgets used inner classes. That was a whole other can of worms..)
demetrispanos
but widget hierarchies are much more usefully modeled by OOP than, say, customer/person hierarchies
Important to note here that UI frameworks moved away from "Split button inherits from Button" to "Split button is a button with a nested dropdown and a vertical line"; you see that shift with the introduction of WPF at least
so the move away from inheritance to composition was definitely ongoing in the 2000s
yeah this is what I meant by "reasonably well", i.e. it was better than no organization at all but didn't end up being the one abstraction to rule all of GUI
I do think it's interesting though that it was still a more effective model than what you tend to see elsewhere. @Kartik Agaram and @SirWhinesalot talked in the planning doc about objects needing to belong to the "solution domain", not the "problem domain", and this feels like a similar situation to me
One thing I still grapple with is that it's easier to use computers to talk about computers than it is to do something useful. That leads to all our yak shaving and abstraction-tower-building tendencies.
Yeah, OO is a rather natural way to model GUIs (not necessarily the only way, nor the best), but it works. A button has some internal state, it responds to some events, those events are shared between all kinds of GUI entities. A "Widget" interface that gets implemented by many different Widget classes is a rather obvious way to model this
even in languages without native OO support
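A minimal sketch of that idea in plain C, with a hand-rolled vtable standing in for the "Widget" interface (the names here are illustrative):

#include <stdio.h>

typedef struct Widget Widget;

// The "interface": a table of function pointers shared by every widget of one kind.
typedef struct WidgetVTable {
    void (*draw)(Widget *w);
    void (*on_click)(Widget *w);
} WidgetVTable;

struct Widget {
    const WidgetVTable *vtable;   // which kind of widget this is
    int x, y, width, height;      // state common to all widgets
};

typedef struct Button {
    Widget base;                  // must be first, so a Button * is also a Widget *
    const char *label;            // button-specific state
} Button;

static void button_draw(Widget *w)     { printf("[ %s ]\n", ((Button *)w)->label); }
static void button_on_click(Widget *w) { printf("%s clicked\n", ((Button *)w)->label); }

static const WidgetVTable button_vtable = { button_draw, button_on_click };

// Generic code dispatches through the table without knowing the concrete type.
static void draw_all(Widget **widgets, int count) {
    for (int i = 0; i < count; i++)
        widgets[i]->vtable->draw(widgets[i]);
}

This is roughly the machinery a C++ compiler emits for virtual calls; the useful part is the dispatch through the interface, not any class keyword.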
bvisness
I do think it's interesting though that it was still a more effective model than what you tend to see elsewhere. @Kartik Agaram and @SirWhinesalot talked in the planning doc about objects needing to belong to the "solution domain", not the "problem domain", and this feels like a similar situation to me
That HN comment (https://news.ycombinator.com/item?id=785601#785833) actually hit me like a pile of bricks. Until then I'd only heard that you used inheritance to model is-a relationships, then wandered off because that seemed to suck.
yes how many times has a junior programmer agonized over "is-a" vs "has-a"
demetrispanos
yes how many times has a junior programmer agonized over "is-a" vs "has-a"
I did this for years
and it was a meaningless question
yeah and the memeplex around 90s OOP invites that activity
it very strongly implies that this is what you should do
interface vs implementation inheritance. And then I find that in practice, all inheritance is good for is sharing code.
bvisness
and it was a meaningless question
meaningless but unfortunately very critical if you're in an OO language
because the decision is one you're mostly stuck with
SirWhinesalot
meaningless but unfortunately very critical if you're in an OO language
this I would say is a strong Java-ism
In practice, why is a relationship between two concepts being modeled as a language feature?
There's absolutely no reason for this
I can model that A is a B in 1000 ways
well, the dirty secret is that it's not
there is not even an attempt at it
all that is attempted is sharing implementation details with ancestors
the language never engages with whether the alleged is-a relationship is valid
true, you actually have to be careful to ensure an A is actually a B
in fact traps like that are typical exam questions
the is-a aspect is entirely in people's minds
the thing that exists in the material world is sharing code
which is much better done with before/after advice or hooks that receive anonymous functions
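A rough C sketch of that style of code sharing (since C has no anonymous functions, the hooks here are plain function pointers; all names are made up for illustration):

#include <stdio.h>

typedef void (*Hook)(void *user_data);

// The shared behavior lives in one ordinary function; the variation points are hooks.
static void save_document(const char *path, Hook before, Hook after, void *user_data) {
    if (before) before(user_data);
    printf("writing %s\n", path);   // the common work being shared
    if (after) after(user_data);
}

// One caller wraps the shared behavior with an audit log entry.
static void audit_log(void *user_data) {
    printf("audit: %s\n", (const char *)user_data);
}

int main(void) {
    save_document("report.txt", audit_log, NULL, "saved by alice");
    return 0;
}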
When teaching my FIRST robotics students, I've struggled in the past to find better examples than the classic "Cat is-a Animal" thing to illustrate how inheritance works in Java, and I suppose that's largely because inheritance isn't actually very good at modeling anything practical
but the concept must be taught if we are to use the language
(we no longer use the language)
demetrispanos
yes how many times has a junior programmer agonized over "is-a" vs "has-a"
One other aspect of OOP I think might be worth exploring is the idea of "agency" of objects, which "has-a" gets into
I'm not sure if we've really touched on that yet
perhaps in passing but we haven't deep dived on it
This also relates in my mind to the actor model and perhaps the talk @Kartik Agaram mentioned by Richard Feldman about how OOP design makes each object a "small computer" on its own
I might say "small machine" but yes, and indeed this is the origin in Simula
This is to me different from the taxonomy issue
the execution of the program is a simulation of many independent machines doing things to each other
this is "OOP as a model of computation"
I think it became popular because of the idea that you'd use "code objects" the same way you use, say, screws from Home Depot
ah yes, software components. The ubiquitous siren of lego blocks.
starting a new blog engine? go get some XYZ database screws and some PQR rendering bolts and you're 90% done
one important distinction of the Smalltalk view (that affects how "good" it is) is the fact that objects are "live". You're not describing a simulation in terms of a software description of communicating objects (as in Java or C++), but actually witnessing the objects interact and changing their behavior as appropriate
this always felt like Alan Kay marketing speak to me
databases are similarly "alive" in that the tables change with each insert
live editing is also not dependent on any sort of "object"
Kay is a very creative guy, but he is also easily tempted by marketing language
(this is fine, good ideas need marketing)
anyway, it is indeed important to address OOP and the subject of software components/reuse
because it definitely was perceived as the way to achieve the latter
you no longer would do the "unimportant" "tedious" work of writing a string formatting function, you'd just use a java.util.StringFormatter
(I don't know if that exists, it probably doesn't)
so close
and this overlaps with Java's enormous standard library, because it was meant to be analogous to walking into Home Depot and getting the screws you need
haha closer than I thought
importantly, I never saw any kind of argument for why an "object" would be more reusable than a function
but it was widely believed to be the case
and that was a big part of the Java marketing agenda
but again, this has nothing to do with objects, a module will do just fine for this purpose
the only case I can see for it is when the object is "live", meaning you acquire the object already in the state you want, but that's a big stretch
why not just... configure the thing?
especially not in a Java-like language where classes may as well be modules with extra steps
yes and actually you can see some of this in how Python played out, because it was very oriented toward modules
but then got a large influx of what I'll uncharitably call "java refugees" in the mid 00s
and suddenly you started seeing a lot of python that looked like java
Reuse does feel like an idea important enough to be worth trying for a couple of decades. I'm glad we're past it now. At least for some people today, a library you can reuse registers as more liability than asset. Now you gotta wire it into your supply chain and watch out for vulnerabilities.
Kartik Agaram
Reuse does feel like an idea important enough to be worth trying for a couple of decades. I'm glad we're past it now. At least for some people today, a library you can reuse registers as more liability than asset. Now you gotta wire it into your supply chain and watch out for vulnerabilities.
It does feel like that's the trend, thankfully, but I suspect the dream of perfect reuse will never die out
I think the reuse concern is largely mercenary
I mean it's mainly something businesses care about to reduce costs
this is fine, keeping costs low is a reasonable concern
The key for me is to minimize zones of ownership. Reuse makes most sense when a vocabulary of components was coherently designed to compose well.
but I don't think it's ultimately a deep concern about quality of code (even if it overlaps with quality concerns like single-point-of-truth)
I want to distinguish between "reuse" and just "use by someone else"
because I think they're conflated, but really only the second thing matters in most cases
if I write the postgres database engine, what's important is that many people can use it (and not write their own)
what's not important is that it can be used to make a substantially different database engine
but that's the "just like screws" idea
the same screws can make a table or a bookshelf etc.
Do you think object-oriented design is worse at providing reuse than other approaches, or just completely unrelated?
one area where OOP matters for reuse is the idea of an interface that can be implemented in many different ways
it sets a boundary between parts of the code
I think OOP tilted harder at that particular problem. Making it easier to build things that you can reuse with anything else. It failed, but that's ok.
I would hesitate to attribute interfaces to any particular discipline
bvisness
Do you think object-oriented design is worse at providing reuse than other approaches, or just completely unrelated?
I think it is in practice worse because it forces you to buy into the author's specific set of branded lego pieces that don't quite fit anyone else's
in my experience the things that are easiest to glue together have extremely simple and unopinionated representations
but if you have to instantiate an object from some complex hierarchy merely to get started, you have a lot of friction
It seems to me that easy reuse is in direct conflict with ideas of taxonomy and ownership and agency
the unix way is "almost everything is lines of text, and you have to parse them"
it's annoying, but easy to glue together
bvisness
Do you think object-oriented design is worse at providing reuse than other approaches, or just completely unrelated?
So this depends. Most traditional inheritance-based OOP is worse than plain procedural, but something like Go's implicit interfaces (which some don't even call OOP) is a lot more reusable.
gingerBill
So this depends. Most traditional inheritance-based OOP is worse than plain procedural, but something like Go's implicit interfaces (which some don't even call OOP) is a lot more reusable.
The big issue is that Go's interfaces require GC and have many performance issues.
as @skejeton was getting at in #fishbowl-audience, a table doesn't own a screw, a table leg doesn't own a screw, they're just all there
the C way is "almost everything is arrays of a few machine-type primitives"
and that too has proven to be a very productive way to glue things together, as it is the basis of almost every practical FFI design
the web way is "almost everything is JSON"
and again, that is easy to glue together
Java attempted "almost everything is XML" but it rightfully went up in flames
because XML has many of the same problems vs JSON as Java OOP vs procedural
JSON makes no pretense of modeling the real world, it's just a record
Rich Hickey goes into this: why systems-level concerns are different from application-level concerns, and why the data exchange between "processes" or "entities" in a system should be very "loose"
it's why unix has just strings, or the web has json everywhere
this is also why stuff like dependency injection frameworks exist, their job is to set up the "wiring" between various components, which gets more and more complicated the more detailed the interface specification they have to communicate is
Complexity feels like something worth talking about head on. A lot of our discussion has had it implicitly in the background. The promise of many languages/tools/paradigms is "treat complexity as an externality. Infinite scale!" This never bears out, and lately I consider it a bad direction. Why is it so important to be able to scale up in complexity?
Oh wait, where am I, this is Handmade Network. I'm preaching to the choir
Kartik Agaram
Complexity feels like something worth talking about head on. A lot of our discussion has had it implicitly in the background. The promise of many languages/tools/paradigms is "treat complexity as an externality. Infinite scale!" This never bears out, and lately I consider it a bad direction. Why is it so important to be able to scale up in complexity?
Oh wait, where am I, this is Handmade Network. I'm preaching to the choir
Well on that note, another thing that I had on my list was the idea that OOP allows you to scale up development and solve more complex problems. One of the links you posted during planning directly stated this exact idea. But...is there any merit to that idea at all?
Was that perhaps just an accident of all the other things Java provided out of the box?
I think a lot of these ideas do help. Perhaps it's a matter of setting expectations. "This is awesome, rock out with it!" vs "You're trying to do something really hard, this thing will help mitigate the pain."
I now see that my prep notes actually say "OOP allows us to write larger programs" and that is definitely true
breaking down systems into components definitely helps with scaling things up, this is done all the time in hardware land, but components are not objects
I think the steelman version of this argument goes like this
1. OOP encourages you to assemble your program from many small objects, one per file
2. since each object has a single responsibility, individual changes that happen in the future usually belong in only one file
3. so even if you have 10k class files, only one matters at any given time
4. so you can scale your classes across your people
that's among the weaker steelmans I've seen
and this is mostly true for changes like "instead of saying Hello Newcomer, it should say Howdy Stranger when the player logs in"
you look up the PlayerGreeter.java
I do often find myself needing to modify multiple classes/files in a single tiny commit.
For a while I used to have my build script show me the average number of files modified in the last 10 commits. That did give an interesting sort of peripheral awareness.
one thing that's important to note is that OOP makes changing the architecture rather hard, i.e. since objects store their references to other objects, you have to jump around everywhere to change things
if you try to avoid this by using some pattern where an "upper level" object sets everything up
you end up with way too much argument passing to constructors
which, in turn, leads to awful dependency injection frameworks
in most component-based design systems for hardware, the connections between the components are a directly editable thing
But then doesn't everything? I define architecture as "that which is hard to change."
That feels like it's mostly caused by the inheritance / taxonomy / hierarchy kinds of design problems we were talking about before
even with composition it's an issue, because of how communication between objects occurs
i.e. For A to send messages to B, it needs to remember B
somehow, one way or another
whereas in actual component-based systems, that "memory" is stored externally
importantly, A does not own B here
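A tiny C sketch of that difference (illustrative only): instead of A holding a pointer to B, the connections live in their own table, which can be inspected and rewired without touching either component:

// Components are identified by plain ids; they hold no references to each other.
typedef struct Component  { int id; const char *name; } Component;

// The wiring is its own data structure, editable independently of the components.
typedef struct Connection { int from_id; int to_id; } Connection;

typedef struct System {
    Component  components[64];   int component_count;
    Connection connections[256]; int connection_count;
} System;

// Rewiring is an edit to this table, not a change to any component's fields.
static void connect(System *sys, int from_id, int to_id) {
    sys->connections[sys->connection_count++] = (Connection){ from_id, to_id };
}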
sure, I guess I could see how it's a problem with "agency" ideas too
perhaps another thing worth noting here is that OOP overlapped with many general-good-practice "discoveries" and I think inherited some of their reputation, e.g. avoiding unnecessary globals
(which OOP then undoes by inventing singletons :P)
SirWhinesalot
breaking down systems into components definitely helps with scaling things up, this is done all the time in hardware land, but components are not objects
Going back to this, I would agree that breaking down systems into more-independent modules is certainly very important to solving larger problems; avoiding unnecessary globals is another example of a useful design tactic for tackling complexity
Was it really all that bad before OOP languages became popular though?
All the fun there is in that word "unnecessary"
That might be too big a question
well, OOP also largely coincided with the establishment of "programmer" as a known profession
Our programs did grow by several orders of magnitude in the period of the OO paradigm. I feel like it should take some of the credit/blame for that.
indeed in the 80s it was common for people to conflate the "IT guy" with a programmer, and it was even riffed a bit in movies
so I think it's hard to separate the professionalization of the field from the techniques that were popular at the time
far fewer people were programming in the 80s
I suppose then that programming was just running into the very real problems of working with larger teams
"software engineering", to use Russ Cox's definition
bvisness
I suppose then that programming was just running into the very real problems of working with larger teams
Oh very interesting, hadn't seen this before: https://research.swtch.com/vgo-eng
yep I was just gonna share the full quote:
Software engineering is what happens to programming when you add time and other programmers.
Is it fair to say that OOP languages offered some useful tools for "software engineering", but that those tools were misused?
I'd say Java/C++ offered the versions of various things that became popular (e.g. modularity was a general useful idea, and the way it came to the market was classes)
bvisness
Is it fair to say that OOP languages offered some useful tools for "software engineering", but that those tools were misused?
Yeah it feels like the nature of the world. You get some promises, you build something. You get some benefits, you get some problems you now have to live with. There's no going back. You can't enter a river twice.
Even the bad parts, we had to explore them to understand why they're bad.
it's worth trying to eliminate the "sturgeon contribution" (Sturgeon's law: 90% of everything is bad)
a lot of bad stuff was done with OOP, but a lot of bad stuff would be done regardless
so it's useful to think about what badness was imposed on top of the sturgeon fraction
personally I think the "model the real world" thing is above-sturgeon badness
it infected everything and everyone, it wasn't just the usual "most people do bad work"
you could argue that it's a sturgeon-multiplier
it makes the 90% even worse
I tend to think all the above-sturgeon badness came from social context. Mostly incentives. Hard to blame a tech paradigm for it.
but the paradigm clearly invites some forms of badness
and deserves blame for something it invites
so for example, the problems with modeling
I can't blame OOP for people applying it lazily or in a way that it doesn't invite, but I can absolutely blame it for people using it the way it advertises itself
you yourself did talk though about the level of marketing behind Java - so both factors are certainly in play
well ok but the thing that was marketed is not separable right?
they didn't randomly choose something to promote, they chose that thing to promote
going from "I have this problem, how do I solve it with a combination of data structures and algorithms" to "how do I model these real-world concepts as agents with data + behavior" was one of the greatest mistakes we made as an industry
As an example, a nameless org I worked at was big on Design Patterns. But that was mostly caused by promotion incentives that reward complex-seeming work. I hate Design Patterns, but have a hard time blaming it for the ills created in its name.
ok but Design Patterns didn't say "you should base promotions on complex patterns"
whereas OOP does say "you should have your program model concepts in the real world"
they (Design Patterns) were meant as a tool of communication, something to help explain what a certain structure in the code (that occurs often) is doing, but they turned into "best practices" somehow
True. I think I just give it some leeway as an honest mistake..
Design Patterns were also an honest mistake. They thought the problem they were solving was how to work in complex domains. The problem they turned out to solve was how to smuggle complexity and over-engineering into anything.
I think they started from a very true and valuable observation, which is that code-as-characters-in-files reuse is not important compared to design reuse
that one insight was enough to persuade me to keep trying to find value in the book for years
this is Design Patterns, you mean?
if you already have a design of a solution you can reuse from the past, you're 80% done
because most of programming is discovering a good-enough design
Design Patterns are certainly useful. I.e. "I need to model sum types in this language that doesn't have them, how do I do it?" -> "Visitor Pattern, here's how you set it up"
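For contrast, here is roughly what that looks like when the sum type can be written directly (a C sketch using a tagged union; the shape names are just an illustration), which is exactly the case where the visitor machinery evaporates:

typedef enum ShapeKind { SHAPE_CIRCLE, SHAPE_RECT } ShapeKind;

// A tagged union: the "sum type" the Visitor pattern emulates in class-based languages.
typedef struct Shape {
    ShapeKind kind;
    union {                       // C11 anonymous union
        struct { double radius; } circle;
        struct { double w, h; }   rect;
    };
} Shape;

// Each "visit" is just a switch on the tag.
static double shape_area(const Shape *s) {
    switch (s->kind) {
        case SHAPE_CIRCLE: return 3.14159265358979 * s->circle.radius * s->circle.radius;
        case SHAPE_RECT:   return s->rect.w * s->rect.h;
    }
    return 0.0;
}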
right well that's the Norvig argument, that design patterns are actually just papering over language deficiencies
they are
they're actually a great source of language design inspiration
"how does your language avoid the need for this pattern"
yes and this would bring it back to the tried-and-true tradition of standardizing extant patterns of use
just like the for-loop, or the mere method
if you write an assembly program, you will at some point write what amounts to a for-loop
so the language just makes that standard
to the extent that can be done for design patterns I consider it useful
I think the problem was that "applying" design patterns somehow became a point of pride? which is silly? they're a shopping list of solutions to particular problems
i.e. the more patterns in your code the better
yes, well ... this would be far afield but this is tied to bigtech interviewing/hiring practices in the late 90s
in the same way universities need a thing to teach as a theory of programming, tech companies needed a thing they could quiz
as much as I would love to crack the coding interview right now, we are past the three-hour mark and I think it would be a good idea to step back and summarize what we've discussed today
although that would be a great subject for continued discussion afterward
yes I agree it's too far a tangent
perhaps a future fishbowl, since it's not really OOP specific
So here's my summary of what we've covered:
- "OOP is when you associate behavior with data": This is present in OOP, but not really the heart of the issue. That is, just because you have methods doesn't necessarily mean you have anything more than simple procedural programming with some syntax sugar.
- "OOP models the real world": This is categorically not true or effective, no matter what language you use. The real world resists taxonomy, and doesn't have anything to do with the problems you are solving.
- "OOP is Java": Well, we didn't say it as such, but we did discuss many of the specific problems of Java, and the promises it failed to live up to, despite its takeover of industry.
- "OOP as a model of computation": This idea was appealing largely because of promises of independent components and easy reuse - but arguably OOP is worse at providing this than other programming paradigms (especially the Java flavor of OOP).
- "OOP tackles complexity": There are some good concepts, such as modularity and avoiding globals, that became popular along with OOP languages, but OOP itself makes things more complex.
Is this a fair representation or am I leaving things out?
I'd been writing a summary as well over the conversation, and it is much more instrumental, through the lens of "how to write better programs"
- if you have lots of one kind of thing, give them a common template
- when building a program, take a moment to plan such schemas first
- as a schema proves valuable, rework it to be more timeless. This includes making it easy to change the memory layout, but much more.
Some rules of thumb like these capture all that is valuable about OO, to my mind. And what programming languages can contribute here are late-binding capabilities that preserve optionality for the programmer.
What they don't capture is the hardest problem of all: how you can preserve an executable, checkable model of the world that acts as a history of why your program is as it is.
Tests are one way to do that. Formal models are another, though formal models can also be used for modeling solutions rather than contexts. Prose documents are a third. They have limitations, but they're better than nothing to a future code archeologist.
A lot of the ills we see with OO dogma lie in the space between my rules of thumb. You have to wait until you understand a domain before you try to generalize. OO languages are particularly bad here, because you're forced to create classes literally on line 1.
I think design as a general activity makes no sense. Good design comes from understanding a domain. It is by necessity domain-specific.
And any design activity really has to have a certain level of humility. Pick a bounded problem. Interact with few peers (that themselves have humility in dependencies and goals). Keep things coherent rather than trying to optimize the whole ocean into paperclips.
bvisness
Is this a fair representation or am I leaving things out?
just that if we split interfaces/vtables from OO, and maybe the nice subject-verb-object syntax, there's nothing of use left
I'm willing to leave interfaces/vtables as being "OO", as the one useful thing it has, because they are damn useful when you need them
but other people disagree that they're even "OO", since stuff like modules in SML have many similarities without the runtime dispatch for example, and there are many ways to do runtime dispatch that are not interface based
(edited)
So in that case...I'd like to return to the original age-old question: "is OOP bad or what?"
It sounds like the prevailing opinion here is that:
- Object-oriented programming is fundamentally an approach to modeling and design, not merely some nice syntax in your language
- Object-oriented design is fundamentally flawed, since the idea of solving problems by modeling the world is fundamentally flawed
- The flaws of object-oriented design also prevent OOP from being a useful model of computation, or from meaningfully tackling complexity
that's my view on the matter yes
A lot of the ills we see with OO dogma lie in the space between my rules of thumb. You have to wait until you understand a domain before you try to generalize. OO languages are particularly bad here, because you're forced to create classes literally on line 1.
I think design as a general activity makes no sense. Good design comes from understanding a domain. It is by necessity domain-specific.
And I appreciate these thoughts by @Kartik Agaram a lot.
Not only does object-oriented design fail to model problems well, but the tools force you into bad designs immediately.
bvisness
Not only does object-oriented design fail to model problems well, but the tools force you into bad designs immediately.
yes in particular "in what class does this activity belong?" is a useless question that has wasted an enormous amount of time
btw, just to answer the question I posted when I did my design introduction
What is the single responsibility of the Customer class?
it is to uniquely identify a customer, it's a primary key
here's a Customer class in Odin
I think I can also reasonably conclude that, while OOP languages and practices may contain some good ideas, those are mostly unrelated to the core of object-oriented design.
I think "ownership" should maybe go in another fishbowl as well, since it has some similar issues with rigid tree-like structures and assigning "agency" to bits of data
ownership is quite central to OOP
because there's a distinction between an object that "contains" another object within itself, and an object that "knows" about another object
in Rust there's a syntactical distinction, but in Java there isn't
Ownership semantics is surprisingly related to OOP from an ontological perspective
and there we have massive tangent #2
But yeah I agree that it could be a valuable topic to fishbowl about someday.
Thanks @Kartik Agaram for the summary thoughts - are there any other summary thoughts our other participants would like to provide before we close?
so I'll shut up now
Thanks everyone for this introduction to the fishbowl format. I'm a huge convert, to the extent that I think we should stop having in-person conferences.
Thank you very much for participating! I am really glad we finally had the opportunity to discuss this topic thoroughly.
Thank you all for helping me with my secret moderator agenda of being able to say "before you discuss OOP any further, read this conversation"
I hope everyone found the conversation useful, that it helps you understand OOP better, and that it helps you understand how to write better programs.
See you in a couple months for our next fishbowl!
Thank you @bvisness for organizing!
Indeed, thanks a lot @bvisness , and also thank you to all the participants