The Unix Philosophy, or how big should a program be?

Something I've been thinking about lately is large, multi-use programs, and I was interested to see how people in the Handmade community felt about them, vis-a-vis the Unix Philosophy of having lots of very small single-purpose programs.

When I say large, multi-use programs, I'm thinking of things like Visual Studio, which includes a code editor, debugger, profiler, static analysis tool, search and refactoring tools, and much else besides. Another example would be Blender, which is a 3D modelling tool, a rigging tool, an animation tool, a video editor, a renderer, a texturing tool, a game engine, and lots and lots more.

My questions really are:
- As a user, do you prefer working with many small applications or a few larger ones?
- As a developer, do you see any benefits to one approach or the other?

Personally, as a user, I feel very conflicted. I dislike it when software bundles things I do want to use (e.g. the Visual Studio debugger) with things that I don't (e.g. the Visual Studio code editor). However, I'm also aware that there can be a context-switching overhead from constantly Alt-Tabbing between lots of different programs. As a developer, I think I'm still leaning towards the Unix Philosophy, in that smaller programs feel much more manageable to develop and polish into high-quality products than giant monolithic ones. I'm really curious to hear other people's opinions on this though.

As a follow-up question, assuming that there is at least some point beyond which a program becomes too large, how do you decide where to draw the line? If you've already developed a tool, and have an idea for something else that would be useful, how do you decide whether you should add it to the existing tool or create a new one?
I will only reply as a user, since I've never developed any huge application and would only be guessing.

The thing that matters to me is that the application is responsive and makes it easy to do what I want. I don't categorize applications by the single- or multi-purpose criterion, because there are good and bad examples of both types; it's just case by case.

That said, if the same tool comes as a small package or a big package, I will choose the smaller one if I don't use the other parts of the big one, because in my head a smaller application should be faster (though that's not always the case). Also, bigger tools tend to require an installer (or a connection to a server) and are harder to uninstall or to run from a USB drive. Blender is an exception to that.

One downside of a multi-purpose application is that you can end up with tools that are less finished or less useful than specialized alternatives (e.g. Blender's video editor, game engine and sculpting tools).

I don't feel like there is a problem with Alt-Tabbing between applications. The problem for me is when it takes time to get back to work after the Alt-Tab. For example, if I edit a model in Blender, then Alt-Tab to a game engine and have to wait or perform some extra steps to see my model update, that becomes a problem when I'm making lots of small edits.
In my opinion there is a time and place for both. When working on a command line, small single-purpose programs are great, but for sophisticated tasks having a Swiss Army knife can be great too. Graphical applications more often than not fall into the Swiss Army knife category. That isn't to say that graphical applications should always do a lot; sometimes you just want a simple GUI for something. On the command-line side, there are certainly useful Swiss Army knives such as ImageMagick or FFmpeg. A good example of a command that shouldn't be a Swiss Army knife is `ls`, which imho has way too much feature creep. If I wrote an ls replacement, it would simply printf the struct stat from fstat-ing each file in the directory. From there the user can easily pipe the output into their desired filters and alias ls to that pipeline.
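
For what it's worth, here's a minimal sketch of what I mean; this is just an illustration assuming POSIX APIs, the choice of fields to print (mode, size, name) is arbitrary, and error handling is kept to a bare minimum:

```c
#include <stdio.h>
#include <dirent.h>
#include <sys/stat.h>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : ".";
    DIR *dir = opendir(path);
    if (!dir) { perror("opendir"); return 1; }

    struct dirent *ent;
    while ((ent = readdir(dir)) != NULL) {
        struct stat st;
        /* fstatat() stats relative to the already-open directory,
           so we don't have to assemble full paths by hand */
        if (fstatat(dirfd(dir), ent->d_name, &st, 0) != 0)
            continue;
        /* one plain line per file: mode (octal), size, name;
           trivial to filter with grep/sort/awk downstream */
        printf("%06o %lld %s\n", (unsigned)st.st_mode,
               (long long)st.st_size, ent->d_name);
    }
    closedir(dir);
    return 0;
}
```

Because each file is one plain line, filtering stays in the shell: to sort by size you'd pipe through `sort -k2 -n` instead of adding yet another flag to ls.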
I don't think the size of a program is necessarily the problem; rather, how the components of that program and the other programs look and interact with one another is more important. If there is no consistency between the components/smaller programs, then that is personally more of a problem for me.

Take media on smartphones as an example. On iPhones, virtually all media can be accessed from iTunes: music, podcasts, films, audiobooks, etc., and because of this there is a huge consistency between the components of the application and the operating system in general.
On Android, however, you pretty much have to get a separate application for each media "component", and because each application is made neither by the same developer nor by the maker of the operating system, there is a lack of consistency between the applications and with the operating system.

* * *

Having said all that, one problem with a huge monolithic program is that the software has a higher chance of becoming slower, or even broken, between "components".

Trade-offs are everywhere.
I don't think single/multi-purpose is the right way to think about it. Visual Studio has as much of a single purpose (create software) as ls (show files). The difference is that one is software for performing a task, while the other performs an operation that can be combined with others to form a task.
With that definition it's in my opinion pretty clear that you'll need both, because either:
a) you need/want to use multiple pieces of operation-level software to perform a certain task, because there is no task-level software for your requirements;
b) your task-level software lets you perform your task better than a number of operation-level programs would, because it was made with that purpose in mind (better visualization of key information or something); or
c) your task-level software can be used together with operation-level software to automate/improve completing (related) tasks.

So for me at least it's not a matter of what I prefer, because by definition task-level software should be better for performing a certain task, as long as the requirements are addressed. If I prefer to use only operation-level software, then I have (arbitrary) requirements that are not fulfilled (understandable GUI design, download size/installation effort, etc.).


When you say "The Unix Philosophy" you oughta be fairly explicit what you mean by it.

Pretty much everything that has come out of the Unix lineage has turned out to be pretty universally badly designed. Or rather, hasn't had any sort of design to begin with (e.g. look at C the programming language, gdb, bash, the Unix terminal). It's just all super bad.

Why would I ever want to use ls, cp, rm, mkdir and all these weird cryptic commands, and memorize all the nonsensical command-line arguments, when I can use something like Total Commander (or Midnight Commander or whatever) for file management?

And when I need a batch conversion or something a bit more involved, I can write a Python script instead of fighting all the warts in the bizarro world of bash/shell scripts.

And the whole idea of text through stdin/stdout being the universal interface just doesn't work in practice, as every tool has a different output format, so you have to parse it with sed and grep and text-transform the outputs until it all fits together. It's insane.

The problem with Visual Studio isn't that it bundles a code editor + debugger. It's that it tries to bring EVERYTHING under one roof: all the web development, the SQL database stuff, all the languages, while being super pluggable and expandable.

A better example of a monolithic development environment is something like Borland's Turbo Pascal. It's for one language, and one language only. I believe that's the way to go with IDEs: center them around one language and make development for that one particular language really, really pleasant. Shooting for it all never works well and ends up bloated or over-engineered in one way or another.

The best experiences I've had with software are pretty much always monoliths with a clear focus in mind (unlike, say, Visual Studio, Eclipse, etc., which are all over the place).
Small text-based programs are nice because they're low-investment to learn, are often well-documented (through man pages), and are sometimes composable. I use grep, man, git, rsync, and the basic shell utility programs constantly. I spend all of my time in emacs though, which is pretty much the opposite of the trad Unix philosophy; I think there's something to be said for a unifying / monolithic interface, even if it's not ideal (I could never get used to window transience).
Lots of really interesting responses here. Thanks everyone for sharing your thoughts.

One idea that particularly resonated with me was what Lares Yamoir and others suggested about looking at it not so much in terms of single/multi-purpose or small/large, but rather in terms of how the different parts of a monolithic piece of software interact. Thinking about it, maybe the best way to judge whether something should be a separate tool or integrated into an existing one is whether you want to use those tools together. Looking at it that way, it could be argued that Blender's video editor does not necessarily belong in the same program as its 3D modelling tools, because I suspect few users find themselves flicking back and forth between the two. However, a programmer may well flick back and forth between the text editor and the debugger, so there is more justification for having them tightly integrated. Similarly, although I would never use Microsoft Word as an image editor, the fact that it has basic image manipulation like cropping and rotating built in is incredibly useful, saving me from having to crack out a more powerful image editor like GIMP every time I want to do something basic with an image in a Word document.
Here is a quote from The UNIX Programming Environment, written by Brian W. Kernighan and Rob Pike:

Even though the UNIX system introduces a number of innovative programs and techniques, no single program or idea makes it work well. Instead, what makes it effective is the approach to programming, a philosophy of using the computer. Although that philosophy can't be written down in a single sentence, at its heart is the idea that the power of a system comes more from the relationships among programs than from the programs themselves. Many UNIX programs do quite trivial things in isolation, but, combined with other programs, become general and useful tools.

The point isn't that every piece of software should do one thing and do it well; it's that we can make "general and useful tools" by combining other programs. Relating this back to Visual Studio: there are no smaller programs that make up the giant monster. If we could decouple parts of VS and use only what we needed for a particular task, I would say that is better.

Of course, there are always tradeoffs. It was probably faster and cheaper to build the entire beast as a single piece of software, adding bits here and there to eventually end up with an unwieldy tool that has way too many attachments... Or maybe Microsoft just doesn't know what they're doing?

Anyway, from my perspective, The UNIX Philosophy is more about understanding what pieces make up a larger tool, and ensuring that those pieces work well and in isolation. Easier said than done, but worth striving for.
CaptainKraft
Microsoft just doesn't know what they're doing?

I believe the current state of MS Visual Studio has something to do with Conway's Law and corporate politics.

At least JetBrains is smart enough to do the "right thing", which is to ship separate products: an IDE for C/C++, an IDE for C#, an IDE for Java, etc. Surely they share some code between them, but at least it's not one big monstrosity that tries to accommodate everything under the sun.

There are multiple ways Visual Studio could be sliced and diced, but slicing it into two parts = Visual Studio Code Editor for All The Languages + Visual Studio Debugger for All The Languages is the worst way to do it.

Even writing something like a debugger for All The Languages seems insane to me, due to the sheer amount of complexity necessary to cover all the cases.

Back in the day, MS Visual C++ and MS Visual Basic and whatever else were separate products, and arguably they sucked less; at the very least they were less bloated and weren't as slow.

potatotomato
Small text-based programs are nice because they're low-investment to learn, are often well-documented (through man pages), and sometimes composable. I use grep, man, git, rsync, and the basic shell utility progs constantly. I spend all of my time in emacs though, which is pretty much opposite the trad unix philosophy-- I think there's something to be said for a unifying / monolithic interface, even if it's not ideal (I could never get used to window transience).

Unix tools are anything but low-investment to learn.
Sure, the initial investment might seem low: open up the man page and figure out which combination of cryptic flags you have to use to do what you want.

But you are actually not LEARNING anything at the time. What you have to do instead is REMEMBER all this useless trivia and all the gotchas.
And every time you want to do something, you have to instantly recall it, or read the same manpage for the Nth time. Unix shells offer little, if any, assistance in this, or any other, process.

That's if you're lucky and remember which tool it is to begin with. Sometimes I remember that there was a tool X with flags Y which did something like what I want to do now.
Good luck finding tool X (remember, Unix tool names are "very descriptive") and then figuring out which flags Y you had to use. So easy to learn.

Open up the man page for "tar". Learn it.

Compare that to learning how to work with archive files in Total Commander. They work almost exactly like folders do, and that's it. You don't have to relearn anything ever again; the skill sticks. You might forget the hotkey, but it's always there in front of you, and you never really have to read documentation. And it works exactly like you'd expect it to.

The fact that you have to read documentation to do the simplest things is insane by itself.

Mind you, the 'man' tool doesn't even have any kind of navigation functionality in it. There are NO LINKS in man-pages. It's a bare-bones text formatter/reader.

It's like software from the 1960s/70s. And not even the good kind.


I like to follow the Unix Philosophy, but I don't agree that a big program is necessarily breaking the rules of the Unix Philosophy.

Take a painting program, for example. A painter really does need all those tools available in one place, and a set of separate programs doing the same job would drag the work down instead.

I think games (as well as game frameworks/engines), which are also big programs, don't necessarily break those rules either. There's a special case, though: game engines with built-in editors. I think those do break the rules, because the logic shouldn't be attached to the data (in this case, the scene). A game engine such as Unity could be broken down into framework + editor. That's what MonoGame does: it is a game framework, and you can use it with any editor of your choice.

Scientific programs, on the other hand, are better when they are small (command-line applications).

It's a matter of how much you can break a program down while still keeping its essence.
I say make a program that does whatever is useful, or whatever makes sense for the problem you're trying to solve; there is no one standard that will best fit all problems in this respect. :)