switching from OOP to procedural

I'm learning programming on my own, and my approach so far has been choosing a concept for a program, and learning language features as needed in order to execute the concept. I did that for C++ first, and now I'm doing it for C# because my ultimate goal is to get a job, and C# jobs seem common.

I believe that Casey's attitude that OOP ultimately only makes things worse is probably right, but nonetheless, over the course of my C# project, I haven't tried to fight against what seemed to me to be the way the language is intended to be used. Whether because Casey is right, or because I'm not using C# "correctly" after all, or some of both, I've ended up with towering class hierarchies that, despite my attempts to use them to model the conceptual hierarchies inherent in my problem domain in a sensible way, don't even entirely succeed at that. The amount of architecture I need to add to support each new piece of functionality keeps growing.

The thing that bothers me is that if I were to start over in C, I don't see how I would approach it differently in a way that wouldn't lead to a similar quagmire. If I were to step through my thought process for structuring my program the way I did, would people be willing to suggest how they would do it instead?

Edited by AlexKindel on Reason: Initial post
Hi AlexKindel,

I've struggled with this same problem for quite some time. I spent a long time programming in a traditional object-oriented fashion (though my experience was with C++), and even when switching to C-style C++ (similar to the style used in Handmade Hero), my thought processes remained similar. I wrote a blog post about this, specifically with respect to my game.

Honestly, I'm not sure if there's a shortcut to shifting your thinking. I've been trying to program in a more straightforward, procedural fashion for over a year and am still catching myself thinking in ways that aren't particularly productive. That is, however, what this community is for (helping each other converge on an 'ideal' software engineering methodology), which implies that the answer to your question is "Of course!".

Feel free to post about your architectural decisions and get this community's various thoughts on them (which, by the way, will vary from person to person); I'll be sure to chime in!

Ryan

Edited by Ryan Fleury on
Excellent! I asked first because most programming forums I've tried seem to explicitly discourage such broad questions. It's hard to find advice on anything that isn't highly focused.

The purpose of the program is to take a string representing a numerical expression, specified in terms of integers, +, -, *, /, ^, i, (, and ), and return a simplified form of that expression. The simplifications include all the obvious ones you learn in primary school, like 1+1 returning 2 and (6/8)^(1/2) returning 3^(1/2)/2, as well as some more technical ones.

My basic approach is to parse the input into a sequence of Number objects alternating with binary operations. The operations are evaluated one by one according to the order of operations, where evaluating an operation means combining the Number objects on either side of it into a single Number of the appropriate value. Once there are no more operations left, the single remaining Number is printed out as the answer.
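As a concrete illustration of that pass (heavily simplified, and not the poster's actual code), here is a sketch in C that uses plain doubles in place of exact Number objects; the names and the two-array representation are invented for this example, and '^' is omitted to keep it short:

```c
/* values[i] sits to the left of ops[i]; each step finds the
   highest-precedence remaining operator and combines its two
   neighboring values into a single value. */

static int precedence(char op) {
    switch (op) {
        case '*': case '/': return 2;
        case '+': case '-': return 1;
        default:            return 0;
    }
}

static double apply(double a, char op, double b) {
    switch (op) {
        case '+': return a + b;
        case '-': return a - b;
        case '*': return a * b;
        case '/': return a / b;  /* assumes b != 0 */
        default:  return 0.0;
    }
}

/* Evaluates values[0] ops[0] values[1] ... ops[count-2] values[count-1]. */
double reduce(double *values, char *ops, int count) {
    while (count > 1) {
        /* Leftmost operator of the highest remaining precedence,
           which also gives left-to-right associativity. */
        int best = 0;
        for (int i = 1; i < count - 1; ++i)
            if (precedence(ops[i]) > precedence(ops[best]))
                best = i;
        values[best] = apply(values[best], ops[best], values[best + 1]);
        /* Close the gap left by the consumed value and operator. */
        for (int i = best + 1; i < count - 1; ++i) {
            values[i] = values[i + 1];
            ops[i - 1] = ops[i];
        }
        --count;
    }
    return values[0];
}
```

For "1+2*3" this combines 2*3 first, then 1+6, mirroring the one-pass-by-precedence evaluation described above.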

This requires that Number be able to represent, at minimum, any form a fully-simplified numeric expression might take. I achieve this by building a class hierarchy on top of Number. Answers that are integers are possible, so I started like this:

abstract class Number{}
class Integer : Number
{
    int Value;
}


Fractions that aren't integers are also possible, so I added

class Fraction : Number
{
    Integer Numerator;
    Integer Denominator;
}


While it is possible to represent any integer as a fraction with denominator 1, I wanted to keep Integer because the more complicated the numerical structure a class represents, the less efficient it is to do operations on it. That trend continues. There are numbers raised to powers that aren't possible to represent as fractions, so I added

class Exponentiation : Number
{
    Number Base;
    Number Exponent;
}


Sometimes exponentiations can't be consolidated into a single term on addition, like 2^(1/2)+3^(1/3), so I added

class Sum : Number
{
    List<Number> Terms;
}


But it doesn't make sense for Sum.Terms to be able to contain any Sum objects, so I added a Term class to be the base class of all the types other than Sum:

abstract class Number{}
abstract class Term : Number{}
class Integer : Term
{
    int Value;
}
class Fraction : Term
{
    Integer Numerator;
    Integer Denominator;
}
class Exponentiation : Term
{
    Number Base;
    Number Exponent;
}
class Sum : Number
{
    List<Term> Terms;
}


The structure went through a couple more refinements before I'd accounted for every possibility, with some additional wrinkles added for convenience rather than strict theoretical necessity, all the while defining the arithmetic operations for each class to account for each other object type it might be combined with.

How does this seem so far?

Edited by AlexKindel on
I spent some time playing around with this problem and wrote a semi-solution in C to perhaps give you an idea of how I'd approach the problem (without doing it in an object-oriented fashion).

I decided it'd be most useful to store data in terms of what a mathematical expression can be in its most basic form; I called this an "Operation":

enum {
    OPERATION_null,
    OPERATION_subtract,
    OPERATION_add,
    OPERATION_divide,
    OPERATION_multiply,
    OPERATION_exponentiate,
    MAX_OPERATION
};

// an "Operation" can just be a single value, in 
// which case a and b are both 0, and a_val contains
// the proper value.

// an "Operation" can point to other operations,
// allowing for arbitrary operation complexity
// (like what we need for a math expression parser)

typedef struct Operation {
    struct Operation *a;
    struct Operation *b;
    int a_val;
    int b_val;
    int operation_type;
} Operation;


My reasoning is that there's no real reason why a fraction is any different an operation than an addition or a subtraction. It is still fundamentally a binary arithmetic operation, so it's easy to generalize on that.

My goal was to effectively generate a tree that would be useful for parsing an expression. Take, for instance, the expression "1 + 2 * 3".

It can be observed that the best way to go about forming this tree is to find the lowest-priority operator in a given expression and split the expression there into simpler components:

[image: a tree for "1 + 2 * 3" with "+" at the root and the leaves "1" and "2*3"]

By picking the lowest-priority operator, we break up the expression in a way that complies with operator precedence. The leaf on the above tree that holds "2*3" is actually just a representation of a sub-tree, which can then be calculated recursively in a similar fashion:

[image: a sub-tree with "*" at the root and the leaves "2" and "3"]

Combining the two trees we've constructed thus far gives us a properly defined tree that will be helpful in calculating a result:

[image: the combined tree: "+" at the root, with "1" and the "*" sub-tree as its children]

To calculate the result, we can just traverse the tree, starting at its root, performing the correct operation on a given Operation's two branches according to its operation_type. If a particular branch is not an atomic value but another sub-tree, we can operate on it recursively:

#include <math.h> /* for pow(); the include wasn't shown in the original snippet */

float solve_operation(Operation *root) {
    float result = 0.f;
    
    float a = root->a_val;
    float b = root->b_val;
    if(root->a) {
        a = solve_operation(root->a);
    }
    if(root->b) {
        b = solve_operation(root->b);
    }
    
    switch(root->operation_type) {
        case OPERATION_null: {
            result = a;
            break;
        }
        case OPERATION_subtract: {
            result = a - b;
            break;
        }
        case OPERATION_add: {
            result = a + b;
            break;
        }
        case OPERATION_divide: {
            result = b == 0.f ? 0.f : (a / b);
            break;
        }
        case OPERATION_multiply: {
            result = a * b;
            break;
        }
        case OPERATION_exponentiate: {
            result = pow(a, b);
            break;
        }
        default: break;
    }
    return result;
}


Note that this doesn't actually simplify the expression as you wanted; it evaluates it. To simplify it algebraically, you'd just need to traverse the tree slightly differently and perform different operations, but the fundamental idea would be the same.

Now, this probably isn't the most efficient way to go about it, but it gets the job done fine for this problem in particular. An iterative solution would probably lend itself better to efficiency, and perhaps the tree could be stored in a more cache-friendly way, but I was just trying to work through the problem conceptually.
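To make the "split at the lowest-priority operator" step concrete, here is a small illustrative helper; the name find_split_point is invented, parentheses are skipped as balanced groups, and unary minus and the right-associativity of ^ are deliberately ignored in this sketch:

```c
static int op_priority(char c) {
    switch (c) {
        case '+': case '-': return 1;
        case '*': case '/': return 2;
        case '^':           return 3;
        default:            return 0; /* not an operator */
    }
}

/* Returns the index of the rightmost lowest-priority operator at the
   top level of expr[0..len), or -1 if there is no top-level operator.
   Splitting at the rightmost lowest-priority operator keeps the
   left-to-right associativity of +, -, *, and /. */
int find_split_point(const char *expr, int len) {
    int best = -1, best_priority = 0;
    int depth = 0;
    for (int i = 0; i < len; ++i) {
        char c = expr[i];
        if (c == '(') { ++depth; continue; }
        if (c == ')') { --depth; continue; }
        int p = op_priority(c);
        if (depth == 0 && p > 0 && (best == -1 || p <= best_priority)) {
            best = i;
            best_priority = p;
        }
    }
    return best;
}
```

For "1+2*3" this picks the '+', so the recursion produces exactly the tree shown above; for "(1+2)*3" the parenthesized group is skipped and the '*' is chosen.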

I think what I would say regarding your solution is that a human's understanding of what a number is doesn't fundamentally matter, and therefore it doesn't matter how a number is represented in terms of a class hierarchy. What matters is what is most useful to generalize strictly in terms of solving the problem at hand (which, in this case, happens to be parsing a mathematical expression somehow).

Edited by Ryan Fleury on
In order to adapt this to give exact, algebraically-simplified answers rather than decimal approximations, the return value of solve_operation would have to be an Operation, right? So that, for example, given an Operation that represents the tree for 2^(1/2), solve_operation would return that Operation unchanged?

In that case, the handling of each operation_type value would itself become a switch statement that checks the nesting structure of the Operations being acted on, because the specifics of what the arithmetic operators have to do depend very much on that. And, so that it wouldn't be necessary to figure out anew what the nesting structure of a given Operation is each time it gets operated on, it would make sense to add a number_type field that tracks it.

At that point, I think what you'd have would amount to my approach, with the number_type values corresponding to my class names and the virtual functions replaced by switch statements. Which is not to say it wouldn't be worth trying, because I've constantly run up against limitations of C#'s class architecture that I might be able to get around if I'm essentially rewriting that architecture myself, the way I need it to work.

Edited by AlexKindel on
The purpose of the program is to take a string representing a numerical expression, specified in terms of integers, +, -, *, /, ^, i, ( and, ), and return a simplified form of that expression.

In my opinion, that is not a program; that is a function. A single function that takes a string in and sends a new string out. That is by far the simplest form. It can of course be a HUGE function, but the key to good procedural programming is knowing how to make good subfunctions for that superfunction to call.

For example, in the function "eat food", we have the subfunctions "chew" and "swallow".

Instead of returning an object from the operation at a specific node, you could just repeatedly traverse the tree and modify the subtrees in place, using a set of simplification rules. The result would be the resulting tree once no more rules can apply (i.e. after a traversal of the tree that yields no modification).

You don't necessarily have to hardcode switch statements accounting for each specific combination of operations. In fact, the operation in itself doesn't matter much unless you can evaluate it (i.e. its left and right subtrees have been simplified to integers and the result of the operation is an integer); what matters is its associative and distributive properties.

That said, the result of a "simplify" operation depends very much on what you mean by simplification! It may be more solid to think in terms of well-defined transformations such as factorizing, distributing, recognizing identities such as a^2+b^2-2*a*b = (a-b)^2, etc. Which method yields a simpler result depends on what you want and what the initial expression is.

So I would rather have a set of rules, e.g. functions that check whether an operation is possible on a subtree, modify it accordingly, and return true if a modification happened. You could then do a traversal of the tree and at each node try to apply the rules until you find one that returns true. Then repeat, until no more rules can be applied.
Maybe not the most efficient approach either, but it lets you think in terms of general rules and precisely tune what "simplifying" means to your program.
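A minimal sketch of that rule loop in C, reusing the Operation struct from the earlier post so it stands alone; the Rule typedef and the two example rules are invented for illustration, not a definitive rule set:

```c
typedef struct Operation {
    struct Operation *a;
    struct Operation *b;
    int a_val;
    int b_val;
    int operation_type;
} Operation;

enum {
    OPERATION_null,
    OPERATION_subtract,
    OPERATION_add,
    OPERATION_divide,
    OPERATION_multiply,
    OPERATION_exponentiate,
};

/* A rule tries to rewrite one node in place and reports whether it
   changed anything. */
typedef int (*Rule)(Operation *node);

/* Example rule: fold "a + b" when both sides are plain integers. */
int rule_fold_integer_add(Operation *node) {
    if (node->operation_type == OPERATION_add && !node->a && !node->b) {
        node->a_val = node->a_val + node->b_val;
        node->b_val = 0;
        node->operation_type = OPERATION_null;
        return 1;
    }
    return 0;
}

/* Example rule: "x * 1" simplifies to "x". */
int rule_multiply_by_one(Operation *node) {
    if (node->operation_type == OPERATION_multiply &&
        !node->b && node->b_val == 1) {
        node->b_val = 0;
        node->operation_type = OPERATION_null;
        return 1;
    }
    return 0;
}

/* One traversal: try every rule at every node; report any change. */
int apply_rules_once(Operation *node, Rule *rules, int rule_count) {
    if (!node) return 0;
    int changed = 0;
    changed |= apply_rules_once(node->a, rules, rule_count);
    changed |= apply_rules_once(node->b, rules, rule_count);
    for (int i = 0; i < rule_count; ++i)
        changed |= rules[i](node);
    return changed;
}

/* Repeat until a traversal yields no modification (a fixed point). */
void simplify(Operation *root, Rule *rules, int rule_count) {
    while (apply_rules_once(root, rules, rule_count)) {}
}
```

Each rule must strictly reduce the tree in some sense, or the loop might not terminate; here both example rules rewrite a node to OPERATION_null, which no rule matches again.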

Edited by Martin Fouilleul on
There is definitely at least one case where, if I were doing repeated traversal rather than combining pairs of numbers in one pass, I could be more efficient: When the input expression takes a root of a negative or imaginary number, I factor out a root of unity and express it in a+bi form whenever possible, which often produces quite a complicated expression. For some of the simplifications I do, I need a minimal polynomial for a portion of the tree, and the minimal polynomials of such complicated expressions are expensive to calculate. However, if the program knows that the expression is a root of unity when the need for its minimal polynomial arises, the process is trivial. Leaving roots of unity as complex exponentials would render them automatically recognizable, to be converted to radical expressions only after it's known that no more minimal polynomials will be needed. I think that's the only such clear-cut case I've come across so far, but there might be more.

For what it's worth, what I mean by "simplify" is more or less to obey the property that, given any two inputs that represent the same real number, the program will convert both inputs to the same form. That might be impossible, but I get as close as I can by doing things like rationalizing all denominators and reducing nested radicals to minimal nesting depth.

Edited by AlexKindel on
I don't think you need to "switch from OOP to procedural". It isn't the religious, polarized thing you think it is. IMO it's about doing what is right.

Abstractions, hierarchies, and polymorphism may all be used, if and only if you really see them adding value.

E.g. https://www.kernel.org/doc/Documentation/kobject.txt The kobject is used to provide all of the above attributes.

To give another example, when you attach a disk, that disk is claimed by a volume manager. The volume manager can create many objects for the things on the disk; e.g. an object can be created representing the volume abstraction. That volume abstraction can contain many "plexes" (http://manpages.ubuntu.com/manpages/precise/man4/vinum.4freebsd.html). A plex can contain subdisks, etc.

This volume->plex(s)->column(s)->subdisk(s)->disk "hierarchy" is sometimes maintained using containment.

When the filesystem requests IO on a volume, the IO happens on the volume, then on the plexes, then on the subdisks, and finally on the disks. A sort of IO tree (even though it isn't called a "tree" in the code) is created, and the tree is traversed from its leaf nodes (the disks) to the root (the volume) during IO completion.

So IMO even simple volume management code that predates C++ uses everything C++ says it "provides".

Would C++ have helped in this case? I don't think so. It would have unnecessarily complicated the development, and optimization would have been difficult with C++. E.g. a thing called a "fast path" was added later where, in the case of a simple layout (just a simple non-RAID volume that doesn't span disks), this IO tree formation is skipped and the IO is sent directly to the disk. The simplicity of C allowed this ad hoc optimization.

My point is: don't quibble over ideas like OOP vs. procedural. Don't be so polarized. Avoiding C++ doesn't mean never using OOP or code generation. For me it means not wasting time reading yet another C++ standard, writing readable code, writing debuggable code, being able to easily think about the final assembly, feeling that I'm actually programming the physical x86 (or other) chip rather than some virtual platform, and being professional.
Good points from everyone. I would also suggest that if you want to learn to program in a different style, you should just try writing a meaningful program in a language that forces you into that style. Programming in C or Go for a bit will help procedural programming settle in your mind more firmly, since that’s literally the only way to program in those languages.

I think that’s probably true for any programming paradigm, not just procedural.

Edited by Ben Visness on
Fair enough. I do have a couple questions that are applicable much more generally than my current project, like "if you can't or won't use templates/generics, what do you do when you need to use the same algorithm on variables of many different types?" and "if you can't or won't use exceptions, what do you do if the desired response to a special circumstance is to abort many layers of in-progress function calls?" My understanding is that Jonathan Blow and Casey, among others, happily do without both of those language features, but I don't understand the reasonable alternative.
AlexKindel
Fair enough. I do have a couple questions that are applicable much more generally than my current project, like "if you can't or won't use templates/generics, what do you do when you need to use the same algorithm on variables of many different types?" and "if you can't or won't use exceptions, what do you do if the desired response to a special circumstance is to abort many layers of in-progress function calls?" My understanding is that Jonathan Blow and Casey, among others, happily do without both of those language features, but I don't understand the reasonable alternative.


The answer to number one is metaprogramming or copy-and-paste.

The answer to the second is proper error handling, even if you have to have error checking in every layer. Though really, that should probably be refactored so there is a way to continue cleanly even in the face of errors.
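A common exception-free pattern for this is to have every layer return a status code and forward failures upward, which unwinds the in-progress calls explicitly. The names below are hypothetical, not from anyone's code in this thread:

```c
typedef enum {
    RESULT_ok,
    RESULT_divide_by_zero,
} Result;

/* The lowest layer reports the error instead of throwing. */
Result checked_divide(double a, double b, double *out) {
    if (b == 0.0) return RESULT_divide_by_zero;
    *out = a / b;
    return RESULT_ok;
}

/* A middle layer just forwards failures to its own caller, aborting
   the rest of its work; this is the "abort many layers" behavior,
   made explicit one layer at a time. */
Result average_ratio(const double *nums, const double *dens,
                     int count, double *out) {
    double sum = 0.0;
    for (int i = 0; i < count; ++i) {
        double q;
        Result r = checked_divide(nums[i], dens[i], &q);
        if (r != RESULT_ok) return r;
        sum += q;
    }
    *out = sum / count;
    return RESULT_ok;
}
```

The cost is a check at every call site; the benefit is that every exit path is visible in the code, and the "continue cleanly" option is just another branch on the status code.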
I should clarify that I don't really consider 'object-oriented programming' to be a set of language features. I see it instead as a methodology for approaching problems. I consider many popular object-oriented features to not be very useful, not because they sit in some arbitrarily defined circle called 'object-oriented programming', but because they arise out of an object-oriented programming methodology.

To respond to your message, AlexKindel, I don't agree that my solution would require emulating your object-oriented class hierarchy. Martin Fouilleul was correct in saying that all you need is a set of rules to check each sub-tree against. These are necessarily defined by you, as you are the one defining what "simplification" means in the first place. Will your program change 3^(1/2) to 3^0.5? That's up to you. If your program shouldn't do that, then you need a rule that dictates that the program not solve for a decimal approximation for that particular type of expression.

With regards to generic programming, there are several ways to solve that problem, and which method you choose depends deeply on your needs. A good way to consider some data generically is to treat it as the most generic thing possible on a computer: a block of memory. The abstraction of a 'data type' is only useful insofar as it actually helps you; if it doesn't (and if you need something generic with respect to memory), then ditch the abstraction. This is exemplified by Sean Barrett's stretchy buffer library; it is a pseudo-equivalent of the C++ standard template library's std::vector, but it is implemented in such a way that it does not require templates (and therefore no recompilation for each type it is used with).
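To illustrate the block-of-memory idea without reproducing the stretchy buffer itself, here is a qsort-style sketch: a function made generic by taking a void pointer plus an element size. The helper name is invented, and note the caveat that bytewise comparison only works for types without padding bytes:

```c
#include <string.h>

/* Returns the index of the first element equal to *key, or -1.
   Works for any element type because it only sees bytes. */
int find_index(const void *array, int count, size_t elem_size,
               const void *key) {
    const char *bytes = array;
    for (int i = 0; i < count; ++i)
        if (memcmp(bytes + (size_t)i * elem_size, key, elem_size) == 0)
            return i;
    return -1;
}
```

The same compiled function handles int arrays, double arrays, and structs alike; the "type" only exists at the call site, in the elem_size argument and the pointers you pass.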

When working with memory generically isn't useful for a particular problem, metaprogramming is probably the best way to go, though I haven't worked much with metaprogramming and therefore can't speak much further on that subject in particular.

Edited by Ryan Fleury on
In the current project, I think it would be necessary to error check at every layer: beyond user syntax errors, which can be caught immediately, the only kind of error that can happen is a divide-by-zero error (also introduced by the user input), which often can't be detected without a lot of processing, and the moment of detection might be deeply buried, as with 1/(1+1/(2^(1/2)+3^(1/2)-(5+2*6^(1/2))^(1/2))^(1/2)), where 2^(1/2)+3^(1/2)-(5+2*6^(1/2))^(1/2) happens to equal zero.

I'll have to look into metaprogramming. Maybe it would be more powerful than C#'s generic capabilities, which I haven't been able to figure out how to use to factor out everything I want to. Right now I have four classes for modelling different kinds of polynomials, and there's a ton of code that's similar or identical between them apart from acting on different types. I've actually been pretty happy with the Number hierarchy part of the program, but the polynomial code is where it has gotten nasty.

Edited by AlexKindel on
Delix
I should clarify that I don't really consider 'object-oriented programming' to be a set of language features. I see it as instead a methodology of approaching problems. I consider many popular object-oriented features to not be very useful not because they are in some arbitrarily defined circle called 'object-oriented programming', but rather because they arise out of an object-oriented programming methodology.


I use the term loosely, for sure. I'm certainly not wedded to the idea that whatever structure I use has to model named, real-world concepts.