I have a design problem that seems well-suited to OOP, but I want a clean C implementation. I'd like the community's thoughts on the matter. A bit of background:
I've been working on an image editor project called Papaya for some time now. Recently I had a breakthrough in its high-level design, and I now want to implement everything as nodes. Here's what I'd like to build:
Layering and masking
Traditional image applications have layers. You can put pixels on a layer, and you can put layers on top of each other to composite an image. Each layer can also have a blend mode (e.g. Normal, Multiply, Overlay, etc.). Layers can also have masks. Photoshop confuses this even further by having two types of masks - a black and white mask that is represented as a second image in the same layer, and a clipping mask, which uses the layer below as a mask.
Papaya won't have any layers at all. It will have bitmap nodes instead. A bitmap node is just a type of node that will contain a bitmap image.
     ▲ Output
     |
 ____|___
|        |
| Bitmap |◄--- Mask (optional)
|________|
     ▲
     |
     | Input (optional)
     |
A bitmap node will have two input slots and one output slot. It will contain its own image data and a blend mode. If given an input image, it will blend its own data over the input and produce the result as its output image. If a mask is connected, the mask will restrict where that blending applies. We can thus chain bitmap nodes together to get layering and masking functionality.
When a bitmap node is selected, users will be given bitmap/raster tools to operate on the data for that node. Brushes, erasers, selection tools, etc. will be available.
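To make this concrete, here's a rough sketch of what a bitmap node's own data might look like. This is purely hypothetical, not a final API; all names are placeholders:

typedef enum {
    PPY_BLEND_NORMAL,
    PPY_BLEND_MULTIPLY,
    PPY_BLEND_OVERLAY
    /* ... */
} PpyBlendMode;

typedef struct {
    int width, height;      /* This node's own resolution      */
    unsigned char* pixels;  /* Owned RGBA pixel data           */
    PpyBlendMode blend;     /* How to composite over the input */
} PpyBitmapData;

The input and mask slots are common to nearly every node type, so they would likely live in a shared node struct rather than here (see the sketch near the end of this post).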
Effects
In Papaya, just like everything else, effects will be nodes too. For example, a hue/saturation effect node may look like this:
     ▲ Output
     |
 ____|____
|         |
| Hue/Sat |◄--- Mask (optional)
|_________|
     ▲
     |
     | Input
     |
It will have two input slots and one output slot. All effect parameters will be node parameters, just as blend mode is a node parameter for bitmap nodes. Masking will work just as it does for bitmap nodes.
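In the same spirit, here's a hypothetical sketch of the hue/saturation node's parameters; they are just plain struct fields, the same way blend mode is a field on the bitmap node:

typedef struct {
    float hue;         /* Hue shift in degrees, -180..180 */
    float saturation;  /* Saturation offset, -1..1        */
    float lightness;   /* Lightness offset, -1..1         */
} PpyHueSatData;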
A huge number of effects - blurs, color corrections, exposure, brightness/contrast, chromatic effects, vignettes, drop shadows, embossing, etc. - can each be implemented atomically as a node.
Vectors
The node approach means that both vector and raster images can be first-class citizens in Papaya. The "document" will have no resolution whatsoever: each bitmap node may have its own resolution, and the final image is simply the output of whichever node you choose as the final one. Vector nodes therefore get first-class support.
When a vector node is selected, the raster tools on the left bar of the editor will disappear, giving way to vector tools instead.
If you connect a vector node's output to a bitmap node's input, that vector image will be rasterized at the bitmap node's resolution.
     ▲
     |
 ____|___
|        |
| Bitmap |
|________|
     ▲
     |  Rasterization occurs here, at the Bitmap node's resolution
 ____|___
|        |
| Vector |
|________|
     ▲
     |
     |
     |
(Note that vector support is non-trivial to implement. All I'm saying is that it fits natively into this node-based model.)
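One mechanical consequence for the Engine: evaluation probably needs to carry the requested output resolution down the graph, so that a vector node knows what size to rasterize to. A hypothetical signature (ppy_eval_at, PpyImage, and PpyNode are placeholders, not settled API):

/* Evaluate the node at 'index', producing a bitmap. A bitmap   */
/* node would ignore the size hint and use its own resolution;  */
/* a vector node would rasterize itself at exactly this size.   */
PpyImage ppy_eval_at(PpyNode* nodes, int index, int width, int height);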
Lossless transforms
Another kind of node will be the transformation node, which can translate, scale, rotate, and otherwise deform its input. The transform is lossless, since the original input node is retained untouched. Tiling nodes will also be possible, making patterns like checkers and stripes easy to create.
Parametric images
The file format used to store Papaya documents will simply describe this node graph and the properties of each node, along with some sparse document-level information: grid-line size, guide lines, other document-specific settings, and so on.
Papaya will consist of two main parts:
1) The Core, which is the GUI, platform layer, etc.
2) The Engine, which will be able to serialize and deserialize nodes, and also evaluate them.
The Core will call the Engine to actually produce images by compositing the nodes. I'd like to make the Engine a separate component that can be integrated into other code bases, such as games or game engines. Foreign code bases would then be able to read, tweak, and evaluate the node graph from Papaya files and get images back, letting games parameterize image generation wherever required.
---
I think it'll be valuable to create the Engine in pure C, since that makes integration easier for other people.
I'm having trouble coming up with a clean non-OOP solution. Here's the general outline of what I've been thinking about:
// Public API - Used in Papaya's Core, and in consumer applications
// ----------------------------------------------------------------
ppy_load(filename, &x, &y, &n) . . Loads a file, returns a bitmap.
ppy_parse(...) . . . . . . . . . . Loads a file, returns document data.
ppy_node_evaluate(...) . . . . . . Takes in a node graph, evaluates it,
                                   and returns the evaluated bitmap for
                                   a specific node.
??? . . . . . . . . . . . . . . .  Some way of selectively modifying nodes.

// Authoring API - Used primarily in Papaya's Core, but can be exposed to consumers as well
// ----------------------------------------------------------------------------------------
ppy_node_init() . . . . . . . . . . . Create/init a node
ppy_node_destroy(). . . . . . . . . . Destroy a node
ppy_write(filename, w, h, n, data). . Write a document to a file
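To make the intended consumer workflow concrete, here's a hypothetical usage sketch. PpyDocument, ppy_node_find, and the exact signatures are placeholders (ppy_node_find being one possible answer to the "???" slot above), and params.huesat assumes the tagged-union node layout sketched at the end of this post:

/* A game loads a Papaya file, tweaks one node's parameter,   */
/* and re-evaluates the graph to get a palette-swapped image. */
PpyDocument* doc = ppy_parse("character_skin.ppy");

PpyNode* node = ppy_node_find(doc, "HueShift"); /* hypothetical lookup */
node->params.huesat.hue = 90.0f;

int w, h;
unsigned char* pixels = ppy_node_evaluate(doc, node, &w, &h);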
So, finally, the question: What would be a good way to structure this API?
I need different kinds of nodes - BitmapNode, HueSatNode, VectorNode, etc. I need to expose their internal member variables to users, allow for generic evaluation calls, and store them all in a homogeneous array. This seems tantalizingly well-suited to an OOP solution.
This spec will have to define what the Node structs and the function calls will look like.
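For the record, the least-OOP structure I've come up with so far is a tagged union: a single PpyNode struct with a type tag, the common input/mask slots, and a union of per-type payloads. That keeps member variables directly accessible and lets the whole graph live in one flat array. A rough sketch, reusing the hypothetical structs from above:

typedef enum {
    PPY_NODE_BITMAP,
    PPY_NODE_HUESAT,
    PPY_NODE_VECTOR
    /* ... */
} PpyNodeType;

typedef struct {
    PpyNodeType type;
    int input;   /* Index into the document's node array, or -1 */
    int mask;    /* Index into the node array, or -1            */
    union {
        PpyBitmapData bitmap;
        PpyHueSatData huesat;
        /* ... */
    } params;
} PpyNode;

Generic evaluation would then be a switch over the tag rather than a virtual call (ppy_image_blank and ppy_composite are hypothetical helpers):

PpyImage ppy_eval_at(PpyNode* nodes, int index, int width, int height)
{
    PpyNode* n = &nodes[index];
    switch (n->type) {
    case PPY_NODE_BITMAP: {
        /* Composite this node's pixels, using its blend mode and */
        /* optional mask, over its evaluated input (if any). The  */
        /* bitmap node's own resolution wins over the size hint.  */
        PpyImage below = (n->input >= 0)
            ? ppy_eval_at(nodes, n->input,
                          n->params.bitmap.width, n->params.bitmap.height)
            : ppy_image_blank(n->params.bitmap.width, n->params.bitmap.height);
        return ppy_composite(below, &n->params.bitmap,
                             n->mask >= 0 ? &nodes[n->mask] : NULL);
    }
    /* ... one case per node type: PPY_NODE_HUESAT adjusts its     */
    /* evaluated input, PPY_NODE_VECTOR rasterizes at the given    */
    /* (width, height), and so on.                                 */
    default:
        return ppy_image_blank(width, height);
    }
}

The obvious alternative is a table of function pointers indexed by node type - effectively a hand-rolled vtable, which is exactly the OOP-in-C flavor I was hoping to avoid. Hence the question.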
Thoughts?