## Recent Activity

I've been dreaming of rendering the Ghostscript tiger for a long time, and now I'm dangerously close to quality rendering. Sometimes using a library lets you focus on what you care about, heh. I used nanosvg.h (instead of my own shitty, less complete SVG parser) to parse the SVG file into a list of shapes, each shape composed of a set of cubic bezier outline paths. These paths are naturally cached for efficiency. I ripped a piece of code out of nanosvgrast.h to convert the bezier outlines to line segment outlines using a recursive subdivision algorithm. The segment outlines are cached as well, but the conversion is redone whenever the scaling level (zoom) has changed by more than 2x.
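The recursive subdivision idea can be sketched like this. This is a minimal illustration of the technique, not the actual nanosvgrast.h code; the flatness test, tolerance, and depth limit are representative but simplified:

```c
#include <math.h>
#include <stddef.h>

typedef struct { float x, y; } P2;

/* Flatten one cubic bezier into line-segment endpoints. Appends endpoints
   into out[] starting at index n; returns the new count. The caller emits
   p0 itself once at the start of the path. */
static size_t flatten_cubic(P2 p0, P2 p1, P2 p2, P2 p3,
                            float tol, int depth,
                            P2 *out, size_t cap, size_t n)
{
    /* Flatness test: how far the control points stray from the chord p0..p3. */
    float dx = p3.x - p0.x, dy = p3.y - p0.y;
    float d1 = fabsf((p1.x - p3.x) * dy - (p1.y - p3.y) * dx);
    float d2 = fabsf((p2.x - p3.x) * dy - (p2.y - p3.y) * dx);
    if (depth > 10 || (d1 + d2) * (d1 + d2) < tol * (dx * dx + dy * dy)) {
        if (n < cap) out[n++] = p3;   /* flat enough: emit a straight segment */
        return n;
    }
    /* de Casteljau split at t = 0.5, then recurse into both halves. */
    P2 p01  = { (p0.x + p1.x) / 2, (p0.y + p1.y) / 2 };
    P2 p12  = { (p1.x + p2.x) / 2, (p1.y + p2.y) / 2 };
    P2 p23  = { (p2.x + p3.x) / 2, (p2.y + p3.y) / 2 };
    P2 p012 = { (p01.x + p12.x) / 2, (p01.y + p12.y) / 2 };
    P2 p123 = { (p12.x + p23.x) / 2, (p12.y + p23.y) / 2 };
    P2 mid  = { (p012.x + p123.x) / 2, (p012.y + p123.y) / 2 };
    n = flatten_cubic(p0, p01, p012, mid, tol, depth + 1, out, cap, n);
    n = flatten_cubic(mid, p123, p23, p3, tol, depth + 1, out, cap, n);
    return n;
}
```

Since the tolerance is measured in output units, redoing the conversion after a >2x zoom change keeps the segment density appropriate for the current scale.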

Everything else is computed end-to-end as before: making tiles with localized geometry on the CPU, then rendering each tile using the equivalent of scanline rasterization on the GPU. Still scales reasonably well, and I'm curious to see what options this approach affords. While there are some drawbacks compared to tessellation, it's a less data-intensive approach, and probably more flexible all things considered.

Worked on optimizing my 2D renderer (which renders polygon outlines end-to-end, based on the ravg.pdf paper) some more. Noticed that benchmarking is hard. Interestingly, while I am recording with ShareX, it takes only about half the "normal" time to generate a frame. Here it takes around 400 usec to generate the geometry and specialize it to tiles of 64x64 pixels. It takes about 80 usec to push the tiles to OpenGL. When syncing with glFinish() at the end, the OpenGL part takes around 300 usec. Probably not measuring anything interesting here, since the GPU work is dominated by various latencies. Something similar could be the case for the CPU part (geometry), since those times are probably subject to CPU frequency scaling, for example.

If I do both workloads 20 times per frame, both measurements increase by 10x-15x, still running comfortably at 60 FPS. More complicated geometry is definitely needed now to make any serious evaluations.

Improved the polygon renderer to allow for per-polygon colors. It was actually quite a bit of work, since I had to optimize how tile paint jobs are dispatched. Came up with a simple API: the geometry is built by repeatedly calling add_line_segment(Ctx *ctx, Point a, Point b), followed by finish_polygon(Ctx *ctx, Polygon_Style *style) once all the segments of a polygon have been added. Rinse and repeat for the next polygon. What's nice is that the internal structures aren't much more complex. Given that everything is chunked by tiles already, I can afford to duplicate a handle to the Polygon_Style in each tile paint job. The renderer doesn't need to track individual polygons, so there is no need to maintain a complicated object graph.
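Usage of that API looks roughly like this. Only the two function signatures come from the post; the `Ctx`, `Point`, and `Polygon_Style` definitions and stub bodies below are hypothetical stand-ins so the example is self-contained:

```c
#include <stddef.h>

/* Hypothetical minimal stand-ins for the real renderer types. */
typedef struct { float x, y; } Point;
typedef struct { float r, g, b, a; } Polygon_Style;
typedef struct {
    size_t segment_count;   /* segments of the polygon currently being built */
    size_t polygon_count;   /* finished polygons */
} Ctx;

static void add_line_segment(Ctx *ctx, Point a, Point b)
{
    (void)a; (void)b;       /* the real renderer bins segments into tiles here */
    ctx->segment_count++;
}

static void finish_polygon(Ctx *ctx, Polygon_Style *style)
{
    (void)style;            /* the real renderer copies a style handle per tile job */
    ctx->polygon_count++;
    ctx->segment_count = 0;
}

/* Build one red triangle: three segments, then close out the polygon. */
static void build_triangle(Ctx *ctx)
{
    Polygon_Style red = { 1, 0, 0, 1 };
    Point a = { 0, 0 }, b = { 100, 0 }, c = { 50, 80 };
    add_line_segment(ctx, a, b);
    add_line_segment(ctx, b, c);
    add_line_segment(ctx, c, a);
    finish_polygon(ctx, &red);
}
```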

Next up is testing with more complex shapes and seeing which tricks can be employed to keep it running smoothly.

Implemented the most basic mechanism from the ravg.pdf paper (https://hhoppe.com/ravg.pdf), which cost me more than 3 days. Seems like graphics programming is hard. I noticed it helps a lot to build visualizations for all the little preprocessing data structures, to make sure they don't break even in corner cases. Making the shaders hot-reloadable helped a lot too.

The 144 stars shown in the video consist of 720 line segments and get preprocessed into small tiles (32x32 screen pixels), each of which has a specialized description of the relevant line segments: some of the original segments clamped to the cell, plus some artificial segments added to complete the description. The tiles can then get rasterized on the GPU in a conventional vertex/pixel shader pipeline. Each pixel "finds" whether it is inside or outside relative to the segments of its cell. There is proper antialiasing for partially covered pixels, too.
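The per-pixel inside/outside decision boils down to a classic crossing test against the cell's segment list. A minimal even-odd sketch (the real shader also handles winding and antialiasing, which are omitted here):

```c
typedef struct { float x, y; } Pt;
typedef struct { Pt a, b; } Seg;

/* Even-odd rule: cast a ray from p in the +x direction and count how many
   segments it crosses. Because each cell carries clamped originals plus
   artificial closing segments, the cell-local list alone is enough to
   classify every pixel in that cell. */
static int point_inside(Pt p, const Seg *segs, int n)
{
    int crossings = 0;
    for (int i = 0; i < n; i++) {
        Pt a = segs[i].a, b = segs[i].b;
        if ((a.y > p.y) != (b.y > p.y)) {           /* segment spans the ray's y */
            float x = a.x + (p.y - a.y) / (b.y - a.y) * (b.x - a.x);
            if (x > p.x) crossings++;               /* crossing to the right */
        }
    }
    return crossings & 1;
}
```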

On my computer with a 4K resolution screen, the 144 stars are generated and preprocessed in about 1 ms (CPU), and the cells get rasterized in about 1 ms as well (CPU / GPU (OpenGL)). About 2x this time in debug mode with runtime checking enabled.

Work-in-progress SVG parser to test my triangulator and future vector UI work

Putting my Delaunay triangulator to some good use, with two vector glyphs. The vector glyphs are hand-coded as polygon outlines (each of these glyphs needs two outlines).

Took the time today to hunt down a bug in my Delaunay triangulator. The blue edges are the ones which are not locally Delaunay (i.e. something is wrong). It seems there was an oversight in Guibas & Stolfi's Delaunay paper, or maybe I was reading it wrong. I found & fixed the problem. I then had the idea of using the (now (more) correct) Delaunay triangulator to outline point clouds. In the video I'm simply hiding all edges that are longer than some constant tuned to the granularity of the input point cloud.
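For reference, the "locally Delaunay" check comes down to the in-circle predicate from Guibas & Stolfi: an edge is locally Delaunay when neither adjacent triangle's opposite vertex lies inside the other triangle's circumcircle. A straightforward (non-robust, plain floating-point) sketch:

```c
typedef struct { double x, y; } V2;

/* Returns > 0 when d lies strictly inside the circle through a, b, c
   (with a, b, c in counterclockwise order), < 0 when outside, ~0 when
   cocircular. This is the 3x3 determinant form of the InCircle test;
   a production version would use an exact/adaptive predicate. */
static double incircle(V2 a, V2 b, V2 c, V2 d)
{
    double ax = a.x - d.x, ay = a.y - d.y;
    double bx = b.x - d.x, by = b.y - d.y;
    double cx = c.x - d.x, cy = c.y - d.y;
    double al = ax * ax + ay * ay;
    double bl = bx * bx + by * by;
    double cl = cx * cx + cy * cy;
    return ax * (by * cl - bl * cy)
         - ay * (bx * cl - bl * cx)
         + al * (bx * cy - by * cx);
}
```

Inexact arithmetic in exactly this predicate is a common source of "something is wrong" edges on nearly-cocircular input.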

This was incredibly hard to achieve (for me). It uses Loop-Blinn with an ear-clipping triangulator. In the case of one (or more) vertices intersecting a bezier "triangle" (control point + 2 polygon neighbours), the bezier triangle must be split into two (or more) "wings" (though for rendering, each of the wings still uses the original triangle's control points, of course). There are so many things that can go wrong with this basic fix that I almost went insane. At the end of the video there is a tiny glitch, but that one is acceptable to me since I made the polygon self-intersecting (looking at just the polygon vertices; the concave bezier curve is just added as an extra with absolutely no bearing on the rest of the process).
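The per-pixel part of the quadratic Loop-Blinn case is small. Each bezier triangle carries canonical coordinates (0,0), (1/2,0), (1,1) at its corners; the GPU interpolates them, and the sign of u²−v says which side of the curve a pixel is on. Splitting a triangle into wings only changes the geometry, not this (u,v) assignment, which is why the original triangle's coordinates are kept. A CPU-side sketch of that math (which side counts as "filled" depends on orientation):

```c
/* Sign test for the implicit quadratic: the curve is u*u - v == 0 in the
   canonical coordinates, so one side is negative, the other positive. */
static int quad_curve_inside(float u, float v)
{
    return u * u - v < 0.0f;
}

/* Interpolate the canonical (u,v) at barycentric weights (w0, w1, w2)
   over the corner assignments (0,0), (0.5, 0), (1, 1). */
static void quad_curve_uv(float w0, float w1, float w2, float *u, float *v)
{
    (void)w0;                    /* corner 0 contributes (0,0) */
    *u = 0.5f * w1 + 1.0f * w2;
    *v = 1.0f * w2;
}
```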

Working on Constrained Delaunay (forced edges)

Implemented incremental Delaunay Triangulation from Guibas & Stolfi's Paper (1985). Not sure everything is right, but it's fun to play with. Added an animation for the Locate procedure as well.

My brain was melting while I was trying to implement some robust triangulations... so after getting just the Quad-Edge data structure right, it was time for some ryanimations to keep myself motivated 🙂

More polygon rasterizer work. Editing a B-spline curve. This stuff is hard to get right, but I've made some progress...

Software rasterization experiments. I know it's ugly AF, and the code looks even worse - but it's still a good feeling to have managed to put something on the screen. The torus is represented as two separate non-intersecting polygons (inner and outer circle, 360 points each) and is rasterized using a generic polygon rasterizer. With a little more work I'll be able to rasterize vector fonts as well. The lines are bezier curves sampled 128 times; at each sample a normal is computed and scaled to a certain thickness, resulting in a band that is easily triangulated. The big blue circle is rendered implicitly (center + radius), which allows for quick and dirty antialiasing that looks somewhat better than the rest.

Coding a GUI from scratch for work. This time around, I decided to start with a software rasterizer, which was a great choice. Added an OpenGL backend later. Colored + textured quads only. FreeType for font rasterization. A new idea was to use 1-dimensional glyph allocation for the font atlas. It's perfect if we're doing pixel-precise blitting: no 2 dimensions needed, since there is no texture interpolation. This simplifies the glyph allocation and makes it very easy to dynamically upload glyph data to a texture in OpenGL mode, much like a streaming vertex buffer. Haven't profiled the OpenGL backend; hopefully I can improve on it, but 3 ms/frame is bearable for now. Most of the time is probably spent waiting for uploads (single threaded).
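The 1-dimensional allocation really is just a bump allocator over one long strip of texels; here is a hypothetical sketch of the idea (names and the eviction-free policy are illustrative, not the actual implementation):

```c
/* The atlas texture is treated as one long strip of texels; each glyph gets
   a contiguous span. With pixel-precise blitting there is no filtering
   across neighbouring rows, so no 2D packing is needed, and uploading new
   glyphs is a simple append -- much like a streaming vertex buffer. */
typedef struct {
    int capacity;   /* total texels in the strip */
    int cursor;     /* next free texel */
} Atlas1D;

/* Returns the glyph's offset in the strip, or -1 when the atlas is full.
   (A full version would evict or grow instead of failing.) */
static int atlas1d_alloc(Atlas1D *atlas, int glyph_size)
{
    if (atlas->cursor + glyph_size > atlas->capacity)
        return -1;
    int offset = atlas->cursor;
    atlas->cursor += glyph_size;
    return offset;
}
```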

Software rasterized image viewer. This particular image has 272 megapixels (14108 x 19347) and ~1GB of raw size. A mipmap of the image is created ahead of time and saved as 1024x1024 tiles. The viewer runs in about 50MB of RAM, with 16 cached tiles of 1024x1024 pixels, loading tiles from the appropriate mipmap level on demand. Some prediction is still needed to avoid flickering.

Experimenting with automated layout


Made another attempt at structuring a GUI from scratch. Taking a somewhat retained approach, but without any messaging (no event types / routing etc.), which seems to be a big contributor to complexity. The structure used is a big Ui context struct that holds input/output state as well as stacks of layout rects, clipping rects, and mouse interaction regions. The layout methods used are "RectCut" and two custom layout routines that arrange the buttons in rows / columns. The code screenshot shows how the color pickers at the bottom are assembled from more primitive elements. I feel it's reasonably concise while maintaining flexibility.
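For anyone unfamiliar with RectCut: each helper slices a child rect off one side of a parent rect and shrinks the parent in place, so a whole layout is just a sequence of cuts. A minimal sketch of two of the cut functions:

```c
typedef struct { float x0, y0, x1, y1; } Rect;

/* Cut a w-wide strip off the left edge; the parent shrinks accordingly. */
static Rect cut_left(Rect *r, float w)
{
    Rect out = { r->x0, r->y0, r->x0 + w, r->y1 };
    r->x0 += w;
    return out;
}

/* Cut an h-tall strip off the top edge. */
static Rect cut_top(Rect *r, float h)
{
    Rect out = { r->x0, r->y0, r->x1, r->y0 + h };
    r->y0 += h;
    return out;
}
```

Typical usage: cut a toolbar off the top of the screen rect, then cut buttons off the left of the toolbar one by one; whatever remains of the parent rect is the content area.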

Starting a few experiments with UI design.

Trying to get the right organization here has caused me a lot of pain. Still not perfect, but it's getting better at least! And if two or more parties are involved in an operation (and you're stubborn enough to not want to have a central location that knows all the things), there is invariably a lot of bureaucracy involved.

Two nasty words I'm liking more and more: object-oriented (actor model), retained-mode GUI. ~700 lines of OpenGL + GLFW, ~1000 lines for the GUI. Effortlessly clean code (by my standards). Nice isolation and extremely generic widget implementations with deferred messaging. All the widgets have a single inbox and outbox, and messaging happens strictly along the nesting hierarchy.
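A hypothetical sketch of the single inbox/outbox idea (types, capacities, and the one-direction delivery pass are my illustration, not the actual widget code): messages are plain tagged values, and delivery is a deferred pass that only moves messages between parent and child, never across the tree.

```c
#include <stddef.h>

#define QUEUE_CAP 64

typedef struct { int tag; int value; } Msg;
typedef struct { Msg items[QUEUE_CAP]; size_t count; } Queue;

typedef struct Widget {
    Queue inbox, outbox;        /* exactly one of each per widget */
    struct Widget *children;
    size_t child_count;
} Widget;

static int queue_push(Queue *q, Msg m)
{
    if (q->count == QUEUE_CAP) return 0;
    q->items[q->count++] = m;
    return 1;
}

/* Deferred delivery pass: drain every child's outbox into the parent's
   inbox. A full version would also route parent -> child; messaging stays
   strictly on the nesting hierarchy either way. */
static void deliver_up(Widget *parent)
{
    for (size_t i = 0; i < parent->child_count; i++) {
        Widget *c = &parent->children[i];
        for (size_t j = 0; j < c->outbox.count; j++)
            queue_push(&parent->inbox, c->outbox.items[j]);
        c->outbox.count = 0;
    }
}
```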

Some progress on my Lisp from yesterday. Added string objects/literals, but more importantly: Eval, Quasiquote, Unquote. The important realization is that macros behave exactly like functions, except they receive their arguments unevaluated.

Had another try at programming a small Lisp today. I think I finally understand the evaluation process and macros better. Although macros here are not yet usable as such, the interpreter simply evals twice: a first pass to splice in the syntactic (unevaluated) args, and then a second eval pass over the resulting syntax node.

jstimpfle

Some 2D UI work

Did some more work on my puzzle game. Longer video at http://jstimpfle.de/videos/DqUynavp5j.mp4

Worked on a puzzle designer / game. Also did some work on a WebAssembly port. Check it out at http://jstimpfle.de/projects/puzzle/puzzle.html

Worked on a puzzle piece designer. There are still crashes in the triangulator. Computational geometry is hard work

Worked some more on a toy Lisp implementation to better understand Lisp. To be honest, I still haven't found what's so great about it. The huge difference is that everything is lists, so it's much easier to make hard-to-debug mistakes - which I assume applies not only to writing the compiler but also to writing programs in Lisp itself.

The quoting rules and name binding schemes are at least as context-dependent and confusing as, say, C must be for a novice programmer. I tried making very few special cases but haven't really found a good way; Scheme and also @rxi's fe both have numerous primitive forms (which need special handling) as well.

Finally took the plunge and started moving all my code to a monorepo. For now I've decided to define my projects as simple Python functions that can inspect how the project is built (compiler/OS etc.) and return the set of cfiles, includedirs, linklibs and so on. It feels great so far, because I can finally share code fluently between all my projects without copying. No diverging code anymore! I also get a lot of control over my compilation process - I've already included a configuration that targets WASM/WebGL with emscripten. It's all in a single build.py file, and the project definitions already outweigh the build logic code. All I'm going to add in the future is checks for whether a file needs to be rebuilt.

website screenshot

found an app to record gifs

A little app to combine STL files in tree structures, and move objects around according to their degrees of freedom
