
Affine transforms for a movable origin

I'm having some trouble wrapping my head around the transforms I need for a drawing surface. I hope it's not something super obvious that I'm completely missing, but I've been racking my brain on this issue since yesterday and I'm hoping I can get some help here.

I'm working on an iOS app that has a drawing surface (specific knowledge of iOS/Core Graphics isn't necessary as the problem just concerns transforms). It supports paths and images. It also supports panning (effectively infinite in all directions) and zooming (10% - 1000%). The origin of the user coordinate space is the upper left corner. This means that whenever you zoom, shapes "pull" or gravitate towards this origin at the top left. This does not make for a great user experience: when you've panned to shapes far from the origin and then zoom in, the shapes drift way off screen towards the origin. What I want is for zooming to always focus on the centre of the screen. i.e. No matter how far you've panned away from the original origin, zooming in/out should always be relative to the current centre of the screen.

I have successfully achieved a movable origin where zooming does happen relative to the centre of the screen. However, it adversely affects the speed of panning when zoomed far in or out: when zoomed far out, panning is very slow, and when zoomed in, panning is too fast. In other words, the speed of panning does not follow the speed of your finger as it moves over the screen. Here is a visual of what's happening:
Sample 1

Notice that the origin stays at the centre of the screen and that zooming in/out is relative to the origin even when panning away. When zoomed out, you can see how slow panning is. Here is another video that shows a consistent panning speed, but the origin is no longer fixed to the centre of the screen, so zooming in/out is no longer relative to the centre:
Sample 2

In short, the second video illustrates correct panning but incorrect zooming, whereas the first video shows correct zooming but incorrect panning. :/

Here's how I've approached this problem so far:

- As the user pans the screen, I need to move the origin of the graphics context to keep it centred in the screen. This offset value is updated with the absolute panning position whenever panning is in progress. (I'll explain the translation matrix shortly.)
var offset: CGPoint = .zero {
    didSet {
        // bounds is the bounding rect of the screen.
        tOriginOffset = CGAffineTransform(translationX: bounds.width/2.0 - offset.x, y: bounds.height/2.0 - offset.y)
        calculateTranslationMatrix()
        setNeedsDisplay()
    }
}


- As the user zooms in/out, update the scaling transform accordingly.
var scale: CGFloat = 1.0 {
    didSet {
        scale = max(min(scale, QLCanvasView.MAX_SCALING), QLCanvasView.MIN_SCALING)
        tScaling = CGAffineTransform(scaleX: scale, y: scale)
        calculateTranslationMatrix()
        setNeedsDisplay()
    }
}


- Since I'm applying an offset to the graphics context in order to move the origin to the centre of the screen, I need to compensate for this when drawing actual paths and images so they appear in the right place on the canvas. Hence a separate translation matrix is applied to paths and images to move them "back" to where they should be.
func calculateTranslationMatrix() {
    // NOTE: Factor in scale in translation offset so that panning stays at a consistent speed. But it doesn't work as I expected.
    let scaledOffset = offset * (1.0/scale)
    let tx = (scaledOffset.x - tOriginOffset.tx)
    let ty = (scaledOffset.y - tOriginOffset.ty)
    tTranslation = CGAffineTransform(translationX: tx, y: ty)
}


- Finally, during drawing, I concatenate the origin offset and scaling matrices together and apply them to the graphics context, which affects all elements drawn to it. All paths then have the translation matrix applied to them to compensate for the translation of the graphics context.
let ctm = tScaling.concatenating(tOriginOffset)
context.concatenate(ctm) // effectively ctm = tScaling * tOriginOffset
//...
path.move(to: stroke.origin, transform: tTranslation)


To me it seems like it's a problem with the translation matrix that I'm applying to individual paths/images. I've attempted to scale it according to the zoom scale as you can see in the function above, but it obviously isn't right. What am I missing here? Or am I approaching this totally in the wrong way?

Edited by Flyingsand on Reason: Initial post
I tried for a few hours to get some code doing what you want, but I failed. I feel like it should be as simple as translate, scale, translate, but I can't seem to make that work. I got something that resembles what you want, but somehow the zoom doesn't zoom at the center of the screen. I'll try to continue when I've got some time, as this interests me and it bothers me that I can't solve it.

My idea was that before you start to zoom, you compute the "world space" position of the "screen space" center. You compute the new zoom factor. And then you compute the world space position of the screen space center with the new zoom value. The difference between the two, scaled by the current zoom factor, would then be the world space offset needed to keep the same point at the center.

Here is the code, but it's not quite working (zoom doesn't go correctly towards the center). The pan works great independently of the zoom level.
r32 translation[ 16 ];
r32 scale[ 16 ];
r32 transform[ 16 ];
r32 screen[ 16 ];
r32 matrix[ 16 ];

static r32 panX = 0;
static r32 panY = 0;
static r32 zoom = 1.0f;

static r32 lastMouseX = 0.0f;
static r32 lastMouseY = 0.0f;

static r32 zoomStartY = 0.0f;

static r32 startZoom = 1.0f;
static r32 startPanX = 0;
static r32 startPanY = 0;
static r32 centerX = 0;
static r32 centerY = 0;

if ( input_keyJustPressed( &window, vk_mouseLeft ) ) {
    lastMouseX = ( r32 ) window.mouseX;
    lastMouseY = ( r32 ) window.mouseY;
    zoomStartY = ( r32 ) window.mouseY;
    startZoom = zoom;
    startPanX = panX;
    startPanY = panY;
    /* To world space */
    centerX = ( window.width * 0.5f ) * ( 1 / zoom ) - startPanX;
    centerY = ( window.height * 0.5f ) * ( 1 / zoom ) - startPanY;
}

if ( input_keyIsPressed( &window, vk_mouseLeft ) ) {
    
    r32 mouseX = ( r32 ) window.mouseX;
    r32 mouseY = ( r32 ) window.mouseY;
    
    if ( input_keyIsPressed( &window, vk_space ) ) {
        zoom = startZoom + ( zoomStartY- mouseY ) * 0.01f;
        /* To world space */
        r32 newCenterX = ( window.width * 0.5f ) * ( 1 / zoom ) - startPanX;
        r32 newCenterY = ( window.height * 0.5f ) * ( 1 / zoom ) - startPanY;
        panX = ( newCenterX - centerX ) * zoom + startPanX;
        panY = ( newCenterY - centerY ) * zoom + startPanY;
    } else {
        panX += mouseX - lastMouseX;
        panY += lastMouseY - mouseY; /* Inverted because in the mouse coordinate system y+ goes up. */
    }
    
    lastMouseX = ( r32 ) window.mouseX;
    lastMouseY = ( r32 ) window.mouseY;
}

matrix_createScale( scale, vec3( zoom, zoom, 0 ) );
matrix_createTranslation( translation, vec3( panX, panY, 0 ) );
// matrix_createIdentity( translation );

// matrix_multiply4x4( translation, scale, transform ); /* Order is important */
matrix_multiply4x4( scale, translation, transform );
matrix_createScreenSpaceTopDown( matrix, window.width, window.height );
matrix_multiply4x4( transform, matrix, screen );

lineBatch.matrix = screen;

gl_batch( &commands, &lineBatch );
gl_render( &commands );

I feel like expressing this in matrices will be much simpler than manually manipulating the various variables.

Basically, when you want to zoom, you need to adjust the current position (the pan offset). With matrices this looks like this:

float dscale = ...; // scale change: 0.1f or -0.1f or similar value
float newscale = scale + dscale;

if (newscale >= 0.1f && newscale <= 5.f) // clamp scale to [0.1 .. 5.0] interval
{
  float t = newscale / scale;

  matrix m;
  mat_identity(m);
  mat_translate(m, centerx, centery);
  mat_scale(m, t, t);
  mat_translate(m, -centerx, -centery);
  mat_transform(m, &posx, &posy);

  scale = newscale;
}

This will adjust the posx/posy panning offset so that the zoom happens around the centerx/centery point.
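Written out without the matrices, the adjustment boils down to a very small formula. Here it is as a sketch in the CGPoint/CGFloat terms used earlier in the thread (the function name is just for illustration):

// The translate/scale/translate product above amounts to moving the pan
// position so that the point currently under (center.x, center.y) stays at
// the same screen position after the scale change.
func adjustPanForZoom(pos: CGPoint, center: CGPoint,
                      oldScale: CGFloat, newScale: CGFloat) -> CGPoint {
    let t = newScale / oldScale  // relative scale change
    return CGPoint(x: (pos.x - center.x) * t + center.x,
                   y: (pos.y - center.y) * t + center.y)
}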

Here is code that demonstrates this. It is based on SDL2, but I'm sure it should be pretty easy to adjust it to any API - it simply draws a bunch of random lines by transforming them with a 2D matrix: https://gist.github.com/mmozeiko/c9ecb8b206c245198f0a5aedc21f5a64

The code allows you to set any point as the "center" point for zooming - for example, the mouse position would be a good choice.

Here's a gif that shows how it looks at runtime - the red rectangle is the window center around which the "zooming" happens. Pan with the left mouse button, zoom with the wheel.



Edited by Mārtiņš Možeiko on
Thanks for the code sample and tips, mmozeiko! I'm closer (really close actually), but I'm still having a couple of issues. On the good side, I fixed the panning consistency when going between zooming and panning based on your feedback.

Similar to my original case, fixing either of these two issues breaks the other. So unbelievably frustrating! Here are the two cases and the issue with each:

1) I can make zooming focus in on any arbitrary point, which feels really nice from a user perspective. Panning works fine as well. However, any subsequent zooming after the initial one causes the paths/strokes to "jump". Here's a video for reference: Sample 1

2) By fixing the jumping issue above, zooming no longer focuses in on the focus point. Instead it kind of "drifts" around it. It's pretty close much of the time, but not good enough for my tastes. Here is a video reference: Sample 2

Based on your feedback, I made some fundamental changes to how I apply the transform. I no longer apply any transformation to the graphics context (canvas) itself. Instead, the transform is applied to the paths themselves. Secondly, I handle zooming and panning in separate cases, as you will see in my code sample. When only a panning gesture is recognized, it simply translates the affine transform by a delta amount.

Here is the code inside the draw method of my canvas view:
if isZooming {
    #if DEBUG
    let rect = CGRect(origin: focus - CGPoint(x: 2.0, y: 2.0), size: CGSize(width: 4.0, height: 4.0))
    context.addEllipse(in: rect)
    context.setFillColor(UIColor.red.cgColor)
    context.fillPath()
    #endif

    #if false
    let tp = offset - focus

    t = .identity
    t = t.translatedBy(x: focus.x, y: focus.y) // translate to focus point so scaling is relative to it
    t = t.scaledBy(x: scale, y: scale)
    t = t.translatedBy(x: tp.x, y: tp.y) // translate "back" to compensate for focus point and include any panning offset
    #else
    let f = (focus - prevFocus)/scale // calculate focus point's delta
    let tp = deltaOffset*(1.0/scale) - f // calculate translation for compensating translation to focus point, and account for any panning movement

    t = t.translatedBy(x: f.x, y: f.y) // translate to focus point so scaling is relative to it
    t = t.scaledBy(x: scale/prevScale, y: scale/prevScale)
    t = t.translatedBy(x: tp.x, y: tp.y) // translate "back" to compensate for focus point
    #endif
} else {
    let delta = deltaOffset * (1.0/scale)
    t = t.translatedBy(x: delta.x, y: delta.y)
}

prevFocus = focus


You can see each case around the #if #else #endif blocks. In the first case (zoom in on the focus point), the affine transform is created anew each time, which is probably why the paths jump when initiating subsequent zooms. The new affine transform doesn't match the previous one from when the last zoom operation finished. So I must be missing an offset value here in order to create a new affine transform that "picks up" from the last one.

In the second case (zooming focus drifts, but no jumps), the affine transform is not recreated each redraw, as everything is based on deltas. I suspect this is the cause of the drifting -- the deltas are slightly off, or not being applied quite right in order to focus in on the right point.

Any further insight into my issue here? I'm quite surprised how fussy this has turned out to be. It seems like it should be a relatively straightforward (and quite common) thing to do when dealing with a drawing surface.

Edit: In case it's useful, here is the code that handles the iOS gestures themselves:
@objc func handlePan(_ sender: UIPanGestureRecognizer) {
    let translation = sender.translation(in: view)
    let canvasView = view as! QLCanvasView
    
    switch sender.state {
    case .began:
        baseOffset = canvasView.offset
    case .changed:
        canvasView.offset = baseOffset! + translation
    default:
        break
    }
    
    canvasView.setNeedsDisplay()
}

@objc func handlePinch(_ sender: UIPinchGestureRecognizer) {
    let canvasView = view as! QLCanvasView
    
    switch sender.state {
    case .began:
        baseScale = canvasView.scale
        canvasView.isZooming = true
        canvasView.prevFocus = canvasView.focus
        canvasView.focus = sender.location(in: view)
    case .changed:
        canvasView.scale = baseScale! + (baseScale! * (sender.scale - 1.0))
        canvasView.focus = sender.location(in: view)
    case .ended:
        canvasView.isZooming = false
    default:
        break
    }
    
    canvasView.setNeedsDisplay()
}

Edited by Flyingsand on Reason: Adding code sample
Sorry, I'm not familiar with these iOS API functions. Check carefully that your calculations match. Applying the transform to the graphics context should work fine. That's pretty much what I am doing in my code example: it applies one transformation matrix to the whole drawing context, and it allows you to specify an arbitrary point to zoom around, for example, the mouse cursor.

I don't think your "drifting" is related to matching previous transforms/deltas. In my example I am also creating the transformation matrix from scratch on every frame. I believe your calculation of how to apply the scaling is wrong; it scales around a different point than you expect. That's why it seems to move away. Check your math.

Edited by Mārtiņš Možeiko on
mmozeiko
I don't think your "drifting" is related to matching previous transforms/deltas. In my example I am also creating the transformation matrix from scratch on every frame. I believe your calculation of how to apply the scaling is wrong; it scales around a different point than you expect. That's why it seems to move away. Check your math.


Yep, I was applying my scaling slightly wrong. I carefully went through the math again, and got it working! It feels great! The only additional thing I needed to do was to account for simultaneous pan & pinch gestures in the calculation of the transform. Thanks again for your help.
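Roughly, the shape of it is something like this (just a sketch with the same offset/scale/focus/prevFocus names as above, assuming the pan offset is kept in screen space; not my exact code, and applyPinch is a made-up name):

// Called from the pinch handler with the new scale and the current pinch midpoint.
func applyPinch(newScale: CGFloat, focus: CGPoint) {
    // The pinch midpoint moving between updates is itself a pan in screen space.
    offset.x += focus.x - prevFocus.x
    offset.y += focus.y - prevFocus.y

    // Re-anchor the pan offset so the point under `focus` stays fixed while
    // the scale changes (the same adjustment as in the earlier snippet).
    let t = newScale / scale
    offset.x = (offset.x - focus.x) * t + focus.x
    offset.y = (offset.y - focus.y) * t + focus.y

    scale = newScale
    prevFocus = focus
}

// The draw transform is then just the zoom followed by the screen-space pan:
// t = CGAffineTransform(translationX: offset.x, y: offset.y).scaledBy(x: scale, y: scale)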
@mmozeiko I still can't get this working. I compared your matrix functions and mine and found a difference.

I'm using 4x4 matrices, but since I don't use z, let's say they are 3x3 matrices. You use 3x2. When I pass a vector for transformation by a 3x3 matrix I pass vec3(x,y,1). I also use column-major matrices (the first 3 elements of the array are the first column of the matrix). I thought those differences didn't matter, but after comparing the results of your transformations, there was a difference with the scaling function.

You only multiply m[0] and m[3] for the scale, but shouldn't the translation part (m[4] and m[5]) be modified too?

Let's say I translate the identity matrix by -2 on x and -2 on y, and then multiply it by a scale matrix of 2 on x and y. What does the result look like? Is it this?
   T           S
| 1  0|     |2 0 0|     | 2  0  0|
| 0  1|  x  |0 2 0|  =  | 0  2  0|
|-2 -2|                 |-4 -4  0|
Which is more or less the same as using a 3x3 matrix.
| 1  0  0|     |2 0 0|   | 2  0  0|
| 0  1  0|  x  |0 2 0| = | 0  2  0|
|-2 -2  1|     |0 0 1|   |-4 -4  1|

If that's the case, the result is different from what you have in code, which produces
   T
| 1  0|                    | 2  0|
| 0  1| with mat_scale  =  | 0  2|
|-2 -2|                    |-2 -2|

If the above is correct and the code does what you want, why do we leave the translation part out of the scale multiply?
mrmixer
If the above is correct and the code does what you want, why do we leave the translation part out of the scale multiply?


It's not that the translation is left out, it's that the order is wrong. So going through the example you gave, translating the identity matrix by (-2, -2), we have:
     T              I               R
| 1  0 0 |     | 1  0  0 |     | 1  0 0 |
| 0  1 0 |  x  | 0  1  0 |  =  | 0  1 0 |
|-2 -2 1 |     | 0  0  1 |     |-2 -2 1 |


Then scaling that result by (2, 2) is:
     S               R              R'
| 2  0  0 |     | 1  0 0 |     | 2  0 0 |
| 0  2  0 |  x  | 0  1 0 |  =  | 0  2 0 |
| 0  0  1 |     |-2 -2 1 |     |-2 -2 1 |


i.e. Matrix multiplication is not commutative.
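If it helps, the same thing can be checked quickly with CGAffineTransform, which uses the same row-vector convention with the translation in the bottom row (just a sanity check, not code from the app):

import CoreGraphics

let T = CGAffineTransform(translationX: -2, y: -2)
let S = CGAffineTransform(scaleX: 2, y: 2)

let translateThenScale = T.concatenating(S)  // T x S: the translation gets scaled
let scaleThenTranslate = S.concatenating(T)  // S x T: the translation is untouched

print(translateThenScale.tx, translateThenScale.ty)  // -4.0 -4.0
print(scaleThenTranslate.tx, scaleThenTranslate.ty)  // -2.0 -2.0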
I knew the order is important, but I still messed it up. Thanks.

Sorry to resurrect an old thread, but I've been struggling with this for a while and can't seem to figure out why I'm sometimes getting massive oscillations when panning/zooming.

When the window is resized / panned / zoomed, I call the following routine, which recomputes the world-to-screen matrix that gets applied to all Draw* routines (I'm currently using Raylib):

internal inline M3x3
world_to_screen_recompute(V2 window_dims, Game *game) {
    M3x3 m = identity_m3x3();
    translate_by_m3x3(&m, -game->mouse_pos.x, -game->mouse_pos.y);
    scale_by_m3x3(&m, game->camera_zoom, game->camera_zoom);
    translate_by_m3x3(&m, game->mouse_pos.x, game->mouse_pos.y);
    translate_by_m3x3(&m, -game->camera_pos.x, -game->camera_pos.y);
    scale_by_m3x3(&m, TILE_DIM_PX, -TILE_DIM_PX);
    translate_by_m3x3(&m, window_dims.x/2.f, window_dims.y/2.f);
    return m;
}

mouse_pos and camera_pos are in world coords. TILE_DIM_PX is just a constant (1 world unit is equivalent to the radius of a tile). It works totally fine if I don't do the mouse translate, and instead scale after the camera has been offset.

Any advice would be hugely appreciated


Edited by panthalassadigital on

I'm not super confident with this, but I didn't see any issue with what you wrote. It could still be wrong if the matrix multiplication is done in the wrong order, including how you multiply the points by the matrix.

The way I try to work out those issues is to write the math by hand and make sure that I understand what's happening. I've often found that I did some operation in the wrong order, or assumed some convention that I then didn't follow. It also lets you compare the result of the code with a result that you expect.

Here is an example of what I'm talking about using your example, but note that I don't like how at the end it didn't seem right and I had to change p' = m * p to p' = p * m. That's the kind of "error" that makes me uncomfortable and makes me redo/verify the conventions/order I use. I'm not doing that here since I don't want to spend a lot of time. What I would also do is compute and write every step on the point, instead of computing the matrix combination, like p1 = p * m1, p2 = p1 * s... to make sure it gives the same result as the matrix combination. It also lets you understand what every operation does to the points.

p = [2,3,1]
mouse = [1,1]

i =
| 1 0 0 |
| 0 1 0 |
| 0 0 1 |

m1 -mouse
|  1  0  0 |
|  0  1  0 |
| -x -y  1 |

m = i * m1
m = m1

s scale
|  s  0  0 |
|  0  s  0 |
|  0  0  1 |

m = m * s

   s,    0, 0
   0,    s, 0
-x*s, -y*s, 1

m2 mouse
| 1 0 0 |
| 0 1 0 |
| x y 1 |

m = m * m2

     s,      0, 0
     0,      s, 0
-x*s+x, -y*s+y, 1

c -camera

|   1   0 0 |
|   0   1 0 |
| -cx -cy 1 |

m = m * c

        s,         0, 0
        0,         s, 0
-x*s+x-cx, -y*s+y-cy, 1

ts tile

| tx  0 0 |
|  0 ty 0 |
|  0  0 1 |

m = m * ts

          s*tx,              0, 0
             0,           s*ty, 0
(-x*s+x-cx)*tx, (-y*s+y-cy)*ty, 1

w window

|     1     0 0 |
|     0     1 0 |
| -wx/2 -wy/2 1 |

m = m * w

                    s*tx,                        0, 0
                       0,                     s*ty, 0
((-x*s)+x-cx)*tx+(-wx/2), ((-y*s)+y-cy)*ty+(-wy/2), 1

p' = i * m1 * s * m2 * c * ts * w * p
p' = m * p
p = [2,3,1]

s*tx*2
s*ty*3
((-x*s)+x-cx)*tx+(-wx/2) * 2 + ((-y*s)+y-cy)*ty+(-wy/2) * 3 + 1 => doesn't seem right

p' = p * i * m1 * s * m2 * c * ts * w
p' = p * m
p = [2,3,1]

2 * (s*tx) + ((-x*s)+x-cx)*tx+(-wx/2)
3 * (s*ty) + ((-y*s)+y-cy)*ty+(-wy/2)
1

2 * (2*10) + ((-1*2)+1-(-4))*10+(-400/2) = -130
3 * (2*10) + ((-1*2)+1-(-5))*10+(-200/2) = 0
1

Is this correct? I don't know.
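One quick way to check the numbers is with CGAffineTransform, which also uses the p' = p * m row-vector convention. Using the same example values assumed above (s = 2, tile = 10, camera = (-4, -5), window = 400x200, mouse = (1, 1)):

import CoreGraphics

let mouse = CGPoint(x: 1, y: 1)
let cam = CGPoint(x: -4, y: -5)
let s: CGFloat = 2
let tile: CGFloat = 10
let window = CGSize(width: 400, height: 200)

// m = m1 * s * m2 * c * ts * w, with the leftmost transform applied to the point first.
var m = CGAffineTransform(translationX: -mouse.x, y: -mouse.y)              // m1
m = m.concatenating(CGAffineTransform(scaleX: s, y: s))                     // s
m = m.concatenating(CGAffineTransform(translationX: mouse.x, y: mouse.y))   // m2
m = m.concatenating(CGAffineTransform(translationX: -cam.x, y: -cam.y))     // c
m = m.concatenating(CGAffineTransform(scaleX: tile, y: tile))               // ts
m = m.concatenating(CGAffineTransform(translationX: -window.width / 2,
                                      y: -window.height / 2))               // w

print(CGPoint(x: 2, y: 3).applying(m))  // (-130.0, 0.0), same as the result above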

Replying to panthalassadigital (#30282)

After testing the xform for a couple of different zoom levels, it really feels to me like the calculation in its current form is somehow trying to achieve 2 impossible constraints:

  1. Keeping the position under the cursor constant, independent of zoom level
  2. Making sure the camera is at (0,0) before the screen xform

You can see that in this test output here - the mouse cursor pos stays constant, but the camera position is not always (0,0). If I were to account for the scaling of the camera's position, then the mouse cursor's position wouldn't be constant:

----- zoom: 1 ------

cam = 
Mf*( 1,  1) // Mf = translate origin to mouse pos
Zf*(-1, -1) // Zf = zoom scale
Mb*(-1, -1) // Mb = translate origin to world origin
Cf*( 1,  1) // Cf = translate origin to camera pos
= (0, 0)

mouse = 
Mf*( 2,  2)
Zf*( 0,  0)
Mb*( 0,  0)
Cf*( 2,  2)
= (1, 1)

----- zoom: 2 ------

cam = 
Mf*( 1,  1)
Zf*(-1, -1)
Mb*(-2, -2)
Cf*( 0,  0)
= (-1, -1)

mouse = 
Mf*( 2,  2)
Zf*( 0,  0)
Mb*( 0,  0)
Cf*( 2,  2)
= (1, 1)

----- zoom: 3 ------

cam = 
Mf*( 1,  1)
Zf*(-1, -1)
Mb*(-3, -3)
Cf*(-1, -1)
= (-2, -2)

mouse = 
Mf*( 2,  2)
Zf*( 0,  0)
Mb*( 0,  0)
Cf*( 2,  2)
= (1, 1)

I'm absolutely baffled by this :/


Replying to mrmixer (#30283)

Yeah, it seems that when we scale around the mouse position, we need to change the camera position if we want to keep the mouse at the same position. Something like scaling the camera position by 1/zoom.

camera_position = (camera_position - mouse_position)*(1/zoom) + mouse_position;
// Now do the transform as before.

If that works, maybe there is a way to have that in the matrices directly. Maybe you could ask on the handmade discord. If you find out, I'd be interested to know.


Edited by Simon Anciaux on
Replying to panthalassadigital (#30285)

This stayed in my mind, so I tried to figure it out again (I already did when the thread was created). After fixing some issues, I did figure it out. And after reading the thread again, the code is just what mmozeiko shared in their gist. Still, it was nice to figure it out again now that I'm a bit more familiar with matrices.

To sum it up, we need to transform the camera position first, moving it to a location such that after the zoom, the zoom origin is still at the same place. Then you can do a regular transform to render. Those need to be two separate operations.

The issues I ran into:

  • The scale to use to compute the camera position needs to be the change from the last zoom value, not the full zoom value. And inverted, so previous_zoom/zoom.
  • I computed the mouse position in world space wrong. I subtracted the camera position instead of adding it, and due to other errors, it gave a nearly correct result.
  • The error that made it almost work was that at some point it looked like the zoom origin was the opposite of what I expected, so I tried translate(mouse), scale(zoom), translate(-mouse), and it seemed to work, but I knew there was something wrong.
  • I used the new zoom value to compute the mouse world space position instead of the previous value when the zoom changed, which made the value incorrect but looked correct some of the time, which was confusing.

Just to have another example, here is the code I wrote.

#include "../lib/common.h"
#include "../lib/window.h"
#include "../lib/gl.h"
#include "../lib/matrix.h"

int main( int argc, char** argv ) {
    
    window_t window = { 0 };
    u32 window_error = 0;
    f32 width = 800;
    f32 height = 400;
    f32 zoom = 1;
    vec4 camera = v4( 0, 0, 0, 1 );
    window_create( &window, string_l( "Zoom to cursor" ), cast( u32, width ), cast( u32, height ), s32_max, s32_max, 0, &window_error );
    
    gl_t gl = gl_make( &window, gl_flag_alpha_blending, &g_gl_error );
    gl_batch_t* lines = gl_add_line( &gl, &g_gl_error );
    
    matrix_t final;
    matrix_identity_4( final.e );
    lines->matrix = final.e;
    
    while ( window.running ) {
        
        window_handle_messages( &window, whm_none );
        
        if ( window.close_requested ) {
            window.running = false;
        }
        
        if ( window.resized ) {
            gl_resize_frame_buffer( &gl, &g_gl_error );
            width = cast( f32, window.width );
            height = cast( f32, window.height );
        }
        
        gl_frame_start( &gl, &g_gl_error );
        
        gl_clear( &gl, black );
        
        f32 previous_zoom = zoom;
        
        if ( window.scroll_y ) {
            zoom += math_sign( window.scroll_y ) * 0.1f;
        }
        
        if ( input_key_just_pressed_or_repeated( &window, vk_add ) ) {
            zoom += 0.1f;
        } else if ( input_key_just_pressed_or_repeated( &window, vk_subtract ) ) {
            zoom -= 0.1f;
        }
        
        if ( zoom < 0.1f ) {
            zoom = 0.1f;
        }
        
        f32 w_width = 10.0f * ( 1.0f / previous_zoom);
        f32 w_height = w_width * ( height / width );
        
        debug_l( "world: " );
        debug_f32( w_width, false );
        debug_l( ", " );
        debug_f32( w_height, true );
        
        vec4 mouse = v4_s32( window.mouse_x, window.mouse_y, 0, 1 );
        mouse.x /= width;
        mouse.x -= 0.5f;
        mouse.y /= height;
        mouse.y -= 0.5f;
        mouse = v4_hadamard( mouse, v4( w_width, w_height, 0, 1 ) );
        mouse = v4_add( mouse, v4( camera.x, camera.y, 0, 0 ) );
        
        debug_l( "mouse_pos: " );
        debug_f32( mouse.x, false );
        debug_l( ", " );
        debug_f32( mouse.y, true );
        
        if ( input_key_just_pressed( &window, vk_space ) ) {
            debug_break( );
        }
        
        matrix_t m1, s, m2, out1, out2;
        
        matrix_translation_4( -mouse.x, -mouse.y, 0, m1.e );
        matrix_scale_4( previous_zoom/zoom, previous_zoom/zoom, 1, s.e );
        matrix_translation_4( mouse.x, mouse.y, 0, m2.e );
        matrix_mul_4( m1.e, s.e, out1.e ); 
        matrix_mul_4( out1.e, m2.e, out2.e );
        
        camera = matrix_mul_vec_4( camera.e, out2.e );
        
        matrix_t ct, cs, cm;
        matrix_translation_4( -camera.x, -camera.y, 0, ct.e );
        matrix_scale_4( zoom, zoom, 1, cs.e );
        matrix_orthographic_x_4( width, height, 10, 0.01f, 10.0f, cm.e );
        matrix_mul_4( ct.e, cs.e, out1.e );
        matrix_mul_4( out1.e, cm.e, final.e );
        
        memory_free( &lines->uniform );
        memory_push_copy_p( &lines->uniform, final.e, sizeof( final ), 4 ); 
        
        gl_vertex_p2_c4_t vertices[ 2 ] = { 0 };
        vertices[ 0 ].color = white;
        vertices[ 1 ].color = white;
        
        vertices[ 0 ].position = v2( -2.0f, -2.0f );
        vertices[ 1 ].position = v2( -2.0f, 2.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( -1.0f, -2.0f );
        vertices[ 1 ].position = v2( -1.0f, 2.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( -0.0f, -2.0f );
        vertices[ 1 ].position = v2( -0.0f, 2.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( 1.0f, -2.0f );
        vertices[ 1 ].position = v2( 1.0f, 2.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( 2.0f, -2.0f );
        vertices[ 1 ].position = v2( 2.0f, 2.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        
        vertices[ 0 ].position = v2( -2.0f, -2.0f );
        vertices[ 1 ].position = v2( 2.0f, -2.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( -2.0f, -1.0f );
        vertices[ 1 ].position = v2( 2.0f, -1.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( -2.0f, 0.0f );
        vertices[ 1 ].position = v2( 2.0f, 0.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( -2.0f, 1.0f );
        vertices[ 1 ].position = v2( 2.0f, 1.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        vertices[ 0 ].position = v2( -2.0f, 2.0f );
        vertices[ 1 ].position = v2( 2.0f, 2.0f );
        gl_batch_lines( lines, vertices->e, 2, white );
        
        gl_render_batches( &gl, &g_gl_error );
        
        window_set_cursor( &window, window.platform.cursor_arrow );
        
        gl_swap_buffers( &gl );
    }
    
    gl_cleanup( &gl, &g_gl_error );
    
    log_to_file( g_log_filename );
    
    return 0;
}