When to process images in linear versus gamma space?

Hey all. I'm curious if, unless specified otherwise, all image processing algorithms should be performed in linear color space.

I can verify visually what color space a Gaussian blur should be done in: blurring an image with one half pure green and one half pure blue will yield some cyan between the colors when done in linear space. The same blur done in sRGB space yields some colors that don't make as much sense. Some algorithms, like colorimetric conversion to grayscale, even specify what color space images should be processed in.
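That green/blue check can be reproduced numerically. Below is a minimal sketch of the idea, assuming NumPy and the piecewise sRGB transfer function from IEC 61966-2-1: averaging the two boundary pixels directly in sRGB gives a dark value, while decoding to linear light first, averaging, and re-encoding gives the brighter cyan you'd expect.

```python
import numpy as np

def srgb_to_linear(c):
    """Piecewise sRGB decode (IEC 61966-2-1)."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    """Inverse of the above: linear light back to sRGB encoding."""
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1 / 2.4) - 0.055)

# A one-pixel "blur" at the green/blue boundary: just average the two pixels.
green = np.array([0.0, 1.0, 0.0])
blue  = np.array([0.0, 0.0, 1.0])

# Averaging the sRGB-encoded values directly: a dark, muddy cyan.
naive = (green + blue) / 2

# Averaging in linear light, then re-encoding: a brighter cyan.
correct = linear_to_srgb((srgb_to_linear(green) + srgb_to_linear(blue)) / 2)
```

Here `correct[1]` comes out around 0.74 versus 0.5 for the naive average, which matches the visible difference at the seam of the blurred image.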

But what about something like a Sobel filter? Or an unsharp mask? With operations like these it's hard for me to discern visually whether I did the math in the correct color space. The best I can do is look at how GIMP or ImageMagick do it, and I can't find consistent trends there. Does anyone know if I should just assume that my math should always be done in linear color space? Or is there a rule of thumb I could adopt to work this sort of thing out intuitively for different algorithms, sort of like the visual check for the Gaussian blur?

Any help is appreciated!

Edited by twelvefifteen on Reason: Initial post
I'm not sure, but I think computation should always be done in linear space. In my mental model, in linear space you have the "real" values, while in gamma space you've got those values mapped to look good on a given display.

I'm thinking that if you have an image that was exported to two different gamma spaces, and you want to apply a filter to it, the only way to get the same result from both images would be to first go back to linear, apply the filter on the same values, and then go back to the desired gamma space.

       gamma space a                        gamma space a
      /             \                      /
source               linear space -> filter
      \             /                      \
       gamma space b                        gamma space b
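The diagram above can be sketched in code. This is a toy illustration under simplifying assumptions (a pure power-law gamma with made-up exponents 2.2 and 1.8, and a trivial two-tap filter), not a real pipeline: the same source encoded two different ways should agree once both filtered results are decoded back to linear light.

```python
import numpy as np

def apply_in_linear(img, filt, gamma):
    """Decode a gamma-encoded image, run the filter in linear
    light, then re-encode with the same gamma."""
    linear = img ** gamma          # simple power-law decode
    return filt(linear) ** (1 / gamma)

rng = np.random.default_rng(0)
source_linear = rng.random((8, 8))
box = lambda a: (a + np.roll(a, 1, axis=0)) / 2   # toy 2-tap filter

out_a = apply_in_linear(source_linear ** (1 / 2.2), box, 2.2)  # "gamma space a"
out_b = apply_in_linear(source_linear ** (1 / 1.8), box, 1.8)  # "gamma space b"

# Decoded back to linear, both paths give the same filtered image.
same = np.allclose(out_a ** 2.2, out_b ** 1.8)
```

Filtering the two encoded images directly (without the decode step) would not satisfy this check, which is the point of routing through linear space.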
You generally do all color math in linear space unless a technique explicitly says otherwise. The thing to watch out for is data that is already encoded linearly - for example, the alpha channel in an image is usually linear, and the same goes for image files encoding non-color data like height maps, normal maps, specular maps, etc.
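That caveat matters in practice: if you run a decode over every channel of an RGBA image, you corrupt the alpha. A minimal sketch of the idea, assuming NumPy, floats in [0, 1], and the standard piecewise sRGB decode applied to the color channels only:

```python
import numpy as np

def decode_srgb_rgba(img):
    """Decode only the color channels of an RGBA image; the alpha
    channel is typically stored linearly and must be left alone."""
    out = np.asarray(img, dtype=np.float64).copy()
    rgb = out[..., :3]
    out[..., :3] = np.where(rgb <= 0.04045,
                            rgb / 12.92,
                            ((rgb + 0.055) / 1.055) ** 2.4)
    return out
```

The same rule applies in reverse on the way out: re-encode the color channels and pass alpha (or heights, normals, etc.) through untouched.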
Thank you guys! Your responses make a lot of sense. I just second-guess this stuff easily when I see big codebases avoid doing math in linear space. Leaves my thoughts all jumbled.

I appreciate the help!