Ryan
Valid Conclusions to Make from VM Testing
Say I have an OpenGL and SDL2 program that I have developed on Windows and wish to release. The program passes all unit tests and gameplay tests on the Windows host and on separate OS X and Ubuntu virtual machines. Given this, is it valid and acceptable to state (for the release version) that the program will work on Windows, OS X, and Ubuntu, even though I haven't actually tested them all on physical machines?

More generally, what results obtained from VM testing/usage:
  1. Are valid to assume will hold true for a physical machine?
  2. Are not valid to assume will hold true for a physical machine?


Mārtiņš Možeiko
No to both questions. There are a million different ways in which a user's environment can differ (and most likely does differ) from yours.

Some obvious limitations: the amount of available memory can be different. The CPU count can be different (you may be expecting N cores while the user has fewer). Race conditions: the user's HDD is slower than your HDD/SSD, so assets load more slowly. If your code doesn't account for this, expect crashes or weird glitches.

The user's computer can have the wrong GPU driver installed, or no GPU driver at all (no OpenGL context, for example). It may have an old driver with a bug in one specific feature you are using (or in a combination of features). Most rendering engines carry workarounds for specific GPU driver versions (not only on Windows/GL). Here's an example of such bugs & workarounds: https://doc.magnum.graphics/magnum/opengl-workarounds.html
And I'm not even talking about different GL extensions you might be using.

The user's computer can have NVIDIA Optimus - have you tested against that? What if they have three GPUs installed - Intel, AMD, and NVIDIA? Four GPUs (two NVIDIA, two AMD)?

The user's computer can have no audio device installed - does your code handle that? What if they have three audio devices?

The user's computer can have strange anti-virus / "security software" installed that injects extra DLL files into all processes and conflicts with your code.

The user's computer can have strange settings applied - 300% font scaling, accessibility settings enabled, wide window borders. If you have hardcoded any values, things may look weird or render incorrectly.

The user's computer can have other software installed that forcefully resizes your window (a tiling window manager?) or interacts with your window differently than you expect.

The user's computer can have a different keyboard layout (a problem if you are using character codes instead of scancodes). The German layout has z and y swapped. French keyboards use AZERTY, not QWERTY.

The user's regional settings can be different from yours. This is usually not a problem, but in the Japanese locale the \ symbol used as a path delimiter is displayed as ¥. Other regions use a different decimal "point" - comma vs. dot (what if you are reading a text file produced by other software, or exporting a file for other software?).

etc...
Ryan
Thank you for that detailed answer.

Are there simple ways to test for all these things? I feel they apply to a lot of programs, so a general approach might exist.
Mārtiņš Možeiko
Unfortunately there is no simple way. All you can do is test the most common things and then patch whatever breaks afterwards.