I recently changed graphics cards to an AMD RX 580, which is quite old but a bit better than the NVidia 1050 I had. I'm on Windows 10.
Since that change, I've noticed an issue with my OpenGL applications: when I'm in fullscreen and alt-tab to another application, I get a black flash (which looks like Windows or the graphics driver is switching modes). It's OK for a game where I don't alt-tab often, but I have the same issue with 4coder, where I alt-tab often, and it got pretty annoying.
After searching on the web I found somewhere that using a DXGI swap chain could solve that issue. Supposedly, enabling OpenGL triple buffering in the AMD driver should make OpenGL applications use a DXGI swap chain, but I don't see any change in my case.
I then found this GitHub repository: https://github.com/pmttavara/OpenGL-on-DXGI1.3
The sample works, but it creates several copies of different buffers and has a somewhat involved setup.
The page contained a gist from mmozeiko: https://gist.github.com/mmozeiko/c99f9891ce723234854f0919bfd88eae#file-dxgl_flip-c
It seemed simple enough, but after trying it, it doesn't work when I resize the window, and when I added IDXGIDebug_ReportLiveObjects( dxgi_debug, DXGI_DEBUG_ALL, DXGI_DEBUG_RLO_ALL ); it reported a lot of unreleased texture references (in the debugger's output window).
So I tried to change the code to fix the reference leak and the error. The leak was fixed by not creating the render target view each frame, and instead only creating it when the window is resized (although it's probably a symptom of the other issue).
But the error still happens when I resize the window. The error message is:
DXGI ERROR: IDXGISwapChain::ResizeBuffers: Swapchain cannot be resized unless all outstanding buffer references have been released.
When I call IDXGIDebug_ReportLiveObjects, it shows that there are 3 buffers still alive at the time I call ResizeBuffers, and no matter what I tried over the past few days, I couldn't find a solution.
My understanding is that there are 2 buffers for color (back and front buffer?) and 1 for the depth stencil. The 2 color buffer references "appear" when I call SwapChain_GetBuffer, and the depth stencil is created manually.
At some point I stopped using the depth stencil to make it simpler. With that out, I managed to make it work, but I think what I do is a bad idea: I release the color buffer twice. That works, and I was able to confirm that I had no black flash when alt-tabbing in fullscreen. The weird thing is that when I release it only once, the reference count in the report doesn't change.
Also, it doesn't work when the depth stencil is present, and I'm not comfortable releasing the color buffer twice.
Here is the code: https://sisyphe.be/donotdelete/dxgl.zip
It's a modification of mmozeiko's gist. At the top of the file there are 2 defines: one to enable the depth stencil buffer, and one to enable the hack of releasing the buffer twice (which is enabled at the moment).
So my questions are:
1. If anyone has a simple fix for the black flash when alt-tabbing out of fullscreen OpenGL on AMD cards, I'd like to know.
2. Does anyone know how to properly release the buffers before resizing the swap chain?
3. My goal is only to use DXGI to present what OpenGL has rendered. I use glClipControl to flip the OpenGL viewport coordinates so that when DirectX displays the buffer it's in the correct orientation. Is there an easier way to use DXGI to display OpenGL's output? Or some way to control the OpenGL swap chain?
4. I want to have a depth buffer in OpenGL; do I need to create the depth stencil in DirectX? Or could I just have it as an OpenGL renderbuffer (I haven't looked into that much yet)? So the OpenGL framebuffer would use the color buffer from DirectX but a depth stencil buffer from OpenGL?
One of the reasons I would like it to be simple is that I would like to add it to 4coder, and I don't want to complicate a code base shared by others.
I feel like solving the fullscreen alt-tab issue with DXGI swap chain integration in GL is really the wrong way to go. I would strongly suggest against it. That integration is really meant for things that require rendering across GPU APIs.
As long as the application just maximizes the window without title/borders (removing the overlapped style), there should be no display mode switch. Just don't use the ChangeDisplaySettings function - that is the bad one, which people often used with GL. But I don't know what 4coder uses. It had some bad GL code usage there in the past, and I have no idea if all of that is fixed.
Releasing an object twice is not a good way to do this, because it means somebody else is keeping a reference and will eventually call Release, most likely crashing everything. Try to find out which call actually adds that reference - check the count after different API calls; maybe that will help you understand who is keeping it.
Another alternative I would strongly suggest is to change GL to D3D11 rendering in 4coder. I expect the rendering functionality is fairly isolated, so it should not be a big deal. That would be way more stable and reliable than this DXGI/GL integration.
Oh, and to answer the question about the depth buffer - you don't need to share it with D3D if you're not doing any rendering in D3D. You can keep it fully in GL.
Thanks for the reply.
I feel like solving the fullscreen alt-tab issue with DXGI swap chain integration in GL is really the wrong way to go.
Yeah, it felt wrong, but it was the only thing I found that solved the issue.
As long as the application just maximizes the window without title/borders (removing the overlapped style), there should be no display mode switch. Just don't use the ChangeDisplaySettings function - that is the bad one, which people often used with GL. But I don't know what 4coder uses. It had some bad GL code usage there in the past, and I have no idea if all of that is fixed.
I don't use ChangeDisplaySettings or anything special.
I used to just set the style to WS_POPUP to go fullscreen, but I observed some oddities when doing that and alt-tabbing: sometimes the fullscreen window would stay in front of the taskbar even if another non-fullscreen window had focus and was in front. So I switched to keeping the WS_OVERLAPPEDWINDOW style and handling WM_NCCALCSIZE to make the client rect cover the entire window (and handling WM_NCHITTEST too). I don't think that's what 4coder does, but I need to verify that.
I was sure it happened with any OpenGL application that went fullscreen, so I created the smallest OpenGL program with a fullscreen window (code below), and it doesn't happen. So there is something wrong with my code base, and I need to find out what. I should have started there.
Releasing an object twice is not a good way to do this, because it means somebody else is keeping a reference and will eventually call Release, most likely crashing everything. Try to find out which call actually adds that reference - check the count after different API calls; maybe that will help you understand who is keeping it.
I've spent a lot of time trying to figure out which part holds those references. The code is full of ReportLiveObjects calls. I couldn't figure it out; that's why I posted here.
When I call Release on the color and depth stencil buffers, the reference count goes down on the object (I know that because if I call it several times it will eventually say the count was zero), but the live object report will still say there are 3 buffers/textures in use. Two of those are created by SwapChain_GetBuffer. If I call Release on the buffer directly after that call, they are properly released (so it appears there is 1 reference which is mine and 1 internal to D3D?). But if I pass the buffer to the render target view, I can't release it: calling Release on the buffer doesn't decrease the number in the live object report, even if I release the render target view first. A similar thing happens with the depth stencil buffer, but with a count of 1.
If you have any ideas I'm interested (the zip in the post should contain everything needed to compile quickly).
Another alternative I would strongly suggest is to change GL to D3D11 rendering in 4coder. I expect the rendering functionality is fairly isolated, so it should not be a big deal.
I figured at some point I'll have to switch to D3D11 in my codebase. If I get comfortable enough I might try to do it for 4coder, and hopefully it won't cause problems for the other supported platforms.
/* cl fullscreen.c -nologo -Zi -Od gdi32.lib user32.lib opengl32.lib */
#include <windows.h>
#include <gl/GL.h>

#define Assert(cond) do { if (!(cond)) __debugbreak(); } while (0)

static LRESULT CALLBACK WindowProc(HWND window, UINT msg, WPARAM wparam, LPARAM lparam) {
    switch ( msg ) {
        case WM_DESTROY: {
            PostQuitMessage( 0 );
        } break;
    }
    return DefWindowProcW( window, msg, wparam, lparam );
}

INT WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, PSTR lpCmdLine, INT nCmdShow) {
    WNDCLASSW window_class = { 0 };
    window_class.lpfnWndProc = WindowProc;
    window_class.lpszClassName = L"Fullscreen";
    window_class.hCursor = LoadCursorW( 0, ( LPCWSTR ) IDC_ARROW );
    ATOM atom = RegisterClassW( &window_class );
    Assert( atom );

    HWND window = CreateWindowExW( 0, L"Fullscreen", L"Fullscreen", WS_POPUP,
                                   0, 0, 1920, 1080, 0, 0, 0, 0 );
    HDC device_context = GetDC( window );

    PIXELFORMATDESCRIPTOR pfd = {
        .nSize = sizeof(pfd),
        .nVersion = 1,
        .dwFlags = PFD_SUPPORT_OPENGL,
        .iPixelType = PFD_TYPE_RGBA,
        .iLayerType = PFD_MAIN_PLANE,
    };
    int format = ChoosePixelFormat( device_context, &pfd );
    Assert( format );
    DescribePixelFormat( device_context, format, sizeof(pfd), &pfd );
    BOOL pixel_format_set = SetPixelFormat( device_context, format, &pfd );
    Assert( pixel_format_set );

    HGLRC render_context = wglCreateContext( device_context );
    Assert( render_context );
    BOOL render_context_set = wglMakeCurrent( device_context, render_context );
    Assert( render_context_set );

    ShowWindow( window, SW_SHOW );

    BOOL running = 1;
    while ( running ) {
        MSG msg;
        while ( PeekMessageW( &msg, NULL, 0, 0, PM_REMOVE ) ) {
            if ( msg.message == WM_QUIT ) {
                running = 0;
                break;
            }
            TranslateMessage( &msg );
            DispatchMessageA( &msg );
        }
        glClearColor( 0.5f, 0.5f, 0.5f, 1.0f );
        glClear( GL_COLOR_BUFFER_BIT );
        SwapBuffers( device_context );
    }
    return 0;
}
I was sure it happened with any OpenGL application that went fullscreen, so I created the smallest OpenGL program with a fullscreen window (code below), and it doesn't happen. So there is something wrong with my code base, and I need to find out what. I should have started there.
And no. It worked, and I tested it several times while writing the previous post. So I started to clean up my code (removing all the DXGI test things from my code base) and my application didn't have the black flash when alt-tabbing in fullscreen (without any modification). Then I tried 4coder, and there the problem still persisted.
Then I went back to the simple test program I posted above, and now the black flash is there.