Howdy folks,
I'm building a little game and engine heavily influenced by Casey's Handmade Hero, but swapping out the software renderer for DirectX 11. FWIW, I'm completely new to graphics programming.
I was about to start working on displaying debug text and I was planning on using the stb library to prerasterize a font and then draw each character as a 2d texture across a quad. I think that will work fine for a little debug text here and there, but is there a better solution for the final game if I end up needing a lot of dynamic (e.g., user entered) text displayed?
I figure for any large known sentences I can create a single texture in advance. But I don't know what a good strategy is for text I don't know in advance other than many draw calls. If I have hundreds of extra draw calls, I'm guessing I'll be able to handle that OK, but it also feels a little wasteful. For example, is there maybe a Direct3D method of lining up a bunch of textures side-by-side on a quad for a single draw call?
(I'm aware that there is some kind of DirectWrite API, but I had read somewhere that it may have some quirks with DX11 and if I can just use Direct3D instead I think I'd prefer to do so.)
Thanks!
After thinking about it for a minute, maybe my real question is: is there a way to specify different textures for different sets of vertices in a single draw call?
For example, one approach may be that if I need to draw a bunch of letters I could:
Does that make sense? I think that would cut down on any draw calls for duplicate letters and reduce texture switching.
But is there instead a way to:
If the second option is even possible it seems a little more complicated to wrap my mind around, so maybe I'll just see how far that first option takes me. It seems like that could cut my draws down from potentially hundreds to maybe dozens which is probably enough.
Let me know if I'm talking nonsense.
The typical way to render text is to create a single texture with all the characters pre-rendered in it (multiple textures can be used if there are a lot of characters or if you want to use different alphabets) and use UV coordinates to select which part of the texture to render.
UV coordinates are coordinates into the texture that generally go from 0 to 1 on U (the horizontal dimension of the texture) and V (the vertical dimension). For example, using UVs from (0.0, 0.0) (bottom left) to (0.25, 0.5) (top right), you would "map" the bottom-left corner of the texture onto a rectangle that is a quarter of the texture's width and half its height.
You pass one UV coordinate for each vertex, and the graphics card will map the bound texture to the triangles.
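To make that concrete, here's a rough sketch (names made up, nothing D3D-specific) of turning a glyph's pixel rectangle in the atlas into four vertices with UVs:

    struct TextVertex { float x, y; float u, v; };

    // Sketch: build one glyph quad. screenX/screenY/w/h place it on screen;
    // atlasX/atlasY/atlasW/atlasH are the glyph's pixel rectangle in the atlas.
    void BuildGlyphQuad(TextVertex v[4],
                        float screenX, float screenY, float w, float h,
                        float atlasX, float atlasY, float atlasW, float atlasH,
                        float atlasTexW, float atlasTexH)
    {
        float u0 = atlasX / atlasTexW;
        float v0 = atlasY / atlasTexH;
        float u1 = (atlasX + atlasW) / atlasTexW;
        float v1 = (atlasY + atlasH) / atlasTexH;

        // Which corner counts as "top" vs "bottom" depends on your coordinate conventions.
        v[0] = { screenX,     screenY,     u0, v0 };
        v[1] = { screenX + w, screenY,     u1, v0 };
        v[2] = { screenX,     screenY + h, u0, v1 };
        v[3] = { screenX + w, screenY + h, u1, v1 };
    }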
You can create your vertex buffer (with UVs) and index buffer and draw all the text in a single draw call (using a primitive restart index; I don't know if DirectX uses that term, but you can look at the last paragraph here).
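If you go with a plain triangle list instead of strips with a restart index, the index buffer is just the same six-index pattern repeated per quad. A sketch (assuming 4 vertices per glyph; winding to taste):

    #include <stdint.h>

    // Sketch: 4 vertices and 6 indices per glyph quad, all drawn in one DrawIndexed call.
    void BuildTextIndices(uint32_t *indices, int glyphCount)
    {
        for (int i = 0; i < glyphCount; ++i)
        {
            uint32_t base = (uint32_t)(i * 4);
            uint32_t *tri = indices + i * 6;
            tri[0] = base + 0; tri[1] = base + 1; tri[2] = base + 2;
            tri[3] = base + 2; tri[4] = base + 1; tri[5] = base + 3;
        }
    }

    // ...and then something like: context->DrawIndexed(6 * glyphCount, 0, 0);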
If you prefer working in the pixel shader, another way is to store the characters' UV offsets in a small lookup texture (sampled without interpolation) and use that to fetch glyphs from the atlas. Use a monospace font and fill empty slots with spaces when generating the lookup data on the CPU. It costs slightly more pixel processing on the GPU since empty pixels still get filled, but you can keep a tight bound on the quad and store the resulting text image for the next render unless it needs to update with new text.
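As a sketch of that caching idea in D3D11 (illustrative names, error handling omitted): create a texture that is both a render target and a shader resource, render the string into it once, then just sample it on later frames.

    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = textWidth;    // size of the cached text image
    desc.Height           = textHeight;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D          *textTexture = 0;
    ID3D11RenderTargetView   *textRTV     = 0;
    ID3D11ShaderResourceView *textSRV     = 0;
    device->CreateTexture2D(&desc, NULL, &textTexture);
    device->CreateRenderTargetView(textTexture, NULL, &textRTV);
    device->CreateShaderResourceView(textTexture, NULL, &textSRV);

    // When the text changes: bind textRTV and draw the glyphs into it once.
    // Every frame after that: bind textSRV and draw a single quad with the cached image.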
This makes sense! Especially the single texture with all the characters in it, and UV mapping to quads.
Thanks!!
An alternative to a vertex/index buffer is to use a StructuredBuffer (SSBO in OpenGL). Have something like:
    struct Glyph
    {
        int2 pos;    // position on screen
        int2 size;   // size of glyph in texture
        int2 offset; // offset of glyph in texture (maybe int3 if you need to use array texture)
        // int color; // rgba color + other info if needed
    };
Then you fill the buffer with these structures, only one per glyph - so you need only 6*4 bytes per glyph, which is much smaller than any vertex/index buffer approach.
Then you issue a draw of 6 * glyph-count vertices without any InputLayout bound. In the vertex shader you use SV_VertexID to map the vertex index to an index into this buffer - fetch pos/size/offset and calculate the proper vertex coordinates & UV values to pass to the fragment shader.
This may not be much of an improvement over the simple vertex/index buffer approach, because usually there aren't enough glyphs drawn for this to be a bottleneck. But it's a good technique to know how to use; it is often useful for other purposes.
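For completeness, the CPU-side D3D11 setup for this might look roughly like the following sketch (device/context/maxGlyphs/glyphCount are assumed names; the layout mirrors the Glyph struct above):

    struct Glyph { int pos[2]; int size[2]; int offset[2]; };

    // Sketch: per-glyph structured buffer bound as a shader resource, no vertex/index buffers.
    D3D11_BUFFER_DESC bd = {};
    bd.ByteWidth           = sizeof(Glyph) * maxGlyphs;
    bd.Usage               = D3D11_USAGE_DYNAMIC;
    bd.BindFlags           = D3D11_BIND_SHADER_RESOURCE;
    bd.CPUAccessFlags      = D3D11_CPU_ACCESS_WRITE;
    bd.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
    bd.StructureByteStride = sizeof(Glyph);

    ID3D11Buffer *glyphBuffer = 0;
    device->CreateBuffer(&bd, NULL, &glyphBuffer);

    D3D11_SHADER_RESOURCE_VIEW_DESC srvd = {};
    srvd.Format              = DXGI_FORMAT_UNKNOWN;   // required for structured buffers
    srvd.ViewDimension       = D3D11_SRV_DIMENSION_BUFFER;
    srvd.Buffer.FirstElement = 0;
    srvd.Buffer.NumElements  = maxGlyphs;

    ID3D11ShaderResourceView *glyphSRV = 0;
    device->CreateShaderResourceView(glyphBuffer, &srvd, &glyphSRV);

    // Each frame: Map/Unmap glyphBuffer with the visible glyphs, then:
    context->IASetInputLayout(NULL);                  // no per-vertex data needed
    context->VSSetShaderResources(0, 1, &glyphSRV);
    context->Draw(6 * glyphCount, 0);                 // SV_VertexID does the rest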
Thank you for the suggestion! I had not heard of a structured buffer, so I may give that approach a try if only to figure it out.
Not sure if this adds anything beyond what's been suggested - it's a (very basic) text editor that I've been working on, which uses D3D11 to render the text. https://github.com/Olster1/woodland
It renders a texture atlas - 256 characters per atlas - and creates a new atlas if the user inputs a character that isn't in an atlas yet. Then it renders this using a pixel shader.
Quick update.
First, thanks for the reference app, @OliverMarsh! I haven't reviewed it yet (probably should), but I appreciate you sharing it.
Second, thanks for the suggestion of using a structured buffer, @mmozeiko! Although not expressly necessary, working through that helped me understand a bit more about how the gpu pipeline works.
I used the stb truetype library to pull bitmaps for the codepoints I wanted, and it worked great. It looked like the stb library had some kind of PackBegin/PackEnd stuff to do the texture packing, but I couldn't figure out how to make that work. So I just loaded bitmaps for all the codepoints and stitched together my own glyph atlas. Worked great!
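For reference, the pack API I was poking at looks roughly like this - a sketch pieced together from the stb_truetype.h declarations, so take it with a grain of salt; ttfFileData here is the raw bytes of the .ttf file, and the sizes/ranges are arbitrary:

    #define STB_TRUETYPE_IMPLEMENTATION
    #include "stb_truetype.h"

    // Sketch: pack ASCII 32..126 of one font into a single-channel 512x512 atlas.
    void PackFontAtlas(const unsigned char *ttfFileData,
                       unsigned char atlasPixels[512 * 512],
                       stbtt_packedchar packedChars[95])
    {
        stbtt_pack_context packContext;
        stbtt_PackBegin(&packContext, atlasPixels, 512, 512, 0 /*stride*/, 1 /*padding*/, NULL);
        stbtt_PackFontRange(&packContext, ttfFileData, 0 /*font index*/, 32.0f /*pixel height*/,
                            32 /*first codepoint*/, 95 /*count*/, packedChars);
        stbtt_PackEnd(&packContext);
    }

    // Later, per character, stbtt_GetPackedQuad returns screen coords and UVs for that glyph's quad:
    //   float x = 0, y = 0;
    //   stbtt_aligned_quad q;
    //   stbtt_GetPackedQuad(packedChars, 512, 512, 'A' - 32, &x, &y, &q, 0);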
Here's my goofy test scene. One draw call for all the text!
It was suggested to use just pos, size, and offset, but I couldn't think of a way to get my desired results without pos, size, and two sets of UV values (two extra floats, basically). The math in my vertex shader is probably goofy, too: I assumed I wanted to avoid conditionals in my shaders, so I came up with some convoluted math to get the right values. The vertex shader ended up looking like this:
    vs_out vs_main(vs_in input)
    {
        float light = 1.0f;
        vs_out output;

        // NOTE: triangle order should be 2, 0, 1, 2, 1, 3
        //       matching vertices of 0, 1, 2, 3, 4, 5
        int vertRem = input.vertId % 6;
        int xCoeff = clamp(vertRem - 1, 0, 1) - clamp(vertRem - 2, 0, 1) + clamp(vertRem - 3, 0, 1);
        int xCoeffFlip = (xCoeff - 1) * -1;

        int vertRemY = (input.vertId + 5) % 6;
        int yCoeff = clamp(vertRemY - 1, 0, 1) - clamp(vertRemY - 2, 0, 1) + clamp(vertRemY - 3, 0, 1);
        int yCoeffFlip = (yCoeff - 1) * -1;

        glyph_quad gq = glyphs[input.vertId / 6];

        float x = gq.position[0] + (gq.size[0] * xCoeff);
        float y = gq.position[1] + (gq.size[1] * yCoeffFlip);
        float4 pos = float4(x, y, 30.0f, 1.0f);

        matrix mvp = mul(projectionMatrix, mul(viewMatrix, objMatrix));
        output.position = mul(mvp, pos);

        // NOTE: texcoords are populated in order of u1, v1, u2, v2
        float u = gq.texcoords[0] * xCoeffFlip + gq.texcoords[2] * xCoeff;
        float v = gq.texcoords[1] * yCoeffFlip + gq.texcoords[3] * yCoeff;
        output.texcoord = float2(u, v);

        output.color = gq.color * light;
        return output;
    }
I'm not sure if that kind of approach is typical in shaders or not. But it seems to work! Plenty of room for improvement with what I've got, but I'm very happy with how it all turned out. Thanks for the help!!
You don't need the two UVs, because you can get them from size & offset (where the glyph is located in the atlas texture).
    Texture2D<float> texture; // your atlas texture

    float xc = ...; // calculate 0 or 1 for left or right vertex
    float yc = ...; // calculate 0 or 1 for bottom or top vertex
    float2 coord = float2(xc, yc);

    float2 texSize;
    texture.GetDimensions(texSize.x, texSize.y);

    float2 uv = (offset + size * coord) / texSize;
    // done, output uv
    float2 texSize;
    texture.GetDimensions(texSize.x, texSize.y);
I did not know I could do that! Good stuff. Thanks!