Abhaya Uprety
8 posts
Q: What is the workflow from creating the mesh of an object in a 3d tool to the appropriate application of scaling and translation to place it in a world view inside a game?

Here is how I think this would be done, but I'm pretty sure there is a more standard way of doing it:

The artist would start by modelling the mesh of an object in the 3D tool, but before exporting, they would make sure that all the vertices are centered about the origin and normalized to (0, 1) or (-1, 1).

On the programming side, the necessary scale and translation to place it in the world view would come from somewhere like a global variable or a config file. The scale and translation values would be suggested by the artist, and after some tweaking in the world view, the final values would be decided.

Having said that, I saw an example where a building was modeled with individual cylinders and cones, and the trees were modeled the same way using cones and cylinders of unit size, with the scaling and translation applied right in the source code. From what I can tell, this is not the most efficient way of doing this, for a variety of reasons; instead it would be better to model the entire scene in the 3D tool and export the whole scene, not just rudimentary cones and cylinders. My question is: how would you model a complex scene, say with houses, cars, people, perhaps lighting, etc.?

Also, to challenge my vertex normalization assumption, I have seen plenty of .obj files online where the vertices weren't normalized. Were they already scaled correctly in the 3D editing tool before exporting?

Basically, my question is about the standard and effective workflow from the creation of an object's mesh in a 3D tool to its placement in a world view inside a game.

P.S. I don't use game engines; all I have is a mesh loader library. I don't need skinning. I am mostly interested in joints for animation, some world around me to interact with, and some experimentation with camera movement.

Mārtiņš Možeiko
2559 posts / 2 projects

Afaik nobody really normalizes vertices when exporting a model from 3D software. They can have any values. Ideally the model is centered about the "origin" of the object, but even that's not necessary, as you can just move the object around to place it wherever you want in the scene.

If you can agree with the artist to use position values that mean something - meters, inches, etc. - then that's nice, but it doesn't really matter. When you place an object in the scene you typically maintain its local transform, which includes scaling, translation & rotation, so you can freely adjust it as needed.
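To make that concrete, here is a minimal sketch in plain C (the names and the column-major layout are my own, not from any particular engine) of such a local transform: the mesh keeps whatever coordinates the artist exported, and placement in the world is a model matrix built from translation, rotation and scale at draw time.

```c
#include <math.h>

typedef struct { float m[16]; } Mat4; // column-major: m[col*4 + row]

static Mat4 mat4_mul(Mat4 a, Mat4 b) {
    Mat4 r;
    for (int col = 0; col < 4; col++)
        for (int row = 0; row < 4; row++) {
            float s = 0.0f;
            for (int k = 0; k < 4; k++)
                s += a.m[k*4 + row] * b.m[col*4 + k];
            r.m[col*4 + row] = s;
        }
    return r;
}

static Mat4 mat4_translate(float x, float y, float z) {
    Mat4 r = {{1,0,0,0,  0,1,0,0,  0,0,1,0,  x,y,z,1}};
    return r;
}

static Mat4 mat4_rotate_y(float radians) {
    float c = cosf(radians), s = sinf(radians);
    Mat4 r = {{c,0,-s,0,  0,1,0,0,  s,0,c,0,  0,0,0,1}};
    return r;
}

static Mat4 mat4_scale(float s) {
    Mat4 r = {{s,0,0,0,  0,s,0,0,  0,0,s,0,  0,0,0,1}};
    return r;
}

// One placed object: scale first, then rotate, then translate.
static Mat4 object_world_transform(float x, float y, float z,
                                   float yaw, float scale) {
    return mat4_mul(mat4_translate(x, y, z),
                    mat4_mul(mat4_rotate_y(yaw), mat4_scale(scale)));
}
```

With something like this, the scale and translation values the OP mentions (from a config file or wherever) simply become the arguments to object_world_transform, and tweaking them never requires touching the mesh data.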

Simon Anciaux
1337 posts

My experience is that when modeling, you generally set up what a unit in the modeling software means. For example, 1 unit means 1 meter. Then you try to put the model's origin somewhere sensible; for a character, for example, the origin should be between their feet. That way, when you import the model into the game, it will be displayed with a valid size, position and orientation using an identity matrix. The artist making the model and the person creating the level aren't always the same person.

The file/scene in the modeling tool can contain several models (other real models, visual references...), so you generally export models separately (e.g. the walls and furniture of a house). Exporting separate pieces makes it easy to create several different things from the same parts, or to adjust things based on the gameplay, without needing to go back to the 3D tool and redo the export (which will be slow if the level designer and the modeler are not the same person).

While creating the models, they can be anywhere in the scene. For example, if you are modeling a house, you model the different pieces of furniture and place them in the house to make sure they look OK (in most 3D packages you can have object references, so you model in one scene and assemble a test scene on the side or in another file). But when you export them, you want to use the object origin, not the scene origin. Another reason to export the scene in parts is that it allows you to hide parts of it in the engine, to make it easier to navigate the scene.

In the engine, after you've created a level, you can batch models together to draw several with one draw call (not limited to one model, as long as they use the same shader/properties/textures). This can be done at runtime, offline, or both.

Dawoodoz
183 posts / 1 project

For canned animations, it can make sense to place the origin at the feet, but for skeletal animation it's also easy to recalculate the offset from a dynamic bounding box using the initial pose, and just have the map store the spawning point at the ground.
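A rough sketch of that bounding-box idea, in C with a hypothetical Vec3 type: scan the bind pose once at load time and derive the vertical offset from the lowest vertex.

```c
typedef struct { float x, y, z; } Vec3;

// Returns the offset to add to the spawn point's y so the model's
// lowest bind-pose vertex ends up exactly at ground level.
static float ground_offset_from_bind_pose(const Vec3 *positions, int count) {
    float min_y = positions[0].y;
    for (int i = 1; i < count; i++)
        if (positions[i].y < min_y)
            min_y = positions[i].y;
    return -min_y;
}
```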

Rigid dynamic objects are centered around the center of mass for realistic rotation.

Buildings should be made from reusable building blocks (this is how Assassin's Creed does it), so that a highly detailed model of a chimney or roof tiles can be repeated across many houses that all look unique but share the same style.

  • For highly detailed parts, it makes sense to render individual models, to reduce memory use and get finer control over occlusion and reduction of detail level at a distance.
  • For many low-detail parts, you should bake them together into blocks of 30x30 meters with depth-sorted triangles for each material, to reduce draw-call overhead. Your game can recalculate the combined geometry dynamically when static geometry is added or removed, falling back on instancing while the block's triangle count is being optimized in a background thread.

Normals have to be normalized per pixel to get correct specular lighting between vertices, but they should still all have the same length to avoid uneven interpolation weights. To save space, meshes that use only faceted or smooth normals can have them calculated from triangle positions when loading assets.
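For that last point, a minimal sketch (C, assuming an indexed triangle list) of computing smooth normals from triangle positions at load time: accumulate area-weighted face normals per vertex, then normalize once at the end. (The faceted case is simpler still: give each corner its triangle's face normal.)

```c
#include <math.h>

typedef struct { float x, y, z; } Vec3;

static Vec3 v3_sub(Vec3 a, Vec3 b) { return (Vec3){a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 v3_cross(Vec3 a, Vec3 b) {
    return (Vec3){a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}

static void compute_smooth_normals(const Vec3 *pos, Vec3 *normal, int vertex_count,
                                   const int *index, int index_count) {
    for (int i = 0; i < vertex_count; i++) normal[i] = (Vec3){0, 0, 0};
    for (int i = 0; i < index_count; i += 3) {
        int a = index[i], b = index[i+1], c = index[i+2];
        // The unnormalized cross product weights each face by its area.
        Vec3 n = v3_cross(v3_sub(pos[b], pos[a]), v3_sub(pos[c], pos[a]));
        normal[a].x += n.x; normal[a].y += n.y; normal[a].z += n.z;
        normal[b].x += n.x; normal[b].y += n.y; normal[b].z += n.z;
        normal[c].x += n.x; normal[c].y += n.y; normal[c].z += n.z;
    }
    // Normalize once, so every vertex normal has the same (unit) length.
    for (int i = 0; i < vertex_count; i++) {
        float len = sqrtf(normal[i].x*normal[i].x + normal[i].y*normal[i].y
                        + normal[i].z*normal[i].z);
        if (len > 0.0f) {
            normal[i].x /= len; normal[i].y /= len; normal[i].z /= len;
        }
    }
}
```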

Miles
131 posts / 4 projects

> From what I can tell, this is not the most efficient way of doing this, for a variety of reasons; instead it would be better to model the entire scene in the 3D tool and export the whole scene, not just rudimentary cones and cylinders. My question is: how would you model a complex scene, say with houses, cars, people, perhaps lighting, etc.?

Typically you don't export an entire scene from the 3D software. You generally export individual models and use the game's editor tools to place the models and lighting in the game world, and to attach them to things like physics bodies, pathfinding data, etc.

> In the engine, after you've created a level, you can batch models together to draw several with one draw call (not limited to one model, as long as they use the same shader/properties/textures).

Really? I was under the impression that instanced rendering is only possible with the same model and shader, but with different properties and (to a limited extent) different textures.

> Rigid dynamic objects are centered around the center of mass for realistic rotation.

Calculating the center of mass is something handled in code by a physics system, not by artists. And the center of mass is probably based on a simplified collider rather than the full-detail mesh anyway.

> For many low-detail parts, you should bake them together into blocks of 30x30 meters with depth-sorted triangles for each material, to reduce draw-call overhead. Your game can recalculate the combined geometry dynamically when static geometry is added or removed, falling back on instancing while the block's triangle count is being optimized in a background thread.

This kind of optimization is highly game-specific and completely irrelevant to someone just getting started with a 3D asset pipeline.

Simon Anciaux
1337 posts
Replying to notnullnotvoid (#25006)

notnullnotvoid:
> Really? I was under the impression that instanced rendering is only possible with the same model and shader, but with different properties and (to a limited extent) different textures.

I wasn't referring to geometry instancing, but to draw call batching (I had the Unity thing in mind, but I suspect there is something similar in Unreal).

Miles
131 posts / 4 projects
Replying to mrmixer (#25008)

Your link is broken; it's a link to reply to this thread instead of a link to the Unity documentation page I assume it was supposed to point to. Not sure if that's a mistake or the new site being buggy.

Simon Anciaux
1337 posts

Sorry, I must have mixed things up. Link fixed.

Mārtiņš Možeiko
2559 posts / 2 projects
Replying to mrmixer (#25016)

There's also extra information about the SRP Batcher: https://docs.unity3d.com/Manual/SRPBatcher.html

Miles
131 posts / 4 projects

It's surprising to me that transforming such large meshes on the CPU would ever be a win, but if Unity went to the trouble of implementing it, I guess it must have been a significant win for them at some point. I'm pretty sure I remember finding in Escher that generating (not even transforming) <50 verts per model on the CPU and uploading that buffer to the GPU was already slower than issuing a draw call per model. But that was single-threaded code and there were relatively minimal state changes between draw calls, so I suppose it makes sense that for Unity's renderer the tradeoff could be very different. That SRP batcher page seems to suggest that the number of state changes is indeed what primarily makes their default render path slow, rather than the draw calls themselves. But I don't use Unity so the engine-specific details go over my head.

Dawoodoz
183 posts / 1 project

Merging instances together into a huge vertex buffer is the low-hanging fruit that can be reused for most outdoor city games. Just check which low-detail items have their origins within the block's bound and add their triangles, transformed, into the combined model using a nested loop. It only has to be done when walking into a new area.
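A sketch of that nested loop, with made-up types (Mat4 column-major, as in the transform sketch earlier in the thread): every static instance whose origin falls inside the block's bound gets its vertices appended to one combined buffer, pre-transformed by the instance's world matrix.

```c
typedef struct { float x, y, z; } Vec3;
typedef struct { float m[16]; } Mat4; // column-major: m[col*4 + row]
typedef struct { Vec3 *positions; int vertex_count; } Mesh;
typedef struct { const Mesh *mesh; Mat4 world; Vec3 origin; } Instance;

static Vec3 mat4_transform_point(Mat4 m, Vec3 p) {
    Vec3 r = {
        m.m[0]*p.x + m.m[4]*p.y + m.m[8]*p.z  + m.m[12],
        m.m[1]*p.x + m.m[5]*p.y + m.m[9]*p.z  + m.m[13],
        m.m[2]*p.x + m.m[6]*p.y + m.m[10]*p.z + m.m[14],
    };
    return r;
}

// Append every instance whose origin lies inside the block's bound to one
// combined vertex buffer. Returns the number of vertices written.
static int merge_block(const Instance *instances, int instance_count,
                       Vec3 block_min, Vec3 block_max,
                       Vec3 *out, int out_capacity) {
    int count = 0;
    for (int i = 0; i < instance_count; i++) {
        Vec3 o = instances[i].origin;
        if (o.x < block_min.x || o.x >= block_max.x ||
            o.z < block_min.z || o.z >= block_max.z) continue;
        const Mesh *m = instances[i].mesh;
        for (int v = 0; v < m->vertex_count && count < out_capacity; v++)
            out[count++] = mat4_transform_point(instances[i].world,
                                                m->positions[v]);
    }
    return count;
}
```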

Miles
131 posts / 4 projects
Replying to Dawoodoz (#25036)

Right. So it's an optimization strategy which only applies to certain specific kinds of games. And it also conflicts or has complex interactions with many other common rendering optimizations. And you have to take care not to blow up memory usage by duplicating many copies of a mesh. And the heuristics for making good decisions about which objects to merge would depend on the specifics of the game and platforms you're optimizing for. And if done procedurally at runtime as you imply, it would likely be mostly separate from the asset pipeline anyway, which is what this thread is asking about. And never mind all the oddly specific details you originally gave about an exact grid size and shape, depth sorting, materials, and multithreading, which, if I had to guess, seem like they were either pulled from a single specific game or just made up whole cloth based on nothing.

Dawoodoz
183 posts / 1 project
Replying to notnullnotvoid (#25037)

OP did discuss exporting the whole complex level (including references to city objects) as a single model, which is specifically the type of scene where these techniques are useful for keeping memory usage down and visual quality up. Many beginners get frustrated when a single-model level cannot go past a low-detail Counter-Strike clone with monolithic levels. Exporting from the 3D modeling application grinds to a halt with quadratic complexity and maximum size limits as the level gets bigger. Might as well skip that painful experience with a few lines of code that can be reused in a game engine. Making a level requires a level editor (always specific to a type of game) so that you have more than just the visuals to load. Chimneys loaded as their own models can make placement of particle emitters trivial instead of tedious.

ratchetfreak
511 posts
Replying to Dawoodoz (#25044)

Also, if you have multiple copies of an object in the level, like several identical chimneys, it takes less memory to store only one chimney model plus the positions/rotations each copy needs to be transformed to.

This can save a lot of memory (both in the model file and in GPU memory) when applied to all the little props that detail a level, and it lets you apply LOD techniques so you don't render 1000 triangles of detail for 2 pixels of screen real estate.

Splitting the level into multiple parts also lets you do culling: only render the part of the level you are in, while the rest is blocked by scenery.
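As a sketch of the data layout the chimney example implies (made-up struct names, not from any particular engine): one copy of the mesh, plus an array of per-instance world transforms. With OpenGL, for example, this maps onto instanced rendering: upload the transforms as a per-instance vertex attribute (glVertexAttribDivisor) and draw all copies with a single glDrawElementsInstanced call.

```c
typedef struct { float x, y, z; } Vec3;
typedef struct { float m[16]; } Mat4; // column-major, as in the earlier sketches

// One shared mesh, many placements: only `transforms` grows with the
// number of copies, not the vertex data itself.
typedef struct {
    Vec3 *vertices;       // stored and uploaded to the GPU once
    int   vertex_count;
    Mat4 *transforms;     // one world matrix per placed copy
    int   instance_count;
} InstancedProp;
```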

Dawoodoz
183 posts / 1 project
Replying to ratchetfreak (#25045)

Exactly.

Highly detailed, and only a few of the same visible at a time: use individual draw calls to get more control over triangle count.

Highly detailed, but many similar objects close together within a shared culling bound: use hardware instancing, which allows randomizing the pattern and automatically following the ground's height map. Think a pile of leaves or empty beer cans.

Many low-detail items that can reuse the same pattern: merge them together into a model stored as a file. Grass can be designed with extra height below so it doesn't mind overlapping the ground, and individual straws can be placed manually for each patch of grass to prevent ugly overlaps.

Many different low detailed models of the same material that is specific to the level design: Merge into a model while loading the area for a 95% reduction in render time while keeping useful information about locations and functionality. Regions need to be small enough to have efficient culling yet large enough to reduce data transfer over PCI express. Can have a dynamically generated model for a combination of low detailed brick modules to allow seeing the city skyline kilometers away with only a few extra draw calls, triangles and pixels. An atlas texture can be used to have many images sharing the same draw call if the textures don't have to be looped over large polygons.