How does 3D modeling work in video games?


I know from a very basic standpoint that game devs create a model for a character/object, but how exactly do they keep consistency? Do they use that same exact model for all cutscenes, different angles, different depth distances etc? As in if a model of a character was a mile away, could you theoretically walk all the way up to that model and it would be the same perspective as if you were walking up to a real person/object? Or say for instance you had a camera shot from the foot of a character model looking up at them, is that the same model used for every other shot to keep consistency? Sorry if I’m making no sense here lmao this has just been bugging me


5 Answers

Anonymous 0 Comments

Adding to what's already here: the camera plays a large role with models as well. If you aren't seeing something on camera, then it isn't physically shown in the world. It's still there as code and data, but it isn't being rendered. The moment you look back, it pops back up again.
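
Here's a rough sketch of that idea in Python (the names and the 2D simplification are mine; real engines do this per frame with a proper 3D view frustum): objects outside the camera's view cone get skipped at draw time but still live in memory.

```python
import math

# Minimal sketch of view culling. Each object still exists in memory;
# we just skip drawing it when the camera can't see it.

def is_in_view(camera_pos, camera_dir, obj_pos, fov_degrees=90):
    """Return True if obj_pos falls inside the camera's view cone."""
    to_obj = (obj_pos[0] - camera_pos[0], obj_pos[1] - camera_pos[1])
    dist = math.hypot(*to_obj)
    if dist == 0:
        return True
    # Angle between the camera's facing direction and the object.
    cos_angle = (to_obj[0] * camera_dir[0] + to_obj[1] * camera_dir[1]) / dist
    return cos_angle >= math.cos(math.radians(fov_degrees / 2))

objects = [("tree", (10, 0)), ("rock", (-5, 3))]
camera_pos, camera_dir = (0, 0), (1, 0)  # camera at origin, facing +x

for name, pos in objects:
    if is_in_view(camera_pos, camera_dir, pos):
        print(f"render {name}")  # drawn this frame
    else:
        print(f"skip {name}")    # still in memory, just not rendered
```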

Anonymous 0 Comments

There are a few things they do to make this work.

Mainly, LOD (Level of Detail), meaning they make a nice high-detail model for close-up shots, then simplify that model a few times, reducing its complexity, and swap the versions in and out on the fly.
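
Roughly, the swap logic boils down to something like this (a Python sketch; the thresholds and file names are invented for illustration):

```python
# Distance-based LOD selection: keep several versions of one model
# and pick the cheapest one that still looks fine at the current distance.

LOD_LEVELS = [
    (20.0, "hero_model_high.mesh"),         # close up: full detail
    (60.0, "hero_model_medium.mesh"),       # mid range: simplified
    (float("inf"), "hero_model_low.mesh"),  # far away: very few polygons
]

def pick_lod(distance_to_camera):
    for max_distance, mesh in LOD_LEVELS:
        if distance_to_camera <= max_distance:
            return mesh

print(pick_lod(5.0))    # hero_model_high.mesh
print(pick_lod(45.0))   # hero_model_medium.mesh
print(pick_lod(500.0))  # hero_model_low.mesh
```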

Psyk60 pointed out you can sometimes see this in games.

Other tricks that aren’t really modelling, but semi-related, are normal and displacement maps. Normal maps affect how light reflects off the model, and they can use that to simulate higher detail than there actually is. Displacement maps can affect the geometry itself (maybe via DX12 tessellation, I’m absolutely guessing here) or just displace the pixels, which looks nice and is cheaper, but comes with some limitations.
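
To show why a normal map fakes detail, here's a minimal Python sketch of diffuse lighting (the mapped normal value is made up; in a real engine it comes from a texture and is computed per pixel on the GPU):

```python
import math

# Lambertian diffuse lighting: brightness = max(0, normal dot light_direction).
# Swapping the flat surface normal for a normal read from a texture changes
# the shading without adding any geometry.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def diffuse(normal, light_dir):
    n, l = normalize(normal), normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

light = (0.3, 1.0, 0.2)

flat_normal = (0, 1, 0)          # the triangle's real normal
mapped_normal = (0.4, 0.8, 0.1)  # made-up value "read from the normal map"

print(diffuse(flat_normal, light))    # shading of the flat surface
print(diffuse(mapped_normal, light))  # same surface, shaded as if bumpy
```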

I’m no expert, that’s all based on my limited Blender knowledge and interest in the area 🙂

EDIT: I pointed out the normal/displacement maps because sometimes you can get away with not having a different model and just having fancy maps, but usually it’s a combination of both, since simple models are easy to spot (old games with pointy faces, etc.).

Anonymous 0 Comments

You could use the same model in all of those cases. But for performance reasons most games don’t.

It doesn’t matter what angle you’re looking at the character from. That’s the point of 3D models: you can look at them from any angle, unlike 2D sprites, where you needed separate images for different angles.

But there’s not much point in using a highly detailed model when it’s too far away to see that detail. So games usually use a model with a lower Level Of Detail (aka LOD), which gets switched out for one with more detail when you get closer.

In some games you can see that happening if you look closely enough when moving towards something.

They might do a similar thing for cutscenes, where a higher-detail model is used, particularly when there’s a close-up of the face.

Anonymous 0 Comments

A big thing in computer graphics is that so much of it is based on tricks. In real games, a million different things work really weirdly under the hood, because if something is stupid but looks good, it’s not stupid.

By default, yes, the model is the model, and you could move the camera up to it and around it and it’d be the same model. In real games, though, there are often a million little graphical tricks: some far-away background thing, or something in a cutscene, might really be low quality, or just a flat texture, or missing the parts you can’t see, or just a looping video, or a million other weird gimmicks.
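
One of those flat-texture tricks is a “billboard”: instead of a full 3D model, the game draws a flat picture of the thing and spins it to face the camera every frame. A rough Python sketch of the idea (the 2D simplification and names are mine):

```python
import math

# Billboard impostor: instead of a full 3D tree model, draw a flat quad
# with a picture of a tree and rotate it to face the camera every frame.

def billboard_yaw(camera_pos, object_pos):
    """Yaw angle (radians) that turns the quad to face the camera."""
    dx = camera_pos[0] - object_pos[0]
    dz = camera_pos[1] - object_pos[1]
    return math.atan2(dx, dz)

tree_pos = (50.0, 80.0)
for camera in [(0.0, 0.0), (100.0, 0.0), (50.0, 200.0)]:
    yaw = billboard_yaw(camera, tree_pos)
    print(f"camera at {camera}: rotate quad to {math.degrees(yaw):.1f} degrees")
```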

The straightforward way to do things is to just make a complete model and render it at all times, but there are lots of resource reasons that stuff is done weirdly. If you walk up to something like a mirror and it works, you can almost guarantee that what’s really going on is the stupidest thing in the world, but if it looks good, it’s fine.
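
For example, one classic mirror trick is to render the scene a second time from a “virtual” camera reflected across the mirror’s plane. A simplified Python sketch (real engines also clip the reflection and flip triangle winding, among other things):

```python
# Reflect the real camera across the mirror's plane; the scene drawn from
# that virtual camera becomes the mirror image.

def reflect_point(p, plane_point, plane_normal):
    """Reflect point p across the plane defined by a point and a unit normal."""
    d = sum((pi - oi) * ni for pi, oi, ni in zip(p, plane_point, plane_normal))
    return tuple(pi - 2 * d * ni for pi, ni in zip(p, plane_normal))

# Mirror on the wall x = 5, facing back into the room along -x.
mirror_point, mirror_normal = (5, 0, 0), (1, 0, 0)

camera = (2, 1.7, 3)
virtual_camera = reflect_point(camera, mirror_point, mirror_normal)
print(virtual_camera)  # (8, 1.7, 3): render from here to get the reflection
```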

Anonymous 0 Comments

3D modelling for games and other media is essentially the same. The basic principles are:

A wireframe: This is the boundaries of your object. Wireframes are made up of vertices; in general, the more you have, the “smoother” an object will look. However, the more there are, the more computational power is needed. On a house, this would be the outline of all of the distinct faces.

Textures: These “paint” material over the wireframe. The more detailed (or complex) the texture, the more computational power is needed. These would be the wallpaper, flooring, etc. There are a lot of tricks, like bump maps, but they aren’t necessarily worth digging too far into.

Lighting is self-explanatory.

Viewports are like cameras: your perspective viewing a scene with one or many models, light sources, etc. Only what can be seen in the viewport is rendered, so for example, if you’re looking at a house, the back of the house and its interior are not rendered if you cannot see them.
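
Here's a tiny Python sketch of that last point: skipping faces that point away from the viewport, the way the back wall of the house gets skipped (the numbers are invented; real engines do this per triangle on the GPU):

```python
# Four walls of a "house", each described by its outward-facing normal.
walls = {
    "front": (0, 0, 1),
    "back":  (0, 0, -1),
    "left":  (-1, 0, 0),
    "right": (1, 0, 0),
}

view_dir = (0, 0, -1)  # camera looks toward -z, i.e. at the front wall

for name, normal in walls.items():
    # A face is visible if its normal points back toward the camera,
    # i.e. against the view direction (negative dot product).
    facing = sum(n * v for n, v in zip(normal, view_dir))
    print(name, "render" if facing < 0 else "cull")
```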

Because an object far away (relative to the viewport) doesn’t need a lot of detail, you can get away with using fewer vertices and less detailed textures. So essentially it’s the same model, just with less detail. Using the house as an example, you don’t need to individually render shingles on a roof if you’re far away, but as you get closer you may need to, and closer still you may need better textures to show grain.
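
The texture half of that is usually handled with mipmaps: the engine stores a chain of progressively smaller copies of each texture and samples a blurrier one as you move away. A simplified Python sketch (the formula and sizes here are just for illustration):

```python
import math

# The engine keeps progressively smaller copies of each texture, so
# far-away shingles get sampled from a tiny blurry image.

MIP_SIZES = [1024, 512, 256, 128, 64, 32]  # texture resolutions, finest first

def pick_mip(distance, base_distance=10.0):
    """Each doubling of distance roughly steps down one mip level."""
    level = max(0, int(math.log2(max(distance, base_distance) / base_distance)))
    return MIP_SIZES[min(level, len(MIP_SIZES) - 1)]

for d in [5, 10, 25, 100, 800]:
    print(f"distance {d:>4}: sample the {pick_mip(d)}x{pick_mip(d)} texture")
```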

Depending on the cutscene, you may use different models (and processes) entirely, especially if you can’t interact with it, because you know exactly what the viewport will be. But you could also use a super-detailed version of the same model.