Why are games rendered with a GPU while Blender, Cinebench and other programs use the CPU to render high quality 3d imagery? Why do some start rendering in the center and go outwards (e.g. Cinebench, Blender) and others first make a crappy image and then refine it (vRay Benchmark)?

In: Technology

17 Answers

Anonymous 0 Comments

Games and offline renderers generate images in very different ways. This is mainly for performance reasons (offline renderers can take hours to render a single frame, while games have to spew them out in a fraction of a second).

Games use rasterization, while offline renderers use ray tracing. Ray tracing is a lot slower, but can give more accurate results than rasterization^[1]. Ray tracing can be very hard to do well on the GPU because of its more restricted architecture, so most offline renderers default to the CPU.
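To make that distinction concrete, here's a minimal, hypothetical ray-tracing sketch in Python: one sphere, one light, one ray per pixel. The scene, sizes and names are all made up for illustration and have nothing to do with any real renderer's API.

```python
import math

# Minimal ray tracer: one sphere, one light direction, Lambertian shading.
WIDTH, HEIGHT = 80, 40
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, -3.0), 1.0
LIGHT_DIR = (0.577, 0.577, 0.577)  # normalized direction toward the light

def hit_sphere(origin, direction):
    """Return distance along the ray to the sphere, or None if it misses."""
    oc = [origin[i] - SPHERE_CENTER[i] for i in range(3)]
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c  # a == 1 because the direction is normalized
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for y in range(HEIGHT):
    row = ""
    for x in range(WIDTH):
        # Shoot one ray per pixel from a camera at the origin.
        dx = (x / WIDTH - 0.5) * 2.0
        dy = (0.5 - y / HEIGHT) * 2.0
        length = math.sqrt(dx * dx + dy * dy + 1.0)
        direction = (dx / length, dy / length, -1.0 / length)
        t = hit_sphere((0.0, 0.0, 0.0), direction)
        if t is None:
            row += " "
        else:
            # Lambertian shading: brightness = max(0, normal . light_dir)
            p = [direction[i] * t for i in range(3)]
            n = [(p[i] - SPHERE_CENTER[i]) / SPHERE_RADIUS for i in range(3)]
            shade = max(0.0, sum(n[i] * LIGHT_DIR[i] for i in range(3)))
            row += " .:-=+*#%@"[min(9, int(shade * 9.99))]
    print(row)
```

A rasterizer would instead loop over triangles and fill the pixels they cover; real ray tracers add bounces, materials and many samples per pixel, which is exactly where the hours per frame go.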

GPUs usually have a better computing power/$ ratio than CPUs, so it can be advantageous to do computationally expensive stuff on the GPU. Most modern renderers can be GPU-accelerated for this reason.

> Why do some start rendering in the center and go outwards (e.g. Cinebench, Blender) and others first make a crappy image and then refine it (vRay Benchmark)?

Cutting the image into square blocks and rendering them one after the other makes it easier to schedule when each pixel should be rendered, while progressively refining an image lets the user quickly see what the final render will look like. It’s a tradeoff, and some (most?) renderers offer both options.
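Here's a tiny sketch of the scheduling side: the same set of tiles, ordered scanline-style versus center-outward. The image and tile sizes are made-up numbers, not anything a specific renderer uses.

```python
# Two possible tile orders for a bucket/tile renderer (illustrative only).
IMAGE_W, IMAGE_H, TILE = 1920, 1080, 256

def tiles_scanline():
    """Top-left to bottom-right, row by row."""
    for ty in range(0, IMAGE_H, TILE):
        for tx in range(0, IMAGE_W, TILE):
            yield (tx, ty)

def tiles_center_out():
    """Same tiles, sorted by distance from the image center."""
    cx, cy = IMAGE_W / 2, IMAGE_H / 2
    def dist(tile):
        tx, ty = tile
        return (tx + TILE / 2 - cx) ** 2 + (ty + TILE / 2 - cy) ** 2
    return sorted(tiles_scanline(), key=dist)

print(list(tiles_scanline())[:4])   # first few tiles in scanline order
print(tiles_center_out()[:4])       # first few tiles nearest the center
```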

************

[1] This is a massive oversimplification, but if you are trying to render photorealistic images it’s mostly true.

Anonymous 0 Comments

These are all different programs each with a different way of rendering graphics.

GPUs tend to render the image as a series of triangles with textures on them. This is good enough for video games, and more importantly the GPU can do it in real time, so you can get 60-120 frames per second without too much issue. Lighting calculations must be done separately; you’ve likely seen video games produce crappy shadows for moving objects, and maybe a setting to control how good they look in exchange for performance.
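To give a sense of what "a series of triangles" means in code, here's a toy software rasterizer that fills one 2D triangle using edge functions. It's a made-up illustration, not what a GPU driver actually runs; a real GPU does this massively in parallel and adds depth testing, texturing and shading.

```python
# Toy rasterizer: fill one 2D triangle by testing each pixel against its edges.
W, H = 60, 30
v0, v1, v2 = (10, 5), (50, 8), (30, 27)   # triangle vertices in pixel space

def edge(a, b, p):
    """Signed area of (a, b, p); positive when p is on the left of edge a->b."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

area = edge(v0, v1, v2)
for y in range(H):
    row = ""
    for x in range(W):
        p = (x + 0.5, y + 0.5)
        w0, w1, w2 = edge(v1, v2, p), edge(v2, v0, p), edge(v0, v1, p)
        # The pixel is inside if all edge functions agree in sign with the area.
        if area > 0:
            inside = w0 >= 0 and w1 >= 0 and w2 >= 0
        else:
            inside = w0 <= 0 and w1 <= 0 and w2 <= 0
        row += "#" if inside else "."
    print(row)
```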

You CAN make GPUs do rendering differently, but you have to write the code to do it yourself rather than letting Direct3D or OpenGL do it for you. This can be difficult, as it’s like learning a whole new language.

These other programs use different methods of rendering. What matters most is that they work pixel by pixel and take the properties of light and reflection very seriously. The shadows produced will be as close to perfect as possible, taking into account multiple light sources, point vs. area lights, and reflections. Consequently they look VERY good but take a lot longer to render.

Starting from the centre and working your way out is just a preference thing. Some renderers start from the top-left corner. But since the object in question tends to be at the centre of the camera shot and these renders take a while, starting from the centre makes sense in order to draw the thing in frame most quickly.

vRay renders the whole frame at once rather than starting in a small spot and working its way out. I don’t use it, but from seeing other benchmarks I suspect it works by firing light rays from the light sources (e.g. the sun) which find their way to the camera, rather than firing scanning rays from the camera to build the image more methodically. This means the image is produced chaotically as photons from the sun find the camera, rather than the camera discovering the scene lit by the sun.

Anonymous 0 Comments

GPU = quick and dirty.

CPU = slow but perfect and doesn’t need expensive hardware.

If you’re rendering graphics for a movie, it doesn’t matter if it takes an hour per frame, even. You just want it to look perfect. If you’re rendering a game where it has to be on-screen immediately, and re-rendered 60 times a second, then you’ll accept some blur, inaccuracy, low-res textures in the background, etc.

How the scene renders is entirely up to the software in question. Does it render everything in high quality immediately (which means you have to wait for each pixel to be drawn, but once it’s drawn it stays like that), or does it render a low-res version first, so you can get a rough idea of what the screen will look like, and then fill in the gaps in a second, third, fourth pass?
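Here's a rough sketch of that second approach with made-up resolutions: render the same scene several times at increasing resolution, so a rough preview appears quickly and the sharp image arrives last. Real renderers are far more sophisticated, but the idea is the same.

```python
# Progressive preview: render the same scene at increasing resolutions
# (illustrative stand-in; the "renderer" here just fills an image with numbers).
def render(width, height):
    """Stand-in for an expensive renderer: returns a width x height image."""
    return [[(x * y) % 256 for x in range(width)] for y in range(height)]

FULL_W, FULL_H = 640, 360
for scale in (8, 4, 2, 1):                     # 1/8, 1/4, 1/2, then full res
    preview = render(FULL_W // scale, FULL_H // scale)
    # In a real viewer the preview would be upscaled and shown immediately.
    print(f"pass at 1/{scale} resolution: {len(preview[0])}x{len(preview)} pixels")
```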

However, I bet you that Blender, etc. are using the GPU just as much, if not more. They’re just not trying to render 60 fps with it. They’ll render far fewer frames, but in perfect quality (they often use things like compute shaders to do the computations on the GPU… and often at the same time as using the CPU).

Anonymous 0 Comments

Blender does use the GPU to speed up its Cycles rendering engine. Larger scenes may cap out the VRAM on the GPU, so you may have to use the CPU for rendering.

Anonymous 0 Comments

Blender can also use the GPU; most render farms for Blender do use the GPU since it is faster and cheaper.
Games and such use a different renderer.

Anonymous 0 Comments

Almost every 3D software has its own rendering engine that’s different from others by the kinds of calculations it does in order to produce an image.

– Videogame engines are optimized to do rendering in real time, and GPUs are in turn optimized to help them achieve that: making the quality as good as possible while being able to render 30/60/240 frames per second. Videogames use *a lot* of shortcuts and clever tricks to make the image look great with minimal computing, like normal maps, baked-in lighting, a plethora of shaders, lots of post-processing, etc.

– Professional 3D rendering engines are optimized for quality and realism. As in, putting an actual light in the scene, and calculating how the rays will bounce off the objects and into the camera. Those kinds of calculations take more time, but produce much better results and are more flexible.

But when all is said and done, the rendering calculations themselves can be processed by CPU or GPU cores, depending on which will do the task faster, cheaper, or more energy-efficiently with the software in question.

You can try it for yourself in Blender. Take any scene and render it out using the Cycles renderer, first on the GPU and then on the CPU, to see how they perform. A GPU will render one sector at a time, but very fast, whereas a CPU will render multiple sectors at once (one per physical core), but each sector will take longer to render.
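If you'd rather script that comparison than click through the UI, something along these lines should work from Blender's built-in Python console (Cycles settings as of recent Blender versions; check your version's API, and the timing here is only a rough illustration):

```python
import time
import bpy  # only available inside Blender's bundled Python

# Render the current scene with Cycles, once on the GPU and once on the CPU,
# and print how long each takes.
# (GPU rendering also needs a compute device enabled in Preferences > System.)
bpy.context.scene.render.engine = 'CYCLES'
for device in ('GPU', 'CPU'):
    bpy.context.scene.cycles.device = device
    start = time.time()
    bpy.ops.render.render(write_still=False)
    print(f"{device} render took {time.time() - start:.1f} s")
```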

But that’s an ELI5 version, 3D rendering is one of the most mathematically complex subjects in computer science and I’m too uneducated to dive into more details.

Anonymous 0 Comments

Most ray-tracing renderers like vRay or Cycles have had GPU rendering options for a long time. The problem is that heavy scenes need large pools of memory, something that wasn’t available on GPUs until recently. If a GPU can’t fit a scene into its memory, it simply can’t render it at all, which means that even though the CPU is slower, it’s still better because it can actually complete the task: a CPU can have a terabyte of RAM. However, with more modern CUDA versions a GPU can also use system RAM in addition to VRAM for rendering.

Games are heavily optimized to render in real time with stable FPS and to fit into GPU memory, while scenes in Blender or other 3D packages aren’t, and are usually much heavier.

>Why do some start rendering in the center and go outwards (e.g. Cinebench, Blender)

No real reason; Blender, for example, has options for this. The centre is a good choice because that’s usually the focus of the picture: why would you want to spend time rendering a corner that might not show potential errors first…

>and others first make a crappy image and then refine it (vRay Benchmark)?

More samples, more precision.
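A toy illustration of "more samples, more precision": estimate a pixel's brightness by averaging noisy samples, the way a path tracer progressively cleans up its image. This is a generic Monte Carlo sketch, not vRay's actual algorithm, and the numbers are invented.

```python
import random

# Monte Carlo toy: the "true" pixel value is 0.5, each sample is a noisy
# estimate of it, and averaging more samples gives a cleaner result.
TRUE_VALUE = 0.5
random.seed(1)

def one_sample():
    return TRUE_VALUE + random.uniform(-0.5, 0.5)  # noisy estimate

for samples in (1, 16, 256, 4096):
    estimate = sum(one_sample() for _ in range(samples)) / samples
    print(f"{samples:5d} samples -> estimate {estimate:.4f} "
          f"(error {abs(estimate - TRUE_VALUE):.4f})")
```

The error shrinks roughly with the square root of the sample count, which is why the image looks grainy at first and slowly smooths out.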

Anonymous 0 Comments

There are both CPU and GPU renderers for offline rendering. GPUs have massive scalability, so more and more people are using them: they can just throw in another GPU and increase their render speed, whereas with a CPU you might have to change your entire system. Games are heavily optimised and run a lot better on the GPU due to the amount of precomputation that is usually done to optimise loading times and things like that.

Anonymous 0 Comments

Software engineer here. There’s a lot of wrong information in here, guys… I can’t delve into all of it, but these are the big ones. (Also, this is going to be more like an ELI15.)

A lot of you are saying CPU rendering favors quality and GPU gives quick-but-dirty output. This is wrong. Both the CPU and GPU are chips that execute calculations at insane speeds. They are unaware of what they are calculating; they just calculate what the software asks them to.

**Quality is determined by the software.** A 3D image is built up from a 3D mesh, shaders and light. The quality of the mesh (the shape) is mostly expressed in its polygon count, where a high poly count adds lots of shape detail but makes the shape a lot more complex to handle. A low-poly rock shape can be anywhere from 500 to 2,000 polygons (the number of little facets). A high-poly rock can be as stupid as 2 to 20 million polygons.

You may know this mesh as wireframe.

Games will use the lowest number of polygons per object mesh that still makes it look good. Offline render projects will favor high-poly meshes for the detail, at the cost of extra calculation time.
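To see how quickly polygon counts explode, here's a little arithmetic sketch. An icosphere starts at 20 triangles and each subdivision step splits every triangle into four; the "territory" labels are just a rough, made-up illustration.

```python
# Triangle count of an icosphere after n subdivision steps: 20 * 4**n.
# Each subdivision splits every triangle into four smaller ones.
for level in range(8):
    triangles = 20 * 4 ** level
    use = "game-asset territory" if triangles < 100_000 else "offline-render territory"
    print(f"subdivision {level}: {triangles:>9,} triangles  ({use})")
```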

That 3D mesh is just a “clay” shape though. It needs to be colored and textured. Meet shaders. A shader is a set of instructions on how to display a ‘surface’. The simplest shader is a color. Add to that a behavior for light reflectance. Glossy? Matte? Transparent? Add those settings to calculate. We can fake a lot of things in a shader, even a lot of things that look like geometry.

We tell the shader to fake bumpiness and height in a surface (e.g. a brick wall) by giving it a bump map, which it uses to add fake depth to the surface. That way the mesh needs to be way less detailed. I can make a four-point square look like a detailed wall with grit, shadows and height texture, all with a good shader.

Example: http://www.xperialize.com/nidal/Polycount/Substance/Brickwall.jpg
This is purely a shader with all its texture maps. Plug these maps into the right channels of a shader and your four-point plane can look like a detailed mesh, all by virtue of the shader faking the geometry.
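Here's a hypothetical toy version of that bump-map idea: the surface is perfectly flat, but the shading normal is perturbed from a tiny invented height map, so the lighting suggests relief that isn't in the geometry at all.

```python
import math

# Bump mapping toy: the surface is a flat plane (true normal (0, 0, 1)),
# but we derive a fake normal from a height map and light that instead.
HEIGHT_MAP = [  # tiny made-up height map, values 0..9
    [0, 0, 3, 6, 6, 3, 0, 0],
    [0, 2, 5, 9, 9, 5, 2, 0],
    [0, 2, 5, 9, 9, 5, 2, 0],
    [0, 0, 3, 6, 6, 3, 0, 0],
]
LIGHT = (-0.577, -0.577, 0.577)  # direction toward the light, normalized

def fake_normal(x, y):
    """Finite differences of the height map give a perturbed shading normal."""
    h = HEIGHT_MAP
    dhdx = h[y][min(x + 1, len(h[0]) - 1)] - h[y][x]
    dhdy = h[min(y + 1, len(h) - 1)][x] - h[y][x]
    n = (-dhdx, -dhdy, 4.0)  # the 4.0 controls bump strength
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

for y in range(len(HEIGHT_MAP)):
    row = ""
    for x in range(len(HEIGHT_MAP[0])):
        n = fake_normal(x, y)
        shade = max(0.0, sum(n[i] * LIGHT[i] for i in range(3)))  # Lambert
        row += " .:-=+*#%@"[min(9, int(shade * 9.99))]
    print(row)
```

Real normal maps do the same thing per texel, with the perturbed normals baked from a high-poly mesh instead of computed from a hand-written height map.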

Some shaders can even mimic light passing through the surface, like skin or candle wax (subsurface scattering). Some shaders emit light, like fire should.

The more complex the shader, the more time it takes to calculate. In a rendered frame, every mesh needs its own shader(s) or materials (configured shaders, reusable for a consistent look).

Let’s just say games have a 60 fps target, meaning 60 rendered images per second go to your screen. That means that every 60th of a second an image must be ready.

For a game, we really need to watch our polygon count per frame and have a polygon budget: never use high-poly meshes, and don’t go crazy with shaders.
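To put a number on that frame budget: at 60 fps the whole frame has about 16.7 ms to share between everything. The millisecond split below is invented purely for illustration; real engines profile and tune these numbers per game.

```python
# A 60 fps target means each frame must be finished in 1/60 of a second.
FRAME_BUDGET_MS = 1000.0 / 60.0          # ~16.7 ms for *everything*
stages = {                                # illustrative split, not real numbers
    "game logic + physics (CPU)": 4.0,
    "animation + draw-call setup (CPU)": 3.0,
    "rasterization + shaders (GPU)": 8.0,
    "post-processing (GPU)": 1.5,
}
used = sum(stages.values())
print(f"frame budget: {FRAME_BUDGET_MS:.1f} ms, planned: {used:.1f} ms, "
      f"slack: {FRAME_BUDGET_MS - used:.1f} ms")
```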

The CPU calculates physics, networking, mesh points moving, shader data, etc. per frame. Why the CPU? The simple explanation is that we have been programming CPUs for a long time and we are good at it. The CPU has more on its plate, but we know how to talk to it and our shaders are written in its language.

A GPU is just as dumb as a CPU, but it is more available, if that makes sense. It is also built to do major grunt work as an image rasterizer. In games, we let the GPU do just that: process the bulk data after the CPU and rasterize it to pixels. It’s more difficult to talk to, though, so we tend not to instruct it directly. But more and more, we are giving it traditionally CPU roles to offload, because we can talk to it better and better thanks to genius people.

Games use a technique called direct lighting, where light is mostly faked and calculated in a flash, as a whole. Shadows and reflections can be baked into maps. It’s a fast way for a game, but it looks less real.

Enter the third aspect of rendering time (mesh, shader, now light). Games have to fake it, because this is what takes the most render time. The most accurate way we can simulate light rays hitting shaded meshes is ray tracing: a calculation of a light ray travelling across the scene and hitting everything it can, just like real light.

Ray tracing is very intensive, but it is vastly superior to direct lighting. Offline rendering for realism is done with ray tracing. In DirectX 12, Microsoft has given games a way to use a basic form of ray tracing, but it slams our current CPUs and GPUs because even this basic version is so heavy.

Things like Nvidia RTX use hardware dedicated to processing ray tracing, but it’s baby steps. Without RTX cores, real-time ray tracing is too heavy. Technically, RTX was made to accelerate DirectX ray tracing and it is not strictly required; it’s just too heavy to enable on older GPUs, where it wouldn’t make sense.

And even offline renderers are benefiting from the RTX cores. OctaneRender 2020 can render scenes up to 7x faster by using the RTX cores. So that’s really cool.

— edit

Just to compare: here is a mesh model with Octane shader materials and offline ray-traced rendering I did recently: https://i.redd.it/d1dulaucg4g41.png (took just under an hour to render on my RTX 2080S)

And here is the same mesh model with game engine shaders in realtime non-RT rendering: https://imgur.com/a/zhrWPdu (took 1/140th of a second to render)

Different techniques using the hardware differently for, well, a different purpose 😉

Anonymous 0 Comments

Can you ask this question like I’m five?