How exactly are graphics cards rated and compared to each other?


If you’re referring to the numbers on cards such as 1060 or 2070, those come from a naming scheme chosen by the card manufacturers. The first part of the number, like 10- or 20-, refers to the generation of the card, while the second part, like -60 or -70, indicates where the card sits relative to the other cards of that generation.
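To make the split concrete, here’s a minimal sketch (assuming Nvidia’s four-digit GeForce-style numbering, where the last two digits are the tier) of how such a model number decomposes:

```python
def decode_model(model: str) -> tuple[str, str]:
    """Split an Nvidia-style model number (e.g. "1060", "2070")
    into its generation prefix and performance-tier suffix."""
    # The last two digits are the tier; everything before is the generation.
    generation, tier = model[:-2], model[-2:]
    return generation, tier

print(decode_model("1060"))  # ('10', '60')
print(decode_model("2070"))  # ('20', '70')
```

So a 1060 and a 2060 sit in the same tier of different generations, while a 2060 and a 2070 are different tiers of the same generation.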

If you’re talking about how the cards perform then there are various “benchmarks” which test the cards by rendering or doing other difficult tasks and then measuring how well they perform those tasks. That measurement of performance is then converted to a number allowing different cards to be compared on that specific benchmark.

There are a few ways to compare graphics cards.

TFLOPS (trillions of floating point operations per second) is the common one used in a lot of marketing these days. A GPU is essentially a large bundle of simple processor cores designed to churn through a lot of basic math operations very quickly, and FLOPS measures how fast it does that.

The problem with FLOPS, though, is that it’s only really useful for comparing GPUs from the same manufacturer using the same architecture. It’s a theoretical maximum, and the way Nvidia and AMD measure it differs. You’ll notice a lot of Nvidia cards have a lower floating point score than comparable AMD cards, yet still outperform them. When it comes to gaming, floating point performance matters, but it’s not all that matters; other factors affect performance too. In fact, high floating point scores tend to matter more for productivity workloads than for video games.
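The “theoretical maximum” nature of the number is easy to see from how it’s usually computed: core count times clock speed times two (since each core can do a fused multiply-add, which counts as two operations, per clock). A rough sketch, using approximate GTX 1060 specs (1280 cores, ~1.71 GHz boost) as the example numbers:

```python
def fp32_tflops(shader_cores: int, boost_clock_ghz: float) -> float:
    """Theoretical peak FP32 throughput in TFLOPS.

    Each core can issue one fused multiply-add (FMA) per clock,
    which counts as 2 floating point operations.
    """
    return shader_cores * boost_clock_ghz * 2 / 1000  # GFLOPS -> TFLOPS

# Approximate GTX 1060 specs: 1280 CUDA cores at ~1.71 GHz boost clock
print(round(fp32_tflops(1280, 1.71), 2))  # ~4.38 TFLOPS
```

Nothing in that formula says anything about memory bandwidth, caches, or how well the architecture keeps those cores fed, which is exactly why two cards with the same TFLOPS figure can perform very differently in games.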

Memory bandwidth (how fast the VRAM on the card is) and how efficiently the GPU architecture handles certain tasks play a big role in how well a card performs in games. Another factor is how many compute units (cores) a GPU has. Manufacturers “bin” components: the fabrication process isn’t an exact science, so some chips come out perfect while others have defective cores. The defective cores get disabled, and the GPU is sold at a lower price. It still works fine, just not with the performance of a flawless chip. This is done with CPUs as well, and it’s primarily how performance tiers are set up.

Some other measures are gigapixels per second (how many pixels the card can draw each second) and gigatexels per second (how many texture samples it can process each second). Older GPUs were also rated in polygons per second, since all 3D models are made up of triangles (polygons).
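Those fill rates are also simple theoretical products of unit counts and clock speed. A sketch, again using approximate GTX 1060 figures (48 ROPs, 80 TMUs, ~1.71 GHz) purely as example inputs:

```python
def fill_rates(rops: int, tmus: int, clock_ghz: float) -> tuple[float, float]:
    """Theoretical fill rates in gigapixels/s and gigatexels/s.

    Each ROP can write one pixel per clock; each TMU can sample
    one texel per clock.
    """
    return rops * clock_ghz, tmus * clock_ghz

# Approximate GTX 1060 specs: 48 ROPs, 80 TMUs, ~1.71 GHz boost clock
pixels, texels = fill_rates(48, 80, 1.71)
print(round(pixels, 1), round(texels, 1))  # ~82.1 GP/s, ~136.8 GT/s
```

Like TFLOPS, these are peak numbers; real games rarely hit them sustained.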

The most common way, though, is to benchmark them with real-world applications and see how they actually perform.