Why are the GHz speeds in computers the same after such a long period of time?

In: Engineering

We’re reaching size limitations. Our computers are so fast that the speed of electricity (a decent fraction of the speed of light) is itself hindering them. Thus we need smaller systems, but smaller systems run into quantum effects that make it hard to stop electrical flow where we want it stopped.

Increasing clock speed (measured in Hz) has become increasingly difficult thanks to things like substrate bleed (which is a whole other conversation), so rather than chasing raw speed, the push has been to simply add more processor cores. As software development has matured and proper use of multiple cores has become commonplace, core count has steadily outpaced raw clock speed in importance.
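
If you want to see that trade-off concretely, here’s a toy Python sketch (the prime-counting task, chunk sizes, and worker count are all made up; your speedup will depend on your machine): the same CPU-bound work runs once on a single core, then split across a pool of processes.

```python
# Toy "more cores vs a faster core" experiment: deliberately naive CPU-bound work.
from concurrent.futures import ProcessPoolExecutor
import time

def count_primes(limit):
    return sum(all(n % d for d in range(2, int(n ** 0.5) + 1))
               for n in range(2, limit))

if __name__ == "__main__":
    chunks = [50_000] * 4

    t = time.perf_counter()
    serial = [count_primes(c) for c in chunks]        # one core, one task at a time
    print("serial:  ", round(time.perf_counter() - t, 2), "s")

    t = time.perf_counter()
    with ProcessPoolExecutor(max_workers=4) as pool:  # four tasks on four cores at once
        parallel = list(pool.map(count_primes, chunks))
    print("parallel:", round(time.perf_counter() - t, 2), "s")
```

On a 4-core machine the parallel pass typically finishes in roughly a quarter of the time, with no single core running any faster.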

Clock speed used to be king because there was only a single “pipeline” at work in the processor. Stuff went in at one end, did what it had to do, and came out the other end. The faster you could get through the pipeline, the better. Modern processor architecture has added more and more pipelines running together. By spreading what’s coming in across multiple pipelines, it keeps everything flowing more smoothly than trying to stuff it all through one pipeline.

Additional factors include decreasing cost and size of what’s called *cache memory*. This is memory that’s actually on the processor itself and is used to store data the processor is actively using. It’s far, far faster than having to write data to system memory and retrieve it. Between increased cache memory and more effective use of multiple processor cores, the importance of raw clock speed has sharply dropped off over the past 10 years.
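
You can actually feel the cache with a small experiment (assuming NumPy is installed; exact timings vary by machine). Both loops below do identical work, but one walks memory in the order it is laid out and the other jumps 32 KB between consecutive elements, so it keeps missing the cache:

```python
import time
import numpy as np

a = np.zeros((4096, 4096))        # ~128 MB, far bigger than any CPU cache

t = time.perf_counter()
for i in range(4096):
    a[i, :] += 1.0                # row-wise: reads memory in the order it's laid out
print("row-wise:   ", round(time.perf_counter() - t, 2), "s")

t = time.perf_counter()
for j in range(4096):
    a[:, j] += 1.0                # column-wise: 32 KB jump between consecutive elements
print("column-wise:", round(time.perf_counter() - t, 2), "s")
```

The column-wise pass is typically several times slower, purely because of how the data flows through the cache.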

We’re hitting the upper speed limit for processors because of limitations like the speed of light, and we are having trouble making things smaller because on that scale quantum mechanics starts doing unexpected or unwanted things.

We are overdue for a discovery that will revolutionize computer processing yet again.

So for the past decade the focus hasn’t been to increase speed, but to increase efficiency. Processors are being made with increasing numbers of cores so they can do more at once, and bus speeds are increasing so the processor can talk to devices and RAM more quickly. All of these things translate to improved performance.

For a chip to oscillate past a few GHz, the time between two clock pulses has to be very short. That time is now *not long enough* for a signal to cross the entire length of the silicon chip. This means that one signal can “overtake” the one in front of it, which causes absolute confusion and merry hell, because we design chips to be “synchronous” (i.e. things happen at the same time everywhere).

We don’t really have asynchronous chips (we can have multiple chips that aren’t in sync with each other, but a single chip tends to have one “clock signal” that turns on and off regularly and makes everything else happen).

Past about 5 GHz, the pulses needed for that clock signal are so short that, even at the speed of light, they can’t make it across the physical length of the silicon chip before the next one starts its journey.
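
Here are the back-of-the-envelope numbers behind that claim (the half-light-speed figure is an assumption, and a generous one; real on-chip wires are far slower because of RC delay):

```python
c = 3.0e8                      # speed of light, m/s
period = 1 / 5.0e9             # one cycle of a 5 GHz clock: 0.2 ns

print(c * period * 100)        # 6.0 cm: the absolute best case, in vacuum
print(0.5 * c * period * 100)  # 3.0 cm at half light speed, still generous
# A big die is 2-3 cm across, so at 5 GHz the margin is already gone.
```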

Making the chips asynchronous, shorter, or quicker actually makes things incredibly complex and liable to all kinds of problems if there’s a bug found later on. Not to mention, the higher the clock speed, the more heat given out (because the power required to make more oscillations is greater), which means more cooling or more problems with heat, and more interference.

Pretty much, we’ve hit a physical boundary that you can only compensate for by making chips tinier and tinier (which has other problems, not least manufacturing), colder and colder (supercomputers are sometimes liquid-helium cooled or similar), or more and more complex to design, produce, run, program and diagnose.

Let’s say you own a deli making sandwiches. When you get a large order, there are two ways to finish quicker. One is to make each sandwich faster (GHz). The other is to hire more people to make sandwiches (cores). With current technology, it’s just cheaper to hire more people than to get each person to work faster.

Simply put, we can’t make them any smaller. For the longest time, and for most of the speed-up, computers got faster because we could make their most basic part (the transistor) smaller. The smaller the parts, the smaller the distance electricity had to travel to make things work.

But now we’ve gotten to the point where a transistor is only a few atoms across, and any smaller and it just won’t work.

The GHz “speed” is cycles per second, but that tells us nothing about how much the computer can do with each cycle. Modern computers tend to do more with each cycle, and have multiple cores running at once. So even though they don’t appear to be getting faster, they get a lot more work done in the same amount of time. Imagine a little car vs a fleet of vans; they all drive at the same speed, but the vans deliver a lot more stuff in the same time.
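
A very rough way to put numbers on that (both chips below are hypothetical; real performance is messier, but the point is that the GHz column barely moves):

```python
# Rough model: useful work per second ≈ cores × clock × instructions-per-cycle
def gips(cores, ghz, ipc):
    return cores * ghz * ipc   # billions of instructions per second

print(gips(cores=1, ghz=3.0, ipc=1.0))   # 3.0   (the little car)
print(gips(cores=8, ghz=3.5, ipc=4.0))   # 112.0 (the fleet of vans, ~37x the work)
```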

As for *why* the speeds haven’t increased: you can improve performance either by speeding things up or by improving efficiency, and currently it’s easier to do the latter. Making anything go really quickly is hard; at larger scales this is generally self-evident, but it still applies at small scales. At their heart, computers rely on moving electrons about; electrons are really small so they go really fast, but there’s still a limit.

The greater the processor speed, the greater the power required for the processor to run.

The greater the power required, the greater the heat dissipated.

The more heat dissipated, the harder it becomes to make a reasonably sized machine (desktop/laptop) that is useable and does not feel like an oven.

So this sets a practical limitation. Computer designers have instead elected to improve performance by increasing the number of processors while keeping each one at a (relatively) modest speed.
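
A quick sketch of why the power grows so fast (the capacitance and voltage figures are made up for illustration; the catch is that pushing the clock up usually forces the voltage up too):

```python
# Dynamic power in CMOS scales roughly as P ≈ C · V² · f
def dynamic_power(cap, volts, freq):
    return cap * volts ** 2 * freq

base = dynamic_power(1e-9, 1.0, 3e9)   # 3.0 W at 3 GHz and 1.0 V (hypothetical)
fast = dynamic_power(1e-9, 1.3, 5e9)   # 8.45 W at 5 GHz, assuming 1.3 V is needed
print(fast / base)                     # ≈ 2.8x the heat for only 1.67x the clock
```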

I haven’t seen it here yet for some reason, but one of the biggest reasons is heat from power consumption. Processors get unsustainably hot because they become less efficient as power consumption increases.

For decades, if people’s computer programs were too slow, they would wait a year for processor speeds to increase in order to get a “free lunch”.

Let’s pretend your family is moving across town and you have 5000 boxes of toys. The moving truck can only fit 50 boxes at a time, which means your dad will have to make 100 round trips. The fastest your dad can drive on the freeway is 65 mph. Sure, he could drive at 100 mph to make better time but damn it, he loves you too much to risk jail time. So instead of pushing himself to drive faster on the freeway, endangering himself and others, he decides it’s a better idea to have your mom drive a second moving truck, also packed with 50 boxes of your awesome toys. Now the two of them only have to make 50 round trips each, halving the time it would’ve taken, all without the risk of a speeding ticket or going to prison!

Now imagine how much faster the move would be if your parents also recruited your uncle Bob and aunt Sally to drive a third and a fourth moving truck. They would be able to move all of your toys 4 times faster than if your dad had to move everything by himself. To match that speed alone, your dad would have to drive at 260 mph, and the U-Haul down the street isn’t renting out Koenigseggs yet.

A lot of this has already been answered, but let me provide a bit of perspective from closer to the silicon level since I’m currently on an internship working with this issue. While clock speeds are important, they are not the only factor in computer performance. Thus, current designs aren’t focused solely on increasing clock speeds.

One of the main issues is simply heat. As we increase the rate at which transistors switch, the required power climbs much faster than the clock speed itself, and the chip gets difficult to cool.

A more fundamental issue is that transistors and their associated wires have capacitance, i.e. the ability to store electrical charge. This effectively slows down the rate at which you can change a signal: the charge stored in these reservoirs counteracts any change you make until it is drained. This makes a nice sharp clock edge flatten out and slows down rise times.
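
To put rough numbers on that (R and C here are hypothetical stand-ins for a driver and its wire load):

```python
# A wire driven through resistance R into capacitance C can't switch faster
# than its RC time constant allows.
R = 1e3           # ohms
C = 100e-15       # farads (100 femtofarads)
tau = R * C                     # 100 ps time constant
rise_10_90 = 2.2 * tau          # classic 10%-to-90% rise time of an RC circuit
print(rise_10_90 * 1e12, "ps")  # 220 ps, longer than a whole 5 GHz period (200 ps)
```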

Lastly, it is difficult to design good interconnects. Even with a really high clock speed, it’s not easy to design wires that can carry information at that speed. All wires have some capacitance and inductance, meaning energy is temporarily stored in electric and magnetic fields instead of being sent down the wire, and the amount of energy stored this way is frequency-dependent. At higher clock speeds/frequencies, a lot more energy is “lost” before reaching the end, so the signal that arrives is much weaker. On top of that, at higher frequencies signals on one wire start leaking into nearby wires (crosstalk), something you obviously want to avoid.

A single processor core can’t truly multitask. What it can do, however, is rapidly switch between different tasks. Pushing clock speed (GHz) higher has diminishing returns for the effort and cost involved, so instead it’s better to have multiple cores, each capable of doing its own task, so that collectively the CPU as a whole is multitasking. This means you can play a video game, for instance, with one core dedicated to that and another handling a background task, so they aren’t competing for the same core.

Used to be that we’d figure out how to cram twice as many transistors onto a chip every year and a half or so; that’s what we call Moore’s law. Used to be, too, that those transistors, at half the size, also drew half the power, but at some point that stopped. You could still put twice the transistors on the chip, but it would draw twice the power and heat up twice as much, so heat dissipation became the limiting factor, not the ability to make faster processors in itself.

Physically, processors can only process so quickly. We’re limited by physics. We can only make things so small before we run into issues, and we can only transmit information so quickly with our current technology.

Imagine you’re an Olympic athlete doing the long jump. Due to the limitations of physics and of the human body, there’s only so far a person can long jump. Technique can only be optimized so much before humans hit a ceiling and just can’t set higher records. The Olympic record for the men’s long jump, 8.90 meters, was set in 1968. That’s over 50 years ago!

Imagine you had to blow into a straw. If you blow slowly, it’s pretty easy. If you blow hard, you get a lot of resistance but it’s possible. Now try blowing with your full strength: very hard, right? Now try the same with 2 straws; it’s suddenly a lot easier to push the air through. What if you had 4, or even 8? It’s similar with computers. It becomes very hard to make them tick-tock faster after a certain point (which seems to be about now), but it’s fairly easy to add more things that go tick-tocking, as you then have to solve logistical issues rather than technical ones.

So, if you were over 5: making a CPU with a clock speed of, say, 8 GHz would require a lot of advanced physics, possibly a better understanding of quantum mechanics, and so on (other comments explain this better). The only things you have to figure out when sticking in more cores are how to remove the heat (it’s a very small surface, and you can only conduct so much heat per square cm) and how to keep them all supplied with things to do. Those are not easy problems, but they’re easier than increasing clock speed. It does make the job of a programmer harder, but apparently that hasn’t been too bad so far.

I’ll give you an analogy.

Let’s say you want to clean your kitchen.

Increasing the frequency (GHz) is sort of like you moving around and doing things faster, e.g. walking, picking things up, etc. Now, you can get yourself pretty fast if you drink a lot of coffee, say, but you will reach a limit.

Now to get around this limit we can do two things in our kitchen cleaning analogy.

Adding another person to help you clean is like adding another core. As time has moved on, the cores (or, in the analogy, the people) have gotten better at working together, e.g. not getting in each other’s way or blocking the sink. There’s a limit to this too: think of trying to clean your kitchen with 20 people. You wouldn’t be able to manage that in normal circumstances at home.

And the other way to improve performance is to change how you accomplish a task. Back to the kitchen analogy: compare manually sweeping up the dust and crumbs with a brush to using a vacuum cleaner, or adding a dishwasher. A lot of the performance gains in processors these days also come from optimizing how they perform the common subtasks they run into.

I hope that clears it up a bit.

Think a lot of these explanations are too technical for this sub.

GHz is only one factor in how fast a computer is. Like how in cars, horsepower is only one of many factors that impacts how fast it is.

Nowadays, it’s easier to make computers better by making them more efficient rather than by adding raw power.

They went from one core at, say, 3.74 GHz in 2006 (115 W of power, 135 mm² in size, $999) to six cores at 4.0 GHz each in 2015 (140 W total, 82 mm², $617).

That’s a big jump in performance while shrinking the die and dropping from 115 W for one core to about 23 W per core.
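
A quick sanity check on those numbers:

```python
# Quick arithmetic on the two chips quoted above
watts_2006, cores_2006, ghz_2006 = 115, 1, 3.74
watts_2015, cores_2015, ghz_2015 = 140, 6, 4.0

print(watts_2015 / cores_2015)                            # ≈ 23.3 W per core, vs 115 W
print((cores_2015 * ghz_2015) / (cores_2006 * ghz_2006))  # ≈ 6.4x the raw cycles/sec
# Raw cycles per second is a naive measure, but the direction is clear.
```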

Imagine you had a maid who comes in and cleans your house every day. Over time it turns out you make more of a mess so you make the maid work faster and faster. But realistically, there’s only so fast she can work. That’s the problem with clock speed. You can make it faster and faster, but there’s only so fast it can reasonably go.

A much better solution is to hire multiple maids and have them work at a reasonable pace. So while one cleans the kitchen, another is cleaning the living room, etc. Overall, the amount of work they can get through is more than one maid working really fast. This is like a CPU with multiple cores.

So basically, instead of struggling to make one CPU that runs at 10 GHz (which is really hard), manufacturers make a 4-core CPU where each core runs at, say, 2.5 GHz, for roughly the same overall throughput on work that can be split up, and that’s much easier.

Idk if somebody mentioned this yet, but there will come a point where the barriers inside the CPU’s transistors are so thin that electrons will quantum tunnel across them whether the switch is open or closed. It’s basically like a light switch turning itself on and off randomly because the wires are so close together that the electricity jumps the gap anyway. So there are upper limits, defined by the laws of physics, on how tightly we can pack the transistors.

There are downsides to adding cores rather than increasing speeds, but there might not be much of an option. Programmers write sets of instructions, and the CPU executes one, then the next, then the next. To use multiple cores, you have to send different sets of instructions to different cores. It’s called threading, and it can be very difficult for a novice programmer to do correctly; it takes skill, knowledge, and experience. But done right, it can be more useful to have multiple cores than one very powerful one.
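
To make that concrete, here’s a minimal Python sketch of the classic beginner trap (a shared counter; the iteration count is arbitrary). Without the lock, two increments can interleave and updates get lost:

```python
import threading

counter = 0
lock = threading.Lock()

def work(use_lock):
    global counter
    for _ in range(1_000_000):
        if use_lock:
            with lock:
                counter += 1
        else:
            counter += 1   # read-modify-write is not atomic: updates can be lost

for use_lock in (False, True):
    counter = 0
    threads = [threading.Thread(target=work, args=(use_lock,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # the no-lock run usually prints less than 4000000; bump the iteration
    # count if your machine is too fast to catch it
    print("with lock:" if use_lock else "no lock:  ", counter)
```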