Why is file size measured in megabytes but internet speed measured in megabits?

In: Technology

6 Answers

Anonymous 0 Comments

It’s both more accurate and more intuitive to measure speed by the number of bits per second an internet connection can transmit, rather than by the number of memory units, or bytes, it moves.

Anonymous 0 Comments

Simply advertising, done in a somewhat deceptive manner. A byte is 8 times as large as a bit, so saying 20 megabits sounds more satisfying than saying 2.5 megabytes.

Anonymous 0 Comments

Because using bits makes for 8x bigger numbers, and big numbers sound good for marketing purposes.
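
If you want to do the conversion yourself, here is a quick Python sketch (nothing assumed beyond 8 bits to the byte; the function name is just for illustration):

    # Convert an advertised speed in megabits per second to megabytes per second.
    def mbps_to_megabytes_per_second(mbps):
        return mbps / 8  # 1 byte = 8 bits

    print(mbps_to_megabytes_per_second(20))  # a "20 megabit" line moves 2.5 MB/s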

Anonymous 0 Comments

Other than the marketing angle, it does make a lot of sense for hardware engineers.

File size typically refers to storage, and storage is organized in terms of bytes because the data mostly moves over parallel data lines that are one or more bytes wide. From the 1970s onward, a byte was almost universally 8 bits “wide”. The storage elements (RAM, HDD, etc.) are also organized that way from a hardware perspective. Typically the hardware cannot address an individual bit; engineers designed their addressing logic around byte-wide “chunks”.

Data transmission, especially serial transmission (Ethernet and pretty much all internet connections), happens bit by bit. From an engineering perspective, the limiting factor is how quickly the bits can be processed, which gives the bandwidth. Protocols add their own “stuff” like framing bits, but to the hardware layer all of that counts as bits from a timing perspective.
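
To make the byte-vs-bit split concrete, here is a minimal Python sketch; the 5% protocol overhead is just a placeholder assumption, not a figure for any particular protocol:

    # How long a file (measured in bytes) takes over a link (measured in
    # bits per second). The overhead factor is an assumed placeholder.
    def transfer_time_seconds(file_size_bytes, link_bits_per_s, overhead=0.05):
        bits_on_the_wire = file_size_bytes * 8 * (1 + overhead)  # framing etc.
        return bits_on_the_wire / link_bits_per_s

    # A 1 GB file over a "100 megabit" connection: about 84 seconds
    print(transfer_time_seconds(1_000_000_000, 100_000_000))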

Anonymous 0 Comments

Tradition mostly at this point.

Up until the era of dial-up modems and ISDN, a common way of measuring transfer speed was baud, alongside the bit rate.

Baud was a way of measuring transfer speed going back to the Morse telegraph, measured in symbols per second.

People cared about how fast the text they were sending was transmitted, not about the underlying technology. And the ratio of bits per second to baud could vary quite a bit.

If you transferred pure ASCII characters you could encode one character with only 7 bits, but computers quickly standardized on a single character being represented by an 8-bit byte.

How much data in bytes you transferred over a line with a fixed bit rate could therefore depend on the encoding scheme that was used.
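
For example, a little Python sketch (the 9600 bit/s line and the start/stop-bit framing are just illustrative assumptions, not something any particular standard requires here):

    # Characters per second over a fixed 9600 bit/s line under different
    # encodings. The 10-bit framing (1 start + 8 data + 1 stop bit) is a
    # classic serial-line scheme, used here purely as an illustration.
    line_speed = 9600  # bits per second

    print(line_speed / 7)    # ~1371 chars/s if 7-bit ASCII is packed directly
    print(line_speed / 8)    # 1200 chars/s with plain 8-bit bytes
    print(line_speed / 10)   # 960 chars/s with start/stop framing per byte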

Baud was eventually dropped as a relic of an analogue past, and transfer speed came to be measured in bits per second, not only because that was a bigger number but also because it was a consistent one.

The people who provided the technology could advertise their connection in bits per second; what the user got out of it in bytes per second was the user’s problem.

The gross bit rate is something fixed and guaranteed at the hardware level. The effective bit rate, and the resulting bytes, depend on overhead in various protocols, compression, error correction and other things that the people responsible for the hardware have little influence over.

So the bit rate is what is promised; the byte rate is what you make of it.

It of course helps that 1 terabit per second sounds a lot more impressive than a maximum of 125 gigabytes per second, with maybe around 110 gigabytes per second in normal usage.
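
Roughly, the arithmetic behind those numbers (the 12% overhead is just an assumption chosen to match the example, not a measured value):

    # Sketch of the gross-vs-effective arithmetic in the example above.
    gross_bits_per_s = 1_000_000_000_000          # "1 terabit per second"
    max_bytes_per_s = gross_bits_per_s / 8        # 125 GB/s theoretical ceiling

    assumed_overhead = 0.12                       # protocols, error correction, ...
    effective_bytes_per_s = max_bytes_per_s * (1 - assumed_overhead)

    # Prints approximately 125.0 and 110.0 (gigabytes per second)
    print(max_bytes_per_s / 1e9, effective_bytes_per_s / 1e9)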

Anonymous 0 Comments

This does have a historical precedent. We find the same terms used in the same way all the way back in the 60s, before we standardized on 8 bits in a byte. There are examples of systems using byte sizes anywhere from 5 bits to 24 bits and everything in between. Because any of these could be transferring data over the same link, it did not make sense to use bytes when talking about bandwidth.

Storage, however, was more dependent on the byte size and even had hardware error detection for each byte. That made it more sensible to talk about bytes when talking about storage.

Another thing is that the term bandwidth comes from radio communications, where it describes a certain width of the frequency band in use. And it turns out that the number of bits you can transfer through a certain bandwidth is the same as the frequency width of that band, so a 20 MHz bandwidth signal can transfer 20 Mbit/s using trivial modulations.
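
As a back-of-the-envelope sketch of that last rule of thumb (roughly one bit per hertz with a trivial modulation; the function and its default are just illustrative assumptions):

    # Rough rule of thumb from the answer above: about one bit per hertz
    # of bandwidth when using a trivial modulation.
    def max_bits_per_second(bandwidth_hz, bits_per_hz=1):
        return bandwidth_hz * bits_per_hz

    print(max_bits_per_second(20_000_000))  # 20 MHz -> about 20,000,000 bit/s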