Why is the number range 0-255 common in many computer files?


In: Technology

6 Answers

Anonymous 0 Comments

Because that is exactly one byte's worth of counting.

Computers count in binary and humans count in decimal.

When humans think of round numbers, they think of numbers like 100 or 1,000 or 50 or 1,000,000: powers of 10, or at least multiples or simple fractions of powers of ten.

For the way computers count, powers of 2 are the round numbers.

While humans (at least in European-style notation) tend to group 3 decimal digits together to get thousands, millions, and billions, for computers we tend to group 8 binary digits together.

8 binary digits (bits for short) together make a byte. This is a grouping used by computers deep down on the hardware level and thus present everywhere in computing.

One byte has 2^8 = 256 different possible values. It can be used to represent values from 0 to 255, or from -128 to +127, if you use it to count whole numbers (called integers in computing).

The same way that a 3-digit decimal number can go from 0 up to 999, an 8-bit number can go from 0 to 255. That is simply where you run out of digits.

Other examples of whole-byte numbers that crop up frequently are 16-bit (2 bytes), which goes from 0 up to 65,535 (65,536, aka 64K, different values), and 24-bit (3 bytes), which can be used to represent 16,777,216 different values.

These are the sorts of numbers that count as round numbers for computers and those using them.
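
To make those figures easy to check, here is a small Python sketch (purely illustrative) that prints the value ranges for 8-, 16-, and 24-bit numbers:

```python
# Number of distinct values an n-bit field can hold, and the unsigned range.
for bits in (8, 16, 24):
    values = 2 ** bits              # e.g. 2**8 = 256
    print(f"{bits}-bit: {values} values, unsigned range 0..{values - 1}")

# The same byte read as a signed (two's-complement) integer instead:
print("8-bit signed range:", -(2 ** 7), "to", 2 ** 7 - 1)   # -128 to +127
```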

Anonymous 0 Comments

Normal numbers that we use every day are base 10. That is, there are ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9.

When you run out of digits (i.e. you get to 9), you add a 1 to the left and carry on: 10, 11, 12…

When you run out of digits again, you change the leading digit to a 2.

Binary (base 2) works exactly the same way, but you only have two digits, 0 and 1.

So you can only count to 1 before you run out of digits. Then you have to add a 1 to the left. It works the same way all the way along.

0 = 0

1 = 1

10 = 2

11 = 3

100 = 4

101 = 5

110 = 6

111 = 7

and so on.

A single binary digit is called a ‘bit’ (really a contraction of **b**inary dig**it**). For most of the history of personal computing, computers have used 8 bits to store a character. Using the system of binary counting above, the highest number you can store in eight bits is 255.

00000000 = 0

11111111 = 255
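
If you want to see that counting in action, here is a tiny Python sketch (purely illustrative):

```python
# Reproduce the counting table above: binary on the left, decimal on the right.
for n in range(8):
    print(f"{n:b} = {n}")

# All eight bits set to 1 is the largest value a byte can hold.
print(int("11111111", 2))   # 255
```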

That’s why it comes up a lot.

Anonymous 0 Comments

Because of binary and bytes.
Computer files work in bytes (and bits). A byte is made up of 8 bits.

Now each bit can only be 1 or 0 (i.e. binary).
Binary doesn’t use the base-10 numbering that we are used to; it uses base-2 numbering.

For example, 216 is 2 in the hundreds column (10^2), 1 in the tens column (10^1) and 6 in the ones column (10^0); i.e. each column is another factor of 10.

Binary does the same, but with 2.
Hence the number 1011 (which is 4 bits) = 1*(2^3) + 0*(2^2) + 1*(2^1) + 1*(2^0) = 11 (eleven).

Now back to the byte, which is 8 bits. The maximum number you can have (in binary) in 1 byte is all eight 1’s: 11111111.
If you do the binary math for that you’ll find it’s 255.
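
A quick way to check that math, sketched in Python (just for illustration):

```python
# Expand 1011 by place value, the same way 216 = 2*100 + 1*10 + 6*1.
print(1 * 2**3 + 0 * 2**2 + 1 * 2**1 + 1 * 2**0)   # 11

# The all-ones byte 11111111, worked out the same way.
print(sum(2**i for i in range(8)))                  # 255
```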

Therefore since computers use bytes, the range you’ll find is 0 (00000000) to 255 (11111111).

Anonymous 0 Comments

A byte is a unit of digital info; 1 byte consists of 8 bits. Computers are used to talking in binary (a language composed of 0s and 1s) just like we are used to speaking English. The highest number you can say in binary using just 8 digits (called an octet) is 255. Imagine you have just 8 spaces in which you could put either 0 or 1. If all those spaces are filled with 1s, it makes 11111111, which translated from binary means 255.

Anonymous 0 Comments

Work on your google-fu.

256 is 2 to the power of 8, i.e. the number of different values that 8 bits (1 byte) can hold. Since counting starts from 0, one of those 256 values is used up by 0 itself, so the largest number that can be stored in a byte is 256 - 1 = 255.

Anonymous 0 Comments

Zeros and ones are the norm in computers (binary). The largest number you can make with 8 bits is 255. There are 8 bits in a byte, so a 0-255 counter is a byte. Note: I don’t know much about these things, so I might be totally wrong.