If videos are basically just a bunch of 0’s and 1’s, what determines image quality?


And can it be altered to increase image quality?

In: Technology

5 Answers

Anonymous 0 Comments

Compression algorithms.

You can have a video where each frame is stored exactly as it is, a perfect pixel-by-pixel representation in 0’s and 1’s. This is a “raw” format. The video quality is perfect, but unfortunately the files are huge, so they quickly become impractical.

But you can compress the video using smart algorithms that retain the image quality while still reducing the number of 0’s and 1’s. For example, hypothetically, if a movie were just 2 hours of complete darkness, it would be a waste to store millions of frames of pure black when you could store just *one* of those frames and tell the computer to repeat it a million times.
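A minimal sketch of that repeat-the-frame idea, with plain strings standing in for frames (a toy illustration, not how a real codec stores things):

```python
# Toy "compressor": collapse runs of identical frames into (frame, count) pairs.
# This is lossless: decompress() rebuilds exactly the same sequence of frames.

def compress(frames):
    runs = []
    for frame in frames:
        if runs and runs[-1][0] == frame:
            runs[-1][1] += 1          # same frame again: just bump the count
        else:
            runs.append([frame, 1])   # a different frame: start a new run
    return runs

def decompress(runs):
    frames = []
    for frame, count in runs:
        frames.extend([frame] * count)
    return frames

# A "movie" that is mostly darkness: a million black frames, then one white one.
movie = ["black"] * 1_000_000 + ["white"]
packed = compress(movie)
print(packed)                       # [['black', 1000000], ['white', 1]]
print(decompress(packed) == movie)  # True (nothing was lost)
```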

Then there are lossy compression techniques, where you reduce the image quality, for example by lowering the resolution or throwing away some of the color data. This usually saves a lot of space, but the quality is degraded and the original cannot be reconstructed perfectly.
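A toy sketch of the lossy version, shrinking a tiny made-up grayscale “frame” to a quarter of its pixels (an illustration of the idea, not a real codec):

```python
# Lossy "compression": shrink a 4x4 grayscale frame to 2x2 by averaging
# each 2x2 block of pixels. Three quarters of the data is gone for good.

frame = [
    [10, 12, 200, 202],
    [11, 13, 201, 203],
    [50, 52,  90,  92],
    [51, 53,  91,  93],
]

def downscale(img):
    small = []
    for r in range(0, len(img), 2):
        row = []
        for c in range(0, len(img[0]), 2):
            block = [img[r][c], img[r][c + 1], img[r + 1][c], img[r + 1][c + 1]]
            row.append(sum(block) // 4)   # one averaged pixel replaces four
        small.append(row)
    return small

print(downscale(frame))  # [[11, 201], [51, 91]]: smaller, but the fine detail
                         # inside each block can never be recovered exactly
```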

It’s an entire science, and today’s compression algorithms are really clever and can save a huge amount of data at a minimal loss of quality.

Anonymous 0 Comments

If you have just one 1/0, your screen is just on or off.
If you have just 4, you could have 4 boxes, each on or off.
If you have 921,600, you can have a 720p black-and-white image.
If you have 29,491,200, you have 32 1/0s to describe what color each of those pixels should be.
Multiply that by 20 every second for motion.
Then add sound, make the sound better, up the pixel count, do 3D, add subtitles, and so on.
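A quick back-of-the-envelope run of those numbers for raw, uncompressed video (using the same figures as above):

```python
# Rough size of raw, uncompressed video at the figures used above.
width, height = 1280, 720        # a 720p frame
bits_per_pixel = 32              # 32 1/0s of color per pixel
fps = 20                         # frames per second

bits_per_bw_frame = width * height                       # 1 bit per pixel, black and white
bits_per_color_frame = width * height * bits_per_pixel
bits_per_second = bits_per_color_frame * fps

print(f"{bits_per_bw_frame:,}")              # 921,600 bits for one B/W frame
print(f"{bits_per_color_frame:,}")           # 29,491,200 bits for one color frame
print(bits_per_second / 8 / 1_000_000)       # ~73.7 megabytes of data every second
```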

Higher quality videos have more 1/0s to do more things.
There are some tricks to reduce the number of 1/0s you need, but that’s another topic.

Anonymous 0 Comments

Number of 1’s and 0’s, the contrast of your screen, color saturation, exposure, blur, etc. Camera work often heavily impacts video quality.

Anonymous 0 Comments

A book is just a bunch of letters, but what those letters say and how many of them there are tells you far more than the length of the alphabet does.

Image quality is simply how much data is encoded in those bits and how faithful it is to the real-life subject. A video can be very big, with millions of pixels making up each frame, or it can have just a couple of thousand, which makes it much, much smaller.

Sometimes the way you “translate” the video into bits affects things. You might want to save some space, so you compress the video so that it throws away bits of information it can recreate “well enough” later. If that compression is too aggressive, it throws away information you can’t get back, and that makes the video look worse.
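A toy sketch of how an over-aggressive compressor loses information for good: here, only the top 4 bits of each 8-bit brightness value are kept (just an illustration, not a real video codec):

```python
# Quantize 8-bit brightness values (0-255) down to 16 levels by dropping
# the low 4 bits, then "reconstruct" them. The result is close, but the
# small differences between neighboring values are gone permanently.

original = [200, 201, 203, 207, 96, 97, 98, 99]

compressed = [v >> 4 for v in original]          # keep only the top 4 bits
reconstructed = [v << 4 for v in compressed]     # best guess going back up

print(compressed)      # [12, 12, 12, 12, 6, 6, 6, 6]
print(reconstructed)   # [192, 192, 192, 192, 96, 96, 96, 96]
print(original)        # [200, 201, 203, 207, 96, 97, 98, 99]  (detail is lost)
```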

Or it might just have been shot on an old camera that didn’t capture enough information to make a better-looking picture. The camera might have said “Here are enough pixels to fill a 128 by 128 screen,” but then you try to view that video on a screen ten times as large: you can’t know what pixels should go into the empty space, so you just stretch the image and call it a day, even if it looks really bad.
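A toy sketch of that “just stretch it” approach, using nearest-neighbor scaling on a tiny 2x2 “image” (real players use smarter filters, but the point stands: no new detail appears):

```python
# "Stretch" a tiny image by a factor of 3 using nearest-neighbor scaling:
# every original pixel simply becomes a 3x3 block of identical pixels.
# The picture gets bigger, but no new detail is invented.

def stretch(img, factor):
    big = []
    for row in img:
        wide = []
        for pixel in row:
            wide.extend([pixel] * factor)   # repeat each pixel sideways
        for _ in range(factor):
            big.append(list(wide))          # repeat each row downwards
    return big

tiny = [
    ["A", "B"],
    ["C", "D"],
]

for row in stretch(tiny, 3):
    print(row)
# ['A', 'A', 'A', 'B', 'B', 'B']
# ['A', 'A', 'A', 'B', 'B', 'B']
# ['A', 'A', 'A', 'B', 'B', 'B']
# ['C', 'C', 'C', 'D', 'D', 'D']
# ... each original pixel is now just a blocky 3x3 patch of itself
```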

Anonymous 0 Comments

Much of the time image quality is reduced due to efforts to reduce the bandwidth or size of the image/video file; in other words there are fewer 1’s and 0’s to form the images.

You might be able to increase the quality by allowing for more 1’s and 0’s, but you need to know what they should be. A low-quality image can’t be made better without making up new data.