How can our eyes tell the difference between a close object and a far object?


In: Biology

Two ways. One is just the size of what we see compared to how big we know that object to be. The second is based on the angle of your eyes. Use your left and right hands to point at your computer screen. Now instead point at something very far away. See how the angle between your fingers changes as you point to things further away? Your brain can tell how far something is by how it has to aim your eyes in order to focus on it.
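The eye-aiming cue is just triangle geometry: your eyes and the target form an isosceles triangle, so the angle between the eyes' lines of sight pins down the distance. Here's a rough sketch in Python; the function name and the 63 mm average eye separation are illustrative assumptions, not anything from the thread.

```python
import math

def distance_from_vergence(vergence_deg, ipd_m=0.063):
    """Estimate distance to a fixated point from the vergence angle.

    Treats the two eyes and the target as an isosceles triangle:
    the target sits on the midline and each eye rotates inward by
    half the vergence angle. ipd_m is the distance between the eyes
    (~63 mm on average; an illustrative assumption).
    """
    half_angle = math.radians(vergence_deg) / 2.0
    return (ipd_m / 2.0) / math.tan(half_angle)

# A strongly "crossed" gaze means a near target...
near = distance_from_vergence(10.0)   # roughly 0.36 m
# ...while an almost-parallel gaze means a far one.
far = distance_from_vergence(0.5)     # roughly 7.2 m
```

Notice how quickly the angle flattens out with distance: that's why this cue only works well up close.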

Binocular (stereoscopic) vision: each eye gets a slightly different view of the object, and by comparing the two views the brain is able to perceive depth. Binocular vision is why predators have forward-facing eyes, whereas herbivores need close to 360-degree vision to see what is sneaking up on them. https://youtu.be/kw_d5lu0UlY

1. Your eyes point inward a bit when you’re looking at something fairly close, so they’re both looking at the same thing. At the extreme, this is you going cross-eyed when you try to look at the tip of your nose. Your brain measures this angle and can work out distance from it. This works for short to middle distances.

2. Lens focus. The amount our lenses have to adjust to focus on something is an indicator of distance; your brain measures this too and estimates distance from it. This works for short to middle distances.

3. Experience. You know roughly how big certain things are, and when they’re further away they look smaller. Your brain compares expected size with apparent size to gauge how far away something is. This works for longer distances.

Your brain learns how to do this when you’re an infant, clumsily bumping into things or reaching for stuff that’s way out of your reach.
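The experience cue (point 3 above) can be written down as simple geometry: if you know an object's real size, its apparent (angular) size tells you its distance. A minimal sketch in Python, assuming a small-angle pinhole-style model; the function name and numbers are illustrative, not from the thread.

```python
import math

def distance_from_known_size(true_size_m, angular_size_deg):
    """Familiar-size cue: if you know how big something really is,
    how big it *looks* (its angular size) tells you how far it is.
    distance = true size / tan(angular size)."""
    return true_size_m / math.tan(math.radians(angular_size_deg))

# A 1.7 m tall person who subtends only 1 degree of your visual
# field is roughly a hundred metres away.
d = distance_from_known_size(1.7, 1.0)   # about 97 m
```

This is exactly the inference that forced-perspective illusions exploit: feed the brain a wrong "true size" and the distance estimate comes out wrong too.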

This is also how optical illusions like forced perspective work: the setup presents information that makes something seem to be at a different distance than it really is, by tricking size cues or perspective cues. Similarly, this is why it’s very difficult to judge how far away clouds are when you look down at them from an airplane: your lenses are focused at infinity and there’s no frame of reference to compare sizes against, so although the clouds might be many miles away, they can look quite close.

There are a few ways that you can do this, and it’s really quite complicated.

There are things called monocular cues to depth: cues that let you tell the difference between a close object and a far one with only one eye. Occlusion is one example: when one object blocks your view of another, the object being blocked is typically seen as the one further away.

Another monocular cue is relative size. Simply put, smaller things are typically seen as being further away. Relative size is often paired with height in the visual field, where closer things tend to sit nearer the “bottom” of your view and distant things nearer the “top,” and with texture gradients, where surface detail looks finer and denser the further away it is. You also have an implicit understanding of how big or small things “usually” are. If you see a coke can and a person, and they look almost the same height, you can infer that the person is probably very far away, since you have a decent idea of how large a coke can is.

You also have an implicit understanding of the atmosphere. Light from things further away has to pass through more air, so it is more scattered and those things appear fainter and hazier. This is called aerial perspective, and it’s another clue to which things are further away and which are closer to you.

You also have the tool of linear perspective. Imagine a road going off into the distance. The two sides of the road are parallel, but from your viewpoint they appear to converge toward a single point called the vanishing point. The closer things along the road get to the vanishing point, the further away your brain judges them to be.
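The vanishing point falls out of simple perspective projection: a point's horizontal position on your "screen" is its sideways offset divided by its depth, so parallel road edges squeeze toward the midline as depth grows. A small sketch, assuming a pinhole-camera model with made-up numbers.

```python
def project(x_m, z_m, focal=1.0):
    """Pinhole-style perspective projection: horizontal image
    position of a point x metres off-centre at depth z metres.
    The focal length of 1.0 is an arbitrary illustrative choice."""
    return focal * x_m / z_m

# Road edges 2 m either side of you: nearby they project far apart,
# far ahead they have nearly converged on the vanishing point.
left_near, right_near = project(-2, 5), project(2, 5)      # -0.4, 0.4
left_far, right_far = project(-2, 500), project(2, 500)    # -0.004, 0.004
```

Dividing by depth is the whole trick: anything at very large z lands almost exactly on the vanishing point, no matter its sideways offset.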

There are also motion cues that you can use to determine depth. When you’re riding in a car and look out the side window, objects close to the car, like the railing on a bridge, seem to zoom past, while a tree in the distance appears to move much more slowly. This is called motion parallax: you implicitly use your own motion to gain an understanding of depth.
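The car example has a clean geometric core: for a sideways-moving observer, a stationary object's angular speed across the visual field falls off with its distance. A rough sketch under a small-angle assumption; the speeds and distances are illustrative.

```python
import math

def angular_speed_deg_per_s(lateral_speed_mps, distance_m):
    """Motion parallax sketch: angular speed of a stationary object
    for an observer moving sideways at lateral_speed_mps, using the
    small-angle approximation (angular speed = v / d radians/s)."""
    return math.degrees(lateral_speed_mps / distance_m)

# From a car doing 30 m/s: a railing 3 m away whips past at
# hundreds of degrees per second, a tree 300 m away barely moves.
railing = angular_speed_deg_per_s(30, 3)    # ~573 deg/s
tree = angular_speed_deg_per_s(30, 300)     # ~5.7 deg/s
```

The hundredfold difference in angular speed is the signal the brain reads as a hundredfold difference in distance.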

These are all monocular depth cues, and they’re all very basic. They seem obvious to you, but only because they’re so hardwired into your brain. Don’t take them for granted: such a finely tuned visual system took millions of years to evolve.

There’s also stereopsis. I won’t get very much into this unless someone specifically asks, since it gets complicated quickly, but here is the short version. Because of the way your eyes are set up, they see slightly different images. There is an imaginary circle going around your head called the horopter. Without going into the geometry, objects that fall on the horopter are seen by “corresponding” cells on each retina: they fall on the same relative position. As objects get farther from the horopter in either direction, the disparity between the two retinal images gets larger and larger. This is another way that you can tell the difference between close objects and objects that are farther away. How does your brain take the fact that the two eyes see slightly different images and combine them into a single coherent perception of the world? That’s a fascinating question that I won’t go into here, but I’m happy to talk about it.
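Stereo disparity has a machine analogue that makes the geometry concrete: stereo cameras recover depth as focal length times baseline divided by disparity, so bigger disparity means a closer object. A minimal sketch with made-up camera numbers; this is the standard computer-vision triangulation formula, offered here as a rough analogue of stereopsis rather than a model of the brain.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth = focal * baseline / disparity.
    focal_px is the camera focal length in pixels, baseline_m the
    separation between the two cameras (or eyes), disparity_px how
    far the object shifts between the two images. All values below
    are illustrative assumptions."""
    return focal_px * baseline_m / disparity_px

# With a 700 px focal length and an eye-like 63 mm baseline:
near = depth_from_disparity(700, 0.063, 100)   # 0.441 m
far = depth_from_disparity(700, 0.063, 1)      # 44.1 m
```

The inverse relationship also shows why stereo fades at range: past a point, the disparity shrinks below anything the retinas (or pixels) can resolve.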

It’s mostly your brain doing it, not your eyes. /u/OccassionalReddit194 described it well.

A lot of people think it is just because we have two eyes and hence a slightly different view from each eye. That really only works for things that are quite close. For distant objects, it’s pretty much useless, since both eyes get the same view. You can easily test this yourself. Cover one eye and look around. Can you tell what is close and what is far? Yes, you can.

Lens focusing is a pretty minor part of it. People with artificial lenses in their eyes (which do not change focus) can perceive close/far objects just fine. Many people have artificial lenses to cure cataracts (me, for example).

Most of the work is done by your brain. You are constantly updating a model of your environment, figuring out what is close and what is far. Your brain does this in a variety of ways, such as seeing which objects obscure the view of others. But it relies a great deal on experience.

Interestingly, your brain doesn’t keep an exact “image” of things to make this map. It seems to have a more abstract model to let you know just enough about the things that you aren’t actively looking at and concentrating on. Presumably that limits how much “processing power” or memory you need to be aware of your environment, freeing up your brain to do other tasks.

1. You have two eyes that are slightly apart from one another. Anything you look at will be in a slightly different position relative to everything else for one eye than for the other. This is called *parallax*, and it allows the eyes and the relevant brain parts to automatically estimate distance with no other information.
2. You have a mega smart human-brain that remembers about how big common things like human beings and cars and houses are, so you can estimate their distance by apparent size alone.
3. Mega smart human-brain can also remember the geography of places you’ve been to several times and use this as a metric to gauge distance between the people and things that are in those places.

None of these techniques is flawless, because light doesn’t always travel in a straight line, which leads to bizarre optical illusions like mirages. But there’s no obvious way a visual system could correct for that.