This is the final part of the three-part series.
Merely taking a snapshot of what lies ahead of us isn’t enough for most practical purposes. We have to focus on a subject, tracking or observing a moving object, while the rest of the scene needs to be discounted (to prevent confusion).
The brain recognizes whether the eyes are tracking a particular subject or only scanning the environment (i.e. focusing on different objects), so it can conveniently shift back and forth between smooth pursuit (while tracking) and saccades. This is called the optokinetic reflex. That the brain also takes in information from our motion sensors should tell you how much goes on behind the scenes (or the perceived one).
The same system can also produce involuntary eye movements (called nystagmus), which signal that there’s an issue with the system. As for ‘seeing stars’ when you hit your head, the sudden jolt to the brain scrambles the visual feed, resulting in a temporary display of weird shapes and colors.
A system in the primary visual cortex, consisting of an array of columns, tracks an object (the edges of the object, rather), thus helping us to, for instance, catch an object in motion. This is called orientation sensitivity. The secondary visual cortex works on recognizing colors. But this depends on the amount of light it receives, which is why we tend to disagree about what color an object is (it’s subjective!).
Finally, the three-dimensional world we experience so naturally is built from the slightly different images that the brain receives from the two eyes. It also works with a single eye (because, brain!). In fact, it’s the brain that does much of the heavy lifting when you watch a 3D film. The glasses merely deliver two separate images while the brain does the rest.
And like all other senses, we can be fooled by what we see (or what we think we see). The system responsible for recognizing faces can sometimes overreach, causing a rather abstract input to be perceived as a known face (a phenomenon known as pareidolia). It’s also why we perceive the same object differently when it is placed against different backgrounds.
Our brains have a working memory. Similar to the RAM on our devices, there’s only so much it can handle at any given time. The more things there are to focus on, the less attention each receives. This is called the capacity model. Consider driving while using the phone (you should never do that!). Even with both hands on the wheel, the very act of speaking or changing the radio station reduces the attention we need to keep on the road. But it’s not all bad.
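The RAM analogy above can be caricatured in a few lines of code. This is only a toy illustration of the capacity model, not a cognitive model: the fixed budget of 1.0 and the task names are invented for the example, and real attention doesn’t split evenly.

```python
# Toy sketch of the capacity model: a fixed attention budget is split
# across concurrent tasks, so each task gets less as tasks are added.
# The budget value and task names are invented for illustration.

ATTENTION_BUDGET = 1.0  # total attention available at any moment


def attention_per_task(tasks):
    """Split the fixed budget evenly across all current tasks."""
    if not tasks:
        return {}
    share = ATTENTION_BUDGET / len(tasks)
    return {task: share for task in tasks}


print(attention_per_task(["driving"]))                      # full attention on the road
print(attention_per_task(["driving", "talking on phone"]))  # each task gets only half
```

Adding a third task (say, changing the radio station) drops every share to a third of the budget, which is the blunt point of the analogy: the total never grows, only the slices shrink.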
To understand how we devote our attention to a laundry list of tasks, we need to understand exogenous and endogenous cues for attention. Exogenous attention is a bottom-up way of earning your attention. Imagine yourself in the middle of an interesting read (maybe this one) when you hear a loud thud. This external cue makes you notice the source of the sound, tend to the scene (if necessary) and then return to what you were doing.
Endogenous attention is a top-down cue system where the brain starts the proceedings. This approach is effective because your eyes are pointed towards the object of interest: the brain trusts visual cues the most and uses them to drive its attention. When you are driving around, do you experience a sudden shift of focus to a signboard? That’s the conscious brain driving attention towards it.
How well do we manage our attention?
A more evolution-based cue for attention triggers when you hear a sound that may mean danger. The growl of an animal immediately makes you more attentive towards its source, with your conscious brain reorienting itself later on. This is called covert orientation. It’s tied to the survival instinct: the system overrides the conscious brain rather than waiting for a visual cue.
The same system in the brain (the posterior parietal cortex) also helps us observe something in our peripheral vision and take notice of it, but only if it begs for our attention. Otherwise, a mere acknowledgment of the happenings is enough, and the conscious brain need not orient towards it. For example, if you are working with kids, you don’t need to look directly at them all the time, yet you know what they’re doing. It also helps that the brain knows what to expect, so only an unexpected event will trigger your attention. The kids can play, with you knowing full well what to expect and not having to look over every time they speak loudly.
Still, attention is a tricky issue. While it can be divided across different ‘modalities’, like listening to music while driving, the moment a task gets a little more complex, our attention falters. Experience with a task does help us become comfortable doing a couple of them simultaneously. And while even the briefest exposure to an evolutionarily perceived threat can trigger our attention, in mundane settings we can easily miss a lot of information. One study showed how half the people tested failed to notice that the stranger they were talking to had been swapped for another person while briefly shielded from their view.
Human attention, after all, isn’t so effective. Besides, it’s an enigma in itself.
Burnett, D. (2016). The Idiot Brain. 1st ed. London: Guardian Faber Publishing.
David Hubel and Torsten Wiesel’s discovery of orientation sensitivity earned the duo a Nobel Prize in 1981.
A 2013 study at the University of Utah showed that using a hands-free device while driving is just as distracting as using a handheld phone.
An experiment conducted by Dan Simons and Daniel Levin in 1998 involved asking random pedestrians for directions. While the pedestrian was busy looking at the map and giving directions, two people carrying a door passed between them, and the experimenter asking for directions was swapped for another person. In that brief interruption, many of the subjects failed to notice the change. It’s known as the ‘Door Study’.