A study of peripheral vision has given clues about how our brain stitches together a high-quality 'video' from the limited input we get from our eyes.
Cognitive scientist Associate Professor Mark Williams, of Macquarie University in Sydney, and colleagues, report their findings online ahead of print publication in the journal Cortex.
"Our vision is really quite poor yet the representation that we actually see in our mind's eye, that lovely 'video' that we get, is much better than the input that we actually receive," says Williams who is an ARC-funded Queen Elizabeth II Research Fellow.
"We do a lot of 'photoshopping' and this may be the mechanism that is actually doing it."
Peripheral vision is best at picking up things in the dark and detecting movement. Our central vision, on the other hand, is best for focusing and detecting colour.
The conventional understanding is that information from different parts of our retina is processed initially by specific parts of the brain's visual cortex.
The lion's share of this area of the brain is dedicated to signals collected by the foveal (central) region of our retina.
Feeding back and forward
From the visual cortex, the signals are fed forward to other parts of the brain that do the more sophisticated processing of images.
"We have generally thought the frontal lobes do all the interesting stuff - such as deciding what's actually out there and giving you consciousness," says Williams.
But, in more recent years, there have been suggestions that image processing involves the feeding back of information from these higher levels of the brain to lower levels.
For example, physiologists have shown that 80 per cent of the relevant neural connections carry signals backwards, from higher to lower levels, rather than forward.
And in 2007, brain imaging studies by Williams and colleagues showed that when people look at certain things in their peripheral vision, the foveal region of their visual cortex lights up.
Could this be a case in which the peripheral visual system is feeding back to and using a part of our brain that has traditionally been associated with our central visual field?
"According to all the textbooks this shouldn't happen," says Williams.
In their most recent study, Williams and colleagues wanted to confirm that the lighting up of the foveal region of the visual cortex was actually feedback and not just an interesting correlation.
Participants were presented with the same task involving peripheral vision as was used in the first brain imaging study.
But this time, the researchers used transcranial magnetic stimulation (TMS) to disrupt the foveal region of the visual cortex.
When the region was disrupted 350 milliseconds after the participants were presented with the test objects, their responses to the task were less accurate.
Scientists know it takes about 100 milliseconds for a signal to travel from the retina to the visual cortex, so activity disrupted at 350 milliseconds must have arrived long after the initial feed-forward signal.
"So it's gone well past the visual cortex and come back again," says Williams.
He says the peripheral vision system could be feeding back information to the foveal region to give extra information to the central vision.
Alternatively, the peripheral vision system could be using the extra processing power of the foveal region as a kind of "scratch pad" to improve itself.
Regardless, says Williams, the findings show feedback is playing a huge role in processing the images collected by our eyes.
He says the findings could help surgeons understand the implications of removing certain parts of the brain during cancer surgery.
And the findings could also help computer scientists design robots that have more human-like vision, says Williams.