A recent research project led by Jolyon Troscianko, a visual ecologist from the University of Exeter, and Daniel Osorio, a neuroscientist from the University of Sussex, has contributed to the ongoing debate about whether our perception errors related to color, shade, and shape are due to the eye's mechanics or the brain's neurological pathways.
The researchers discovered that specific types of illusions can be traced back to the restrictions of our visual neurons – cells responsible for processing the visual data our eyes receive – as opposed to higher-level processing.
These neurons possess limited bandwidth, and the scientists developed a model demonstrating how this restriction impacts our perception of patterns at various scales, building on prior research that analyzed color perception in animals.
Troscianko explains, "Our eyes convey messages to the brain by altering the speed at which neurons fire. However, there is a maximum speed at which they can fire, an aspect overlooked in previous research, and this limit influences how we perceive color."
The new model suggests that processing limitations and metabolic energy constraints compel neurons to condense the visual data our eyes receive. This compression effect is less noticeable in natural scenery, but significantly alters our perception of simpler patterns.
This is similar to the compression of digital images, where compression artifacts are harder to spot in a real-world photo due to the variation and complexity of pixels. In a digital illustration, however, where lines and borders are clear-cut, these compression artifacts are more visible.
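The masking effect described above can be illustrated with a toy sketch (not the authors' model): coarse quantization stands in for neural compression, a noisy signal stands in for a natural scene, and a clean ramp stands in for a simple illustration. Measured against each signal's own local variation, the same quantization error dominates the smooth ramp but is buried in the noisy one. All names and parameters here are assumptions for illustration only.

```python
# Illustrative sketch: coarse quantization as a stand-in for neural
# compression. The same quantizer distorts a clean gradient visibly
# (banding), while the variation in a noisy "natural" signal masks it.
import random

def quantize(x, levels=8):
    """Round a value in [0, 1] to one of `levels` evenly spaced values."""
    step = 1.0 / (levels - 1)
    return round(x / step) * step

random.seed(0)
n = 1000
smooth = [i / (n - 1) for i in range(n)]  # clean ramp, like a digital illustration
natural = [min(1.0, max(0.0, s + random.gauss(0, 0.1))) for s in smooth]  # noisy, photo-like

def mean_abs_error(signal):
    """Average distortion introduced by the quantizer."""
    return sum(abs(quantize(v) - v) for v in signal) / len(signal)

def local_variation(signal):
    """Average sample-to-sample change: how 'busy' the signal already is."""
    return sum(abs(a - b) for a, b in zip(signal, signal[1:])) / (len(signal) - 1)

# Error relative to the signal's own variation: large for the smooth ramp
# (artifacts stand out), small for the noisy signal (artifacts are masked).
err_smooth = mean_abs_error(smooth) / local_variation(smooth)
err_natural = mean_abs_error(natural) / local_variation(natural)
print(err_smooth, err_natural)
```

Running this shows the relative error on the smooth ramp is far larger than on the noisy signal, matching the intuition that compression artifacts are easier to see in clean graphics than in photographs.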
These findings could also help explain how we perceive the extreme contrasts shown on modern HDR (High Dynamic Range) televisions. In theory, our eyes shouldn't be able to detect the contrast between the lightest white and the darkest black that HDR technology can display.

The researchers propose that neurons have evolved for maximum efficiency: some are designed to detect minute differences in shades, while others are less sensitive to small differences but are more proficient at identifying extensive contrast ranges. This explains why the latest HDR TVs appear more striking.
Troscianko states, "Our model demonstrates how neurons with such restricted contrast bandwidth can combine their signals to allow us to perceive these vast contrasts. However, the information is compressed, leading to visual illusions. This model illustrates how our neurons are precisely evolved to utilize every bit of capacity."
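The division of labor described above can be sketched numerically. The following is a toy model, not the published one: each cell's firing rate is given an assumed saturating (Naka-Rushton-style) form, so a single cell only resolves a narrow band of contrasts, while a sensitive cell and a coarse cell together cover both tiny differences and huge ranges. The constants `high_sens` and `low_sens` are invented for illustration.

```python
# Illustrative sketch (assumed functional form, not the authors' model):
# a neuron's firing rate saturates at r_max, so one cell cannot resolve
# the whole contrast range on its own.

def firing_rate(contrast, c50, r_max=100.0):
    """Saturating response: rate rises with contrast but never exceeds r_max."""
    return r_max * contrast / (contrast + c50)

def sensitivity(contrast, c50, eps=1e-4):
    """How much the firing rate changes for a tiny change in contrast."""
    return (firing_rate(contrast + eps, c50) - firing_rate(contrast, c50)) / eps

high_sens = 0.05  # saturates quickly: resolves minute contrast differences
low_sens = 5.0    # saturates slowly: tracks extreme contrasts coarsely

# At low contrast the sensitive cell responds far more steeply...
print(sensitivity(0.01, high_sens) > sensitivity(0.01, low_sens))  # True
# ...but at high contrast it is saturated, and the coarse cell still signals.
print(sensitivity(10.0, high_sens) < sensitivity(10.0, low_sens))  # True

def pooled_sensitivity(contrast):
    """Combining both cells keeps some sensitivity across the whole range."""
    return sensitivity(contrast, high_sens) + sensitivity(contrast, low_sens)
```

The design point is simply that pooling cells tuned to different ranges preserves information everywhere, at the cost of compressing it, which is where the illusions come in.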
This theory applies to numerous illusions arising from contrast differences. Our perception of color depends largely on context in these situations, and the new model pinpoints the part of our visual processing system responsible for this.
The computational model was tested against human perception of various optical illusions, against responses recorded in primate retinas, and against more than 50 instances of brightness and color phenomena.
Previously, it was believed that other factors, such as our existing knowledge of shapes and objects or eye movements, may be responsible for the potency of optical illusions. These earlier theories may now need reevaluation.
The research has been published in PLOS Computational Biology.