In everyday life, we often encounter situations that demand rapid decisions based on ambiguous sensory information. Consolidating the available evidence requires processing information from more than one sensory modality and exploiting it for multisensory decision-making. For example, the decision to cross a street on a foggy morning will be based on a combination of visual evidence about hazy objects in your field of view and muffled sounds from various sources. The presence of complementary audiovisual information can improve our ability to make perceptual decisions, compared to visual information alone. While recent studies have provided a detailed picture of how different types of uni- and multisensory representations emerge in the brain, they have not provided a conclusive mechanistic account of how the brain encodes and ultimately translates the relevant sensory evidence into a decision. Specifically, it remains unclear whether the perceptual improvements of multisensory decision-making are best explained by a benefit in the early encoding of sensory information, by changes in the efficiency of post-sensory processes such as the accumulation of evidence, or by changes in the amount of evidence that must be accumulated before committing to a choice. Using EEG and computational modelling, we show that it is primarily post-sensory, rather than early sensory, representations that are amplified during rapid audiovisual decision-making, consistent with the emergence of multisensory evidence in higher-order brain areas.
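These competing explanations map naturally onto the parameters of sequential-sampling models of decision-making, such as the drift-diffusion model: the efficiency of evidence accumulation corresponds to the drift rate, and the required amount of accumulated evidence to the decision boundary. Below is a minimal, hypothetical Python sketch (not the paper's actual model, data, or parameter values) illustrating how these two post-sensory mechanisms make distinct behavioural predictions:

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=5000, noise=1.0,
                 dt=0.002, max_t=2.0, seed=0):
    """Simulate a basic drift-diffusion model: evidence starts at 0 and
    accumulates until it hits +boundary (correct) or -boundary (error).
    Returns (accuracy, mean reaction time) over trials that reached a bound."""
    rng = np.random.default_rng(seed)
    n_steps = int(max_t / dt)
    # Each step adds a deterministic drift plus Gaussian diffusion noise.
    steps = drift * dt + noise * np.sqrt(dt) * rng.standard_normal((n_trials, n_steps))
    paths = np.cumsum(steps, axis=1)
    crossed = np.abs(paths) >= boundary
    decided = crossed.any(axis=1)
    first = crossed.argmax(axis=1)                # index of first crossing (0 if never)
    sign = paths[np.arange(n_trials), first] > 0  # which bound was hit first
    accuracy = sign[decided].mean()
    mean_rt = ((first[decided] + 1) * dt).mean()
    return accuracy, mean_rt

# Illustrative (made-up) parameter values for three scenarios:
for label, drift, boundary in [
    ("visual only (baseline)      ", 0.8, 1.0),
    ("AV: more efficient drift    ", 1.2, 1.0),  # faster AND more accurate
    ("AV: lower decision boundary ", 0.8, 0.7),  # faster but LESS accurate
]:
    acc, rt = simulate_ddm(drift, boundary)
    print(f"{label} accuracy = {acc:.3f}, mean RT = {rt:.3f} s")
```

In this toy simulation, a higher drift rate speeds responses while also improving accuracy, whereas a lower boundary speeds responses at the cost of accuracy. Because the mechanisms leave different behavioural signatures, fitting such models to behaviour alongside EEG can, in principle, tease them apart.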
You can access the paper here: