Neurocognition of Decision Making

Imagine standing behind a window on a rainy day waiting to spot a loved one, say dad, returning home after a long day at work. How well and how quickly you recognize him, so that you can run to the door and greet him, depends on how well you can see through that window – in other words, on the overall amount of available sensory evidence. How do you process and combine this incoming information to form a decision? Moreover, imagine that you have been told that dad is coming home with a present (i.e., a form of reward). Would that affect how quickly and how efficiently you process the relevant information?
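To make the idea of accumulating sensory evidence concrete, the sketch below simulates a minimal drift-diffusion process, one standard family of models for this kind of decision. It is purely illustrative: the parameter values, the mapping of "evidence quality" onto drift rate, and the interpretation of the two bounds are assumptions made for the example, not a description of any specific model we use.

```python
# A minimal drift-diffusion sketch of evidence accumulation (illustrative only).
# Noisy sensory samples are summed over time until the running total crosses a
# bound; "evidence quality" (how well you can see through the window) scales
# the drift rate, so poorer evidence yields slower and less accurate decisions.
import numpy as np

def simulate_ddm(drift, bound, noise_sd=1.0, dt=0.001, max_t=3.0, rng=None):
    """Return (choice, reaction_time) for one simulated decision."""
    if rng is None:
        rng = np.random.default_rng()
    evidence, t = 0.0, 0.0
    while t < max_t:
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if evidence >= bound:
            return 1, t      # "that's dad" -- run to the door
        if evidence <= -bound:
            return 0, t      # "not him" -- keep waiting
    return None, max_t       # no commitment within the time limit

rng = np.random.default_rng(0)
clear_view = [simulate_ddm(drift=1.5, bound=1.0, rng=rng) for _ in range(500)]
rainy_view = [simulate_ddm(drift=0.3, bound=1.0, rng=rng) for _ in range(500)]
for label, trials in [("clear", clear_view), ("rainy", rainy_view)]:
    decided = [(c, rt) for c, rt in trials if c is not None]
    accuracy = np.mean([c == 1 for c, _ in decided])
    mean_rt = np.mean([rt for _, rt in decided])
    print(f"{label}: accuracy = {accuracy:.2f}, mean RT = {mean_rt:.2f} s")
```

In this framing, the promised present could, for example, be modeled as a lowered decision bound or a shifted starting point, trading speed against accuracy.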

Now consider the way we make preference- or value-based decisions. In many of these instances the amount of sensory information remains unchanged, but the subjective value we assign to the different options changes. How do we weigh the pros and cons of the various alternatives? More generally, how do we combine different sources of probabilistic information to make decisions that are more likely to lead to a reward? Naturally, reinforcement learning (i.e., our ability to learn through trial and error and feedback) is pertinent here as well. How does our prior experience with the available options help us make better choices in the future? Importantly, what is the influence of social factors, such as peer or professional advice, on the way we make value-based decisions?
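As a toy illustration of learning through trial and error and feedback, the sketch below implements a simple delta-rule (Rescorla-Wagner style) value learner with softmax choice. The two options, their reward probabilities, and the learning-rate and temperature parameters are all made up for the example.

```python
# A minimal delta-rule sketch of value-based choice: each option keeps a
# learned value estimate, choices are made with a softmax over those values,
# and reward feedback nudges the chosen value toward the obtained outcome.
import numpy as np

rng = np.random.default_rng(1)
true_reward_prob = np.array([0.8, 0.2])   # two options, one objectively better
values = np.zeros(2)                      # learned value estimates
alpha, beta = 0.2, 3.0                    # learning rate, choice "inverse temperature"

for trial in range(200):
    # Softmax: higher-valued options are chosen more often, but not always.
    p_choice = np.exp(beta * values) / np.exp(beta * values).sum()
    choice = rng.choice(2, p=p_choice)
    reward = rng.random() < true_reward_prob[choice]
    # Prediction error drives learning: outcome minus what we expected.
    prediction_error = float(reward) - values[choice]
    values[choice] += alpha * prediction_error

print("learned values:", np.round(values, 2))  # should approach [0.8, 0.2]
```

The prediction error in this update, the gap between obtained and expected reward, is also the kind of trial-by-trial quantity that model-based neuroimaging (described further below) seeks to localize in the brain.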

How about the influence of more abstract variables, such as choice confidence, that also play into the way we make simple everyday decisions? Imagine, for example, running in the park on a foggy day trying to discern whether the person across the lawn is an old friend. The decision to keep concentrating on your stride or to change direction and go greet her depends on your level of confidence that it is really her. Choice confidence provides a probabilistic assessment of expected outcome (i.e., a degree of belief) and as such can play a key role in how we adjust to ever-changing environments, learn from trial and error, make better predictions, and plan future actions. These scenarios are representative of some of the main neuroscientific questions our lab is currently pursuing (visit our Publications page for recent manuscripts). To address these questions we have devised a multimodal approach (see below) that allows us to expose the brain networks involved in human decision making as well as the mechanistic details of the underlying neural computations.
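Before turning to that approach: one common way to formalize confidence as a degree of belief (a sketch, not a statement of our specific models) is as the posterior probability that the chosen option is correct given the evidence accumulated so far. Under simple Gaussian-evidence assumptions this reduces to a logistic function of the evidence total, as in the toy example below; the "sensitivity" scaling is a made-up parameter.

```python
# Toy mapping from accumulated evidence to choice confidence (illustrative):
# confidence is read out as the posterior probability that the choice is
# correct, here a logistic function of the absolute accumulated evidence.
import numpy as np

def confidence_from_evidence(evidence_total, sensitivity=2.0):
    """Map signed accumulated evidence onto P(choice is correct)."""
    return 1.0 / (1.0 + np.exp(-sensitivity * abs(evidence_total)))

for e in [0.1, 0.5, 2.0]:   # foggy -> clearer views of the person across the lawn
    print(f"evidence {e:.1f} -> confidence {confidence_from_evidence(e):.2f}")
```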

Multimodal Neuroimaging Approach

Our general research approach relies heavily on the fusion of two major disciplines: cognitive neuroscience and engineering. Cognitive neuroscience provides the foundation upon which the critical hypotheses about how the brain works to support behavior are framed. Engineering, on the other hand, lends itself to finding new and more sophisticated ways to collect, analyze, and ultimately decode behavioral and neural data. The computational techniques used in our lab are motivated by classical problems in signal processing, machine learning, and statistical pattern recognition.

Our ultimate goal is to go beyond mere “brain mapping”, to look instead for distributed neural representations, and to decipher how the flow of information through a “network” can lead to changes in behavior. One way we tackle this goal is through simultaneous EEG/fMRI experiments, which can provide high-spatial- and high-temporal-resolution information about neural function at the same time. Importantly, linking fMRI brain activations with temporally specific EEG components would help infer the causal interactions of the underlying network, which would otherwise have been difficult to discern with either modality alone (see illustration below; left).
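As a schematic of how such an EEG-fMRI linkage can be set up (a simplified sketch with synthetic data and hypothetical timings, not our actual pipeline), single-trial amplitudes of an EEG component can be convolved with a hemodynamic response function and entered as a parametric regressor in a general linear model of a voxel's BOLD time course:

```python
# Schematic EEG-informed fMRI analysis on synthetic data (illustrative only):
# single-trial EEG component amplitudes are placed at trial onsets, convolved
# with a simple hemodynamic response function, downsampled to scan times, and
# regressed against a voxel's BOLD signal to ask whether trial-to-trial EEG
# variability explains trial-to-trial BOLD variability.
import numpy as np

TR, n_scans = 2.0, 300                       # hypothetical fMRI timing
trial_onsets = np.arange(10, 580, 12.0)      # hypothetical trial onsets (seconds)
rng = np.random.default_rng(2)
eeg_amplitudes = rng.standard_normal(len(trial_onsets))   # single-trial EEG component

def hrf(t):
    """A simple gamma-variate hemodynamic response (illustrative, peaks ~5 s)."""
    h = (t ** 5) * np.exp(-t)
    return h / h.max()

# Amplitude-weighted stick function at high temporal resolution, then convolve.
dt = 0.1
timeline = np.zeros(int(n_scans * TR / dt))
for onset, amp in zip(trial_onsets, eeg_amplitudes):
    timeline[int(onset / dt)] += amp
regressor = np.convolve(timeline, hrf(np.arange(0, 30, dt)))[:len(timeline)]
regressor_tr = regressor[::int(TR / dt)]     # sample at scan acquisition times

# GLM with an intercept and the EEG-informed parametric regressor.
bold = 0.5 * regressor_tr + rng.standard_normal(n_scans)   # synthetic voxel
X = np.column_stack([np.ones(n_scans), regressor_tr])
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(f"estimated EEG-informed beta: {beta[1]:.2f}")
```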

In addition, we capitalize on the power of computational models to describe behavior and use the models’ predictions to inform the analysis of our neuroimaging data (e.g., EEG/MEG, fMRI, TMS). Crucially, model-based neuroimaging has the potential to provide a mechanistic account of the neural processes under consideration by identifying when and where the various model parameters, which instantiate the underlying neural computations, are implemented in the brain (see illustration above; right).
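A minimal sketch of this model-based step, using a delta-rule learning model as an example (the model, the fitting procedure, and the synthetic behavioral data below are illustrative choices, not a description of a specific study): fit the model to a subject's choices, then replay it to recover the trial-by-trial latent quantities, here reward prediction errors, that subsequently serve as parametric regressors in the EEG/fMRI analysis.

```python
# Sketch of model-based regressor construction (illustrative data and model):
# 1) fit a delta-rule learner with softmax choice to a choice/feedback record,
# 2) replay the fitted model to extract its trial-by-trial prediction errors,
#    which would then be used as parametric regressors for EEG/fMRI.
import numpy as np
from scipy.optimize import minimize

def negative_log_likelihood(params, choices, rewards):
    """-log p(choices | alpha, beta) for a delta-rule learner with softmax choice."""
    alpha, beta = params
    values, nll = np.zeros(2), 0.0
    for c, r in zip(choices, rewards):
        p = np.exp(beta * values) / np.exp(beta * values).sum()
        nll -= np.log(p[c] + 1e-12)
        values[c] += alpha * (r - values[c])
    return nll

def trialwise_prediction_errors(params, choices, rewards):
    """Replay the fitted model to recover its latent prediction-error sequence."""
    alpha, _ = params
    values, pes = np.zeros(2), []
    for c, r in zip(choices, rewards):
        pes.append(r - values[c])
        values[c] += alpha * pes[-1]
    return np.array(pes)

# Synthetic behavior from a simulated learner (stand-in for a real subject).
rng = np.random.default_rng(3)
true_alpha, true_beta, q = 0.3, 4.0, np.zeros(2)
choices, rewards = [], []
for _ in range(150):
    p = np.exp(true_beta * q) / np.exp(true_beta * q).sum()
    c = int(rng.choice(2, p=p))
    r = float(rng.random() < (0.8 if c == 0 else 0.2))
    choices.append(c); rewards.append(r)
    q[c] += true_alpha * (r - q[c])

fit = minimize(negative_log_likelihood, x0=[0.3, 1.0], args=(choices, rewards),
               bounds=[(0.01, 1.0), (0.1, 10.0)])
pes = trialwise_prediction_errors(fit.x, choices, rewards)
print("fitted (alpha, beta):", np.round(fit.x, 2),
      "| first prediction errors:", np.round(pes[:3], 2))
```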

Finally, we design multivariate data analysis techniques to take advantage of the distributed nature (in both space and time) of the brain signals of interest (i.e., we are interested in networks) and to extract and exploit inter-trial and inter-subject response variability. We then use these techniques in combination with neuroimaging to identify distributed neural representations of interest and to uncover latent brain states that would likely have remained unobserved with more conventional (e.g., univariate) analysis tools. For more details, visit our Publications page for manuscripts representative of this approach.
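The sketch below illustrates the basic logic on synthetic EEG epochs (the channel counts, window sizes, and embedded effect are all made up): a linear classifier trained across all channels within a sliding time window picks up a distributed spatial pattern that no single electrode would reveal on its own, which is the sense in which the analysis is multivariate rather than univariate.

```python
# Single-trial multivariate decoding on synthetic EEG epochs (illustrative):
# a logistic-regression classifier is trained across all channels within a
# sliding time window, so the discriminating signal is a distributed spatial
# pattern rather than the activity of any single electrode.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n_trials, n_channels, n_times = 200, 32, 100     # hypothetical EEG epochs
labels = rng.integers(0, 2, size=n_trials)       # e.g., two task conditions
eeg = rng.standard_normal((n_trials, n_channels, n_times))
# Embed a weak condition difference as a pattern distributed over channels.
pattern = rng.standard_normal(n_channels) * 0.3
eeg[labels == 1, :, 40:60] += pattern[:, None]

window_scores = []
for start in range(0, n_times - 10, 10):          # sliding 10-sample windows
    X = eeg[:, :, start:start + 10].mean(axis=2)  # window-averaged channel features
    clf = LogisticRegression(max_iter=1000)
    window_scores.append(cross_val_score(clf, X, labels, cv=5).mean())

best = int(np.argmax(window_scores))
print(f"peak decoding accuracy {window_scores[best]:.2f} "
      f"in window starting at sample {best * 10}")
```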

Our research is supported by generous contributions from the following funding bodies: