Publication date: 31 March 2026
University: Vrije Universiteit Amsterdam

Visual Attention and Dopamine in Value-Based Learning

Summary

The research presented in this thesis explores the behavioural and brain mechanisms underlying the roles of visual attention and dopamine in value-based learning and decision-making. This is accomplished using a variety of behavioural, eye-tracking and fMRI methods. Based on a series of experiments, we show that:

1) rewards in the environment are distracting and lead to changes in eye-movement behaviour, thereby interfering with current goals.
2) the learned association between rewarding objects and certain motor behaviours, e.g. an eye movement to the reward, does not translate to other motor output to obtain the reward, e.g. a manual response.
3) lower dopamine levels in Parkinson’s disease (PD) patients OFF compared to ON medication lead to an increased sensitivity to negative outcomes, i.e. higher negative learning rates, during learning, which in turn affects subsequent decision-making based on the item values accrued during learning.
4) less dopamine in PD OFF than ON leads to greater tracking of negative reward prediction errors (RPEs) in the striatum.
5) less dopamine in PD OFF than ON results in better classification of good and bad outcomes in visual object-selective cortex (OSC), but worse classification in higher-level dorsolateral prefrontal cortex (DLPFC) and putamen.
6) changes in grey matter volume (GMV) in the putamen of PD OFF are associated with the extent of distractibility and with negative learning rates.

In chapter 2, participants performed a fast-paced eye-tracking task in which they had to follow a black target circle around the screen. A distractor circle was presented on each trial, signalling the level of reward that could be obtained in that trial: high, low or no reward. Crucially, rewards could only be earned if the eyes moved correctly to the target, and not to the distractor. We found increased oculomotor capture by distractors in accordance with their level of reward association, i.e. more saccades were made to the distractor signifying high compared to low and no reward, even though these erroneous saccades resulted in losing the associated reward. Despite the detrimental effect of looking at distractors, participants were unable to suppress it, providing strong evidence that distractors signalling potential reward in the environment capture the eyes against the top-down, goal-oriented intentions of the observer. Moreover, this effect grew stronger over time.

In chapter 3, we investigated whether reward-based spatial priority maps transfer from overt to covert attention. We showed that, during learning, cueing of rewarded compared to unrewarded locations led to progressively shorter saccade latencies. In a subsequent test phase, the eyes had to remain fixated while stimuli were presented at multiple spatial locations, thereby competing for covert attentional priority. Surprisingly, the reaction-time (RT) improvement for targets presented at the high-reward location was significantly smaller than that for the low-reward location.

In chapter 4, PD patients performed a probabilistic classification reinforcement-learning task, along with a decision-making task. Using a Bayesian hierarchical reinforcement-learning model, we found that patients were more sensitive to negative outcomes when off dopaminergic medication than when on it. We also found medication-related changes in brain activation that covaried with reward prediction errors (RPEs) in several regions, most notably the dorsal striatum: PD OFF showed greater negative RPE-related activation than PD ON in the caudate nucleus.
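The asymmetric-sensitivity idea can be illustrated with a minimal sketch. This is not the thesis's actual Bayesian hierarchical model, just a simple delta-rule learner with separate learning rates for positive and negative RPEs; the names `alpha_pos` and `alpha_neg` and the outcome sequence are illustrative assumptions:

```python
import numpy as np

def simulate_asymmetric_learner(outcomes, alpha_pos, alpha_neg, v0=0.5):
    """Track the value of a single stimulus, updating with separate
    learning rates for positive and negative reward prediction errors."""
    v = v0
    values, rpes = [], []
    for r in outcomes:
        rpe = r - v                          # reward prediction error
        alpha = alpha_pos if rpe >= 0 else alpha_neg
        v = v + alpha * rpe                  # asymmetric delta-rule update
        values.append(v)
        rpes.append(rpe)
    return np.array(values), np.array(rpes)

# A learner with a higher negative learning rate (as estimated for PD OFF)
# ends up with a lower value estimate after mixed outcomes than a
# symmetric learner exposed to the same sequence.
outcomes = [1, 0, 1, 0, 0, 1, 0, 1]
v_off, _ = simulate_asymmetric_learner(outcomes, alpha_pos=0.2, alpha_neg=0.6)
v_sym, _ = simulate_asymmetric_learner(outcomes, alpha_pos=0.4, alpha_neg=0.4)
print(v_off[-1] < v_sym[-1])  # → True
```

With a higher negative learning rate, losses pull the value estimate down more strongly than wins pull it up, which is one way increased sensitivity to negative outcomes can bias subsequent value-based choices.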

In chapter 5 we probed the brain and behavioural similarities in PD between learning from reinforcements and attentional capture by task-irrelevant stimuli. We found that PD ON were better at ignoring distracting stimuli than PD OFF. We identified significant medication-driven interactions between fronto-striatal regions and visual object-selective cortex (OSC); specifically, we found greater classification accuracy for PD ON compared to OFF in both DLPFC and the putamen, but lower classification accuracy in ON compared to OFF in visual OSC. Based on this finding, we suggest that valence processing of reinforcements may occur more at a bottom-up, visual level in PD OFF but more at a top-down, reward-driven, and working-memory level in fronto-striatal regions in PD ON.
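To make "classification accuracy" concrete, here is a simplified stand-in for this kind of multivariate decoding analysis. It uses synthetic "voxel patterns" and a leave-one-out nearest-centroid classifier, not the actual fMRI data or pipeline from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "voxel patterns": 40 trials x 50 voxels, with good vs bad
# outcomes separated by a small mean shift (purely illustrative data).
n_trials, n_voxels = 40, 50
labels = np.repeat([0, 1], n_trials // 2)      # 0 = bad, 1 = good outcome
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1] += 0.8                   # add class separation

def loo_nearest_centroid(X, y):
    """Leave-one-out decoding: classify each held-out trial by the
    nearer class centroid computed from the remaining trials."""
    correct = 0
    for i in range(len(y)):
        train = np.arange(len(y)) != i
        c0 = X[train & (y == 0)].mean(axis=0)
        c1 = X[train & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0))
        correct += pred == y[i]
    return correct / len(y)

acc = loo_nearest_centroid(patterns, labels)
print(round(acc, 2))   # accuracy well above the 0.5 chance level
```

In this framework, "greater classification accuracy in DLPFC for PD ON" means that outcome valence can be decoded more reliably from that region's activity patterns, under the hedged assumption that decoding accuracy reflects how strongly the region represents valence.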

Taken together, findings from the research presented in this dissertation highlight the combined functions of visual attention, eye movements, and dopamine in learning from reinforcements. Cognitive control exerted by frontal brain regions, such as the DLPFC, and dopamine-driven reward signals in the striatum, interact with lower-level visual regions to maintain a balance in the processing of rewards to produce desirable goal-directed behaviours according to the current context.
