Corticostriatal Responses to Social Reward are Linked to Trait Reward Sensitivity and Subclinical Substance Use in Young Adults

Aberrant levels of reward sensitivity have been linked to substance use disorder and are characterized by alterations in reward processing in the ventral striatum (VS). Less is known about how reward sensitivity and subclinical substance use relate to striatal function during social rewards (e.g., positive peer feedback). Testing this relation is critical for predicting risk for development of substance use disorder. In this pre-registered study, participants (N=44) underwent fMRI while completing well-matched tasks that assess neural response to reward in social and monetary domains. Contrary to our hypotheses, aberrant reward sensitivity blunted the relationship between substance use and striatal activation during receipt of rewards, regardless of domain. Moreover, exploratory whole-brain analyses showed unique relations between substance use and social rewards in temporoparietal junction. Psychophysiological interactions demonstrated that aberrant reward sensitivity is associated with increased connectivity between the VS and ventromedial prefrontal cortex during social rewards. Finally, we found that substance use was associated with decreased connectivity between the VS and dorsomedial prefrontal cortex for social rewards, independent of reward sensitivity. These findings demonstrate nuanced relations between reward sensitivity and substance use, even among those without substance use disorder, and suggest altered reward-related engagement of cortico-VS responses as potential predictors of developing disordered behavior.

Social Reward Task. The social task was identical to the monetary task, except that images of gender-matched (i.e., two female faces or two male faces) and age-matched peers were presented instead of doors. The social reward task consisted of 120 images compiled from multiple sources (internet databases of non-copyrighted images of college-aged individuals). The pictures of purported peers had positive facial expressions, were cropped so that individuals were pictured from the shoulders up, and were edited to have an identical solid gray background. Smiling faces were used because they are common in social reward tasks (Richards et al., 2013; Jarcho et al., 2015; Distefano et al., 2018) and are subject to less misinterpretation than neutral faces (Rapee and Heimberg, 1997; Davis et al., 2016). Images were constrained to a standard size (aspect ratio: 11.2 width, 17.14 height). There were an equal number of trials with male and female peers across the reward and loss conditions (30 pairs each, 60 total). Participants were instructed to choose the peer who had liked them based on a photograph of the participant. On reward trials, feedback was a green arrow pointing upward, indicating that the participant had correctly selected the person who said they would like the participant. On loss trials, feedback was a red arrow pointing downward, indicating that the participant had incorrectly selected the person who said they would dislike the participant. On trials where the participant failed to respond, feedback was selected at random.

Preprocessing of Neuroimaging Data
The following information is adapted from the preprocessing description generated by fMRIPrep; extraneous details were omitted for clarity.
Anatomical data preprocessing. The T1-weighted (T1w) image was corrected for intensity non-uniformity (INU) with `N4BiasFieldCorrection`, distributed with ANTs 2.3.3, and used as the T1w reference throughout the workflow. The T1w reference was then skull-stripped with a Nipype implementation of the `antsBrainExtraction.sh` workflow (from ANTs), using OASIS30ANTs as the target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w image using `fast` (FSL 5.0.9). Volume-based spatial normalization to one standard space (MNI152NLin2009cAsym) was performed through nonlinear registration with `antsRegistration` (ANTs 2.3.3), using brain-extracted versions of both the T1w reference and the T1w template. The following template was selected for spatial normalization: ICBM 152 Nonlinear Asymmetrical template version 2009c (TemplateFlow ID: MNI152NLin2009cAsym).
Functional data preprocessing. For each of the BOLD runs per subject, the following preprocessing steps were performed. First, a reference volume and its skull-stripped version were generated by aligning and averaging one single-band reference (SBRef). A B0 non-uniformity map (or fieldmap) was estimated from a phase-difference map calculated with a dual-echo gradient-recalled echo (GRE) sequence, processed with a custom SDCFlows workflow inspired by the `epidewarp.fsl` script (http://www.nmr.mgh.harvard.edu/~greve/fbirn/b0/epidewarp.fsl) and further improvements in the HCP Pipelines. The fieldmap was then co-registered to the target echo-planar imaging (EPI) reference run and converted to a displacements field map (amenable to registration tools such as ANTs) with FSL's `fugue` and other SDCFlows tools. Based on the estimated susceptibility distortion, a corrected EPI reference was calculated for more accurate co-registration with the anatomical reference. The BOLD reference was then co-registered to the T1w reference using `flirt` (FSL 5.0.9) with the boundary-based registration cost function. Co-registration was configured with nine degrees of freedom to account for distortions remaining in the BOLD reference. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) were estimated before any spatiotemporal filtering using `mcflirt`. BOLD runs were slice-time corrected using `3dTshift` from AFNI 20160207. The BOLD time series (including slice-timing correction when applied) were resampled onto their original, native space by applying a single, composite transform to correct for head motion and susceptibility distortions; these resampled BOLD time series will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The BOLD time series were also resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. Several confounding time series were calculated based on the preprocessed BOLD, notably including framewise displacement (FD).
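For readers unfamiliar with the FD confound, it is conventionally computed following Power et al. (2012) as the sum of absolute frame-to-frame differences of the six realignment parameters, with rotations converted to arc length on a 50 mm sphere. The sketch below is purely illustrative (it is not fMRIPrep's implementation, and it assumes the input orders translations before rotations):

```python
import numpy as np

def framewise_displacement(motion_params, radius=50.0):
    """Power-style FD from a (T, 6) array of realignment parameters:
    three translations (mm) followed by three rotations (radians).

    Rotations are converted to millimeters of arc on a sphere of the
    given radius before summing absolute frame-to-frame differences.
    """
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, 3:] *= radius                          # radians -> mm of arc
    diffs = np.abs(np.diff(params, axis=0))          # frame-to-frame changes
    fd = np.concatenate([[0.0], diffs.sum(axis=1)])  # first frame has FD = 0
    return fd
```

For example, a 1 mm translation between consecutive frames yields FD = 1.0, while a 0.1 rad rotation alone yields FD = 5.0 (0.1 x 50 mm).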
Additionally, a set of physiological regressors were extracted to allow for component-based noise correction (CompCor). Principal components were estimated after high-pass filtering the preprocessed BOLD time series (using a discrete cosine filter with a 128 s cut-off) for anatomical component correction (aCompCor). For aCompCor, three probabilistic masks (CSF, WM, and combined CSF+WM) were generated in anatomical space. The implementation differs from that of Behzadi et al. in that, instead of eroding the masks by 2 pixels in BOLD space, the aCompCor masks are subtracted from a mask of pixels that likely contain a volume fraction of GM. This mask is obtained by thresholding the corresponding partial volume map at 0.05, and it ensures components are not extracted from voxels containing a minimal fraction of GM. Finally, these masks were resampled into BOLD space and binarized by thresholding at 0.99 (as in the original implementation). Components were also calculated separately within the WM and CSF masks. For each CompCor decomposition, the k components with the largest singular values were retained, such that the retained components' time series sufficed to explain 50 percent of variance across the nuisance mask (CSF, WM, combined, or temporal); the remaining components were dropped from consideration. The head-motion estimates calculated in the correction step were also placed within the corresponding confounds file. All resamplings were performed with a single interpolation step by composing all the pertinent transformations (i.e., head-motion transform matrices, susceptibility distortion correction when available, and co-registrations to anatomical and output spaces). Gridded (volumetric) resamplings were performed using `antsApplyTransforms` (ANTs), configured with Lanczos interpolation to minimize the smoothing effects of other kernels.
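The 50-percent-variance retention rule amounts to a truncated principal component analysis of the nuisance-mask time series. A minimal NumPy sketch of this idea (illustrative only, not fMRIPrep's code; the mask extraction and filtering steps described above are assumed to have already been applied):

```python
import numpy as np

def compcor_components(data, variance_threshold=0.5):
    """Illustrative CompCor-style decomposition.

    `data`: (T, V) array of high-pass-filtered time series from a
    nuisance mask (e.g., CSF+WM voxels). Returns the smallest set of
    principal-component time series whose singular values explain at
    least `variance_threshold` of the variance across the mask.
    """
    centered = data - data.mean(axis=0)              # remove voxel means
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    explained = np.cumsum(s**2) / np.sum(s**2)       # cumulative variance
    k = int(np.searchsorted(explained, variance_threshold) + 1)
    return u[:, :k] * s[:k]                          # (T, k) component series
```

The returned columns would then be entered as nuisance regressors alongside the head-motion estimates in the confounds file.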
Many internal operations of fMRIPrep use Nilearn 0.6.2, mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep's documentation (https://fmriprep.readthedocs.io/en/latest/workflows.html).
Further, we applied spatial smoothing with a 5 mm full-width at half-maximum (FWHM) Gaussian kernel using FEAT (FMRI Expert Analysis Tool) Version 6.00, part of FSL (FMRIB's Software Library, www.fmrib.ox.ac.uk/fsl). Non-brain removal using BET (Smith, 2002) and grand-mean intensity normalization of the entire 4D dataset by a single multiplicative factor were also applied.
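Note that while FEAT parameterizes the smoothing kernel by its FWHM, some FSL command-line tools (e.g., `fslmaths -s`) expect the Gaussian standard deviation (sigma) instead; the two are related by FWHM = 2 sqrt(2 ln 2) sigma. A one-line conversion helper (illustrative):

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian kernel's FWHM (mm) to its standard deviation
    (sigma, mm): FWHM = 2*sqrt(2*ln 2) * sigma ~= 2.3548 * sigma.
    """
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```

For the 5 mm FWHM used here, this gives sigma of roughly 2.12 mm.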