Research Materials

On this page, you will find image sets, materials, and stimuli sets created for my research. You are welcome to download these sets and use them in your own work. If you do, please cite the appropriate work.

The Faces Articulating Consistent Emotions Stimuli (FACES) Set


The full image set and an Excel file containing the rating values can be downloaded at this link.
Please cite: March, D. S., Gaertner, L., & Olson, M. A. (2021). Danger or dislike: Distinguishing threat from valence as sources of automatic anti-Black bias. Unpublished manuscript.

I gathered 20 neutral faces of each race (Asian, Black, White) from the Chicago Face Database (Ma et al., 2015). To create four expression categories (angry, happy, neutral, and sad) for each face, I created templates within FaceGen (a face-morphing program) corresponding to emotional expressions described by the Facial Action Coding System (Ekman & Friesen, 1978). I applied each template to every neutral face to ensure that a given expression displayed roughly equal intensity across faces (e.g., all angry faces were equally angry). This process involved several steps: (1) One at a time, each neutral face was imported into FaceGen and overlaid on a generic 3D template head. (2) The imported neutral face was then exported so that it matched the look of the emotionally morphed faces (in terms of digitization). (3) Each emotion template was applied to the neutral face, at which point (4) each newly morphed emotional face was exported.

This process resulted in 60 faces of each expression and 80 faces of each race (240 total). These images were uniformly cropped to 450 x 650 pixels. One hundred sixty-five participants rated how angry, happy, or sad each face looked. I excluded 5 participants who responded faster than 500 ms on more than 20% of their trials, leaving 160 participants who provided 38,053 ratings. I then deleted individual ratings faster than 500 ms (n = 666, 1.75%) or slower than 10,000 ms (n = 279, 0.73%), resulting in 37,108 usable ratings. Based on visual examination, I excluded 12 models (each model's 4 faces; 2 Asian, 3 Black, and 7 White) due to face-morphing that caused them to appear abnormal (e.g., double nose, teeth bared, severe eye occlusion).
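The two trial-level exclusion rules just described can be sketched in code. This is a hypothetical illustration, not the script actually used; the data layout and all names (`clean_ratings`, `pid`, `rt_ms`, `rating`) are assumptions of mine.

```python
from collections import defaultdict

def clean_ratings(trials, fast_ms=500, slow_ms=10_000, max_fast_prop=0.20):
    """Apply the two exclusion rules described above (illustrative sketch).

    `trials` is a list of dicts with hypothetical keys 'pid' (participant),
    'rt_ms' (response time in ms), and 'rating'.
    """
    # Rule 1: drop whole participants who responded faster than `fast_ms`
    # on more than `max_fast_prop` of their trials.
    counts = defaultdict(lambda: [0, 0])          # pid -> [fast trials, total trials]
    for t in trials:
        counts[t["pid"]][1] += 1
        if t["rt_ms"] < fast_ms:
            counts[t["pid"]][0] += 1
    excluded = {pid for pid, (fast, total) in counts.items()
                if fast / total > max_fast_prop}

    # Rule 2: from the remaining participants, delete individual ratings
    # faster than `fast_ms` or slower than `slow_ms`.
    return [t for t in trials
            if t["pid"] not in excluded and fast_ms <= t["rt_ms"] <= slow_ms]
```

In this sketch a participant is excluded before their individual trials are filtered, matching the order of the two steps in the text.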

I calculated a mean score of each rating type for each face, such that every face had 3 mean ratings. I then created Z-scores for each face within its emotional expression × rating type grouping, using that group's mean and standard deviation. For example, within the anger images, separate Z-scores were created for each rating of anger, happiness, and sadness, rendering three Z-scores for each face. I subsequently deleted 6 models (2 Asian, 1 White, and 3 Black) for whom the mean rating of any one of their three emotional faces fell more than 2 SD above or below any of their three group means. I then excluded three models (3 Asian) whose neutral faces were rated more than 2 SD above the neutral group mean on any rating, leaving 14 Asian, 14 Black, and 12 White models. Lastly, I visually excluded models until each race group contained the 10 models used in the subsequently described study (see below for example images).
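The within-group Z-scoring step can be sketched as follows. This is a minimal illustration assuming a flat list of per-face mean ratings; the function and key names (`zscores_by_group`, `face`, `expression`, `rating_type`, `mean_rating`) are hypothetical, not from the original materials.

```python
from statistics import mean, stdev

def zscores_by_group(face_means):
    """Z-score each face's mean rating within its (expression, rating type) group.

    `face_means` is a list of dicts with hypothetical keys 'face',
    'expression', 'rating_type', and 'mean_rating'. Returns a dict mapping
    (face, rating_type) to its Z-score within that expression's group.
    """
    # Collect the mean ratings belonging to each expression x rating-type group.
    groups = {}
    for r in face_means:
        groups.setdefault((r["expression"], r["rating_type"]), []).append(r["mean_rating"])

    # Each group's own mean and standard deviation define its Z-scores.
    stats = {k: (mean(v), stdev(v)) for k, v in groups.items()}

    out = {}
    for r in face_means:
        m, sd = stats[(r["expression"], r["rating_type"])]
        out[(r["face"], r["rating_type"])] = (r["mean_rating"] - m) / sd
    return out
```

Faces whose resulting Z-scores exceed ±2 would then be flagged for the exclusions described above.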



Threat, Negative, Neutral, and Positive Stimuli Sets


The full image set and an Excel file containing the rating values can be downloaded at this link.
Please cite: March, D. S., Gaertner, L., & Olson, M. A. (2017). In harm's way: On preferential response to threatening stimuli. Personality and Social Psychology Bulletin, 43, 1519-1529.

I conducted a pilot study to obtain stimuli that are experienced as threatening, nonthreatening-negative, positive, or neutral. I collected 400 images from public sources on the Internet, the International Affective Picture System (Lang, Bradley, & Cuthbert, 2008), the Bank of Standardized Stimuli (Brodeur, Dionne-Dostie, Montreuil, & Lepage, 2010), and images provided by Kveraga et al. (2015). I scaled all images to 500 x 500 pixels.

One hundred forty-nine undergraduates rated the 400 images (presented in a random order) on one of three randomly assigned dimensions: how good (n = 50), bad (n = 51), or threatening (n = 48) they deemed each image (1 = "Not at All" to 7 = "Extremely").

I computed each image's mean rating of good, bad, and threatening, and, based on those ratings, assigned each image to one of four categories: positive, neutral, nonthreatening-negative, or threatening. Positive category images (n = 94) had bad and threat ratings less than 2 and good ratings greater than 5. Neutral category images (n = 92) had bad and threat ratings less than 2 and good ratings less than 5. Nonthreatening-negative category images (n = 77) had good ratings less than 3, bad ratings greater than 3, and threat ratings less than 4. Threat category images (n = 92) had good ratings less than 3, bad ratings greater than 3, and threat ratings greater than 4. I eliminated 45 images that could not be categorized, as well as categorized images that (a) were rendered ambiguous when scaled to 300 x 300 pixels (which was necessary for Study 1), (b) were natively too bright or dark to equate luminance across sets, or (c) could shift categories based on context (e.g., a plant could shift from neutral to positive if co-occurring with other positive stimuli). This yielded a final set in which the four categories were equated on luminance and red value.
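The category rules above amount to a simple threshold function, sketched here for clarity. The function and argument names are mine, not part of the original materials; only the cutoff values come from the text.

```python
def categorize(good, bad, threat):
    """Assign an image to a category from its mean good/bad/threat ratings.

    Thresholds are the cutoffs stated in the text; images satisfying no
    rule return None (uncategorizable).
    """
    if bad < 2 and threat < 2:          # low bad and low threat ...
        if good > 5:
            return "positive"           # ... plus high good -> positive
        if good < 5:
            return "neutral"            # ... plus modest good -> neutral
    if good < 3 and bad > 3:            # low good and high bad ...
        if threat > 4:
            return "threat"             # ... plus high threat -> threat
        if threat < 4:
            return "nonthreatening-negative"
    return None  # could not be categorized (45 such images were dropped)
```

Note that images falling exactly on a boundary (e.g., a good rating of exactly 5) satisfy no rule and would count among the uncategorizable images.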
