ANES Scales Often Don’t Measure What You Think They Measure – An ERPC2016 Analysis

w/ Randall C. MacIntosh

Political surveys often include multi-item scales to measure individual predispositions such as authoritarianism, egalitarianism, or racial resentment. Scholars typically use these scales to examine how these predispositions vary across different subgroups, comparing women to men, rich to poor, or Trump to Clinton voters. Such research implicitly assumes that, say, Trump and Clinton voters’ responses to the egalitarianism scale measure the same construct in the same metric. Unfortunately, this research rarely evaluates whether this assumption holds. We therefore present a framework to test this assumption and to correct scales when it fails to hold. We apply this framework to 13 commonly used scales on the 2012 and 2016 ANES. We find widespread violations of the equivalence assumption and demonstrate that these violations often lead to biased conclusions about the magnitude or direction of theoretically important group differences. These results suggest that researchers should not rely on multi-item scales without first establishing equivalence.

The ‘Social’ Part of Social Desirability Bias: The Effects of Social Networks on Turnout Misreports

w/ Dennis F. Langley

Scholars often use survey data to study how discussion networks influence electoral turnout, typically demonstrating that individuals with more participatory networks are themselves more likely to vote. This conclusion rests on the assumption that errors in individuals’ self-reported turnout are unrelated to the composition of their discussion networks. We call this assumption into question, arguing that an individual’s perception of the social desirability of turnout depends on the engagement levels of their associates. Thus, both turnout and turnout over-reporting should increase with network participation levels. We provide experimental and observational evidence that the well-known problem of turnout over-reporting in surveys is driven by characteristics of individuals’ discussion networks. We go on to show that analyses failing to account for this pattern may yield biased estimates of the relationship between individuals’ discussion networks and their self-reported turnout. This study therefore helps explain how social networks influence social desirability and provides empirical insight into the consequences of this relationship for the study of social influence in political behavior.

Social Proximity and ‘Friends-and-Neighbors’ Voting in Local Elections

w/ Sarah John and Donald A. DeBats

At least since Key (1949), scholars have been interested in how voters’ geographic proximity to candidates predicts their support for these candidates. This relationship has largely been studied in state elections using aggregate voting data. As a consequence, we know little about why geographic proximity predicts support, nor do we know whether this pattern occurs in local elections. We address these issues using a unique dataset that identifies the residential locations of all voters and candidates running in seven local elections. The data also reveal the candidate choices of every voter, their personal attributes such as ethnicity and wealth, and their social affiliations, including their occupations, churches, and families. These data allow us to examine how citizens’ geographic locations interweave with their social networks, their interests, their personal attributes, and ultimately their voting behavior.
Deliberation and Motivated Reasoning in Informal Discussion Networks

Can political discussion help individuals improve their political decisions? Formal deliberation often helps citizens overcome their political ignorance, but recent work suggests that informal, everyday discussion often fails to deliver these benefits. I argue that recent research on informal discussion has investigated contexts in which discussants hold a narrow range of motivations, which differ from those held in many real-world conversations. I develop and test a theory explaining how motivations influence the efficacy of political discussion. The analysis examines a small-group experiment that randomly assigns incentives to alter subjects’ (1) strength of partisan predispositions toward two computer-generated candidates, (2) motivations to form accurate judgments about these candidates, and (3) motivations to provide accurate information to fellow subjects. The results suggest that previous informal discussion research generalizes well to individuals holding elevated partisan motivations, but underestimates discussion’s civic capacity for individuals holding elevated accuracy and, especially, prosocial motivations.