I am an assistant professor in the Florida State University political science department.

My research examines how individuals’ political attitudes and voting behavior are influenced by the people around them.

I teach courses on political psychology, media and politics, social network analysis, and research methods.

You can contact me at matthew.pietryka@gmail.com.



Updated: June 19, 2017


Peer-Reviewed Articles

  1. From Respondents to Networks: Bridging between Individuals, Discussants, and the Network in the Study of Political Discussion (In Press)
  2. It’s Not Just What You Have, but Who You Know: Networks, Social Proximity to Elites, and Voting in State and Local Elections (2017)
  3. Accuracy Motivations, Predispositions, and Social Information in Political Discussion Networks (2016)
  4. Real-Time Reactions to a 2012 Presidential Debate: A Method for Understanding Which Messages Matter (2014)
    • Amber E. Boydstun, Rebecca A. Glazier, Matthew T. Pietryka, and Philip Resnik
    • Public Opinion Quarterly
    • [Gated Copy]
  5. Colleague Crowdsourcing: A Method for Fostering National Student Engagement and Large-N Data Collection (2014)
    • Amber E. Boydstun, Jessica T. Feezell, Rebecca A. Glazier, Timothy P. Jurka, and Matthew T. Pietryka
    • PS: Political Science & Politics
    • [Gated Copy]
  6. Noise, Bias, and Expertise in Political Communication Networks (2014)
  7. An Analysis of ANES Items and Their Use in the Construction of Political Knowledge Scales (2013)
  8. Playing to the Crowd: Agenda Control in Presidential Debates (2013)
  9. Going Maverick: How Candidates Can Use Agenda-Setting to Influence Citizen Motivations and Offset Unpopular Issue Positions (2012)
  10. The Roles of District and National Opinion in 2010 Congressional Campaign Agendas (2012)
    • Matthew T. Pietryka
    • American Politics Research
    • [Gated Copy]


Book Chapters

  1. Networks, Contexts, and the Process of Political Influence (2017)
    • Robert Huckfeldt, Matthew T. Pietryka, and John B. Ryan
    • In The Routledge Handbook of Elections, Voting Behaviour, and Public Opinion, eds. Justin Fisher, Edward Fieldhouse, Mark N. Franklin, Rachel Gibson, Marta Cantijoch, and Christopher Wlezien. Routledge, p. 267-279.
  2. Scalable Multidimensional Response Measurement using a Mobile Platform (2017)
    • Philip Resnik, Amber E. Boydstun, Rebecca A. Glazier, and Matthew T. Pietryka
    • In Political Communication in Real Time: Theoretical and Applied Research Approaches, eds. D. Schill, R. Kirk, and A. E. Jasperson. Routledge, p. 143-167.
  3. Noise, Bias, and Expertise: The Dynamics of Becoming Informed (2014)
  4. Opinion Leaders, Expertise, and the Complex Dynamics of Political Communication (2014)
  5. Networks, Interdependence, and Social Influence in Politics (2013)
    • Robert Huckfeldt, Jeffery J. Mondak, Matthew Hayes, Matthew T. Pietryka, and Jack Reilly
    • In Oxford Handbook of Political Psychology, eds. Leonie Huddy, David O. Sears, and Jack Levy. Oxford University Press, p. 662-698.
  6. Party, Constituency, and Representation in Congress (2011)
    • Walter J. Stone and Matthew T. Pietryka
    • In State of the Parties, sixth edition, eds. John C. Green, and Daniel J. Coffey. Rowman & Littlefield, p. 333-347.


Working Papers

ANES Scales Often Don’t Measure What You Think They Measure – An ERPC2016 Analysis

w/ Randall C. MacIntosh

Political surveys often include multi-item scales to measure individual predispositions such as authoritarianism, egalitarianism, or racial resentment. Scholars typically use these scales to examine how such predispositions vary across subgroups, comparing women to men, rich to poor, or Republican to Democratic voters. Such research implicitly assumes that, say, Republican and Democratic voters’ responses to the egalitarianism scale measure the same construct in the same metric. Unfortunately, this research rarely evaluates whether this assumption holds. We present a framework to test this assumption and to correct scales when it fails to hold. We apply this framework to 13 commonly used scales on the 2012 and 2016 ANES. We find widespread violations of the equivalence assumption and demonstrate that these violations often lead to biased conclusions about the magnitude or direction of theoretically important group differences. These results suggest that researchers should not rely on multi-item scales without first establishing measurement equivalence.

[Draft] [Supporting Information]

The ‘Social’ Part of Social Desirability Bias: The Effects of Social Networks on Turnout Misreports

w/ Dennis F. Langley

Scholars often use survey data to study how discussion networks influence electoral turnout, typically demonstrating that individuals with more participatory networks are themselves more likely to vote. This conclusion rests on the assumption that errors in individuals’ self-reported turnout are unrelated to the composition of their discussion networks. We call this assumption into question, arguing that an individual’s perception of the social desirability of turnout depends on the engagement levels of their associates. Thus, both turnout and turnout over-reporting should increase with network participation levels. We provide experimental and observational evidence that the well-known problem of turnout over-reporting in surveys is driven by characteristics of individuals’ discussion networks. We go on to show that analyses failing to account for this pattern may obtain biased estimates of the relationship between individuals’ discussion networks and their self-reported turnout. This study therefore helps explain how social networks influence social desirability and provides empirical insight into the consequences of this relationship for the study of social influence in political behavior.

Social Proximity and ‘Friends-and-Neighbors’ Voting in Local Elections

w/ Sarah John and Donald A. DeBats

At least since Key (1949), scholars have been interested in how voters’ geographic proximity to candidates predicts their support for those candidates. This relationship has largely been studied in state elections using aggregate voting data. As a consequence, we know little about why geographic proximity predicts support, nor do we know whether this pattern occurs in local elections. We address these issues using a unique dataset that identifies the residential locations of all voters and candidates running in seven local elections. The data also reveal the candidate choices of every voter, their personal attributes such as ethnicity and wealth, and their social affiliations, including their occupations, churches, and families. These data allow us to examine how citizens’ geographic locations interweave with their social networks, their interests, their personal attributes, and ultimately their voting behavior.


Deliberation and Motivated Reasoning in Informal Discussion Networks

Can political discussion help individuals improve their political decisions? Formal deliberation often helps citizens overcome their political ignorance, but recent work suggests informal, everyday discussion often fails to promote these benefits. I argue that recent informal discussion research has investigated contexts in which discussants hold a narrow range of motivations, which differ from those held in many real-world conversations. I develop and test a theory explaining how motivations influence the efficacy of political discussion. The analysis examines a small-group experiment, which randomly assigns incentives to alter subjects’ (1) strength of partisan predispositions toward two computer-generated candidates, (2) motivations to form accurate judgments about these candidates, and (3) motivations to provide accurate information to fellow subjects. The results suggest that previous informal discussion research generalizes well to individuals holding elevated partisan motivations, but underestimates discussion’s civic capacity for individuals holding elevated accuracy and, especially, prosocial motivations.