is time on our side? (part 1)

Last month, Jay Ulfelder posted a characteristically insightful perspective on time-sequenced selection bias, which, in the midst of the “Congress defunds political science research” debacle, remains strikingly relevant. The central point, beyond the piece’s implications for research methodology, pertains to the indispensable humility of political forecasting. Forecasting, like much of historical analysis, is a teleological process–that is, it presumes the progression of human events towards a pre-configured goal: mass atrocities, political conflict, instability, or state failure. As Jay discusses in his post, however–and Anton Strezhnev references in a related discussion–the limits of human knowledge, the dynamic nature of political events, and history’s inconsistent evolution give forecasters pause. The inherent uncertainty of political knowledge makes our predictions, historical analysis, and political narratives chaotic, even as we develop models to rationalize them:

Given a choice between a chaotic theory of history and a conspiratorial one, I’ll take chaos…Marry chaos to biography and we live in a world of wonder and innocence. Marry conspiracy to genealogy and we live in a world of unending inequality, in which our origins are inescapable.

The quote is from Jill Lepore’s New Yorker review (gated) of David Maraniss’ recent Obama biography, which Lepore critiques for its unwillingness to confront the contradiction between crazy-random-happenstance and the smooth, deterministic course of narrative biography. You might as well substitute “international politics” for “biography,” and “political development” for “genealogy”: the result is much the same. As in the study of individual lives, the characteristics we choose, the event data we aggregate, and the behavioral typologies we establish have an inescapable effect on predictive conclusions, whether we’re assessing individual decision-making, mass atrocities, or institutional breakdown.

Modeling Historical Atrocities Trends

As concerns mass atrocities, Jay references the rarity of atrocity events as a barrier to trans-historical, empirical analysis; indeed, the complex, multi-level nature of atrocity events is equally challenging. While diffuse information networks amplify knowledge of present-day atrocity events, forensic technologies yield a limited base of tangible atrocities evidence. For macro-level, multi-event-based forecasting, atrocities data needn’t be precise, but the principle remains: we know too little about historical atrocity events, and our existing forecasting base contains significant gaps across local, intra-organizational, and sub-state levels of analysis.

Prediction, however, is possible. Mass atrocities are political events, and political forecasting has a proven record of success. It’s not Moneyball, but existing models of political instability prediction have made impressive contributions to our understanding of conflict indicators. According to recent research, predictive models operate at both the macro- and micro-levels, with institutional, structural, and social factors providing a useful localized assessment framework. Atrocity analysis across history, however, remains elusive; we know plenty about the post-Cold War era, but our understanding of mass atrocities’ non-modern characteristics, their rocky, inconsistent evolution towards the present-day, and the ways in which they have interacted with their political contexts, is woefully limited.

Jay makes this point well–with 66 years of atrocities data since the Second World War, it’s hard to tell whether Sudan, Syria, Yemen, Bahrain, and Libya are historical anomalies, or problematic trend indicators. Statistically speaking, the “n” is too small, without a regionally diverse dataset to strengthen empirical inquiry. With such a small set of atrocities–Ulfelder and Valentino’s 2008 “state-sponsored mass killing” list offers 120 atrocity events, some of which span more than sixty years–modeling trends is a difficult task. But, like I said, it’s a possible one.
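The small-n problem above can be made concrete with a quick simulation. This is a minimal sketch, not anything from Jay’s analysis: it assumes an invented 3% annual probability of an atrocity onset and simulates many alternate 66-year observation windows, to show how widely event counts vary even when the underlying risk never changes.

```python
import random

# Illustrative assumption: atrocity onsets occur independently with a
# "true" probability of 3% per year. The rate is made up for this sketch.
random.seed(42)
TRUE_RATE = 0.03
YEARS = 66           # roughly the post-WWII observation window
SIMULATIONS = 10_000

# Simulate many alternate 66-year histories and count onsets in each.
counts = [
    sum(random.random() < TRUE_RATE for _ in range(YEARS))
    for _ in range(SIMULATIONS)
]

mean = sum(counts) / SIMULATIONS
print(f"onsets per 66-year history: mean={mean:.1f}, "
      f"range={min(counts)}-{max(counts)}")
```

Under a constant risk averaging about two onsets per 66 years, individual simulated histories range from zero onsets to eight or more. A short recent run of cases–a Sudan, Syria, Yemen, Bahrain, and Libya cluster–is therefore entirely consistent with an unchanged underlying rate, which is why trend claims from so few events warrant caution.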

In keeping with trends in political science research, disaggregated atrocities modeling requires a shift in premise: atrocity events are all-encompassing, complex political events, rather than exclusive, humanitarian catastrophes. Frameworks beyond state-sponsored violence are difficult to consolidate and, perhaps as importantly, to quantify. At the same time, a reinterpretation of atrocities as an extension of political authority, rather than unique, depoliticized occurrences, may be a helpful way forward. Mass atrocities have functioned as a mechanism of power throughout human history–major historical works, epic poems, and theological texts describe mass atrocities as a critical, nearly universal characteristic of state formation, political expansion, and ideological consolidation. Human institutions may have crafted normative barriers to atrocities perpetration, but the core, driving force of violent politics makes atrocities possible, regardless of modernity. It’s not a matter of human nature, but of power–in the context of institutions which, indisputably, rely on the consolidation of acute violence, acute violence is bound to occur.

If we see atrocities as a symptom of political violence, a multi-level understanding of atrocities’ historical development emerges: that is, an evolving, interactive relationship between individual, operational, and institutional dynamics. This evolution underlines an interpretation of Jay’s central question: is our ability to mitigate atrocities improving, and how do we interpret blips on the trend graph?

Mapping Dehumanization: An Individual-Level Approach

I’ve discussed the individual-level analysis of atrocities perpetration before, but its historical and causal elements are worth developing. An individual-level understanding of mass atrocities addresses the following question: why do individuals participate in atrocity regimes, and how do individual perceptions of social factors impact atrocities mobilization? Psychologists and, more recently, anthropologists have found methodological common ground in the study of individual-level atrocity trends. The results are hardly surprising, and confirm a broad consensus on the cognitive and societal determinants of human violence: atrocities result from cognitive processes of dehumanization (the psychologist), which manifest themselves as social myths, cultures, and histories (the anthropologist). However, lest we reduce the individual-level perspective to Goldhagenian monocausality, a bit of nuance is required: dehumanization acutely manifests itself during periods of profound social, political, and economic stress; the dehumanization context in 1994 Rwanda appears notably distinct from, say, contemporary right-wing rhetoric against European immigrant and minority populations.

If atrocities operate at an individual and societal level, how can we understand historical trends in atrocities perpetration? Steven Pinker’s much-bemoaned, much-praised Better Angels of Our Nature provides an important framework, in spite of his problematic causal explanations. To observe that violence has declined throughout modern history is, to a certain extent, an empirical fact, despite data gaps in non-Western regions. Using the “decline of violence” data and the normative evolution of popular culture, civil society, and political institutions as key factors in the modern story of human society, Pinker observes humanity inching ever-so-slowly towards a Kantian aspiration of sustainable, if not perpetual, peace. Pinker’s critical analysis of popular culture is novel, but it provides insufficient grist to demonstrate that human society is, in fact, getting nicer, more peaceable, and less conflict-prone. However, given the robust qualitative interaction between hateful acts and mass atrocities, this individual-level trend may be worth probing further.

The question is likely not, “are human societies becoming less hateful?,” but, “given the existence of social, economic, and political stresses, are dehumanization processes more significant to social, political, and ethnic fractionalization?” The first question perceives hate as an apolitical, exclusively normative entity; the second, in contrast, frames the ways in which hate might interact with its historical context. From a qualitative perspective, this is relatively easy to observe: public discourse might become less virulent, or the lexicon of social division more peaceable. The quantitative standpoint is, of course, more challenging: given disparities in press coverage, particularly in conflict-affected states, processes of hate-speech categorization, collection, and dissemination are harder to identify, to say nothing of the “salience” factor.

Emerging analytic frameworks, centered around “dangerous speech,” may strengthen our ability to monitor and assess historical trends in dehumanization. Where historical atrocities are concerned, the speech itself is less significant than its application, and the ways in which rhetoric, discourse, and public communications have mobilized atrocity events against civilians. As with institutional approaches to conflict prediction, the speech-oriented atrocities trend model may operate more effectively at the local level, especially given the relatively nascent nature of international hate-speech/-crime reporting. Insofar as transnational media trends remain non-standardized, a contextual understanding of informal media–pamphlets, new media, oral forms of information diffusion–is necessary.
