From the Ward Lab blog a few weeks ago:
Over the past few months we have been regularly updating our irregular leadership change models and forecasts in order to provide monthly 6-month-ahead forecasts of the probability of irregular leadership change in a large number of countries worldwide (excluding the US). Part of that effort has been the occasional glance back at our previous predictions, particularly more in-depth examinations of notable cases that we missed or got right, to see whether we can improve our modeling as a result. This note is one of those glances back: a postmortem of our Yemen predictions for the first half of 2015.
To provide some background, the ILC forecasts are generated from an ensemble of seven thematic split-population duration models. For more details on how this works, or on what irregular leadership changes are and how we code them, take a look at our R&P paper or this longer arXiv writeup.
We made a couple of changes this year, notably adding data for the 1990s, which in turn cascaded into more changes because of the variation in ICEWS event data volume. This delayed things a bit, but eventually we were able to generate new forecasts for January to June 2015, using data up to December 2014. Here were the top predictions:
This post first appeared at Predictive Heuristics.
Alexander Noyes and Sebastian Elischer wrote about good coups on Monkey Cage a few weeks ago, in the shadow of fallout from the LaCour revelations. Good coups are those that lead to democratization, rather than the outcomes one might more commonly associate with coups, like military rule, dictatorship, or instability. Elischer, although on the whole less optimistic about good coups than Noyes, writes:
There is some good news for those who want to believe in “good coups.” A number of military interventions in Africa have led to competitive multiparty elections, creating a necessary condition for successful democratization. These cases include the often (perhaps too often)-cited Malian coup of 1991, the Lesotho coup of 1991, the Nigerien coups of 1999 and 2000, the Guinean coup of 2008, the Malian coup of 2012 and potentially Burkina Faso’s 2014 coup, among others.
Here is a quick look at the larger picture. I took the same Powell and Thyne data on coups that is referenced in the blog posts and added the Polity data on regimes to it. Specifically, I added the Polity score 7 days before a coup and 1 and 2 years afterwards, although I'll focus on the changes 2 years later. The Polity score measures, on a scale from -10 to 10, how autocratic or democratic a regime is. The scale is in turn based on a larger number of items coded by the Polity project. It's not quite an ordinal or interval scale, in part because there are a couple of special codes for regimes that are in transition, or where a country is occupied or without a national government (failed state). Rather than exclude these special scores or convert them to regular Polity scores, I grouped the Polity scores into several broader categories from autocracy to full democracy, and kept the special codes under the label "unstable", which may or may not be a good description for them.
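A minimal sketch of that kind of recoding in R, with made-up cut points and category labels (the actual groupings used in the post may differ):

```r
# Illustrative Polity scores; values below -10 stand in for Polity's special
# codes (-66 occupied, -77 interregnum, -88 transition)
polity <- c(-88, -7, -2, 3, 8, 10)

# Group scores into broad regime categories; special codes become "unstable"
regime <- ifelse(polity < -10, "unstable",
  as.character(cut(polity,
    breaks = c(-10, -6, 0, 5, 9, 10),
    labels = c("autocracy", "closed anocracy", "open anocracy",
               "democracy", "full democracy"),
    include.lowest = TRUE)))
regime
```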
The overwhelming pattern for all 227 successful coups that the data cover is that things stay the same (41% of cases) or get worse (40% of cases). The plot below shows the number of times specific category-to-category switches took place, with the regime 7 days before a successful coup on the y-axis and the regime 2 years later on the x-axis. It's really just a slightly fancier version of a transition matrix.
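The counts behind such a plot are just a cross-tabulation; a minimal sketch in R, using made-up regime categories rather than the actual data:

```r
# Toy version of the before/after regime categories; the real data has one
# row per successful coup
coups <- data.frame(
  before = c("autocracy", "autocracy", "anocracy", "democracy"),
  after2 = c("autocracy", "anocracy", "autocracy", "autocracy")
)

# Counts of category-to-category switches: rows are the regime 7 days before
# the coup, columns the regime 2 years later
trans <- table(before = coups$before, after2 = coups$after2)
trans
```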
The ICEWS data, including the underlying raw event data as well as some aggregations, were quietly posted on Dataverse the Friday before last. I’ve worked with the ICEWS data for several years now, first when I was working on the ICEWS package–we deliver updated forecasts for the ICEWS events of interest (EOIs) once a month–and more recently for the irregular leadership change forecasting project. The public data are formatted differently from the data I’ve worked with, so most of the code I have lying around is not that useful, but in going through the public data I did recreate a short overview that is nowhere near as complete as David Masad’s first look (using Python), as well as some code that might be useful for getting started in R.
One of the nice things about the public release of these data, aside from the hope that they will start to get used in modeling (repost), is that it is very interesting to read new takes by people whose perspectives are different from mine. So far:
- Overview and descriptives by David Masad
- Jay Ulfelder’s notes on using ICEWS in country-month modeling, including some starter R code.
- Phil Schrodt’s comments on the public release, from an event data producer’s perspective.
Now to the quick overview, using R rather than Python (link to code at end). The first figure below shows the daily event totals, as well as a 30-day moving average. The daily totals increase from around 500 in 1996 to a steady level of around 3,000 from 2005 on, before decreasing again around 2009/2010. As others have pointed out, this stability is a good feature to have since it makes it plausible to model without some kind of normalization to account for changes in the underlying event volume. This is in contrast to GDELT, where the story corpus and event counts increase dramatically over time.
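For reference, a 30-day moving average like the one in the figure can be computed along these lines (toy data standing in for the ICEWS daily totals; the actual code is linked at the end):

```r
# Toy daily event totals; in the real data these come from aggregating
# ICEWS events by date
set.seed(42)
daily <- data.frame(
  date  = seq(as.Date("1996-01-01"), by = "day", length.out = 365),
  total = rpois(365, lambda = 500)
)

# 30-day centered moving average via a convolution filter
daily$ma30 <- as.numeric(stats::filter(daily$total, rep(1 / 30, 30), sides = 2))
```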
ROC curves are a fairly standard way to evaluate model fit with binary outcomes, like (civil) war onset. I would be willing to bet that most if not all quantitative political scientists know what they are and how to construct one. Unlike simpler fit statistics like accuracy or percentage reduction in error (PRE), they do not depend on the particular threshold value used to divide probabilistic predictions into binary predictions, and thus give a better sense of the tradeoff between true and false positives inherent in any probabilistic model. The area under a ROC curve (AUC) can summarize a model’s performance and has the somewhat intuitive alternative interpretation of representing the probability that a randomly picked positive outcome case will have been ranked higher by the model than a randomly picked negative outcome case. What I didn’t realize until more recently, though, is that ROC curves are a misleading indicator of model performance with the kind of sparse data that happens to be the norm in conflict research.
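A minimal simulated illustration of the point, using entirely made-up data: with only about 1% positive cases, a model can have a high AUC while false positives still swamp true positives at any reasonable cutoff.

```r
set.seed(1)
n <- 10000
y <- rbinom(n, 1, 0.01)              # sparse outcome, roughly 1% positives
p <- plogis(-4 + 2 * y + rnorm(n))   # toy predicted probabilities

# AUC = probability that a random positive is ranked above a random negative
auc <- mean(outer(p[y == 1], p[y == 0], ">"))

# Flag the top 2% as predicted positives: false positives still outnumber
# true positives despite the high AUC
thr <- quantile(p, 0.98)
tp <- sum(p > thr & y == 1)
fp <- sum(p > thr & y == 0)
```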
This post is about the Archigos data, which you can find here.
Political scientists, and maybe historians as well, are familiar with coups, rebellions, and mass protests as distinct phenomena that occasionally lead to the fall of regimes. Another way to view these events is from the perspective of state leaders and how these events affect transitions between political leaders. Selectorate theory does this by considering the sets of people within a regime that a leader must rely on to remain in power, and how their relative sizes shape behavior. We do this empirically by modeling irregular leadership changes, where we draw our dependent variable from the Archigos dataset. I’ve been vaguely aware of these data for a while, but honestly did not understand how useful they could be. In this post I’ll try to give a quick overview of the data.
Archigos is a dataset of the political leaders of states from 1875 on, collected by Hein Goemans, Kristian Skrede Gleditsch, and Giacomo Chiozza. The most recent version, 2.9, covers more than 3,000 leaders through 2004, and an update through 2014 is in the works. Aside from identifying leaders and when they gained and lost office, it codes how they did so (from the Archigos codebook):
Archigos codes the manner in which transfers between rulers occur. Our main interest is whether transfers of power between leaders take place in a regular or irregular fashion. We code transfers as regular or irregular depending on the political institutions and selection mechanisms in place. We identify whether leaders are selected into and leave political office in a manner prescribed by either explicit rules or established conventions. In a democracy, a leader may come to power through direct election or establishing a sufficient coalition of representatives in the legislature. Although leaders may not be elected or selected in particularly competitive processes, many autocracies have similar implicit or explicit rules for transfers of executive power. Leader changes that occur through designation by an outgoing leader, hereditary succession in a monarchy, and appointment by the central committee of a ruling party would all be considered regular transfers of power from one leader to another in an autocratic regime.
My limited knowledge of what happens in Terminal, and thus by extension shell, is mostly driven by PostgreSQL/PostGIS/rgdal/RPostgreSQL install errors. In the latest variant of this, rgdal throws the following error when attempting to build from source:

checking PROJ.4: epsg found and readable... no
Error: proj/epsg not found
Either install missing proj support files, for example the proj-nad and proj-epsg RPMs on systems using RPMs, or if installed but not autodetected, set PROJ_LIB to the correct path, and if need be use the --with-proj-share= configure argument.

I have to build from source, by the way, because the default rgdal package for Mac does not include a PostgreSQL driver, meaning I have to build it against another version of GDAL that does. This was another fun thing to discover, but at least it is easy to diagnose by checking whether PostgreSQL shows up when you run ogrDrivers() in R. Anyways, as far as I can tell the problem was that I installed proj via Homebrew, a package manager for OS X. As a result, although rgdal could find the proj binary via a symlink, it could not find the epsg and related data files, which were in a little dark corner by themselves. The solution was to build the package with an option providing the file location manually:
install.packages("rgdal", type = "source", configure.args="--with-proj-share=/usr/local/Cellar/proj/4.8.0/share/proj")
This is, I guess, exactly what the install error message told me to do.
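If the proj version in the Cellar path changes, the right share directory can be located first; this assumes a Homebrew install of proj:

```shell
# Ask Homebrew for proj's install prefix; the epsg file sits under share/proj
PROJ_PREFIX="$(brew --prefix proj)"
ls "$PROJ_PREFIX/share/proj"
# then pass --with-proj-share="$PROJ_PREFIX/share/proj" via configure.args
```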
or, How I learned to stop worrying and love event data. (This post first appeared on Predictive Heuristics)
Nobody in their right mind would think that the chances of civil war in Denmark and Mauritania are the same. One is a well-established democracy with a GDP of $38,000 per person and which ranks in the top 10 by Human Development Index (HDI), while the other is a fledgling republic in which the current President gained power through a military coup, with a GDP of $2,000 per person and near the bottom of the HDI rankings. A lot of existing models of civil war do a good job at separating such countries on the basis of structural factors like those in this example: regime type, wealth, ethnic diversity, military spending. Ditto for similar structural models of other expressions of political conflict, like coups and insurgencies. What they fail to do well is to predict the timing of civil wars, insurgencies, etc. in places like Mauritania that we know are at risk because of their structural characteristics. And this gets worse as you leave the conventional country-year paradigm and try to predict over shorter time periods.
The reason for this is obvious when you consider the underlying variance structure. First, to predict something that changes–say dissident-government conflict, or the nature of relationships between political parties–you need predictors that change.