The Conflict Early Warning Quandary

If you create a system that successfully decreases or stops conflict, how can you prove that it worked?

That’s the problem, and it’s a big one.

So far, what everyone has seen from Ushahidi is the gathering of information and reports during crisis situations. What we’re working towards is something we believe can be far more powerful: conflict early warning. It’s especially pertinent in regions where genocide and mass atrocities are happening (or could happen).

What we’re asking is how we can create a system that helps produce acute warnings and alerts around mass atrocities and genocide. Preventive actions will always differ, especially as this is a regional problem that tends to cross the immediate borders of the country it is happening in.

We sit here trying to create a platform, one that is federated and modular, which we hope can become part of a greater ecosystem to help with just this problem. Will we ever be able to prove whether it’s successful? Probably not. Proof of “decreasing the impact of a disaster” is hard to quantify, but we think it is of the utmost importance to try.

6 Responses to “The Conflict Early Warning Quandary”

  1. Hi Erik, thanks for bringing this up. Academics and practitioners in the conflict early warning/response community have been struggling with several challenges over the past 15 or so years. Two of the main challenges are: (1) bridging the warning-response gap; and (2) demonstrating impact. (They are not unrelated).

    The point you raise relates to the second challenge. Much ink has been spilled by Ivory Tower academics and consultants on this topic. We all agree that monitoring and impact evaluation is a top priority in our field, but we don’t all agree on whether it is possible and, if it is, how to go about it. These questions are already being raised vis-a-vis our joint work on the conflict early warning ecosystem.

    HHI has spent the past 18 months revisiting this issue (and others) as part of our Humanity United grant on Conflict Early Warning and Crisis Mapping. What follows is a very brief summary:

    First, before we can even prove that an early warning/response system successfully prevents conflict, we need to show that a link exists between early warnings and some form of operational response, regardless of whether or not that response is successful. In other words, there is no need to bother trying to prove the success of an early warning system if we can’t even show any kind of operational (e.g., logistical) response that would generate that would-be success. At this point, our community isn’t even linking warning and operational response (probably because there are few links, cf. Zimbabwe).

    Second, some of the Ivory Tower academics who have been addressing this issue are particularly fond of claiming that successful early warning/response can’t be proven. The common saying goes like this: “You can’t prove a negative.” And that stops all conversations because it sounds authoritative. (I’ve seen that done effectively on several occasions). Needless to say, I don’t agree with this view.

    The first step is to identify whether alerts lead to operational response, and to work backwards. If an operational response takes place, identify whether early warning analysis was one of the information feeds that informed the response.

    The second step is to ask the local stakeholders themselves. Did the alerts received via Ushahidi enable you to get out of harm’s way? To organize a peaceful protest? etc. (There is an important nonviolent tactical response angle that we need to talk about soon). Ask the local communities themselves how their situational awareness has changed with the roll out of Ushahidi. Start capturing human interest stories. As I mentioned to David back in Orlando, provide a link on the Ushahidi platform for users to submit feedback, anecdotes, etc., on how they used Ushahidi.

    Neither of these steps is quantitative, but there is a role for data-driven impact evaluation. Unfortunately, there are few well-qualified professionals in the field of monitoring and evaluation (M&E), which, when done well, is very rigorous. Many view M&E as an art. It isn’t; it’s a science. For our ecosystem project, we will want to carry out an initial baseline analysis before moving forward and continue that analysis throughout (something called formative evaluation). But this quantitative analysis on its own is not sufficient, which is why the two qualitative steps described above need to be part of the mix.
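
    To make the formative-evaluation idea concrete, here is a minimal sketch in Python. It is purely illustrative; the indicator name, dates, and numbers are assumptions, not HHI data, and it is not part of any existing tool.

    ```python
    from dataclasses import dataclass, field
    from datetime import date
    from typing import List, Optional, Tuple

    @dataclass
    class IndicatorSeries:
        """One indicator tracked against a baseline and re-measured as the project runs."""
        name: str
        baseline: float                                    # value measured before roll-out
        readings: List[Tuple[date, float]] = field(default_factory=list)

        def record(self, when: date, value: float) -> None:
            """Add a measurement taken during or after the intervention."""
            self.readings.append((when, value))

        def change_from_baseline(self) -> Optional[float]:
            """Latest reading minus the baseline; None if nothing has been recorded yet."""
            if not self.readings:
                return None
            return self.readings[-1][1] - self.baseline

    # Illustrative usage with made-up numbers:
    reports = IndicatorSeries(name="verified incident reports per week", baseline=12.0)
    reports.record(date(2009, 1, 15), 18.0)
    reports.record(date(2009, 2, 15), 25.0)
    print(reports.change_from_baseline())  # 13.0
    ```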

    If we are to help foster a conflict early warning ecosystem in Africa, our evaluation approach must mirror that typology, i.e., we need to take an ecosystem approach to evaluation. In other words, we need a combined, mixed-methods approach to demonstrate whether or not we are having any impact.

  2. Wow, Patrick, great points as always. (Everyone: Patrick’s blog is really one of the best in this space — http://irevolution.wordpress.com/ )

    Program evaluation is extremely difficult, but extremely valuable.

    In my (limited) experience with evaluation I have seen some amazing things result from simply engaging your stakeholders in the process of “evaluating” your success. People want to tell you what’s working.

    (For those who are not part of this ULTRA-geeky part of academia and the nonprofit world, capital-E “Evaluation” is an entire discipline of, basically, figuring out how well money was spent on a project. Not surprisingly, the big Funder$ almost always expect that you have an evaluation program and can prove your worth. Surprisingly, it can be really engaging and useful.)

    There is a division, as always, between quantitative and qualitative evaluation — the numbers vs. the stories. I agree with Patrick that both approaches are necessary. You can’t have the numbers without the stories.

    I’d like simply to add the point that evaluation can actually be fun, and in fact deeply enriching for a project — it’s not just about proving to funders that you aren’t blowing their money.

    In particular, qualitative evaluation — talking to people and amplifying their stories — can be combined with engaging your stakeholders, and can be a genuinely useful tool for improving your program. After all, the experiences of the people on the ground are what we are trying to change.

    Frankly, an Ushahidi evaluation would be expensive, time consuming, and highlight lots of problems. Some nonprofits tend to shy away from those three things — but I have the feeling that Ushahidi is just the type of program that could really kick ass with a rigorous evaluation.

    PS: Usability testing can be considered a subset of program evaluation!

  3. @Patrick – Thanks for the wonderful feedback on this question. I was sure we weren’t the first to ask it, but there’s nothing like hearing from someone who is immersed in it all the time.

    Your response raises a question, though: what mechanisms are already in place for early warning so that a baseline can be established?

    From what I can tell, there are two types of scenarios where early warning can take place, each of which might require different tools, mechanisms, or practices to be in place beforehand. There is the “slow burn” that you see coming (Zimbabwe), and the “hot flash” that you have little to no warning of (Mumbai).

    From our perspective this proves to be a troubling point. Do we spread wide and get Ushahidi everywhere so that it’s in place before things happen? Do we focus on specific spots to ensure coverage, connections and community before something blows up?

  4. @Chris Blow – You’re right, truly sitting back and evaluating the platform and the impact it has (or doesn’t have) is incredibly important. We’re just now getting out there with the new platform, and we’re trying to document as much as we can of the different instances of it that we’re directly involved in.

    A really (REALLY) smart small foundation manager once told me to always be skeptical, especially of our own tool.

    We’re trying to do that (see Ory’s post on the DRC thus far), and it’s hard, since we’re optimists by nature.

    Hopefully you can help us in this evaluation-side of things, at least on the platform level.

  5. @Chris, thanks! I completely agree with you, process vis-a-vis qualitative evaluations is absolutely key.

    @Erik, thanks for your questions. I think they get at different issues and I don’t have all the answers but it’s good to brainstorm.

    First, on baseline analysis vis-a-vis “slow burn” and “hot flash”: different sets of indicators are necessary (different time/spatial scales). The purpose of an initial conflict assessment is to identify these indicators, which are then monitored before, during, and after. The indicators can be quantitative or qualitative, i.e., outcome-based or process-based. We are starting to identify data sources for these indicators for the Conflict Early Warning Ecosystem. Note that conflict early warning systems also provide baseline analyses, albeit of varying quality. Please let me know if this answers your question.
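
    As an illustration only, the distinction between the two scenarios and their indicators might be organized in code along these lines; the indicator names, scenarios, and cadences below are assumptions, not the ecosystem’s actual indicator set.

    ```python
    from dataclasses import dataclass
    from enum import Enum

    class Scenario(Enum):
        SLOW_BURN = "slow burn"    # gradual deterioration, e.g. Zimbabwe
        HOT_FLASH = "hot flash"    # sudden event with little warning, e.g. Mumbai

    @dataclass
    class Indicator:
        name: str
        scenario: Scenario
        quantitative: bool     # outcome-based (numbers) vs process-based (stories)
        cadence_days: int      # how often it is re-measured before, during, and after a crisis

    # Hypothetical entries; a real initial conflict assessment would choose these.
    INDICATORS = [
        Indicator("staple food price index", Scenario.SLOW_BURN, True, cadence_days=30),
        Indicator("hate-speech reports in local media", Scenario.SLOW_BURN, False, cadence_days=7),
        Indicator("SMS incident reports per hour", Scenario.HOT_FLASH, True, cadence_days=1),
    ]

    def due_for_review(indicator: Indicator, days_since_last_reading: int) -> bool:
        """Flag an indicator whose monitoring cadence has elapsed."""
        return days_since_last_reading >= indicator.cadence_days
    ```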

    Your second question raises the issue of needs and priorities. “Do we spread wide?” or “Do we focus on specific spots?” A related question worth asking is: in which scenario can Ushahidi do the most good in the shortest time? This depends on willing partners who send clear signals of being pro-active and championing Ushahidi in their own countries. Either way, the full Ushahidi platform needs to be completed (with SMS subscription) sooner rather than later so that we can start crowdsourcing response.
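
    For a sense of what an SMS subscription piece might involve at its simplest, here is a toy sketch. It is not the actual Ushahidi implementation; the function names, the area-based grouping, and the gateway hook are assumptions for illustration only.

    ```python
    from collections import defaultdict
    from typing import Callable, DefaultDict, List

    # area of interest -> subscriber phone numbers
    subscribers: DefaultDict[str, List[str]] = defaultdict(list)

    def subscribe(phone: str, area: str) -> None:
        """Register a phone number for alerts about a given area."""
        if phone not in subscribers[area]:
            subscribers[area].append(phone)

    def send_alert(area: str, message: str, send_sms: Callable[[str, str], None]) -> None:
        """Fan an alert out to everyone subscribed to the area.

        send_sms(phone, message) stands in for whatever SMS gateway is used;
        that integration is assumed, not specified here.
        """
        for phone in subscribers[area]:
            send_sms(phone, message)

    # Example usage with a stand-in gateway that just prints:
    subscribe("+254700000000", "Eldoret")
    send_alert("Eldoret", "Roadblock reported on the Nakuru road", lambda p, m: print(p, m))
    ```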