Crisis Info: Crowdsourcing the Filter

What happened in Mumbai was a classic “hot flash” event: such events are hard to detect before they happen, and they’re over relatively quickly. There is little to no time to deploy anything and still be relevant once the event has started.

It was that crisis that got two members of the Ushahidi dev community (Chris Blow and Kaushal Jhalla) thinking about what needs to be done when you have massive amounts of information flying around. We’re at the point where the barriers to any ordinary person openly sharing valuable tactical and strategic information have all but disappeared. How do you ferret out the good data from the bad?

When the noise is overwhelming the signal, what do you do?

Thus began project “Swift River” at Ushahidi, which for 3 months now has been thought through, wireframed, re-thought and prototyped. Chris and Kaushal started by asking: what can we do that most significantly affects the quality of information in the first 3 hours of a crisis? Their answer: what if we created a swift river of information that gets quickly edited? Events like US Airways Flight 1549 and the inauguration gave us live, real-time events with massive amounts of data to test things out on.

And, after all that, we’re not done, but we do have some solid ideas on what needs to be done. We think of it as using a crowd to filter, or edit, the already crowdsourced information coming through tools like Ushahidi, Twitter, Flickr and YouTube. To us, Swift River is “Crowdsourcing the Filter”.

How does it work? (non-tech version)

Since we don’t believe there will ever be one tool that everyone uses for gathering information on global crises, we see a future where a tool like Swift River aggregates data from tools such as the aforementioned Twitter, Ushahidi, Flickr, YouTube, and local mobile and web social networks. At this point what you have is a whole lot of noise and very little signal as to the value of the data you’re seeing.
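As a rough sketch of what that aggregation step could look like (the class and function names below are purely illustrative assumptions, not the actual Swift River code), think of every tweet, report, photo caption or video description being normalized into one common item format before anyone rates it:

```python
# Illustrative sketch only: normalize items from several sources into one
# common structure so they can be filtered and rated together.
from dataclasses import dataclass, field
from datetime import datetime
from typing import Callable, List, Optional

@dataclass
class CrisisItem:
    source: str                 # e.g. "twitter", "ushahidi", "flickr", "youtube"
    text: str                   # the raw tweet, report, caption or description
    location: Optional[str]     # free-text or geocoded location, if any
    created_at: datetime
    ratings: List[int] = field(default_factory=list)   # crowd votes added later

def aggregate(feeds: List[Callable[[], List[CrisisItem]]]) -> List[CrisisItem]:
    """Merge items from every source feed into a single time-ordered stream."""
    items: List[CrisisItem] = []
    for fetch in feeds:         # each `fetch` pulls from one source's API
        items.extend(fetch())
    return sorted(items, key=lambda item: item.created_at)
```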

Anyone who has access to a computer (and possibly just a mobile phone in the future) can then go and rate information as it comes in. This is classic “crowdsourcing”: the more people you have weighing in on any specific data point, the higher the probability of finding the right answer. The information with greater veracity is highlighted and bubbles to the top, weighted also by proximity, severity and category of the incident.
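To make that weighting concrete, here is one hedged way such a score could be computed, continuing the CrisisItem sketch above; the multipliers and field names are assumptions made for illustration, not the actual Swift River algorithm:

```python
def veracity_score(item: CrisisItem, viewer_location: Optional[str] = None) -> float:
    """Illustrative scoring: the crowd's average rating is the base, then the
    result is weighted by proximity, severity and category of the incident."""
    if not item.ratings:
        return 0.0
    crowd = sum(item.ratings) / len(item.ratings)          # average crowd rating
    proximity = 1.5 if viewer_location and item.location == viewer_location else 1.0
    severity = getattr(item, "severity", 1.0)              # hypothetical 1.0 .. 2.0 scale
    category = getattr(item, "category_weight", 1.0)       # hypothetical per-category weight
    return crowd * proximity * severity * category

# Items with the highest scores "bubble to the top" of the filtered stream:
# ranked = sorted(items, key=veracity_score, reverse=True)
```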

At this point we have successfully filtered a large amount of data: something that is difficult to do with a small team of experts can be accomplished by a large number of non-experts and experts combined.

What Next?

So far we have some comps, and David has created a rough prototype of the engine driving it for the US inauguration. If this type of tool interests you, and you’d like to help, then do let us know. Here’s a glimpse at some of the idea flows that spur on our conversation at Ushahidi. This was created by Chris Blow, using the assumption that the user of this tool and protocol is a Twitter user:

The tool is really quite simple, and it can be made better by clustering “like” incidents and reports, by rating users on proximity, history and expertise, and by developing a general protocol so that any other developer can expand on it as well.
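As a very rough illustration of the clustering and rater-weighting ideas (the similarity threshold and reputation fields below are assumptions made up for this sketch):

```python
from difflib import SequenceMatcher

def cluster_reports(items, threshold=0.6):
    """Group "like" incidents by naive text similarity; a real system would
    also use location, time and much better text matching."""
    clusters = []
    for item in items:
        for cluster in clusters:
            if SequenceMatcher(None, item.text, cluster[0].text).ratio() >= threshold:
                cluster.append(item)
                break
        else:
            clusters.append([item])
    return clusters

def rater_weight(user: dict) -> float:
    """Weight a rater by hypothetical proximity, history and expertise scores (0..1 each)."""
    return (user.get("proximity", 0.0) + user.get("history", 0.0) + user.get("expertise", 0.0)) / 3
```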

48 Responses to “Crisis Info: Crowdsourcing the Filter”

  1. This is brilliant. I am looking forward to following your progress. I am particularly intrigued that you are trialing it on other types of data (the inauguration), because the broader the appeal of the tool, the more effective it will be, hopefully.

  2. Wikipedia has one of the most effective “crowdsourcing filters” I know of. Any new entry I create on Wikipedia gets reviewed by an editor within minutes.

  3. Great article with an interesting point of view. I also agree with Patrick. Wikipedia is probably the best example of effective crowdsourcing.

  4. Matt Cooperrider

    We used a similar system to tag inputs during http://twittervotereport.com. We invited “sweepers” to clean up unstructured data. The most important lesson I learned is that people thought this was fun. It became a game to see who had the most “sweeps”. I’m glad that someone is perfecting this process.

  5. Reminded me of an idea around 3 years ago.

    “The idea was to have a webpage broken down into various tabs that could represent geographical areas, issues, political parties, NGO groups etc, that each had a list of SMS or MMS messages. This simple webpage could be a powerful way of ascertaining public opinion and stimulating dialogue on certain issues, quickly ascertaining ground condition after a disaster and also help in the public dissemination of information. The idea can of course be developed further to have secure webpages with information that is more sensitive coming in from the field. I also suggested the possibility of an RSS feed from this webpage – which would enable the SMS / MMS feeds to be read by any newsreader or RSS capable programme / device – including for instance, Microsoft FM radio enabled watch capable of displaying emails, SMS and well, the time and date.”

    For more read http://ict4peace.wordpress.com/2006/08/23/strong-angel-iii-20th-august-2006/

    Cheers,

    Sanjana

  6. Eric,

    I’m very interested in talking with you and your team about this, specifically because I’ve been trying to figure a way to do this very thing with CrisisWire.com. Please feel free to contact me if/when you have time.

    Thanks,

    Nate

  7. I see big applications in schools. This’d be a great tool to use to teach kids about filtering information from a variety of (even contradictory) sources.

    Don’t have much in the way of programming skills, but I do have a small school of 50 students who’d be keen to give it a go.

  8. Comments are mines of information that become noise; subjected to filters, they would be useful.