Asking Questions, Verifying Answers

Ushahidi
Mar 7, 2010

Sean Conner recently asked a great question about integrating a Question and Answer service like Aardvark (vark.com) or Yahoo Answers into Swiftriver. Here is our approach at Team Swift.

In a Swift instance, Aardvark could be used as an additional 'channel' of input. Existing channels are Twitter, Email, SMS, News, RSS (any RSS feed), and Other (the catch-all for items coming in via our API). All the Swift app wants to do is receive content, allow users and our algorithms to tag that content, and score the originating content source based on user behavior.

As an example for Aardvark: Johnny asks the question "Did an earthquake really happen in Chile?" on Vark.com on Feb 28th, only a day after the quake actually occurred. Robert responds on Vark with "No, at least I haven't heard of one." Vark user Jeremy responds with "Actually, Yes. An 8.8 magnitude earthquake occurred in Chile on Feb 27th."

In Swift, the answer and the accuracy of that answer matter more to us than the question itself (which just provides context). To integrate Aardvark into Swift, we'd probably write a module using their API that aggregates Answers, with the corresponding Question as the 'description'. Here is an example of how that data would post to the Swiftriver API:

Title: "Actually, Yes. An 8.8 magnitude earthquake occurred in Chile on Feb 27th." Description: "Did an earthquake really happen in Chile? - Johnny" Time: 17:08 EST Date: Feb 30, 2010 Source: Jeremy's user id on Vark.com Channel: Vark API Lat: 10.31 Lon: 01.40 Tags: 8.8, earthquake, chile Title: "No, at least I haven't heard of one." Description: "Did an earthquake really happen in Chile? - Johnny" Time: 03:10 EST Date: Feb 30, 2010 Source: Robert's user id on Vark.com Channel: Vark API Lat: 10.31 Lon: 01.40 Tags: heard, chile, earthquake

Within Swift, these fields are the primary information we need in order to verify content. Users with a careful eye will notice that we've included location data that Vark may or may not provide; we can easily extract that info from the hosted service SULSa. Here, the source is what we're scoring. The channel is just an indicator for the user of where the content is coming from. That said, the source is not Vark itself, nor is it the user's answer on Vark, but rather the user id on Vark. Thus, if Robert keeps giving inaccurate answers, he maintains a very low score in Swift, while Jeremy is viewed as the more trusted authority. Now, this approach assumes that Vark.com offers an API that allows for this type of data aggregation, which I don't think they currently do. Perhaps it's a question for the Vark team?
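As a rough sketch of the scoring idea described above (not Swift's actual algorithm), a source's score could simply drift up or down as its items are judged accurate or inaccurate. The SourceScore class and the numbers used here are illustrative assumptions only.

    # A toy illustration of source scoring; Swift's real algorithm is not shown here.
    class SourceScore:
        """Tracks how trustworthy a single source (e.g. a Vark user id) looks."""

        def __init__(self, start=50.0):
            self.score = start  # new, unknown sources begin at a neutral score

        def rate(self, accurate, weight=5.0):
            # Nudge the score up for an accurate item, down for an inaccurate one,
            # keeping it within a 0-100 band.
            self.score += weight if accurate else -weight
            self.score = max(0.0, min(100.0, self.score))

    sources = {"jeremy-vark-user-id": SourceScore(), "robert-vark-user-id": SourceScore()}
    sources["jeremy-vark-user-id"].rate(accurate=True)    # correct earthquake answer
    sources["robert-vark-user-id"].rate(accurate=False)   # inaccurate answer
    # Over time Jeremy trends toward "trusted" and Robert toward "untrusted".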