Hi everyone,
I've been hacking on a new tool and I thought I'd share what (little) I have so far to get some feedback and learn about related approaches from the community.
The basic idea is a browser extension that tells the user whether the page they are currently viewing looks like a good reference for some Wikipedia article, limited to a whitelist of domains such as news websites. This would hopefully prompt casual/opportunistic edits, especially for articles that would normally be overlooked.
As a proof of concept for a backend, I built a simple bag-of-words model over the TextExtracts of the articles in enwiki's Category:All_articles_needing_additional_references. I then set up a tool [1] that receives HTML input and returns the 5 most similar articles. You can try it out in your browser [2] or on the command line [3]. The results could definitely be better, but having tried it on a few different articles over the past few days, I think there's some potential here.
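For the curious, the pipeline is roughly along the lines of the sketch below. This is a simplified illustration rather than the actual code: the file name is made up, and scikit-learn with TF-IDF weighting is just my shorthand for "simple bag-of-words model plus cosine similarity".

    import json
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # extracts.json is a hypothetical dump of {title: plain-text extract}
    # for the articles in Category:All_articles_needing_additional_references,
    # e.g. fetched via the TextExtracts API (prop=extracts&explaintext=1).
    with open("extracts.json") as f:
        extracts = json.load(f)

    titles = list(extracts)
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(extracts[t] for t in titles)

    def most_similar(page_text, n=5):
        # Rank every article extract by cosine similarity to the input text
        # and return the n best (title, score) pairs.
        scores = cosine_similarity(vectorizer.transform([page_text]), matrix)[0]
        return sorted(zip(titles, scores), key=lambda p: p[1], reverse=True)[:n]

TF-IDF is only one flavour of bag of words; raw term counts work too, but the weighting helps keep common boilerplate words from dominating the similarity scores.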
I'd be interested in hearing your thoughts on this. Specifically:
* If such a backend/API were available, would you be interested in using it for other tools? If so, what functionality would you expect from it?
* I'm thinking of throwing away the proof of concept above and using Elasticsearch instead, though I don't know it well (a rough idea of the kind of query I have in mind is sketched after this list). Is anyone aware of a similar dataset that already exists there, by any chance? Or any reasons not to go that way?
* Any other comments on the overall idea or implementation?
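For reference, I imagine the Elasticsearch route would boil down to a more_like_this query over an index holding the same extracts, something like the untested sketch below. The index name "needing-refs" and the field names "extract" and "title" are made up for illustration.

    import requests

    def most_similar(page_text, n=5):
        # Hypothetical index "needing-refs": one document per article,
        # with the plain-text extract stored in a field called "extract"
        # and the article title in a field called "title".
        query = {
            "size": n,
            "query": {
                "more_like_this": {
                    "fields": ["extract"],
                    "like": page_text,
                    "min_term_freq": 1,
                    "min_doc_freq": 1,
                }
            },
        }
        resp = requests.post("http://localhost:9200/needing-refs/_search",
                             json=query)
        return [hit["_source"]["title"] for hit in resp.json()["hits"]["hits"]]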
Thanks!
--
Guilherme P. Gonçalves