For example, it's hard to tell from your description whether you are doing anything different from the wikiwho API with respect to tracking content historically.
FWIW, wikiwho also tracks exactly when a token appeared, disappeared, and reappeared, including whether it was a reintroduction, a repeated delete, etc. We also added the calculation of relationships between revisions (and, in aggregation, between editors), which is the data used in the whoVIS visualization. It's all available in WikiwhoRelationships.py at . The API, however, so far only delivers information about provenance (first appearance and authors), but in time we will add some parameters to retrieve that information as well.
Further, the work I have been doing with diff-based content persistence (e.g. ) is not so simplistic that it fails to notice removals and re-additions in most circumstances.
In my opinion, this is a much better basis for measuring the productivity of a contribution (adding content that resembles content that was removed long ago is still productive, isn't it?).
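To make the distinction concrete, here is a minimal sketch (not the wikiwho implementation; all names are hypothetical) of token-level provenance tracking: each token keeps its earliest author plus a log of appearance/disappearance events, so a re-added token is recognized as a reintroduction rather than fresh content.

```python
def track_provenance(revisions):
    """revisions: list of (author, tokens) pairs, oldest first.
    Returns {token: {"author": first_author, "events": [...]}}."""
    history = {}
    previous = set()
    for i, (author, tokens) in enumerate(revisions):
        current = set(tokens)
        # Tokens present now but not before: new additions or reintroductions.
        for token in current - previous:
            if token in history:
                history[token]["events"].append(("reintroduced", i, author))
            else:
                history[token] = {"author": author,
                                  "events": [("added", i, author)]}
        # Tokens present before but gone now: removals.
        for token in previous - current:
            history[token]["events"].append(("removed", i, author))
        previous = current
    return history

revs = [
    ("alice", ["the", "cat", "sat"]),
    ("bob",   ["the", "cat"]),         # "sat" removed
    ("carol", ["the", "cat", "sat"]),  # "sat" re-added
]
hist = track_provenance(revs)
# "sat" stays attributed to alice, its original author,
# even though carol re-added it
```

A real system would of course diff token *sequences* (positions matter, duplicates matter) rather than sets; this only illustrates why a reintroduction should not be credited as new authorship.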
I strongly agree that more qualitative analysis of the algorithm outputs is necessary, as the problem is not trivial in all cases (as can be seen from our results in , where we compared wikiwho with one instantiation of A3). I'm also not aware of any other evaluation than the one we did in the wikiwho paper. But with Wiki Labels (as far as I understand it), we now have a great tool to do more human assessment of provenance and content persistence.
Regardless, it seems that a qualitative analysis is necessary to determine whether these differences matter and whether one strategy is better than the other. AFAICT, the only software that has received this kind of analysis is wikiwho (discussed in ).
On 22.08.2015, at 17:01, Aaron Halfaker <email@example.com> wrote:
No worries. Glad to have your code out there. In a lot of ways, this mailing list is a public record, so I wanted to make sure there was a good summary of the state to accompany your announcement. I meant it when I said that I'm glad you are working in this space and I look forward to working with you. :)
On Sat, Aug 22, 2015 at 7:26 AM, Luca de Alfaro <firstname.lastname@example.org> wrote:
Sorry, I meant to say: if there is interest in the code for the Mediawiki extension, let me know, and _we_ will clean it up and put it on github (you won't have to clean it up :-).

Luca
On Sat, Aug 22, 2015 at 7:25 AM, Luca de Alfaro <email@example.com> wrote:
Thank you Federico. Done.
BTW, we also had code for a Mediawiki extension that computed this in real time. That code has not yet been cleaned up, but it is available from here: https://sites.google.com/a/ucsc.edu/luca/the-wikipedia-authorship-project

If there is interest, I don't think it would be hard to clean it up and post it to github. The extension uses the edit hook to attribute the content of every new revision of a wiki page, using the "earliest plausible attribution" idea & algorithm we used in the paper.
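As a rough illustration of the idea (this is my own hypothetical sketch, not the extension's actual code or the exact algorithm from the paper): on each new revision, every token is attributed to the earliest prior revision in which it plausibly appeared, so re-added text keeps its original attribution.

```python
def attribute(new_tokens, prior_revisions):
    """prior_revisions: list of (rev_id, tokens), oldest first.
    Returns {token: rev_id or None}; None means the token is new
    in the current revision."""
    attribution = {}
    for token in new_tokens:
        attribution[token] = None
        for rev_id, tokens in prior_revisions:
            if token in tokens:
                attribution[token] = rev_id  # earliest match wins
                break
    return attribution

prior = [(1, ["hello", "world"]), (2, ["hello"])]
attr = attribute(["hello", "world", "again"], prior)
# "world" is attributed to revision 1 even though revision 2 dropped it;
# "again" has no prior match, so it is new in the current revision
```

In an edit-hook setting this would run incrementally against stored attribution state rather than rescanning all prior revisions, and "plausible" matching would involve more than exact token identity, but the earliest-match rule is the core.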
On Sat, Aug 22, 2015 at 12:20 AM, Federico Leva (Nemo) <firstname.lastname@example.org> wrote:
Luca de Alfaro, 22/08/2015 01:51:
So I got inspired, and I cleaned up some code that Michael Shavlovsky
and I had written for this:
Great! It's always good when code behind a paper is published, it's never too late.
If you can please add a link from wikipapers: http://wikipapers.referata.com/wiki/Form:Tool
Wiki-research-l mailing list