Dear Andre,
Let me first say that the algorithms still need tuning, so we are not sure we are doing the best we can, but here is the idea:
When a user of reputation 10 (for example) edits the page, the text that is added gets only trust 6 or so. It is not immediately considered high-trust, because others have not yet had a chance to vet it.
When a user of reputation 10 edits the page, the trust of the text already on the page rises a bit (over several edits, it would approach 10). This models the fact that the user, by leaving the text there, gave an implicit vote of assent.
The combination of the two effects explains what you are seeing. The goal is that even high-reputation authors can only lend part of their reputation to the text they create; community vetting is still needed to achieve high trust.
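To make the two effects concrete, here is a small Python sketch. It is only an illustration of the idea, not the actual code: the function names and the two coefficients (0.6 and 0.3) are made up, since the real coefficients still have to be learned.

    # Illustrative sketch of the two effects; coefficients are made up,
    # not the tuned values used in the real system.

    NEW_TEXT_FRACTION = 0.6   # new text starts at this fraction of the author's reputation
    VOTE_STRENGTH = 0.3       # how strongly an implicit "vote" pulls existing text upward

    def trust_of_new_text(author_reputation):
        """Effect 1: freshly added text gets only part of the author's reputation."""
        return NEW_TEXT_FRACTION * author_reputation

    def revised_trust(current_trust, author_reputation):
        """Effect 2: text left in place by an editor drifts toward that editor's
        reputation (never above it), modeling an implicit vote of assent."""
        if author_reputation <= current_trust:
            return current_trust  # in this sketch, only higher-reputation editors raise trust
        return current_trust + VOTE_STRENGTH * (author_reputation - current_trust)

    # A reputation-10 author adds text: it starts around trust 6.
    t = trust_of_new_text(10.0)
    print(f"new text: {t:.1f}")
    # Repeated edits by reputation-10 authors make the surviving text approach 10.
    for i in range(5):
        t = revised_trust(t, 10.0)
        print(f"after revision {i + 1}: {t:.1f}")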
Now, as I say, we must still tune the various coefficients in the algorithms via a learning approach, and there is a bit more to the algorithm than I describe above, but that's the rough idea.
Another thing I am pondering is how much a trust change should spill over paragraph or bullet-point breaks. I could easily change what I do, but I will first set up the optimization/learning: I want to have some quantitative measure of how well the trust algorithm behaves.
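To make that question concrete, the spillover could be parameterized roughly as below; again, the decay and damping numbers are made up for illustration, not what the current code does.

    # Illustrative only: how an edit's trust effect might attenuate with
    # distance and with each paragraph or bullet-point break it crosses.

    DISTANCE_DECAY = 0.9      # per-word attenuation of the effect
    BOUNDARY_DAMPING = 0.5    # fraction of the effect that crosses a block break

    def effect_at(base_effect, distance_in_words, boundaries_crossed):
        """How much of an edit's trust effect reaches a given piece of text."""
        return (base_effect
                * DISTANCE_DECAY ** distance_in_words
                * BOUNDARY_DAMPING ** boundaries_crossed)

    # An effect of 1.0 at the edit point, felt 10 words away
    # in the same paragraph vs. across one bullet-point break:
    print(effect_at(1.0, 10, 0))  # ~0.35
    print(effect_at(1.0, 10, 1))  # ~0.17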
Thanks for your careful analysis of the results!
Luca
On 7/30/07, Andre Engels andreengels@gmail.com wrote:
2007/7/29, Luca de Alfaro luca@soe.ucsc.edu:
We first analyze the whole English Wikipedia, computing the reputation of each author at every point in time, so that we can answer questions like "what was the reputation of author with id 453 at 5:32 pm of March 14, 2006". The reputation is computed according to the idea of content-driven reputation.
For new portions of text, the trust is equal to (a scaling function of) the reputation of the text author. Portions of text that were already present in the previous revision can gain trust when the page is revised by higher-reputation authors, especially if those authors perform an edit in proximity of the portion of text. Portions of text can also lose trust, if low-reputation authors edit in their proximity. All the algorithms are still very preliminary, and I must still apply a rigorous learning approach to optimize the computation. Please see the demo page for more details.
One thing I find peculiar is that adding text somewhere can lower the trust of the surrounding text while at the same time raising that of far-away text. Why is that? See for example
http://enwiki-trust.cse.ucsc.edu/index.php?title=Collation&diff=prev&...
- trust:6 text is added between trust:8 text, causing the surrounding text to go down to trust:6 or even trust:5, but at the same time improving text elsewhere in the page from trust:8 to trust:9. Why would the author count as low-reputation for the direct environment, but high-reputation farther away?
--
Andre Engels, andreengels@gmail.com
ICQ: 6260644 -- Skype: a_engels