Anthere wrote:
> Brion Vibber a écrit:
>
>> On Feb 15, 2004, at 19:44, Anthere wrote:
>>
>>> By the way, is it okay that I relicense all the images I offered
>>> wikipedia as copyright by Anthere with permission, with a link to my
>>> user page, instead of under gfdl ?
>>
>> You can choose to allow that use _as well as_ the GFDL you've already
>> agreed to. But, once given, I'm not sure you can withdraw the GFDL
>> permission without cause. (IANAL!)
>
Actually, I'm quite sure you can't withdraw it "without cause," and I
doubt you could find a reason to withdraw the license "with cause." The
text of the GFDL specifically states that the license is "unlimited in
duration". A GPL-type license is perpetual and cannot be withdrawn.
>> Keep in mind that if you were to withdraw the GFDL license, the
>> images would be removed from Wikipedia.
>
No, they shouldn't be. Since the license cannot be withdrawn, the images
stay. By uploading them to Wikipedia, you agree to the terms of the
license. The only images that should be withdrawn are those under
copyright, if the user did not have authority to grant a GFDL.
> I have doubts. Many images are indicated copyrighted, but that the
> author gave permission for use. And they are in Wikipedia.
The images may still be copyrighted. The GFDL and copyright are not
mutually exclusive. However, the author's permission must be GFDL, or
something less restrictive than the GFDL.
--Michael Snow
Tomasz Wegrzanowski wrote:
>There is very serious problem with this approach - the
>trust is not transitive at all, especially wrt POV issues.
Yeah it is - did you not read the part where I stated that users can trust by
proxy? That makes it transitive without any possibility of people gaming the
system through sock puppets.
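To make the "trust by proxy" idea concrete, here is a minimal sketch of how transitive trust could be computed; the names and data layout are invented for illustration, not an actual MediaWiki feature:

```python
# Hypothetical sketch: direct trust edges, extended transitively by proxy.
# "Trust by proxy" means you also trust everyone your trusted users trust,
# and so on -- simple reachability over the trust graph.

def trusted_set(user, direct_trust):
    """Return everyone `user` trusts, directly or via proxy."""
    seen = set()
    frontier = [user]
    while frontier:
        current = frontier.pop()
        for other in direct_trust.get(current, ()):
            if other not in seen:
                seen.add(other)
                frontier.append(other)
    return seen

direct_trust = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": set(),
    "sock1": {"sock2"},  # sock puppets trusting each other
    "sock2": {"sock1"},
}

# Alice reaches carol through bob, but the sock ring stays invisible to her:
print(trusted_set("alice", direct_trust))  # {'bob', 'carol'}
```

Because each user's view is computed only from their own outgoing edges, a ring of sock puppets trusting each other affects nobody outside the ring.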
-- mav
"Magnus Manske" <magnus.manske(a)web.de> schrieb:
> Once upon a time ;-) I implemented a checkbox system where the uploading
> party has to choose "GFDL/PD/fair use/something else" upon upload. Would
> be nice to automatically categorize images, or at least assign blame to
> a specific user ;-)
Good idea. I would also like another field, 'author' - that's the
second thing I think we should have: every image should have its copyright
status AND the name of the person or institution that either holds the
copyright or would hold it if the material were not in the public domain.
It's obligatory under the GNU FDL, it gives a 'fair use' defense a better
chance, it is required by many who otherwise allow usage, and it is at
least common courtesy in other cases.
Andre Engels
Anthere wrote:
> I see Alex756 left, probably because some here confused attacking
> the bearer of bad news with attacking the one responsible for the bad news
> I wonder if people think it is better now that we have no lawyer and
> reasonable man to help us
Having Alex756 leave is certainly regrettable, but it doesn't leave us
with no lawyers. As with any other institution, Wikipedia will always
have people coming and going. Some will lose faith in the project, or
their experiences will disillusion them. New people will also come
along. Nobody, except probably Jimbo, at least for right now, is
irreplaceable. A few others come pretty close.
--Michael Snow
Erik Moeller wrote:
>I see three possibilities:
>1) Waste much of our time arguing with Anthony about his FDL
>interpretation, possibly hire a lawyer, possibly go to court, possibly
>lose.
>2) Name clear conditions for how individual articles must be licensed. If
>Anthony refuses to comply, permanently ban him from editing Wikipedia.
>3) Do nothing.
>
>Option 2) seems the wisest to me. Anthony is clearly a troll who, like a
>vampire, wants to suck energy from us and our project. We should not give
>him what he wants. His "fork" seems to have no intention other than to
>piss people off. It will likely fade away into the darkness as he himself
>does.
>
>Either he adjusts his behavior and can be unbanned, or he does not and he
>cannot. Whatever he does, we win: We either get rid of a persistent troll,
>or we get our interpretation of the FDL confirmed.
>
>In my opinion, this is a decision Jimbo should make, as this is clearly a
>Wikimedia issue. I can only attribute his silence on the matter to him not
>having taken notice of the fork and related discussion yet.
>
I could also attribute it to Jimbo concluding that the fork is not
important enough to bother about. There's no significant harm to
Wikipedia, except when we waste time on option 1). Banning Anthony
doesn't prevent him from copying Wikipedia for his fork. If McFly is
likely to fade away, and I believe it is, then option 3) is the better choice.
Calling Anthony a troll, when he's complying with his interpretation of
the GFDL in good faith, seems like a good way to turn him into one.
--Michael Snow
Jimmy Wales wrote:
> Daniel Mayer wrote:
>
>>That would prevent any incentive to create sock puppets since my
>>selections only affect what *I* see and what the people who trust my
>>judgment see (if they set their preferences accordingly).
>>
>>
>
>I think that's really fascinating. The incentive only arises if the
>web of trust is "summed up" across different people to arrive at an
>overall "score".
>
>As long as we don't do that, there's no incentive for sock puppetry.
>
>So, hmm, why did I want to do it that way in the first place? Well, a
>"summed up" score could be really handy for certain types of decision
>making. It could provide people with feedback on their overall
>behavior.
>
>But the real point is just to find a way for us to scale better as the
>number of editors grows, to ensure that newcomers are assisted, that
>vandalism is properly watched for, etc.
>
>I like your idea a lot.
>
>--Jimbo
>
At the risk of me-too-ism, I think mav's web of trust concept at least
avoids most of the dangers I see in a feedback-based reputation system.
That alone is wonderful progress. I'm a little more skeptical about how
widely the concept would be adopted, but I'm also not that much of a
Recent Changes junkie, so maybe I'm missing the appeal. Anyway, the
web-of-trust system wouldn't have to be that widespread to be useful.
However, I would still avoid the pitfalls of generating a reputation
score for individual users, even using this framework. Better just to
let user X know that user Y trusts or distrusts X.
--Michael Snow
To prove seriousness (i.e. real love of Wikipedia) and
to prevent sock puppetry why not restrict the
allocation of trust to those users who have edited
x(500? 1000?) times?
I would think that this would resolve most problems.
You might have to make an exception for certain
current sysops who don't really edit but have proved
their worth to the community.
Sysop status would become automatic after x number of
edits, provided the user hadn't already accumulated
negative votes from existing trusted users. You could
keep the voting figures private until the trigger
figure is reached, to prevent gamesmanship, since
people would not know their voting score.
Over time, sysop status could be revoked if the votes
against a user became too heavily negative. For
questionable cases you could call a vote.
Conversely, good behaviour would lead to the regaining
of trust as people change their votes, and might lead
to the regaining of sysop status.
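The scheme above might be sketched roughly as follows; the thresholds, names, and exemption flag are all placeholders I invented for illustration, not part of the proposal:

```python
# Rough sketch of the proposal: sysop status is granted automatically
# after a fixed number of edits, unless votes from trusted users are
# too negative. All numbers are placeholders, not concrete proposals.

EDIT_THRESHOLD = 1000   # the "x" edits mentioned above
REVOKE_SCORE = -5       # weighted vote total that triggers revocation

def should_be_sysop(edit_count, votes, is_exempt=False):
    """votes: list of (vote, voter_weight) pairs, where vote is +1 or -1."""
    if is_exempt:                      # grandfathered sysops who rarely edit
        return True
    if edit_count < EDIT_THRESHOLD:    # not enough edits yet
        return False
    score = sum(vote * weight for vote, weight in votes)
    return score > REVOKE_SCORE

# A user past the threshold with mild opposition keeps sysop status:
print(should_be_sysop(1200, [(+1, 2.0), (-1, 1.0)]))  # True
# Heavily weighted negative votes revoke it:
print(should_be_sysop(1200, [(-1, 3.0), (-1, 3.0)]))  # False
```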
>
>So how's this: each user gets one "opinion" of each other user. The
>opinions can be changed at any time. An opinion is positive, negative,
>or neutral. The individual opinions are private, and accessible only
>by the users who hold them. This encourages honesty and avoids flame
>wars. Each user has a reputation rating based on other users' opinions
>of them. Opinions are weighted: opinions held by users with higher
>reputations have greater influence.
>
Wikipedia's first task as an encyclopedia is starting new articles and
improving existing ones. So we should encourage users to edit and start
articles by including these actions in calculating a user's reputation.
Perhaps this could be combined with the proposed article rating system,
with 'good'-rated articles contributing more to reputation than
'worthless'-rated ones.
Regards,
Andre
Jimmy Wales wrote:
> For us, we have nothing like a 'transaction', but the system could be
> generalized from that. Each person could assign positive or negative
> feedback to others.
...
> 2. Difficult to game -- it is NOT automated, so there's no way to
> game the system by engaging in repetitive actions to score points.
On the contrary it's very easy to game: create a bunch of sock puppet
accounts to give each other feedback. On E-Bay you'd have to go to the
trouble of faking some auctions to yourself, but not here...
-- brion vibber (brion @ pobox.com)
I am not advocating anything in this post, I'm just sharing some of my
thoughts over the past few days.
There are perennial discussions of trust metrics for things like
automatic sysopping and general "reputation management" system. It is
rightly pointed out (by me and many others!) that such systems are
difficult to design properly and often easy to "game". At the same
time, the hope is that a well-designed system would be scalable and
informative, while not oppressive or empowering of tyrants.
The system at slashdot is clearly broken, as many have said.
One system that I think does actually work, though, is the system at
Ebay. At Ebay, after every transaction, buyers and sellers can leave
feedback for each other, positive or negative. Whenever people want
to buy or sell something, they can look at the feedback of the
potential counterparty and see how much total feedback there is, how
many positive, how many negative.
For us, we have nothing like a 'transaction', but the system could be
generalized from that. Each person could assign positive or negative
feedback to others. Just a simple 'up' or 'down' rating with an
optional comment. Some people might give an 'up' rating to everyone,
some might give a 'down' rating to everyone, some might just abstain
totally.
But most would adopt a personal policy of giving mostly positives or
abstaining, reserving negatives for worst case scenarios.
Newcomers would have no rating at all, obviously. Very prominent
people would have lots of ratings, mostly positive I would have to
assume. I would probably have 95% positive rating, but not perfect,
since beloved though I am and obviously deserve to be (*wink*), I am a
target.
We'd likely see perfect positive ratings for people like Michael
Hardy, who keeps his nose to the grindstone editing topics that aren't
controversial, and who stays out of internal politics almost
completely as far as I know.
Some sysops have taken enormous and weighty responsibilities on
themselves to do important but controversial work like VfD or banning
trolls or mediating disputes or editing articles about the Middle
East. We'd naturally expect them to get mixed reviews, but we might
be surprised... lots of people would give them positive ratings just
for doing those jobs, acknowledging the difficulty and risk involved.
Some virtues of this concept:
1. Easy to program... it's just a single table in the database,
feedback, with 5 fields -- 'from', 'to', 'up/down', 'comment',
'timestamp'. (The timestamp is so old feedback can be expired. Maybe
positive votes expire after 1 year, negatives after 1 month, so as to
encourage more positivity!)
2. Difficult to game -- it is NOT automated, so there's no way to
game the system by engaging in repetitive actions to score points.
3. Easy for end users -- no complex system of approving or
disapproving of individual edits. You can just give someone a smile
or a frown, as you wish, when you wish, or not.
4. Rewards co-operativeness and friendliness and neutrality, because
to get a high rating, you have to please lots of people.
5. It would be a relatively simple matter to also calculate a
"weighted" score, where the weight is based on the raw scores of those
who have rated this user. What I mean is that if someone has a
perfect positive score, then their impact on the weighted score
calculations would be higher than the impact of someone with a high
negative score. I consider this weighted score to be optional and
possibly dangerous, but it is at least easy enough to do.
6. It *might* lead to a lot less demands and bickering. It isn't
uncommon for someone to write to the lists or to me personally asking
that such-and-such prominent wikipedian be banned or desysopped. This
is exhausting and divisive. Possibly instead we could have a system
where people have a way to express displeasure, and to privately
advocate that others express displeasure, but in a way that doesn't
involve long drawn out flamewars.
7. It is reflective of what we actually do in practice, i.e. it's
just a formalization of how we actually do operate. Everyone has an
opinion, and when I think about who is good and prominent, I think
about my *own* opinion, but I also think about the opinions of
*others*. But this can only be guessed at, not measured. Tim
Starling held a quick poll on whether 172 should be de-sysopped, and
when it went heavily in one direction, he followed through. If the
system I am talking about were in place, we'd instead just see 172's
positive rating start to evaporate.
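The single-table design in point 1, together with the expiry rule, could be sketched as follows. The five field names come from the post; everything else (types, the exact expiry windows as constants, function names) is my guess at one possible shape:

```python
import time
from dataclasses import dataclass

YEAR = 365 * 24 * 3600   # positive feedback expires after a year
MONTH = 30 * 24 * 3600   # negative feedback expires after a month

@dataclass
class Feedback:
    # The five fields named in point 1 ('from' is a Python keyword,
    # so the first two are suffixed here).
    from_user: str
    to_user: str
    up: bool          # True = 'up', False = 'down'
    comment: str
    timestamp: float

def live_feedback(rows, now):
    """Apply the expiry rule: positives last a year, negatives a month."""
    return [r for r in rows if now - r.timestamp < (YEAR if r.up else MONTH)]

def raw_score(rows, user, now):
    """Net up-minus-down count of unexpired feedback for `user`."""
    live = [r for r in live_feedback(rows, now) if r.to_user == user]
    return sum(1 if r.up else -1 for r in live)

now = time.time()
rows = [
    Feedback("alice", "bob", True, "helpful on VfD", now - 40 * 24 * 3600),
    Feedback("carol", "bob", False, "edit warring", now - 40 * 24 * 3600),
]
# The 40-day-old positive survives; the 40-day-old negative has expired:
print(raw_score(rows, "bob", now))  # 1
```

The asymmetric expiry is what makes the rule "encourage more positivity": a grudge ages out in a month, while goodwill lingers for a year.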
-------
Some possible downsides, and there are many...
1. Unintended consequences -- I'm imagining this working in one way,
but it might work in a totally different way in practice. Perhaps the
system would encourage some bad behaviors that don't happen now, while
at the same time not discouraging any of our current bad behaviors.
2. People might be dissuaded from taking controversial and brave
stands, if it's going to get them some negative feedback.
3. People might be incentivized to create sham accounts just to give
themselves positive feedback. This could be minimized, maybe, with a
second-order calculated metric which would take into account that
positive feedback from people with no feedback is essentially
meaningless.
4. Well, I thought up the system, so I'm having a hard time seeing
other downsides, but I'm sure they exist. :-)
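The second-order mitigation in point 3 (discounting feedback from accounts that have no feedback themselves, which also yields the weighted score of virtue 5) could look roughly like this; it is a sketch under invented names, not a worked-out design:

```python
def raw_score(ratings, user):
    """First-order score: net votes received, regardless of who gave them."""
    return sum(given.get(user, 0) for given in ratings.values())

def weighted_score(ratings, user):
    """Second-order score: each vote is scaled by the rater's own raw
    score (floored at zero), so votes from accounts with no feedback
    of their own carry no weight."""
    total = 0
    for rater, given in ratings.items():
        if user in given:
            total += given[user] * max(raw_score(ratings, rater), 0)
    return total

# ratings: rater -> {rated_user: +1 or -1}
ratings = {
    "alice": {"bob": +1},   # alice, herself well-rated, endorses bob
    "carol": {"alice": +1}, # carol endorses alice
    "sock":  {"bob": +1},   # fresh account with no feedback at all
}

print(raw_score(ratings, "bob"))       # 2: both votes count equally
print(weighted_score(ratings, "bob"))  # 1: only alice's vote has weight
```

A ring of socks rating each other still leaks a little weight at this depth; fully damping that would need the recursion carried further, which is exactly why the post calls the weighted score "optional and possibly dangerous".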
-------
At least initially, such a system should have *no* real-world
consequences. It would just be an indicator, which might be ignored
or not. That's the way Ebay is -- you can have a pretty mediocre
rating, and it doesn't really affect anything automatically. It may
inform others, though.
But with experience, we would surely organically come to some customs.
After some minimum number of feedbacks, with some percentage of them
positive, people could be sysopped. When feedback gets sufficiently
negative, people could be desysopped.
--Jimbo