On Fri, Jun 15, 2012 at 9:51 PM, ENWP Pine <deyntestiss(a)hotmail.com> wrote:
Hi Nathan,
For a moment, let's suppose that there is a global policy that all CU checks must be disclosed to the person being checked, with the information disclosed in private email and consisting only of the date of the check and the user who performed the check. What benefit does this have for the user who was checked? This information doesn't make the user more secure, it doesn't make the user's information more private, and there are no actions that the user is asked to take. Perhaps there is a benefit, but I am having difficulty thinking of what that benefit would be. I can think of how this information would benefit a dishonest user, but not how it would benefit an honest user. If there is a valuable benefit that an honest user receives from this information, what is it?
Thanks,
Pine
Pine: As you have said, checkuser oversight comes from AUSC, ArbCom and the ombudspeople. These groups typically respond to requests and complaints (well, the ombuds commission typically doesn't respond at all). But you only know to make a request or complaint if you know you've been CU'd. So notifying people that they have been CU'd would allow them to follow up with the oversight bodies. My guess is most would choose not to, but at least some might have a reason to. It's also plain that even if there is no recourse, people will want to know if their identifying information has been disclosed.
Hi Nathan,
Thanks, I think I understand your points better now. Let me see if I can
respond. I'm not a Checkuser or CU clerk, and I am commenting only from my
limited ability to get information as an outsider.
If we notify all users who have been CU'd as we are discussing, what I
speculate will happen is an increase in the volume of people who contact the
CU who used the tool, their local AUSC or ArbCom, other local CUs, OTRS, and
the ombudsmen. This will increase the workload of emailed questions for the
CU who used the tool and anyone else who might be contacted. This increase
in workload could require an increase in the number of people on AUSC or other audit groups who have access to the tool in order to supervise the CUs doing the front-line work, and that increase in the number of CUs makes it more likely that a bad CU slips through.
Another problem that I foresee is that if a user appeals the original
CU decision to another CU or any group that audits CUs, then the user is put
in the position of trusting that whoever reviews the first CU's work is
themselves trustworthy and competent. The user still doesn't get the
personal authority to review and debate the details of the CU's work. Since
my understanding is that CUs already check each other's work, I'm unsure that an increase in inquiries and appeals to supervisory groups would lead to a meaningful improvement in CU accuracy or data privacy compared to the current system.
So, what I foresee is an increase in workload for audit groups, but little
meaningful increase to the assurance that the CU tool and data are used and
contained properly. Additionally, as has been mentioned before, I worry
about the risk of giving sockpuppets additional information that they might
be able to use to evade detection.
I agree with you that there might be bad CUs in the current system, although
personally I haven't heard of any. Where I think we differ is on the
question of what should be done to limit the risk of bad CUs while balancing
other considerations. At this point, I think the available public evidence
is that there are more problems with sophisticated and persistent
sockpuppets than there are problems with current CUs. I hope and believe
that current CUs and auditors are generally honest, competent, and vigilant
about watching each other's work.
Pine
I do hear and understand the argument here, but it is somewhat
problematic to have to make the argument "if we do this, we'll be
handing over information to sockpuppeteers we don't want them to have,
and we can't tell you what that information is, because otherwise
we'll be handing over information to sockpuppeteers we don't want them
to have". While I think the methods currently used are probably sound,
and the information would indeed give them more possibilities to evade
the system, I can't be sure of it, because I can't be told what that
information is.
I don't think this is a viable long-term strategy. The Audit Committee is a way around this, but as indicated before, there is somewhat of an overlap between the committee and the CheckUser in-crowd, which could compromise its independence as an auditor (again, could; I'm not sure if it is indeed true).
Apart from the 'timed release' of information I proposed earlier, I
don't really see a viable solution for this, as I doubt we have enough
people that are sufficiently qualified on a technical level to
actually judge the checkuser results, who also have enough statistical
knowledge to interpret the level of certainty indicated in a result,
who also have the trust of the community to carry out the task, who
also have never been a checkuser or arb, who also have the backbone to
blow the whistle if something goes wrong, who also have the
willingness and time to take it upon themselves to be a meaningful
member of the Audit Committee.