ok, here's my dispassionate opinion. I'm posting on the list because the problems I refer to below are not specific to this project but apply to many other requests.
We have been receiving a *lot* of SR requests from students over the last few months and I think we should start deciding more aggressively what research is likely to have an impact and what research won't produce any major tangible results.
I really like the topic of the Anonymity and Conformity study, but I have several concerns about the solidity of the current proposal:
• we would be approving, for the first time, a large-scale recruitment approach via user talk pages for a student project; we should only do this if there's a good reason.
• the advisor of the proponent doesn't seem to be involved at all in this project and is not even named in the proposal. Aaron asked the proponent to share the name of his supervisor in September, but he hasn't done so (yet?)
• the proponent says that no funding is supporting this research and that this study is "conducted with the author's own efforts"
• no one other than the applicant will be involved in the data collection and analysis, and the proponent doesn't seem to have an established research record
• there is no trace in this proposal of approval by an ethics committee. The proponent says this is not applicable (and it's true that IRB policy differs considerably between the US and other countries), but some official record would help us assess the credibility of the proposal.
for these reasons, I am hesitant to blindly approve this request. I would be happy to give my approval conditional on having (at least) the student's supervisor involved. I actually think this would be a sensible rule to apply to *all* SR proposals submitted by students.
I hope it's clear that I don't want to restrict research to top researchers with big grants, but we also cannot afford to spend time, effort, and community attention on projects of unclear scientific value or interest.
I'd love to hear your thoughts on this.
Dario
On Nov 16, 2011, at 12:30 PM, Aaron Halfaker wrote:
I was hoping to close this poll hours ago, but we only have three members of RCom participating (thanks Yaroslav and Steven!).
It is absolutely crucial that, if we end up technically approving this study methodology, such approval actually reflects the consensus of RCom members.
For your benefit, I'll summarize the proposed plan:
A request to participate in a survey about enforcing conformance with community/group outcomes needs 200-300 responses from general Wikipedia editors. Invitations to take the survey will be posted on editors' User_talk pages. A pilot set of 15 requests will be posted immediately following approval from RCom to test for problems and determine the expected response rate. Afterwards, up to 500 User_talk postings will be made (depending on response rate) to elicit enough responses to give statistical confidence.
This is the first proposed project of this scale that we are reviewing for approval so I really want to make sure we are doing it right.
-Aaron
_______________________________________________
RCom-l mailing list
RCom-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/rcom-l