Just a few comments adding on to Jodi and Reid's recent posts:
Neeru & Mike, can you comment on who's doing sysadmin work now?
My point here is: I would like to depend on pros for the sysadmin work,
rather than volunteers, because there's no need for us to be sysadmins.
Let the experts be expert on what they're expert in and all that.
Bottom line: right now I'm not persuaded that the AcaWiki hosting
situation is stable. The key example is letting the domain expire and
the apparent lack of access to someone who can fix it (see
http://lists.ibiblio.org/pipermail/acawiki-general/2011-March/000021.html and
http://code.creativecommons.org/issues/msg2778).
I don't know about the history of AcaWiki stability, but I strongly
agree in principle that this project (whether we go with AcaWiki or
WikiScholar or whatever) requires a reliable solution where someone
is paid for sysadmin work to keep it stable.
The main interest, from my perspective (others may be able to add their
own), is in making research more accessible. Several AcaWiki users are
grad students who are writing summaries in order to consolidate their
own knowledge or prepare for qualifying exams.
OK, that's somewhat different from the goals being proposed in this thread.
I think that's a problem, but perhaps a surmountable one if different
communities can have different standards for their papers. We (or I)
need to be able to focus on writing "summaries" aimed at other
researchers; if someone else wants to come along and add additional
summaries for laypeople, that's fine. But (for example) if other people
start rewriting our lit review text because it's too technical, I don't
think it will work out.
Actually, I feel grad student summaries are an excellent
contribution. Although they might not be perfect, grad student
seminar assignments would probably be the largest single source of
stub articles. Multiply that by every academic field that requires
grad students to summarize articles, and I think promoting the wiki
as an outlet for grad student work would be the single most
effective strategy to make it huge in just one or two years. I, for
one, very much see grad students as a major contributing community.
* It doesn't look like a MediaWiki. Since the MW software is so
dominant, pretty much everyone who knows about editing wikis knows how
to use MW - and not looking like MW means there's no immediate
"aha! I can edit this" moment. There's a lot of value in familiarity.
Actually, AcaWiki uses MediaWiki -- specifically Semantic MediaWiki.
Right; what I meant was that while AW does use MW it doesn't *look like*
it does, and that's a barrier to entry, which matters. The default skin
needs to look more like default MediaWiki.
Actually, I don't agree with Reid on this point. Appearance is very
much a subjective issue. Here's my purely subjective opinion:
* I find it irritating that hundreds or thousands of MediaWiki
instances all look like Wikipedia, as if MediaWiki didn't
have any skinning flexibility. (I'm assuming that when Reid says
"look like the default MediaWiki", what he effectively means is
"look like Wikipedia"; Reid, please correct me if I'm
misunderstanding you.)
* I like the AcaWiki interface; I wouldn't want to change it to look
like Wikipedia.
Less subjectively, I don't think the appearance is a significant
barrier to entry. Saying "it works just like Wikipedia" should be
enough to communicate that the wiki markup is familiar.
My only experience with "scraping" pages is with Zotero, and it does it
beautifully. I assume (but don't know) that the current generation of
other bibliography software would also do a good job. Anyway, Zotero has
a huge support community, and scrapers for major sources (including
Google Scholar for articles and Amazon for books) are kept very well up
to date for the most part.
> Perhaps I'm just unlucky, then - I've only ever tried it on ACM papers
> (which it failed to do well, so I stopped).
>> Zotero used to scrape quite well from the ACM digital library -- now
>> that they've changed their site again the scraper needs to be updated
>> (not hard to do). Last time I tried, Zotero scraped ok from certain
>> ACM pages (item pages) but not from search results: YMMV.
>> -Jodi
That's probably the issue. We were also stuck for a while when ACM's
reformatting of their page structure broke the Zotero translators.
For our review, Mohamad on our team had to rewrite the ACM Zotero
translator from scratch. I think the problem has since been fixed in
the official Zotero translator package, though. So, Reid, you probably
just caught it at a bad time. Unfortunately, Zotero translators
routinely break whenever publishers reformat their page structures.
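
For anyone wondering why they break so easily: a translator is a small
JavaScript file whose two entry points pick metadata out of the
publisher's HTML. Here's a minimal, hypothetical sketch (the
"citation_title" meta tag and the selectors are illustrative
assumptions, not ACM's actual markup; a real translator also carries a
JSON metadata header and handles things like search-result pages):

    // Minimal sketch of a Zotero translator's two entry points.
    // The selectors here are illustrative assumptions; a real
    // translator is tied to one publisher's actual markup, which is
    // exactly why it breaks whenever that markup changes.
    function detectWeb(doc, url) {
        // Decide whether this page holds a recognizable item.
        if (doc.querySelector("meta[name='citation_title']")) {
            return "journalArticle";
        }
        return false;
    }

    function doWeb(doc, url) {
        var item = new Zotero.Item("journalArticle");
        // Pull each field out of the page's metadata tags.
        item.title = doc.querySelector("meta[name='citation_title']")
                        .getAttribute("content");
        item.url = url;
        item.complete(); // hand the finished item back to Zotero
    }

So the fragility is inherent in scraping publisher pages, not a flaw
in Zotero itself.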
~ Chitu