I'm involved with AcaWiki, so I'll start answers to your questions here. Hopefully
others will comment, too.
<Resending from the subscribed address...>
On 23 Mar 2011, at 23:49, Reid Priedhorsky wrote:
On 3/22/11 4:28 PM, Chitu Okoli wrote:
There also appear to be various options for Semantic MediaWiki hosting:
Wikia, Referata, etc. It would be nice to not have to deal with the
sysadmin aspects of the project.
I agree that a reliable host is the way to go. I think that, given the
nature of our project, a paid Referata plan would probably be better
than Wikia. I for one could probably easily find grant funding to keep
it going.
Sure. If nothing else I'd be happy to chip in personally. I could also
ask around for funding here at IBM, but I'm quite pessimistic on that.
Paid plans run from $240 to $960/year, and we could certainly get
started for free (http://www.referata.com/wiki/Referata:Features).
I'm not ready to write off AcaWiki, but I have a number of significant
concerns. Some of these I've mentioned before. I'd really like someone
from that project to comment on these.
* Is the project dead? The mailing list is pretty much empty and the
amount of real editing activity in the past 30 days is pretty low.
Definitely not dead!
* It appears that the project self-hosts - this means that the project
has to do its own sysadmin work,
Neeru & Mike, can you comment on who's doing sysadmin work now?
which appears to have been a problem
(e.g., the domain expired earlier this month and no one noticed until
the site went down!).
* Is the target audience correct? I think we want to specifically target
our annotated bibliography to researchers, but AcaWiki appears to be
targeting laypeople as well as researchers (and IMO it would be very
tricky to do both well).
The main interest, from my perspective (others may be able to add their own), is in making
research more accessible. Several AcaWiki users are grad students who are writing
summaries in order to consolidate their own knowledge or prepare for qualifier exams.
Asking on the
* I don't think the focus on "summaries" is right. I think we need a
structured infobox plus semi-structured text (e.g. sections for
contributions, evidence, weaknesses, questions).
I agree! Right now there's some structured information, but that could be readily
changed. I'm definitely open to restructuring AcaWiki, so do propose this on the
mailing list (acawiki-general(a)lists.ibiblio.org), and we can discuss further.
One ongoing issue is the best way to handle bibliographic information,
which has subtle complexities that we're only partly handling now.
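To make the proposal concrete, a summary page with a structured infobox
plus the suggested sections might look something like this in SMW
markup (the template and field names here are hypothetical, not
AcaWiki's current ones):

```wikitext
{{Paper summary
 |title   = Example Paper Title
 |authors = A. Author; B. Author
 |year    = 2011
 |venue   = Example Conference
 |doi     = 10.1145/0000000.0000000
}}

== Contributions ==
...

== Evidence ==
...

== Weaknesses ==
...

== Open questions ==
...
```

The infobox fields could then be backed by SMW properties, which is
what makes the bibliographic data queryable and exportable.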
* It doesn't look like a MediaWiki. Since the MW software is so
dominant, that means pretty much everyone who knows about editing wikis
knows how to use MW - and not looking like MW means there's no immediate
"aha! I can edit this". There's a lot of value in familiarity.
Actually, AcaWiki uses MediaWiki -- specifically Semantic MediaWiki. For full details,
I will post an invitation on the AcaWiki mailing list to come here and
A note on bibliographic software: many of these packages claim to do
automatic import of a reference simply by pointing the software at the
publisher's web page for the reference. But I have never seen this work
correctly; the imported data always needs significant cleanup, enough
that personally I'd rather type it in manually anyway. For example,
titles of ACM papers aren't even correctly cased on the official ACM
pages.
My only experience with "scraping" pages is with Zotero, and it does it
beautifully. I assume (but don't know) that the current generation of
other bibliography software would also do a good job. Anyway, Zotero has
a huge support community, and scrapers for major sources (including
Google Scholar for articles and Amazon for books) are generally kept
well up to date.
Perhaps I'm just unlucky, then - I've only ever tried it on ACM papers
(which it handled poorly, so I stopped).
Zotero used to scrape quite well from the ACM digital library -- now that they've
changed their site again the scraper needs to be updated (not hard to do). Last time I
tried, Zotero scraped OK from certain ACM pages (item pages) but not from search results.
Bi-directional synchronization is hard to get right, particularly when
the two sides have different data models. I think we are much better
off declaring one side the master and keeping the rest read-only
(i.e., export rather than synchronization).
I like this idea; with SMW as the primary, editable source, a read-only
Zotero library imported from the SMW would work well. The problem,
though, is that duplicate detection would need to prevent imports from
adding existing articles. A complete overwrite would not work, since
this would break article IDs for word processor integration. Zotero has
been slow to implement duplicate detection, but they finally have a
very impressive solution in alpha.
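As a toy illustration of the one-way flow, the import step could skip
records whose key already exists rather than overwriting the library
wholesale (the function and field names below are made up for the
sketch, not Zotero's actual API):

```python
def import_new(master_records, existing_keys):
    """One-way export from the master (SMW) into a read-only library:
    add only records whose key is not already present, so existing
    items - and the word-processor links that point at them - are
    never disturbed by a re-import."""
    added = []
    for rec in master_records:
        if rec["key"] not in existing_keys:
            existing_keys.add(rec["key"])
            added.append(rec)
    return added
```

A wholesale overwrite would be simpler, but as noted above it would
break article IDs; incremental import preserves them at the cost of
needing reliable keys for duplicate detection.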
I don't know anything about how article IDs work in Zotero, but how to
build a unique ID for each item is an interesting, subtle, and
important problem. Others have suggested using opaque IDs such as DOIs.
I think this is a mistake, because it means the IDs are utterly meaningless to
people when creating citations. For example, consider the following two
citations that I might put in my LaTeX code.
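For instance (both keys here are made up for illustration):

```latex
\cite{10.1145/1234567.1234568}  % opaque, DOI-style key
\cite{smith2009wiki}            % author-year-word key
```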
The first means nothing to me, but the second is a useful reminder as to
the paper I'm citing. That's what CiteULike does: its keys are built
from the first author, the year, and the first meaningful word of the
title. In the tiny
percentage of cases where this is not unique, a disambiguation digit
could be added.
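That scheme is simple enough to sketch in Python (the stopword list and
all names below are my own invention, not CiteULike's actual
implementation):

```python
import re

# Words skipped when picking the "first meaningful word" of a title.
STOPWORDS = {"a", "an", "and", "the", "on", "of", "in", "for", "to", "is"}

def cite_key(surname, year, title, existing=None):
    """Build a CiteULike-style key: first author's surname + year +
    first meaningful title word, appending a disambiguation digit
    when the key collides with one already in `existing`."""
    words = re.findall(r"[a-z]+", title.lower())
    word = next((w for w in words if w not in STOPWORDS), "untitled")
    key = f"{surname.lower()}{year}{word}"
    if existing is None or key not in existing:
        return key
    n = 2
    while f"{key}{n}" in existing:
        n += 1
    return f"{key}{n}"
```

So `cite_key("Smith", 2009, "The Wiki Way")` yields `smith2009wiki`,
and a colliding second entry would get `smith2009wiki2`.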
I don't know how citations work in Word et al., but I would hope you're
not stuck with opaque numeric IDs and/or that Zotero doesn't force you
to use integers or something like that.
Wiki-research-l mailing list