On AcaWiki:
This sounds in line with AcaWiki's larger goal, and the small
community there is generally open to new ideas about how to structure
pages and data. I also think the project would be appropriate as a
Wikimedia project, which would address many of the self-hosting issues
and tie into similar work on a WikiScholar project. No need to have
multiple tiny projects when a single one would do.
I think we want to specifically target
our annotated bibliography to researchers, but AcaWiki appears to be
targeting laypeople as well as researchers (and IMO it would be very
tricky to do both well).
You could allow each biblio page to decide who its audience is. If
there is ever a conflict between a lay and a specialist audience, you
can have two sets of annotations. I'd like to see this happen in
practice before optimizing against it.
* I don't think the focus on "summaries"
is right. I think we need a
structured infobox plus semi-structured text (e.g. sections for
contributions, evidence, weaknesses, questions).
Again, I think either could be appropriate for a stub bibliography
page, and a great one would include both a summary and structured
sections with infobox data. [AcaWiki does like infobox-style
structure.]
* It doesn't look like a MediaWiki. Since the MW
software is so
This is easy to fix -- people who like the current AcaWiki look can
use their own skin.
On Data-scraping and WikiScholar parallels:
My only
experience with "scraping" pages is with Zotero, and it does it
beautifully. I assume (but don't know) that the current generation of
other bibliography software would also do a good job. Anyway, Zotero has
a huge support community, and scrapers for major sources (including
Google Scholar for articles and Amazon for books) are, for the most
part, kept well up to date.
Perhaps I'm just unlucky, then - I've only ever tried it on ACM papers
(which it handled poorly, so I stopped).
Brian Mingus, who is working on WikiScholar (another related project
which may be suitable), has a great deal of experience with scraping,
both via APIs and otherwise, and that experience is the foundation of
his effort.
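To make the API side of that concrete, here is a rough sketch in
Python of pulling basic metadata for a DOI from the public CrossRef
REST API. This is only an illustration (CrossRef stands in for
whatever sources a real tool would use; it is not how Zotero or
WikiScholar actually fetch records), but it shows how little
"scraping" is needed once a source exposes an API.

import json
import urllib.parse
import urllib.request

def fetch_crossref_metadata(doi):
    # Look up basic bibliographic metadata for a DOI via the public
    # CrossRef REST API and reduce it to a few flat fields.
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)["message"]
    return {
        "title": (record.get("title") or [""])[0],
        "authors": [" ".join(filter(None, [a.get("given"), a.get("family")]))
                    for a in record.get("author", [])],
        "year": (record.get("issued", {}).get("date-parts") or [[None]])[0][0],
        "journal": (record.get("container-title") or [""])[0],
        "doi": record.get("DOI"),
    }

# e.g. fetch_crossref_metadata("10.1000/xyz123")  # hypothetical example DOI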
I don't know how article IDs work in Zotero, but how to build a
unique ID for each article is an interesting, subtle, and important
problem.
This is important, and has also been discussed elsewhere. Some of
this discussion would be appropriate here:
http://meta.wikimedia.org/wiki/Talk:WikiScholar
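Since that page mostly covers the policy side, a sketch of the
mechanics may help here too: one common approach is to prefer a
canonical external identifier (a DOI, PubMed ID, etc.) when the
record has one, and otherwise fall back to a fingerprint of
normalized title / authors / year. The sketch below is only meant to
make the subtleties concrete (normalization, collisions, records with
no external ID); it is not a proposal for how WikiScholar or AcaWiki
should assign IDs.

import hashlib
import re
import unicodedata

def normalize(text):
    # Lowercase, strip accents and punctuation, collapse whitespace.
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    text = re.sub(r"[^a-z0-9 ]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def article_id(doi=None, title="", authors=(), year=None):
    if doi:
        # DOIs are case-insensitive, so normalize before using one as a key.
        return "doi:" + doi.strip().lower()
    # Fallback: fingerprint of normalized metadata. Two records for the same
    # paper map to the same ID only if they normalize identically -- which is
    # exactly where the subtlety lives.
    key = "|".join([
        normalize(title),
        ",".join(sorted(normalize(a) for a in authors)),
        str(year or ""),
    ])
    return "meta:" + hashlib.sha1(key.encode("utf-8")).hexdigest()

# e.g. article_id(title="A Theory of Everything",
#                 authors=["A. N. Example"], year=1999)  # hypothetical record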
--
Samuel Klein identi.ca:sj w:user:sj +1 617 529 4266