On Tue, Apr 21, 2009 at 10:25, Daniel Kinzler <daniel@brightbyte.de> wrote:
> Magnus Manske wrote:
>> All in all, it would be much better integrated directly into MediaWiki (no need for text retrieval/parsing, no bulk updates). But I've been saying that for years; at least this is a first attempt.
> Actually, this is part of my grand plan for world domination. I'm pushing for it behind the scenes... I have a few ideas on how it may be done nicely.
> I think the main problem is that Semantic MediaWiki looks like the obvious answer. But I doubt it is. I only want a small subset of that functionality on Wikipedia. Maybe SMW can be chopped up to fit that, but I'm personally more inclined to extend the RDF extension to store triples in the DB.
I'm pretty new to MediaWiki and I'm not sure if I understand this correctly... Here's my attempt at spelling it out in a bit more detail:
When a user edits a page and sends the new text to the server, the server (or rather, the RDF extension) parses the text, extracts the desired data, and saves it in an RDF store.
I hope I got that about right - please correct me if not!
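If it helps, here is how I picture that step, as a little Python sketch. The {{#set: property = value}} markup and the triples table are just made up for illustration; I don't know the RDF extension's actual syntax or schema:

    import re
    import sqlite3

    # Invented {{#set: property = value}} markup, just for illustration.
    TRIPLE_RE = re.compile(r"\{\{#set:\s*([^=|}]+?)\s*=\s*([^|}]+?)\s*\}\}")

    def on_page_save(page_title, wikitext, db):
        """On save: pull property/value pairs out of the new wikitext and
        replace the page's triples in a simple subject/predicate/object table."""
        cur = db.cursor()
        cur.execute("DELETE FROM triples WHERE subject = ?", (page_title,))
        for prop, value in TRIPLE_RE.findall(wikitext):
            cur.execute(
                "INSERT INTO triples (subject, predicate, object) VALUES (?, ?, ?)",
                (page_title, prop, value),
            )
        db.commit()

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE triples (subject TEXT, predicate TEXT, object TEXT)")
    on_page_save("Berlin", "Berlin is in {{#set: country = Germany }}.", db)
    print(db.execute("SELECT * FROM triples").fetchall())
    # -> [('Berlin', 'country', 'Germany')]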
Now, when I think about the pros and cons of having this process run inside MediaWiki versus on a different server, a few questions come up... again, I'm new to MediaWiki, so these may be newbie questions... :-)
How much parsing does MediaWiki currently do when it stores new text for an article? Are templates expanded / transcluded?
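(One way I could poke at this from the outside: api.php has an expandtemplates module that shows what fully expanded wikitext looks like, though it doesn't tell me what happens internally at save time. A minimal, untested sketch, assuming the standard api.php endpoint:)

    import requests

    # Ask api.php to expand templates/magic words in a snippet of wikitext.
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={
            "action": "expandtemplates",
            "text": "{{PAGENAME}}",   # any wikitext containing templates
            "title": "Berlin",        # page context used for the expansion
            "format": "json",
        },
    )
    print(resp.json())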
How are updates distributed? Do subscribers regularly poll the server for recent changes? Or is there some kind of store-and-forward / publish-subscribe?
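(For the polling option, I imagine a client could use the recentchanges list of api.php, roughly like the sketch below. Untested; parameter names are as I understand them from the API docs, so corrections welcome.)

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def poll_recent_changes(last_seen):
        """Fetch changes made since the timestamp of the last poll."""
        resp = requests.get(API, params={
            "action": "query",
            "list": "recentchanges",
            "rcend": last_seen,             # stop at the last change we saw
            "rcprop": "title|timestamp|ids",
            "rclimit": "50",
            "format": "json",
        })
        return resp.json()["query"]["recentchanges"]

    for change in poll_recent_changes("2009-04-21T10:25:00Z"):
        print(change["timestamp"], change["title"])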
Bye, Christopher