I certainly don't want to cookie-lick this use case for Wikidata / Wikibase,
but I think that using the Wikibase extension for this might be easier than
it looks. The one major piece of work would be some coding to allow claims
in the media namespace. But indeed, right now we do not have the bandwidth
to do this, and if we are to work on it, it would have to be deferred to the
summer (the design can obviously be started earlier).
On the other hand, as said, we don't want to cookie-lick the use case: if
there is a quick way to implement it now, that is fine too. Moving
well-defined structured data from one system to another is far easier than
creating that structured data in the first place, so little human work
would be wasted either way. As long as we are creating such well-defined
structured data, I am happy with either approach.
2013/3/10 Yuvi Panda <yuvipanda@gmail.com>
While I understand that Wikibase would be 'ideal' (allowing reads *and*
writes), I do not know if the Wikidata people have the personnel bandwidth
to do that. The separate extension outlined in this RFC would be a bit
hackish, but still faaar more efficient than the current solution of having
to retrieve the HTML and parse it (or worse, parse the wikitext itself). It
would also be far more efficient for getting the data for multiple images,
since you could feed a generator into it.
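To make the generator point concrete, here is a rough sketch of what such a
batched read could look like. Only the categorymembers generator and its
gcm* parameters are the existing MediaWiki API; the "filemetadata" prop name
is purely made up for illustration, the RFC would define the real module:

    import requests

    API = "https://commons.wikimedia.org/w/api.php"

    params = {
        "action": "query",
        "format": "json",
        # Existing generator: iterate over all files in a category
        "generator": "categorymembers",
        "gcmtitle": "Category:Images from the German Federal Archive",
        "gcmtype": "file",
        "gcmlimit": 50,
        # Hypothetical prop module returning machine-readable file metadata
        # (license, author, description) for every generated page in one
        # request, instead of one HTML fetch + parse per file page.
        "prop": "filemetadata",
    }

    pages = requests.get(API, params=params).json().get("query", {}).get("pages", {})
    for page in pages.values():
        print(page["title"], page.get("filemetadata"))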
+1 for implementing this now :)
--
Yuvi Panda T
http://yuvi.in/blog
--
Project director Wikidata
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. +49-30-219 158 26-0 |
http://wikimedia.de
Wikimedia Deutschland - Society for the Promotion of Free Knowledge (e.V.)
Registered in the register of associations of the Amtsgericht
Berlin-Charlottenburg under number 23855 B. Recognized as a non-profit by
the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.