Héllo all!
On 02/03/2017 at 10:34, Léa Lacroix wrote:
Hello Amirouche,
Thanks a lot for your interest in this project and your proposal to help.
Currently, the development team is still working on the new datatype structure for lexemes, and we don't have anything to demo yet.
I don't need Wikibase support for L, F and S right now.
What I am wondering is whether there is already work done on the Wikimedia side regarding the *extraction* of Lexeme, Form and Sense from Wiktionary pages.
I started scraping the English Wiktionary. I will have a demo ready by the end of the week. But I'd like to avoid duplicating work and focus on other things if Wikimedia already plans to do this.
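
To give an idea of what I mean, here is a minimal sketch, nothing definitive: the page title, the part-of-speech list and the regular expressions are simplifying assumptions of mine, and it only looks at the ==English== section.

import re
import requests

API = "https://en.wiktionary.org/w/api.php"

def fetch_wikitext(title):
    """Fetch the raw wikitext of a page through the MediaWiki API."""
    params = {"action": "parse", "page": title, "prop": "wikitext", "format": "json"}
    data = requests.get(API, params=params).json()
    return data["parse"]["wikitext"]["*"]

def english_entries(wikitext):
    """Yield (part_of_speech, senses) pairs found in the ==English== section."""
    section = re.search(r"==English==\n(.*?)(?=\n==[^=]|\Z)", wikitext, re.S)
    if not section:
        return
    for pos in re.finditer(
        r"={3,4}(Noun|Verb|Adjective|Adverb)={3,4}\n(.*?)(?=\n={2,4}[^=]|\Z)",
        section.group(1),
        re.S,
    ):
        # Lines starting with "# " are the numbered sense definitions.
        senses = [line[2:].strip()
                  for line in pos.group(2).splitlines()
                  if line.startswith("# ")]
        yield pos.group(1), senses

if __name__ == "__main__":
    for part_of_speech, senses in english_entries(fetch_wikitext("dog")):
        print(part_of_speech, senses[:2])

Going through the MediaWiki API for the wikitext avoids scraping the rendered HTML, which is far more fragile.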
As soon as we can provide a viable structure to test, we will announce it here and on the talk page of the project <https://www.wikidata.org/wiki/Wikidata_talk:Wiktionary>.
Cheers,
On 1 March 2017 at 22:43, <fn@imm.dtu.dk> wrote:
Hi,
It is my understanding that Wikidata for Wiktionary requires new
data structures, or at least a new namespace (L, F and S), and that
is what is holding people back.
It would be interesting to have a prototype (not
necessarily built with MediaWiki+Wikibase) to see if the suggested
scheme is OK.
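
Even something as small as the following sketch could serve as a strawman for discussion; the field names below are my guesses from the announcement PDF, not the actual Wikibase model.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Sense:
    gloss: str                       # a short definition

@dataclass
class Form:
    representation: str              # e.g. "dogs"
    grammatical_features: List[str] = field(default_factory=list)  # e.g. ["plural"]

@dataclass
class Lexeme:
    lemma: str                       # e.g. "dog"
    language: str                    # e.g. "English"
    lexical_category: str            # e.g. "noun"
    forms: List[Form] = field(default_factory=list)
    senses: List[Sense] = field(default_factory=list)

dog = Lexeme(
    lemma="dog",
    language="English",
    lexical_category="noun",
    forms=[Form("dog", ["singular"]), Form("dogs", ["plural"])],
    senses=[Sense("a domesticated canid")],
)

That would already be enough to check whether extracted Wiktionary entries fit the L/F/S split before any MediaWiki work is done.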
On 03/01/2017 10:16 PM, Amirouche wrote:
Héllo,
I have been lurking around for some months now. I stumbled upon the
Wiktionary in Wikidata project via, for instance, this PDF:
https://upload.wikimedia.org/wikipedia/commons/6/60/Wikidata_for_Wiktionary_announcement.pdf
Now I'd like to help. For that I want to build a bot to
achieve that goal.

My understanding is that a proof of concept of page 11 of the above
PDF would be good. But I have never really done any site scraping. Is
there any abstraction that helps in this regard?
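
For instance, is something along the lines of mwparserfromhell (a third-party wikitext parser) the kind of abstraction people use for this? A toy example of what I imagine, on a hand-written snippet; both the tool choice and the snippet are just my assumptions.

import mwparserfromhell

SNIPPET = """==English==
===Noun===
{{en-noun}}

# A domesticated canid.
# {{lb|en|informal}} A fellow.
"""

code = mwparserfromhell.parse(SNIPPET)

# Headings give the language / part-of-speech structure.
for heading in code.filter_headings():
    print(heading.level, str(heading.title).strip())

# Templates carry the headword line and the sense labels.
for template in code.filter_templates():
    print(str(template.name).strip())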
_______________________________________________
Wikidata mailing list
Wikidata@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata