Thanks for the help, Katie. I'll be looking into how Solr has been
integrated with the GeoData extension. About wikidata-vagrant, no problem;
I'll install it by following the github repo.
You're right, raw DB access can be painful and I'd need to rewrite a lot
of code. I'm considering two options:
*i)* Using the database-related code in the Wikidata extension (I'm
studying the DataModel classes and how they interact with the database) to
fetch what I need and feed it into the recommendation engine.
*ii)* Not accessing the DB at all. Rather, I can write map-reduce scripts
to extract the training data and everything I need for each Item from the
wikidatawiki data dump and feed it into the recommendation engine (a rough
sketch of the extraction step is below). I can use a cron job to download
the latest data dump when it becomes available and run the scripts on it.
I don't think it would be an issue even if the engine lags behind by the
interval at which the dumps are generated, since recommendations are
approximate by nature anyway.
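To make option (ii) more concrete, here is a rough sketch of the extraction
step. The input format (one JSON entity blob per line, pre-extracted from
the dump), the file names and the field layout are all assumptions on my
part - the actual dump may need an extra conversion pass first:

    import json

    # Sketch: turn per-entity JSON blobs (one per line) into flat
    # (item, property, value) triples the recommendation engine can train
    # on. The field names ("id", "claims", "mainsnak", ...) are assumed to
    # follow the JSON format used by the Wikibase API.
    def extract_triples(dump_path, out_path):
        with open(dump_path) as dump, open(out_path, "w") as out:
            for line in dump:
                entity = json.loads(line)
                item_id = entity.get("id", "")
                if not item_id.startswith("Q"):
                    continue  # skip properties, redirects, etc.
                for prop, statements in entity.get("claims", {}).items():
                    for statement in statements:
                        snak = statement.get("mainsnak", {})
                        value = snak.get("datavalue", {}).get("value")
                        out.write("%s\t%s\t%s\n" % (item_id, prop, json.dumps(value)))

    if __name__ == "__main__":
        extract_triples("wikidata-entities.jsonl", "training-triples.tsv")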
My request to the devs and the community: please discuss the pros and cons
of each approach and suggest which one you think would be best, mainly in
terms of performance. I personally feel that option (ii) would be cleaner.
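To seed that discussion: whichever option wins, the engine could start from
the simple baseline Lydia suggests further down in this thread ("property A
is usually used with value X, Y or Z"). A minimal sketch over the triples
produced by the extraction script above - the file format and function
names are mine, purely for illustration:

    import json
    from collections import Counter, defaultdict

    # Baseline suggester: count how often each value occurs for each
    # property across all items, then recommend the most frequent values.
    def build_value_counts(triples_path):
        counts = defaultdict(Counter)
        with open(triples_path) as triples:
            for line in triples:
                _item, prop, value_json = line.rstrip("\n").split("\t")
                counts[prop][value_json] += 1
        return counts

    def suggest_values(counts, prop, top_n=5):
        return [json.loads(v) for v, _freq in counts[prop].most_common(top_n)]

    if __name__ == "__main__":
        counts = build_value_counts("training-triples.tsv")
        print(suggest_values(counts, "P31"))  # e.g. most common "instance of" values

Anything smarter (per-property models, co-occurrence of properties across
similar items) could then be layered on top of the same extracted data.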
On Fri, May 3, 2013 at 3:53 PM, aude <aude.wiki(a)gmail.com> wrote:
On Fri, May 3, 2013 at 5:39 AM, Nilesh Chakraborty <nilesh(a)nileshc.com> wrote:
I am currently drafting my proposal; I shall submit it within a few hours,
once the initial version is complete.
I installed mediawiki-vagrant on my PC and it went quite smoothly. I can
do all the usual things through the browser, and I logged into mysql
to examine the database schema.
I also began to clone the wikidata-vagrant repo, but it seems that the
'git submodule update --init' part would take a long time - if I'm not
mistaken, it's a huge download (excluding the vagrant command, which alone
takes around 1.25 hours to download everything). I wanted to clarify
something before downloading it all.
Since the entity suggester will be working with wikidata, it'll need to
access the whole live dataset from the database (not the xml dumps) to
make the recommendations. I tried searching for database access APIs and
high-level REST APIs for wikidata, but couldn't figure out how to do that.
Could you point me to the proper documentation?
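(For single items there is the wbgetentities module of the web API - a
minimal sketch below, with the endpoint and the item id Q42 only as
examples; it's fine for spot checks, but clearly not a way to pull the
whole dataset, hence the question.)

    import json
    import urllib.request

    # Fetch one item through the wbgetentities API module and inspect it.
    # The user-agent string is just a placeholder.
    URL = ("https://www.wikidata.org/w/api.php"
           "?action=wbgetentities&ids=Q42&format=json")

    request = urllib.request.Request(
        URL, headers={"User-Agent": "entity-suggester-sketch/0.1"})
    with urllib.request.urlopen(request) as response:
        entity = json.load(response)["entities"]["Q42"]

    print(entity["labels"]["en"]["value"])        # the item's English label
    print(list(entity.get("claims", {}).keys()))  # property ids used on the item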
One of the best examples of a MediaWiki extension interacting with a Java
service is how Solr is used. Solr is still pretty new at Wikimedia,
though. It is used with the GeoData extension, and the geodata api modules
then query Solr.
I think Solr gets updated via a cronjob (solrupdate.php) which creates jobs
in the job queue. Not 100% sure of the exact details.
I do not think direct access to the live database is very practical. In
any case, I think the data (json blobs) would need indexing in some
particular way to support what the entity suggester needs to do.
The Translate extension also uses Solr in some way, though I am not very
familiar with the details.
On the operations side, puppet is used to configure everything. The
git repo is available to see how things are done.
And also, what is the best way to add a few .jar files to wikidata and
execute them with custom commands (nohup java blah.jar --blah blah -->
running as daemons)? I can of course set it up on my development box
(virtualbox), but I want to know how to "integrate" it into the system so
that any other user can download vagrant and wikidata and have the jars
all ready and running. What is the proper development workflow for this?
wikidata-vagrant is maintained in github, though I think it might not work
perfectly right now. We need to update it (it's on our to-do list), and
perhaps it could be moved to gerrit. I do not know about integrating the
jars, but it should be possible.
[answering from this email, as I am not subscribed to wikitech-l on my
other address]
> On Sun, Apr 28, 2013 at 3:01 AM, Nilesh Chakraborty <nilesh(a)nileshc.com> wrote:
> > Awesome. Got it.
> > I see what you mean, great, thank you. :)
> > Cheers,
> > Nilesh
> > On Apr 28, 2013 2:56 AM, "Lydia Pintscher" wrote:
> On Sat, Apr 27, 2013 at 11:14 PM, Nilesh Chakraborty <nilesh(a)nileshc.com> wrote:
> > Hi Lydia,
> > That helps a lot, and makes it way more interesting. Rather than a
> > one-size-fits-all solution, as it seems to me, each property or type
> > of property (e.g. different relationships) will need individual
> > attention and different methods/metrics for recommendation.
> > The examples you gave, like continents, sex, relations like
> > uncle/aunt/spouse, or place-oriented properties like place of birth,
> > country of citizenship, ethnic group etc. - each type has a certain
> > logic to it (if a person was born in the US, US should be one of the
> > countries he was a citizen of; US census/ethnicity statistics may be
> > used to suggest the ethnic group etc.) I'm already starting to chalk
> > out a few patterns and how they can be used for recommendation. In my
> > proposal, should I go into details regarding these? Or should I just
> > give a few examples and describe how the algorithms would work, to
> > explain the idea?
>> Give some examples and how you'd handle them. You definitely don't
>> need to have it for all properties. What's important is giving an
>> idea about how you'd tackle the problem. Give the reader the impression
>> that you know what you are talking about and can handle the larger
>> project. Also: Don't make the system too intelligent like it knowing about US
>> census data for example. Keep it simple and stupid for now. Things
>> like "property A is usually used with value X, Y or Z" should cover a
>> lot already and are likely enough for most cases.
>> Lydia Pintscher - http://about.me/lydia.pintscher
>> Community Communications for Technical Projects
>> Wikimedia Deutschland e.V.
>> Obentrautstr. 72
>> 10963 Berlin
>> Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
>> Registered in the register of associations of Amtsgericht
>> Berlin-Charlottenburg under number 23855 Nz. Recognised as charitable by
>> Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
A quest eternal, a life so small! So don't just play the guitar, build one.
You can also email me at contact(a)nileshc.com or visit my website.