Where is the discussion and review of ContentHandler?
ContentHandler is something I'd really like to see make it into core. It's
useful for far more than just Wikidata.
I'd like to comment on and review parts of the code: method naming
patterns that don't match the patterns we use elsewhere in core, and
coding patterns/assumptions that don't seem to leave room for some of the
rationales ContentHandler was created for.
But the ContentHandler code is spread out over many pre-accepted commits,
and on top of that it's not even part of a dedicated branch, so it's hard to
find the relevant commits amongst the pile of Wikidata commits.
https://www.mediawiki.org/wiki/ContentHandler
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob;f=include…
https://gerrit.wikimedia.org/r/gitweb?p=mediawiki/core.git;a=blob;f=include…
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
We are running a MediaWiki farm (http://biowikifarm.net) on a Debian
system and have OggHandler installed. Uploaded Ogg videos work fine,
except that the thumb normally shown before the video starts does not
seem to be created.
We have installed the requested packages (apt-get install ffmpeg ffmpeg2theora).
Normal still image thumbs work fine. We are using
http://www.mediawiki.org/wiki/Manual:$wgGenerateThumbnailOnParse =
false, i.e. the thumbs are generated through Apache rewrites to
thumb.php. This works fine, and Wikimedia does it the same way.
Has anyone had a similar problem getting the still image thumbs for Ogg
videos to be generated? Any advice?
Gregor
Re Denny
>
> * Questions by Bawulff I redacted from my answer (because I was
>
[..]
>
> Most of all, we need global identifiers for the different wikis. We
> could add a table which only contains mapping of the local prefixes to
> global identifiers, but we think that the current interwiki table
> could use some love anyway, and thus we decided to restructure it as a
> whole. This has now led to the above-mentioned RFC, but the original
> blocker is: for providing language links from a central source --
> Wikidata -- we need to have global wiki identifiers.
In some ways we already have that. There is the iw_wikiid field added for
the GSoC project which was never merged. Wiki IDs should be unique
within the wiki farm (since they correspond to DB names).
> I probably misunderstand. If currently something is not set up as an
> interlanguage link and neither as an interwiki link, it will become a
> normal link, not an interwiki link (i.e. it will point to the local
> page foo:some page in the main namespace). Did you mean something
> else?
Interlanguage links are only interlanguage on subject namespace pages;
on talk pages they're normal interwikis. So a link that is only an interlanguage
link and not an interwiki link does not make sense.
I would really like to see the interlanguage stuff redone, preferably with
a means to configure multiple types of interwikis-that-go-in-sidebars,
so people could have interproject links and whatnot. Commons might
have a section (portlet) in the sidebar for each of the sister
projects, with each section containing the language links for that project.
> >> The issue I was trying to deal with was storage. Currently we 100% assume
> >>that the interwiki list is a table and there will only ever be one of them.
> > Do we really assume that? Certainly that's the default config, but I
> > don't think that is the config used on WMF. As far as I'm aware,
> > Wikimedia uses a cdb database file (via $wgInterwikiCache), which
> > contains all the interwikis for all sites. From what I understand, it
> > supports doing various "scope" levels of interwikis, including per db,
> > per site (Wikipedia, Wiktionary, etc), or global interwikis that act
> > on all sites.
>
> We did not know about that database. Who can tell us more about it?
> This would be very interesting to get our synching code optimized.
>
> It still wouldn't help us with the global identifiers, though, but it
> would be good to know more about it.
>
I've tried to add a brief bit on the RFC page (mostly gleaned from the docs);
I was kind of rushed though. It's basically a cdb file that has all the
interwiki links for several wikis.
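From my reading of the docs, the lookup falls through three scope levels:
per-database, per-site (Wikipedia, Wiktionary, ...), then global. A hedged
sketch of that resolution order in JavaScript; the key layout ("db:prefix",
"_site:prefix", "__global:prefix", plus "__sites:db" to map a wiki to its
site group) is my reading of the manual, and a plain object stands in for
the actual cdb file:

```javascript
// Stand-in for the $wgInterwikiCache cdb file; keys follow the
// (assumed) scope conventions described above.
const cache = {
  "__sites:enwiki": "wikipedia",                              // db -> site group
  "enwiki:wikt": "https://en.wiktionary.org/wiki/$1",         // per-db entry
  "_wikipedia:commons": "https://commons.wikimedia.org/wiki/$1", // per-site entry
  "__global:meta": "https://meta.wikimedia.org/wiki/$1"       // farm-wide entry
};

function lookupInterwiki(db, prefix) {
  // 1. a per-database entry wins
  const perDb = cache[`${db}:${prefix}`];
  if (perDb !== undefined) return perDb;
  // 2. then the entry for the wiki's site group
  const site = cache[`__sites:${db}`];
  if (site !== undefined) {
    const perSite = cache[`_${site}:${prefix}`];
    if (perSite !== undefined) return perSite;
  }
  // 3. finally a farm-wide global entry (undefined if nothing matches)
  return cache[`__global:${prefix}`];
}
```

The point of the fall-through is that a single file can serve every wiki in
the farm while still letting one wiki override a prefix locally.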
--bawolff
You are invited to the Localisation team development demo on Tuesday 21
August 2012 at 15:00 UTC (other time zones: 08:00 PDT, 17:00 CEST,
20:30 IST). This meeting will take 40 minutes at most. In this meeting
the Localisation team will present its deliverables from sprint 22[1].
After about 20 minutes of presentation, the remainder of the meeting
is for discussion.
We hope you can attend, and please invite any other colleagues or
friends you think are interested!
This meeting will be held using WebEx. Please ensure that you log in a
few minutes before the meeting starts, so that you have time to
install any required plug-ins or software. Connection details and a
quick link to add this meeting to your calendar can be found below the
signature.
Slides of our previous sprint demo are also available[2].
[1] https://mingle.corp.wikimedia.org/projects/internationalization/cards/1071
[2] https://commons.wikimedia.org/wiki/File:Wikimedia_Localisation_team_Sprint_…
--
Siebrand Mazeland
Product Manager Localisation
Wikimedia Foundation
M: +31 6 50 69 1239
Skype: siebrand
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
-------------------------------------------------------
To join the online meeting (Now from mobile devices!)
-------------------------------------------------------
1. Go to https://wikimedia.webex.com/wikimedia/j.php?ED=183453277&UID=1297933382&RT=…
2. If requested, enter your name and email address.
3. No password is required
4. Click "Join".
To view in other time zones, please click the link:
https://wikimedia.webex.com/wikimedia/j.php?ED=183453277&UID=1297933382&ORT…
To add this meeting to your calendar program, click this link:
https://wikimedia.webex.com/wikimedia/j.php?ED=183453277&UID=1297933382&ICS…
Hi,
It's been a wonderful time working on SemanticMediaWiki this summer
as part of my Google Summer of Code project, GreenSMW.
While most of the planned milestones have been reached, I am still working
on more improvements to SMW after GSoC. I have written a self-evaluation
of my project [1].
SMW 1.8 is likely to be released soon with all my work :)
[1] http://greensmw.wordpress.com/2012/08/21/pencils-down/
--
Cheers,
Nischay Nahata
nischayn22.in
Hi everyone,
some of us at Wikidata[1] are currently thinking about the best
approach to improve the connection between our backend (web API) and
our JavaScript front-end. What we basically want is to make our data
model available in the front-end in a broader span. This will allow us
to go for more decoupled components (model/viewer) but hopefully it
will also allow gadget developers to fetch, handle, present and store
data with much less effort.
Since there might be some existing JavaScript frameworks well suited
for this already, it might be worth considering them for the job.
Backbone, Spine, Knockout, Serenade or Ember are just a few names out
of many.
Has there been any discussion touching this area so far, or is something
like this already used in some work-in-progress MediaWiki project? I
could, for example, imagine the VisualEditor requiring some kind of
approach going in this direction... anything?
I think this is similar to the decision to ship MW together with
jQuery instead of a similar library. So if we were to choose any of
these frameworks, it should be a lightweight one, allowing for
great flexibility to reuse it in MW extensions and core, so that we
don't have to introduce another one later, which would just mean more
confusion for new developers and additional load between clients and servers.
In Wikidata, the first thing we would use any of those frameworks for
would be to provide Wikidata Items[2] (or other entities) by fetching
them via the web API, allowing modification of those fetched objects
and then storing all changes made back to the server via the web API.
Also see my draft[3] with the idea of introducing a JS prototype for
Entity/Item as well as FetchedEntity/Item which could probably be
implemented using one of those frameworks.
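To make the fetch/modify/store cycle above concrete, here is a hypothetical,
framework-free sketch of the Entity idea from the draft. Everything in it
is illustrative: the "wbgetentity"/"wbsetentity" action names are invented,
and the transport (which in practice would wrap $.ajax against api.php) is
injected so the model stays decoupled from the network:

```javascript
// Minimal entity model: an id plus a map of language -> label.
function Entity(data) {
  this.id = data.id;
  this.labels = data.labels || {};
}
Entity.prototype.setLabel = function (lang, text) {
  this.labels[lang] = text;
};

// Store that talks to the web API through an injected transport function
// with signature transport(params, callback).
function EntityStore(transport) {
  this.transport = transport;
}
EntityStore.prototype.fetch = function (id, callback) {
  // "wbgetentity" is an assumed module name, not the real API
  this.transport({ action: "wbgetentity", id: id }, function (result) {
    callback(new Entity(result));
  });
};
EntityStore.prototype.save = function (entity, callback) {
  // "wbsetentity" likewise assumed; pushes local changes back to the server
  this.transport(
    { action: "wbsetentity", id: entity.id, data: entity.labels },
    callback
  );
};
```

A framework like Backbone would give us roughly this Model/sync split for
free; the question is whether that is worth the extra dependency.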
This discussion should cover both experience with such frameworks and
the question of whether it would make sense to introduce something of
this sort into core in the near future or not.
Any thoughts on this would be highly appreciated!
Cheers,
Daniel W.
--
Daniel Werner
Software Engineer
Wikimedia Deutschland e.V. | Obentrautstr. 72 | 10963 Berlin
Tel. (030) 219 158 26-0
http://wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e.V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 B. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.
Hey all. The following is an (admittedly rather thorough) "wrap-up"
report on my Google Summer of Code project entitled: "TranslateSvg:
bringing the translation revolution to Wikimedia Commons". TL;DR: I'm
happy.
----
On 9 July 2011, South Sudan declared independence, and during that
buzz, an Italian Wikimedian found his map showing the borders of the
new nation had been translated into a dozen other languages, among
them English, Greek, Catalan, and Macedonian. These copies were then
uploaded onto Wikimedia Commons as separate files. Of course, one
would expect the map to change significantly over the next decade.
More often than not, these kinds of change are picked up first by
editors of the larger projects, who rapidly update their own versions
of the map. To do so takes, say, 20 minutes; but to replicate that
same change across Catalan, Greek, Macedonian? Hours of work – and
dozens of separate uploads.
My project, named "TranslateSvg", aimed to change this workflow – for
SVG format files at least – firstly by making it easier to translate
those files (thus reducing the all-too-common sight of
English-language diagrams in use on non-English wikis), and secondly
by embedding the new translations within the same SVG file. When
boundaries change, a single update will propagate to all language
versions instantly. That was the intent, anyhow.
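As I understand it, the mechanism SVG itself offers for keeping several
translations in one file is the SVG 1.1 `switch` element with
`systemLanguage` conditions: the renderer displays the first child whose
language list matches the user's. A minimal illustration (coordinates and
labels invented):

```xml
<switch>
  <text systemLanguage="el" x="10" y="20">Νότιο Σουδάν</text>
  <text systemLanguage="ca" x="10" y="20">Sudan del Sud</text>
  <!-- fallback when no systemLanguage matches -->
  <text x="10" y="20">South Sudan</text>
</switch>
```

Because all variants share the same file, redrawing the boundaries once
updates every language version at the same time.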
Overall, a lot has been achieved: a test wiki was set up, and, if I
load the bleeding-edge code onto it, the software is both feature-complete
and has been updated in line with user comments. The video at
[1] gives a good idea of the current interface and how it works; I'll
send another message to this list when the test wiki reliably runs
feature-complete code.
The most pleasing (and indeed satisfying) thing, however, is that
nothing I wanted to achieve was "left behind". Admittedly, a few
things aren't quite as polished as I'd like them to be, and there are
still a few weeks' worth of code review left to do. But fundamentally,
it is (or will be) what I want it to be. Mostly, I attribute this to
some prototyping work I did before I pitched for GSoC, which allowed
me to come up with a plan I knew to be doable (or more accurately,
doable by me), which avoided the costs of running into deadends late
in the process.
Once code review is complete, there'll be at least one more testing
phase, this time with specific questions, followed by a pitch by me to
Wikimedia Commons. Only after that will I even utter the "d" word in
the context of TranslateSvg.
I had quite an unusual mentoring setup. In the end, the work of
mentoring me was split between my official mentor for the project,
Max Semenik (MaxSem), and Niklas Laxström, the original author of the
Translate extension (which, early on in the project, I decided to use
as a foundation for my work). Both have been very
helpful, especially with code review and generally "keeping an eye on
me", with Niklas (I think it's fair to say) taking the lead in places
due to his specialised knowledge. Actually, this worked out well, but
my advice to potential applicants would be to think about mentor
choice carefully, considering what support they'd need *from their
mentor* and what they might instead be able to source *from the
community in general* in order to avoid overloading their mentor. We
have a great community, and thankfully I knew quite a few people
already, so I could tap that more easily.
Of course, I am greatly indebted to both Max and Niklas, as well as
the literally dozens of people who at some point contributed via IRC
(there's another protip: get on IRC early! *Such* a useful resource).
Just off the top of my head, that list includes Andrew and Ryan for
the Labs stuff (which turned out to be the most challenging aspect of
the summer, mostly because I hadn't considered it at all before [1]),
Mark and Timo's help with JavaScript stuff, Sam for his general
omnipresence, especially when a quick review was needed, Federico,
Amir and all the other potential users of the extension who tried it
out, plus of course Sumana and Greg for keeping the whole thing going.
There are plenty of other people I've forgotten, I'm sure: there are
simply far too many to properly remember.
----
Once again, thanks everyone and I hope to keep you posted over the
coming months about further progress.
Regards,
Harry
--
Harry Burt (User:Jarry1250)
[1] I think this is particularly worth flagging up because I can't be
the only student whose experience lay with PHP (etc.) programming
rather than system administration. It was probably worth thinking
about this earlier and coming up with a considered plan of attack.