Hello
This is a reminder that the Language Engineering IRC office hour is
happening later today at 1700UTC on #wikimedia-office. Please see below for
the original announcement and local time.
Thanks
Runa
Monthly IRC Office Hour:
==================
# Date: May 21, 2014 (Wednesday)
# Time: 1700 UTC / 1000 PDT (Check local time:
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140521T1700)
# IRC channel: #wikimedia-office
# Agenda:
1. Content Translation project updates
2. Q & A (Questions can be sent to me ahead of the event)
---------- Forwarded message ----------
From: Runa Bhattacharjee <rbhattacharjee(a)wikimedia.org>
Date: Mon, May 19, 2014 at 4:11 PM
Subject: Language Engineering IRC Office Hour on May 21, 2014 (Wednesday)
at 1700 UTC
To: MediaWiki internationalisation <mediawiki-i18n(a)lists.wikimedia.org>,
Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>, Wikimedia
developers <wikitech-l(a)lists.wikimedia.org>,
wikitech-ambassadors(a)lists.wikimedia.org
[x-posted]
Hello,
The Wikimedia Language Engineering team will be hosting the next
monthly IRC office hour on Wednesday, May 21 2014 at 1700 UTC on
#wikimedia-office. The event is delayed this month as the team was
traveling.
In this office hour we will discuss our recent work, which has mostly
centered on the upcoming first release of the Content Translation tool[1].
We will also take questions during the session.
Please see below for event details and local time. See you at the office
hour.
Thanks
Runa
[1] https://www.mediawiki.org/wiki/Content_translation
Monthly IRC Office Hour:
==================
# Date: May 21, 2014 (Wednesday)
# Time: 1700 UTC / 1000 PDT (Check local time:
http://www.timeanddate.com/worldclock/fixedtime.html?iso=20140521T1700)
# IRC channel: #wikimedia-office
# Agenda:
1. Content Translation project updates
2. Q & A (Questions can be sent to me ahead of the event)
--
Language Engineering - Outreach and QA Coordinator
Wikimedia Foundation
Dear all,
today a change was merged that changes how the math table is created if it
does not already exist.
https://gerrit.wikimedia.org/r/#/c/134326/
On Tuesday, 20 May 2014, 18:24:03, Sumana Harihareswara wrote:
> Hey, folks! Since I believe all of you have written, modded, or
> maintained MediaWiki skins, I thought you would want to see MatmaRex's
> proposal and give him a "sure, ok" or a "wait nooooo" or something in
> between. :-)
>
> Best,
> Sumana
Hi Sumana, thanks for highlighting me :) I was not on that list, and as such
my reply won't be threaded, but see below... (oh, and my main mail address
changed)
>
>
> tl;dr Let's adopt a better structure for skins. A more detailed proposal is
> at the bottom.
>
> As you might know, I am doing a Google Summer of Code project aiming to
> disentangle the mess of MediaWiki's skinning system a little bit, make
> creating custom skins a bit less painful and improve the separation between
> MediaWiki and its core skins [0]
> (https://www.mediawiki.org/wiki/Separating_skins_from_core_MediaWiki).
>
> I want this thread to result in code changes :)
>
> ----
>
> So, MediaWiki supports skins, and apart from the four core ones there are a
> couple dozen skins available for installation [1]. And looking at them,
> it seems as if every single one uses a different directory structure, and
> thus a different installation method.
>
> I think this is bad, and that we should standardize on something –
> preferably one of the widely used methods – and use it for the core skins
> as well to provide a good example.
>
> ----
>
> There seem to be three popular ways:
>
> * $IP/skins/SkinName.php for the main file plus $IP/skins/skinname/ for
> assets, using an autodiscovery mechanism to automagically make the skin
> available after the files are copied in the right place. This is used by
> all of the core skins (Vector has some special cases, but let's ignore that
> for now), as well as many external skins (e.g. Cavendish [2]), at a glance
> mostly older ones.
> * $IP/skins/SkinName/ for both assets and PHP files
> ($IP/skins/skinname/SkinName.php etc.), using require_once in LocalSettings
> like extensions to load the skin, manually adding an entry to
> $wgValidSkinNames in the main PHP file. This seems to be the preferred
> method among "modern" skins, for example Erudite [3] or Nimbus [4].
> * $IP/extensions/SkinName/ for everything, the rest as above. This makes the
> skin work exactly like an extension. The only example I could find on
> mediawiki.org is the Nostalgia skin [5].
From my own experience with KDE back then, we indeed also used the old method
#1, and tbh, it is quite messy, especially when you want to maintain core
MediaWiki files/folders and keep your own skin separate. In the end we used a
merged repo for MediaWiki and the skin, which had other benefits, as we also
did MediaWiki hacks.
So I am also all for solution #3.
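For the record, loading such a skin (method #2 or #3) from LocalSettings.php
looks roughly like this - the skin names are taken from the examples above,
and the registration line is the usual pattern, as far as I know:

// LocalSettings.php - loading a skin packaged like an extension

// Method #2: skin lives in $IP/skins/SkinName/
require_once "$IP/skins/Erudite/Erudite.php";

// Method #3: skin lives in $IP/extensions/SkinName/
// require_once "$IP/extensions/Nostalgia/Nostalgia.php";

// The skin's main PHP file registers itself, e.g.:
// $wgValidSkinNames['erudite'] = 'Erudite';
$wgDefaultSkin = 'erudite';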
>
> ----
>
> The first one sounds like a no-go for me (in spite of being currently used
> for core skins, ugh):
>
> * The directory structure makes it annoying to both manage and write such
> skins (you need to copy/delete the PHP file and the directory separately,
> many text editors provide additional features for files contained in a
> single directory, and just look at our .gitignore file for skins oh god why
> [6]).
> * The usage of autodiscovery, while making installation and testing a
> bit simpler, makes it impossible or unpleasant to temporarily disable a
> skin or to provide configuration settings for it (the last point doesn't
> affect core skins).
>
> This leaves us with the two latter options: packaging skins similarly to
> extensions and sticking them in /skins, or packaging them like extensions
> and treating them like extensions. These two options are pretty similar and
> discussing them will be a bit bikesheddy, but let's do it anyway. (Note
> also that even if we wanted to, we can't stop anyone from using either of
> these if they feel like it, as MediaWiki supports loading everything from
> anywhere if you really want. We can, however, deprecate skin
> autodiscovery.)
Option #2 sounds rather messy to me, for the above reasons. Having all files
below /skins/ makes it harder (again) to have separate repos (not even
talking about the visual mess :P).
I guess a lot of people have a custom skin as their only modification to
MediaWiki's functionality, and not every one of them is a git hero, so it
would be very friendly to go for #3 and have a separate directory for each
skin.
Besides, as already mentioned, every other system does it like that as well,
and it is well known.
>
> ----
>
> Personally I'm leaning towards the /skins/SkinName option. The pros are:
>
> * It seems to be more widely used, which means that it "felt right" to a lot
> of people, and that shouldn't be underestimated.
> * It's less revolutionary, and rather a simple improvement over the current
> system.
> * It's more intuitive when compared to how other applications / projects
> work. (Corollary: just because MediaWiki skins can do everything that
> extensions can do, we shouldn't encourage that.)
> * Since it's still similar to how extensions work, adapting the current
> system (WMF deployments, tarball packaging, installation via web installer)
> should be straightforward.
> * Switching current skins to this system within the mediawiki/core repo will
> be trivial.
>
> The pros of using /extensions/SkinName are:
>
> * We already have a battle-tested system for doing things with extensions
> (WMF deployments, tarball packaging, installation via web installer).
> * All non-core code in one place.
>
> I would like to settle this within a week or two. Help! :)
>
> Thoughts?
>
> I will document the result and, if feasible, convert core skins to be closer
> to the recommended format afterwards.
So from my POV, I am all for #3. Go go! :)
Cheerio,
--
Ingo Malchow
Hi all!
During the hackathon, I worked on a patch that would make it possible for
non-textual content to be included on wikitext pages using the template syntax.
The idea is that if we have a content handler that e.g. generates awesome
diagrams from JSON data, like the extension Dan Andreescu wrote, we want to be
able to use that output on a wiki page. But until now, that would have required
the content handler to generate wikitext for the transclusion - not easily done.
So, I came up with a way for ContentHandler to wrap the HTML generated by
another ContentHandler so it can be used for transclusion.
Have a look at the patch at <https://gerrit.wikimedia.org/r/#/c/132710/>. Note
that I have completely rewritten it since my first version at the hackathon.
It would be great to get some feedback on this, and have it merged soon, so we
can start using non-textual content to its full potential.
Here is a quick overview of the information flow. Let's assume we have a
"template" page T that is supposed to be transcluded on a "target" page P; the
template page uses the non-text content model X, while the target page is
wikitext. So:
* When Parser parses P, it encounters {{T}}
* Parser loads the Content object for T (an XContent object, for model X), and
calls getTextForTransclusion() on it, with CONTENT_MODEL_WIKITEXT as the target
format.
* getTextForTransclusion() calls getContentForTransclusion()
* getContentForTransclusion() calls convert( CONTENT_MODEL_WIKITEXT ) which
fails (because content model X doesn't provide a wikitext representation).
* getContentForTransclusion() then calls convertContentViaHtml()
* convertContentViaHtml() calls getTextForTransclusion( CONTENT_MODEL_HTML ) to
get the HTML representation.
* getTextForTransclusion() again calls getContentForTransclusion(), which
calls convert(); this time the conversion succeeds, as convert() handles the
conversion to HTML by calling getHtml() directly.
* convertContentViaHtml() takes the HTML and calls makeContentFromHtml() on the
ContentHandler for wikitext.
* makeContentFromHtml() replaces the actual HTML with a parser strip mark, and
returns a WikitextContent containing this strip mark.
* The strip mark is eventually returned to the original Parser instance, and
used to replace {{T}} on the original page.
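To make that concrete, here is a rough sketch of the fallback chain. The
method names follow the patch, but the class name, bodies, and signatures
here are simplified assumptions, not the actual implementation:

<?php
// Sketch of the conversion fallback described above; not the real code.
abstract class SketchContent {

    // Try a direct conversion first; if the model cannot provide the
    // target representation (convert() returns false), go via HTML.
    public function getContentForTransclusion( $targetModel ) {
        $converted = $this->convert( $targetModel );
        if ( $converted === false ) {
            $converted = $this->convertContentViaHtml( $targetModel );
        }
        return $converted;
    }

    // Serialized form of the converted content, as the Parser consumes it.
    public function getTextForTransclusion( $targetModel ) {
        $content = $this->getContentForTransclusion( $targetModel );
        return $content ? $content->serialize() : false;
    }

    // The old entry point is now just a shorthand, as noted below.
    public function getWikitextForTransclusion() {
        return $this->getTextForTransclusion( CONTENT_MODEL_WIKITEXT );
    }

    // Fallback path: render ourselves to HTML (convert() is assumed to
    // always succeed for the HTML model, via getHtml()), then ask the
    // target model's handler to wrap it, e.g. as a parser strip mark.
    protected function convertContentViaHtml( $targetModel ) {
        $html = $this->getTextForTransclusion( CONTENT_MODEL_HTML );
        $handler = ContentHandler::getForModelID( $targetModel );
        return $handler->makeContentFromHtml( $html );
    }

    abstract public function convert( $targetModel );
    abstract public function serialize();
}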
This essentially means that any content can be converted to HTML, and can be
transcluded into any content that provides an implementation of
makeContentFromHtml(). This actually changes how transclusion of JS and CSS
pages into wikitext pages works. You can try this out by transcluding a JS
page like MediaWiki:Test.js as a template on a wikitext page.
The old getWikitextForTransclusion() is now a shorthand for
getTextForTransclusion( CONTENT_MODEL_WIKITEXT ).
As Brion pointed out in a comment on my original patch, there is another
caveat: what should the expandtemplates module do when expanding non-wikitext
templates? I decided to just wrap the HTML in <html>...</html> tags instead
of using a strip mark in this case. The resulting wikitext is, however, only
"correct" if $wgRawHtml is enabled; otherwise, the HTML will get
mangled/escaped by wikitext parsing. This seems acceptable to me, but please
let me know if you have a better idea.
So, let me know what you think!
Daniel
During the Zurich hackathon, DJ Hartman, Aude and I put together a
generic maps prototype extension [1]. We have noticed that many map-like
extensions keep popping up and believed it was time we standardised on one
that all these extensions could use, so we can share data better.
We took a look at all the existing use cases and tried to imagine what
such an extension would look like if it weren't too tied to a specific
use case.
The extension we came up with was a map extension that introduces a
Map namespace where data for the map is stored in raw GeoJSON and can
be edited via a JavaScript map editor interface. It also allows the
inclusion of maps in wiki articles via a map template.
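For illustration, the data stored on a Map page could be as simple as the
following GeoJSON (built here in PHP; the exact schema the extension settles
on is still open, so treat this as an assumption):

<?php
// A minimal GeoJSON FeatureCollection of the sort a Map: page might store.
$mapData = json_encode( array(
    'type' => 'FeatureCollection',
    'features' => array(
        array(
            'type' => 'Feature',
            'geometry' => array(
                'type' => 'Point',
                // GeoJSON coordinates are [longitude, latitude]
                'coordinates' => array( 8.55, 47.37 ), // roughly Zurich
            ),
            'properties' => array( 'title' => 'Zurich hackathon' ),
        ),
    ),
) );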
Dan Andreescu also created a similar visualisation namespace, which might
be folded into this, as a map could be seen as a visualisation.
I invite Dan to comment on this with further details :-)!
I'd be interested in people's thoughts on this extension. In particular,
I'd be interested in the answer to the question "For my use case A, what
would the WikiMaps extension have to support for me to use it?"
Thanks for your involvement in this discussion. Let's finally get a
maps extension up on a Wikimedia box!
Jon
[1] https://github.com/jdlrobson/WikiMaps
Four acronyms in an email subject! My grandma would be proud of me. :)
After a month of community onboarding, Google Summer of Code and FOSS
Outreach Program for Women interns will start their work officially
tomorrow, Monday 19.
The Engineering Community team will devote its monthly IRC meeting to
answering any questions from interns and mentors and to discussing any
related topics.
Tuesday 20 at 16:00 UTC on #wikimedia-office
https://www.mediawiki.org/wiki/Engineering_Community_Team/Meetings/2014-05-…
If you want to discuss other community topics, please add them to the
schedule. See you there!
--
Quim Gil
Engineering Community Manager @ Wikimedia Foundation
http://www.mediawiki.org/wiki/User:Qgil
Hi everyone,
I'm trying to figure out the reasons behind some decisions that were made in
the past about bot flags, to see if we can have a clearer and more sensible
setup.
Presently, giving an account the bot flag does two things:
1. When editing via the API, allows the user to choose whether or not to
flag an edit as a bot edit using the bot parameter.
2. When editing via the standard editing interface, flags all edits
(i.e. all human-made edits) as bot edits.
If you don't have the bot flag, the API will ignore the bot parameter if you
try to flag an edit as a bot edit.
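For reference, this is roughly what using the bot parameter looks like from
a client. A sketch only: the wiki URL is a placeholder, and a real edit
token has to be fetched from the API first.

<?php
// Sketch: an API edit flagged as a bot edit via the 'bot' parameter.
$editToken = '...'; // placeholder; fetch a real edit token from the API

$params = array(
    'action' => 'edit',
    'title' => 'Project:Sandbox',
    'appendtext' => "\nTest edit.",
    'bot' => 1, // silently ignored unless the account has the bot flag
    'token' => $editToken,
    'format' => 'json',
);

$response = file_get_contents( 'https://example.org/w/api.php', false,
    stream_context_create( array(
        'http' => array(
            'method' => 'POST',
            'header' => "Content-Type: application/x-www-form-urlencoded\r\n",
            'content' => http_build_query( $params ),
        ),
    ) )
);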
So I've got a few questions to help me figure this out.
1. What's the user story for including the edit-level granularity for
bot accounts in the API?
2. What's the user story for making it so that every edit made by a
human on a bot account is flagged as a bot edit?
Thanks,
Dan
--
Dan Garry
Associate Product Manager for Platform and Mobile Apps
Wikimedia Foundation
Hi,
A few of us had an offline discussion about getting university professors to
encourage their students to publish code that they develop for classes based
on MediaWiki and Wikimedia projects. What I heard is that professors are
using MediaWiki and Wikimedia projects as environments for student devs, but
not many are publishing the code that the students produce.
Is anyone working on outreach to university professors to encourage them to
get student projects published in a form that we can use? And is there a
list of suggested projects somewhere for professors to use when teaching?
This could be a dev equivalent to the Wikimedia Education Program.
Pine