All,
We've just finished our second sprint on the new PDF renderer. A
significant chunk of renderer development time this cycle went to
non-Latin script support, as well as Puppetization and packaging for
deployment. We have a work-in-progress pipeline up and running in Labs,
which I encourage
everyone to go try and break. You can use the following featured articles
just to see what our current output is:
* http://ocg-collection-alpha.wmflabs.org/index.php/Alexis_Bachelot
* http://ocg-collection-alpha.wmflabs.org/index.php/Atlantis:_The_Lost_Empire
Some other articles imported on that test wiki:
* http://ur1.ca/gg0bw
Please note that some of these will fail due to known issues noted below.
You can render any page in the new renderer by clicking the sidebar link
"Download as WMF PDF"; if you "Download as PDF" you'll be using the old
renderer (useful for comparison). Additionally, you can create full books
via Special:Book -- our renderer is "RDF to Latex (PDF)" and the old
renderer is "e-book (PDF)". You can also try out the "RDF to Text (TXT)"
renderer, but that's not on the critical path. As of right now we do not
have a Bugzilla project entry, so report bugs by replying to this email or
emailing me directly -- to debug, we'll need one of: the name of the page,
the name of the collection, or the collection_id parameter from the URL.
There are some code bits that we know are still missing that we will have
to address in the coming weeks or in another sprint.
* Attribution for images and text. The APIs are done, but we still need
to massage that information into the document.
* Message translation -- right now all internal messages are in English,
which is not so helpful to non-English speakers.
* Things using the <cite> tag and the Cite extension are not currently
supported (meaning you won't get nice references).
* Tables may not render at all, or may break the renderer.
* Caching needs to be greatly improved.
Looking longer term into deployment on wiki, my plans right now are to get
this into beta labs for general testing and connect test.wikipedia.org up
to our QA hardware for load testing. The major blocker there is acceptance
of the Node.js 0.10 and TeX Live 2012 packages into reprap, our internal
apt package repository. This is not quite as easy as it sounds: we already
use TeX Live 2009 in production for the Math extension, and we must test
thoroughly to ensure we do not introduce any regressions when we update to
the 2012 package. I'm not sure what the actual dates for those migrations
and testing will be, because that greatly depends on when Ops has time. In the
meantime, our existing PDF cluster based on mwlib will continue to serve
our offline needs. Once our solution is deployed and tested, mwlib
(pdf[1-3]) will be retired here at the WMF and print on demand services
will be provided directly by PediaPress servers.
For the technically curious: we're approximately following the Parsoid
deployment model -- using Trebuchet to push out a source repository
(services/ocg-collection) that has the configuration and node dependencies
built on tin, along with git submodules containing the actual service code.
It may not look like it on the surface, but we've come a long way and it
wouldn't have been possible without the (probably exasperated) help from
Jeff Green, Faidon, and Ori. Also big thanks to Brad and Max for their
work, and Gabriel for some head thunking. C. Scott and I are not quite off
the hook yet, as indicated by the list above, but hopefully soon enough
we'll be enjoying the cake and cookies from another new product launch.
(And yes, even if you're remote: if I promised you cookies as bribes, I'll
ship them to you :p)
~Matt Walker
I came across Gerrit change 79948[1] today, which makes "VectorBeta"
use a pile of non-free fonts (with one free font thrown in at the end
as a sop). Is this really the direction we want to go, considering
that in many other areas we prefer to use free software whenever we
can?
Looking around a bit, I see this has been discussed in some "back
corners"[2][3] (no offense intended), but not on this list and I don't
see any place where free versus non-free was actually discussed rather
than being brought up and then seemingly ignored.
In case it helps, I did some searching through mediawiki/core and
WMF-deployed extensions for font-family directives containing non-free
fonts. The results are at
https://www.mediawiki.org/wiki/User:Anomie/font-family (use of
non-staff account intentional).
[1]: https://gerrit.wikimedia.org/r/#/c/79948
[2]: https://www.mediawiki.org/wiki/Talk:Wikimedia_Foundation_Design/Typography#…
[3]: https://bugzilla.wikimedia.org/show_bug.cgi?id=44394
TL;DR SUMMARY: check out this short, silent, black & white video:
https://brionv.com/misc/ogv.js/demo/ -- anybody interested in a side
project on in-browser audio/video decoding fallback?
One of my pet peeves is that we don't have audio/video playback on many
systems, including default Windows and Mac desktops and non-Android mobile
devices, which don't ship with Theora or WebM video decoding.
The technically simplest way to handle this is to transcode videos into
H.264 (.mp4 files), which is well supported by the troublesome browsers.
Unfortunately, there are concerns about patent licensing, which have held
us up from deploying any H.264 output options even though all the software
is ready to go...
While I still hope we'll get that resolved eventually, there is an
alternative -- client-side software decoding.
We have used the 'Cortado <http://www.theora.org/cortado/>' Java applet to
do fallback software decoding in the browser for a few years, but Java
applets are being aggressively deprecated on today's web:
* no Java applets at all on major mobile browsers
* Java usually requires a manual install on desktop
* Java applets disabled by default for security on major desktop browsers
Luckily, JavaScript engines have gotten *really fast* in the last few
years, and performance is getting well in line with what Java applets can
do.
As an experiment, I've cross-compiled Xiph's ogg, vorbis, and theora C
libraries to JavaScript using Emscripten
<https://github.com/kripken/emscripten> and written a wrapper that decodes
Theora video from an .ogv stream and draws the frames into a <canvas>
element:
* demo: https://brionv.com/misc/ogv.js/demo/
* code: https://github.com/brion/ogv.js
* blog & some details:
https://brionv.com/log/2013/10/06/ogv-js-proof-of-concept/
It's just a proof of concept -- the colorspace conversion is incomplete so
it's grayscale, there's no audio or proper framerate sync, and it doesn't
really stream data properly. But I'm pleased it works so far! (Currently it
breaks in IE, but I think I can fix that at least for 10/11, possibly for
9. Probably not for 6/7/8.)
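For anyone curious what finishing the colorspace conversion involves:
Theora decodes to Y'CbCr planes, and each pixel has to be converted to RGB
before being drawn to the canvas. Here's a rough per-pixel sketch in Python
(assuming the common BT.601 full-range coefficients; the real wrapper would
do this in JavaScript over whole planes, and a given stream may declare a
different matrix):

```python
def ycbcr_to_rgb(y, cb, cr):
    # Convert one Y'CbCr pixel to RGB using BT.601 full-range
    # coefficients (an assumption; streams may use other matrices).
    # Inputs and outputs are ints in 0-255.
    r = y + 1.402 * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772 * (cb - 128)
    clamp = lambda v: max(0, min(255, int(round(v))))
    return clamp(r), clamp(g), clamp(b)
```

Treating the chroma planes as flat (Cb = Cr = 128) makes the r, g, and b
terms collapse to y, which is exactly why the current demo is grayscale.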
Performance on iOS devices isn't great, but is better with lower resolution
files :) On desktop it's screaming fast for moderate resolutions, and could
probably supplement or replace Cortado with further development.
Is anyone interested in helping out or picking up the project to move it
towards proper playback? If not, it'll be one of my weekend "fun" projects
I occasionally tinker with off the clock. :)
-- brion
On Sun, Aug 25, 2013 at 7:46 PM, Yuvi Panda <yuvipanda(a)gmail.com> wrote:
> Hey rupert!
>
> On Sun, Aug 25, 2013 at 10:21 PM, rupert THURNER
> <rupert.thurner(a)gmail.com> wrote:
>> hi brion,
>>
>> thank you so much for that! where is the source code? i tried to
>> search for "commons" on https://git.wikimedia.org/. i wanted to look
>
> Android: https://git.wikimedia.org/summary/apps%2Fandroid%2Fcommons.git
> iOS: github.com/wikimedia/Commons-iOS
>
>> if there is really no account creation at the login screen or it is
>> just my phone which does not display one, and which URL the application
>
> MediaWiki doesn't have API support for creating accounts, and hence
> the apps don't have create account support yet.
I created https://bugzilla.wikimedia.org/show_bug.cgi?id=53328; maybe
you could detail a little more what this API should look like?
rupert.
Hey all,
I recently had a new repository created, and I wanted to create some jobs
for it.
I dutifully created and had merged:
https://gerrit.wikimedia.org/r/#/c/115968/
https://gerrit.wikimedia.org/r/#/c/115967/
Hashar told me I then needed to follow the instructions on [1] to push the
jobs to Jenkins. Running the script myself brought only pain; it kept
erroring out while trying to create the job. Marktraceur managed to create
the jobs after much "kicking down the door", a.k.a. running the script
multiple times.
It appears that the problem is that
https://integration.mediawiki.org/ci/createItem?name=mwext-FundraisingChart…
gets redirected to
https://integration.mediawiki.org/?...
So that's a problem? We're still not sure why Mark was able to create the
jobs through perseverance, though.
Another problem that I'm seeing is in responsibilities -- supposedly only
Jenkins admins (WMF developers) can submit jobs (and then only when it
works). And then, only people with root on gallium can apply the Zuul
configs. To me this is clearly not something the average developer is
supposed to be doing.
Would it make sense to have QChris / ^demon create the standard jobs when
they create the repository?
[1]
https://www.mediawiki.org/wiki/Continuous_integration/Tutorials/Adding_a_Me…
~Matt Walker
Wikimedia Foundation
Fundraising Technology Team
PHP 5.4 added a few important features[1], namely traits, shorthand array
syntax, and function array dereferencing. I've heard that 5.3 is nearing
end of life.
I propose we drop support for PHP 5.3 soon, if possible.
- Trevor
[1] http://php.net/manual/en/migration54.new-features.php
Hello,
These are some approaches I can think of instead of a text-based CAPTCHA.
One is the image idea where users are asked to spot the odd one out, as
demonstrated, or to find all the similar images, as mentioned here:
<https://www.mediawiki.org/wiki/CAPTCHA>.
Also, a picture with a part chipped out could be shown, and the chipped
pieces could be given as options -- like finding the missing part of a
jigsaw puzzle.
The image which would be shown is http://imgur.com/uefeb08 and
http://imgur.com/KEJqCg3 is the picture which would be the correct option.
The other options could be rotated versions of this, which would not be so
easy for a bot to match (unless it somehow worked some image-processing
algorithm and matched the color gradients or something like that).
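To illustrate the rotated-distractor idea with a toy sketch (treating an
image as just a 2D grid of pixel values -- real CAPTCHA images would need
an actual imaging library), a 90-degree rotation yields a grid that a
naive exact-match bot cannot equate with the original:

```python
def rotate90(grid):
    # Rotate a pixel grid 90 degrees clockwise: reverse the rows,
    # then transpose. A byte-for-byte comparison against the
    # original no longer matches.
    return [list(row) for row in zip(*grid[::-1])]

original = [[1, 2],
            [3, 4]]
rotated = rotate90(original)   # [[3, 1], [4, 2]]
```

A bot would need real image analysis (rotation-invariant matching) rather
than a simple pixel comparison to pick out the correct option.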
This is a good option for people who do not know English or are illiterate
and maybe would not understand questions like "is this a bird, a plane, or
Superman?" after being shown a picture.
Tell me what you think.
(Sorry for uploading those images to imgur; I don't know how to put them on
the wiki. Hope that is OK.)
I have also posted this on the CAPTCHA talk page:
<https://www.mediawiki.org/wiki/Talk:CAPTCHA>
Hey,
As you are probably aware, it has been possible for some time now to
install Composer-compatible MediaWiki extensions via Composer.
Markus Glaser recently wrote an RFC titled "Extension management with
Composer" [0]. This RFC mentioned that it is not possible for extensions to
specify which version of MediaWiki they are compatible with. After
discussing the problem with some people from the Composer community, I
created a commit that addresses this pain point [1]. It's been sitting on
gerrit getting stale, so some input there is appreciated.
[0]
https://www.mediawiki.org/wiki/Requests_for_comment/Extension_management_wi…
[1] https://gerrit.wikimedia.org/r/#/c/105092/
For your convenience, a copy of the commit message:
~~
Make it possible for extensions to specify which version of MediaWiki
they support via Composer.
This change allows extensions to specify they depend on a specific
version or version range of MediaWiki. This is done by adding the
package mediawiki/mediawiki to their composer.json require section.
As MediaWiki itself is not a Composer package and is quite far away
from becoming one, a workaround was needed, which is provided by
this commit.
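Concretely, an extension would then declare something like the following in
its composer.json (the version constraint here is a made-up example):

```json
{
    "require": {
        "mediawiki/mediawiki": ">=1.22"
    }
}
```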
It works as follows. When "composer install" or "composer update"
is run, a Composer hook is invoked. This hook programmatically
indicates the root package provides MediaWiki, as it indeed does
when extensions are installed into MediaWiki. The package link
of type "provides" includes the MediaWiki version, which is read
from DefaultSettings.php.
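The version read from DefaultSettings.php presumably comes from the
$wgVersion assignment defined there. A rough Python sketch of that kind of
extraction (the actual hook is PHP; this is just to show the idea):

```python
import re

def extract_mw_version(settings_php):
    # Find the $wgVersion = '...'; assignment in the text of
    # DefaultSettings.php and return the version string, or None
    # if no assignment is present.
    m = re.search(r"\$wgVersion\s*=\s*'([^']+)'", settings_php)
    return m.group(1) if m else None

sample = "<?php\n$wgVersion = '1.23alpha';\n"
```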
This functionality has been tested and confirmed to work. One needs
a recent Composer version for it to have an effect. The upcoming
Composer alpha8 release will suffice. See
https://github.com/composer/composer/issues/2520
Tests are included. Composer independent tests will run always,
while the Composer specific ones are skipped when Composer is
not installed.
People who already have a composer.json file in their MediaWiki
root directory will need to make the same additions there as this
commit makes to composer-json.example. If this is not done, the
new behaviour will not work for them (though no existing behaviour
will break). The change to the json file has been made in such a way as to
minimize the likelihood that any future modifications there will be
needed.
Thanks go to @beausimensen (Sculpin) and @seldaek (Composer) for
their support.
~~
I also wrote up a little blog post on the topic:
http://www.bn2vs.com/blog/2014/02/15/mediawiki-extensions-to-define-their-m…
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil. ~=[,,_,,]:3
--
Starting on Tuesday, March 4th, the new Labs install in the eqiad data
center will be open for business. Two dramatic things will happen on
that day: Wikitech will gain the ability to create instances in eqiad,
and Wikitech will lose the ability to create new instances in pmtpa.
About a month from Tuesday, the pmtpa labs install will be shut down.
If you want your project to still be up and running in April, you must
take action!
We are committed to not destroying any instances or data during the
shutdown, but projects that remain untouched by human hands during the
next few weeks will be mothballed by staff: the data will be preserved
but most likely compressed and archived, and instances will be left in a
shutdown state.
(Note: Toollabs users can sit tight for a bit; Coren will provide
specific migration instructions for you shortly.)
I've written a migration guide, here:
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Howto It's a
work in progress, so check back frequently. Please don't hesitate to
ask questions on IRC, make suggestions as to guide improvements, or
otherwise question this process. Quite a few of the suggested steps in
that guide require action on the part of a Labs op -- for that purpose
we've created a Bugzilla tracking bug, 62042. To add a migration bug
that links to the tracker, use this link:
https://bugzilla.wikimedia.org/enter_bug.cgi?product=Wikimedia%20Labs&compo…
At the very least, please visit this page and edit it with your project
migration plans:
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Progress
Projects that have no activity on that page will be early candidates for
mothballing. If you want me to delete your project, please note that as
well -- that will allow us to free up resources for future projects.
I am cautiously optimistic about this migration. Most of our testing
has gone fairly well, so a lot of you should find the process smooth and
easy. That said, we're all going to be early adopters of this tech, so
I appreciate your patience and understanding when inevitable bugs shake
out. I look forward to hearing about them on IRC!
-Andrew