ICANN just delegated the gTLD .WIKI yesterday. It's being managed by Top Level
Design, LLC. I'm not entirely sure what that means for all of us exactly, but I
suspect that the WMF is going to want to at least register Wikipedia.wiki and
Wikimedia.wiki once the gTLD is open for registration.
Some of the new gTLDs are already opening up for registration; .sexy and
.tattoo open on 25 February.
It looks like if we want .wiki domains, we will be able to get them sometime
in May or June during the "sunrise" period.
ICANN also has a full list of new gTLDs that they have approved.
You can now submit tutorial or talk proposals for WikiConference USA, and
you can register and ask for a scholarship for your travel expenses (more
information below and at http://wikiconferenceusa.org/wiki/Scholarships ).
If it'll be hard for you to get to the Zurich or London hackathons this
year, consider meeting up at WikiConference USA.
Engineering Community Manager
Date: Tue, 28 Jan 2014 16:56:55 -0500
From: Pharos <pharosofalexandria(a)gmail.com>
Subject: [Wikimedia Announcements] WikiConference USA Announcement
I am very pleased to announce that Wikimedia NYC and Wikimedia DC are
working in collaboration to host the first national Wikimedia conference in
the United States!
Here are the details for the conference:
Dates: Friday, May 30, 2014 - Sunday, June 1, 2014
Location: New York Law School (185 West Broadway, New York, NY 10013)
For more information, please review our official press release below! We
hope you will join us and help us spread the word!
Hi! I would like to discuss an idea.
In MediaWiki it is not very convenient to do computation using wiki
syntax. We have to use several extensions such as Variables, Arrays,
ParserFunctions and others. If there is a lot of computation, such as
processing data received from Semantic MediaWiki, the speed of page
construction becomes unacceptable. To address this, yet another extension
has to be written (e.g. Semantic Maps displays data from SMW on maps).
There end up being a lot of these extensions, they do not work well with
each other, and they are time-consuming to maintain.
I know about the Scribunto extension, but I think this problem can be
solved in another, more natural way. I suggest allowing PHP code in wiki
pages, in the same way it is used in HTML files. Extensions could then be
unified: for example, get the data from DynamicPageList, process it if
necessary, and pass it to other extensions for display, such as Semantic
Result Formats. This will give users more freedom for creativity.
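To make this concrete, here is a minimal, purely hypothetical sketch of
what embedding PHP in a wiki page could look like; the dpl_query() and
srf_display() helpers are invented for illustration and are not part of
Foxway or of any existing extension:

    == Largest cities ==
    <?php
    // Hypothetical helpers: fetch rows from DynamicPageList, filter them in
    // plain PHP, then hand the result to a display extension for rendering.
    $rows = dpl_query( array( 'category' => 'Cities' ) );      // invented helper
    $big = array_filter( $rows, function ( $row ) {
        return $row['population'] > 1000000;
    } );
    srf_display( 'table', $big );                               // invented helper
    ?>

One generic embedding mechanism like this would replace the chain of
special-purpose parser functions that Variables, Arrays and
ParserFunctions provide today.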
In order to execute PHP code safely, I decided to try to build a
controlled environment. I wrote it in pure PHP; it is lightweight and
could in the future be included in core. It can be viewed as the Foxway
extension. The first version, in the master branch, gives an idea of what
is possible in principle, and there is even something like a debugger. It
does not work very quickly, so I tried to fix that in the develop branch.
There I created two classes, Compiler and Runtime. The first processes
PHP source code and converts it into a set of instructions that the
Runtime class can execute very quickly. I took part of the code from the
PHPUnit tests to check performance. On my computer, pure PHP executes it
on average in 0.0025 seconds, and the Runtime class in 0.05 seconds; that
is 20 times slower, but there is room for even better results. I do not
count the time spent in the Compiler class, because it only needs to run
once, when a wiki page is saved. The data it returns can be serialized
and stored in the database. Also, if all dynamic data is handled as PHP
code, the wiki markup can be converted into HTML at save time and stored
in the database. Then, when a wiki page is requested from the server, it
does not have to be rebuilt every time (I know about the cache): just
take the already prepared data (the Runtime instructions and the HTML)
and enjoy. A cache is certainly still necessary, but only for pages with
dynamic data, and the lifetime of its objects can be greatly reduced
since performance will be higher.
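A rough sketch of the save/view split described above, assuming the
Compiler and Runtime classes; the method names and the storage step are
invented for illustration and may not match Foxway's actual interface:

    <?php
    // On page save: compile the embedded PHP once and keep the result.
    $embeddedPhpSource = '$x = 2 + 2; echo $x;';                    // code found in the page
    $instructions = FoxwayCompiler::compile( $embeddedPhpSource );  // hypothetical API
    $blob = serialize( $instructions );
    // ... store $blob in the database next to the pre-rendered HTML ...

    // On page view: no parsing or compiling, just run the stored instructions.
    $instructions = unserialize( $blob );
    $dynamicOutput = FoxwayRuntime::run( $instructions );           // hypothetical API
    // ... splice $dynamicOutput into the stored HTML and serve the page ...

The compile cost is paid once per edit, so only the Runtime step sits on
the page-view path.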
I also have other ideas that build on the features this implementation
provides. I have already taken some steps in this direction, and I think
all of this is realistic and useful.
I am not saying that Foxway is ready for use. It shows that this idea can
work, and can work fast enough. It needs to be rewritten to make it
easier to maintain, and I believe it can work even faster.
I did not invent anything new. We all use HTML + PHP. Wiki markup
replaces difficult HTML and provides security, but what can replace the
PHP?
I would like to know your opinion: is this really useful, or am I wasting
my time?
Best wishes. Pavel Astakhov (pastakhov).
We've just finished our second sprint on the new PDF renderer. A
significant chunk of renderer development time this cycle went to
non-Latin script support, as well as puppetization and packaging for
deployment. We have a work-in-progress pipeline up and running in labs,
which I encourage everyone to go try and break. You can use the following
featured articles just to see what our current output is:
Some other articles imported on that test wiki:
Please note that some of these will fail due to known issues noted below.
You can render any page in the new renderer by clicking the sidebar link
"Download as WMF PDF"; if you "Download as PDF" you'll be using the old
renderer (useful for comparison.) Additionally, you can create full books
via Special:Book -- our renderer is "RDF to Latex (PDF)" and the old
renderer is "e-book (PDF)". You can also try out the "RDF to Text (TXT)"
renderer, but that's not on the critical path. As of right now we do not
have a Bugzilla project entry, so to report problems reply to this email
or email me directly -- we'll need one of: the name of the page, the name
of the collection, or the collection_id parameter from the URL in order
to debug.
There are some code bits that we know are still missing that we will have
to address in the coming weeks or in another sprint.
* Attribution for images and text. The APIs are done, but we still need
to massage that information into the document.
* Message translation -- right now all internal messages are in English
which is not so helpful to non English speakers.
* Things using the <cite> tag and the Cite extension are not currently
supported (meaning you won't get nice references.)
* Tables may not render at all, or may break the renderer.
* Caching needs to be greatly improved.
Looking longer term into deployment on wiki, my plans right now are to get
this into beta labs for general testing and connect test.wikipedia.org up
to our QA hardware for load testing. The major blocker there is acceptance
of the Node.js 0.10 and TeX Live 2012 packages into reprap, our internal
apt package repository. This is not quite as easy as it sounds: we
already use TeX Live 2009 in production for the Math extension, and we
must test thoroughly to ensure we do not introduce any regressions when
we update to the 2012 package. I'm not sure what the actual dates for
those migrations and testing will be, because that greatly depends on
when Ops has time. In the
meantime, our existing PDF cluster based on mwlib will continue to serve
our offline needs. Once our solution is deployed and tested, mwlib
(pdf[1-3]) will be retired here at the WMF and print on demand services
will be provided directly by PediaPress servers.
For the technically curious: we're approximately following the Parsoid
deployment model -- using Trebuchet to push out a source repository
(services/ocg-collection) that has the configuration and node dependencies
built on tin, along with git submodules containing the actual service code.
It may not look like it on the surface, but we've come a long way and it
wouldn't have been possible without the (probably exasperated) help from
Jeff Green, Faidon, and Ori. Also big thanks to Brad and Max for their
work, and Gabriel for some head thunking. C. Scott and I are not quite off
the hook yet, as indicated by the list above, but hopefully soon enough
we'll be enjoying the cake and cookies from another new product launch.
(And yes, even if you're remote if I promised you cookies as bribes I'll
ship them to you :p)
TL;DR SUMMARY: check out this short, silent, black & white video:
https://brionv.com/misc/ogv.js/demo/ -- anybody interested in a side
project on in-browser audio/video decoding fallback?
One of my pet peeves is that we don't have audio/video playback on many
systems, including default Windows and Mac desktops and non-Android mobile
devices, which don't ship with Theora or WebM video decoding.
The technically simplest way to handle this is to transcode videos into
H.264 (.mp4 files) which is well supported by the troublesome browsers.
Unfortunately there are concerns about the patent licensing, which have
held us up from deploying any H.264 output options, though all the
software is ready to go...
While I still hope we'll get that resolved eventually, there is an
alternative -- client-side software decoding.
We have used the 'Cortado <http://www.theora.org/cortado/>' Java applet to
do fallback software decoding in the browser for a few years, but Java
applets are aggressively being deprecated on today's web:
* no Java applets at all on major mobile browsers
* Java usually requires a manual install on desktop
* Java applets disabled by default for security on major desktop browsers
Meanwhile, JavaScript engines have improved a lot over the last few
years, and performance is getting well in line with what Java applets can
do.
As an experiment, I've built Xiph's ogg, vorbis, and theora C libraries
with emscripten <https://github.com/kripken/emscripten> and written a
wrapper that decodes Theora video from an .ogv stream and draws the
frames into a <canvas> element:
* demo: https://brionv.com/misc/ogv.js/demo/
* code: https://github.com/brion/ogv.js
* blog & some details:
It's just a proof of concept -- the colorspace conversion is incomplete so
it's grayscale, there's no audio or proper framerate sync, and it doesn't
really stream data properly. But I'm pleased it works so far! (Currently it
breaks in IE, but I think I can fix that at least for 10/11, possibly for
9. Probably not for 6/7/8.)
Performance on iOS devices isn't great, but is better with lower resolution
files :) On desktop it's screaming fast for moderate resolutions, and could
probably supplement or replace Cortado with further development.
Is anyone interested in helping out or picking up the project to move it
towards proper playback? If not, it'll be one of my weekend "fun" projects
I occasionally tinker with off the clock. :)
On Sun, Aug 25, 2013 at 7:46 PM, Yuvi Panda <yuvipanda(a)gmail.com> wrote:
> Hey rupert!
> On Sun, Aug 25, 2013 at 10:21 PM, rupert THURNER
> <rupert.thurner(a)gmail.com> wrote:
>> hi brion,
>> thank you so much for that! where is the source code? i tried to
>> search for "commons" on https://git.wikimedia.org/. i wanted to look
> Android: https://git.wikimedia.org/summary/apps%2Fandroid%2Fcommons.git
> iOS: github.com/wikimedia/Commons-iOS
>> if there is really no account creation at the login screen or it is
>> just my phone which does not display one, and which URL the application
> Mediawiki doesn't have API support for creating accounts, and hence
> the apps don't have create account support yet.
I created https://bugzilla.wikimedia.org/show_bug.cgi?id=53328; maybe
you could detail a little bit more what this API should look like?
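For the sake of discussion, here is one possible shape for such an API,
sketched with PHP and curl. The action=createaccount module and its
parameters are hypothetical (they are what the bug asks MediaWiki to
provide), so treat this as a proposal rather than a description of an
existing endpoint:

    <?php
    // Hypothetical: what a create-account request against api.php might look like.
    $params = array(
        'action'   => 'createaccount',             // proposed module, does not exist yet
        'name'     => 'ExampleUser',
        'password' => 'correct horse battery staple',
        'email'    => 'user@example.org',
        'token'    => 'TOKEN-FROM-A-PRIOR-REQUEST', // as with edit tokens today
        'format'   => 'json',
    );
    $ch = curl_init( 'https://commons.wikimedia.org/w/api.php' );
    curl_setopt( $ch, CURLOPT_POST, true );
    curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( $params ) );
    curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
    $result = json_decode( curl_exec( $ch ), true );
    curl_close( $ch );

The mobile apps would make the equivalent HTTP request from Java or
Objective-C; the sketch is only about the request shape, not about any
particular client library.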