PHP 5.4 added a few important features, namely traits, shorthand array
syntax, and function array dereferencing. I've heard that 5.3 is nearing
end of life.
I propose we drop support for PHP 5.3 soon, if possible.
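For anyone who hasn't played with 5.4 yet, here is a minimal sketch of the three features mentioned above (the names are made up purely for illustration):

    <?php
    // Traits (PHP 5.4+): reusable method bundles mixed into classes.
    trait Greeter {
        public function greet() {
            return "Hello from " . get_class( $this );
        }
    }

    class Page {
        use Greeter;
    }

    // Short array syntax: [] instead of array().
    $langs = [ 'en' => 'English', 'fi' => 'Finnish' ];

    // Function array dereferencing: index a return value directly.
    function getLangCodes() {
        return [ 'en', 'fi', 'de' ];
    }
    echo getLangCodes()[0]; // "en" -- a parse error on PHP 5.3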
These are some approaches I can think of as alternatives to a text-based CAPTCHA.
One idea is an image-based test where users are asked to spot the odd one out, as
demonstrated, or to find all the similar images, as mentioned.
Another option is to show a picture with a piece chipped out and offer chipped
pieces as the answer options,
like finding the missing part of a jigsaw puzzle.
The image shown would be http://imgur.com/uefeb08 and http://imgur.com/KEJqCg3 would be the correct option.
The other options could be rotated versions of this, which would not be so
easy for a bot to match (unless it somehow ran a digital image-processing
algorithm and matched the color gradients or something similar).
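Generating the rotated distractors server-side could be as simple as something like this with PHP's GD extension (the file names and angles are placeholders, just a sketch of the idea):

    <?php
    // Sketch: make rotated copies of the correct image to use as decoy options.
    $correct = imagecreatefrompng( 'puzzle-piece.png' ); // hypothetical file

    foreach ( array( 90, 180, 270 ) as $angle ) {
        // Rotate the image; the third argument is the background colour.
        $rotated = imagerotate( $correct, $angle, 0 );
        imagepng( $rotated, "option-$angle.png" );
        imagedestroy( $rotated );
    }

    imagedestroy( $correct );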
This is also a good option for people who do not know English or are illiterate
and might not understand questions like "is this a bird, a plane, or
Superman?" after being shown a picture.
Tell me what you think.
(Sorry for uploading those images to Imgur; I don't know how to put them on the
wiki. Hope that is OK.)
I have also posted this on the CAPTCHA page.
As you are probably aware, it has been possible for some time now to
install Composer-compatible MediaWiki extensions via Composer.
Markus Glaser recently wrote an RFC titled "Extension management with
Composer". This RFC mentioned that it is not possible for extensions to
specify which version of MediaWiki they are compatible with. After
discussing the problem with some people from the Composer community, I
created a commit that addresses this pain point. It has been sitting on
Gerrit getting stale, so some input there is appreciated.
For your convenience, a copy of the commit message:
Make it possible for extensions to specify which version of MediaWiki
they support via Composer.
This change allows extensions to specify that they depend on a specific
version or version range of MediaWiki. This is done by adding the
package mediawiki/mediawiki to their composer.json require section.
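For example, an extension's composer.json could then contain something along these lines (the version constraint here is purely illustrative):

    {
        "require": {
            "mediawiki/mediawiki": ">=1.22"
        }
    }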
As MediaWiki itself is not a Composer package and is quite far away
from becoming one, a workaround was needed, which this change provides.
It works as follows. When "composer install" or "composer update"
is run, a Composer hook is invoked. This hook programmatically
indicates that the root package provides MediaWiki, as it indeed does
when extensions are installed into MediaWiki. The package link
of type "provides" includes the MediaWiki version, which is read
from the local MediaWiki installation.
This functionality has been tested and confirmed to work. One needs
a recent Composer version for it to have an effect; the upcoming
Composer alpha8 release will suffice.
Tests are included. The Composer-independent tests will always run,
while the Composer-specific ones are skipped when Composer is not available.
People that already have a composer.json file in their MediaWiki
root directory will need to make the same additions there as this
commit makes to composer-json.example. If this is not done, the
new behaviour will not work for them (though no existing behaviour
will break). The change to the json file has been made in such a
way as to minimize the likelihood that any future modifications there
will be needed.
Thanks go to @beausimensen (Sculpin) and @seldaek (Composer) for their help.
I also wrote up a little blog post on the topic:
Jeroen De Dauw
Don't panic. Don't be evil. ~=[,,_,,]:3
Starting on Tuesday, March 4th, the new Labs install in the eqiad data
center will be open for business. Two dramatic things will happen on
that day: Wikitech will gain the ability to create instances in eqiad,
and Wikitech will lose the ability to create new instances in pmtpa.
About a month from Tuesday, the pmtpa labs install will be shut down.
If you want your project to still be up and running in April, you must
migrate it to eqiad before then.
We are committed to not destroying any instances or data during the
shutdown, but projects that remain untouched by human hands during the
next few weeks will be mothballed by staff: the data will be preserved,
but most likely compressed and archived, and instances will be left in a
shut-down state.
(Note: Toollabs users can sit tight for a bit; Coren will provide
specific migration instructions for you shortly.)
I've written a migration guide, here:
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Howto It's a
work in progress, so check back frequently. Please don't hesitate to
ask questions on IRC, make suggestions for improving the guide, or
otherwise question this process. Quite a few of the suggested steps in
that guide require action on the part of a Labs op -- for that purpose
we've created a Bugzilla tracking bug, 62042. To add a migration bug
that links to the tracker, use this link:
At the very least, please visit this page and edit it with your project's
migration status.
Projects that have no activity on that page will be early candidates for
mothballing. If you want me to delete your project, please note that as
well -- that will allow us to free up resources for future projects.
I am cautiously optimistic about this migration. Most of our testing
has gone fairly well, so a lot of you should find the process smooth and
easy. That said, we're all going to be early adopters of this tech, so
I appreciate your patience and understanding when inevitable bugs shake
out. I look forward to hearing about them on IRC!
Wikitech admin peoples!
I was doing bad things to my phone last night (reflashing it) and I lost
the 2 factor auth metadata for my authentication app. Because of this I can
no longer log in to wikitech.
I wasn't able to find any documentation on wikitech about how to reset it,
so I think I need your help to do that. I still know my password, so I'm
not looking to reset that -- maybe just temporarily disable two-factor auth
on my account (Mwalker) and I'll re-enroll myself?
Fundraising Technology Team
CirrusSearch flaked out on Feb 28 around 19:30 UTC, and I brought it back from
the dead around 21:25 UTC. While it was flaking out, searches
that used it (mediawiki.org, wikidata.org, ca.wikipedia.org, and everything
in Italian) took a long, long time or failed immediately with a message
about this being a temporary problem we're working on fixing.
Timeline:
* We added four new Elasticsearch servers on Rack D (yay) around 18:45 UTC.
* The Elasticsearch cluster started serving simple requests very slowly around 19:30 UTC.
* I was alerted to a search issue on IRC at 20:45 UTC.
* I fixed the offending Elasticsearch servers around 21:25 UTC.
* Query times recovered shortly after that.
We very carefully installed the same version of Elasticsearch and Java as
we use on the other machines, then used puppet to configure the
Elasticsearch machines to join the cluster. It looks like they only picked
up half the configuration provided by puppet
(/etc/elasticsearch/elasticsearch.yml but not
/etc/default/elasticsearch). Unfortunately for us, that is the bad half to
miss, because /etc/default/elasticsearch contains the JVM heap settings.
The servers came online with the default amount of heap, which worked fine
until Elasticsearch migrated a sufficiently large index to them. At that
point the heap filled up and Java did what it does in that case: it spun
forever trying to free garbage. That pretty much pegged one CPU and rendered
the entire application unresponsive. Unfortunately (again), pegging one CPU
isn't that weird for Elasticsearch; it'll do that when it is merging. The
application normally stays responsive because the rest of the JVM keeps
moving along. That doesn't happen when the heap is full.
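For reference, on the Debian/Ubuntu Elasticsearch packages the heap is set in /etc/default/elasticsearch with a line roughly like this (the value is only an example, not our production setting):

    # /etc/default/elasticsearch
    # Heap size for the Elasticsearch JVM; the package default is tiny.
    ES_HEAP_SIZE=30g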
Knocking out one of those machines caused tons of searches to block,
presumably waiting for those machines to respond. I'll have to dig around
to see if I can find the timeout, but we're obviously using the default,
which in our case is way, way, way too long. We then filled the pool queue
and started rejecting search requests altogether.
When I found the problem all I had to do was kill -9 the Elasticsearch
servers and restart them. -9 is required because JVMs don't catch the
regular signal if they are too busy garbage collecting.
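In practice that was something along these lines on each wedged node (the pgrep pattern is approximate):

    # Hard-kill the wedged Elasticsearch JVM, then start it again.
    sudo kill -9 $(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)
    sudo service elasticsearch start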
What we're doing to prevent it from happening again:
* We're going to monitor the slow query log and have icinga start
complaining if it grows very quickly. We normally get a couple of slow
queries per day, so this shouldn't be too noisy. We're also going to have
to monitor error counts, especially once we get more timeouts.
* We're going to sprinkle more timeouts all over the place. Certainly in
Cirrus while waiting on Elasticsearch, and we'll figure out how to tell
Elasticsearch what the shard timeouts should be as well.
* We're going to figure out why we only got half the settings. This is
complicated because we can't let puppet restart Elasticsearch:
Elasticsearch restarts must be done one node at a time.
Hello, I would like to participate in GSoC this year for the first time,
but I am a little bit worried about choosing an idea. I have one, and I am
not sure whether it suits this program. I would be very glad if you could
take a quick look at my idea and tell me your thoughts. I will be happy
with any feedback. Thank you.
What is the purpose?
Help people read complex texts by providing inline translation for
unknown words. For me, as a non-native English-speaking student, it is
sometimes hard to read complicated texts or articles, which is why I need to
search for a translation or description every time. Why not simplify this and
change the flow from "translate and understand" to "translate, learn and understand"?
How will the inline translation appear?
While reading an article, a user may come across unknown words or words
whose meaning is confusing. At this point they click on the word and the
inline translation appears.
What should be included in the inline translation?
Since it is not just a translator, it should include not only one
translation but a couple or more. More data, such as synonyms, could also be
included; this can be discussed during the project.
From which source should the data be gathered?
Wiktionary is the best candidate: it is open and it has a wide
database. It also suits growing the project by adding different languages.
There are two approaches in my mind right now. The first is to make a website
built on Node.js with an open API for users. Parsoid could be used for parsing
the data, which is also required for the front-end representation.
The second is to make a standalone library which can be used on its own on
other sites as an add-on or in browser extensions. Unfortunately, the latter
option is less clear to me at this point.
I am living in Finland right now and I don't know Finnish as well as I should
to understand locals, so this project could be expanded by adding support for
more languages, helping people like me read, learn and understand texts in
foreign languages.