Hey all,
I recently had a new repository created, and I wanted to create some jobs
for it.
I dutifully created, and had merged:
https://gerrit.wikimedia.org/r/#/c/115968/
https://gerrit.wikimedia.org/r/#/c/115967/
Hashar told me I then needed to follow the instructions at [1] to push the
jobs to Jenkins. Running the script myself was nothing but pain; it kept
erroring out while trying to create the job. Marktraceur managed to create
the jobs after much "kicking down the door", a.k.a. running the script
multiple times.
It appears that the problem is that
https://integration.mediawiki.org/ci/createItem?name=mwext-FundraisingChart…
redirects to
https://integration.mediawiki.org/?...
So that's a problem? We're still not sure why Mark was able to create the
jobs with perseverance though.
Another problem that I'm seeing is in responsibilities -- supposedly only
Jenkins admins (WMF developers) can submit jobs (and then only when it
works). And then, only people with root on gallium can apply the Zuul
configs. To me this is clearly not something the average developer is
supposed to be doing.
Would it make sense to have QChris / ^demon create the standard jobs when
they create the repository?
[1]
https://www.mediawiki.org/wiki/Continuous_integration/Tutorials/Adding_a_Me…
~Matt Walker
Wikimedia Foundation
Fundraising Technology Team
PHP 5.4 added a few important features[1], namely traits, shorthand array
syntax, and function array dereferencing. I've heard that 5.3 is nearing
end of life.
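For a quick illustration, here is a minimal snippet that runs only on PHP >= 5.4, exercising all three features (the class, trait, and function names are made up for the example):

```php
<?php
// Trait: a reusable bundle of methods (new in 5.4).
trait Greeter {
    public function greet() {
        return "hello";
    }
}

class Page {
    use Greeter;
}

// Shorthand array syntax (5.4): [] instead of array().
$langs = ['en', 'fi', 'de'];

// Function array dereferencing (5.4): index the return value directly.
function firstTwo(array $items) {
    return array_slice($items, 0, 2);
}

echo firstTwo($langs)[1], "\n"; // prints "fi"
echo (new Page())->greet(), "\n"; // prints "hello"
```

None of this parses on 5.3, which is exactly why keeping 5.3 support blocks using these features in core.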
I propose we drop support for PHP 5.3 soon, if possible.
- Trevor
[1] http://php.net/manual/en/migration54.new-features.php
Hello,
These are some approaches I can think of instead of a text-based captcha.
One is the image idea where users are asked to spot the odd one out, as
demonstrated, or to find all the similar images, as mentioned here:
https://www.mediawiki.org/wiki/CAPTCHA
Also, a picture with a part chipped out could be shown, and the chipped
pieces could be given as options,
like finding the missing part of a jigsaw puzzle.
The image which would be shown is http://imgur.com/uefeb08 and
http://imgur.com/KEJqCg3 is the picture which would be the correct option.
The other options could be rotated versions of this, which would not be so
easy for a bot to match (unless it somehow ran some image processing
algorithm and matched the color gradients or something like that).
This is a good option for people who do not know English or are illiterate
and maybe would not understand questions like "is this a bird, a plane, or
Superman?" after being shown a picture.
Tell me what you think.
(Sorry for uploading those images to imgur; I don't know how to put them on
the wiki. Hope that is ok.)
I have also posted this on the CAPTCHA talk page:
https://www.mediawiki.org/wiki/Talk:CAPTCHA
Hey,
As you are probably aware, it has been possible for some time now to
install Composer-compatible MediaWiki extensions via Composer.
Markus Glaser recently wrote an RFC titled "Extension management with
Composer" [0]. This RFC mentioned that it is not possible for extensions to
specify which version of MediaWiki they are compatible with. After
discussing the problem with some people from the Composer community, I
created a commit that addresses this pain point [1]. It's been sitting on
gerrit getting stale, so some input there is appreciated.
[0]
https://www.mediawiki.org/wiki/Requests_for_comment/Extension_management_wi…
[1] https://gerrit.wikimedia.org/r/#/c/105092/
For your convenience, a copy of the commit message:
~~
Make it possible for extensions to specify which version of MediaWiki
they support via Composer.
This change allows extensions to specify they depend on a specific
version or version range of MediaWiki. This is done by adding the
package mediawiki/mediawiki in their composer.json require section.
As MediaWiki itself is not a Composer package and is quite far away
from becoming one, a workaround was needed, which is provided by
this commit.
It works as follows. When "composer install" or "composer update"
is run, a Composer hook is invoked. This hook programmatically
indicates the root package provides MediaWiki, as it indeed does
when extensions are installed into MediaWiki. The package link
of type "provides" includes the MediaWiki version, which is read
from DefaultSettings.php.
This functionality has been tested and confirmed to work. One needs
a recent Composer version for it to have an effect. The upcoming
Composer alpha8 release will suffice. See
https://github.com/composer/composer/issues/2520
Tests are included. The Composer-independent tests will always run,
while the Composer-specific ones are skipped when Composer is
not installed.
People who already have a composer.json file in their MediaWiki
root directory will need to make the same additions there as this
commit makes to composer-json.example. If this is not done, the
new behaviour will not work for them (though no existing behaviour
will break). The change to the json file has been made in such a
way as to minimize the likelihood that any future modifications
there will be needed.
Thanks go to @beausimensen (Sculpin) and @seldaek (Composer) for
their support.
~~
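To make the mechanism above concrete, after this change an extension's composer.json could declare its MediaWiki compatibility like so (the extension name and version range below are invented for illustration):

```json
{
    "name": "example/my-extension",
    "require": {
        "mediawiki/mediawiki": ">=1.22"
    }
}
```

When the root package programmatically "provides" mediawiki/mediawiki at the version read from DefaultSettings.php, Composer can resolve this constraint like any other dependency.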
I also wrote up a little blog post on the topic:
http://www.bn2vs.com/blog/2014/02/15/mediawiki-extensions-to-define-their-m…
Cheers
--
Jeroen De Dauw
http://www.bn2vs.com
Don't panic. Don't be evil. ~=[,,_,,]:3
--
Starting on Tuesday, March 4th, the new Labs install in the eqiad data
center will be open for business. Two dramatic things will happen on
that day: Wikitech will gain the ability to create instances in eqiad,
and Wikitech will lose the ability to create new instances in pmtpa.
About a month from Tuesday, the pmtpa labs install will be shut down.
If you want your project to still be up and running in April, you must
take action!
We are committed to not destroying any instances or data during the
shutdown, but projects that remain untouched by human hands during the
next few weeks will be mothballed by staff: the data will be preserved
but most likely compressed and archived, and instances will be left in a
shutdown state.
(Note: Toollabs users can sit tight for a bit; Coren will provide
specific migration instructions for you shortly.)
I've written a migration guide here:
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Howto
It's a work in progress, so check back frequently. Please don't hesitate to
ask questions on IRC, make suggestions as to guide improvements, or
otherwise question this process. Quite a few of the suggested steps in
that guide require action on the part of a Labs op -- for that purpose
we've created a bugzilla tracking bug, 62042. To add a migration bug
that links to the tracker, use this link:
https://bugzilla.wikimedia.org/enter_bug.cgi?product=Wikimedia%20Labs&compo…
At the very least, please visit this page and edit it with your project
migration plans:
https://wikitech.wikimedia.org/wiki/Labs_Eqiad_Migration_Progress
Projects that have no activity on that page will be early candidates for
mothballing. If you want me to delete your project, please note that as
well -- that will allow us to free up resources for future projects.
I am cautiously optimistic about this migration. Most of our testing
has gone fairly well, so a lot of you should find the process smooth and
easy. That said, we're all going to be early adopters of this tech, so
I appreciate your patience and understanding as the inevitable bugs shake
out. I look forward to hearing about them on IRC!
-Andrew
Wikitech admin peoples!
I was doing bad things to my phone last night (reflashing it) and I lost
the 2 factor auth metadata for my authentication app. Because of this I can
no longer log in to wikitech.
I wasn't able to find any documentation on wikitech about how to reset it
-- so I think I need your help to do that. I still know my password, so I'm
not looking to reset that -- maybe just temporarily disable two-factor auth
on my account (Mwalker) and I'll re-enroll myself?
~Matt Walker
Wikimedia Foundation
Fundraising Technology Team
Hello and welcome to the latest edition of the WMF Roadmap and
Deployments update.
See the full roadmap for next week and beyond here:
https://wikitech.wikimedia.org/wiki/Deployments#Week_of_March_3rd
Some important callouts:
== Monday ==
The migration of WMF Labs from pmtpa to eqiad begins
* new instance creation disabled in pmtpa, only available in eqiad
* See the emails from Andrew and Marc for more details:
** http://lists.wikimedia.org/pipermail/labs-l/2014-February/002152.html
** http://lists.wikimedia.org/pipermail/labs-l/2014-February/002153.html
We will be disabling ArticleFeedback on all wikis.
* https://bugzilla.wikimedia.org/show_bug.cgi?id=61163
== Tuesday ==
MediaWiki upgrades
* group1 to 1.23wmf16: All non-Wikipedia sites (Wiktionary, Wikisource,
Wikinews, Wikibooks, Wikiquote, Wikiversity, and a few other sites)
* see also:
** https://www.mediawiki.org/wiki/MediaWiki_1.23/Roadmap#Schedule_for_the_depl…
** https://www.mediawiki.org/wiki/MediaWiki_1.23/wmf16
== Wednesday ==
The new search cluster will be upgraded (to ElasticSearch 1.0.1).
* This will begin at 0:00 UTC March 6th/4pm Pacific March 5th and will
take a few hours to complete.
* All wikis currently using the new search (CirrusSearch) will be
  temporarily switched back to the old search (lsearchd)
* You shouldn't see much of a change in search behavior (CirrusSearch is
  mostly at feature parity with lsearchd) if your wiki is on new search, but
to see a list of wikis that currently have CirrusSearch enabled (and
in what way: Beta Feature or Primary), see:
** https://www.mediawiki.org/wiki/Search#Wikis
== Thursday ==
MediaWiki upgrades
* group2 to 1.23wmf16 (all Wikipedias)
* group0 to 1.23wmf17 (test/test2/testwikidata/mediawiki)
As always, questions welcome,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
CirrusSearch flaked out on Feb 28 around 19:30 UTC, and I brought it back
from the dead around 21:25 UTC. While it was flaking out, searches on the
wikis that use it (mediawiki.org, wikidata.org, ca.wikipedia.org, and
everything in Italian) took a long, long time or failed immediately with a
message about this being a temporary problem we're working on fixing.
Events:
* We added four new Elasticsearch servers in Rack D (yay) around 18:45 UTC
* The Elasticsearch cluster started serving simple requests very slowly
  around 19:30 UTC
* I was alerted to a search issue on IRC at 20:45 UTC
* I fixed the offending Elasticsearch servers around 21:25 UTC
* Query times recovered shortly after that
Explanation:
We very carefully installed the same version of Elasticsearch and Java as
we use on the other machines then used puppet to configure the
Elasticsearch machines to join the cluster. It looks like they only picked
up half the configuration provided by puppet
(/etc/elasticsearch/elasticsearch.yml but not
/etc/default/elasticsearch). Unfortunately for us that is the bad half to
miss because /etc/default/elasticsearch contains the JVM heap settings.
The servers came online with the default amount of heap, which worked fine
until Elasticsearch migrated a sufficiently large index to them. At that
point the heap filled up and the JVM did what it does in that case: spun
forever trying to free garbage. That pretty much pegged one CPU and
rendered the entire application unresponsive. Unfortunately (again),
pegging one CPU isn't that weird for Elasticsearch; it'll do that when it
is merging. The application normally stays responsive because the rest of
the JVM keeps moving along. That doesn't happen when the heap is full.
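For reference, the half that was missed is just a shell-variable file read by the init script; a minimal /etc/default/elasticsearch looks something like this (the 30g heap value here is illustrative, not our actual setting):

```sh
# /etc/default/elasticsearch -- sourced by the init script before the JVM starts.
# Without this file the JVM falls back to its tiny default heap.
ES_HEAP_SIZE=30g
```

With that file absent, everything works until an index bigger than the default heap lands on the node, which is exactly the failure mode we saw.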
Knocking out one of those machines caused tons of searches to block,
presumably waiting for those machines to respond. I'll have to dig around
to see if I can find the timeout, but we're obviously using the default,
which in our case is way, way too long. We then filled the pool queue
and started rejecting search requests altogether.
When I found the problem, all I had to do was kill -9 the Elasticsearch
servers and restart them. -9 is required because a JVM won't catch the
regular signal if it is too busy garbage collecting.
What we're doing to prevent it from happening again:
* We're going to monitor the slow query log and have icinga start
complaining if it grows very quickly. We normally get a couple of slow
queries per day so this shouldn't be too noisy. We're going to also have
to monitor error counts, especially once we get more timeouts. (
https://bugzilla.wikimedia.org/show_bug.cgi?id=62077)
* We're going to sprinkle more timeouts all over the place. Certainly in
Cirrus while waiting on Elasticsearch and figure out how to tell
Elasticsearch what the shard timeouts should be as well. (
https://bugzilla.wikimedia.org/show_bug.cgi?id=62079)
* We're going to figure out why we only got half the settings. This is
complicated because we can't let puppet restart Elasticsearch because
Elasticsearch restarts must be done one node at a time.
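As a sketch of the second bullet (illustrative only -- this is not the actual Cirrus code, and the function name and timeout value are made up), the client-side idea is simply to cap how long we wait on an Elasticsearch HTTP request instead of inheriting the very long default:

```php
<?php
// Illustrative sketch: cap how long we wait for an Elasticsearch HTTP
// response so one sick node can't make every search hang.
function searchWithTimeout($url, $timeoutSeconds) {
    $context = stream_context_create([
        // 'timeout' bounds the whole read, in seconds (floats allowed).
        'http' => ['timeout' => $timeoutSeconds],
    ]);
    // Returns the response body, or false on timeout / connection error.
    return @file_get_contents($url, false, $context);
}
```

A caller would then treat false as "search temporarily unavailable" rather than blocking, which is the behavior we want while a node is garbage-collecting forever.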
Nik
Hello, I would like to participate in GSoC this year for the first time,
but I am a little worried about choosing an idea. I have one, and I am
not sure whether it suits this program. I would be very glad if you took a
quick look at my idea and shared your thoughts. I will be happy to receive
any feedback. Thank you.
Project Idea
What is the purpose?
Help people read complex texts by providing inline translation for
unknown words. As a non-native English-speaking student, I sometimes find
it hard to read complicated texts or articles, which is why I have to
search for a translation or description every time. Why not simplify this
and change the flow from "translate and understand" to "translate, learn
and understand"?
How inline translation will appear?
While reading an article, a user may come across unknown words or words
whose meaning confuses them. At that point they click on the selected
word and the inline translation appears.
What should be included in inline translation?
Since it is not just a translator, it should include not just a single
translation but a couple or more. More data could also be included, such
as synonyms; this can be discussed during the project.
From which source should the data be gathered?
Wiktionary is the best candidate: it is openly licensed and it has a wide
database. It is also well suited to growing the project by adding support
for different languages.
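As a tiny sketch of the widget's back-end step (the JSON payload shape and function name here are invented for illustration -- the real Wiktionary API returns parsed wikitext/HTML, which would need more work to digest), the core operation is just "take a word's entry, keep a few senses to show inline":

```php
<?php
// Sketch: pull a few plain-text senses out of a (hypothetical, simplified)
// JSON payload such as a Wiktionary-backed service might return.
function extractDefinitions($json) {
    $data = json_decode($json, true);
    if ($data === null || !isset($data['definitions'])) {
        return []; // bad payload or no entry: show nothing inline
    }
    // Keep at most three senses, matching the "a couple or more" idea.
    return array_slice($data['definitions'], 0, 3);
}

$sample = '{"word": "sisu", "definitions": ["stamina, guts", "tenacity"]}';
print_r(extractDefinitions($sample));
```

The front-end widget would call something like this on click and render the returned list in a small popup next to the word.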
Evaluation needs
There are two approaches in my mind right now. The first is to build a
website on Node.js with an open API for users. Parsoid, which is well
suited to Node, could be used for parsing data from the Wiktionary API. A
small JavaScript widget would also be required for the front-end
representation.
The second is to make a standalone library which can be used on its own on
other sites, as an add-on or in browser extensions. Unfortunately, the
latter option is more confusing to me at this point.
Growth opportunities
I am living in Finland right now and I don't know Finnish as well as I
should to understand the locals, so this project could be expanded with
support for more languages, helping people like me read, learn and
understand texts in foreign languages.
fyi
-------- Original Message --------
Subject: [Wikimedia-l] Call for Individual Engagement Grant proposals
and committee members
Date: Fri, 28 Feb 2014 11:04:47 -0800
From: Siko Bouterse <sbouterse(a)wikimedia.org>
Reply-To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
To: wikimedia-l(a)lists.wikimedia.org
Hi all,
The Wikimedia Foundation and the Individual Engagement Grants Committee
invite you to submit and review proposals for community-led experiments to
improve Wikimedia!
Individual Engagement Grants support individuals and small teams to
organize projects for 6 months. You can get funding to turn your idea for
improving Wikimedia projects into action, with a grant for online community
organizing, outreach and partnerships, tool-building, or research. Funding
is available for a few hundred dollars up to $30,000.
Proposals for this round are due 31 March 2014:
https://meta.wikimedia.org/wiki/Grants:IEG
We're also seeking new committee members to help review and recommend
proposals for funding. Candidates are invited to sign up by 9 March 2014:
https://meta.wikimedia.org/wiki/Grants:IEG/Committee
Some examples of projects we've funded in the past:
*Organizing social media for Chinese Wikipedia ($350 for materials)[1]
*Improving gadgets for Visual Editor ($4500 for developers)[2]
*Coordinating free access to reliable sources for Wikipedians ($7500 for
project management, consultants and materials)[3]
*Building community and strategy for Wikisource (EUR 10,000 for
organizing and travel)[4]
You can read more on the WMF blog:
https://blog.wikimedia.org/tag/individual-engagement-grants/
Hope to have your participation in this round!
Best wishes,
Siko
[1]
https://meta.wikimedia.org/wiki/Grants:IEG/Build_an_effective_method_of_pub…
[2]
https://meta.wikimedia.org/wiki/Grants:IEG/Visual_editor-_gadgets_compatibi…
[3] https://meta.wikimedia.org/wiki/Grants:IEG/The_Wikipedia_Library
[4]
https://meta.wikimedia.org/wiki/Grants:IEG/Elaborate_Wikisource_strategic_v…
--
Siko Bouterse
Wikimedia Foundation, Inc.
sbouterse(a)wikimedia.org
*Imagine a world in which every single human being can freely share in the
sum of all knowledge. *
*Donate <https://donate.wikimedia.org> or click the "edit" button today,
and help us make it a reality!*
_______________________________________________
Wikimedia-l mailing list
Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>