Hi,
Composer recently released its first stable version, 1.0.0, which
among other things mandates usage of secure connections and validates
certificates[1]. I'd like for 1.27 to require 1.0.0 as the minimum
version people must use when installing MediaWiki dependencies (people
can always use mediawiki/vendor instead of Composer, though).
As a side-effect, this would let us get rid of some old back-compat code
that is currently triggering a deprecation notice on every composer
install command[2].
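As a rough illustration of such a minimum-version gate, a local install
script could parse and compare the installed Composer version. This is
only a sketch: the helper names are mine, and the only assumption about
Composer itself is the version string that `composer --version` prints.

```python
import re

MINIMUM = (1, 0, 0)  # the minimum Composer version proposed above

def composer_version(output):
    """Parse a (major, minor, patch) tuple out of `composer --version`
    output, e.g. "Composer version 1.0.0 2016-04-05 13:27:25"."""
    m = re.search(r"Composer version (\d+)\.(\d+)\.(\d+)", output)
    if not m:
        raise ValueError("could not parse Composer version from %r" % output)
    return tuple(int(x) for x in m.groups())

def check_minimum(output, minimum=MINIMUM):
    """Raise SystemExit if the reported Composer version is below `minimum`."""
    version = composer_version(output)
    if version < minimum:
        raise SystemExit(
            "Composer %d.%d.%d is too old; please upgrade to %d.%d.%d or later"
            % (version + minimum)
        )
    return version
```

In practice one would feed `check_minimum()` the captured output of
`composer --version` before running the install step.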
Thoughts?
[1] https://phabricator.wikimedia.org/T119272#2125086
[2] https://phabricator.wikimedia.org/T119590#2234183
-- Legoktm
Hey, this is the first weekly update on the Revision Scoring project,
forwarded here in case you are not subscribed to ai-l:
---------- Forwarded message ---------
From: Amir Ladsgroup <ladsgroup(a)gmail.com>
Date: Mon, Apr 25, 2016 at 7:02 PM
Subject: Weekly update
To: ai(a)lists.wikimedia.org <ai(a)lists.wikimedia.org>
Hello! This is our first weekly update posted to this mailing list.
New Developments
- Now you can abandon tasks you don't want to review in Wikilabels
(T105521)
- We collect user-agents in ORES requests (T113754)
- Precaching in ORES will be a daemon and more selective (T106638)
Progress in supporting new languages
- Russian reverted, damaging, and goodfaith models are built. They look
good and will be deployed this week.
- Hungarian reverted model is built and will be deployed this week.
The goodfaith and damaging campaign is loaded in Wikilabels.
- Japanese reverted model is built, but there are still some issues to
work out. (T133405)
Active Labeling campaigns
- Edit quality (damaging and good faith)
- Wikipedias: Arabic, Azerbaijani, Dutch, German, French, Hebrew,
Hungarian, Indonesian, Italian, Japanese, Norwegian, Persian (v2),
Polish, Spanish, Ukrainian, Urdu, Vietnamese
- Wikidata
- Edit type
- English Wikipedia
Sincerely,
The Revision Scoring team.
<https://meta.wikimedia.org/wiki/Research:Revision_scoring_as_a_service#Team>
Muhammed Tatlısu
34080/İstanbul-Türkiye
On 25 Apr 2016 at 14:44, "Andre Klapper" <aklapper(a)wikimedia.org> wrote:
> On Sat, 2016-04-23 at 13:44 +0530, Tony Thomas wrote:
> > The best approach would be to open up a Conpherence with your mentors
> > (I hope they appreciate it) and ask there. If you do not have any
> > luck with that, kindly ping or add the org-admins too, and we will
> > get this resolved.
>
> For those wondering: "Conpherence" is the name of the discussion tool
> in Wikimedia Phabricator. It allows private conversations.
> See https://www.mediawiki.org/wiki/Phabricator/Help#Using_Conpherence
>
> Cheers,
> andre
> --
> Andre Klapper | Wikimedia Bugwrangler
> http://blogs.gnome.org/aklapper/
>
>
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
Started as quick thoughts, turned into more of an essay, so I've posted the
bulk on mediawiki.org:
https://www.mediawiki.org/wiki/User:Brion_VIBBER/ResourceLoader_and_latency
tl;dr summary:
On slow networks, latency in loading large JS and HTML resources means
things don't always work right when we first see them.
If we take advantage of HTTP/2 we could skip the concatenation of separate
ResourceLoader modules, reducing the latency until each module _runs_
without adding _network_ latency.
And if we're more clever about handling 'progressive enhancement' via JS
_while_ an HTML page loads, we could reduce the time before large pages
become fully interactive.
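The trade-off above can be sketched with a toy latency model (all
numbers and function names here are hypothetical, not measurements):
with concatenation, no module can run until the whole bundle has
arrived; with multiplexed HTTP/2 streams sharing the same bandwidth,
the smallest module can run as soon as its own bytes are in.

```python
# Toy model: time until the *first* module can run, given module sizes
# (in KB) and a link bandwidth (in KB/s). Total bytes transferred are
# identical in both cases; only time-to-first-run differs.

def concatenated_first_run(module_sizes, bandwidth):
    """All modules ship as one bundle: nothing runs until it all arrives."""
    return sum(module_sizes) / bandwidth

def multiplexed_first_run(module_sizes, bandwidth):
    """Modules arrive as separate streams sharing bandwidth fairly.
    The smallest module finishes (and can run) first; until then all
    len(module_sizes) streams are active, so it receives
    bandwidth / len(module_sizes)."""
    n = len(module_sizes)
    return min(module_sizes) * n / bandwidth
```

Since `min(sizes) * n <= sum(sizes)`, the first module never runs later
under multiplexing, which is the "reduce latency without adding network
latency" point above.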
-- brion
Over in the TimedMediaHandler extension, old code often squished
read-write operations into data getters for convenience. Those hacks
got removed due to problems with long-running transactions, or when
refactoring for future-facing multi-DC work, where we want requests to
more reliably distinguish between read-only and read-write. And we
sometimes want to put some of those clever hacks back and add more. ;)
For instance in https://gerrit.wikimedia.org/r/284368 we'd like to remove
transcode derivative files of types/resolutions that have been disabled
automatically when we come across them. But I'm a bit unsure it's safe to
do so.
Note that we could fire off a job queue background task to do the actual
removal... But is it also safe to do that on a read-only request?
https://www.mediawiki.org/wiki/Requests_for_comment/Master_%26_slave_datace…
seems to indicate job queueing will be safe, but would like to confirm
that. :)
Similarly, in https://gerrit.wikimedia.org/r/#/c/284269/ we may wish to
trigger missing transcodes to run on demand. The actual re-encoding
happens in a background job, but we have to fire it off, and we have to
record that we fired it off so we don't duplicate it...
(This would require a second queue to do the high-priority state table
update and queue the actual transcoding job; we can't put them in one queue
because a backup of transcode jobs would prevent the high priority job from
running in a timely fashion.)
A best-practices document on future-proofing for multi-DC work would be
pretty awesome! Maybe factor out some of the stuff from the RfC into a
nice dev doc page...
-- brion
Hello,
A few updates from the Discovery department this week.
* Portal team brainstormed ideas for the future of the Portal work and
beyond; read more here
<https://www.mediawiki.org/wiki/Wikipedia.org_Portal_brainstorming_ideas_for…>
.
* The zero results rate
<http://discovery.wmflabs.org/metrics/#kpi_zero_results> [for search] was
updated to no longer count queries that it should've been excluding. This
shows that the completion suggester had a bigger effect than previously
thought, actually reducing the zero results rate from roughly 33% to
roughly 22%.
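To illustrate what "no longer counting queries that should've been
excluded" does to the metric: the rate is a simple fraction, and
shrinking the denominator's junk changes it. A sketch (the exclusion
predicate and data here are invented; only the 33% -> 22% figures come
from the update above):

```python
def zero_results_rate(queries, exclude=None):
    """Fraction of (query, hit_count) pairs with zero hits, optionally
    skipping queries matched by an exclusion predicate."""
    exclude = exclude or (lambda q: False)
    kept = [(q, hits) for q, hits in queries if not exclude(q)]
    if not kept:
        return 0.0
    return sum(1 for _, hits in kept if hits == 0) / len(kept)
```

Excluding a class of queries that almost always returns zero hits
lowers the measured rate without any change to search itself, which is
why the completion suggester's effect looked smaller before the fix.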
----
Feedback and suggestions on this weekly update are welcome.
The full update, and archive of past updates, can be found on MediaWiki.org:
https://www.mediawiki.org/wiki/Discovery/Status_updates
--
Yours,
Chris Koerner
Community Liaison - Discovery
Wikimedia Foundation
Hi everyone,
Having successfully served our sites from our backup data center codfw
(Dallas) for the past two days, we're now starting our switch back to
eqiad (Ashburn) as planned[1].
We've already moved cache traffic back to eqiad, and within the next
few minutes we'll disable editing by going read-only for approximately
30 minutes - hopefully a bit faster than 2 days ago.
[1] http://blog.wikimedia.org/2016/04/11/wikimedia-failover-test/
On Tue, Apr 19, 2016 at 6:00 PM, Mark Bergsma <mark(a)wikimedia.org> wrote:
> Hi all,
>
> Today the data center switch-over commenced as planned, and has just fully
> completed successfully. We are now serving our sites from codfw (Dallas,
> Texas) for the next 2 days if all stays well.
>
> We switched the wikis to read-only (editing disabled) at 14:02 UTC, and
> went back read-write at 14:48 UTC - a little longer than planned. While
> edits were possible from then on, Special:RecentChanges (and related
> change feeds) were unfortunately not working until 15:10 UTC, due to an
> unexpected configuration problem with our Redis servers, which we then
> found and fixed. The site has stayed up and available for readers
> throughout the entire migration.
>
> Overall the procedure was a success, with few problems along the way.
> However, we've also carefully kept track of any issues and delays we
> encountered, to evaluate how to improve and speed up the procedure and
> reduce impact to our users - some of which will already be implemented
> for our switch back on Thursday.
>
> We're still expecting to find (possibly subtle) issues today, and would
> like everyone who notices anything to use the following channels to report
> them:
>
> 1. File a Phabricator issue with project #codfw-rollout
> 2. Report issues on IRC: Freenode channel #wikimedia-tech (if urgent)
> 3. Send an e-mail to the Operations list: ops(a)lists.wikimedia.org
>
> We're not done yet, but thanks to all who have helped so far. :-)
>
> Mark
>
--
Mark Bergsma <mark(a)wikimedia.org>
Lead Operations Architect
Director of Technical Operations
Wikimedia Foundation