Muhammed Tatlısu
34080/İstanbul-Türkiye
On 25 Apr 2016 at 14:44, "Andre Klapper" <aklapper(a)wikimedia.org> wrote:
> On Sat, 2016-04-23 at 13:44 +0530, Tony Thomas wrote:
> > The best approach would be to open up a Conpherence with your mentors
> > (I hope they appreciate it) and ask them the same. If you do not have
> > any luck with that, kindly ping or add the org-admins too, and we will
> > get this resolved.
>
> For those wondering: "Conpherence" is the name of the discussion tool
> in Wikimedia Phabricator. It allows private conversations.
> See https://www.mediawiki.org/wiki/Phabricator/Help#Using_Conpherence
>
> Cheers,
> andre
> --
> Andre Klapper | Wikimedia Bugwrangler
> http://blogs.gnome.org/aklapper/
Started as quick thoughts, turned into more of an essay, so I've posted the
bulk on mediawiki.org:
https://www.mediawiki.org/wiki/User:Brion_VIBBER/ResourceLoader_and_latency
tl;dr summary:
On slow networks, latency in loading large JS and HTML resources means
things don't always work right when we first see them.
If we take advantage of HTTP/2 we could skip the concatenation of separate
ResourceLoader modules, reducing the latency before each module _runs_
without adding _network_ latency.
And if we're more clever about handling 'progressive enhancement' via JS
_while_ an HTML page loads, we could reduce the time before large pages
become fully interactive.
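As a rough illustration of the HTTP/2 point (my own sketch, not something
from the essay): fetching a few ResourceLoader modules as separate load.php
requests over one HTTP/2 connection looks roughly like this in Python. The
module names and the httpx library are just illustrative choices:

    import asyncio
    import httpx  # third-party: pip install "httpx[http2]"

    # Illustrative module names; a real page would request whatever it needs.
    MODULES = ["jquery", "mediawiki.base", "mediawiki.util"]

    async def fetch_modules():
        async with httpx.AsyncClient(http2=True,
                                     base_url="https://en.wikipedia.org") as client:
            # One request per module instead of one concatenated bundle.
            # HTTP/2 multiplexes these over a single connection, so the extra
            # requests add little network latency, and each module can be
            # cached independently and start executing as soon as it arrives.
            tasks = [client.get("/w/load.php",
                                params={"modules": name, "only": "scripts"})
                     for name in MODULES]
            for name, resp in zip(MODULES, await asyncio.gather(*tasks)):
                print(name, resp.http_version, len(resp.content), "bytes")

    asyncio.run(fetch_modules())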
-- brion
Over in the TimedMediaHandler extension, we've had a number of cases where
old code squished read-write operations into data getters because it was
convenient. Those hacks got removed due to problems with long-running
transactions, or when refactoring to support future-facing multi-DC work,
where we want requests to be able to more reliably distinguish between
read-only and read-write. And we sometimes want to put some of those clever
hacks back and add more. ;)
For instance, in https://gerrit.wikimedia.org/r/284368 we'd like to
automatically remove transcode derivative files of types/resolutions that
have been disabled, whenever we come across them. But I'm a bit unsure
whether it's safe to do so.
Note that we could fire off a job queue background task to do the actual
removal... But is it also safe to do that on a read-only request?
https://www.mediawiki.org/wiki/Requests_for_comment/Master_%26_slave_datace…
seems to indicate job queueing will be safe, but I'd like to confirm
that. :)
Similarly, in https://gerrit.wikimedia.org/r/#/c/284269/ we may wish to
trigger missing transcodes to run on demand. The actual re-encoding happens
in a background job, but we have to fire it off, and we have to record that
we fired it off so we don't duplicate it...
(This would require a second queue to do the high-priority state table
update and queue the actual transcoding job; we can't put them in one queue
because a backup of transcode jobs would prevent the high priority job from
running in a timely fashion.)
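A conceptual sketch in plain Python (not MediaWiki's actual JobQueue
classes) of that two-queue split: a quick job records the request in the
state table and hands the slow re-encoding off to a separate queue, so a
backlog of transcodes can't delay the bookkeeping:

    import queue
    import threading
    import time

    # Two independent queues: a backlog in the slow transcode queue cannot
    # delay the quick "record that we requested this" jobs.
    state_update_queue = queue.Queue()
    transcode_queue = queue.Queue()

    transcode_state = {}            # stand-in for the transcode state table
    state_lock = threading.Lock()

    def state_update_worker():
        # High-priority worker: mark the transcode as queued, then hand off.
        while True:
            key = state_update_queue.get()
            with state_lock:
                duplicate = transcode_state.get(key) == "queued"
                if not duplicate:
                    transcode_state[key] = "queued"
            if not duplicate:
                transcode_queue.put(key)    # enqueue the actual heavy job
            state_update_queue.task_done()

    def transcode_worker():
        # Low-priority worker: the slow re-encoding itself happens here.
        while True:
            key = transcode_queue.get()
            time.sleep(1)                   # pretend to transcode
            with state_lock:
                transcode_state[key] = "done"
            transcode_queue.task_done()

    threading.Thread(target=state_update_worker, daemon=True).start()
    threading.Thread(target=transcode_worker, daemon=True).start()

    # A request that notices a missing derivative only enqueues the quick job.
    state_update_queue.put(("File:Example.webm", "480p"))
    time.sleep(0.1)
    print(transcode_state)   # {('File:Example.webm', '480p'): 'queued'}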
A best practices document on future-proofing for multi-DC would be pretty
awesome! Maybe factor out some of the stuff from the RfC into a nice dev
doc page...
-- brion
Hello,
A few updates from the Discovery department this week.
* Portal team brainstormed ideas for the future of the Portal work and
beyond; read more here
<https://www.mediawiki.org/wiki/Wikipedia.org_Portal_brainstorming_ideas_for…>
.
* The zero results rate
<http://discovery.wmflabs.org/metrics/#kpi_zero_results> [for search] was
updated to exclude queries that should not have been counted. This shows
that the completion suggester had a bigger effect than previously thought,
reducing the zero results rate from roughly 33% to roughly 22%.
----
Feedback and suggestions on this weekly update are welcome.
The full update, and an archive of past updates, can be found on MediaWiki.org:
https://www.mediawiki.org/wiki/Discovery/Status_updates
--
Yours,
Chris Koerner
Community Liaison - Discovery
Wikimedia Foundation
Hi everyone,
After successfully serving our sites from our backup data center codfw
(Dallas) for the past two days, we're now starting our switch back to
eqiad (Ashburn) as planned[1].
We've already moved cache traffic back to eqiad, and within the next few
minutes we'll disable editing by going read-only for approximately 30
minutes - hopefully a bit faster than two days ago.
[1] http://blog.wikimedia.org/2016/04/11/wikimedia-failover-test/
On Tue, Apr 19, 2016 at 6:00 PM, Mark Bergsma <mark(a)wikimedia.org> wrote:
> Hi all,
>
> Today the data center switch-over commenced as planned, and has just fully
> completed successfully. We are now serving our sites from codfw (Dallas,
> Texas) for the next 2 days if all stays well.
>
> We switched the wikis to read-only (editing disabled) at 14:02 UTC, and
> went back read-write at 14:48 UTC - a little longer than planned. While
> edits were possible from then on, Special:RecentChanges (and related
> change feeds) unfortunately did not work until 15:10 UTC, due to an
> unexpected configuration problem with our Redis servers, which we then
> found and fixed. The site stayed up and available for readers throughout
> the entire migration.
>
> Overall the procedure was a success, with few problems along the way.
> However, we've also carefully kept track of any issues and delays we
> encountered, so we can evaluate them to improve and speed up the procedure
> and reduce the impact on our users - some of these improvements will
> already be in place for our switch back on Thursday.
>
> We're still expecting to find (possibly subtle) issues today, and would
> like everyone who notices anything to use the following channels to report
> them:
>
> 1. File a Phabricator issue with project #codfw-rollout
> 2. Report issues on IRC: Freenode channel #wikimedia-tech (if urgent)
> 3. Send an e-mail to the Operations list: ops(a)lists.wikimedia.org
>
> We're not done yet, but thanks to all who have helped so far. :-)
>
> Mark
>
--
Mark Bergsma <mark(a)wikimedia.org>
Lead Operations Architect
Director of Technical Operations
Wikimedia Foundation
Hey Everyone,
I am an active contributor to Wikiversity (German and English). Over the
years we realized that MediaWiki is not really providing everything
students and teachers need in the classroom. Therefore Sebastian Schlicht
and I have created a bunch of JavaScript, Lua modules and templates in
order to improve the user interface and some processes on the English
Wikiversity. A community poll showed support for moving our scripts to
common.js:
https://en.wikiversity.org/wiki/Wikiversity_talk:MOOC_Interface#Support
The improvements can be seen live in this course:
https://en.wikiversity.org/wiki/Web_Science/Part1:_Foundations_of_the_web/I…
Over time we have realised that it would be better to create a standalone
MediaWiki extension, since this is more stable. So we proposed our idea for
this year's fOERder Award, which gave us some funding for this OER (Open
Educational Resources) related project.
Today Sebastian and I started our process.
* We have installed Vagrant and have a MediaWiki running locally! This is
great and was well documented.
* I have created: https://www.mediawiki.org/wiki/Extension:MOOC
* We need a Gerrit project extension/MOOC
* We need a Vagrant role for this
Abraham Taherivand suggested sending a mail to this list with our needs and
our introduction.
Being new to the Wikimedia world can be a little bit confusing - even
though everything is well documented - so I am sorry if we have overlooked
something in the documentation. We would be really happy for support,
pointers and hints, and especially for a Gerrit project and a Vagrant role.
My username on Gerrit, as on other wikis, is renepick.
Best regards, Sebastian Schlicht and Rene Pickhardt
--
www.rene-pickhardt.de
<http://www.beijing-china-blog.com/>
Skype: rene.pickhardt
mobile: +49 (0)176 5762 3618 office: +49 (0) 261 / 287 2765 fax: +49
(0) 261 / 287 100 2765
Hello!
The analytics team is happy to announce that the Unique Devices data is now
available to be queried programmatically via an API.
This means that getting the daily number of unique devices [1] for English
Wikipedia for the month of February 2016, for all sites (desktop and
mobile) is as easy as launching this query:
https://wikimedia.org/api/rest_v1/metrics/unique-devices/en.wikipedia.org/a…
You can get started by taking a look at our docs:
https://wikitech.wikimedia.org/wiki/Analytics/Unique_Devices#Quick_Start
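If you'd rather script it, here is a minimal Python sketch using the
requests library; the path segments (access-site "all-sites", granularity
"daily", YYYYMMDD dates) and the response field names reflect my reading of
the Quick Start docs above, so please double-check them there:

    import requests

    BASE = "https://wikimedia.org/api/rest_v1/metrics/unique-devices"

    def unique_devices(project, access_site, granularity, start, end):
        # Builds e.g. .../unique-devices/en.wikipedia.org/all-sites/daily/...
        url = "{}/{}/{}/{}/{}/{}".format(BASE, project, access_site,
                                         granularity, start, end)
        resp = requests.get(url,
                            headers={"User-Agent": "unique-devices-example"})
        resp.raise_for_status()
        return resp.json()["items"]

    # Daily unique devices for English Wikipedia, all sites, February 2016.
    for item in unique_devices("en.wikipedia.org", "all-sites", "daily",
                               "20160201", "20160229"):
        print(item["timestamp"], item["devices"])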
If you are not familiar with the Unique Devices data, the main thing you
need to know is that it is a good proxy metric for measuring unique users;
more info below.
Since 2009, the Wikimedia Foundation had used comScore to report data about
unique web visitors. In January 2016, however, we decided to stop reporting
comScore numbers [2] because of certain limitations in the methodology;
these limitations translated into misreported mobile usage. We are now
ready to replace comScore numbers with the Unique Devices dataset.
While unique devices does not equal unique visitors, it is a good proxy for
that metric, meaning that a major increase in the number of unique devices
is likely to come from an increase in distinct users. We understand that
counting uniques raises fairly big privacy concerns, so we use a very
privacy-conscious way to count unique devices: it does not involve any
cookie by which your browsing history could be tracked [3].
[1] https://meta.wikimedia.org/wiki/Research:Unique_Devices
[2] https://meta.wikimedia.org/wiki/ComScore/Announcement
[3]
https://meta.wikimedia.org/wiki/Research:Unique_Devices#How_do_we_count_uni…
devices.3F