Hello all,
This list is now decommissioned. Further messages sent to the list will be
automatically rejected.
The phabricator task for this is here if there are any further
questions/concerns: https://phabricator.wikimedia.org/T278516
Thanks!
Marielle Volz
This list hasn't seen a posting since April 2019 and no longer has any
list admins. Are there any objections to closing this list?
See also <https://phabricator.wikimedia.org/T278516>.
-- Legoktm
*TL;DR*: As part of the annual plan, we are moving Wikimedia Services to be
run in an orchestrated platform driven by a new build pipeline (yes, that
means containers). All new services for Wikimedia production must now start
using this platform. By the end of June 2019, we will have started
converting most remaining services. If you're responsible for an existing
or new service, please talk to us.
Some Wikimedia functionality is provided by services which operate
alongside MediaWiki, complementing or supporting important features for
users. They currently mostly run directly on the "bare metal" shared
service clusters. The Deployment Pipeline "TEC3" annual plan[0] involves
modernising production platforms, and we have a goal to move all of these
to the new platform we've been building.
There are many benefits of changing to use this platform:
* Adding/removing capacity is as easy as a deployment; consequently,
overall our clusters will be more scalable, and used more efficiently;
* Rolling deployments are the default, reducing disruption and increasing
dependability for readers, editors, and other end-users of our services;
* Deployments are automatically versioned, rather than relying on a manual
process from deployers;
* There is increased testing before the deployment, by means of a staging
cluster; and
* Developers no longer need to create and use deploy repos manually, the
pipeline does this.
If you are an owner, developer, or deployer of a current or forthcoming
Wikimedia production service, we need to hear from you.
Some services have already moved to this new platform, and others are
moving to it.[1] We are building and improving the features as we discover
needs, so while we are now confident that services run reliably on this
platform, we still lack comprehensive documentation for it.
We are already in the process of creating it but we would like your input
to make sure that we are supporting all our stakeholders.
Please peruse the page about the deployment pipeline on wikitech[2] and
poke around. What documentation do you actually want? How can we make your
work simpler and more straightforward? Please file a task in
Phabricator[3] or comment on the talk page.[4]
[0] -
https://www.mediawiki.org/wiki/Wikimedia_Technology/Annual_Plans/FY2019/TEC…
[1] - https://phabricator.wikimedia.org/T198901
[2] - https://wikitech.wikimedia.org/wiki/Deployment_pipeline
[3] - https://phabricator.wikimedia.org/project/profile/2453/
[4] - https://wikitech.wikimedia.org/wiki/Talk:Deployment_pipeline
Deployment Pipeline team
--
*James D. Forrester* (he/him <http://pronoun.is/he> or they/themself
<http://pronoun.is/they/.../themself>)
Wikimedia Foundation <https://wikimediafoundation.org/>
Hello,
We're having some problems with the results we're getting from the Wiki API.
Although we've set the results to provide us with all results in a 100km
radius, we're only getting results in a 10km radius.
Our tech team say:
"We've realized that Wikipedia API couldn't respond with the correct results
when search radius is over 10km.
According to the online documentation, invoker could be searching in the
100km range. But we only have the results in 10km range most when we pass
100km as the radius."
Is there a known issue? This hasn't been a problem for us in the past. Your
assistance is much appreciated.
Regards,
Chris Smyth
Christopher Smyth
CEO/Co-Founder
Inflighto
chris(a)inflighto.com
+61 (0)417 298 598
"The airplane app that's revolutionizing inflight entertainment" - CNN
Travel
<https://www.inflighto.com/>
Hi,
if we are speaking about the geosearch generator or list module [0]: if I'm
not mistaken, 10km (20km for Wikivoyage) has been the maximum radius allowed
for a long time (since 2014 [1]). The documentation and the API response
should properly mention this maximum value.
It's limited for performance reasons because this API is designed to sort
*all* the results it sees.
If sorting by distance is not required, there is a way to run a simple
"distance filter" using the search APIs[2] and the search keyword nearcoord
[3], e.g. nearcoord:1000km,0,0 [4].
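A minimal sketch of that workaround in Python, assuming a hypothetical helper name and example coordinates (none of these values are from the thread):

```python
from urllib.parse import urlencode

# Hypothetical helper: build a search-API query URL that filters pages by
# distance with the CirrusSearch `nearcoord` keyword, instead of the
# geosearch module (whose radius is capped at 10km on most wikis).
def build_nearcoord_url(lat, lon, radius_km,
                        api="https://en.wikipedia.org/w/api.php"):
    params = {
        "action": "query",
        "list": "search",
        # nearcoord:<radius>,<lat>,<lon> filters by distance without sorting
        "srsearch": f"nearcoord:{radius_km}km,{lat},{lon}",
        "format": "json",
    }
    return f"{api}?{urlencode(params)}"

print(build_nearcoord_url(-33.87, 151.21, 100))
```

Note the trade-off mentioned above: unlike geosearch, the results are not sorted by distance, which is what lets the larger radius stay cheap.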
[0]:
https://en.wikipedia.org/w/api.php?action=help&modules=query%2Bgeosearch
[1]: https://gerrit.wikimedia.org/r/c/operations/mediawiki-config/+/119313
[2]: https://en.wikipedia.org/w/api.php?action=help&modules=query%2Bsearch
[3]: https://www.mediawiki.org/wiki/Help:CirrusSearch#bounded
[4]:
https://en.wikipedia.org/w/api.php?action=query&list=search&srsearch=nearco…
David C.
<tl;dr>: Read https://www.mediawiki.org/wiki/Google_Code-in/Mentors and
add your name to the mentors table and start tagging #GCI-2018 tasks.
We'll need MANY mentors and MANY tasks, otherwise we cannot make it.
Google Code-in is an annual contest for 13-17 year old students. It
will take place from Oct 23 to Dec 13. It's not only about coding:
we also need tasks about design, docs, outreach/research, QA.
Last year, 300 students worked on 760 tasks supported by 51 mentors.
For some achievements from last round, see
https://blog.wikimedia.org/2018/03/20/wikimedia-google-code-in-2017/
While we wait to hear whether Wikimedia will be accepted:
* You have small, self-contained bugs you'd like to see fixed?
* Your documentation needs specific improvements?
* Your user interface has some smaller design issues?
* Your Outreachy/Summer of Code project welcomes small tweaks?
* You'd enjoy helping someone port your template to Lua?
* Your gadget code uses some deprecated API calls?
* You have tasks in mind that welcome some research?
Note that "beginner tasks" (e.g. "Set up Vagrant") and generic
tasks are very welcome (like "Choose and fix 2 PHP7 issues from
the list in https://phabricator.wikimedia.org/T120336" style).
We also have more than 400 unassigned open #easy tasks listed:
https://phabricator.wikimedia.org/maniphest/query/HCyOonSbFn.z/#R
Can you mentor some of those tasks in your area?
Please take a moment to find / update [Phabricator etc.] tasks in your
project(s) which would take an experienced contributor 2-3 hours. Read
https://www.mediawiki.org/wiki/Google_Code-in/Mentors
, ask if you have any questions, and add your name to
https://www.mediawiki.org/wiki/Google_Code-in/2018#List_of_Wikimedia_mentors
(If you have mentored before and have a good overview of our
infrastructure: We also need more organization admins! See
https://www.mediawiki.org/wiki/Google_Code-in/Admins )
Thanks (as we cannot run this without your help),
andre
--
Andre Klapper | ak-47(a)gmx.net
https://blogs.gnome.org/aklapper/
Hi all,
tl;dr
On Monday August 6 we are making EventStreams multi-DC, and this should be
transparent to users.
Due to a recent outage
<https://wikitech.wikimedia.org/wiki/Incident_documentation/20180711-kafka-e…>
of our main eqiad Kafka cluster, we want to make the EventStreams
service support multiple datacenters for better high availability. To do
so, we need to hide the Kafka cluster message offsets from the
SSE/EventSource clients. On Monday August 6th, we will deploy a change to
EventStreams that will make it use message timestamps instead of message
offsets in the SSE/EventSource id field that is returned for every received
message. This will allow EventStreams to be backed by any Kafka cluster,
with auto-resuming during reconnect based on timestamp instead of Kafka
cluster based logical offsets.
This deployment should be transparent to clients. SSE/EventSource clients
will reconnect automatically and begin to use timestamps instead of offsets
in the Last-Event-ID.
You can read more about this work here:
https://phabricator.wikimedia.org/T199433
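To illustrate why this is transparent to clients, here is a sketch of the client-side resume logic; the helper name and the id payload values are made up for the example, not taken from the service:

```python
import json

# Sketch of client-side resume for an SSE/EventSource stream. The client
# treats the `id` field of each event as opaque and simply echoes it back
# in the Last-Event-ID header on reconnect; after this change the id
# carries per-partition timestamps instead of Kafka offsets, so any
# backing Kafka cluster can resume the stream.
def resume_headers(last_event_id):
    return {"Last-Event-ID": last_event_id}

# Illustrative id payload (values invented for the example):
last_id = json.dumps([{"topic": "eqiad.mediawiki.recentchange",
                       "partition": 0,
                       "timestamp": 1533558000000}])
print(resume_headers(last_id)["Last-Event-ID"])
```

Because the client never interprets the id, swapping offsets for timestamps server-side requires no client changes, which is what makes the deployment transparent.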
- Andrew Otto, Systems Engineer, WMF
Not sure if this factors in, but Analytics will be having an offsite in NYC
the week of Sept 16 - 22. Luca and I will be less available during that
week.
On Mon, Jul 30, 2018 at 3:23 PM Alexandros Kosiaris <akosiaris(a)wikimedia.org>
wrote:
> Hello everyone,
>
> I hope I have included all the relevant teams, please add anyone I
> might have forgotten
>
> As you probably know, Ops has decided to perform a datacenter
> switchover this quarter. It's already close to 1.5 years[1] since the
> last one and we've been wanting to do them more like once per year. Our
> tracking task is https://phabricator.wikimedia.org/T199073 and work
> has already started in various sub areas. One thing that we need to
> decide on is the actual dates. Having looked at the various
> possibilities and the work that needs to be done up to that point, we
> have come to the conclusion that the weeks of
>
> * 17-21 Sept 2018
> * 24-28 Sept 2018
>
> for the switchover and
>
> * 08-12 Oct 2018
> * 15-19 Oct 2018
>
> for the switch back are the best candidates.
>
> Keep in mind that this time around we want to do at least 3 weeks, 1
> more week than the previous switchover, so if we do choose the week of
> 24-28 Sept for the switchover, we have to do the week of 15-19 for the
> switchback.
>
> We also need to decide on the actual time of day, of course. So this is
> your invitation to a scheduling meeting to discuss that.
>
> Remembering the previous switchover's scheduling meeting I think that
> coming up with a proposal that can be discussed individually/per team
> before the large meeting can be beneficial.
>
> So here's a proposal that just copies the previous
> switchover/switchback schedule [1]
>
> Switchover:
>
> Services: Tuesday, September 18th 2018 14:30 UTC
> Media storage/Swift: Tuesday, September 18th 2018 15:00 UTC
> Traffic: Tuesday, September 18th 2018 19:00 UTC
> Mediawiki: Wednesday, September 19th 2018: 14:00 UTC
>
> Switchback:
>
> Traffic: Tuesday, October 9th 2018 19:00 UTC (and maybe some prep work
> on Monday)
> Mediawiki: Wednesday, October 10th 2018: 14:00 UTC
> Services: Thursday, October 11th 2018 14:30 UTC
> Media storage/Swift: Thursday, October 11th 2018 15:00 UTC
>
> Now as for the meeting date, google for this week suggested August 1st
> 14:00 UTC as the time with the least possible conflicts, so here it
> is.
>
> [1] https://wikitech.wikimedia.org/wiki/Switch_Datacenter
>
> Regards,
>
> --
> Alexandros Kosiaris <akosiaris(a)wikimedia.org>
>
> _______________________________________________
> Ops-private mailing list
> Ops-private(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/ops-private
>
[Adding some other mailing lists in Cc]
Hi everybody,
as a lot of you have probably already noticed yesterday reading the
operations@ mailing list, we had an outage of the Kafka Main eqiad cluster
that forced us to switch the Eventbus and Eventstreams services to codfw.
All the precise timings will be listed in
https://wikitech.wikimedia.org/wiki/Incident_documentation/20180711-kafka-e…,
but for a quick glimpse:
2018-07-11 17:00 UTC - Eventbus service switched to codfw
2018-07-11 18:44 UTC - Eventstreams service switched to codfw
We are going to switch back those services to eqiad during the next couple
of hours. The consumers of the Eventstreams service may get some failures
or data drops, apologies in advance for the trouble.
Cheers,
Luca
On Thu, 12 Jul 2018 at 00:00, Luca Toscano <
ltoscano(a)wikimedia.org> wrote:
> Hi everybody,
>
> as you might have seen from the operations' channel on IRC the Kafka Main
> Eqiad cluster (kafka100[1-3].eqiad.wmnet) suffered a long outage due to new
> topics being created with overly long names (causing fs operation issues, etc.).
> I'll update this email thread tomorrow EU time with more details, tasks,
> precise root cause, etc.., but the important bit to know is that Eventbus
> and Eventstreams have been failed over to the Kafka Main Codfw cluster.
> This should be transparent to everybody but please let us know otherwise.
>
> Thanks for the patience!
>
> (a very sleepy :) Luca