Hi,
On Tue, Mar 1, 2016 at 3:36 PM, David Strine <dstrine(a)wikimedia.org> wrote:
> We will be holding this brownbag in 25 minutes. The Bluejeans link has
> changed:
>
> https://bluejeans.com/396234560
I'm not familiar with BlueJeans and may have missed a transition because
I wasn't paying enough attention. Is this some kind of experiment? Have
all meetings transitioned to this service?
Anyway, my immediate question is: how do you join without sharing your
microphone and camera?
Am I correct in thinking that this is an entirely proprietary stack
that's neither gratis nor libre and has no on-premise (not cloud)
hosting option? Are we paying for this?
-Jeremy
Hello,
Could someone update the list at https://phabricator.wikimedia.org/P10500,
which contains repositories that don't use mediawiki/mediawiki-codesniffer?
I found that many repositories in the list are empty, and some are no
longer available on Gerrit.
So, could someone please update this list of repositories (in
mediawiki/extensions) that don't use mediawiki/mediawiki-codesniffer but
contain at least one PHP file? Or provide me with a command I can run
myself to update the list whenever I want, so I don't need to request it
every time.
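To be clear about what I mean, here is a rough sketch of the kind of
command I'm imagining (assuming all the extension repos are checked out
under one local directory; the paths and details are just my guess):

```shell
# Rough sketch: list every repo under the given directory that contains
# at least one PHP file but whose composer.json does not require
# mediawiki/mediawiki-codesniffer.
list_repos_without_codesniffer() {
    for dir in "$1"/*/; do
        # Skip repos that contain no PHP files at all (empty/non-PHP repos).
        find "$dir" -name '*.php' -print -quit | grep -q . || continue
        # Report repos that don't declare the codesniffer dependency
        # (-s silences the error when composer.json is missing).
        grep -qs 'mediawiki/mediawiki-codesniffer' "$dir/composer.json" \
            || printf '%s\n' "$dir"
    done
}

# e.g., from a checkout with all extensions present:
# list_repos_without_codesniffer extensions
```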
Best regards,
Zoran.
P. S.: Happy weekend! :)
Hi,
We're currently in the process of upgrading the MediaWiki servers to
Debian Buster and expect a performance regression to come with it.
The cause appears to be better Spectre[1] mitigations in the Buster 4.19
kernel, which we can't disable. Most of the effect is seen in code that
ends up invoking syscalls like filemtime, file_get_contents, etc.
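To give an idea of the kind of change that can help, here is a purely
illustrative sketch (not existing MediaWiki code): PHP already caches
stat results such as filemtime() within a request, but repeated
file_get_contents() calls on the same path each hit the filesystem, so
memoizing them in-process cuts the syscall count.

```php
/**
 * Illustrative sketch only: cache file contents in a static array so
 * each file is read from disk at most once per request.
 */
function cachedFileGetContents( string $path ) {
	static $cache = [];
	if ( !array_key_exists( $path, $cache ) ) {
		$cache[$path] = file_get_contents( $path );
	}
	return $cache[$path];
}
```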
I posted some numbers and charts on the Phabricator investigation
ticket[2]. For normal requests it looks like ~5% worse for p50/p75 and
around ~13% worse for p95/p99. API requests look much worse, at 10% for
p50 and 22% for p75.
What now? We're going to continue with the upgrade as planned, but we
also need help to try and make some performance improvements to reduce
the impact of the regression.
The PHP profiling flamegraphs[3] are a great tool for identifying
potentially slow spots. We now also have flamegraphs that contain only
Buster requests, and I created a set of differential flamegraphs[4] that
compare Stretch vs Buster so you can see which specific areas slowed down.
You can also use WikimediaDebug/XHGui[5] to profile a specific request.
mwdebug1001/mwdebug1002 are Stretch and mwdebug1003 is Buster.
If you have questions or suggestions, please let us know. Thanks to
everyone who helped with the investigation and to those who've already
started working on improvements.
[1] https://en.wikipedia.org/wiki/Spectre_(security_vulnerability)
[2] https://phabricator.wikimedia.org/T273312#6802330
[3] https://performance.wikimedia.org/php-profiling/
[4]
https://people.wikimedia.org/~legoktm/T273312/data/clean/images/flamegraphs/
[5] https://wikitech.wikimedia.org/wiki/WikimediaDebug#Request_profiling
-- Kunal
Hi there,
I am investigating a breakage in my extension that has occurred in MW 1.34
but which didn't seem to be a problem on MW 1.29. (I have not tested
interim versions to see where the issue first arises.)
One of the parser hooks in the extension needs to perform variable
expansion. What is happening is a lot more complicated than this example,
but effectively
<my_hook Foo="What the foo!">{{{Foo}}}</my_hook>
should end up generating the following output, using variable expansion:
What the foo!
The semantics of variable handling need to follow the MW semantics,
including default values (possibly nested), parser functions, etc.;
therefore it needs to use the MW parser to perform the expansion.
Assuming the arguments that MW passes into the parser hook are named $Text,
$Vars, $Parser and $Frame, the relevant code looks something like this
(again, a bit more complicated in practice):
$NewFrame = new PPTemplateFrame_DOM($Frame->preprocessor, $Frame,
    array(), $Vars, $Frame->title);
return $Parser->replaceVariables($Text, $NewFrame);
(I have included a more detailed listing of the code that I am using for
doing the parse at the end of this message.)
My code was working fine on MW 1.29 and earlier, but when I upgrade to 1.34
I am finding that I get a fatal exception thrown when my tag is encountered:
/index.php?title=Main_Page MWException
from line 373 of ~\includes\parser\PPFrame_DOM.php:
PPFrame_DOM::expand: Invalid parameter type
I have generated a backtrace and the top of the stack is as follows:
#0 ~\includes\parser\Parser.php(3330): PPFrame_DOM->expand(PPNode_Hash_Tree,
integer)
#1 MyExtension.php (434): Parser->replaceVariables(string,
PPTemplateFrame_DOM)
#2 ~\includes\parser\Parser.php(4293): MyExtensionParserHook(string, array,
Parser, PPTemplateFrame_Hash)
(The subsequent call stack entries are the parent functions you would expect
to see in that situation.)
Can anyone see why the above code would no longer work as it did on previous
versions? What is the current recommended method for manually expanding
template variables from within a parser hook?
Kind regards,
- Mark Clements (HappyDog)
----------------------------------
Full example (with extension-specific code omitted):
----------------------------------
function MyExtensionParserHook($Text, $Vars, $Parser, $Frame) {
    // 1) Manipulate $Text and $Vars
    //    (omitted)

    // 2) Expand variables in the resulting text.
    // Set up a new frame which mirrors the existing one but which also
    // has the field values as arguments.
    // If we are already in a template frame, merge the field arguments
    // with the existing template arguments first.
    if ($Frame instanceof PPTemplateFrame_DOM) {
        $NumberedArgs = $Frame->numberedArgs;
        $NamedArgs = array_merge($Frame->namedArgs, $Vars);
    }
    else {
        $NumberedArgs = array();
        $NamedArgs = $Vars;
    }
    $NewFrame = new PPTemplateFrame_DOM($Frame->preprocessor, $Frame,
        $NumberedArgs, $NamedArgs, $Frame->title);

    // Perform a recursive parse on the input, using our newly created frame.
    return $Parser->replaceVariables($Text, $NewFrame);
}
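(One thing I notice in the backtrace: MediaWiki is passing my hook a
PPTemplateFrame_Hash, while my code constructs a PPTemplateFrame_DOM. If
1.34 defaults to the Hash preprocessor, perhaps the child frame needs to
match the incoming frame's family. Untested speculation on my part:)

```php
// Untested speculation: choose the child frame class to match the frame
// MediaWiki actually passed in, instead of hard-coding the DOM variant.
$frameClass = ( $Frame instanceof PPFrame_Hash )
    ? PPTemplateFrame_Hash::class
    : PPTemplateFrame_DOM::class;
$NewFrame = new $frameClass( $Frame->preprocessor, $Frame,
    $NumberedArgs, $NamedArgs, $Frame->title );
return $Parser->replaceVariables( $Text, $NewFrame );
```

(The instanceof check further up would need the same treatment, and I
haven't verified whether the Hash frame accepts plain string argument
values the same way.)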
// Sorry for cross-posting
Hi,
On a few first wikis[1], you can now highlight pairs of brackets in
wikitext. For this to work, you need to turn on the syntax highlighting
feature, which is part of the 2010 and 2017 wikitext editors. By placing
your cursor next to or within a set of brackets, you can then match round,
square and curly brackets. For more information about this feature please
visit its project page.[2]
Deployment to other wikis is planned for later this year. If your wiki
community wants to get bracket matching now, please contact me.
This change has been implemented by the Technical Wishes team, which is
currently working on several projects within the focus area "Make working
with templates easier"[3]. Other projects in this focus area, including
some for the Visual Editor, are in progress.
Many thanks to all who have contributed to the realization of this project
through comments, interviews and more. Feedback is, as always, welcome on
the project's talk page.[4]
Thanks,
Johanna for the Technical Wishes team
[1] dewiki, cawiki and trwiki
[2] https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Bracket_Matching
[3] https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Templates
[4]
https://meta.wikimedia.org/wiki/Talk:WMDE_Technical_Wishes/Bracket_Matching
Hello deployers,
TLDR: We are upgrading the deployment servers, both the physical
hardware (older R430 -> newer R440) and the OS version (stretch -> buster). [1]
What happened so far:
Today we switched the deployment server and scap master for codfw from
deploy2001 to deploy2002. [2]
What happens next:
On Monday, March 1st, we want to switch the deployment server and scap
master for eqiad from deploy1001 to deploy1002. [3][4]
The window is "20:00–22:00 UTC # 12:00–14:00 PST" after the morning
backport window for up to 2 hours. In this time you won't be able to
deploy.
https://wikitech.wikimedia.org/wiki/Deployments#Monday,_March_01
Do you have to do anything?
If you connect to "deployment.eqiad.wmnet", the host key will change.
You can use the scripts to update host keys, and the fingerprints are
also on Wikitech on new (and protected) pages. [5]
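For example, the stale entries can be dropped with ssh-keygen (a generic
sketch; the exact helper scripts and fingerprints are on Wikitech [5]):

```shell
# Remove the old host key entries; on the next connection the new keys
# will be offered and can be checked against the fingerprints on Wikitech.
# ("|| true" keeps this harmless if there is no matching entry or file.)
ssh-keygen -R deployment.eqiad.wmnet || true
ssh-keygen -R deploy1002.eqiad.wmnet || true
```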
Yes, you'll have to retrain your muscle memory to switch to deploy1002.
Sorry about that, but it's just a new generation every couple of years,
and this way we also have something to fall back to if needed. In return
you should get more performance from the new hardware.
No, you don't need to worry about losing data in your home dir,
everything has been rsynced over straight into /home on these hosts.
Let us know if you have any questions,
Daniel & Mukunda
[1] https://phabricator.wikimedia.org/T265963
[2] https://gerrit.wikimedia.org/r/c/operations/puppet/+/667043
[3] https://wikitech.wikimedia.org/wiki/Deployments#Monday,_March_01
[4] https://gerrit.wikimedia.org/r/q/topic:%22deployment-switch%22+(status:open)
[5] https://wikitech.wikimedia.org/wiki/Deploy1002
--
Daniel Zahn <dzahn(a)wikimedia.org>
Operations Engineer
Hello folks!
We are excited to announce the release of version 0.4 of VideoCutTool
<https://videocuttool.wmflabs.org/> [1].
VideoCutTool helps users edit videos on Commons; it also converts MP4
videos on the user's device to Wikimedia Commons-accepted formats (i.e.,
WebM/OGV) and uploads/re-uploads them to Commons on the fly.
In the last few years, we have been working tirelessly to improve our
tool, and we believe that VideoCutTool will help you enjoy your video
editing experience! Special thanks to our team (Pratik Shetty, Hassan
Amin, James Heilman, Jayprakash) and all the volunteers for their
contributions!
*About VideoCutTool*
VideoCutTool is a video editing tool that works both on videos already
in Wikimedia Commons and on videos present on the user's device. It is
deployed on Wikimedia VPS. Cropping, trimming, audio disabling, and
rotating are the current features of the tool. From the tool, the edited
videos can be either downloaded or re-uploaded to Wikimedia Commons.
VideoCutTool works similarly to CropTool
<https://croptool.toolforge.org/> [2]. More info about the tool is
available on Commons: VideoCutTool
<https://commons.wikimedia.org/wiki/Commons:VideoCutTool> [3].
VideoCutTool is also available as a gadget on Wikimedia Commons. You can
turn it on from Preferences -> Gadgets -> check VideoCutTool -> Save!
Try out VideoCutTool here: https://videocuttool.wmflabs.org/
*Changes in version 0.4*
- Support for i18n (localisation and internationalisation).
- Optional dark mode (handy to use!).
- Mobile responsiveness.
- Fixes for various minor bugs.
If you notice any bugs or want to request a feature, please feel free to
open a ticket in Phabricator and add the #videocuttool tag to it. Our
Phabricator workboard is here:
https://phabricator.wikimedia.org/tag/videocuttool/ [4].
[1] https://videocuttool.wmflabs.org/
[2] https://croptool.toolforge.org/
[3] https://commons.wikimedia.org/wiki/Commons:VideoCutTool
[4] https://phabricator.wikimedia.org/tag/videocuttool/
Regards
Gopa Vasanth <https://www.mediawiki.org/wiki/User:Gopavasanth>
Amrita Vishwa Vidyapeetham <http://www.amrita.edu/> | Blog
<https://gopavasanth.wordpress.com/>
amFOSS <https://amfoss.in/@gopavasanth> | GitHub
<https://github.com/gopavasanth> | Gerrit
<https://gerrit.wikimedia.org/r/#/q/gopavasanth>
“Yesterday is not ours to recover, but tomorrow is ours to win or lose.”
Hello,
I usually wouldn't bother people with my issues, but I'm somewhat
desperate here. This is about https://meet.wmcloud.org, the WM Cloud
Jitsi instance. It runs on a bigram VM using Docker
<https://jitsi.github.io/handbook/docs/devops-guide/devops-guide-docker>.
Users report problems like these: "a session with three people today and
it was rather poor. Bad grainy video from my side even though I have 100
Mbps both ways. After about 15 minutes the session froze and the other
two dropped out." Or: "we were 4 persons and it was very unstable (no
screen sharing). people's connection got lost, so they could often still
hear but could not participate in the call anymore." You can reproduce
the issue yourself if you stay long enough in a meeting with another
connection (your phone, for example).
I can't find any reason why this is happening. I wrote up some of my
investigation here <https://phabricator.wikimedia.org/T268393>. We
checked the cloud infrastructure: CPU, memory, etc. all look fine, as
does the network throughput. The problem doesn't happen with a new VM,
but it quickly (after one meeting) builds up to the same issues again. I
added a regular restart of the Docker containers (it even destroys and
recreates them, and the Docker service itself gets restarted too), but
nothing changed (maybe I should add a restart of the network manager as
well?). I assume iptables being busy because of Docker could contribute
to the issue, but not this much.
I'm running out of ideas. If anyone has worked with such a setup and
feels comfortable debugging this, let me know and I'll give you
permission to check the VM.
Thank you
--
Amir (he/him)
Hello all,
It's coming close to time for annual appointments of community members to
serve on the Code of Conduct (CoC) committee. The Code of Conduct Committee
is a team of five trusted individuals plus five auxiliary members with
diverse affiliations responsible for general enforcement of the Code of
conduct for Wikimedia technical spaces. Committee members are in charge of
processing complaints, discussing with the parties affected, agreeing on
resolutions, and following up on their enforcement. For more on their
duties and roles, see
https://www.mediawiki.org/wiki/Code_of_Conduct/Committee
This is a call for community members interested in volunteering for
appointment to this committee. Volunteers serving in this role should be
experienced Wikimedians or have had experience serving in a similar
position before.
The current committee is doing the selection and will research and discuss
candidates. Six weeks before the beginning of the next Committee term,
meaning 9 April 2021, they will publish their candidate slate (a list of
candidates) on-wiki. The community can provide feedback on these
candidates, via private email to the group choosing the next Committee. The
feedback period will be two weeks. The current Committee will then either
finalize the slate, or update the candidate slate in response to concerns
raised. If the candidate slate changes, there will be another two week
feedback period covering the newly proposed members. After the selections
are finalized, there will be a training period, after which the new
Committee is appointed. The current Committee continues to serve until the
feedback, selection, and training process is complete.
If you are interested in serving on this committee or would like to
nominate a candidate, please write an email to techconductcandidates AT
wikimedia.org with details of your experience on the projects, your
thoughts on the Code of Conduct and the committee, what you hope to
bring to the role, and whether you have a preference for being an
auxiliary or a full member of the committee. The committee consists of
five members plus five auxiliary members, and they serve for a year; all
applications are appreciated and will be carefully considered. The
deadline for applications is the end of the day on 26 March 2021.
Please feel free to pass this invitation along to any users who you think
may be qualified and interested.
Best,
Amir on behalf of the CoC committee