Hi,
On Tue, Mar 1, 2016 at 3:36 PM, David Strine <dstrine(a)wikimedia.org> wrote:
> We will be holding this brownbag in 25 minutes. The Bluejeans link has
> changed:
>
> https://bluejeans.com/396234560
I'm not familiar with Bluejeans and may have missed a transition
because I wasn't paying enough attention. Is this some kind of
experiment? Have all meetings transitioned to this service?
Anyway, my immediate question is: how do you join without
sharing your microphone and camera?
Am I correct in thinking that this is an entirely proprietary stack
that's neither gratis nor libre and has no on-premise (non-cloud)
hosting option? Are we paying for this?
-Jeremy
Hello,
could someone please update the list at https://phabricator.wikimedia.org/P10500,
which contains repositories that don't use mediawiki/mediawiki-codesniffer?
I found that many repositories in the list are empty, and some aren't
available on Gerrit at all.
So, could someone please update this list to cover repositories (in
mediawiki/extensions) that don't use mediawiki/mediawiki-codesniffer but
contain at least one PHP file? Or provide me with a command I can run to
regenerate the list whenever I want, so I don't need to request it every time.
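(To illustrate what I mean, here is a rough, hypothetical sketch of such a
command as a PHP CLI script, assuming all extension repositories are checked
out under one local directory; the path and the composer.json heuristic are
assumptions on my part, not an official tool:

<?php
// Hypothetical sketch: list checked-out extension repositories that
// contain at least one PHP file but do not require
// mediawiki/mediawiki-codesniffer in their composer.json.
$extensionsDir = '/path/to/mediawiki/extensions'; // assumption: local checkouts

foreach ( new DirectoryIterator( $extensionsDir ) as $dir ) {
	if ( $dir->isDot() || !$dir->isDir() ) {
		continue;
	}
	$path = $dir->getPathname();

	// Does the repository contain any PHP file at all?
	$hasPhp = false;
	$files = new RecursiveIteratorIterator(
		new RecursiveDirectoryIterator( $path, FilesystemIterator::SKIP_DOTS )
	);
	foreach ( $files as $file ) {
		if ( $file->getExtension() === 'php' ) {
			$hasPhp = true;
			break;
		}
	}

	// Does composer.json require the codesniffer package?
	$hasSniffer = false;
	$composer = $path . '/composer.json';
	if ( is_readable( $composer ) ) {
		$json = json_decode( file_get_contents( $composer ), true ) ?? [];
		$deps = array_merge( $json['require'] ?? [], $json['require-dev'] ?? [] );
		$hasSniffer = isset( $deps['mediawiki/mediawiki-codesniffer'] );
	}

	if ( $hasPhp && !$hasSniffer ) {
		echo $dir->getFilename(), "\n";
	}
}

A canonical command from whoever generates P10500 would of course be better.)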
Best regards,
Zoran.
P. S.: Happy weekend! :)
Hi there,
I am investigating a breakage in my extension that has occurred in MW 1.34
but which didn't seem to be a problem on MW 1.29. (I have not tested
interim versions to see where the issue first arises.)
One of the parser hooks in the extension needs to perform variable
expansion. What is happening is a lot more complicated than this example,
but effectively
<my_hook Foo="What the foo!">{{{Foo}}}</my_hook>
should end up generating the following output, using variable expansion:
What the foo!
The variable handling needs to follow the MW semantics, including default
values (possibly nested), parser functions, etc.; it therefore needs to use
the MW parser to perform the expansion.
Assuming the arguments that MW passes into the parser hook are named $Text,
$Vars, $Parser and $Frame, the relevant code looks something like this
(again, a bit more complicated in practice):
$NewFrame = new PPTemplateFrame_DOM($Frame->preprocessor, $Frame,
	array(), $Vars, $Frame->title);
return $Parser->replaceVariables($Text, $NewFrame);
(I have included a more detailed listing of the code that I am using for
doing the parse at the end of this message.)
My code was working fine on MW 1.29 and earlier, but after upgrading to 1.34
I find that a fatal exception is thrown when my tag is encountered:
/index.php?title=Main_Page MWException
from line 373 of ~\includes\parser\PPFrame_DOM.php:
PPFrame_DOM::expand: Invalid parameter type
I have generated a backtrace and the top of the stack is as follows:
#0 ~\includes\parser\Parser.php(3330): PPFrame_DOM->expand(PPNode_Hash_Tree,
integer)
#1 MyExtension.php (434): Parser->replaceVariables(string,
PPTemplateFrame_DOM)
#2 ~\includes\parser\Parser.php(4293): MyExtensionParserHook(string, array,
Parser, PPTemplateFrame_Hash)
(The subsequent call stack entries are the parent functions you would expect
to see in that situation.)
Can anyone see why the above code would no longer work as it did on previous
versions? What is the current recommended method for manually expanding
template variables from within a parser hook?
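(For what it's worth, the backtrace makes me suspect that 1.34 now uses the
hash-based preprocessor by default, so the PPTemplateFrame_DOM I construct is
being asked to expand PPNode_Hash_Tree nodes, hence the "Invalid parameter
type" error. One approach I'm considering, entirely untested, is to avoid
hard-coding the DOM classes altogether and use a custom frame instead:

function MyExtensionParserHook( $Text, $Vars, $Parser, $Frame ) {
	// getArguments() returns the enclosing frame's arguments as expanded
	// strings (or an empty array for a non-template frame), so there is
	// no need to special-case template frames.
	$Args = array_merge( $Frame->getArguments(), $Vars );

	// newCustomFrame() returns a frame from whichever preprocessor
	// implementation (Hash or DOM) the parser is actually using.
	$NewFrame = $Parser->getPreprocessor()->newCustomFrame( $Args );

	return $Parser->replaceVariables( $Text, $NewFrame );
}

Would that be a sensible direction, or is there a better-supported way?)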
Kind regards,
- Mark Clements (HappyDog)
----------------------------------
Full example (with extension-specific code omitted):
----------------------------------
function MyExtensionParserHook($Text, $Vars, $Parser, $Frame) {
	// 1) Manipulate $Text and $Vars
	//    (omitted)

	// 2) Expand variables in the resulting text.
	// Set up a new frame which mirrors the existing one but which also has the
	// field values as arguments.
	// If we are already in a template frame, merge the field arguments with the
	// existing template arguments first.
	if ($Frame instanceof PPTemplateFrame_DOM) {
		$NumberedArgs = $Frame->numberedArgs;
		$NamedArgs = array_merge($Frame->namedArgs, $Vars);
	}
	else {
		$NumberedArgs = array();
		$NamedArgs = $Vars;
	}
	$NewFrame = new PPTemplateFrame_DOM($Frame->preprocessor, $Frame,
		$NumberedArgs, $NamedArgs, $Frame->title);

	// Perform a recursive parse on the input, using our newly created frame.
	return $Parser->replaceVariables($Text, $NewFrame);
}
/ sorry for cross-posting
Hi,
On a first few wikis[1], you can now highlight pairs of brackets in
wikitext. For this to work, you need to turn on the syntax highlighting
feature, which is part of the 2010 and 2017 wikitext editors. By placing
your cursor next to or within a set of brackets, you can then match round,
square and curly brackets. For more information about this feature please
visit its project page.[2]
Deployment to other wikis is planned for later this year. If your wiki
community wants to get bracket matching now, please contact me.
This change has been implemented by the Technical Wishes team, which is
currently working on several projects within the focus area "Make working
with templates easier"[3]. Other projects in this focus area, including
those for the Visual Editor, are in the works.
Many thanks to all who have contributed to the realization of this project
through comments, interviews and more. Feedback is, as always, welcome on
the project's talk page.[4]
Thanks,
Johanna for the Technical Wishes team
[1] dewiki, cawiki and trwiki
[2] https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Bracket_Matching
[3] https://meta.wikimedia.org/wiki/WMDE_Technical_Wishes/Templates
[4]
https://meta.wikimedia.org/wiki/Talk:WMDE_Technical_Wishes/Bracket_Matching
Hi,
I would like to uhhh... start the discussion? ask for opinions? about the
future of UploadWizard.
It is a rather special extension that was, from the start, made mostly for
Commons' very specific needs, and getting it to work anywhere else presents
some challenges (some of which I attempt to tackle here
<https://phabricator.wikimedia.org/T256616>). Interestingly, it is still
used by many third-party wikis
<https://wikiapiary.com/wiki/Extension:Upload_Wizard>, and although some of
them don't need its full set of capabilities for describing
licenses, authors and sources, there are wikis that do need that. The wiki
I maintain, Nonsensopedia, has a Commons-like file description system based
on Semantic MediaWiki (see example here
<https://nonsa.pl/wiki/Plik:Creative_Commons_evolution.jpg>) and
UploadWizard has been a *blessing* for us, greatly simplifying the task of
file moderation.
Opinion time: Wikis should be *encouraged* to properly describe the
authorship of files that they use, to meet licensing requirements. IMO
the Wikimedia Foundation, as the maintainer of MediaWiki and a foundation
dedicated to the dissemination of free culture, should provide a usable tool
for properly describing free multimedia. UploadWizard could be just that.
At the same time, the extension has been basically unmaintained
<https://phabricator.wikimedia.org/T261589#6674315> since the Multimedia
team was dissolved and I've been rather surprised to discover that patches
improving third-party support were met with uhm... very limited enthusiasm?
<https://phabricator.wikimedia.org/T256616#6264584> There are a few obvious
features missing, like mobile support (seriously, try opening
https://commons.wikimedia.org/wiki/Special:UploadWizard on a narrow-screen
device; it's been like this since... always) and configurability (you have
to jump through some serious hoops
<https://www.mediawiki.org/wiki/Topic:V2at02b7oxy5pkwl> just to add a
license; customizing the tutorial is similarly hard).
I've been thinking of what to do with the above and I really wouldn't want
to embark on something that will be rendered redundant or obsolete in a
year, so my question is: are there any plans for UploadWizard? What makes
me suspect that things may change is primarily Structured Data on Wikimedia
Commons, which in the future will maybe (?) supersede the description
system around the {{Information}} template. Are there any rough roadmaps or
outlines of anything resembling a plan for that? If Commons were to
implement full, structured file descriptions in the upload tool, that code
would probably be barely usable outside Commons, given that Wikibase is not
easy to install or maintain; it is also awfully overkill for the vast
majority of applications. In such a situation, would it make sense to
consider completely separating the "Wikimedia Commons Shiny Upload Tool"
from a more general extension that would be usable for third
parties, stripped of any Commons-specific code? A lot of things could be
simplified considerably if the extension were to target just the needs of
third parties and not Commons.
I ask about this because I really don't see any sort of interest from the
extension's *de facto* owner (that is, the WMF) in developing it, and there
are no public plans for it, as far as I know. Yes, I can make a fork
anytime, but first I'd prefer to know whether I'm missing something. Well,
actually, I already did make a fork of UW
<https://gitlab.com/nonsensopedia/extension-forks/uploadwizard-nonsa> over
a year ago, but this particular version of it is tailored for a wiki I
manage, making it useless for others. At the time that was the only
reasonable way we could get a good upload tool that was capable of properly
describing licensing information. I probably don't have to tell seasoned
open-source developers why this type of approach is not optimal for the
future of the project. :)
Any opinions on the topic are very welcome.
--
Ostrzyciel (he/him)
This script was created in 2011 and takes an offline XML dump file,
containing page content wikitext, and feeds its entries through the
Preprocessor without actually importing any content into the wiki.
The documented purpose of the script is to "get statistics" or "fill the
cache". I was unable to find any stats being emitted. I did find that the
method it calls indeed fills "preprocess-hash" cache keys, which have a TTL
of 24 hours (e.g. in Memcached).
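Conceptually (a simplified sketch from memory, not the script's actual
code), it boils down to something like:

// Rough sketch of the effect: iterate over revisions in the dump and run
// each text through the preprocessor. Parser::preprocess() fills the
// "preprocess-hash" cache keys as a side effect; nothing is written to
// the wiki. $dump and $parser here are stand-ins, not the script's names.
foreach ( $dump as $revision ) {
	$parser->preprocess(
		$revision->getText(),
		$revision->getTitle(),
		ParserOptions::newFromAnon()
	);
}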
I could not think of a use case for this and am wondering if anyone
remembers its original purpose and/or knows of a current need for it.
-- Timo
[1] First commit: http://mediawiki.org/wiki/Special:Code/MediaWiki/80466
[2] Commit history:
https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/core/+log/1.35.0/m…
PS: This came up because there's a minor refactor proposed to the script,
and I was wondering how to test it and whether it makes sense to continue
maintaining and supporting it.
https://gerrit.wikimedia.org/r/641323
Hello all,
I'm pleased to announce that the Technical Decision Making Process[0],
proposed as an evolution of the Wikimedia Technical Committee[1]
(TechCom) process, has been approved by the Wikimedia Foundation and
will begin operation on the 22nd of January. Back in October[2] and
November[3] I sought input on the proposed process, and in December I
incorporated that feedback into the approved process.
This process is designed to be more inclusive by shifting the
representation. It sets clear timelines for when decisions will be
made and defines a clear lifecycle for each decision. The process is
designed to be clear about which stakeholders will be engaged and how.
It also introduces a Technical Decision Forum and templates for the
process.
Currently a group from the Wikimedia Technology and Product
Departments is in the process of forming the initial Decision Forum.
The initial forum will include representatives from Wikimedia
Foundation teams, Wikimedia Deutschland, and independent +2
contributors. Please see the proposal for community representation on
the Decision Forum[4] and provide input by 2021-02-15. We know we will
need to adjust the representation in the forum over time. If you
believe you are from a group that is not represented and should be,
please contact us (tech-decision-forum-support(a)wikimedia.org).
If you currently have an RFC in process with TechCom that is not on
Last Call, it may need to be moved into this process. If you have
filed an RFC that is no longer relevant, please close it. The group
setting up the process will be inquiring about the RFC status on the
individual Phabricator tickets.
To get started with this new process you just need to open a
Phabricator Ticket on the Technical Decision Making Process board[5].
If you need help getting started or have further questions please
reach out at tech-decision-forum-support(a)wikimedia.org or reply on
this thread.
[0] https://www.mediawiki.org/wiki/Technical_Decision_Making_Process
[1] https://www.mediawiki.org/wiki/Wikimedia_Technical_Committee
[2] https://lists.wikimedia.org/pipermail/wikitech-l/2020-October/093968.html
[3] https://lists.wikimedia.org/pipermail/wikitech-l/2020-November/094037.html
[4] https://www.mediawiki.org/wiki/Technical_Decision_Making_Process/Draft_Prop…
[5] https://phabricator.wikimedia.org/project/board/5179/
Thanks,
Kate
--
Kate Chapman (she/her/hers)
Director of Architecture, Architecture Team
Wikimedia Foundation
kchapman(a)wikimedia.org
Hello,
we are planning to change how Cloud VPS instances and Toolforge tools contact
WMF-hosted wikis, in particular the source IP address for the network connection.
The new IP address that wikis will see is 185.15.56.1.
The change is scheduled to go live on 2021-02-08.
More detailed information is available on Wikitech:
https://wikitech.wikimedia.org/wiki/News/CloudVPS_NAT_wikis
If you are a Cloud VPS user or Toolforge developer, check your tools after that
date to make sure they are running properly. If you detect a block, a rate limit
or similar, please let us know.
If you are a WMF SRE or an engineer involved with the wikis, be aware that this
address could generate a significant traffic volume, perhaps about 30%-40% of
total wiki edits. We are trying to make the change as smooth as possible, so
please send your feedback if you think there is something we haven't accounted
for yet.
Thanks, best regards.
--
Arturo Borrero Gonzalez
SRE / Wikimedia Cloud Services
Wikimedia Foundation
Hello,
There has been a lot of progress in the abstract schema and abstract schema
changes initiative since the last time
<https://lists.wikimedia.org/pipermail/wikitech-l/2020-October/093954.html>
I gave an update on it. So here's another one.
*Abstract Schema*
So far, more than 90% (51 out of 56) of MediaWiki core's tables have been
migrated to the abstract schema.
This means much smaller schema drifts between MySQL and Postgres. We have
done more than 250 schema changes in Postgres to fix these drifts,
including 56 index renames, 66 data type changes, setting defaults on 43
fields and changing the nullability of 29 fields. For comparison, that's more
schema changes than were done on Postgres from 2014 to 2020 combined. Once we
have migrated all tables, we can close this four-year-old ticket
<https://phabricator.wikimedia.org/T164898>.
A similar improvement has happened in standardizing timestamp fields in MySQL
<https://phabricator.wikimedia.org/T42626>; once all tables are migrated,
we can call this eight-year-old ticket done too.
One nice thing about having an abstract schema is that you can generate
documentation automatically. This page was made entirely
<https://www.mediawiki.org/w/index.php?title=User:Ladsgroup/Test&oldid=43795…>
automatically from tables.json. We could have it generated on
doc.wikimedia.org on every merge, and we could also have the database layout
diagram
<https://www.mediawiki.org/w/index.php?title=Manual:Database_layout/diagram&…>
created automatically.
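For a flavour of the format, here is a simplified, hypothetical table
definition (the table and columns are made up; the structure follows the
tables.json convention as I understand it):

{
	"name": "examplewidget",
	"comment": "Hypothetical table, for illustration only",
	"columns": [
		{ "name": "ew_id", "type": "integer", "options": { "notnull": true, "unsigned": true, "autoincrement": true } },
		{ "name": "ew_name", "type": "binary", "options": { "notnull": true, "length": 255 } },
		{ "name": "ew_timestamp", "type": "mwtimestamp", "options": { "notnull": true } }
	],
	"indexes": [
		{ "name": "ew_name", "columns": [ "ew_name" ], "unique": true }
	],
	"pk": [ "ew_id" ]
}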
Another nice thing: when you have an abstract schema, you can easily write
tests to enforce database conventions. For example, you can write a test
to make sure all tables have exactly five columns (because five is your
lucky number). We haven't written such a test but now there's a test that
enforces a uniform prefix for columns and indexes of tables in core
<https://phabricator.wikimedia.org/T270033>. We are currently fixing its
violations to standardize our schema even more.
I'm planning to make reporting on drifts between the abstract schema and
our production databases completely automated, and to make it accessible to
DBAs for further investigation, which is now much easier thanks to the
abstract schema. You can follow the progress of that work in this ticket:
<https://phabricator.wikimedia.org/T104459>
*Abstract Schema Changes*
Now we have a new maintenance script, it produces schema change sql files
(aka ALTER TABLE files) based on snapshot of abstract schema of before and
after of a table. Here's an example of an index rename.
<https://gerrit.wikimedia.org/r/c/mediawiki/core/+/651176> It would make
creating schema change patches much easier (a little bit of work but you
don't need to know internals of Postgres anymore, it's also less prone to
mistakes)
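To illustrate, a schema change file is essentially a before/after pair of
table snapshots; a hypothetical index rename might look roughly like this
(simplified, per my understanding of the format):

{
	"comment": "Hypothetical example: rename index ew_name to ew_name_idx",
	"before": {
		"name": "examplewidget",
		"columns": [
			{ "name": "ew_name", "type": "binary", "options": { "notnull": true, "length": 255 } }
		],
		"indexes": [
			{ "name": "ew_name", "columns": [ "ew_name" ], "unique": true }
		],
		"pk": []
	},
	"after": {
		"name": "examplewidget",
		"columns": [
			{ "name": "ew_name", "type": "binary", "options": { "notnull": true, "length": 255 } }
		],
		"indexes": [
			{ "name": "ew_name_idx", "columns": [ "ew_name" ], "unique": true }
		],
		"pk": []
	}
}

The maintenance script then turns such a file into the matching ALTER TABLE
statements for each supported DBMS.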
With the approval of the RFC to drop support for upgrading from versions
older than two LTS releases, we can now drop hundreds and hundreds of SQL
files. That will give us room to breathe, letting us audit our SQL files to
find orphaned ones and improve the abstract schema change work. This is
currently blocked on this patch landing:
<https://gerrit.wikimedia.org/r/c/mediawiki/core/+/648576>
We will also work on reshaping schema changes in general, since the current
checks system is less than optimal, the tests are not very up to date, and
there is much more to do.
*What can we do?*
Glad you asked :D The biggest request I have for people is to migrate
their extensions to the abstract schema. There's a list of WMF-deployed
extensions whose schema has not been migrated yet
<https://phabricator.wikimedia.org/T261912>. This is doubly important because
we want to build a reporting system for drifts in production, and it's not
possible to report drifts for extensions whose schema hasn't been migrated.
So if you or your team maintain an extension from that list, please
prioritize migrating it. Reedy wrote a great script
<https://github.com/Ladsgroup/db-analyzor-tools/blob/master/db_abstractor.py>
that takes an SQL file and produces its equivalent abstract schema, which
gives you a good starting point (PRs are welcome!). Feel free to add me as a
reviewer on patches migrating extensions to the abstract schema.
Another thing: if you use Postgres for MediaWiki, you can help test our
Postgres schema by trying master (make sure to take a backup first) and
seeing if everything is alright.
*Thank you!*
I would really like to thank Ammarpad for migrating lots of core tables to
the abstract schema, handling all sorts of edge cases, and doing most of
the work on the uniform prefix tests and fixes. Thanks to James Forrester
for reviewing lots of patches, to Reedy for the script and for abstracting
lots of tables in extensions, and to Tgr for helping with reviews and
getting the project going. Also a big thank you to the DBAs for doing
a lot more schema changes in production
<https://phabricator.wikimedia.org/tag/blocked-on-schema-change/>. You rock!
An apology is also warranted for breaking update.php on master twice
(caused by yours truly).
Until next update!
--
Amir (he/him)
(now sharing beyond the WMF tech department)
Dear colleagues and MediaWiki contributors,
On behalf of the Platform Engineering Team, I am delighted to invite you to the
MediaWiki Authority interface[1][2] evaluation during the Platform Engineering
Office Hours[3] on Feb 04 at 17:00 UTC. Dress formal. Or not ;)
Before we commit to using Authority for permission checking throughout the
codebase, we want to make sure that we didn't miss anything. So if you are
working on code that needs to check user permissions, please join the PET office
hour and give us your feedback!
During the meeting, we will present the new Authority interface, which defines a
standard for checking user permissions, blocks, throttles, etc., and allows us to
easily make the relevant context for permission checks available where it is
needed.
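To give a concrete flavour, here is a hypothetical usage sketch based on the
patches under review (the method names may still change before the interface
is finalized):

use MediaWiki\Page\PageIdentity;
use MediaWiki\Permissions\Authority;
use MediaWiki\Permissions\PermissionStatus;

// Cheap, non-definitive check, suitable for deciding whether to render
// a UI element such as a "delete" link.
function shouldShowDeleteLink( Authority $performer, PageIdentity $page ): bool {
	return $performer->probablyCan( 'delete', $page );
}

// Definitive check, meant to run right before performing the action;
// intended to account for blocks, throttles, etc.
function deletePage( Authority $performer, PageIdentity $page ) {
	$status = PermissionStatus::newEmpty();
	if ( !$performer->authorizeWrite( 'delete', $page, $status ) ) {
		throw new PermissionsError( 'delete' );
	}
	// ... perform the deletion ...
}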
We are going to explain the design and demonstrate its application in various
areas using exploratory patches. We would love to hear your opinion on the
approach we're taking. If you contribute to the MediaWiki or extension codebase,
you will likely have to use the new interface if it's accepted, and this is your
opportunity to raise concerns and objections before the interface is finalized.
The meeting during Platform Engineering Office Hours will loosely correspond to
the second step of the brand new Technical Decision Process[4], for which we made
a "Decision Statement Overview"[5]. After the initial meeting, you will have two
weeks for feedback on the ticket[2]. At the end of the two-week period we may
schedule a follow-up meeting, depending on the feedback we receive.
Cheers. PET.
1.
https://gerrit.wikimedia.org/r/plugins/gitiles/mediawiki/core/+/refs/heads/…
2.
https://phabricator.wikimedia.org/T231930
3.
https://meet.google.com/pjo-xtxv-oea
4.
https://www.mediawiki.org/wiki/Technical_Decision_Making_Process#2_Technica…
5. https://docs.google.com/document/d/1RT3mWt57RkGJdeV5kVH_eoVOBu-97w7sYheepKt…
--
Daniel Kinzler
Principal Software Engineer, Core Platform
Wikimedia Foundation