The Blueprint skin uses data['title'], which doesn't work well [1] if a
page uses
{{DISPLAYTITLE:<span style="blah blah">{{FULLPAGENAME}}</span>}}
to hide its title as suggested [2].
Manual:Skinning [3] doesn't explain where data['*somekey*'] comes from or
what the keys are. No problem: if you var_dump( $this->data ) in a skin,
there are a dozen other title-like keys to choose from. But how do skin
developers know which one to use? titletxt, titletext, titleprefixeddbkey,
thispage, ... Some are assigned with set(), others with setRef(); is that significant?
It seems the best you can do is read the source code of
includes/skins/SkinTemplate.php and work back through its getOutput(),
getTitle(), and getRelevantTitle() to OutputPage and the maze of title
class methods to figure out which one does what you want. Too hard for me.
Thanks for any suggestions, I'll try to improve the documentation.
[1] https://phabricator.wikimedia.org/T103454
[2]
https://www.mediawiki.org/wiki/Manual:FAQ#How_do_I_hide_the_main_page_title…
[3] https://www.mediawiki.org/wiki/Manual:Skinning
--
=S Page WMF Tech writer
Hey all,
This thread is much in line with the "wfRunHooks deprecation" one from
January. Rather than turning global functions into static functions, this
time it's about namespacing global functions.
All extensions calling wfSuppressWarnings are now supposed to change this
to MediaWiki\suppressWarnings, for no obvious gain. It is important to keep
in mind that this is not a simple search and replace, since that would
make extensions incompatible with anything before MediaWiki 1.26 alpha.
Either we need to ignore the deprecations (which is not something you want
people to learn as good practice), or we need to add some kind of wrapper
in each extension.
There is also the question of consistency. Nearly all global functions are
still namespaced using the wf prefix. Will they all be changed? Or will
just a few functions be migrated?
I'd really prefer that this kind of busywork for extension maintainers not
be created without very good reason. There are enough breaking changes to
keep up with as it is.
Cheers
--
Jeroen De Dauw - http://www.bn2vs.com
Software craftsmanship advocate
Developer at Wikimedia Germany
~=[,,_,,]:3
FYI
---------- Forwarded message ----------
From: Yuvi Panda <yuvipanda(a)gmail.com>
Date: Fri, Jun 19, 2015 at 11:33 PM
Subject: Labs (almost fully!) back up
To: labs-announce(a)lists.wikimedia.org
Hello everyone!
All projects (except maps and mwoffliner) are fully back up.
They should be up now (including tools) - restored from a backup taken
on June 9. Some have had NFS disabled - but those mostly have had no
significant NFS usage or have had members of the project confirm NFS
is unused. This increases their reliability significantly. If your
project has something missing, please file a bug or respond on list.
We have a fsck in progress on the old corrupted file system, and will
update if / when we can recover specific files.
https://wikitech.wikimedia.org/wiki/Incident_documentation/20150617-LabsNFS…
has updates coming as well.
Thank you.
--
Yuvi Panda T
http://yuvi.in/blog
This list seems more appropriate for this type of discussion.
--
Revi
https://www.revi.pe.kr
-- Sent from Android --
---------- Forwarded message ----------
From: "Yuri" <yuri(a)rawbw.com>
Date: Jun 20, 2015, 2:52 PM
Subject: [Wikimedia-l] What is the wikipedia http API address now?
To: <wikimedia-l(a)lists.wikimedia.org>
Cc:
All previously-http URLs now redirect to https.
https://www.mediawiki.org/wiki/API:Main_page also still mentions the old
http address that now redirects.
What is the new purely-http API address?
I need to measure the performance hit of https on various processes.
Yuri
_______________________________________________
Wikimedia-l mailing list, guidelines at:
https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l(a)lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l,
<mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe>
Hello!
MediaWiki-Codesniffer 0.3.0 is now available for use in your MediaWiki
extensions and other projects. Here are the notable changes since the
last release (0.2.0):
* Don't require "wf" prefix on functions that are namespaced (Kunal Mehta)
* Simplify PHPUnit bootstrap; require usage of Composer for running tests
(Kunal Mehta)
* SpaceyParenthesis: Check for space before opening parenthesis (Vivek
Ghaisas)
* SpaceyParenthesesSniff: Search for extra/unnecessary space (Vivek Ghaisas)
* CharacterBeforePHPOpeningTagSniff: Support T_HASHBANG for HHVM
>=3.5,<3.7 (Kunal Mehta)
I have submitted patches which bump the dependencies for extensions
<https://gerrit.wikimedia.org/r/#/q/status:open+topic:bump-dev-deps,n,z>;
however, some are failing due to the new sniffs. Please amend the patches
to make them pass and I can review them :)
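For projects picking the sniffer up fresh, wiring it into an extension's
composer.json looks roughly like this. The package name is the Packagist one;
the exact ruleset path under vendor/ may differ between releases, so treat
this as a sketch:

```json
{
    "require-dev": {
        "mediawiki/mediawiki-codesniffer": "0.3.0"
    },
    "scripts": {
        "test": "phpcs -p -s --standard=vendor/mediawiki/mediawiki-codesniffer/MediaWiki ."
    }
}
```

After a `composer update`, `composer test` then runs the checks locally.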
-- Legoktm
Thought some people would find this stream of tweets from Charlie Kindel [0]
<http://ceklog.kindel.com/2015/06/18/what-it-means-to-be-great-product-manag…>
interesting. I'd also recommend perusing his other posts about leadership
& engineering culture. Here's my take on a few snippets, curious to hear
your thoughts as well:
*"the only work that truly matters is that of the engineers"*
While engineers might be responsible for "actually building things,"
Charlie himself admits that the quality (and relevance) of our work is
highly dependent on multiple factors leading up to the first engineer's
keystroke.
*"left to their own devices, engineers will do two things: 1) the most
complicated thing, 2) the thing they think is fun"*
Guilty as charged, but I think engineers who are "sold" on the team's
mission are capable of making good decisions about what to work on. Our
current situation in the Readership vertical is a live experiment on this
subject.
Finally, I wholeheartedly agree that I do my best work when it's crystal
clear *"who the customer is, where the customer is, why the customer cares,
why it’s important for the business, and when it’s relevant."*
Happy reading!
Brian
0:
http://ceklog.kindel.com/2015/06/18/what-it-means-to-be-great-product-manag…
--
EN Wikipedia user page: https://en.wikipedia.org/wiki/User:Brian.gerstle
IRC: bgerstle
One of the reasons we've always worried about using the open Ogg and WebM
formats on iPhones and iPads is that we don't get to make use of the
hardware MP4/H.264 codec... using the main CPU cores is presumed to drain
the battery faster.
I've done a first-pass test measuring energy usage of native-code WebM/Ogg
playback using the new energy reporting in the Xcode 7 / iOS 9 beta:
https://brionv.com/log/2015/06/19/webm-and-ogg-energy-usage-on-ios-9-beta-w…
The good news is that playback of buffered data at my target resolutions
(360p for 32-bit, 720p for 64-bit) is barely under the "Low" mark on the
energy drain meter! :D
The bad news is that H.264 playback with the native widget reports
post-buffer-download energy drain even lower, at the "Zero" mark... if you
can believe that!
(In both cases, reported drain is significantly higher during network
download, at least on my fast wifi.)
But "Low" sounds pretty good... If folks would like to see more concrete
measures, I can rig up my test to run continuously until the battery runs
out and time it.
-- brion
Hello wikitech-l!
This year we will be holding the MediaWiki Developers Summit 2016 between
*Monday* *January 4th and Wednesday January 6th* in San Francisco.
Due to popular demand, the WMDS 2016 will be scheduled for three days
instead of two. The third day will be a more relaxed/informal hacking day
where people can work on projects, continue discussions from the previous
two days, and catch up with each other.
Registration is not even close to being open, so no action is needed besides
blocking off your calendars.
Looking forward to a great event - we are 7.5 months away!
https://www.mediawiki.org/wiki/MediaWiki_Developer_Summit_2016
I feel that bot operators should actively pay attention to the technical
aspects of the community and the mailing lists. So, the bot operator who
never updates their software, doesn't pay attention to the announcements,
and ignores API warnings should be blocked after the deadline. Bot
operators do not operate in a vacuum, and should never run bots just for
the sake of running them.
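Acting on those warnings is cheap: the API attaches a warnings object to its
JSON responses whenever a request uses deprecated behaviour. A minimal sketch
of surfacing them in a bot's request path (the helper name and sample warning
text are illustrative; in the classic JSON format the warning text sits under
the '*' key):

```python
import json

def log_api_warnings(response_text, logger=print):
    """Parse a MediaWiki API JSON response and surface any warnings
    instead of silently discarding them."""
    data = json.loads(response_text)
    for module, warning in data.get("warnings", {}).items():
        # In the classic JSON format, warning text sits under the '*' key.
        text = warning.get("*", warning) if isinstance(warning, dict) else warning
        logger("API warning [%s]: %s" % (module, text))
    return data

# Illustrative response carrying a continuation deprecation warning:
sample = json.dumps({
    "warnings": {"query": {"*": "Formatting of continuation data will be changing soon."}},
    "query": {"pages": {}},
})
log_api_warnings(sample)
```

A bot that routes every response through such a hook will notice deprecations
in its own logs instead of in a block notice.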
The community should always be able to find and communicate with the bot
operators.
Obviously we should not make sudden changes (except in
security/breaking matters), and we should try to make the process as easy as
possible. The rawcontinue param is exactly that: simply adding it will keep
the logic as before.
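For a bot that builds its query parameters in one place, that is a one-line
change; a sketch (the helper name is illustrative):

```python
def with_rawcontinue(params):
    """Return a copy of the query params opted in to the legacy
    continuation format via rawcontinue, leaving the original dict intact."""
    fixed = dict(params)
    # setdefault keeps any value the caller already chose explicitly.
    fixed.setdefault("rawcontinue", "1")
    return fixed

query = {"action": "query", "list": "allpages", "aplimit": "10", "format": "json"}
print(with_rawcontinue(query)["rawcontinue"])  # prints 1
```

Routing every request through such a helper preserves the old continuation
logic until the bot is properly migrated to the new continue= behaviour.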
Lastly, I again would like to promote the idea discussed at the hackathon
-- a client side minimalistic library that bigger frameworks like pywikibot
rely on, and that is designed in part by the core developers. See the
proposal at
https://www.mediawiki.org/wiki/Requests_for_comment/Minimalistic_MW_API_Cli…
On Jun 3, 2015 2:29 PM, "John Mark Vandenberg" <jayvdb(a)gmail.com> wrote:
> On Wed, Jun 3, 2015 at 3:42 AM, Brad Jorsch (Anomie)
> <bjorsch(a)wikimedia.org> wrote:
> > ...
> > I've compiled a list of bots that have hit the deprecation warning more
> > than 10000 times over the course of the week May 23–29. If you are
> > responsible for any of these bots, please fix them. If you know who is,
> > please make sure they've seen this notification. Thanks.
>
> Thank you Brad for doing impact analysis and providing a list of the
> 71 bots with more than 10,000 problems per week. We can try to solve
> those by working with the bot operators.
>
> If possible, could you compile a list of bots affected at a lower
> threshold - maybe 1,000. That will give us a better idea of the scale
> of bot operators that will be affected when this lands - currently in
> one month's time.
>
> Will the deploy date be moved back if the impact doesn't diminish by
> bots being fixed?
>
> --
> John Vandenberg
>
> _______________________________________________
> Wikitech-l mailing list
> Wikitech-l(a)lists.wikimedia.org
> https://lists.wikimedia.org/mailman/listinfo/wikitech-l
---------- Forwarded message ----------
From: Yuvi Panda <yuvipanda(a)gmail.com>
Date: Thu, Jun 18, 2015 at 4:56 PM
Subject: Ongoing Labs Outage Update
To: labs-announce(a)lists.wikimedia.org
Yesterday, the filesystem used by many Labs tools suffered a
catastrophic failure, causing most tools to break. This was noticed
quickly but recovery is taking a long time because of the size of the
filesystem.
There has been file system corruption on the filesystem backing the
NFS setup that all of Labs uses, causing a prolonged outage. The
Operations team is currently attempting to restore a backup made on
June 9 at 16:00 UTC. Recovery of modifications made after that date is
potentially possible, but our first priority is getting the backup
restored. We will update the incident report page
https://wikitech.wikimedia.org/wiki/Incident_documentation/20150617-LabsNFS…
with notes on our progress. E-mails will also be sent to the
labs-announce (https://lists.wikimedia.org/mailman/listinfo/labs-announce)
and labs-l (https://lists.wikimedia.org/mailman/listinfo/labs-l) on
significant changes. We are not yet able to estimate when things will
be back up fully.
This also means that tools hosted on tools.wmflabs.org will not be
accessible until this is finished, and even then they might need some
more fiddling to work properly. We will update
https://wikitech.wikimedia.org/wiki/Incident_documentation/20150617-LabsNFS…
as well, as soon as we have more information.
If you have a non-tools project on labs that does not depend on NFS
and is currently down, you can recover it by getting rid of NFS. (We
can help you with that.) For instructions, see
https://wikitech.wikimedia.org/wiki/Recover_instance_from_NFS . Join
us on #wikimedia-labs and we will assist you.
--
Yuvi Panda T
http://yuvi.in/blog