Hi, I have a few questions regarding mobile stats.
I need to determine the real percentage of WAP browsers. At first glance,
[1] looks interesting: the ratio of text/vnd.wap.wml to text/html
is 92M / 3987M = 2.3% on m.wikipedia.org. However, this contradicts
the stats at [2], which have different numbers and a different ratio.
I did my own research: because WAPness is detected in Varnish mostly by
looking at the Accept header during browser detection, and because our
current analytics infrastructure doesn't log that header, I quickly
whipped up some code that recorded the User-Agent and Accept headers of
every 10,000th mobile page view request hitting the Apaches.
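The sampling could look roughly like this (a minimal PHP sketch, not the
actual code used; the log path is made up):

    // Log the User-Agent and Accept headers of every 10,000th request.
    if ( mt_rand( 1, 10000 ) === 1 ) {
        $ua = isset( $_SERVER['HTTP_USER_AGENT'] ) ? $_SERVER['HTTP_USER_AGENT'] : '';
        $accept = isset( $_SERVER['HTTP_ACCEPT'] ) ? $_SERVER['HTTP_ACCEPT'] : '';
        file_put_contents(
            '/tmp/mobile-ua-accept.log',
            gmdate( 'c' ) . "\t$ua\t$accept\n",
            FILE_APPEND
        );
    }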
According to several days' worth of data, out of 14917 logged requests,
1445 (about 9.7%) contained vnd.wap.wml in their Accept: headers in some
form. That's more than what is logged for frontend responses; however,
this is expected, as WAP clients should have a worse cache hit rate and
thus hit the Apaches more often.
Next, our WAP detection code is very simple: the User-Agent is checked
against a few major browser IDs (all of them are HTML-capable, so this
check is not actually needed anymore and will go away soon), and if the
device is still not recognized, we consider every device whose Accept:
header contains "vnd.wap.wml" (but not "application/vnd.wap.xhtml+xml")
to be WAP-only. If we apply these rules, only 68 entries qualify as WAP,
which is 0.05% of all mobile requests.
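For reference, the detection rule boils down to something like this (a
hedged PHP sketch of the logic just described; the browser ID list is
illustrative, not our actual list):

    // Decide whether a client should be treated as WAP-only.
    function isWapOnly( $userAgent, $accept ) {
        // A few major browser IDs, all of which are HTML-capable.
        if ( preg_match( '/iPhone|Android|Opera Mini|BlackBerry/', $userAgent ) ) {
            return false;
        }
        // Devices advertising WML but not XHTML Mobile Profile are WAP-only.
        return strpos( $accept, 'vnd.wap.wml' ) !== false
            && strpos( $accept, 'application/vnd.wap.xhtml+xml' ) === false;
    }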
The question is: what's wrong, my research or stats.wikimedia.org?
And if it's indeed just 0.05%, we should probably^W definitely kill
WAP support on our mobile site, as it's virtually unmaintained.
-----
[1] http://stats.wikimedia.org/wikimedia/squids/SquidReportRequests.htm
[2] http://stats.wikimedia.org/wikimedia/squids/SquidReportClients.htm
--
Best regards,
Max Semenik ([[User:MaxSem]])
Hello,
On Sep 19th around 1am UTC, I updated the Jenkins jobs running the
unit tests for MediaWiki extensions.
I made a mistake that caused Jenkins to fetch each extension into a
directory named after its Gerrit project name (i.e. prefixed with
mediawiki/).
As a result, the jobs were no longer running the code submitted via
Gerrit, but an old copy left in extensions/Foobar.
The issue is now resolved. You might find that some patches now fail
when they used to pass; that is because they are actually being tested
now!
Thanks, Tobias and addshore, for finding this problem. =)
Change causing the issue:
https://gerrit.wikimedia.org/r/#/c/84918/
The fix:
https://gerrit.wikimedia.org/r/#/c/85202/
--
Antoine "hashar" Musso
Hey all,
I've been scheming for a while on how to reduce the number of calls to
the server for CentralNotice. At the same time, I want to greatly reduce
the number of objects I have in cache.
To do this, I propose changing the architecture to use an intermediate
proxy server, together with a static JS section in the MediaWiki page
head. The proxy would map all the variables down to only what is
required at the time.
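Conceptually, the mapping would be something like this (sketched in PHP
purely for illustration; the proxy itself would be node.js, and all the
variable names here are hypothetical):

    // Collapse the many client-side variables into the few that actually
    // select a banner, so the cacheable key space stays small.
    function mapDownVariables( array $clientVars ) {
        return array(
            'project'   => $clientVars['project'],
            'language'  => $clientVars['uselang'],
            'country'   => $clientVars['country'],
            'anonymous' => $clientVars['loggedIn'] ? 'false' : 'true',
        );
    }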
Right now I am imagining 4 servers, 2 in eqiad and 2 in ams, that would
host a node.js proxy and/or possibly a local varnish instance.
The more detailed architecture is sketched out here:
https://www.mediawiki.org/w/index.php?title=Extension:CentralNotice/Caching…
I would appreciate any comments.
~Matt Walker
Wikimedia Foundation
Fundraising Technology Team
I wrote a tool that imports bugs from Bugzilla into Mingle and/or Trello
(two project management tools used by some teams at the Wikimedia
Foundation). The mobile web team was finding it difficult to keep track
of two separate tools (one for new feature development, the other for
tracking bugs), so Bingle helps bridge the gap and allows us to focus on
one tool. This has had the side effect of keeping the visibility of
reported bugs high, and has made it easier for us to quickly prioritize
incoming bugs against existing work and respond to open issues.
You can find the code and some rudimentary usage instructions here:
https://github.com/awjrichards/bingle
I hacked this together rather quickly (expedience was my goal rather than
perfection), so it's not well documented, a little quirky, and there's a
lot of room for improvement. I've been sitting on it for a while, hoping
to make improvements before announcing it, but I have not found the time
to make the changes I would like (e.g. having it use the Bugzilla API
rather than Bugzilla's Atom feeds). So I invite anyone interested to
fork it, pitch in, and help make it awesome :)
--
Arthur Richards
Software Engineer, Mobile
[[User:Awjrichards]]
IRC: awjr
+1-415-839-6885 x6687
> From: Aaron Schulz <aschulz4587(a)gmail.com>
>
> Until what? A timestamp? That would be more complex and prone to over/under
> guessing the right delay (you don't know how long it will take to commit). I
> think deferred updates are much simpler as they will just happen when the
> request is nearly done, however long that takes.
The two systems could be merged by putting these updates in a local
queue, and adding them to the real jobqueue at the end of the request?
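Something like this, perhaps (a rough sketch; RequestLocalUpdateQueue is
hypothetical, while JobQueueGroup is the existing MediaWiki class):

    // Collect jobs during the request instead of running updates inline,
    // then hand them to the real job queue in one push at the end.
    class RequestLocalUpdateQueue {
        private $pending = array();

        public function add( Job $job ) {
            $this->pending[] = $job;
        }

        // Called once, when the request is nearly done.
        public function flushToJobQueue() {
            if ( $this->pending ) {
                JobQueueGroup::singleton()->push( $this->pending );
                $this->pending = array();
            }
        }
    }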
LW
I'd appreciate some feedback on this enhancement to the MediaWiki
autoloader:
https://bugzilla.wikimedia.org/show_bug.cgi?id=53835
https://gerrit.wikimedia.org/r/59804
The PSR-4 recommendation has not been ratified, but I think it's the most
convenient and rational namespace-based autoloading proposal, and I doubt
it will be modified before adoption.
Putting this logic into core gives extension authors a standardized way
of namespacing their classes, and also has the potential to deprecate,
or at least greatly reduce, the redundant wgAutoloadClasses lists.
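For anyone unfamiliar with PSR-4, its core is a fixed mapping from a
namespace prefix to a base directory (a minimal illustrative sketch, not
the code from the patch; the prefix and paths are made up):

    // PSR-4: prefix "MediaWiki\Foo\" maps to base directory "src/", so
    // class MediaWiki\Foo\Bar\Baz is loaded from src/Bar/Baz.php.
    spl_autoload_register( function ( $class ) {
        $prefix = 'MediaWiki\\Foo\\';
        $baseDir = __DIR__ . '/src/';
        $len = strlen( $prefix );
        if ( strncmp( $class, $prefix, $len ) === 0 ) {
            $file = $baseDir . str_replace( '\\', '/', substr( $class, $len ) ) . '.php';
            if ( is_file( $file ) ) {
                require $file;
            }
        }
    } );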
-Adam
Hello all,
I have a mea culpa: I haven't been doing the Roadmap update emails
lately. The short of it is: it is dang hard and not time-efficient for
me to try to parse the Google Doc's 'revision history' in a big
spreadsheet.
That's why we (Robla) created a script to download the Roadmap, convert
it to wikitext, and upload it to mediawiki.org. Then we get real diffs.
My laptop was stolen a while ago, and on it were my local modifications
to that script to make it work for me (committed to git, but not pushed
anywhere, because the project wasn't in Gerrit yet... laziness on my
part).
With that:
Please take a look at the latest version of the Roadmap at:
http://ur1.ca/felvl (Google Doc spreadsheet, my apologies)
Some things to look at (i.e. things that have been updated recently):
* Flow
* Language-team related items
* TechOps September column
* Platform/Site Architecture
* Wikidata
* QA Sept/Oct columns
* ECT's Sept/Oct/Nov columns
Sorry about the lack of specificity.
Any questions, please don't hesitate to ask,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
Hello,
As you may be aware, the git-review based developer experience on Windows
is less than perfect - especially compared to the old TortoiseSVN based
workflow. I have tried several options, which I will try to document in the
coming weeks.
One of the options is using the GitHub workflow. Yuvi Panda is working on
an automatic way to move pull requests to Gerrit [1], which would make
this workflow a possibility.
The basic idea is to have a 'triangular workflow': you pull from a
central repository, push to a public fork of that repository, and then
submit a pull request. I have documented this process, using GitHub for
Windows, at https://www.mediawiki.org/wiki/Gerrit/GitHub.
Any comments are, of course, welcome. Any ideas on how to improve the
contributing experience for Windows-based developers - or information on
how Windows-based developers currently contribute - would also be
appreciated.
Best,
Merlijn
How many printers would it take to keep up with updates to Wikipedia?
http://what-if.xkcd.com/59/
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra(a)baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
All the documentation I could find is in docs/deferred.txt. Let me
paste the paragraph:
"A few of the database updates required by various functions here can be
deferred until after the result page is displayed to the user. For example,
updating the view counts, updating the linked-to tables after a save, etc. PHP
does not yet have any way to tell the server to actually return and disconnect
while still running these updates (as a Java servlet could), but it might have
such a feature in the future."
That text has been there at least since 2005, and to my knowledge there
still is no such feature. I've spent hours investigating why
DeferrableUpdates delayed page delivery, since I incorrectly assumed
they would run after the page had been delivered, and trying to figure
out whether it is possible to make them work that way with PHP-FPM and
nginx.
Should we just get rid of them? That should be easy, by either moving
stuff to the jobqueue or just executing the code immediately.
Or if they are useful for something, can we at least document the
*class* to reflect how it actually works and what it is useful for?
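For concreteness, the jobqueue route might look roughly like this (a
hedged sketch; the job name and parameters are made up for illustration):

    // Encapsulate the work as a job that a job runner executes later,
    // fully outside the web request, instead of as a DeferrableUpdate.
    class ExampleLinksUpdateJob extends Job {
        public function __construct( Title $title, array $params ) {
            parent::__construct( 'exampleLinksUpdate', $title, $params );
        }

        public function run() {
            // The work that currently lives in doUpdate() would go here.
            return true;
        }
    }

    // At the point where the update is currently deferred:
    JobQueueGroup::singleton()->push( new ExampleLinksUpdateJob( $title, array() ) );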
-Niklas
--
Niklas Laxström