Hi everyone,
Unlike previous years, the big European Hackathon won't be in Berlin but
in Amsterdam. We're aiming to hold the hackathon in May 2013, with a
preference for the weekend of Saturday the 25th. To make sure this is a
good weekend I've set up a straw poll at
https://www.mediawiki.org/wiki/Amsterdam_Hackathon_2013#Straw_Poll .
Please fill it out so we can finalize the date!
Thank you,
Maarten
Wikimedia Nederland
PS: Please forward to any relevant lists I might have missed.
(Apologies for cross-posting)
Heya,
The mobile team needs accurate pageviews for the alpha and beta mobile
site. Currently, this information is only stored in a cookie, but we don't
want to go the route of starting to store this cookie because of cache
server performance, network performance and privacy policy issues. The
mobile team also needs to be able to differentiate between initial and
secondary API requests: pages in the beta version of MobileFrontend are
dynamically loaded via the API, meaning that MobileFrontend may make
multiple API requests to load sections of an article when they are toggled
open by the user. At the moment, we have no way of differentiating
between API requests to determine which one should count as a 'pageview'.
We propose that we set two additional custom HTTP headers - one to identify
alpha/beta/stable version of MobileFrontend, the other to be able to
differentiate between initial and secondary API requests. This would make
logging the necessary information trivial, and we believe it would be
fairly lightweight to implement.
We propose the following two headers with their possible values:
X-MF-Mode: a/b/s (alpha/beta/stable)
X-MF-Req: 1/2 (primary/secondary)
X-MF-Mode would be determined by Varnish based on the presence of the
alpha/beta identifying cookies, while X-MF-Req would be set by
MobileFrontend in the backend response.
These headers would only be set on the Varnish servers; on the Squids/Nginx
we will just log a dash ('-') in those fields.
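With those two values in the log stream, counting pageviews becomes a simple filter. Here is a minimal sketch in Python, assuming a hypothetical log layout in which the X-MF-Mode and X-MF-Req values are appended as the last two space-separated fields (the exact field layout is an assumption, not part of the proposal):

```python
from collections import Counter

def count_pageviews(log_lines):
    """Count one pageview per primary request, broken down by MobileFrontend mode."""
    views = Counter()
    for line in log_lines:
        fields = line.split()
        mode, req = fields[-2], fields[-1]  # X-MF-Mode, X-MF-Req values
        if req == "1":                      # only primary requests count
            views[mode] += 1                # 'a', 'b', 's', or '-' off Varnish
    return views

sample = [
    "10.0.0.1 GET /wiki/Foo b 1",    # beta, primary: counted
    "10.0.0.1 GET /w/api.php b 2",   # beta, secondary section load: skipped
    "10.0.0.2 GET /wiki/Bar s 1",    # stable, primary: counted
]
print(count_pageviews(sample))
```

The secondary section-load request is excluded, which is exactly the distinction X-MF-Req is meant to make possible.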
Questions:
1) Are there objections to the introduction of these two http headers?
2) We would like to aim for a late February deployment; is that an okay
period? (We will announce the exact deployment date as well.)
3) Are we missing anything important?
Thanks for your feedback!
Best
Arthur & Diederik
Sorry, I've replied to Sumana directly instead of the mailing list. So
now duplicating into the mailing list.
Sumana Harihareswara wrote on 2012-12-19 22:30:
> Try these tips:
> https://www.mediawiki.org/wiki/Git/Code_review/Getting_reviews
Sumana, it's all very good but:
1) I think it's not so comfortable to push other developers personally
by adding them as reviewers... And I don't know whom to add as a
reviewer, so I just choose randomly. But what if that person doesn't want
to review that extension? For example, what if they are already very
busy working on MediaWiki _core_, and I ask them to review a trivial
extension?
2) Who can verify changes in extensions? There is no CI. So, are the
people who can verify changes and the people who can put +2 the same
people? That again short-circuits all the work to the "core"
people, and aren't they already busy? (I assume they are, as they don't
review all the changes.)
3) As a solution, I think it would be good if, at least in
not-so-important-as-the-core extensions, changes were merged
automatically after getting, for example, two "+1"s... Or will you end up
with changes reviewed but not merged by anyone? And maybe it would
also be good if the system automatically added some reviewers, randomly
or based on some "ownership" rules...
Hi all!
Since https://gerrit.wikimedia.org/r/#/c/21584/ got merged, people have been
complaining that they get tons of warnings. A great number of them seem to be
caused by the fact that MediaWiki will, if the DBO_TRX flag is set,
automatically start a transaction on the first call to Database::query().
See e.g. https://bugzilla.wikimedia.org/show_bug.cgi?id=40378
The DBO_TRX flag appears to be set by default in SAPI (mod_php) mode. According
to the (very limited) documentation, it's intended to wrap the entire web
request in a single database transaction.
However, since we do not have support for nested transactions, this doesn't
work: the "wrapping" transaction gets implicitly committed when begin() is
called to start a "proper" transaction, which is often the case when saving new
revisions, etc.
So, DBO_TRX seems to be misguided, or at least broken, to me. Can someone please
explain why it was introduced? It seems the current situation is this:
* every view-only request is wrapped in a transaction, for no good reason I can
see.
* any write operation that uses an explicit transaction, like page editing,
watching pages, etc, will break the wrapping transaction (and cause a warning in
the process). As far as I understand, this really defies the purpose of the
automatic wrapping transaction.
So, how do we solve this? We could:
* suppress warnings if the DBO_TRX flag is set. That would prevent the logs from
being swamped by transaction warnings, but it would not fix the current broken
(?!) behavior.
* get rid of DBO_TRX (or at least not use it per default). This seems to be the
Right Thing to me, but I suppose there is some point to the automatic
transactions that I am missing.
* Implement support for nested transactions, either using a counter (this would
at least make DBO_TRX work as I guess it was intended) or using savepoints (that
would give us support for actual nested transactions). That would be the Real
Solution, IMHO.
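To make the counter/savepoint option concrete, here is a minimal sketch, in Python against SQLite purely for illustration (the class and method names are hypothetical, not MediaWiki's actual Database API). The outermost begin() starts a real transaction; nested calls create savepoints, so an inner rollback no longer destroys the wrapping transaction:

```python
import sqlite3

class NestingConnection:
    """Nested-transaction emulation: a depth counter plus SQL savepoints."""

    def __init__(self):
        self.conn = sqlite3.connect(":memory:")
        self.conn.isolation_level = None  # manage transactions manually
        self.depth = 0

    def begin(self):
        if self.depth == 0:
            self.conn.execute("BEGIN")                        # real transaction
        else:
            self.conn.execute("SAVEPOINT sp%d" % self.depth)  # nested level
        self.depth += 1

    def commit(self):
        self.depth -= 1
        if self.depth == 0:
            self.conn.execute("COMMIT")
        else:
            self.conn.execute("RELEASE SAVEPOINT sp%d" % self.depth)

    def rollback(self):
        self.depth -= 1
        if self.depth == 0:
            self.conn.execute("ROLLBACK")
        else:
            # undo this level only; the outer transaction stays open
            self.conn.execute("ROLLBACK TO SAVEPOINT sp%d" % self.depth)
            self.conn.execute("RELEASE SAVEPOINT sp%d" % self.depth)
```

With this in place, a "proper" inner transaction that rolls back discards only its own writes, while the outer DBO_TRX-style wrapper can still commit the rest.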
So, can someone shed light on what DBO_TRX is intended to do, and how it is
supposed to work?
-- daniel
It's the new year, and in light of the recent poll about which devs are
working on what, let me make another, albeit vaguely macabre, suggestion:
If you're a developer, or other staffer, can the people around you pick
up the pieces if you get hit by a bus? How badly will it impact delivery
and delivery scheduling of what you're working on?
Is the institutional knowledge about our architecture and plans sufficiently
well documented and spread out that we don't have anyone with an unreasonably
high bus factor?
Cheers,
-- jra
--
Jay R. Ashworth Baylink jra(a)baylink.com
Designer The Things I Think RFC 2100
Ashworth & Associates http://baylink.pitas.com 2000 Land Rover DII
St Petersburg FL USA #natog +1 727 647 1274
Are you going to FOSDEM? If so (or if you are considering going) please
add yourself to
http://www.mediawiki.org/wiki/Events/FOSDEM
I still don't know. Depends on whether we have a MediaWiki EU critical mass.
--
Quim Gil
Technical Contributor Coordinator
Wikimedia Foundation
Back in December, there was discussion about needing a better method of
identifying disambiguation pages programmatically (bug 6754). I wrote
some core code to accomplish this, but was informed that disambiguation
functions should reside in extensions rather than in core, per bug
35981. I abandoned the core code and wrote an extension instead
(https://gerrit.wikimedia.org/r/#/c/41043/). Now, however, it has been
suggested that this code needs to reside in core after all
(https://www.mediawiki.org/wiki/Suggestions_for_extensions_to_be_integrated#…).
Personally, I don't mind implementing it either way, but would like to
have consensus on where this code should reside. The code is pretty
clean and lightweight, so it wouldn't increase the footprint of core
MediaWiki (it would actually decrease the existing footprint slightly
since it replaces more hacky existing core code). So core bloat isn't
really an issue. The issue is: Where does it most make sense for
disambiguation features to reside? Should disambiguation pages be
supported out of the box, or should full support require an extension?
The specific disambiguation features I'm talking about are:
1. Make it easy to identify disambiguation pages via a page property in
the database (set by a templated magic word)
2. Provide a special page (and corresponding API) for seeing what pages
are linking to disambiguation pages
3. Assign a unique class to disambiguation links so that gadgets can
allow them to be uniquely colored or have special UI (not yet implemented)
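To illustrate feature 1: once the magic word has set the page property, finding disambiguation pages is a single join against page_props. A rough sketch in Python, using an in-memory SQLite stand-in for the relevant MediaWiki tables (the simplified schema and the property name 'disambiguation' are my assumptions, not something the patch guarantees):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simplified stand-ins for MediaWiki's page and page_props tables
conn.execute("CREATE TABLE page (page_id INTEGER, page_title TEXT)")
conn.execute("CREATE TABLE page_props (pp_page INTEGER, pp_propname TEXT)")
conn.executemany("INSERT INTO page VALUES (?, ?)",
                 [(1, "Mercury"), (2, "Mercury_(planet)")])
# The templated magic word would set this property on the disambiguation page
conn.execute("INSERT INTO page_props VALUES (1, 'disambiguation')")

dab_pages = conn.execute(
    "SELECT page_title FROM page"
    " JOIN page_props ON pp_page = page_id"
    " WHERE pp_propname = 'disambiguation'"
).fetchall()
print(dab_pages)
```

The special page and API from feature 2 would essentially be different consumers of this same lookup.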
Ryan Kaldari
Heya :)
On the 4th of February Quim will be at the Wikimedia Germany office to
introduce MediaWiki groups. See https://www.mediawiki.org/wiki/Groups
for more info about the groups.
We'll meet at 18:30 in the office in Obentrautstr. 72, Berlin. Quim
will talk and answer questions for about 1 hour and then we'll move on
to Brauhaus Lemke for some food and drinks.
If you're going to attend please let me know soon so I can plan
better. I'd also be delighted if you could forward it to other people
who might be interested. I hope to see many of you there.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Community Communications for Wikidata
Wikimedia Deutschland e.V.
Obentrautstr. 72
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Gesellschaft zur Förderung Freien Wissens e. V.
Eingetragen im Vereinsregister des Amtsgerichts Berlin-Charlottenburg
unter der Nummer 23855 Nz. Als gemeinnützig anerkannt durch das
Finanzamt für Körperschaften I Berlin, Steuernummer 27/681/51985.
Let me first say that the ResourceLoader [1] is a wonderful part of the software. Thanks go out to everyone who contributed to this project - it's made my life much better. That being said, I don't think that I and my team have figured out how to properly take advantage of its benefits.
At Vistaprint, we are currently using the ResourceLoader to load modules, some of which contain JavaScript. The dependencies are made explicit when registering modules with the ResourceLoader, and they execute in the proper order on the client side. In many of these JavaScript files we wrap our code in a jQuery .ready() callback [2]. Since these JavaScript files have dependencies on one another (as laid out in the RL), they need to be executed in the correct order to work properly. We're finding that when using jQuery's .ready() (or similar) function, the callbacks seem to execute in a different (unexpected, browser-dependent) order. This causes errors.
Using the WikiEditor extension as a specific example:
Customizing the WikiEditor toolbar is one of the specific cases where we've encountered problems. First, the WikiEditor provides no good events to bind to once the toolbar is loaded. This is not a problem in itself, because there is a documented work-around [3]. However, our JavaScript code needs to execute in the proper order, and it is not doing so. We have about four JavaScript files that add custom toolbars, sections, and groups.
My questions:
It recently dawned on me that executing our code within a $(document).ready(); callback might not be necessary, as the JavaScript for each ResourceLoader module is executed in its own callback on the client side. This should provide the necessary scope to avoid clobbering global variables, along with getting executed at the proper time. Is this a correct assumption to make? Is it a good idea to avoid binding our code to jQuery's ready event?
--Daniel (User:The Scientist)
[1] http://www.mediawiki.org/wiki/ResourceLoader
[2] http://docs.jquery.com/Events/ready
[3] http://www.mediawiki.org/wiki/Extension:WikiEditor/Toolbar_customization#Mo…