What would be the best way to do A/B testing on a MediaWiki site? I guess a few people here might have tried this before me. (Mostly interested in the main page, but possibly a few content pages as well.)
Best regards,
Leo
_______________
Leonard Wallentin
leo_wallentin@hotmail.com
@leo_wallentin
+46 (0) 735 - 933 543
Hi,
We are using $wgVersion = '1.20.2';
But we find that the objectcache table continues to grow even though we
have used the settings suggested in the FAQ to completely disable
caching, and have also restarted the server.
grep -i cache LocalSettings.php
$wgEnableParserCache = false;
$wgMainCacheType = CACHE_NONE;
$wgMemCachedServers = array();
$wgMessageCacheType = CACHE_NONE;
$wgParserCacheType = CACHE_NONE;
$wgCachePages = false;
# sure that cached pages are cleared.
$wgCacheEpoch = max( $wgCacheEpoch, $configdate );
Comparing the tables between 06-23 and 06-24:
mysql> select count(*) from wiki24.objectcache;
+----------+
| count(*) |
+----------+
|    91184 |
+----------+
1 row in set (3.55 sec)
mysql> select count(*) from wiki23.objectcache;
+----------+
| count(*) |
+----------+
|    90866 |
+----------+
1 row in set (1.46 sec)
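As a stopgap I assume we could purge expired rows by hand with
something like the following (untested, assuming the default schema
where exptime is a datetime column), but I'd rather understand why the
table keeps growing:
mysql> DELETE FROM wiki24.objectcache WHERE exptime < NOW();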
What do you suggest?
-john
I'm getting ready to upgrade our (hosted) corporate wiki from 1.16.1 to
1.21.1.
I have done an upgrade using the web interface on a local copy of our
wiki - this worked fine. The docs suggest that with a "large" wiki this
could fail due to timeouts, that the command line upgrade script is
preferred, and that if you don't have command line access you should
change hosting providers to one that allows it.
Changing hosts is not always an option, but it seems to me that you
ought to be able to upgrade the DB on a local machine, using the command
line script if necessary, then import that new DB structure (and data)
on your host. Has anyone tried this, and is there some reason it does
not/would not work?
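Concretely, I'm imagining dumping the upgraded local DB and loading it
on the host (names and paths hypothetical):
mysqldump -u wikiuser -p wiki_local > wiki-upgraded.sql
and then importing wiki-upgraded.sql on the host through phpMyAdmin or
whatever import tool they provide.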
Also, I have found that although I don't have command line access, I can
run php scripts as cron jobs. Is there any reason the upgrade script
could not be run this way? For example, is there any point at which the
upgrade script requires/requests user input?
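For instance, I have in mind a one-off cron entry along these lines
(paths hypothetical); as far as I can tell the script's only prompt is
a five-second countdown, which --quick skips, and the database
credentials come from LocalSettings.php, so no interactive input should
be needed:
0 3 * * * /usr/bin/php /path/to/wiki/maintenance/update.php --quick > /tmp/mw-update.log 2>&1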
I have not yet tried to upgrade our hosted wiki, and I think it's
likely that the web interface will work fine; I just want to be aware
of all possible safety nets in case problems do arise.
Thanks
--
Mickey Feldman
Vigil Health Solutions Inc.
2102- 4464 Markham Street
Victoria, BC Canada
V8Z 7X8
250.383.6900
Hello all,
Due to the US holiday next Thursday (Independence Day, July 4th), we
have decided to delay the RFP selection announcement by one week. You
can now expect to hear something the week of July 8th.
The end of the community feedback period is still the same (this
Wednesday, June 26th) to give us time to review and discuss.
Thank you for your understanding,
Greg
--
| Greg Grossmeier GPG: B2FA 27B1 F7EB D327 6B8E |
| identi.ca: @greg A18D 1138 8E47 FAC8 1C7D |
Thanks! I did have that set in LocalSettings.php and stared right past
it. But before I read this response I did change automaticREMOTE_USER
to eliminate the error.
On 6/21/2013 5:00 AM, mediawiki-l-request@lists.wikimedia.org wrote:
> You might have error reporting enabled in LocalSettings.php (it's a common
> recommendation for debugging). Something like
> error_reporting( -1 );
> ini_set( 'display_errors', 1 );
>
> Anyway, it's better to fix the underlying issue when possible. What
> version of automaticREMOTE_USER are you using? It looks like there was a
> fix for some strict standards errors about a year ago:
> https://bugzilla.wikimedia.org/show_bug.cgi?id=38369
>
--
Mickey Feldman
Vigil Health Solutions Inc.
2102- 4464 Markham Street
Victoria, BC Canada
V8Z 7X8
250.383.6900
Recently I encountered what I thought was a bug in my code, but it turned out to be something much more interesting.
First, some background:
Here at Vistaprint we have a number of custom extensions written for our internal-documentation wiki. One of those categorizes articles programmatically - you call it from the code and it will auto-categorize the page. Something along the lines of:
$c = new Categorizer;
$c->addToCategory( 'my-message-key' );
Whenever this code was called, it broke the Cite extension. Specifically, anything defined with <ref></ref> would not show up in the <references /> section below. Doing some digging, I found the following chain of events:
1. Our custom extension parsed a message in the middle of a page (while the page was being parsed in the code)
2. Parsing a message called MessageCache::parse, by way of wfMessage() and the Message object
3. The MessageCache object clones the global parser and parses the message
4. Parsing the message means that the (cloned) parser calls Parser::clearState
The problem is that calling Parser::clearState() on the cloned parser object (a property of MessageCache) also clears the state of the global parser object. This comes down to two things:
1. PHP makes shallow copies of objects when cloning, not deep copies [1].
2. When the Cite extension inserts itself into a parser, it sets up a callback on the 'ParserClearState' hook. Because of #1 (no deep copying), when this callback runs it effectively runs for all clones of the parser.
This is arguably a bug in MediaWiki: cloning a parser does not create a totally separate object with its own state. I have found a number of bug reports that seem to be related to this issue [2][3][4]. Before I go off and submit a bunch of code, I'd like input from the community.
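To illustrate outside of MediaWiki, here is a minimal, self-contained
sketch (hypothetical classes, not the real Parser/Cite API) of how a
shallow clone ends up sharing hook state with the original:

<?php
// Stand-ins for the Cite extension and the Parser; PHP's clone copies
// the hook array, but both arrays still point at the same extension
// object, which holds the refs collected for the original page.
class CiteLikeExtension {
    public $refs = array();
    public function onParserClearState() {
        $this->refs = array(); // wipes the refs gathered for the page
    }
}
class MiniParser {
    public $clearStateHooks = array();
    public function clearState() {
        foreach ( $this->clearStateHooks as $callback ) {
            call_user_func( $callback );
        }
    }
}
$ext = new CiteLikeExtension();
$pageParser = new MiniParser();
$pageParser->clearStateHooks[] = array( $ext, 'onParserClearState' );
$ext->refs[] = 'a <ref> collected while parsing the page';
$msgParser = clone $pageParser; // shallow copy: same $ext instance
$msgParser->clearState();       // meant to reset only the clone...
var_dump( $ext->refs );         // ...but the page's refs are gone: array(0) {}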
Thoughts?
-Daniel ( User:DanielRenfro )
References:
[1] http://php.net/manual/en/language.oop5.cloning.php
[2] https://bugzilla.wikimedia.org/show_bug.cgi?id=34589
[3] https://bugzilla.wikimedia.org/show_bug.cgi?id=32368
[4] https://bugzilla.wikimedia.org/show_bug.cgi?id=47291
I'm an administrator of a wiki on Wikia, and I've been trying to
convert a gadget script we use to see which alternate redirects can be
used for a page (they display in small text on the first heading) so it
will work on my private test wiki.
My problem is that I'm not sure what I'm doing, to be honest, as I'm an
utter novice at JSON queries and how to parse them.
Basically, I was wondering if someone could take a look at the link
below to the original gadget script and tell me how I can convert it
for my test wiki, so it will have the same behavior it does on Wikia
and work on all basic MediaWiki skins like Monobook and Vector.
Here's the link to the script:
http://tropes.wikia.com/wiki/MediaWiki:Gadget-altredirects.js
My test wiki is running MediaWiki 1.21, and the Wikia wiki is running a
modified fork of 1.19.6.
I don't know if the version information is helpful, but there may be
some API call or parser differences, or at least that's my guess as to
why I can't get this JSON query to parse correctly.
If anyone could assist me, I would be very grateful.
I'm just back from the LODLAM summit in Montreal, Canada, and here is a
short report.
==About LODLAM and why I was there==
LODLAM (http://lodlam.net) is a gathering of people interested in LOD
(linked open data) and LAM (Libraries, Archives, and Museums), so I thought
it would be interesting to find partners and raise awareness about the
Wikisource revitalization effort, all this thanks to the Grants:IEG
support. The audience was very diverse, not only from cultural
institutions, but also from some research centers and private companies.
OKFN, Europeana, DPLA, and other big players had representatives there.
AFAIK, I was the only person from the Wikimedia movement, so I ended up
representing "all things wiki", especially Wikidata. These spontaneous
activities are briefly described here [1].
The format of the event was that of an [[open-space technology]] gathering,
similar to unconferences.
Some information and reflections to share:
== Rewards & contributor retention ==
During a talk about licenses (which dealt with the difficulties of
having content under different licenses), there was some mention of
Datahub [2], a recently launched project for sharing datasets, formerly
known as CKAN. The discussion revolved around the reward that
contributors get for releasing their datasets. There was some consensus
that "the use of the released data is the reward", which led to another
debate about how to convey data use back to contributors. That can be
elaborate, or as simple as the person using the dataset leaving a
thank-you comment.
All this led me to think about the emotional vs. rational rewards that
users (or institutions) obtain from contributing content to Wikipedia,
Commons, Wikisource, etc. Are "active thanks", as currently
implemented, really sustainable and scalable? Will all the contributors
who deserve it get a thanks some day? Could personalized view-count or
rating reports about uploaded pictures, major contributions to WP
articles, etc. have some impact on contributor satisfaction/retention?
Would "automated personal impact reports" free collaborators from the
duty of thanking one another, or would that mean fewer personal
interactions?
These are some questions that I leave open here.
==Semantic annotations ==
As you might know, there is a GSoC project [3] which aims to convert
the OKFN Annotator [4] into a MediaWiki extension. That is a great
project that will enable inline comments in MediaWiki projects, but it
shouldn't be seen as the end, only a step in the direction of semantic
annotations.
What could semantic annotations mean for Wikipedia? More precise answers to
questions. Instead of just having "millions of articles" there would be the
possibility of answering "trillions of questions" (or at least pointing to
the text fragment(s) that has/have the answer). This kind of paradigm shift
might need some pondering and broad community discussion.
What could semantic annotations mean for Wikisource? Text
interconnectedness: being able to relate concepts, authors,
fragments... and then being able to query those relationships.
==Input interfaces for linked data==
The best linked data is the kind that is invisible to the user, but
then, how do we enable end users to "write" linked data? Of the several
approaches, the most convincing seemed to be using a text symbol (#, +,
!, or others) to indicate that the text following it represents a
linked entity.
In the case of the VisualEditor in Wikipedia, one could write
"#article_name", and right after entering the "#" and the first
letters, a list of options (from Wikidata) would show up to
autocomplete/disambiguate. After selecting the right item, one could
continue writing, or type a dot to select a property (as in some
object-oriented programming languages). This approach simplifies both
the interlinking and the data inclusion.
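As a rough sketch, the lookup behind such an autocomplete could be a
call to the existing Wikidata API module wbsearchentities (the
surrounding editor UI is of course hypothetical):

<?php
// What the user has typed so far after "#"
$prefix = 'Montr';
$url = 'https://www.wikidata.org/w/api.php?action=wbsearchentities'
     . '&format=json&language=en&search=' . urlencode( $prefix );
$data = json_decode( file_get_contents( $url ), true );
// Each match carries an item id and a label to offer in the dropdown
foreach ( $data['search'] as $match ) {
    echo $match['id'] . ' - ' . $match['label'] . "\n";
}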
==Other news==
- The Getty vocabularies will be published as linked open data (late
2013, ODC-BY 1.0 license) [6]
- Pund.it [5] - an open-source semantic annotation project that won the
LODLAM Challenge award
- Karma, tools for mapping data to ontologies [7]
Cheers,
Micru
[1] http://lists.wikimedia.org/pipermail/wikidata-l/2013-June/002388.html
[2] http://datahub.io/
[3]
https://www.mediawiki.org/wiki/User:Rjain/Proposal-Prototyping-inline-comme…
[4] http://okfnlabs.org/annotator/
[5] http://www.thepund.it/
[6] http://www.getty.edu/research/tools/vocabularies/index.html
[7]
http://summit2013.lodlam.net/2013/06/20/karma-tools-for-mapping-data-to-ont…
Pre-upgrade testing of MW 1.21.1 with PHP 5.3.10 and Apache 2.4.4:
I set display_errors = off in a php.ini in the doc root, and ran
phpinfo() to confirm that it actually was set to "off", but the
automaticREMOTE_USER extension is still giving me a Strict Standards
error for getCanonicalName().
I've used this extension for years, and can live with the fact that it
has not been updated recently, but I don't want error messages
cluttering up the page.
What have I missed?
Thanks
--
Mickey Feldman
Vigil Health Solutions Inc.
2102- 4464 Markham Street
Victoria, BC Canada
V8Z 7X8
250.383.6900