Hi all,
I'm aware that Special:CommunityHiring was built directly by WMF staff, but
I was wondering whether anyone else involved in, or aware of, how it was built
could give me a rough spec on it? I'm considering building an equivalent but
much simpler/shorter version for my own purposes, and I understand there's no
documentation available.
Thanks,
Steven Walling
We noticed a kernel panic message and stack trace in the logs on the
server that serves the XML dumps. The web server that provides access to
these files is temporarily out of commission; we hope to have it back
online in 12 hours or less. Dumps themselves have been suspended while we
investigate. I hope to have an update on this tomorrow as well.
Ariel
Here, just for fun, is Jimbo coincidentally juxtaposed with some dirty words:
http://bug-attachment.wikimedia.org/attachment.cgi?id=7842
By the way, with only the above URL it seems extra hard for the common
man to trace back to just what bug it was attached to! Tell me your
method so I can include it in a bug report to the Bugzilla people.
Hey,
What is the recommended approach to loading external libraries (such as the
Google Maps one) using the resource loader?
I poked Roan about this a few weeks back, and was told that at that point
there was no way of doing this yet. Is there one now, and if not, how best to
add one?
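For context, the obvious workaround seems to be bypassing the resource loader
entirely and emitting the script tag yourself, roughly like this (just a
sketch; the hook function name and the Google Maps URL are only illustrative,
and I'm not sure this is the intended approach):

$wgHooks['BeforePageDisplay'][] = 'efAddGoogleMapsScript';

function efAddGoogleMapsScript( &$out, &$skin ) {
	// Emit a plain <script src="..."> tag for the external library,
	// outside the resource loader.
	$out->addScript( Html::linkedScript( 'http://maps.google.com/maps/api/js?sensor=false' ) );
	return true;
}

If there's a proper resource loader way to do this now, I'd much rather use that.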
Cheers
--
Jeroen De Dauw
http://blog.bn2vs.com
Don't panic. Don't be evil.
--
I wish to do some MediaWiki hacking which uses the codebase,
specifically the parser, but not the database or web server.
I'm running Windows XP on an offline machine with PHP installed but
no MySQL or web server.
I've extracted the source and grabbed a copy of somebody's
LocalSettings.php, but have not attempted to install MediaWiki beyond
this.
Obviously I don't expect to be able to do much, but when I try to run
any of the maintenance scripts I get no output whatsoever, not even
errors.
I was hoping to let the error messages guide me as to what is
essential, what needs to be stubbed, wrapped, etc.
Am I missing something obvious or do these scripts return no errors by design?
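In case it matters, this is roughly what I've got in mind for surfacing
errors, in LocalSettings.php or at the top of a maintenance script (just a
sketch; the first two lines are standard PHP settings, the last is
MediaWiki's own switch for exception details):

// Force PHP to report and display all errors while debugging.
error_reporting( E_ALL );
ini_set( 'display_errors', 1 );

// Show exception details in the output rather than a generic message.
$wgShowExceptionDetails = true;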
Andrew Dunbar (hippietrail)
--
http://wiktionarydev.leuksman.com
http://linguaphile.sf.net
Hello. I'm developing a Semantic MediaWiki extension to partition an #ask query
based upon the first character of one of the queried properties, so that the user
can page through partitions of what would otherwise be a very large query
result. For example, I'd like to query all pages in Category:Author in my wiki,
but there are almost 4,000 of those, so I want to partition the results
dynamically by the first character of the author's last name.

I have a working version, but I'm in the process of enhancing it to use Ajax so
that it only needs to refresh the <div> with the new query results rather than
the entire page, as well as retaining previous partitions in hidden <div>s so
previous queries don't need to be repeated.

I'm having two parsing issues in the Ajax callback. I do not believe that they
are related to Ajax per se. The second issue may be caused by my inexpert fix to
the first issue, although I ran across the same second issue previously in a
different context. I have reproduced both issues in a vastly simplified, minimal
test program that I will discuss below. I'm using MediaWiki 1.16.0 and
Semantic MediaWiki 1.5.2.
The first issue is that I need access to the parser in the Ajax callback so I
can call recursiveTagParse on the results for the query. I tried using
$wgParser, but it does not appear to be in a good state in the callback. I had a
series of errors where the parser was calling functions on non-objects. I played
around with it a bit and was able to get past all of these errors with this
rather ugly code:
global $wgParser;
$wgParser->mOptions = new ParserOptions;
$wgParser->initialiseVariables();
$wgParser->clearState();
$wgParser->setTitle(new Title($title));
where $title is the title retrieved using getTitle on the original (pre-Ajax)
working parser and sent as a parameter to JavaScript and back. These calls allow
me to get a parser that does not give me run-time errors, but I'm not at all
confident that this is the correct approach.
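For comparison, this is roughly what I would have guessed a cleaner setup to
look like, building a fresh parser instead of patching $wgParser, though I
haven't confirmed that it behaves any differently (just a sketch; $title is the
title string passed back from JavaScript and $result stands for the wikitext
returned by the query, as in the test case below):

global $wgUser;

// Build a fresh parser with its own options instead of reusing $wgParser.
$parser = new Parser();
$parserOptions = ParserOptions::newFromUser( $wgUser );
$titleObject = Title::newFromText( $title );

// startExternalParse sets the title, options and output type and clears state.
$parser->startExternalParse( $titleObject, $parserOptions, Parser::OT_HTML );
$html = $parser->recursiveTagParse( $result );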
The second issue I'm having is that when I call recursiveTagParse on the data
returned from the query in the Ajax callback, some of the data "disappears".
I should note that I'm invoking the same query and parsing code in both the
initial display of the page and in the Ajax callback. It works initially but
does not work in the callback using the questionable parser instance described
above. However, I had also seen the same blanking behavior in a previous task
where I did not have the first contributing issue. In that case, I changed my
approach completely to avoid the recursiveTagParse, but I don't believe I have
the option of doing that here. I found a bug report that seems similar, but I'm
not sure if it is the same problem, and there doesn't seem to be any movement on
that bug: https://bugzilla.wikimedia.org/show_bug.cgi?id=24556.
I'm reproducing my simplified test case below. You can run it by placing the
following wikitext on a page:
{{#testQuery:[[Category:Author]]
|limit=5
|searchlabel=
|format=table
}}
replacing [[Category:Author]] with the query term of your choice. When you
install the PHP extension code below, you'll see a page with a button and a
table containing up to 5 query results. When you press the button, the same
query is invoked, but the table is empty. If you comment out the line that
contains the call to recursiveTagParse in the PHP code below, you'll see that
the query results are indeed being returned in both cases, but they are getting
obliterated in the Ajax callback by recursiveTagParse.
It may very well be that solving the first issue will solve the second one, but
since I saw the second issue in another context previously, I wonder if they
really are two separate issues.
Thank you very much for any assistance in working through these issues!
Cindy
PHP extension code:
<?php
/**
 * To activate the functionality of this extension include the following
 * in your LocalSettings.php file:
 * include_once( "$IP/extensions/TestQuery/TestQuery.php" );
 */

if ( !defined( 'MEDIAWIKI' ) ) {
	die( "This is an extension to the MediaWiki package and cannot be run standalone." );
}

# credits
$wgExtensionCredits['parserhook'][] = array(
	'name' => 'TestQuery',
	'version' => '1.0',
	'author' => "Cindy Cicalese",
	'description' => "Bug test"
);

$wgUseAjax = true;
$wgAjaxExportList[] = 'testQueryPopulateDiv';

$wgHooks['LanguageGetMagic'][] = 'wfExtensionTestQuery_Magic';
$wgHooks['ParserFirstCallInit'][] = 'efTestQueryParserFunction_Setup';

function efTestQueryParserFunction_Setup( &$parser ) {
	$parser->setFunctionHook( 'testQuery', 'testQuery' );
	return true;
}

function wfExtensionTestQuery_Magic( &$magicWords, $langCode ) {
	$magicWords['testQuery'] = array( 0, 'testQuery' );
	return true;
}

function testQuery( $parser, $query ) {
	$params = func_get_args();
	array_shift( $params ); // first is $parser; strip it
	array_shift( $params ); // second is query string; strip it
	$testQuery = new TestQuery();
	$output = $testQuery->firstVisit( $parser, $query, $params );
	$parser->disableCache();
	return array( $parser->insertStripItem( $output, $parser->mStripState ),
		'noparse' => false );
}

function testQueryPopulateDiv( $query, $paramString, $title ) {
	$params = explode( "|", $paramString );
	$testQuery = new TestQuery();
	$output = $testQuery->populateDiv( $query, $params, $title );
	return $output;
}

class TestQuery {

	private $template = false;

	function firstVisit( $parser, $query, $params ) {
		$js = <<<EOT
<script type="text/javascript">
function buttonClicked() {
	var query = document.forms['TestQuery'].Query.value;
	var params = document.forms['TestQuery'].Params.value;
	var title = document.forms['TestQuery'].Title.value;
	var div = document.getElementById('TestDiv');
	sajax_do_call('testQueryPopulateDiv', [query, params, title], div);
}
</script>
EOT;
		$parser->mOutput->addHeadItem( $js );
		$this->parseParameters( $params );
		$output = $this->buildForm( $query, $params, $parser->getTitle() );
		$result = $this->getData( $parser, $query, $params );
		$output .= $this->buildDiv( $result );
		return $output;
	}

	function populateDiv( $query, $params, $title ) {
		global $wgParser;
		$wgParser->mOptions = new ParserOptions;
		$wgParser->initialiseVariables();
		$wgParser->clearState();
		$wgParser->setTitle( new Title( $title ) );
		$this->parseParameters( $params );
		$currentPartitionData = $this->getData( $wgParser, $query, $params );
		return $currentPartitionData;
	}

	private function parseParameters( $params ) {
		foreach ( $params as $param ) {
			if ( preg_match( "/^ *format *= *template *$/", $param ) === 1 ) {
				$this->template = true;
			}
		}
	}

	private function buildForm( $query, $params, $title ) {
		$paramString = implode( "|", $params );
		$out = <<<EOT
<center>
<button type='button' id='TestButton' onClick="buttonClicked()">Test Button</button>
<form id='TestQuery' method='post' action=''>
<input type='hidden' name='Query' value='$query'>
<input type='hidden' name='Params' value='$paramString'>
<input type='hidden' name='Title' value='$title'>
</form>
</center><br>
EOT;
		return $out;
	}

	private function getData( $parser, $query, $params ) {
		$result = $this->doSMWAsk( $parser, $query, $params );
		$result = $parser->recursiveTagParse( $result );
		return $result;
	}

	private function doSMWAsk( $parser, $query, $rawParams ) {
		SMWQueryProcessor::processFunctionParams( $rawParams, $qs, $params, $printouts );
		$output = SMWQueryProcessor::getResultFromQueryString( $query, $params, $printouts, SMW_OUTPUT_WIKI );
		return $output;
	}

	private function buildDiv( $result ) {
		$output = "<div id='TestDiv' display='block'>";
		$output .= $result;
		$output .= "</div>";
		return $output;
	}
}
--
Dr. Cynthia Cicalese
Lead Software Systems Engineer
The MITRE Corporation
There was a thought about the job queue that popped into my mind today.
From what I understand, for a wiki farm to use runJobs.php instead of the
in-request queue (which is less desirable on high-traffic sites), the farm
has to run runJobs.php periodically for each and every wiki it hosts.
So, for example, if a wiki farm hosts 10,000 wikis and the host really wants
to ensure that the queue is run at least hourly to keep the data on each wiki
reasonably up to date, the farm essentially needs to call runJobs.php 10,000
times an hour (i.e. once for each individual wiki), regardless of whether a
wiki has any jobs or not. Either that or poll each database beforehand, which
in itself is 10,000 database calls an hour on top of the runJobs executions,
which still isn't that desirable.
What do people think of having another source class for the job queue,
like we have for file storage, text storage, etc.?
The idea being that wiki farms would have the ability to implement a new
job queue source which instead derives jobs from a single shared
database with the same structure as the normal job queue, but with a
farm-specific wiki id inside the table as well.
Using this method a wiki farm would be able to set up a cron job (or
perhaps a daemon, to be even more effective at dispatching the job queue
runs) which, instead of making 10,000 calls to runJobs outright, would
fetch a random job row from the shared job queue table, look at the wiki
id inside the row and execute runJobs.php (perhaps with a limit of 1000)
for that wiki to dispatch the queue and run some jobs for that wiki. It
would of course continue looking at random jobs from the shared table and
dispatching more runJobs executions, serving the role of trying to keep
the job queues running for all wikis on the farm, but without making
wasteful runJobs calls for a pile of wikis which have no jobs to run.
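Roughly, the dispatcher loop I have in mind would look something like this
(just a sketch; the shared farm_job table and its fj_wiki column are
hypothetical, and it assumes the farm's runJobs.php already understands the
--wiki option):

// Connect to the hypothetical shared farm database holding the combined job table.
$db = new PDO( 'mysql:host=localhost;dbname=farm_shared', 'farmuser', 'secret' );

while ( true ) {
	// Pick one pending job at random and see which wiki it belongs to.
	$row = $db->query( "SELECT fj_wiki FROM farm_job ORDER BY RAND() LIMIT 1" )
		->fetch( PDO::FETCH_ASSOC );

	if ( $row === false ) {
		// Nothing queued anywhere on the farm; back off for a bit.
		sleep( 30 );
		continue;
	}

	// Dispatch a bounded runJobs.php run for just that wiki.
	$wiki = escapeshellarg( $row['fj_wiki'] );
	passthru( "php maintenance/runJobs.php --wiki=$wiki --maxjobs 1000" );
}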
Any comments?
--
~Daniel Friesen (Dantman, Nadir-Seen-Fire) [http://daniel.friesen.name]
Hi all,
as you probably know, we mark commits related to extensions (and other
things) not used by Wikimedia as "deferred" in CodeReview. I myself work
mostly on extensions not used on Wikimedia (for example, SocialProfile) and
thus most of my commits are marked as deferred by fellow developers.
However, I think that this is evil and we shouldn't do this.
I want to have my SocialProfile-related commits (as well as other extension
commits) reviewed, but the deferred status doesn't mean "review me later",
it means "nobody cares about this revision", or at least currently it means
that. Nowadays when a revision is marked as deferred, it's highly unlikely
that anyone ever bothers reviewing it. Ideally there should be a way to tag
commits, such as commits to extensions not used by WMF, as non-mission-critical
but still in need of review.
The deployment queue is already long enough and people who are reviewing
code for Wikimedia deployment are having a hard time catching up; I don't
want to make their work any more difficult than it already is, I just want
to have my extensions reviewed to make sure that no glaring errors or
security vulnerabilities slip in.
Thanks and regards,
--
Jack Phoenix
MediaWiki developer