What is the simplest, correct way to create a new Parser object with the same initialization as the current Parser object (e.g., $wgParser)? An actual code fragment would be great.
Application: I have written a parser function extension that, internally, needs to parse several other wiki pages to complete its task. When I do this with the Parser object supplied to my hook function, there are all kinds of unwanted side effects. For example, the ParserOutput's category links get applied to the current article, which I don't want, but if I delete them (e.g., $parserOutput->setCategoryLinks(array())), this causes worse problems. So I'd rather use a fresh new Parser, except it doesn't have all the tag extensions and parser functions initialized, and probably a bunch of other things missing too.
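A common approach from that era (a sketch, not guaranteed for every MediaWiki version) is to clone the already-initialized global parser, so that the tag hooks and parser functions registered by extensions carry over, and give it its own ParserOptions. Here $text and $title are assumed to be your wikitext and a Title object:

```php
// Sketch only: clone the initialized global parser so that hooks registered
// via setHook()/setFunctionHook() carry over into the new instance.
global $wgParser, $wgUser;

$myParser  = clone $wgParser;                       // same hooks, separate state
$myOptions = ParserOptions::newFromUser( $wgUser ); // or: new ParserOptions()

// parse() returns its own ParserOutput, so category links and other metadata
// stay separate from the ParserOutput of the current article.
$parserOutput = $myParser->parse( $text, $title, $myOptions );
$html = $parserOutput->getText();
```

This is a fragment that only runs inside a MediaWiki request, so treat the exact class methods as assumptions to verify against your version.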
Is there an extension or another easy way to automatically add a template to
a page when its content meets predefined criteria?
I would like to add a "Stub" template to articles that match certain
criteria, for example:
- no internal links
- less than x characters
Thank you for any advice!
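There is no single built-in feature for this, but a small save hook can approximate it. The criteria check below is plain PHP and testable on its own; the hook wiring is a hypothetical sketch (the hook name and the exact ArticleSave signature vary by MediaWiki version, and the thresholds are examples):

```php
<?php
// Pure criteria check, callable from a save hook; threshold is an example.
function isStubCandidate( $text, $minChars = 500 ) {
    $tooShort = strlen( $text ) < $minChars;
    $noLinks  = strpos( $text, '[[' ) === false; // crude internal-link test
    return $tooShort || $noLinks;
}

// Hypothetical wiring in LocalSettings.php (signature abridged):
// $wgHooks['ArticleSave'][] = 'maybeTagAsStub';
// function maybeTagAsStub( $article, $user, &$text /* ... */ ) {
//     if ( isStubCandidate( $text ) && strpos( $text, '{{Stub}}' ) === false ) {
//         $text = "{{Stub}}\n" . $text;
//     }
//     return true; // let the save proceed
// }

var_dump( isStubCandidate( "Too short, no links." ) );                   // true
var_dump( isStubCandidate( str_repeat( "x", 600 ) . " [[SomePage]]" ) ); // false
```

Prepending the template at save time means it only applies to pages edited after the hook is installed; a maintenance script over all pages would be needed to tag existing articles.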
I am using MediaWiki 1.12 on a Debian GNU/Linux system. I have installed the
LDAP_Authentication extension for MediaWiki 1.12. The good news
is that I am able to connect and log into MediaWiki using our company's
Active Directory server authentication with the following setting in
LocalSettings.php:
$wgLDAPEncryptionType = array( "mycompany.net" => "clear" );
However, the bad news is that if I try to use the TLS encryption method, like
$wgLDAPEncryptionType = array( "mycompany.net" => "tls" );
I get the following debug messages:
User is using a valid domain.
Setting domain as: mycompany.net
Username isn't empty.
Munged username: JohnS
Using TLS or not using encryption.
Using servers: ldap://ad1.mycompany.net
Warning:ldap_start_tls() [function.ldap-start-tls]: Unable to start TLS:
Decoding error in
Failed to start TLS.Failed to connect
Returning true in strict().
with the MediaWiki login page saying "Login error: Incorrect password entered.
Please try again."
How can I check whether my Active Directory server supports TLS? Is the
problem with the Active Directory server or with my setup of the
LDAP_Authentication extension?
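One way to test StartTLS against the server independently of MediaWiki is a short standalone script (requires the php-ldap extension; the hostname is taken from the debug output above). Note that Active Directory only offers TLS once a server certificate is installed, and a "Decoding error" commonly points to a missing or untrusted certificate rather than to the MediaWiki side:

```php
<?php
// Standalone StartTLS probe; run from the command line, not inside MediaWiki.
$conn = ldap_connect( "ldap://ad1.mycompany.net" ); // no network I/O yet
ldap_set_option( $conn, LDAP_OPT_PROTOCOL_VERSION, 3 );
ldap_set_option( $conn, LDAP_OPT_REFERRALS, 0 );

if ( @ldap_start_tls( $conn ) ) {
    echo "StartTLS succeeded\n";
} else {
    echo "StartTLS failed: " . ldap_error( $conn ) . "\n";
}
```

If the probe fails with a certificate error, the usual suspects are the AD certificate itself or the CA trust settings in the client's ldap.conf.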
I decided to try setting PHP's open_basedir to "/htdocs/dir" (with the wiki living inside a subdirectory of this). Now file uploads fail because, for some unknown reason, MediaWiki insists on dumping temporary files inside "/tmp", which is outside my open_basedir path, despite my setting PHP's "upload_tmp_dir" and allowing MediaWiki's temporary directory to be inside "/htdocs/dir". For some reason MediaWiki is forcing the temporary upload path to be /tmp and I don't see how to change this. Any ideas on how to fix this, other than digging around and hacking the includes/upload code? Thanks.
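One setting worth checking is $wgTmpDirectory, which MediaWiki uses for its own temporary files and which falls back to the system temp directory when unset. A LocalSettings.php sketch (the path is an example; the directory must exist and be writable by the web-server user):

```php
// LocalSettings.php: keep MediaWiki's temporary files inside open_basedir.
$wgTmpDirectory = "/htdocs/dir/tmp";
```

Note that the very first landing spot of an uploaded file is still controlled by PHP's upload_tmp_dir, so both settings need to point inside the open_basedir path.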
OK, for all those to come: the answer to antivirus not working with a chrooted Apache can be found at http://de.php.net/manual/en/function.passthru.php#84773: you need a shell program inside the chroot too (duh!). MediaWiki's function wfShellExec (line 2392 in GlobalFunctions.php) calls the PHP command 'passthru', which tries to execute a shell command. PHP tries to open a shell to execute clamdscan or clamscan, so when it gives you a 127 (command not found) it is not necessarily saying that clam(d)scan was not found; in this case it cannot find a shell with which to execute clam(d)scan, and the 127 means 'shell not found'.
Recap: make sure that:
- the clam(d)scan executable is inside the chroot and is executable by the user that will call the program (so you can scan)
- a shell program is present inside the chroot and executable (so you can execute the scanner)
- there is a hard-linked clamav.sock (or equivalent) socket for communicating with the clamd that runs outside of the chroot
- there is a clamd.conf file to direct clam(d)scan to the right socket
- your $wgAntivirusSetup['command'] reads something like "/usr/local/bin/clamdscan --fdpass --no-summary" (otherwise scans will fail with an error 2)
Now all the messages from clamav work as they should! Thanks, Platonides.
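For completeness, the corresponding LocalSettings.php wiring, mirroring the command line quoted above (the exact layout of the $wgAntivirusSetup array varies between MediaWiki versions, so treat this as a sketch to check against DefaultSettings.php):

```php
// LocalSettings.php: scan uploads with the chroot-visible clamdscan.
$wgAntivirus = 'clamav';
$wgAntivirusSetup['clamav']['command'] =
    "/usr/local/bin/clamdscan --fdpass --no-summary";
```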
-------- Original Message --------
From: Platonides <Platonides(a)gmail.com>
Apparently from: mediawiki-l-bounces(a)lists.wikimedia.org
Subject: Re: [Mediawiki-l] Setting up clamav for chrooted apache
Date: Mon, 06 Sep 2010 00:25:24 +0200
> tojja(a)Safe-mail.net wrote:
> > Passing --fdpass or --stream to clamdscan works for calling up a scan on the command prompt however calling it through mediawiki (via the chrooted web user www) still fails with an error 127. If I make a file called test.php containing:
> > <?php
> > define("MEDIAWIKI", "mediawiki");
> > require_once("/htdocs/w/includes/GlobalFunctions.php" );
> > $output = wfShellExec( "command=/usr/local/bin/clamdscan --fdpass --no-summary '/htdocs/file.txt' 2>&1, $exitCode );
> > echo "exitcode is $exitCode";
> > ?>
> > Executing "chroot -u www /var/www /usr/local/bin/clamdscan --fdpass --no-summary '/htdocs/file.txt' 2>&1" will work just fine but running the script will always fail with error 127. Even substituting in the $output line something like wfShellExec( "/bin/echo 'hello world' > world.txt" ); will always fail with error 127 as well, despite echo being at /var/www/bin/echo and permissions readable and executable by the proper www user. Appears that there may be something up with how mediawiki is executing shell commands, maybe I'm going about testing this the wrong way.
> > Thanks for the insight.
> Exit code 127 usually means the shell did not find the executable.
> Try removing command= from the beginning (you are also missing the closing
> double quote, but that looks like an oversight when copying).
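Putting the thread together, the quoted test script would presumably look like this once corrected, with the stray command= prefix dropped and the string properly closed (paths are as given in the thread; this only runs against an actual MediaWiki install):

```php
<?php
// Presumed corrected version of the quoted test.php.
define( "MEDIAWIKI", "mediawiki" );
require_once( "/htdocs/w/includes/GlobalFunctions.php" );

// wfShellExec( $cmd, &$retval ): second parameter receives the exit code.
$output = wfShellExec(
    "/usr/local/bin/clamdscan --fdpass --no-summary '/htdocs/file.txt' 2>&1",
    $exitCode
);
echo "exitcode is $exitCode";
```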
I installed the UsabilityInitiative extension and followed the directions (ran the maintenance script and all). It shows up as installed in Special:Version, but pages still have the old edit toolbar.
Is there anything special I need to do to enable it?
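One thing that commonly trips people up: the enhanced toolbar ships as a per-user preference that defaults to off. Assuming the bundled WikiEditor module from that era (the preference name is my assumption; verify it against the extension's README), this LocalSettings.php sketch turns it on by default:

```php
// LocalSettings.php, after the UsabilityInitiative require_once lines.
// Users can still toggle the toolbar under Preferences -> Editing.
$wgDefaultUserOptions['usebetatoolbar'] = 1;
```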
[sorry if this appears "cross-posted" -- I accidentally sent this to
the SMW list yesterday and realized my mistake this morning]
I'd like to add a page search extension that allows the user to begin
typing the name of a page and see all possibilities based on the
titles of existing pages -- i.e., what is commonly known as
"autocompletion". I notice Semantic Forms supports autocompletion for
property searches with restricted values, but I'd like this to be
independent of SMW. Does anyone know if such a thing exists already?
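Depending on the MediaWiki version, this may already be in core: 1.13 and later ship built-in search-box title suggestions, enabled with a single switch (a sketch; this covers the search box specifically, not arbitrary form fields):

```php
// LocalSettings.php: enable built-in title autocompletion ("MWSuggest")
// in the search box, independent of SMW or Semantic Forms.
$wgEnableMWSuggest = true;
```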
On my French wiki the special page "Special:Upload" used to be called
"Spécial:Téléchargement" (I think this was the standard name in
French, wasn't it?). But after upgrading to 1.16 the old URL
doesn't work anymore. It looks like this special page has been renamed to
"Spécial:Téléverser" (instead of "Spécial:Téléchargement").
The problem is that a lot of links are still pointing to the old URL.
Therefore I had to change the following alias in the file
'Upload' => array( 'Téléverser', 'Televerser', 'Téléversement',
'Televersement', 'Téléchargement', 'Telechargement' ),
Well, it works now. But is it normal that I had to do that? Did I miss something?
Thank you for your help
I am trying to upgrade from MediaWiki 1.15.1 to MediaWiki 1.16.0, but
when I run the upgrade script, I just get an error:
Fatal error: Call to a member function getDbType() on a non-object in /.../wiki/maintenance/doMaintenance.php on line 79
I have Apache 2.2.16, PHP 5.3.3 and MySQL 5.1.48 running on CentOS 5.5.
I can provide a list of extensions if necessary.
Michael "Oldiesmann" Eshom
Christian Oldies Fan