$wgDBtransactions gets set to true if using InnoDB tables. Is there
an advantage to using InnoDB tables?
The disadvantage is that with MySQL there is a file, ibdata1, that
seems to grow endlessly if InnoDB tables are used. See
http://bugs.mysql.com/bug.php?id=1341
We're wondering if we should just convert everything to MyISAM. Any
thoughts?
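For what it's worth, here is a sketch (the table name is illustrative; take a backup first) of checking which engine each table uses and converting one table. Note that converting away from InnoDB does not shrink an already-grown shared ibdata1 file:

```sql
-- Which storage engine does each table use?
SHOW TABLE STATUS;

-- Convert one table to MyISAM (repeat per table, after a backup).
ALTER TABLE page ENGINE = MyISAM;
```

Newer MySQL versions also offer the innodb_file_per_table option in my.cnf, which stores each InnoDB table in its own .ibd file instead of growing the shared ibdata1.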
=====================================
Jim Hu
Associate Professor
Dept. of Biochemistry and Biophysics
2128 TAMU
Texas A&M Univ.
College Station, TX 77843-2128
979-862-4054
Hi all,
I've created some custom namespaces on one of my wikis, Botwiki
(previously known as pywikipedia).
I've put these lines in my LocalSettings.php file:
---
#Custom namespaces
$wgExtraNamespaces = array(
    100 => "Manual",  101 => "Manual talk",
    102 => "Python",  103 => "Python talk",
    104 => "Php",     105 => "Php talk",
    106 => "Perl",    107 => "Perl talk",
    108 => "AWB",     109 => "AWB talk",
    110 => "IRC",     111 => "IRC talk",
    112 => "Other",   113 => "Other talk"
);
$wgContentNamespaces[] = 100;
$wgContentNamespaces[] = 102;
$wgContentNamespaces[] = 104;
$wgContentNamespaces[] = 106;
$wgContentNamespaces[] = 108;
$wgContentNamespaces[] = 110;
$wgContentNamespaces[] = 112;
---
However, I have a big problem: when I go to a page in one of these new
namespaces (not the discussion ones, the main ones), for example
http://botwiki.sno.cc/wiki/Perl:Copyright_Violation_Bot , I find a red
link to the discussion page. That's right, as there is no discussion
page for that article yet. Clicking on it brings you to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
which is correct, of course. But have a look at the article and discussion tabs:
they are both red! The first, "article", leads to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
when it should lead to
http://botwiki.sno.cc/wiki/Perl:Copyright_Violation_Bot and the second,
"discussion", leads to
http://botwiki.sno.cc/w/index.php?title=Talk:Perl_talk:Copyright_Violation_…
, when it should lead to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
.
It's the first time I've dealt with custom namespaces :-( but I have
some ideas about what it could be. Could the problem be with the
$wgContentNamespaces settings, so that everything is detected as ns0?
(I don't think so.)
Or could it be that I haven't used underscores in the
$wgExtraNamespaces definition?
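For reference, the documented convention is to write the names in $wgExtraNamespaces with underscores instead of spaces ("Manual_talk", not "Manual talk"); MediaWiki converts underscores to spaces for display. A sketch of what the same block would look like in that form, assuming underscores are indeed what is required here:

```php
# Custom namespaces -- note the underscores: namespace names in
# $wgExtraNamespaces should not contain spaces.
$wgExtraNamespaces = array(
    100 => "Manual",  101 => "Manual_talk",
    102 => "Python",  103 => "Python_talk",
    104 => "Php",     105 => "Php_talk",
    106 => "Perl",    107 => "Perl_talk",
    108 => "AWB",     109 => "AWB_talk",
    110 => "IRC",     111 => "IRC_talk",
    112 => "Other",   113 => "Other_talk",
);
```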
Snowolf
I want to move all pages in a certain namespace (about 60 pages) into
the "main" namespace. I couldn't find how to do this, so I tried
exporting the pages and importing them, and I ran into all sorts of
problems. Is there a way to do what I want without using the import and
export features (and without having to move each page manually)?
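If your MediaWiki installation includes it, the maintenance/moveBatch.php script takes a text file of moves, one per line, in (if I recall the format correctly) "Old title|New title" form. A sketch, using a hypothetical "Project:" namespace prefix being stripped to land pages in main:

```shell
# Build "Old|New" pairs that drop a hypothetical Project: prefix.
printf 'Project:Foo\nProject:Bar\n' > titles.txt
sed 's/^\(Project:\(.*\)\)$/\1|\2/' titles.txt > moves.txt
# moves.txt now contains lines like "Project:Foo|Foo".
# Then, from the wiki root directory:
# php maintenance/moveBatch.php moves.txt
```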
Thanks.
Hello,
in the Apache logfiles of my wiki
http://arktur.de/Wiki
I sometimes find lines like these (shortened):
"GET /Wiki/index.php?title=http://example.tld/some-side.htm? HTTP/1.1" 301
"GET /Wiki/index.php?title=Http://example.tld/some-side.htm%3F HTTP/1.1" 200
They always differ in "[Hh]ttp" and in "htm(?|%3F)"; the second line is
always accepted by MediaWiki.
I updated MediaWiki to 1.12.0 last week - no change.
Is this behaviour potentially dangerous?
Best regards!
Helmut
http://www.bbref.com/bullpen/
Fairly large wiki, with 12,000 images and 46,000 pages. This morning we
upgraded our database server to a newer machine. All of the content etc.
works fine and nothing was lost, but images no longer appear and are
replaced by their names. Clicking on a name says the image is not found.
The images are still there on the server in the images dir. For example,
http://www.baseball-reference.com/bullpen/Image:100px-Floridamarlins.gif
can be found here
http://www.baseball-reference.com/bpv/images/5/51/100px-Floridamarlins.gif
I've tried rebuildImages.php from the command line, and that doesn't do
the trick. Any thoughts? Re-import them?
I'm guessing that updating the db somehow messed up MediaWiki's record
of the exact path to the images, but I'm not sure about that.
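To help rule out a path mix-up: with the default $wgHashedUploadDirectory = true, MediaWiki stores a file at images/<h>/<hh>/<name>, where <h> and <hh> are the first one and two hex digits of the MD5 of the filename. A quick sketch (assuming md5sum is available) to see where a given file should live:

```shell
# Compute the expected hashed upload path for one file.
name='100px-Floridamarlins.gif'
hash=$(printf '%s' "$name" | md5sum | cut -c1-32)
d1=$(printf '%s' "$hash" | cut -c1)
d2=$(printf '%s' "$hash" | cut -c1-2)
echo "images/$d1/$d2/$name"
```

Compare the result against the live path above; if they match, the files are where MediaWiki expects them, and the problem is more likely in the upload-directory settings or the image table rows on the new server.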
thanks,
sean
We host descriptions of our research work in the form of a wiki, using
MediaWiki. I would like to start a blog inside our wiki, not on one of
the publicly available blog hosting services, because we want more
control over how the data are stored and uniform access to them
(through MediaWiki).
Can you suggest any MediaWiki extension that allows having a blog as
one of the wiki pages? What is your experience with existing
extensions?
By blog I mean a single Wiki page, where:
* Entries are sorted by date of publication
* Under each entry there is a list of comments, or a link to a
separate page containing only that entry and a detailed list of
the comments made on it
Thank you for your time,
Maciej
It seems that one slow extension can bring MediaWiki to a halt. For example, if you define a <wait> tag that simply sleeps for 20 seconds, and you hit a page that contains it, no other MediaWiki pages can be served during those 20 seconds.
Other PHP pages on the same Apache server, however, work just fine during those 20 seconds, so I'd guess this is not an Apache or PHP configuration issue. Only MediaWiki pages are affected.
Although the <wait> tag is artificial, the situation is realistic. We have a parser tag that hits an external database, and when the connection is slow (for even ONE wiki page), no other wiki pages can be served.
This seems dangerous. What's happening, and what's the workaround? This is in 1.13.0. (Maybe it's my imagination, but the problem seemed less severe in 1.12.0.)
Here's my toy <wait> code:
<?php
# Toy parser tag: <wait>N</wait> sleeps for N seconds.
$wgExtensionFunctions[] = 'wfWaitSetup';

function wfWaitSetup() {
	global $wgParser;
	$wgParser->setHook( 'wait', 'wfWait' );
}

function wfWait( $input ) {
	global $wgParser;
	$wgParser->disableCache();  # don't cache pages containing <wait>
	sleep( (int)$input );       # the tag body is a string; sleep() wants an int
	return "Slept for $input seconds";
}
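One thing worth checking (an assumption on my part, not a confirmed diagnosis): PHP's default file-based session handler holds a lock on the session file for the entire request, so further requests from the same browser session queue up behind the sleeping one, while sessionless PHP pages are unaffected. A sketch of the same hook with the session lock released before blocking:

```php
function wfWait( $input ) {
	global $wgParser;
	$wgParser->disableCache();
	// Release the session file lock so other requests from the same
	// session are not serialized behind this one. Only safe if nothing
	// later in this request needs to write to $_SESSION.
	session_write_close();
	sleep( (int)$input );
	return "Slept for $input seconds";
}
```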
Thanks for any advice,
DanB
Hello all,
My wiki is being used to support projects and format plans. I am also using
it to format documentation, and very soon contributors may be adding pages
at a high rate.
What I am hoping to develop is a page that lists all of the wiki's
pages as an automated site map.
Any advice would be greatly appreciated
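For a quick start, the built-in Special:Allpages page already lists every page on the wiki. If you want the raw list for building your own site-map page, one option (a sketch; the base URL is a placeholder, and assumes your version ships api.php with $wgEnableAPI on) is the query API's allpages list:

```shell
# Placeholder base URL -- substitute your own wiki's api.php path.
base='http://example.org/w/api.php'
url="$base?action=query&list=allpages&aplimit=500&format=xml"
echo "$url"
# Then fetch it, e.g.:  curl "$url"
```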
Thanks in advance, John
--
Empathy
http://thinman.com/empathy
Photography
http://thinman.com/photography
Technology
http://thinman.com
Hello,
First, sorry for my bad English. I recently upgraded from MediaWiki 1.12 to
1.13. Everything is OK except that the tooltip messages are now in
English instead of French.
I did not find anything clear on the web about how to solve this problem.
My website URL: http://fr.wikimini.org
In my LocalSettings.php I have this line: $wgLanguageCode = "fr";
I did the upgrade through my browser (not the command line).
Thank you very much for your help.
Laurent Jauquier
I think I read that this is possible somewhere, but I can't find it.
Anyway, I have a parameterized template. The end result of this
template would be a sortable wikitable where each parameter value
would create a row. If that value didn't exist I would use the #if
parser function to hide that row from the results. My question is
this:
What if I don't end up with any values for the parameters, thus
producing zero table rows? Is there a way to hide the top/bottom of
the table and put something in its place, such as "No values were given
for this article"? I just don't want to end up with something
like this when there are no table rows because someone didn't enter any
values:
{|class="wikitable sortable"
!Name!!Surname!!Height
|}
It would obviously be more user friendly in that instance to just
print "No values were given for this article" as stated before. I
appreciate any help you can give me on this.
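One common pattern (assuming the ParserFunctions extension, plus the usual Template:(! and Template:!) helper templates that expand to the table delimiters "{|" and "|}" - your wiki may or may not have these) is to wrap the whole table in a single #if test on all the parameters at once:

```wikitext
{{#if: {{{name1|}}}{{{name2|}}}{{{name3|}}}
| {{(!}} class="wikitable sortable"
! Name !! Surname !! Height
<!-- the per-parameter #if-guarded rows go here -->
{{!)}}
| ''No values were given for this article.''
}}
```

The concatenation {{{name1|}}}{{{name2|}}}... is empty only when every parameter is empty, so the fallback text appears exactly in the zero-row case.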