$wgDBtransactions gets set to true if using InnoDB tables. Is there
an advantage to using InnoDB tables?
The disadvantage is that with MySQL there is a file, ibdata1, that
seems to grow endlessly if InnoDB tables are used. See
http://bugs.mysql.com/bug.php?id=1341
We're wondering if we should just convert everything to MyISAM. Any
thoughts?
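For context: InnoDB's advantage is transactions and row-level locking, which is why $wgDBtransactions is enabled for it; MyISAM has neither. If you do convert, a minimal sketch per table (back up first; `page` here is just one example table):

```sql
-- Hypothetical example: convert one MediaWiki table to MyISAM.
-- Repeat for each table in the wiki database.
ALTER TABLE page ENGINE=MyISAM;

-- Check which engine a table currently uses:
SHOW TABLE STATUS LIKE 'page';
```

Note that ibdata1 will not shrink after the conversion: InnoDB never returns space inside that file to the filesystem, so reclaiming the disk space requires a dump, a removal of the InnoDB files, and a restore.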
=====================================
Jim Hu
Associate Professor
Dept. of Biochemistry and Biophysics
2128 TAMU
Texas A&M Univ.
College Station, TX 77843-2128
979-862-4054
Hi all,
I've created some custom namespaces on one of my wikis, Botwiki
(previously known as pywikipedia).
I've put these lines in my LocalSettings.php file:
---
#Custom namespaces
$wgExtraNamespaces = array(
    100 => "Manual",
    101 => "Manual talk",
    102 => "Python",
    103 => "Python talk",
    104 => "Php",
    105 => "Php talk",
    106 => "Perl",
    107 => "Perl talk",
    108 => "AWB",
    109 => "AWB talk",
    110 => "IRC",
    111 => "IRC talk",
    112 => "Other",
    113 => "Other talk"
);
$wgContentNamespaces[] = 100;
$wgContentNamespaces[] = 102;
$wgContentNamespaces[] = 104;
$wgContentNamespaces[] = 106;
$wgContentNamespaces[] = 108;
$wgContentNamespaces[] = 110;
$wgContentNamespaces[] = 112;
---
However, I have a big problem: when I go to a page in one of these new
namespaces (the main ones, not the discussion ones), for example
http://botwiki.sno.cc/wiki/Perl:Copyright_Violation_Bot , I find a
red link to the discussion page. That's right, as there is no discussion
page for that article yet. Clicking it brings you to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
correct, of course. But have a look at the article and discussion tabs:
they are both red! The first, "article", leads to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
when it should lead to
http://botwiki.sno.cc/wiki/Perl:Copyright_Violation_Bot , and the second,
"discussion", leads to
http://botwiki.sno.cc/w/index.php?title=Talk:Perl_talk:Copyright_Violation_…
when it should lead to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
It's the first time I've dealt with custom namespaces :-( but I have
some ideas about what it could be. Could the problem be with the
$wgContentNamespaces settings, so that it detects everything as ns0?
(I don't think so.)
Or could it be that I haven't used underscores in the
$wgExtraNamespaces definition?
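If it is the underscores: namespace names in $wgExtraNamespaces must not contain spaces; MediaWiki expects underscores internally and renders them as spaces. A sketch of the same definition with underscores (untested against this install):

```php
# Custom namespaces: use underscores instead of spaces in the names.
# MediaWiki displays "Manual_talk" as "Manual talk" automatically.
$wgExtraNamespaces = array(
    100 => "Manual",  101 => "Manual_talk",
    102 => "Python",  103 => "Python_talk",
    104 => "Php",     105 => "Php_talk",
    106 => "Perl",    107 => "Perl_talk",
    108 => "AWB",     109 => "AWB_talk",
    110 => "IRC",     111 => "IRC_talk",
    112 => "Other",   113 => "Other_talk"
);
```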
Snowolf
Hi there,
I am using Lucene-Search 2.1/MWSearch for my MediaWiki 1.15.1.
It's working fine, but searches containing Japanese characters return nothing.
I have tried adding (language,ja) in the lsearch-global.conf file, but it doesn't seem to make any difference.
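For reference, in the lsearch-global.conf format the language hint belongs on the database definition line; a sketch, with wikidb as a placeholder for your database name (worth double-checking against the extension's sample config):

```
[Database]
wikidb : (single) (language,ja)
```

After changing it, the index would need to be rebuilt. Japanese also needs a CJK-aware analyzer, so it may be worth checking that the Lucene-Search build in use actually supports ja before assuming the config is at fault.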
Any idea would be appreciated,
Ross
Hi,
I have been trying to create a maintainable way of grouping pages
together and allowing readers to page through them in sequence. Most
examples I've seen use templates that require you to supply the
'previous' and 'next' page. That, however, means three page edits
to insert a new page between two existing pages, and does not guarantee
that your prev sequence is identical to the next sequence...
As a programmer, I find that way too much duplicated and error-prone
work ;-) There must be a better way to do this, and I was hoping
to solve it with a plain MW installation (with ParserFunctions and
DynamicPageList at the moment).
Given an 'index' page that holds a list of all page titles in the
preferred order, isn't it possible to create a template that selects
the correct previous and next page title given the current page title?
I get stuck in getting the correct lines from the index page. DPL can
select based on section name (= page title), but then the contents of
that index section must be the prev/next links themselves:
=First Page=
Prev [[Second Page|Next]]
=Second Page=
[[First Page|Prev]] [[Third Page|Next]]
=Third Page=
.. etc.
This works, except that it is still a lot of duplication of page names
(but the edits are contained in a single page, big plus).
I hoped to simplify the index page by creating a template that writes
the section header and the prev/next links, but then DPL no longer
recognizes the sections :-( Apparently DPL 'sees' the page text before
the templates are expanded ({{Page|Prev page|Page title|Next page}}):
{{Page||First Page|Second Page}}
{{Page|First Page|Second Page|Third Page}}
{{Page|Second Page|Third Page|Fourth Page}}
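For what it's worth, here is an alternative sketch that avoids DPL for the navigation itself, assuming only ParserFunctions: keep the ordering in a single, hypothetical Template:PageIndex that maps each title to its neighbour via #switch:

```
<!-- Hypothetical Template:PageIndex, called as
     {{PageIndex|dir=next|page={{PAGENAME}}}} -->
{{#switch: {{{dir}}}
 | next = {{#switch: {{{page}}}
     | First Page  = Second Page
     | Second Page = Third Page
     | Third Page  = <!-- last page: empty -->
   }}
 | prev = {{#switch: {{{page}}}
     | Second Page = First Page
     | Third Page  = Second Page
   }}
}}
```

Each title still appears in both maps, but inserting a page means editing only this one template. A navigation template could then render [[{{PageIndex|dir=prev|page={{PAGENAME}}}}|Prev]] and the Next counterpart, suppressing the link when the lookup comes back empty.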
Basically my questions are:
1. Am I completely off track here?
2. Can DPL be coerced to evaluate templates before looking at the page
text?
3. Can DPL (or another extension) select the text from the section
before/after a matched section?
4. Is it possible to determine the section sequence number given the
section name (so 'Second Page' results in 2, allowing me to use DPL to
retrieve the names of sections 2 − 1 and 2 + 1 to create the prev/next
links)?
Apologies for the long post; hopefully someone can point me to some
good resources (I've been to Meta, Wikibooks, and MediaWiki.org, but
could very well have overlooked something there, as the amount of info
is a bit overwhelming and it is difficult to judge how up to date it is).
--
Regards,
Jean-Marc
The high moral standards of the MediaWiki community appeal to me. There
is a general belief that these standards are higher than in "the
outside world".
As in any community, some people do not act as they preach. Especially
in larger communities this is, unfortunately, normal, so it may also
happen within the MW community.
This question seems abstract, but it has practical consequences. I
would like to keep this discussion abstract, about processes, and not
about specific situations or persons, which would only create confusion.
In the end, of course, the result of this discussion can be used as a
reference to be implemented.
How should the community act if our high moral standards are abused?
Should we block legal authorities because these authorities cannot
fulfil our high moral standards?
With regards,
Bernard
Hello everyone
I was unable to find the answer to my question in the official documentation, so I am sending this mail.
It seems that some organizations are able to run multiple wikis with just one MediaWiki installation (i.e. a multi-client capability). Unfortunately, I wasn't able to find any "official" information about it, nor any instructions on how it is done.
Does someone know how to make a MediaWiki installation multi-client capable?
I look forward to any hints.
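This setup is usually called a "wiki family" (mediawiki.org has a Manual:Wiki family page). The common pattern is one shared code base plus a LocalSettings.php that picks the database per hostname; a minimal sketch, with hostnames and database names as placeholder assumptions:

```php
# One MediaWiki code base serving several wikis, selected by hostname.
# Hostnames and database names below are placeholders.
switch ( $_SERVER['SERVER_NAME'] ) {
    case 'wiki1.example.org':
        $wgDBname   = 'wiki1';
        $wgSitename = 'Wiki One';
        break;
    case 'wiki2.example.org':
        $wgDBname   = 'wiki2';
        $wgSitename = 'Wiki Two';
        break;
    default:
        die( 'Unknown wiki.' );
}
```

Each wiki gets its own database (or table prefix) but shares the PHP code, extensions, and upgrades.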
Kind regards,
Moove
I would like to add a prefix to the titles of many pages, in order to
move them to a new namespace. The pages are all in the same category,
so in principle the Replace Text extension would be the tool. However,
the Replace Text extension does not support regular expressions, and I
need to match the start of the title and replace it with the prefix
...
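Since Replace Text can't anchor at the start of a title, one workaround (a sketch, not a tested recipe) is MediaWiki's maintenance/moveBatch.php script, which reads lines of the form `Old title|New title`. A small Python helper, with `Manual:` as a placeholder prefix, could build that input from a plain list of titles:

```python
def build_move_list(titles, prefix="Manual:"):
    """Turn a list of page titles into moveBatch.php input lines
    of the form 'Old title|New title', skipping titles that
    already carry the prefix."""
    return [
        f"{t}|{prefix}{t}"
        for t in titles
        if not t.startswith(prefix)
    ]

# Example: titles exported from the category via the API or a DB query.
lines = build_move_list(["Foo", "Bar baz", "Manual:Done"])
print("\n".join(lines))
```

The resulting file can then be fed to moveBatch.php from the wiki's maintenance directory: `php maintenance/moveBatch.php moves.txt`.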
any help/solutions are very much appreciated!
tomy
--
sent from nil
> That is a moral standard. Freedom is more important than the moral
> imperative.
Freedom is subject to obligations. You have the "freedom" - read,
"power" - to yell "Fire!" in a crowded theater. You '''don't''' have a
right to do so.
> I do not have the freedom to vandalise Wikipedia.
Sure you do. People do it all the time. But you don't have a '''right'''
to do so. And you don't have a '''right''' to fill this list with
off-topic issues. Create a new list, and let others exercise
'''their''' freedom to allocate their time on lists as they choose.
On Wikipedia, is there a way to change the color of the images used in the
{{cquote}} template without uploading a new image? I expect this could
reasonably start with SUBST:ing the template to get the code, but what
about the image or the image color?
Hi,
I really like MediaWiki's watchlist, but it sucks if you
don't have the time to use it every 30 days or x edits, or
don't like the edits partitioned by day (which is inconvenient
for both inter- and intra-day use).
So the natural idea would be: extract the list of watched
pages and feed them to a local database; then, from time to
time, for every page, find the last revision of the page
prior to the last visit and find the current revision; if
the two revisions differ, open a browser with the diff link
and, on success, update the last-visit time in the database
to the time of the current revision (saving timestamps
instead of revision numbers to deal with the possibility of
deleted revisions).
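For what it's worth, the per-page selection step described above is small enough to sketch in Python, assuming each page's revisions arrive as (timestamp, revid) pairs sorted ascending by timestamp:

```python
def pick_diff(revisions, last_visit):
    """Given a page's revisions as (timestamp, revid) pairs sorted by
    timestamp, return (old_revid, new_revid) to diff, or None if the
    page is unchanged since last_visit.

    old = last revision at or before last_visit,
    new = current (latest) revision."""
    if not revisions:
        return None
    current = revisions[-1]
    if current[0] <= last_visit:
        return None  # nothing new since the last visit
    old = None
    for ts, revid in revisions:
        if ts <= last_visit:
            old = (ts, revid)
        else:
            break
    # old is None if every known revision is newer than last_visit
    # (e.g. older revisions were deleted); fall back to the oldest.
    old = old or revisions[0]
    return (old[1], current[1])
```

For example, `pick_diff([(1, 10), (2, 11), (3, 12)], 2)` yields (11, 12): diff revision 11 (last seen) against 12 (current), after which last_visit would be bumped to timestamp 3.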
Seems straightforward and not much work, but before I
script it, is anyone aware of someone who has already done
it? :-)
TIA,
Tim