$wgDBtransactions gets set to true if using InnoDB tables. Is there
an advantage to using InnoDB tables?
The disadvantage is that with MySQL there is a file, ibdata1, that
seems to grow endlessly if InnoDB tables are used. See
http://bugs.mysql.com/bug.php?id=1341
We're wondering if we should just convert everything to MyISAM. Any
thoughts?
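One thing we ran across while searching, but have not tried yet, is the
innodb_file_per_table option in my.cnf, which is supposed to make new InnoDB
tables live in their own .ibd files instead of the shared ibdata1 (existing
tables apparently only move after a dump and reload, and ibdata1 itself never
shrinks):

[mysqld]
innodb_file_per_table

Would that be a saner route than converting everything to MyISAM?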
=====================================
Jim Hu
Associate Professor
Dept. of Biochemistry and Biophysics
2128 TAMU
Texas A&M Univ.
College Station, TX 77843-2128
979-862-4054
Hi all,
I've created some custom namespaces on one of my wikis, Botwiki
(previously known as pywikipedia).
I've put these lines in my LocalSettings.php file:
---
#Custom namespaces
$wgExtraNamespaces =
array(100 => "Manual",
101 => "Manual talk",
102 => "Python",
103 => "Python talk",
104 => "Php",
105 => "Php talk",
106 => "Perl",
107 => "Perl talk",
108 => "AWB",
109 => "AWB talk",
110 => "IRC",
111 => "IRC talk",
112 => "Other",
113 => "Other talk"
);
$wgContentNamespaces[] = 100;
$wgContentNamespaces[] = 102;
$wgContentNamespaces[] = 104;
$wgContentNamespaces[] = 106;
$wgContentNamespaces[] = 108;
$wgContentNamespaces[] = 110;
$wgContentNamespaces[] = 112;
---
However, I have a big problem: when I go to a page in one of these new
namespaces (not the discussion pages, the main ones), for example
http://botwiki.sno.cc/wiki/Perl:Copyright_Violation_Bot , I see the
red link to the discussion page. That's right, as there is no discussion
page for that article. If you click on it, it brings you to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
which is correct, of course. But have a look at the article and discussion tabs:
they are both red! The first, "article", leads to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
when it should lead to
http://botwiki.sno.cc/wiki/Perl:Copyright_Violation_Bot and the second,
"discussion", leads to
http://botwiki.sno.cc/w/index.php?title=Talk:Perl_talk:Copyright_Violation_…
, when it should lead to
http://botwiki.sno.cc/w/index.php?title=Perl_talk:Copyright_Violation_Bot&a…
.
It's the first time I've dealt with custom namespaces :-( but I have some
ideas about what it could be. Could the problem be with the
$wgContentNamespaces settings, so that everything gets detected as ns0? (I
don't think so.)
Or could it be the fact that I haven't used underscores in the
$wgExtraNamespaces definition?
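If it is the underscore thing, I guess the definition would have to look
something like this (just my guess, not tested yet):

$wgExtraNamespaces = array(
    100 => "Manual",  101 => "Manual_talk",
    102 => "Python",  103 => "Python_talk",
    104 => "Php",     105 => "Php_talk",
    106 => "Perl",    107 => "Perl_talk",
    108 => "AWB",     109 => "AWB_talk",
    110 => "IRC",     111 => "IRC_talk",
    112 => "Other",   113 => "Other_talk"
);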
Snowolf
I want to move all pages in a certain namespace (about 60 pages) into
the "main" namespace. I couldn't find how to do this, so I tried
exporting the pages and importing them, and I ran into all sorts of
problems. Is there a way to do what I want without using the import and
export features (and without having to move each page manually)?
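One idea I had was a small maintenance-style script that loops over the
namespace and calls Title::moveTo(); below is a rough, untested sketch of
what I imagine it could look like (the file name, the namespace number 100
and the move reason are placeholders of mine, not anything that ships with
MediaWiki):

<?php
# moveNamespacePages.php -- hypothetical sketch, run from the maintenance/ directory
require_once( dirname( __FILE__ ) . '/commandLine.inc' );

$fromNamespace = 100; # the custom namespace to empty (placeholder)

$dbr = wfGetDB( DB_SLAVE );
$res = $dbr->select( 'page', 'page_title',
	array( 'page_namespace' => $fromNamespace ) );

while ( $row = $dbr->fetchObject( $res ) ) {
	$oldTitle = Title::makeTitle( $fromNamespace, $row->page_title );
	$newTitle = Title::makeTitle( NS_MAIN, $row->page_title );
	if ( $newTitle->exists() ) {
		echo "Skipping " . $oldTitle->getPrefixedText() . ": target already exists\n";
		continue;
	}
	# moveTo() returns true on success; passing false skips the permission checks
	$err = $oldTitle->moveTo( $newTitle, false, 'Moving out of custom namespace' );
	echo $oldTitle->getPrefixedText() . ( $err === true ? " moved\n" : " NOT moved\n" );
}

Is something like that the right direction, or is there a built-in way I'm
missing?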
Thanks.
It seems like {{REVISIONID}} doesn't work
(https://bugzilla.wikimedia.org/show_bug.cgi?id=12694).
I've tried it on Wikipedia, Wikiversity and Appropedia, and it comes out
blank.
Is there another way that I can insert a revision ID into a document? I want
to do it with a bot, rather than manually for each page.
It should probably be the last revision id, as I need to "subst:" the value,
and I'm guessing that won't work with the current value. But I'll try
anything, and I'm not so concerned whether it's the current or previous
revision ID.
I asked on Pywikipedia-L, and got this response. Unfortunately it requires
coding, and I don't understand it - so I'm wondering if there's another
option.
You're looking for the Page.latestRevision() function here:
nicdumz@host:~/pywikipedia$ python
Python 2.5.2 (r252:60911, Oct 5 2008, 19:24:49)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import wikipedia
Checked for running processes. 1 processes currently running,
including the current process.
>>> mainpage = wikipedia.Page(wikipedia.getSite('en', 'wikipedia'), 'Main Page')
>>> mainpage.latestRevision()
Getting 1 pages from wikipedia:en...
260905624
But you'll have to write some Python yourself to insert it somewhere...
It's sort of a "custom" use.
Thanks!
--
Chris Watkins (a.k.a. Chriswaterguy)
Appropedia.org - Sharing knowledge to build rich, sustainable lives.
identi.ca/appropedia / twitter.com/appropedia / blogs.appropedia.org
I like this: five.sentenc.es
Hello,
I installed the NamespacePermissions extension
(http://www.mediawiki.org/wiki/Extension:NamespacePermissions).
After defining new groups and granting rights to some users, these
rights are shown in detail on Special:ListUsers. How could I remove this
information from page Special:ListUsers?
Could I declare every Special:<page> to be a "Restricted special page"
(a page being shown in bold on "Special:SpecialPages")?
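The closest thing I have found in the manual so far is removing entries from
the special-page list with a hook, which would hide Special:ListUsers
completely rather than just trimming the group information. Untested, and the
function name is my own:

$wgHooks['SpecialPage_initList'][] = 'efHideListUsers';
function efHideListUsers( &$specialPages ) {
	# drop Special:ListUsers from the list of special pages entirely
	unset( $specialPages['Listusers'] );
	return true;
}

I would rather keep the page and only hide the per-user rights, though, if
that is possible.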
TIA
Peter
Hey guys.
For my thesis project I am doing a ground-up redesign of MediaWiki's
interaction (blueprints, not construction).
I'll be looking at how people use it, from the user / contributor /
administrator roles, and how we can help make it better and easier.
From the many wikis I have run, it seems most people are afraid to
edit or contribute because it is so complicated (or seems so).
Would anyone be interested in talking to me about what they do/don't
like, and what they have a hard time with as an administrator?
Also what your users have issues with.
Thanks,
Adam Meyer
ameyer(a)g.risd.edu
Industrial + Interaction Designer
http://www.adam-meyer.com
Hi !
I have installed the new variant of Extension:FileIndexer
(http://www.mediawiki.org/wiki/Extension_talk:FileIndexer#New_Variant) from
Ramon Dohle (raZe) on my 1.12 installation, and it works well for English text.
When I upload a PDF file containing French accented characters such as
e-acute ("é"), those are indexed wrongly, and the problem also shows on the
file upload page.
I've looked inside the wiki database (table wikiprefix_searchindex, column
si_text) and found that an e-acute is represented as the string "u8c3a9" for
any standard page, while it is represented by "u8efbfbd" for the uploaded PDF
entry. Actually, any accented character is represented by "u8efbfbd"! Of
course searching doesn't work with such character substitution.
"u8c3a9" corresponds to the UTF-8 bytes of "é" (0xC3 0xA9). I wasn't sure
about "u8efbfbd", but 0xEF 0xBF 0xBD is the UTF-8 encoding of U+FFFD, the
Unicode replacement character, so it looks like a placeholder inserted when a
character cannot be decoded.
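My guess is that the text extracted from the PDF is not valid UTF-8 by the
time it reaches the indexer, so the bad bytes get replaced. Something like the
snippet below in the extension's indexing code might help; I have not tried
it, and $text is only my guess at the extension's internals:

# untested idea: re-encode the extracted text before it is indexed,
# assuming the extractor returns Latin-1 rather than UTF-8
if ( !mb_check_encoding( $text, 'UTF-8' ) ) {
	$text = mb_convert_encoding( $text, 'UTF-8', 'ISO-8859-1' );
}

Does that sound plausible, or is the problem elsewhere?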
Any advice appreciated.
--
francois.piette(a)overbyte.be
Author of ICS (Internet Component Suite, freeware)
Author of MidWare (Multi-tier framework, freeware)
http://www.overbyte.be
Hello, I run a sort of semi-busy wiki, and I have been experiencing
difficulties with its CPU load lately, with the load jumping to as high as 140
at noon (not 1.4, not 14, but ~140). Obviously this brought the site to a
crawl. After investigating, I found the cause: multiple diff3
comparisons being run at the same time.
Explaining the cause needs a little background. The wiki
I run deals with the editing of large text files. It is common to see pages
with hundreds of KB of pure text on any given wiki page. Normally my servers
would be able to handle the edit requests for these pages.
However, it seems that search/crawl bots (from both search engines and
individual users) have been hitting my wiki pretty hard lately. Each of
these bots tries to copy all the pages, and this includes the revision history
of each of these 100 KB wiki text pages. Since each page could have
potentially hundreds of edits, hundreds of revision-history diffs
(from lighttpd/apache -> php5 -> diff3?) are spawned for every single large
text file.
I have done some testing on my servers, and I found that each diff3
comparison of a typical large text page adds roughly 3 to the CPU load.
Right now I have implemented a few temporary restrictions:
1. Limit the number of connections per IP
2. Disallow all search bots
3. Increase the RAM limit in the PHP config file
4. Use memcached wherever possible (not all servers have memcached)
I have some problems with 1 and 2. First of all, 1 doesn't really solve
the load problem: the slowdown could still occur if multiple bots hit the
site at the same time.
2 faces a similar problem. After I edited my robots.txt, I discovered that
some clowns are ignoring it. Also, only Google supports wildcard patterns
in robots.txt, so I can't just use Disallow: *diff=* .
I don't want to break these large text pages up, because that makes it harder
for scripts to assemble the full text from the database directly.
So I'm turning my attention to system-level optimization. Does anyone have
experience with messing with diff3? For example, switching to, say,
libxdiff? Or renicing the fcgi processes? (I use lighttpd.) Or is it possible
to disable revision comparison altogether for pages older than a certain age?
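For what it is worth, the configuration knobs I have been eyeing are below.
These are untested guesses on my part from reading the manual, and I am not
even sure $wgExternalDiffEngine is involved in the history diffs that are
hurting us:

# possible LocalSettings.php changes (untested)
$wgExternalDiffEngine = 'wikidiff2';  # use the wikidiff2 PHP extension for diffs
$wgUseFileCache       = true;         # serve anonymous page views from a static file cache
$wgFileCacheDirectory = "$IP/cache";  # won't help the diff URLs themselves, but cuts overall load

Any better ideas are welcome.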
Thanks for the help
Tim
On my MW 1.14.0 wiki, the first entry on Special:WantedPages is:
Invalid title in result set;
What's the best way to track this down? On a hunch, I tried querying the pagelinks table and found five entries where pl_title is empty:
mysql> select * from pagelinks where pl_title = '';
+---------+--------------+----------+
| pl_from | pl_namespace | pl_title |
+---------+--------------+----------+
| 653 | 0 | |
| 686 | 0 | |
| 690 | 0 | |
| 717 | 0 | |
| 824 | 0 | |
+---------+--------------+----------+
5 rows in set (0.01 sec)
Thinking this might be the problem, I examined the five articles, but I don't see anything in them that would produce an error. Unfortunately, all the articles use fairly complex templates, so it's hard to track anything down in them.
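My tentative plan, which I have not run yet, is to rebuild the link tables
with the bundled maintenance script and then re-check whether the empty
pl_title rows (and the bogus WantedPages entry) go away:

php maintenance/refreshLinks.php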
Any advice appreciated!
DanB