> From: sebastien bracq <sbracq(a)hotmail.fr>
>
> maybe the problem is some delete edits in Wikipedia: when they occur and it is not academic,
> the information has no time to leave and does not reach its goal to inform
...
> yes, maybe most people don't care about filling in information in MediaWiki;
> many just read when someone asks them to read
Please get your space bar fixed on your keyboard!
:::: We’ve had nothing but exponential growth, and so we’ve developed
an exponential growth culture. Politicians are always trying to
maintain our growth. If we could maintain it, it would destroy us. --
M. King Hubbert ::::
:::: Jan Steinman, EcoReality http://www.EcoReality.org ::::
We just upgraded to MW 1.14. We display an RSS feed of our recent
changes on our main page. With 1.11, the URLs of the changed articles
just pointed to the articles. Now they have a request for diffs appended
to them, e.g. "&diff=24679&oldid=prev".
Is there a setting to NOT have this code appended to the URLs? We've
set $wgFeedDiffCutoff = 0; but that only prevents the diffs from being
shown on the feed page. It doesn't remove the request to show diffs
from the URL that people will click on.
I think the code that does this is in PageHistory.php:
return new FeedItem(
    $title,
    $text,
    $this->mTitle->getFullUrl( 'diff=' . $rev->getId() .
        '&oldid=prev' ),                       // <<<< appends to URL
    $rev->getTimestamp(),
    $rev->getUserText(),
    $this->mTitle->getTalkPage()->getFullUrl() );
}
Would it not make sense to have the URL be "simple" if
$wgFeedDiffCutoff = 0 ? I think I can hack the above code (if it's the
right code), but I try to avoid doing that.
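If I do end up hacking it, the change I have in mind (untested, and assuming
this really is the code that builds the feed items) would be to fall back to a
plain article URL whenever $wgFeedDiffCutoff is 0:

    global $wgFeedDiffCutoff;

    // Only append the diff parameters when feed diffs are enabled.
    $url = $wgFeedDiffCutoff
        ? $this->mTitle->getFullUrl( 'diff=' . $rev->getId() . '&oldid=prev' )
        : $this->mTitle->getFullUrl();

    return new FeedItem(
        $title,
        $text,
        $url,
        $rev->getTimestamp(),
        $rev->getUserText(),
        $this->mTitle->getTalkPage()->getFullUrl() );

But a supported setting would obviously be nicer than maintaining a local patch.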
Thanks!
Michelle
Hi. Moved from 1.13.3 to 1.13.4. Both before and after re-running the
"installer" (as my preferred update method), I could see the site
fine, but after logging in I can't "edit" any pages.
I get this:
--
A database query syntax error has occurred. This may indicate a bug in
the software. The last attempted database query was:
(SQL query hidden)
from within function "Article::getHiddenCategories". MySQL returned
error "1146: Table 'donschae_tikiwiki.mw_page_props' doesn't exist
(localhost:/tmp/mysql5.sock)".
--
MediaWiki 1.13.4
PHP: 5.2.8 (cgi-fcgi)
MySQL 5.0.67-msl-usrs-icd1-log
Any ideas?
Thank you
don
--
don schaefer
This has come up on one of the wikis I run for a collaborative writing
project. Is there an easy way to get an average page length for the pages on a
wiki, particularly within a specific category?
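In case it helps frame the question: with direct database access I guess
something like this would work (an untested sketch using the default table
names; 'Short_stories' is just a placeholder category), but I am hoping there
is something easier built in:

    $dbr = wfGetDB( DB_SLAVE );
    $avg = $dbr->selectField(
        array( 'page', 'categorylinks' ),   // default, unprefixed table names
        'AVG(page_len)',                    // page_len = size of the wikitext in bytes
        array(
            'cl_from = page_id',            // join pages to their category entries
            'cl_to' => 'Short_stories',     // placeholder: underscores, no "Category:" prefix
            'page_namespace' => NS_MAIN,    // main namespace only
        ),
        __METHOD__
    );
    echo "Average page length: $avg bytes\n";

(Meant to be run from a small maintenance script, or rewritten as a plain SQL
query against the same tables.)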
Hi !
I have installed the Extension:FileIndexer new variant
(http://www.mediawiki.org/wiki/Extension_talk:FileIndexer#New_Variant) from
Ramon Dohle (raZe) on my version 1.12 and it works well for English text.
When I upload a PDF file containing French accented characters such as
e-acute ("é"), those are wrongly indexed, which shows on the file upload page.
I've looked inside the wiki database (table wikiprefix_searchindex, column
si_text) and found that an e-acute is represented as the string "u8c3a9" for
any standard page, while it is represented by "u8efbfbd" for the uploaded PDF
entry. Actually, any accented character is represented by "u8efbfbd"! Of
course searching doesn't work with such character substitution.
"u8c3a9" is the UTF-8 encoding of "é" (bytes C3 A9). I'm not sure about
"u8efbfbd", but EF BF BD is the UTF-8 replacement character (U+FFFD), so it
looks like a placeholder for characters that could not be decoded.
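My working guess is that the text coming out of the PDF extractor is not valid
UTF-8 to begin with, so something downstream replaces every accented sequence
with that placeholder. If so, a minimal check I could try inside the extension
(assuming $text is the raw extractor output, and guessing Latin-1 as the source
encoding) is:

    // Guesswork: convert non-UTF-8 extractor output before it reaches the index.
    if ( !mb_check_encoding( $text, 'UTF-8' ) ) {
        $text = iconv( 'ISO-8859-1', 'UTF-8//TRANSLIT', $text );
    }

But I have not verified where the replacement actually happens.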
Any advice appreciated.
--
francois.piette(a)overbyte.be
Author of ICS (Internet Component Suite, freeware)
Author of MidWare (Multi-tier framework, freeware)
http://www.overbyte.be
Hello, I run a sort of semi-busy wiki, and I have been experiencing
difficulties with its CPU load lately, with the load jumping as high as 140
at noon (not 1.4, not 14, but ~140). Obviously this brought the site to a
crawl. After investigation I found the cause: multiple diff3
comparisons were being run at the same time.
Explaining the cause needs a little background. The wiki
I run deals with the editing of large text files. It is common to see pages
with hundreds of KB of pure text on any given page. Normally my servers
can handle the edit requests for these pages.
However, it seems that search bots/crawl bots (from both search engines and
individual users) have been hitting my wiki pretty hard lately. Each of
these bots tries to copy all the pages, and this includes the revision history
of each of these 100 KB wiki text pages. Since each page can have
hundreds of edits, hundreds of revision-history diffs (from lighttpd/apache ->
php5 -> diff3?) are spawned for every single large text file.
I have done some testing on my servers, and I found that each diff3
comparison of a typical large text page adds roughly 3 to the CPU load.
Right now I have implemented a few temporary restrictions:
1. Limit the number of connections per IP
2. Disallow all search bots
3. Increase the RAM limit in the PHP config file
4. Memcache wherever possible (not all servers have memcache; see the sketch
after this list)
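For 4., the LocalSettings.php lines I am using on the boxes that do have
memcached are roughly these (host and port are placeholders):

    $wgMainCacheType    = CACHE_MEMCACHED;   // main object cache
    $wgParserCacheType  = CACHE_MEMCACHED;   // parser cache
    $wgMemCachedServers = array( '127.0.0.1:11211' );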
I have some problems with 1 and 2. First of all, 1 doesn't really solve
the load problem: the slowdown can still occur if multiple bots hit the
site at the same time.
2 faces a similar problem. After I edited my robots.txt, I discovered that
some clowns are ignoring it. Also, only Google supports wildcard
patterns in robots.txt, so I can't just use Disallow: *diff=* .
I don't want to break these large text pages up, because that makes it harder
for scripts to compile the texts back together directly from the database.
So I have turned my attention to system-level optimization. Does anyone have
experience with tuning diff3, for example switching to, say, libxdiff? Or
renicing the FastCGI processes? (I use lighttpd.) Or is it possible to disable
revision comparison altogether for revisions older than a certain age?
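For reference, one thing I am considering besides libxdiff is switching the
wiki's diff engine to the compiled wikidiff2 PHP extension, which should make
rendering revision comparisons much cheaper per request (whether it would
actually get rid of the diff3 processes on my setup, I don't know):

    # LocalSettings.php; requires the wikidiff2 PHP extension to be built and installed.
    $wgExternalDiffEngine = 'wikidiff2';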
Thanks for the help
Tim